spatiotemporal convnet for modeling visual neuron responses

TuragaLab/stcnn

stcnn

stcnn is a spatiotemporal convolutional neural network written in PyTorch for modeling visual neuron responses to an input movie. We trained the model on published lobula columnar (LC) visual neuron responses to a variety of visual stimuli.
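As a rough illustration of the input/output format such a model works with, a spatiotemporal convolution in PyTorch operates on a movie tensor of shape (batch, channels, time, height, width). This is a minimal sketch, not the actual stcnn architecture:

```python
import torch
import torch.nn as nn

# Illustrative spatiotemporal convolution over a movie input
# (batch, channels, time, height, width); kernel sizes are arbitrary.
conv = nn.Conv3d(in_channels=1, out_channels=8,
                 kernel_size=(5, 3, 3), padding=(2, 1, 1))
movie = torch.randn(1, 1, 20, 32, 32)  # a short 20-frame grayscale movie
response = conv(movie)
print(response.shape)  # torch.Size([1, 8, 20, 32, 32])
```

With "same" padding in all three dimensions, the output keeps the movie's temporal and spatial extent, so each output channel can be read as a response map over time.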

Getting started

This repository contains code for training the network and visualizing the outputs.

Package dependencies can be installed with either

  • pip install -r requirements.txt, or
  • conda env create --name your_env_name --file environments.yml

Model Training

To train a model using default configurations, run python train.py from the stcnn/ directory.

Model configuration is specified using YAML and loaded using Hydra.

  • config.yaml contains the core configuration, including training options and parameter bounds
  • the dataset directory contains configurations for the calcium recordings and stimulus movies
  • the model directory contains the layer setup for the lens-optics sampling layer, the hidden network layers, and the cell-type-specific readout layers
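A Hydra config with this layout might look like the following sketch; the group names and values here are illustrative assumptions, not copied from the repository:

```yaml
# Illustrative Hydra-style config.yaml; actual keys in the repo may differ.
defaults:
  - dataset: calcium_recordings   # picked from the dataset/ config group
  - model: default                # picked from the model/ config group

training:
  max_epochs: 100
  learning_rate: 1e-3

param_bounds:
  tau: [0.01, 1.0]
```

Hydra composes the final configuration from the defaults list and lets any value be overridden on the command line (e.g. `python train.py training.max_epochs=50`, again with illustrative key names).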

Dataset and model visualization

Several Jupyter notebooks are provided for checking model training and outputs.

  • visualize dataset loads the dataset and creates animations of a calcium trace alongside the corresponding stimulus movie (e.g. lc4_looming.mp4)
  • param distributions plots the distributions of model parameters
Contributions

This work was supported by the Howard Hughes Medical Institute. Roman Vaxenburg contributed the core convolution code and guidance on the overall model setup. We thank members of the Turaga lab for input on the model, especially Roman Vaxenburg and Janne Lappalainen.
