stcnn is a spatiotemporal convolutional neural network written in PyTorch for modeling visual neuron responses to an input movie. We trained the model on published responses of lobula columnar (LC) visual neurons to a variety of visual stimuli.
This repository contains code for training the network and visualizing the outputs.
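As a minimal sketch of what a spatiotemporal convolution looks like in PyTorch, the block below factorizes it into a spatial convolution applied frame-by-frame followed by a temporal convolution across frames. The layer sizes and the factorized design are illustrative assumptions, not the repository's actual architecture.

```python
import torch
import torch.nn as nn


class SpatioTemporalBlock(nn.Module):
    """Illustrative factorized spatiotemporal conv (not the repo's actual layers)."""

    def __init__(self, in_ch, out_ch, k_s=5, k_t=7):
        super().__init__()
        # Spatial conv applied independently to each frame: kernel (1, k_s, k_s).
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, k_s, k_s),
                                 padding=(0, k_s // 2, k_s // 2))
        # Temporal conv across frames at each pixel: kernel (k_t, 1, 1).
        self.temporal = nn.Conv3d(out_ch, out_ch, (k_t, 1, 1),
                                  padding=(k_t // 2, 0, 0))
        self.act = nn.ReLU()

    def forward(self, x):
        # x has shape (batch, channels, time, height, width)
        return self.act(self.temporal(self.act(self.spatial(x))))


movie = torch.randn(1, 1, 20, 32, 32)  # batch, channel, frames, height, width
block = SpatioTemporalBlock(1, 8)
out = block(movie)
print(out.shape)  # torch.Size([1, 8, 20, 32, 32])
```

With "same" padding on both convolutions, the output keeps the input's frame count and spatial size, so per-frame responses can later be read out by a cell-type-specific layer.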
Package dependencies can be installed via
pip install -r requirements.txt
or
conda env create --name your_env_name --file environments.yml
To train a model using default configurations, run
python train.py
from the stcnn/ directory.
Model configuration is specified using YAML and loaded using Hydra.
- config.yaml contains the core configuration, including training options and parameter bounds
- the dataset directory contains configurations for the calcium recordings and stimulus movies
- the model directory contains the layer setup for the lens-optics sampling layer, the hidden network layers, and the cell-type-specific readout layers
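Hydra composes the final configuration from these groups via a defaults list in the top-level file. The fragment below is a hypothetical sketch of that layout; the group entries and key names are illustrative, not the repository's actual keys.

```yaml
# Hypothetical config.yaml sketch, assuming the dataset/ and model/
# config groups described above. All names here are illustrative.
defaults:
  - dataset: lc_calcium      # selects dataset/lc_calcium.yaml
  - model: stcnn_default     # selects model/stcnn_default.yaml

training:
  epochs: 100
  lr: 1.0e-3
param_bounds:
  tau: [0.01, 1.0]
```

With Hydra, any such key can also be overridden from the command line, e.g. `python train.py training.lr=1e-4` (key names assumed for illustration).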
Several Jupyter notebooks are provided for checking model training and outputs.
- visualize dataset loads the dataset and creates animations of calcium traces alongside the corresponding stimulus movie (e.g., lc4_looming.mp4)
- visualize model parameters plots the training loss and the convolution filter weights
- generate ca predictions generates calcium traces for a user-specified stimulus
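To illustrate the prediction step, the sketch below builds a toy looming-style stimulus and runs it through a stand-in model that pools each frame down to a per-frame response, yielding a trace over time. The model, stimulus construction, and all names here are hypothetical, not the notebook's actual code.

```python
import torch
import torch.nn as nn

# Stand-in model (hypothetical): spatiotemporal conv followed by a spatial
# pooling "readout" that returns one value per channel per frame.
model = nn.Sequential(
    nn.Conv3d(1, 4, kernel_size=(7, 5, 5), padding=(3, 2, 2)),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep time, pool space to 1x1
    nn.Flatten(start_dim=2),             # (batch, channels, time)
)

# Toy "looming" stimulus: a bright square that grows over 40 frames.
stimulus = torch.zeros(1, 1, 40, 32, 32)
for t in range(40):
    r = 1 + t // 4
    stimulus[0, 0, t, 16 - r:16 + r, 16 - r:16 + r] = 1.0

trace = model(stimulus)
print(trace.shape)  # torch.Size([1, 4, 40])
```

Each row of `trace` is a predicted response time course; the real notebook would instead load trained weights and a user-specified stimulus movie.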
This work was supported by the Howard Hughes Medical Institute. Roman Vaxenburg contributed the core convolution code and guidance on the overall model setup. We thank members of the Turaga lab for input on the model, especially Roman Vaxenburg and Janne Lappalainen.