### Sequence and timing for the experiment
- Phase 1: used phase1_protocol.xml, specifically the "Initiate Visual Pres Protocol" section. The only sound cues signaled "correct" or "incorrect" choices. For the first few days we had to put peanut butter on the lick valves because the animals did not seem interested, but they caught on quickly after a few rewards. The reward volume was fixed at 0.02 and did not increment. Briefly, the objective of this phase is to teach the animal how the task works and to determine whether/when it can discriminate between objects A and B. As the animal performs better, the objects drift from their own sides of the screen toward the center (i.e. spatial cues help the animal learn the task, but once both stimuli are present in the center of the screen it must discriminate between them). All stimuli in this phase are 40 degrees visual angle in size with 0 degrees rotation. Animals were trained up to plateau, with the exception of AB3, who never performed as well as the others; they generally reached plateau within 20-40 sessions, as seen in this nice, sigmoidal learning curve.
- Phase 2: used phase2_protocol.xml, specifically the "Staircase Through Shape Parameters" section. Again, the only audio cues reinforced "correct" or "incorrect" (i.e. FlagCueStimSound == 0), and reward volume did not vary. In this phase, the range of sizes the animals were tested on (upper bound 40 degrees visual angle; lower bound different for each animal) gradually expanded (i.e. sizes got smaller) whenever they performed above 70% correct. Some animals reached smaller sizes than others, but all animals reached 5 degrees visual angle at some point during the staircase. Performance also peaked rapidly, with negligible (if any) improvement beyond 8 sessions.
- Phase 3: used phase3_protocol.xml, specifically the "Test All Transformations" section. Phase 3 used the same sound and reward settings as phases 1 and 2. In this phase, however, stimulus rotation in depth was staircased at a size of 30 degrees visual angle. Rotation in depth to the left was staircased first, then rotation in depth to the right; this transition was manually initiated once animals reached +60 rotation with good performance. To make the transition, the following variables were switched in the protocol: FlagStaircasePosHR/HL/VU/VD == 0 and FlagStaircaseRotCW/ACW == 0, with FlagStaircaseDeptRotRight == 1 or 0 depending on whether the animal was on the right or left rotation-in-depth staircase (only one of RotRight or RotLeft could equal 1 at any given time; we never staircased both directions simultaneously). Animals rapidly generalized to the novel rotations, so this phase was relatively short.
- Phase 4: used phase4_protocol.xml (MUST DO THIS). Most variables were the same as in phase 3, except that the MWorks client variable FlagShowOnlyTrainedAxes == 1. With this variable set to 1, animals could be tested on any stimulus from the "cross" set at any time; there was no more staircasing.
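The staircases in phases 2 and 3 follow the same basic rule: advance the tested parameter only after above-criterion performance, bounded by a floor. A minimal sketch of that update rule for the phase 2 size staircase, with the 70% criterion and 5-degree floor taken from the description above (the step size here is an assumed value; the real increment is set in phase2_protocol.xml):

```python
def next_size(current_size, pct_correct, step=2.5, floor=5.0, criterion=0.70):
    """Advance the size staircase one step.

    Shrinks the stimulus only when performance exceeds the criterion,
    and never goes below the floor. `step` is an assumed value; the
    actual increment lives in the protocol XML.
    """
    if pct_correct > criterion:
        return max(current_size - step, floor)
    return current_size
```

For example, an animal at 40 degrees performing at 80% correct would next be tested at 37.5 degrees, while one at 60% correct would stay at 40.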
### Directory structure and data analysis workflow for generating graphs from session (.mwk) files

To analyze session data, users must change the working directory to the 3-port-analysis repository on their filesystem. In my case...
```
cd ~/Repositories/3-port-analysis
```
From there, the user makes a folder called "input", then subfolders called phase1, phase2, phase3, etc. for each phase of data collected. Each phase subfolder contains subfolders named after each animal, and each animal folder contains all the .mwk files for that animal's sessions in that phase (the user knows which sessions correspond to each phase; the script does not determine this). Below is an example of the directory structure required for the analysis scripts to run properly. Note that only phase2 and phase3 folders are shown in the example; AB1, AB2...AB7 are the animal names, which are arbitrary.
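For concreteness, a layout matching the description above might look like this (session file names are illustrative):

```
3-port-analysis/
└── input/
    ├── phase2/
    │   ├── AB1/
    │   │   ├── AB1_session01.mwk
    │   │   └── AB1_session02.mwk
    │   └── AB2/
    │       └── AB2_session01.mwk
    └── phase3/
        └── AB1/
            └── AB1_session03.mwk
```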
Once you've navigated to the repository and set up the filesystem hierarchy, you're ready to do some analysis. It's as easy as issuing the command:
```
python phase1_analysis.py
```
The above command will analyze all phase 1 data. To analyze phase 2 data, just run the phase 2 script:
```
python phase2_analysis.py
```
And so on for the remaining phases.
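Each phase script presumably begins by walking the input/ tree described above to find each animal's session files. A minimal sketch of that enumeration (the function name and details here are my own, not taken from the 3-port-analysis repository):

```python
from pathlib import Path

def find_sessions(input_dir="input", phase="phase1"):
    """Map each animal name to its sorted list of .mwk session files,
    assuming the input/<phase>/<animal>/*.mwk layout described above."""
    sessions = {}
    for animal_dir in sorted(Path(input_dir, phase).iterdir()):
        if animal_dir.is_dir():
            sessions[animal_dir.name] = sorted(animal_dir.glob("*.mwk"))
    return sessions
```

A script could then loop over the returned dictionary and plot one learning curve per animal.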
These commands should produce graphs similar to those shown in the sample results for phase 1, phase 2, and phase 3.