Multi-View-Gaze

"Multi-view Multi-task Gaze Estimation with Deep Convolutional Neural Networks"

This paper was accepted by IEEE Transactions on Neural Networks and Learning Systems (TNNLS) in 2018.

Our ShanghaiTechGaze dataset can be downloaded from OneDrive.

For the MPIIGaze dataset, please refer to the paper "Appearance-Based Gaze Estimation in the Wild".

For the UT Multiview dataset, please refer to the paper "Learning-by-Synthesis for Appearance-based 3D Gaze Estimation".

Requirements

  • python >= 3.6
  • pytorch >= 0.4.1
  • tensorboardX >= 1.4
  • torchvision >= 0.2.1
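
If you set up the environment with pip, an install command along these lines should work; the version pins below simply mirror the requirements above and are only illustrative (this code targets the older PyTorch 0.4.x API):

pip install "torch>=0.4.1" "torchvision>=0.2.1" "tensorboardX>=1.4"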

Usage

Training

Single-view on our ShanghaiTechGaze dataset:

python Train_Single_View_ST.py

Multi-view on our ShanghaiTechGaze dataset:

python Train_Multi_View_ST.py

Multi-view multi-task on the ShanghaiTechGaze and MPIIGaze datasets:

python Train_Multi_View_Multi_Task_ST_MPII.py
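
For readers new to the setup, here is a minimal PyTorch sketch of the multi-view multi-task idea: a shared eye-feature backbone is applied to each camera view, the per-view features are fused, and separate heads regress gaze for the two tasks (e.g. 2D point of gaze on ShanghaiTechGaze vs. 3D gaze direction on MPIIGaze). This is only an illustration under our own assumptions (ResNet-18 backbone, concatenation fusion, hypothetical class and parameter names); it is not the architecture implemented in the training scripts above.

# Minimal sketch, NOT the repository's implementation: shared backbone
# over views + dataset-specific regression heads.
import torch
import torch.nn as nn
import torchvision.models as models

class MultiViewMultiTaskGaze(nn.Module):
    def __init__(self, num_views=3, feat_dim=512):
        super().__init__()
        # Shared backbone applied to every view (illustrative choice: ResNet-18).
        backbone = models.resnet18(pretrained=False)
        # Drop the final fc layer; output per view is (B, 512, 1, 1).
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        # Fuse per-view features by concatenation followed by a linear layer.
        self.fuse = nn.Linear(num_views * 512, feat_dim)
        # Task-specific heads: 2D point of gaze and 3D gaze direction (illustrative).
        self.head_point = nn.Linear(feat_dim, 2)
        self.head_angle = nn.Linear(feat_dim, 3)

    def forward(self, views):
        # views: list of (B, 3, H, W) eye images, one tensor per camera view.
        feats = [self.backbone(v).flatten(1) for v in views]
        fused = torch.relu(self.fuse(torch.cat(feats, dim=1)))
        return self.head_point(fused), self.head_angle(fused)

if __name__ == "__main__":
    model = MultiViewMultiTaskGaze(num_views=3)
    views = [torch.randn(2, 3, 224, 224) for _ in range(3)]
    point, angle = model(views)
    print(point.shape, angle.shape)  # torch.Size([2, 2]) torch.Size([2, 3])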

Sorry for the missing pretrained models and part of the code, which are unavailable due to a cluster environment migration. The author is currently rewriting and reimplementing the code and experiments.

TODO: add multi-view multi-task implementation and inference code with our pretrained model.

Citation

@article{lian2018tnnls,
    Author  = {Dongze Lian and Lina Hu and Weixin Luo and Yanyu Xu and Lixin Duan and Jingyi Yu and Shenghua Gao},
    Title   = {Multi-view Multi-task Gaze Estimation with Deep Convolutional Neural Networks},
    Journal = {IEEE Transactions on Neural Networks and Learning Systems (TNNLS)},
    Year    = {2018}
}
