
Question about training #4

Open
phinoo opened this issue Jun 17, 2022 · 22 comments

Comments


phinoo commented Jun 17, 2022

Anybody have reproduced the training process of the model?


phinoo commented Jun 17, 2022

When I train the model on the FF-FS dataset, the __getitem__ function in FFdata.py first resizes the input image to 256×256, and then dlib.get_frontal_face_detector() cannot find a face rectangle in the resized image...
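A common workaround (sketched here as an assumption, not code from this repo) is to run the face detector on the full-resolution frame first and only then resize, mapping the detected box into the 256×256 target. The box-mapping step itself needs no extra dependencies:

```python
def scale_box(box, orig_size, target_size=(256, 256)):
    """Map a (left, top, right, bottom) face box detected on the
    full-resolution frame into the coordinates of the resized frame."""
    orig_w, orig_h = orig_size
    tgt_w, tgt_h = target_size
    sx, sy = tgt_w / orig_w, tgt_h / orig_h
    left, top, right, bottom = box
    return (round(left * sx), round(top * sy),
            round(right * sx), round(bottom * sy))

# Example: a box dlib might return on a 1280x720 frame,
# mapped into the 256x256 training crop.
print(scale_box((400, 100, 880, 580), (1280, 720)))  # (80, 36, 176, 206)
```

This keeps dlib working at the resolution it detects well on, while the model still receives 256×256 inputs.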

@dandelion915

Could you please tell me how to get the mask files?


phinoo commented Jun 27, 2022

> Could you please tell me how to get the mask files?

You can get the mask frames from the mask videos, following the masks directory of each manipulated sequence.


KingSF5 commented Jun 30, 2022

Could anybody tell me how to generate the mask files, and what the JSON file looks like?


towzeur commented Jun 30, 2022

Read this: https://github.com/ondyari/FaceForensics/tree/master/dataset

```
python .\faceforensics_download_v4.py "D:/output_path/"
    -d all       # dataset
    -c c40       # quality {raw, c23, c40}
    -t masks     # type {videos, masks, models}
    --server EU2
```

It will look like this:
`FaceForensics++\manipulated_sequences\Deepfakes\masks\videos\xxx_yyy.mp4`

"The ground-truth forgery mask Mgt is created depending on categories of the input images"

If the input image is:

- adversarial forgery: Mgt is the resized deformed final mask (i.e. Mgt = Md);
- original forgery from the training dataset: most datasets provide the ground-truth forgery region, so it can be used directly as Mgt;
- original pristine from the training dataset: Mgt is an all-zero matrix (i.e. Mgt = 0), indicating there is no forgery region in the input.
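The three cases above can be sketched as a small dispatch helper (hypothetical names, not from the repo; `provided_mask` stands in for either the paper's Md or the dataset's ground-truth mask, kept as plain nested lists to stay dependency-free):

```python
def make_ground_truth_mask(category, h, w, provided_mask=None):
    """Build the ground-truth forgery mask Mgt for one input image.

    category: 'adversarial', 'forgery', or 'pristine'
    provided_mask: for 'adversarial', the resized deformed mask Md;
                   for 'forgery', the dataset's ground-truth mask.
    """
    if category in ('adversarial', 'forgery'):
        # Mgt = Md for adversarial forgeries; for original forgeries,
        # use the mask shipped with the dataset directly.
        return provided_mask
    if category == 'pristine':
        # Mgt = 0: an all-zero matrix, i.e. no forgery region.
        return [[0] * w for _ in range(h)]
    raise ValueError(f"unknown category: {category}")

print(make_ground_truth_mask('pristine', 2, 3))  # [[0, 0, 0], [0, 0, 0]]
```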


KingSF5 commented Jul 1, 2022

> Read this: https://github.com/ondyari/FaceForensics/tree/master/dataset [...]

Thanks!!! But I haven't seen the code for generating adversarial forgeries (maybe I missed it). Is the generation of adversarial forgeries included in this released repo?


MZMMSEC commented Jul 11, 2022

1. Has anyone met this warning during training?

   > Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters.

2. Also, training is very slow when I run the code. Is this normal?


ghost commented Nov 22, 2022

> When I train the model on the FF-FS dataset, the __getitem__ function in FFdata.py first resizes the input image to 256×256, and then dlib.get_frontal_face_detector() cannot find a face rectangle in the resized image...

It seems I am facing the same problem. Have you solved it? Ignoring the resize step seems inappropriate. I would appreciate it if you could help me out.

@ucalyptus2

@ProgrammingTD I think the code presumes all images are already face-cropped, so the 256×256 resize just fits the model's input size.


ghost commented Nov 23, 2022

> i think the code presumes all images are face-cropped already so 256*256 will just resize to fit model input size

The data-structure section in README.md doesn't mention a face-cropping step... I tried resizing the input images to 512×512, and then dlib.get_frontal_face_detector() worked, but 512×512 doesn't match the pretrained XceptionNet input. Have you used a face-cropping method on FF++ to reproduce this project? Thanks for your comment!

@AmiraAlsamawi

@phinoo @ProgrammingTD I am facing the same problem (Face detector failed). Have you managed to find a solution?


ghost commented Nov 29, 2022

> @phinoo @ProgrammingTD I am facing the same problem (Face detector failed). Have you managed to find a solution?

According to Section 4.1 of the paper, the authors state "we resize the aligned faces to 256 × 256 for all the samples in training and test datasets", so I think @forkbabu is right: we should crop the faces from the original videos and masks first, and then the detector works as planned. But I have only tried this on a small dataset (about 50 videos).

@AmiraAlsamawi

@ProgrammingTD

How did you align the masks? I used dlib to extract and align the faces, but I could not do the same for the masks.


ghost commented Dec 6, 2022

> @ProgrammingTD
>
> How did you align the masks? I used dlib to extract and align the faces, but I could not do the same for the masks.

For each video in FF++, I got the face-location parameters from the original video and used the same parameters to crop the corresponding fake and mask videos.
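A minimal sketch of that idea, using toy nested lists in place of real video frames; the box is hard-coded where dlib's detection on the original frame would go:

```python
def crop(frame, box):
    """Crop a frame (nested lists, rows of pixels) to (top, bottom, left, right)."""
    top, bottom, left, right = box
    return [row[left:right] for row in frame[top:bottom]]

# Detect the face once on the ORIGINAL frame (detector stubbed out here),
# then reuse the exact same box on the fake and mask frames so all three
# crops stay pixel-aligned.
original = [[(r, c) for c in range(6)] for r in range(6)]
fake     = [[(r, c) for c in range(6)] for r in range(6)]
mask     = [[1 if 1 <= r < 4 and 2 <= c < 5 else 0 for c in range(6)] for r in range(6)]

box = (1, 4, 2, 5)  # pretend this came from dlib on `original`
crops = [crop(f, box) for f in (original, fake, mask)]
print(crops[2])  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

Because the mask is cropped with the same box as the face, the forgery region in the mask still lines up with the cropped face pixels.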

@githuboflk

Can you share your code? @ProgrammingTD


xi4444x commented Jan 10, 2023

> Could anybody tell me how to generate the mask files, and what the JSON file looks like?

Did you solve it? I have the same question as you. Looking forward to your reply.


ghost commented Feb 15, 2023

> Can you share your code? @ProgrammingTD

In a closed issue, the author offers data-processing code.


zpshs commented Feb 16, 2023

Which code is used to generate the mask data?


zpshs commented Feb 16, 2023

> Could you please tell me how to get the mask files?

Did you solve it? I have the same question as you. Looking forward to your reply.


ghost commented Mar 31, 2023

> @ProgrammingTD
> How did you align the masks? I used dlib to extract and align the faces, but I could not do the same for the masks.

> for each video in FF++, i got the face location parameters from the original video, and use the same parameters in corresponding fake and mask videos to crop them

> Can you provide the code? Thank you

In closed issue #10, the author offers data-processing code.

@Pudge-tao

How do I do the 'image extraction' processing? Could anyone share the code?

@Leesoon1984

Have you solved it? Can you share the code?
