Adding FID statistics calculation as an option (can now do "train", "eval", or "fid_stats")#5
AlexiaJM wants to merge 6 commits into yang-song:main from
Conversation
After quickly looking through the code, I think you should always disable uniform dequantization, and in addition:
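To illustrate why dequantization matters here, below is a rough sketch of what a uniform-dequantization branch in a data pipeline typically does (the helper name and exact rescaling are assumptions for illustration, not code from this repo). The added noise perturbs every pixel, so reference FID statistics should be computed on the clean, noise-free images:

```python
import numpy as np

def maybe_dequantize(batch, uniform_dequantization=False):
    """Map uint8 images in [0, 255] to floats.

    With uniform dequantization, U(0, 1) noise is added before rescaling,
    which is useful for likelihood training but should be disabled when
    computing dataset FID statistics.
    """
    batch = batch.astype(np.float64)
    if uniform_dequantization:
        # Each pixel lands in [x/256, (x + 1)/256), never exactly 1.0.
        batch = (batch + np.random.uniform(size=batch.shape)) / 256.0
    else:
        batch = batch / 255.0
    return batch
```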
AlexiaJM left a comment
Removed lines that should not be there.
Alright, I will change this.
I will now test to see whether I can get the same FID on a model.
Very close results, but not exactly the same, sadly! FID: 534.9150390625. Let me know if you figure out anything that needs to be changed. At least it seems very close now.
Our stats files were computed on TPUs, where they replace

Why are the FID scores so large?
Then, that might be fine; floating-point errors are acceptable.

It's trained for 8 iterations on a tiny batch 😂.
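For reference, the number both pipelines should agree on (up to floating-point error) is the Fréchet distance between Gaussian fits of the Inception activations. A minimal NumPy/SciPy sketch of that formula follows; this is an illustration, not the repo's actual implementation:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):

        ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrt(sigma1 @ sigma2))
    """
    diff = mu1 - mu2
    # Matrix square root of the covariance product.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if not np.isfinite(covmean).all():
        # Regularize if the product is near-singular.
        offset = np.eye(sigma1.shape[0]) * eps
        covmean, _ = linalg.sqrtm((sigma1 + offset) @ (sigma2 + offset), disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(covmean)
```

Tiny mismatches in the stored (mu, sigma) pairs, e.g. from TPU versus GPU matmul precision, propagate through the matrix square root, which is consistent with FID values that are close but not bit-identical.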
Good news: I tried with your pre-trained model (cifar10_continuous_ve) at checkpoint 24, computing the FID on only 2k samples (so it finishes quickly enough). The results from two runs with the FID statistics from the new code:
The results from one run with the FID statistics from your google drive:
So the new code works. Thanks for your help! Alexia
Hey, thanks! I am using @AlexiaJM's fork with my custom config. I get the following error:

Update: Feel free to correct me if I am wrong. I am not sure if this is the best/right solution.
With these small changes, you can get the FID statistics by running with --mode "fid_stats". It loops through the dataset for one epoch and extracts the FID statistics, which makes it easier to add new datasets.
The only issue is that I am not getting the same FID when evaluating a model against the Google Drive FID statistics as opposed to the ones produced by this new mode. Can you verify that my implementation is correct?
I could be misusing the scaler or inverse scaler.
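To make the intended flow concrete, here is a hedged sketch of the one-epoch statistics pass described above. The names (`compute_fid_stats`, `activation_fn`) are hypothetical; the key assumption, relevant to the scaler question, is that batches reach the feature extractor in image space, i.e. after applying the inverse scaler if the training scaler mapped data to [-1, 1]:

```python
import numpy as np

def compute_fid_stats(batches, activation_fn):
    """Accumulate feature activations over one pass of the dataset and
    return the (mu, sigma) pair that FID evaluation compares against.

    `batches` should yield images already mapped back to image space
    (apply the inverse scaler first if a [-1, 1] scaler was used);
    `activation_fn` stands in for the Inception feature extractor.
    """
    feats = []
    for batch in batches:  # one epoch over the dataset
        feats.append(activation_fn(batch))
    feats = np.concatenate(feats, axis=0)
    mu = np.mean(feats, axis=0)
    sigma = np.cov(feats, rowvar=False)  # features are columns
    return mu, sigma
```

If the stored Google Drive statistics were computed on unscaled images while the new mode feeds scaled ones (or vice versa), the resulting (mu, sigma) pairs would differ far more than floating-point error, so checking that both paths see the same pixel range is a reasonable first debugging step.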