
Why doesn't the LR encoding network g_θ need to be invertible? #26

Open
qiufengmama opened this issue Mar 8, 2021 · 3 comments

Comments

@qiufengmama

No description provided.

@martin-danelljan
Collaborator

Because it is only used for conditioning. The flow only needs to invert the mapping between the HR image and the latent z; the LR encoding enters as a fixed input on both the forward and inverse passes, so it never has to be inverted itself. Please see the paper for the in-depth explanation and derivation.
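For anyone landing here, a minimal PyTorch sketch may make this concrete. This is not SRFlow's actual code (the class and argument names are illustrative): it shows a conditional affine coupling layer where the LR features u = g_θ(LR) only parameterize the scale and shift. The inverse pass re-runs the conditioning net *forward*, so neither g_θ nor the coupling net is ever inverted.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Affine coupling layer conditioned on LR features (illustrative sketch).

    Only the elementwise map x2 -> z2 must be invertible. The network that
    produces (log_s, t) from [x1, u] is evaluated forward in both directions,
    because u (the LR encoding) is known when sampling as well.
    """

    def __init__(self, channels, cond_channels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2 + cond_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),  # outputs log_s and t
        )

    def forward(self, x, u):
        # u = g_theta(LR): conditioning features, same spatial size as x
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([x1, u], dim=1)).chunk(2, dim=1)
        z2 = x2 * torch.exp(log_s) + t              # invertible in x2 only
        logdet = log_s.flatten(1).sum(dim=1)        # per-sample log-determinant
        return torch.cat([x1, z2], dim=1), logdet

    def inverse(self, z, u):
        # Same forward evaluation of self.net; only the affine map is undone.
        z1, z2 = z.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([z1, u], dim=1)).chunk(2, dim=1)
        x2 = (z2 - t) * torch.exp(-log_s)
        return torch.cat([z1, x2], dim=1)
```

Because x1 passes through unchanged and u is available in both directions, invertibility is required only of the affine transform applied to x2, not of any of the networks.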

@nachifur

nachifur commented Jul 1, 2021

@martin-danelljan Thank you for your wonderful work. What puzzles me is this: when the variance of z is 0, why can the network still output a super-resolution image with better PSNR? Sampling with zero variance does not add any high-frequency details. I look forward to your reply.
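For context, a sketch of my understanding (not code from this repository; `sample_sr`, `flow_inverse`, and `lr_encoder` are hypothetical names): SRFlow-style sampling draws z ~ N(0, τ²I), so τ = 0 makes z identically zero and the flow returns one deterministic prediction. That prediction tends to score a higher PSNR precisely because PSNR rewards something close to the conditional mean, while samples at τ ≈ 0.8 add plausible high-frequency detail that PSNR penalizes.

```python
import torch

def sample_sr(flow_inverse, lr_encoder, lr, z_shape, tau=0.0):
    """Draw an SR sample at temperature tau (hypothetical helper names).

    tau = 0.0 makes z identically zero: the flow then returns its
    deterministic prediction, typically smoother but higher in PSNR
    than tau ~ 0.8 samples with synthesized texture.
    """
    u = lr_encoder(lr)                  # conditioning features g_theta(LR)
    z = tau * torch.randn(z_shape)      # z ~ N(0, tau^2 I); all zeros if tau == 0
    return flow_inverse(z, u)           # invert the flow: (z, u) -> SR image
```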

@nachifur

nachifur commented Jul 1, 2021

My other problem is that when training DF2K_4x, the validation output is completely black at 160000 iterations. It is normal at 80000 iterations, but for more iterations the output is all black. This seems abnormal. Can you explain this?

[Validation output at iteration 80000: 0_000080000_h050_s1]
[Validation output at iteration 160000: 0_000160000_h050_s1]

The loss seems to be decreasing correctly.
[Training loss curve]
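Not an answer, just a debugging note under the assumption that the validation loop can be instrumented: all-black outputs while the logged loss still looks reasonable are often a sign that the inverse pass produced NaN/Inf values that get clipped to zero when the image is saved. A minimal check (`check_output` is a hypothetical helper; `sr` is assumed to be the tensor returned by the flow's inverse pass):

```python
import torch

def check_output(sr, step):
    """Sanity-check a validation SR tensor for numerical divergence."""
    if torch.isnan(sr).any() or torch.isinf(sr).any():
        print(f"step {step}: NaN/Inf in SR output -> training likely diverged")
    print(f"step {step}: min={sr.min():.4f} max={sr.max():.4f} mean={sr.mean():.4f}")
```

If NaNs appear between 80k and 160k iterations, lowering the learning rate or resuming from the last good checkpoint is a common mitigation for flow training instability.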
