Customizable loss reduction breaks assumptions #77

Open
bhancock8 opened this issue Oct 29, 2018 · 2 comments
@bhancock8
Contributor

PR #75 introduced a customizable reduction for the loss function, but there are two places later in the code where we assume it's the total loss (reduction='sum') when we go to calculate the average loss per example:

  1. running_loss = epoch_loss / (len(data[0]) * (batch_num + 1))
  2. train_loss = epoch_loss / len(train_loader.dataset)

I think we either need to fix the reduction to 'sum' and let users post-process if they want to combine the losses some other way, or add special handling so that we're not dividing by the train set size twice. I'd probably be in favor of always calculating the sum and dividing later. What was the motivation for allowing it to be changed?
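To illustrate the double-division concern, here is a minimal sketch (not the library's actual code; the names `per_example_losses` and `n_examples` are hypothetical) showing why dividing `epoch_loss` by the dataset size only yields the true average when the reduction is 'sum':

```python
# Hypothetical per-example losses for a tiny "dataset" of 4 examples.
per_example_losses = [0.5, 1.0, 1.5, 2.0]
n_examples = len(per_example_losses)

# With reduction='sum', epoch_loss is the total, so dividing by the
# dataset size gives the correct mean loss per example.
epoch_loss_sum = sum(per_example_losses)       # 5.0
train_loss_ok = epoch_loss_sum / n_examples    # 1.25 (correct mean)

# With reduction='mean', epoch_loss is already averaged, so the same
# division divides by the dataset size a second time.
epoch_loss_mean = sum(per_example_losses) / n_examples  # 1.25
train_loss_bad = epoch_loss_mean / n_examples           # 0.3125 (too small)
```

The same mismatch applies to the running-loss computation: both formulas silently assume the sum has not already been reduced.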

@ajratner
Contributor

Good call, will fix!

@brahmaneya
Active development is being moved to our main repo, https://github.com/snorkel-team/snorkel. The model there does not have this issue.
