
Bump torch from 1.1.0 to 1.9.0 #283

Open · wants to merge 1 commit into master

Conversation

dependabot-preview[bot] (Contributor)

Bumps torch from 1.1.0 to 1.9.0.
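
As a quick sanity check after applying this bump (a minimal sketch, not part of the Dependabot changeset; the version-prefix match allows for local build suffixes such as "+cu102"):

    import torch

    # Confirm the environment picked up the new pin; compare on the
    # "1.9" prefix since builds may carry a platform suffix.
    assert torch.__version__.startswith("1.9"), torch.__version__
    print(torch.__version__)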

Release notes

Sourced from torch's releases.

PyTorch 1.9 Release, including Torch.Linalg and Mobile Interpreter

PyTorch 1.9 Release Notes

  • Highlights
  • Backwards Incompatible Changes
  • Deprecations
  • New Features
  • Improvements
  • Bug Fixes
  • Performance
  • Documentation

Highlights

We are excited to announce the release of PyTorch 1.9. The release is composed of more than 3,400 commits since 1.8, made by 398 contributors. Highlights include:

  • Major improvements to support scientific computing, including torch.linalg, torch.special, and Complex Autograd (see the sketch below)
  • Major improvements in on-device binary size with the Mobile Interpreter
  • Native support for elastic fault-tolerance training through the upstreaming of TorchElastic into PyTorch Core
  • Major updates to the PyTorch RPC framework to support large-scale distributed training with GPU support
  • New APIs to optimize performance and packaging for model inference deployment
  • Support for distributed training, GPU utilization, and SM efficiency in the PyTorch Profiler

We’d like to thank the community for their support and work on this latest release. We’d especially like to thank Quansight and Microsoft for their contributions.

You can find more details on all the highlighted features in the PyTorch 1.9 Release blogpost.
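
As a small illustration of the expanded scientific-computing surface highlighted above (a sketch with made-up values; torch.linalg.solve, torch.linalg.norm, and torch.special.expit are all available in 1.9):

    import torch

    A = torch.tensor([[3.0, 1.0],
                      [1.0, 2.0]])
    b = torch.tensor([9.0, 8.0])

    # Solve Ax = b with the torch.linalg module, stable as of 1.9.
    x = torch.linalg.solve(A, b)
    print(x)

    # Frobenius norm of A, and the logistic sigmoid of b from the
    # torch.special module (beta in 1.9).
    print(torch.linalg.norm(A))
    print(torch.special.expit(b))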

Backwards Incompatible Changes

Python API

  • torch.divide with rounding_mode='floor' now returns infinity when a non-zero number is divided by zero ([#56893](pytorch/pytorch#56893)). This fixes rounding_mode='floor' to return the same non-finite values as the other rounding modes on division by zero: previously it always produced NaN, but in IEEE floating-point arithmetic a non-zero number divided by zero should return +/- infinity (see the sketch below). Note this does not affect torch.floor_divide or the floor division operator, which currently use rounding_mode='trunc' (and are also deprecated for that reason).

... (truncated)
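
A minimal sketch of the torch.divide change above (illustrative values, run under 1.9):

    import torch

    num = torch.tensor([1.0, -1.0, 0.0])
    den = torch.zeros(3)

    # Non-zero / zero now returns +/- infinity under rounding_mode='floor',
    # matching IEEE semantics and the other rounding modes; 0 / 0 is still NaN.
    print(torch.divide(num, den, rounding_mode='floor'))
    # tensor([inf, -inf, nan])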

Changelog

Sourced from torch's changelog.

Releasing PyTorch

General Overview

Releasing a new version of PyTorch generally entails 3 major steps:

  1. Cutting a release branch and making release branch specific changes
  2. Drafting RCs (Release Candidates), and merging cherry picks
  3. Promoting RCs to stable

Cutting release branches

Release branches are typically cut from the viable/strict branch to ensure that tests are passing on the release branch.

Release branches should be prefixed like so:

release/{MAJOR}.{MINOR}

For example:

release/1.8

Please also make sure to create a branch that pins the point where the release branch diverged from the main branch, i.e. orig/release/{MAJOR}.{MINOR}

Making release branch specific changes

These are examples of changes that should be made to release branches so that CI / tooling can function normally on them:

... (truncated)


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
  • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot dashboard:

  • Update frequency (including time of day and day of week)
  • Pull request limits (per update run and/or open at any time)
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)

dependabot-preview[bot] added the dependencies label on Jun 15, 2021