
Releases: helmholtz-analytics/heat

Tuning dependencies, minor documentation edits

14 Sep 07:27

v1.1.1

  • #864 Dependencies: constrain torchvision version range to match supported pytorch version range.

Heat 1.1: distributed slicing/indexing overhaul, dealing with load imbalance, and more

16 Jul 04:46

Highlights

  • Slicing/indexing overhaul for a more NumPy-like user experience. Special thanks to Ben Bourgart @ben-bou and the TerrSysMP group for this one. Warning for distributed arrays: breaking change! Indexing one element along the distribution axis now implies the indexed element is communicated to all processes (see the sketch after this list).
  • More flexibility in handling non-load-balanced distributed arrays.
  • More distributed operations, incl. meshgrid.
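
A minimal sketch of the new indexing semantics, assuming a multi-process launch (e.g. mpirun -n 4 python example.py); treat it as illustrative rather than canonical:

```python
import heat as ht

# a 1-D array distributed along axis 0 across all processes
x = ht.arange(8, split=0)

# Indexing a single element along the split axis now communicates that element
# to every process, so each rank ends up holding the same (non-distributed) value.
elem = x[3]
print(elem)   # the same value is printed on every rank
```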

For other details, see the CHANGELOG.

Heat 1.0: Data Parallel Neural Networks, and more

30 Apr 06:56

Release Notes

Heat v1.0 comes with some major updates:

  • new module nn for data-parallel neural networks
  • Distributed Asynchronous and Selective Optimization (DASO) to accelerate network training on multi-GPU architectures
  • support for complex numbers (see the example after this list)
  • major documentation overhaul
  • support channel on Stack Overflow
  • support PyTorch 1.8
  • stop supporting Python 3.6
  • many more updates and bug fixes, check out the CHANGELOG
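
For the complex-number support, a small hedged example; helpers such as ht.real and ht.imag follow the NumPy API that Heat mirrors, so check the 1.0 documentation for the exact set of supported functions:

```python
import heat as ht

z = ht.array([1 + 2j, 3 - 4j])   # a complex dtype is inferred
print(z.dtype)                   # e.g. ht.complex64 / ht.complex128
print(ht.real(z))                # [1., 3.]
print(ht.imag(z))                # [2., -4.]
print(ht.abs(z))                 # element-wise magnitudes
```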

Pinning PyTorch version 1.6 for now, plus bug fixes

08 Feb 12:54

We're pinning PyTorch to version 1.6 after having run into problems with the recently released 1.7. This is a temporary solution!

Also, bug fixes:

  • #678 Bug fix: Internal functions now use explicit device parameters for DNDarray and torch.Tensor initializations.
  • #684 Bug fix: distributed reshape now works on booleans as well.

v0.5.0 - Distributed kmedian, kmedoids and knn, statistical functions, DNDarray manipulations, random sampling

25 Sep 11:08

HeAT 0.5.0 Release Notes

New features

  • Parallel high-level algorithms: more clustering methods with cluster.KMedian, cluster.KMedoids, and one classification method (K-Nearest Neighbors, classification.knn). Also new: Manhattan distance metric (spatial.manhattan).
  • Parallel statistical functions: percentile and median, skew, kurtosis (see the usage sketch after this list).
  • Parallel DNDarray manipulations: pad, fliplr, rot90, stack, column_stack, row_stack.
  • Parallel linear algebra: outer.
  • Parallel random sampling: random.permutation, random.random_sample, random.random, random.sample, random.ranf, random.random_integer.
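
A brief usage sketch for some of the new statistical functions (output values are illustrative; see the API reference for exact signatures):

```python
import heat as ht

x = ht.arange(10, dtype=ht.float32, split=0)   # distributed sample 0..9

print(ht.median(x))            # 4.5
print(ht.percentile(x, 25))    # 2.25
print(ht.skew(x))              # ~0 for this symmetric sample
print(ht.kurtosis(x))          # kurtosis of the sample
```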

Performance

QR solver, Tensor manipulations, Halos, Spectral clustering, and more

27 May 07:57

The HeAT v0.4.0 release is now available.

We are striving to be as NumPy-API-compatible as possible while providing MPI-parallelized implementations of all features.

Highlights

  • #429 Submodule for Linear Algebra: Implemented QR, sped-up matrix multiplication (see the QR sketch after these highlights)
  • #511 New: reshape
  • #518 New: Spectral Clustering
  • #522 Added CUDA-aware MPI detection for MVAPICH, MPICH and ParaStation
  • #535 Introduction of BaseEstimator and clustering, classification and regression mixins
  • #541 Introduction of basic halo scheme for inter-rank operations
  • #558 Added support for PyTorch 1.5.0
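
A hedged sketch of the new QR solver from the linear-algebra submodule; the supported split configurations and the exact return value may vary between versions, so verify against the 0.4.0 documentation:

```python
import heat as ht

a = ht.random.randn(8, 4, split=0)   # tall-skinny matrix, distributed by rows
q, r = ht.linalg.qr(a)               # distributed QR factorization

# sanity check: Q @ R should reconstruct A
print(ht.allclose(ht.matmul(q, r), a, atol=1e-5))
```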

Other new features

  • Updated documentation theme to "Read the Docs"
  • #429 Implemented a tiling class to create square tiles along the diagonal of a 2D matrix
  • #496 flipud()
  • #498 flip()
  • #501 flatten()
  • #520 SplitTiles class, computes tiles with theoretical and actual split axes
  • #524 cumsum() & cumprod()
  • #534 eye() now supports all 2D split combinations and matrix configurations.

Bug fixes

  • #483 Underlying torch tensor moves to the correct device on heat.array initialization
  • #483 DNDarray.cpu() changes heat device to cpu
  • #499 MPI datatype mapping: torch.int16 now maps to MPI.SHORT instead of MPI.SHORT_INT
  • #506 setup.py has correct version parsing
  • #507 sanitize_axis changes axis of scalars to None
  • #515 NumPy-API compliance: ht.var() now returns the unadjusted sample variance by default; Bessel's correction can be applied by setting ddof=1 (see the example after this list).
  • #519 Fixed parallel slicing with an empty list or a scalar as input, as well as nonzero() on an empty (process-local) tensor.
  • #520 resplit returns correct values for all split configurations.
  • #521 Added documentation for the generic reduce_op in Heat's core
  • #526 float32 is now the consistent default dtype for factories.
  • #531 Tiling objects are not separate from the DNDarray
  • #558 sanitize_memory_layout assumes default memory layout of the input tensor
  • #562 split semantics of ht.squeeze()
  • #567 setitem now ignores split axis differences; if shapes mismatch, the exception is raised by torch
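
To illustrate the ht.var() change from #515, a quick example (ddof is the keyword named in the fix):

```python
import heat as ht

x = ht.array([1.0, 2.0, 3.0, 4.0])
print(ht.var(x))           # 1.25   (unadjusted: divides by n)
print(ht.var(x, ddof=1))   # 1.6667 (Bessel's correction: divides by n - 1)
```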

v0.3.0

19 Feb 20:58
Pre-release
  • #454 Update lasso example
  • #473 Matmul no longer splits either input matrix if both have split=None. To split one input for increased speed, use the allow_resplit flag (see the sketch after this list).
  • #473 dot now handles two split=None vectors correctly
  • #470 Enhancement: Accelerate distance calculations in kmeans clustering by introduction of new module spatial.distance
  • #478 ht.array now typecasts the local torch tensors if the given torch tensors do not match the torch version of the specified dtype; unit tests updated accordingly
  • #479 Completed the spatial.distance module: it now supports 2D input arrays with different splits (None or 0) and different datatypes, including the case where the second input argument is None
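
A small sketch of the matmul behaviour from #473; the allow_resplit flag name is taken from the note above:

```python
import heat as ht

a = ht.random.randn(4, 4)   # split=None
b = ht.random.randn(4, 4)   # split=None

c1 = ht.matmul(a, b)                       # keeps both inputs unsplit
c2 = ht.matmul(a, b, allow_resplit=True)   # may split one input for speed
print(c1.shape, c2.shape)
```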

v0.2.2

30 Jan 22:02
Pre-release


This version adds support for PyTorch 1.4.0. There are also several minor feature improvements and bug fixes listed below.

  • #443 added the option to use neutral elements in place of empty tensors in reduction operations (operations.__reduce_op) (cf. #369 and #444)
  • #445 var and std both now support iterable axis arguments
  • #452 updated pull request template
  • #465 bug fix: x.unique() returns a DNDarray in both distributed and non-distributed mode (cf. #464)
  • #463 Bug fix: Lasso tests now run on both GPUs and CPUs

0.2.1

20 Dec 08:56
Pre-release

v0.2.1

This version fixes the packaging so that installed versions of HeAT contain all required Python packages.

v0.2.0

This version differs greatly from the previous release (0.1.0): functionality has increased substantially, and many functions that worked before now behave more closely to their NumPy counterparts. Although a lot of progress has been made, work is still ongoing. We appreciate everyone who uses this package, and we work hard to resolve the issues you report to us. Thank you!

Package Requirements

  • python >= 3.5
  • mpi4py >= 3.0.0
  • numpy >= 1.13.0
  • torch >= 1.3.0

Optional Packages

  • h5py >= 2.8.0
  • netCDF4 >= 1.4.0, <= 1.5.2
  • pre-commit >= 1.18.3 (development requirement)

Additions

GPU Support

#415 GPU support was added in this release. To set the default device, use ht.use_device(dev), where dev can be either "gpu" or "cpu". If the desired device differs from the default, specify it explicitly when creating DNDarrays. If no device is specified, "cpu" is assumed.
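
A minimal sketch of the device handling described above (assumes a CUDA-capable PyTorch build is available; otherwise keep the default "cpu"):

```python
import heat as ht

ht.use_device("gpu")                  # set the default device for new arrays

a = ht.zeros((3, 3))                  # created on the default device ("gpu")
b = ht.ones((3, 3), device="cpu")     # an explicit device overrides the default
print(a.device, b.device)
```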

Basic Operations

Basic Multi-DNDarray Operations

Developmental

  • Code of conduct
  • Contribution guidelines
    • pre-commit and black checks added to Pull Requests to ensure proper formatting
  • Issue templates
  • #357 Logspace factory
  • #428 lshape map creation
  • Pull Request Template
  • Removal of the ml folder in favor of regression and clustering folders
  • #365 Test suite

Linear Algebra and Statistics

Regression, Clustering, and Misc.

  • #307 lasso regression example
  • #308 kmeans scikit feature completeness
  • #435 Parter matrix

Bug Fixes

  • KMeans bug fixes
    • Working in distributed mode
    • Fixed shape cluster centers for init='kmeans++'
  • __local_op now returns proper gshape
  • allgatherv fix: elements are now sorted in the correct order
  • getitem fixes and improvements
  • unique now returns a distributed result if the input was distributed
  • AllToAll on single process now functioning properly
  • optional packages are truly optional for running the unit tests
  • the output of mean and var (and std) now sets the correct split axis for the returned DNDarray

0.0.5-citation

04 Jan 10:52
Pre-release
Merge pull request #74 from helmholtz-analytics/features/reduceops

Addressing issue #72