
Implement transform_bounding_boxes for random_flip #20468

Merged
4 commits merged on Nov 8, 2024

Conversation

@shashaka (Contributor) commented Nov 7, 2024

I've implemented the transform_bounding_boxes method for random_flip.py. If you notice any areas that need improvement, please feel free to let me know. Thank you!

here is my gist
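
As context for what a `transform_bounding_boxes` method has to do, here is a minimal NumPy sketch of the flip geometry (an illustration only, not the PR's Keras implementation; the `[x_min, y_min, x_max, y_max]` pixel-coordinate box format is an assumption):

```python
import numpy as np

def flip_boxes_horizontal(boxes, image_width):
    """Flip [x_min, y_min, x_max, y_max] boxes across the vertical axis."""
    x_min, y_min, x_max, y_max = np.split(boxes, 4, axis=-1)
    # After a horizontal flip, the old right edge becomes the new left edge.
    return np.concatenate(
        [image_width - x_max, y_min, image_width - x_min, y_max], axis=-1
    )

boxes = np.array([[10.0, 20.0, 50.0, 80.0]])
flip_boxes_horizontal(boxes, 100)  # [[50., 20., 90., 80.]]
```

A vertical flip is the same transform applied to the y coordinates instead.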

@codecov-commenter commented Nov 7, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.09%. Comparing base (ccb07df) to head (b7a3c58).

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #20468      +/-   ##
==========================================
+ Coverage   82.03%   82.09%   +0.05%     
==========================================
  Files         515      515              
  Lines       47346    47379      +33     
  Branches     7427     7431       +4     
==========================================
+ Hits        38842    38896      +54     
+ Misses       6705     6682      -23     
- Partials     1799     1801       +2     
Flag              Coverage          Δ
keras             81.94% <97.05%>   (+0.05%) ⬆️
keras-jax         65.01% <91.17%>   (+0.06%) ⬆️
keras-numpy       59.97% <91.17%>   (+0.06%) ⬆️
keras-tensorflow  66.04% <91.17%>   (+0.06%) ⬆️
keras-torch       64.93% <91.17%>   (+0.06%) ⬆️

Flags with carried forward coverage won't be shown.

@fchollet (Collaborator) left a comment

LGTM -- thank you for the contribution!

@google-ml-butler google-ml-butler bot added kokoro:force-run ready to pull Ready to be merged into the codebase labels Nov 8, 2024
@fchollet fchollet merged commit 8409e18 into keras-team:master Nov 8, 2024
7 checks passed
@google-ml-butler google-ml-butler bot removed ready to pull Ready to be merged into the codebase kokoro:force-run labels Nov 8, 2024
@shashaka shashaka deleted the bbox_randomflip branch November 8, 2024 00:16
wang-xianghao pushed a commit to wang-xianghao/keras-dev that referenced this pull request Nov 20, 2024
* Implement transform_bounding_boxes for random_flip

* fix test case for torch env

* Add channel first test cases also

* Add condition for channel_first
fchollet added a commit that referenced this pull request Jan 29, 2025
* Allow fusion of some TF kernels: tf.nn.bias_add as a special case of tf.add (#20386)

* tf.nn.bias_add as special case of tf.add

* More comments

* Update softmax.py (#20400)

Updated keras.layers.activations.Softmax() to keras.layers.Softmax(); otherwise an AttributeError is raised.

* Add GLU activation (#20392)

* Add GLU activation function

* Add test cases for GLU

* Update assert statement to ValueError
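
For reference, GLU splits its input into two halves and gates one with the sigmoid of the other. A minimal NumPy sketch of the math (illustrative only, not the Keras activation's code):

```python
import numpy as np

def glu(x, axis=-1):
    """Gated Linear Unit: split x into halves a, b and return a * sigmoid(b)."""
    a, b = np.split(x, 2, axis=axis)
    return a / (1.0 + np.exp(-b))  # a * sigmoid(b)

x = np.array([1.0, 2.0, 0.0, 0.0])
glu(x)  # sigmoid(0) = 0.5, so this yields [0.5, 1.0]
```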

* Updated keras.layers.activations.ReLU API with keras.layers.ReLU in Example from relu.py file (#20403)

`keras.layers.activations.ReLU` API throws `AttributeError: module 'keras.api.layers' has no attribute 'activations'`. Modified it to use the `keras.layers.ReLU` API.

* [Visualization utils] Add visualization utils for plotting images(plain, with bounding boxes and segmentation masks) (#20401)

* api gen

* add plot image gallery function

* add `plot_bounding_box_gallery`

* correct label key

* add segmentation mask draw and plot functions

* few arg corrections and docstrings

* nit

* add missing args for plotting segmentation masks; use cols for each mask to make the aspect ratio of each subplot correct

* add missing argument for color

* Fix serialization / deserialization. (#20406)

- Serialization was not taking the registered name and package from the registry.
- Deserialization was selecting symbols by postfix as a fallback.

* Fixed the Value error in Example from activation.py (#20404)

* Fixed the Value error in Example from activation.py

Passing a Python list directly to the Keras layer object in the Example from activation.py throws a ValueError. Fixed the error by passing a tensor as input. Here is the [gist](https://colab.sandbox.google.com/gist/LakshmiKalaKadali/caefd982bfff4ff6c4139784236c3a17/quickstart_colab.ipynb#scrollTo=F3hV2zfCb7Nu).

Thank You

* Update activation.py

* Add hard_tanh activation function (#20405)

* Add hard_tanh activation function

* Fix the logic to match dtype of output

* Patch to support TF1 in TF numpy backend (#20413)

eb5c5ae broke Dense layers in TF1, since `.shape` returns a list of
Dimensions which are unhashable types. Adding `.as_list()` enables this
check in both TF1 and TF2.

```
{tf.constant([1, 2]).shape.as_list()[0],}
```

* Add `mean_with_sample_weight` reduction to `Loss` (#20410)

* Add `normalize_by_sample_weight` to `Loss`

* Add `"mean_with_sample_weight"` reduction for `Loss`

* Minimize code changes

* Fix CI bug

* Jax tracing fix (#20412)

* `JAXTrainer`: refactoring and fixes
Fix for https://github.com/keras-team/keras/issues/20402
Fix for https://github.com/keras-team/keras/issues/20411

* CI setup

* Fix tests

* Revert CI branch to master

* `function` -> `iterator_step`

* Add log_sigmoid activation (#20416)

* correct misspelling and test case (#20417)

* Fix additional shape comparison for TF1 compatibility (#20422)

I missed this in #20413. Confirmed this fixes the issue in Colab.

* Add error for empty PyDataset

* Add `tree.flatten_with_path` and `tree.assert_same_paths` methods. (#20431)

* Add `tree.flatten_with_path` and `tree.assert_same_paths` methods.

* Add methods in `__init__.py`

* Fix api generated files.

* `CompileLoss`: Allow different but reconcilable structures for `y_true` and `y_pred` (#20426)

* - Allow different but reconcilable structures for `y_true` and `y_pred`

* Fix test

* fix too much relaxation

* Use `assert_same_paths` for structures reconciliation checks

* Add `from_sorted_ids` option to `SparseTopKCategoricalAccuracy`. (#20433)

to consume sorted IDs of top N categories instead of scores for all categories.
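
The idea behind `from_sorted_ids` can be sketched in plain NumPy: the metric itself is unchanged, only the input representation differs (the function names below are hypothetical, for illustration):

```python
import numpy as np

def top_k_acc_from_scores(y_true, y_pred_scores, k):
    """Top-k accuracy from per-class scores (the usual metric input)."""
    topk = np.argsort(-y_pred_scores, axis=-1)[:, :k]
    return np.mean([t in row for t, row in zip(y_true, topk)])

def top_k_acc_from_sorted_ids(y_true, y_pred_ids, k):
    """Same metric when the model already emits class ids sorted by score."""
    return np.mean([t in row[:k] for t, row in zip(y_true, y_pred_ids)])

scores = np.array([[0.1, 0.7, 0.2], [0.6, 0.3, 0.1]])
sorted_ids = np.argsort(-scores, axis=-1)  # [[1, 2, 0], [0, 1, 2]]
y_true = np.array([2, 1])
# For k=2 both give 1.0: class 2 is in row 0's top-2, class 1 in row 1's top-2.
```

Emitting only the top-N sorted IDs avoids shipping a full score vector per example when the vocabulary is large.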

* Move project metadata from setup.py to pyproject.toml (#20427)

* Move project metadata from setup.py to pyproject.toml

* Override black target version (for now) to avoid other changes

* PR feedback

* Move explicit list of dependencies from setup.py to pyproject.toml

* pathlib was already imported

* Fix 5D shape validation issues with concat layer

* Bump the python group with 5 updates (#20436)

Updates the requirements on [tensorflow-cpu](https://github.com/tensorflow/tensorflow), [tensorflow](https://github.com/tensorflow/tensorflow), torch, torchvision and [tensorflow[and-cuda]](https://github.com/tensorflow/tensorflow) to permit the latest version.

Updates `tensorflow-cpu` to 2.18.0
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](https://github.com/tensorflow/tensorflow/compare/v2.17.0...v2.18.0)

Updates `tensorflow` to 2.18.0
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](https://github.com/tensorflow/tensorflow/compare/v2.17.0...v2.18.0)

Updates `torch` from 2.4.1+cu121 to 2.5.1+cu121

Updates `torchvision` from 0.19.1+cu121 to 0.20.1+cu121

Updates `tensorflow[and-cuda]` to 2.18.0
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](https://github.com/tensorflow/tensorflow/compare/v2.17.0...v2.18.0)

---
updated-dependencies:
- dependency-name: tensorflow-cpu
  dependency-type: direct:production
  dependency-group: python
- dependency-name: tensorflow
  dependency-type: direct:production
  dependency-group: python
- dependency-name: torch
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: python
- dependency-name: torchvision
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: python
- dependency-name: tensorflow[and-cuda]
  dependency-type: direct:production
  dependency-group: python
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump the github-actions group with 2 updates (#20435)

Bumps the github-actions group with 2 updates: [actions/upload-artifact](https://github.com/actions/upload-artifact) and [github/codeql-action](https://github.com/github/codeql-action).


Updates `actions/upload-artifact` from 4.4.0 to 4.4.3
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/50769540e7f4bd5e21e526ee35c689e35e0d6874...b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882)

Updates `github/codeql-action` from 3.26.10 to 3.27.0
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/e2b3eafc8d227b0241d48be5f425d47c2d750a13...662472033e021d55d94146f66f6058822b0b39fd)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Fix typos (#20434)

* Fix typos

* Manually fix E501, lines too long

* Fix keras.ops.quantile implementation for floating point inputs that are not tf.float32. (#20438)

* Use temporary folder for testing model saving in file editor (#20439)

* Fix encoding issue (#20443)

* Fix encoding issue

* Fix CI

* Replace isort and flake8 with Ruff checker (#20442)

* Replace isort and flake8 with Ruff checker

* Resolve issue with shell/api_gen.sh and correction to fix/check logic

* Resolve E721 to use `is` and `is not` for type comparisons

* Workaround for pydataset hanging issue

* Replace Black with Ruff formatter (#20445)

* adding `ifft2` method to ops (#20447)

* adding ifft2 method to ops

* fixes all test checks

* using built-in versions in backends

* Fix profiling for Tensorflow and JAX (#20450)

* Fix profiling for tensorflow and JAX

* Update doc

* Test fix

* Fix for https://github.com/keras-team/keras/issues/20425 (#20453)

The issue was caused by the fact that the iterator was not fully consumed and `on_epoch_end` was not called.

Added an exception to catch this situation in the future.

Added a unit test to test `model.fit()` with all the combinations of data adapters.

* Tweaked documentation of `Model`'s `fit`, `evaluate` and `predict`. (#20454)

Clearly documented what all the options are for `x` and all the implications for other arguments.

Also made the documentation more consistent between the arguments and between `fit`, `evaluate` and `predict`.

* Suppress warnings for mismatched tuples and lists in functional models. (#20456)

* Add Circle Loss Function for Similarity/Metric Learning Tasks. (#20452)

* update keras/src/losses/__init__.py, losses.py, losses_test.py and numerical_utils.py

* ruff fixes

* hotfix for logsumexp numerical instability with -inf values

* actual fix for logsumexp -inf instability

* Add tests, fix numpy logsumexp, and update Circle Loss docstrings.

* run api_gen.sh

* Docstring nits
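
The -inf instability mentioned above stems from the max-shift trick: when every entry is -inf, `x - max(x)` becomes `inf - inf = nan`. A common guard, sketched in NumPy (an illustration, not the exact Keras fix):

```python
import numpy as np

def logsumexp(x, axis=-1):
    """log(sum(exp(x))) that stays finite when x contains -inf entries."""
    m = np.max(x, axis=axis, keepdims=True)
    # If every entry is -inf, the max is -inf; shift by 0 instead so that
    # x - m stays -inf rather than producing inf - inf = nan.
    m = np.where(np.isfinite(m), m, 0.0)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(x - m), axis=axis))

logsumexp(np.array([0.0, -np.inf]))       # log(e^0 + 0) = 0.0
logsumexp(np.array([-np.inf, -np.inf]))   # -inf, not nan
```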

* `TensorFlowTrainer`: Add low-level API `unrolled_steps_per_execution` parameter (#20451)

* `TensorFlowTrainer`: Add `unrolled_steps_per_execution` parameter.

* Fix test

* Get rid of mask related warnings when using MHA layer with mask

* Fix steps for `TensorBoard` callback for evaluation and batch metrics. (#20461)

This bug caused batch level metrics and evaluation metrics to all be reported for step 0, which would not show a graph.

Epoch level metrics, were not affected by this bug.

* Attempt to fix nightly

* Add support for direct tensor as initializer (#20457)

* Add support for direct tensor as initializer

* Update docstrings and improve fn for direct tensor as initializer

* Switch jnp.reshape from newshape to shape paramter. (#20469)

The newshape parameter was deprecated in JAX v0.4.28, and will soon be removed.

* Enable flash attention (#20448)

* Enable flash attention

* code reformat

* address review comments

* add docstring

* update docstring

* add numerical correctness test

* code reformat

* use causal mask from call method

* address review comments

* update if

* fix tests

* update tests

* enable flash attention on TPU JAX

* update code

* minor fix

* address review comments

* fix tests

* run api_gen

* code reformat

* fix mask issue

* disable causal mask in dpa because it is computed in compute_attention_mask

* fix masks tests

* code reformat

* disable tests if env is not supported

* fix code reformat error

* fix torch GPU tests

* fix torch gpu tests

* make everything contiguous

* check if mask is not None before calling contiguous

* disable pytorch GPU test

* merge master

* code reformat

* set bias to None

* disable GPU test

* Implement transform_bounding_boxes for random_flip (#20468)

* Implement transform_bounding_boxes for random_flip

* fix test case for torch env

* Add channel first test cases also

* Add condition for channel_first

* `CompileLoss`: fix for partially defined loss with different `y_pred` and `y_true` structures (#20477)

* `CompileLoss`: fix for partially defined loss with different `y_pred` and `y_true` structures.

* - added test

* Update CompileLoss to report unweighted metric values (breaking change) (#20476)

Fixes #20343. Thanks to rivershah@ for pointing this out.

This changes CompileLoss metrics to reporting the values before
weights are applied, which brings it in line with Keras 2 behavior.

* Add loss call fastpath

* Attempt to fix torch gpu CI

* Add hard_shrink activation function (#20470)

* Add hard_shrink activation function

* Correct test case failed

* Change threshold name from lambd to threshold

* Change threshold name from lambd to threshold

* Docstring nits

* - Better handling of partial loss configs (#20478)

* Double backup (#20465)

* Double backup

* Do not remove the previous backup in case some epoch fails twice

* Fix PR comments

* Allow np object arrays containing strings as sliceable inputs

* Add tanh_shrink activation (#20480)

* remove arg `return_attention_scores` from `_compute_attention` (#20482)

* Improve the consistency of the names of initializers (#20484)

* Fix `Orthogonal` initializer and improve consistency of the names of initializers.

* Rename `STFTInitializer` to `STFT`

* Fix CI

* Fix `attention_mask` computation in `MultiHeadAttention` (#20488)

* Fix `dot_product_attention` in `MultiHeadAttention`

* Simplify tests

* Refactor `dot_product_attention` to use flash attention when available (#20489)

* Refactor `dot_product_attention`

* Fix CI and improve compatibility for torch backend.

* Minor condition update.

* Fix CI.

* Fix CI

* Fix GPU CI

* Minor updates for tests

* Fixing example code for BinaryFocalCrossentropy in losses.py file (#20492)

* Add soft_shrink activation (#20494)

* Enhance the robustness of the flash attention check (#20495)

* Enhance the robustness of the flash attention check.

* Fix CI

* Fix CI again

* Fix GPU CI again and again...

* No raise in tests

* Pin coverage==7.6.1

* Fix the comment

* Unpin coverage (#20499)

* implement transform_bounding_boxes for center_crop (#20491)

* implement transform_bounding_boxes for center_crop

* Add test case

* Add support for XPU device for torch

* Fix rendering issue (#20501)

* Add exp2 op (#20506)

* Add Exp2

* Api

* fix and format

* fix format

* fix

* Fix Docstring

* Update API files

* More flexible output_shape computation in keras.layers.MultiHeadAttention (#20503)

* Made the compute_output_shape method more flexible; now _output_shape can be either an integer or a tuple (as previously required).
Fix discussed in #19769

* Added unit test

* Minor changes to comments in unit test

* Minor changes to comments in unit test

* Minor fix

* Fix tensorflow `_dot_product_attention_xla` and update `enable_flash_attention` (#20510)

* Fix tensorflow `_dot_product_attention_xla` and update MHA tests

* Fix tests

* Add squareplus activation (#20508)

* Add squareplus activation

* correct spelling

* Fix docstrings

* `MultiHeadAttention._compute_attention_mask()` always returns a bool tensor. (#20511)

Previously, if a non-bool `attention_mask` was passed and no other mask was passed, the original `attention_mask` was returned unchanged.

Now, it is always cast to bool.
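
The cast matters because a numeric mask cannot be safely combined with other masks via logical ops. A NumPy sketch of the intended behavior (illustrative only; the real logic lives in `MultiHeadAttention._compute_attention_mask()`):

```python
import numpy as np

def combine_masks(attention_mask, causal_mask):
    """Cast a numeric user mask to bool so it can be AND-ed with a causal mask."""
    attention_mask = attention_mask.astype(bool)  # the fix: always return bool
    return np.logical_and(attention_mask, causal_mask)

user = np.array([[1, 1, 0]], dtype="int32")    # non-bool mask from the user
causal = np.tril(np.ones((3, 3), dtype=bool))  # standard lower-triangular mask
combine_masks(user, causal)                    # bool result, broadcast to (3, 3)
```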

* Allow `convert_to_tensor` to take a value with the wrong `dtype` on Tensorflow. (#20513)

`ops.convert_to_tensor(1.0, "int32")` would fail with the TensorFlow backend. This case is now supported.

Note that other backends already supported this.

* Avoid call to deprecated xla_bridge.get_backend() (#20512)

This function was deprecated in JAX v0.4.32, and will soon be removed.

* Update GQA to use flash attention and move the config to `backend.config` (#20514)

* Make test resilient to spurious warnings. (#20516)

Test was counting warnings, but some other components can throw unrelated warnings.

This makes sure we only count the warnings we're looking for.

* Update losses.py (#20523)

* Fix and update GQA tests (#20522)

* Fix incorrect argument name and update description in RNN documentation (#20525)

* Replace `np.iinfo` and `np.finfo` with `ml_dtypes` (#20528)

* Raise error when calling `save_weights` and `load_weights` with the unbuilt model (#20530)

* Allow EarlyStopping to be reused between multiple `fit`s. (#20533)

All values were already reset properly in `on_train_begin` except `best`.

Fixes https://github.com/keras-team/keras/issues/20521

* implement transform_bounding_boxes for random_zoom (#20526)

* implement transform_bounding_boxes for random_zoom

* Add test cases

* Update test case & correct code

* Revert "Update test case & correct code"

This reverts commit 3288fc7164f802a66948b27905df7f4bce9d7df9.

* Update test case & correct code

* move inline method to layer level

* Fix `BaseOptimizer` with mixed `tf.Variable` and `KerasVariable` (#20534)

* Add support for symbolic tensors to `convert_to_tensor`. (#20539)

`convert_to_tensor(x, sparse=False)` is the API to densify sparse tensors. When used in that manner, the input is already a backend tensor. For this scenario, it makes sense to support symbolic tensors so that one can build a functional model using `convert_to_tensor`.

Also improved the documentation of `convert_to_tensor`.

* implement transform_bounding_boxes for random_translation (#20540)

* Propagate the `aggregation` property when creating a `tf.Variable` (#20541)

* Fix TF variable aggregation

* Add `none` to aggregation

* Fix torch GPU CI, I suppose...

* Add inner op (#20532)

* add inner op

* Fix tensorflow implementation

* fix

* api

* fix lint

* format

* Remove `output_shape` property in MHA (#20543)

* Simplify `output_shape` logic in MHA and remove `output_shape` property.

* Fix CI

* Update test

* Update test

* Fix issue with list/dict losses

* Tiny bit of battle testing function dict inputs

* Fix CI

* Improve `keras.Variable` by exposing docstrings and ensuring consistency in the codebase (#20544)

* Improve `keras.Variable` by exposing docstrings and ensuring consistency in the codebase

* Fix CI

* Update docstrings

* Fix cloning of Sequential models w. input_tensors argument (#20550)

* Fix cloning for Sequential w. input tensor

* Add missing test for input_tensor argument

* Add Sequential wo. Input to test, build model to ensure defined inputs

* Better input validation for InputLayer with input_tensor provided

* Removed duplicated Example title (#20553)

Removed a duplicated Example title in the metrics CosineSimilarity method.

* Add diagflat op (#20547)

* diagflat

* api

* Add sparse_plus activation (#20545)

* Add sparse_plus activation

* correct test cases failed

* Tiny fix

* Update ModelCheckpoint support ".h5" support (#20561)

* Update ModelCheckpoint support ".h5" support

* ModelCheckpoint support ".h5" and ".keras" both filetype

* Minor touch ups

* Addition of Sparsemax activation (#20558)

* add: sparsemax ops

* add: sparsemax api references to inits

* add: sparsemax tests

* edit: changes after test

* edit: test case

* rename: function in numpy

* add: pointers to rest inits

* edit: docstrings

* change: x to logits in docstring
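
For reference, sparsemax (Martins & Astudillo, 2016) projects logits onto the probability simplex and, unlike softmax, can output exact zeros. A NumPy sketch of the 1-D case (illustrative, not the Keras ops implementation):

```python
import numpy as np

def sparsemax(logits):
    """Sparsemax: a softmax alternative that can produce exact zeros."""
    z = np.sort(logits)[::-1]             # sort descending
    cumsum = np.cumsum(z)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z > cumsum          # which sorted entries stay nonzero
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_z # threshold subtracted from logits
    return np.maximum(logits - tau, 0.0)

sparsemax(np.array([2.0, 1.0, 0.1]))  # mass collapses onto the largest logit
```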

* Add parameter axis to tversky loss (#20563)

* Add axis to tversky loss

* Add tests for tversky loss

* Fix line-too-long error

* Reformat code

* Removed duplicated Example title in regression_metrics.py (#20565)

* Un-disable legacy saving tests.

* FIX BUG in load_weights_from_hdf5_group_by_name" legacy_h5_format.py (#20537)

* FIX BUG in load_weights_from_hdf5_group_by_name" legacy_h5_format.py

* add top_level_model_weights to get_subclassed_model

* Minor fixes.

* Major rework of `optree_impl` and `dmtree_impl` for consistency. (#20481)

The `optree` implementation and the `dmtree` implementation of the `tree` API had a number of discrepancies. Running unit tests without `optree` installed would fail on a number of tests.

The documentation and behavior of the `tree` API were not internally consistent. There was contradicting documentation about the handling of `OrderedDict`s. The behavior of the `optree` implementation was to use the key-sorted order for `pack_sequence_as` but the sequence order for `flatten`, so `flatten` + `pack_sequence_as` did not round-trip (as discovered in https://github.com/keras-team/keras/issues/20538 ).

The exceptions used to report non-matching structures were different between the two implementations. Whereas `optree` uses `ValueError` for all mismatches, `dmtree` would distinguish between `ValueError` and `TypeError` in some cases. This caused a number of bugs because `TypeError` was often not caught, only `ValueError`.

The `check_types` argument of `assert_same_structure` is deprecated and no longer does anything. The `optree` implementation would check the types of the *leaves*, whereas `dmtree` would check the types of the *collections*. So `check_types=False` with `optree` was fairly, although not completely, similar to `check_types=True` with `dmtree`. The rule is that no two collection types are considered the same, except for `dict`, `OrderedDict` and `defaultdict`.

Because `optree` is the default implementation used and `dmtree` is only a fallback, this PR changes the `tree` API behavior to conform to the `optree` approach everywhere. This makes the `optree` implementation a thin wrapper on top of `optree`, whereas large portions of the `dmtree` wrapper are now reimplemented in Python. Note that the `tree` API was initially modelled after `dmtree`, not `optree`.

There are a couple of fairly niche known discrepancies between the `optree` implementation and the `dmtree` implementation. They are documented in `dmtree_impl.py`.

- Fixed references to `unflatten_as` in documentation, which doesn't exist.
- Fixed contradicting documentation in `flatten` and `pack_sequence_as` related to the handling of `OrderedDict`s. The documentation now states that the sequence order is always used.
- Made handling of `OrderedDict`s follow the spec with both `optree` and `dmtree`.
- Made the exceptions raised consistent and documented them. `TypeError` is only for major programmer error (e.g. `func` is not callable), and `ValueError` is used for all structure mismatches.
- Removed all cases where `TypeError` was caught after a `assert_same_structure`.
- Fixed the discrepancy in the behavior for `namedtuple`s. The `optree` behavior is now followed, meaning that the path for fields are indices, not field names.
- Deprecated the `check_types` argument in `assert_same_structure` and implemented the `optree` behavior in `dmtree`.
- Removed the `sequence_fn` argument of `pack_sequence_as`, which was not used and forced the `optree` implementation to be fully rewritten in Python.
- Added `MAP_TO_NONE` to the API, added support for it in both implementations of `traverse`. This feature was documented, but not accessible and not actually implemented.
- Added support for registered classes with `dmtree`, both with `flatten` and `unflatten` passed at registration time and methods on the class.
- Tracked collections are now supported with `dmtree` (`TrackedList`, `TrackedSet` and `TrackedDict`). In particular, `TrackedSet` would be handled as a leaf and never traversed.
- Removed dependency of tracked collections on `optree` in `tree_flatten` and `tree_unflatten`.
- Tensorflow's `ListWrapper` and `_DictWrapper` are now supported with `dmtree`.
- Implemented a more efficient way for `optree` to verify structures are the same while traversing them with `map_structure` and `map_structure_up_to`. This avoids multiple traversals.
- Added documentation for `list_to_tuples` and `map_shape_structure`.
- Completely rewrote all tree API unit tests, which are now painfully thorough.
- `map_shape_structure` is now unit tested.
- Fixed unintended use of `tree` instead of `keras.tree` in unit test.
- Ran unit tests for all backends with `optree` uninstalled.

Fixes https://github.com/keras-team/keras/issues/20538

* Fix CI
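
The round-trip property at the heart of the fix can be illustrated with a toy flatten/pack pair that uses the same key-sorted order in both directions (a simplified sketch, not the actual `optree`/`dmtree` wrappers):

```python
def flatten(s):
    """Flatten nested lists/tuples/dicts into leaves; dicts in key-sorted order."""
    if isinstance(s, dict):
        return [leaf for k in sorted(s) for leaf in flatten(s[k])]
    if isinstance(s, (list, tuple)):
        return [leaf for item in s for leaf in flatten(item)]
    return [s]

def pack_sequence_as(template, flat):
    """Inverse of flatten: rebuild template's structure, consuming flat in order."""
    it = iter(flat)

    def rebuild(t):
        if isinstance(t, dict):
            # Same key-sorted order as flatten, so the round-trip holds.
            return {k: rebuild(t[k]) for k in sorted(t)}
        if isinstance(t, (list, tuple)):
            return type(t)(rebuild(x) for x in t)
        return next(it)

    return rebuild(template)

s = {"b": [1, 2], "a": (3,)}
assert flatten(s) == [3, 1, 2]               # "a" sorts before "b"
assert pack_sequence_as(s, flatten(s)) == s  # flatten + pack round-trips
```

The bug was precisely that the two directions used different orders for `OrderedDict`s, breaking the round-trip.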

* Add threshold activation (#20555)

* Add threshold activation

* Add implementations for ops, activations.py & test cases

* Adjust arg names

* Fix dtype of tf argmax

* Bump the github-actions group with 2 updates (#20571)

Bumps the github-actions group with 2 updates: [codecov/codecov-action](https://github.com/codecov/codecov-action) and [github/codeql-action](https://github.com/github/codeql-action).


Updates `codecov/codecov-action` from 4 to 5
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

Updates `github/codeql-action` from 3.27.0 to 3.27.5
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/662472033e021d55d94146f66f6058822b0b39fd...f09c1c0a94de965c15400f5634aa42fac8fb8f88)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Fix the compatibility of the quantization for MHA  (#20562)

* Fix MHA with int8 quant

* Propagate and delete mask in MHA

* Fix CI

* Random seed doc (#20575)

* Fixed error in doc of random number generators concerning seed argument.

* Update class documentation for SeedGenerator

Clarify the facts that 

- a global SeedGenerator is used by all random number generating functions in keras,
- a SeedGenerator is required for jit compilation with the JAX backend.

* Minor reformulation.

* Refined remark on JAX and tracing.

* Fixed column length.

* Fixed line length of documentation.

* Reformatted with black.

* Reformatted with black.

* Still some lines too long?

* Another long line that was introduced by black.

* Minor nits

* Add unravel_index op (#20559)

* Add unravel_index

* Fix Tensorflow

* Fix Tensorflow impl

* fix default np.int64

* fix

* Fix torch

* fix numpy and torch

* api

* fix

* Fix tensorflow impl and docstring

* fix

* shape None case

* shape None case

* fix
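
The new op appears to follow NumPy's `unravel_index` semantics, which map flat indices back into multi-dimensional coordinates:

```python
import numpy as np

# A flat index into a flattened (2, 4) array maps back to (row, col):
row, col = np.unravel_index(7, (2, 4))         # 7 -> row 1, col 3
# It also vectorizes over arrays of flat indices:
rows, cols = np.unravel_index([0, 5], (2, 4))  # -> rows [0, 1], cols [0, 1]
```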

* Add support for jax named scope. (#20580)

* Better handling of variable creation with tensor as initializer. (#20557)

- when the variable dtype is not specified, the dtype of the tensor/array passed as initializer is used instead of defaulting to `backend.floatx()`.
- with the JAX backend, don't needlessly create a copy of the initializer value, reuse it if possible.

* Add Equalization Layer (#20570)

* Add Equalization Layer

* api and fix format

* lint

* Add tf-data test

* data format

* Update Doc String

* Minor fix

* Removed duplicated Example title in OneHotIoU (#20587)

A duplicated Example title has been removed in the OneHotIoU file.

* [TextVectorization Layer] Added tests for testing the functionality (#20586)

* [TextVectorization Layer] Added tests for functionality

* Fix formatting

* Skip flaky TF test

* Fix masking when `_keras_mask` is unset during `call` (#20594)

* Fix using a Python number as an initializer in JAX (#20595)

* Fix the issue when using python number as the initializer in jax

* Add rise

* Fix using lambda expression as the initializer

* Fix the issue when using `Model.compile` multiple times. (#20602)

* Fix loss scaling with `tf.distribute.MirroredStrategy` and `keras.regularizers` (#20609)

* Fix loss scaling when using `tf.distribute.MirroredStrategy`

* Fix regularizer

* Remove unused fn

* Add implementations for mix_up (#20590)

* Add implementations for mix_up

* Add updated init files

* Applied some corrections

* Remove sample beta method

* Add test cases

* Correct failed test cases

* Correct failed test cases

* Add tf compatibility test case

* Update example in the code

* Fix failed test case

* Update for numpy 2.2 bool array changes (#20614)

* Update constraints_test.py

* Turn double negative into positive assert

* Fix `SeedGenerator` in `tf.distribute.MirroredStrategy` (#20612)

* Unscale loss value in TF (#20610)

* Fix issue with unsorted dict (#20613)

* Code style fix

* Use lower precision in DPA (#20615)

* Fix confusion matrix type (#20584)

* fix: fix confusion matrix float32 problem

use int

* Use float32 for threshold comparisons and include warnings when the weights are float but the dtype is int

* fix torch test on mix_up (#20623)

* Fix torch gpu ci

* update distribution_lib files docstrings. (#20625)

* Improve implementation of TF shuffle and make it XLA compilable

* Fix CI I guess

* Correct bug for MixUp initialization. (#20630)

* Correct bug for MixUp initialization.

* Update format indent

* Fix Layer normalization issue with scalar mean & variance (#20626)

* Fix normalization issue with scalar mean & variance

* Add unit test for normalization with scalar mean and variance

* Fix code format
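
The scalar case reduces to plain broadcasting: `(x - mean) / sqrt(variance + eps)` works whether the statistics are scalars or per-feature arrays. A NumPy sketch (a hypothetical helper, not the layer's code):

```python
import numpy as np

def normalize(x, mean, variance, epsilon=1e-7):
    """(x - mean) / sqrt(var + eps); broadcasting handles scalar or per-feature stats."""
    mean = np.asarray(mean)          # a scalar becomes a 0-d array and broadcasts
    variance = np.asarray(variance)
    return (x - mean) / np.sqrt(variance + epsilon)

x = np.array([[1.0, 2.0, 3.0]])
normalize(x, mean=2.0, variance=1.0)                        # ~[[-1., 0., 1.]]
normalize(x, mean=[1.0, 2.0, 3.0], variance=[1.0] * 3)      # ~[[0., 0., 0.]]
```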

* Add `IOU`, `CIOU` and minor fixes to bounding boxes (#20635)

* Add compute affine matrix method and reformat some of the bounding box arguments

* Add rotation for boxes

* proper reshape of the rotation matrix

* iou and random rotation using affine

* bounding boxes iou

* - add encode and decode to deltas for bounding boxes
- add iou and ciou methods

* add api points for encode and decode methods of bounding boxes

* fix arg name and proper for args for test_affine

* correct dtype mul

* Fix torch gpu ci

* Fix GPU CI (#20637)

* Fix GPU CI

* Fix dtype issue

* Remove duplicate tests

* Fix typos in simple_rnn (#20636)

I observed a few typos in simple_rnn.

* Add implementations for random_hue (#20620)

* Add implementations for random_hue

* Correct failed test cases

* Correct misspellings

* Update example on description

* Correct test case failed.

* Fix code style

* Docstring nit

* FEAT add scikit-learn wrappers (#20599)

* FEAT add scikit-learn wrappers

* import cleanup

* run black

* linters

* lint

* add scikit-learn to requirements-common

* generate public api

* fix tests for sklearn 1.5

* check fixes

* skip numpy tests

* xfail instead of skip

* apply review comments

* change names to SKL* and add transformer example

* fix API and imports

* fix for new sklearn

* sklearn1.6 test

* review comments and remove random_state

* add another skipped test

* rename file

* change imports

* unindent

* docstrings

* Rework `Model.export` and `keras.export.ExportArchive` to support exporting in TFLite and ONNX formats in the future (#20631)

* Rework `Model.export` and `keras.export.ExportArchive`

* Try fixing PyDatasetAdapterTest CI issues

* Fix random hue layer

* Update example and logic for mix_up (#20643)

* Add sklearn (#20644)

* Update example and logic for mix_up (#20642)

* Update example and logic for mix_up

* remove tf from example

* Add RandomGrayscale Layer (#20639)

* Add RandomGrayscale Layer

* Fix torch tests

* format

* fix

* fix

* Fix torch tests

* Fix torch ci

* Fix typo

* Fix issues with randomgrayscale layer

* Fix Randomhue (#20652)

* Small fix in random hue

* use self.backend for seed

* test: add test for class weights (py_dataset adapter) (#20638)

* test: add test for class weights (py_dataset adapter)

* "call _standardize_batch from enqueuer"

m

* add more tests, handle pytorch astype issue

m

* convert to numpy to ensure consistent handling of operations

* Add implementations for random_saturation (#20646)

* Correct bug for MixUp initialization.

* Update format indent

* Add implementations for random_saturation

* change parse_factor method to inner method.

* correct test cases failed.

* correct failed test cases

* Add training argument check condition

* correct source code

* add value_range args description

* update description example

* change _apply_random_saturation method to inline

* Fix random_saturation

* Fix paths for pytest in contribution guide (#20655)

* Add preliminary support of OpenVINO as Keras 3 backend (#19727)

* [POC][OV] Support OpenVINO as Keras 3 backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Mark all unsupported ops from numpy space

Signed-off-by: Kazantsev, Roman <[email protected]>

* Mark unsupported ops in core, image, and linalg spaces

Signed-off-by: Kazantsev, Roman <[email protected]>

* Mark unsupported ops in math, nn, random, and rnn spaces

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix sorting imports

Signed-off-by: Kazantsev, Roman <[email protected]>

* Format imports

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix sorting imports

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix sorting imports

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix inference

Signed-off-by: Kazantsev, Roman <[email protected]>

* Remove openvino specific code in common part

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix typo

* Clean-up code

Signed-off-by: Kazantsev, Roman <[email protected]>

* Recover imports

Signed-off-by: Kazantsev, Roman <[email protected]>

* Sort imports properly

Signed-off-by: Kazantsev, Roman <[email protected]>

* Format source code

Signed-off-by: Kazantsev, Roman <[email protected]>

* Format the rest of source code

Signed-off-by: Kazantsev, Roman <[email protected]>

* Continue format adjustment

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add OpenVINO dependency

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix inference using OV backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Support bert_base_en_uncased and mobilenet_v3_small from Keras Hub

Signed-off-by: Kazantsev, Roman <[email protected]>

* Remove extra openvino specific code from layer.py

Signed-off-by: Kazantsev, Roman <[email protected]>

* Apply code-style formatting

Signed-off-by: Kazantsev, Roman <[email protected]>

* Apply code-style formatting

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix remained code-style issue

Signed-off-by: Kazantsev, Roman <[email protected]>

* Run tests for OpenVINO backend in GHA

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add config file for openvino backend validation

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add import test for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix error in import_test.py

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add import_test for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add openvino specific integration tests in GHA

Signed-off-by: Kazantsev, Roman <[email protected]>

* Exclude coverage for OpenVINO

Signed-off-by: Kazantsev, Roman <[email protected]>

* remove coverage for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Try layer tests for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Run layer tests for openvino backend selectively

Signed-off-by: Kazantsev, Roman <[email protected]>

* Mark enabled tests for openvino backend in a different way

Signed-off-by: Kazantsev, Roman <[email protected]>

* Update .github/workflows/actions.yml

* Fix import for BackendVariable

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix errors in layer tests for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add test for Elu via openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix sorted imports

Signed-off-by: Kazantsev, Roman <[email protected]>

* Extend testing for attention

Signed-off-by: Kazantsev, Roman <[email protected]>

* Update keras/src/layers/attention/attention_test.py

* Switch on activation tests for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Switch on attention tests for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Update keras/src/layers/attention/additive_attention_test.py

* Update keras/src/layers/attention/grouped_query_attention_test.py

* Run conv tests for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix convolution in openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Work around constant creation for tuple

Signed-off-by: Kazantsev, Roman <[email protected]>

* Work around constant creation in reshape

Signed-off-by: Kazantsev, Roman <[email protected]>

* Run depthwise conv tests for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix get_ov_output for other x types

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix elu translation

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix softmax and log_softmax for None axis

Signed-off-by: Kazantsev, Roman <[email protected]>

* Run nn tests for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix numpy operations for axis to be None

Signed-off-by: Kazantsev, Roman <[email protected]>

* Run operation_test for openvino_backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Switch on math_test for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Switch on image tests for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Switch on linalg test for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Extend OpenVINOKerasTensor with new built-in methods and fix shape op

Signed-off-by: Kazantsev, Roman <[email protected]>

* Switch on core tests for openvino backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Use different way of OpenVINO model creation that supports call method

Signed-off-by: Kazantsev, Roman <[email protected]>

* Unify integration test for openvino

Signed-off-by: Kazantsev, Roman <[email protected]>

* Support new operations abs, mod, etc.

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add support for more operations like squeeze, max

Signed-off-by: Kazantsev, Roman <[email protected]>

* Try to use excluded test files list

Signed-off-by: Kazantsev, Roman <[email protected]>

* Apply formatting for normalization_test.py

Signed-off-by: Kazantsev, Roman <[email protected]>

* Correct GHA yml file

Signed-off-by: Kazantsev, Roman <[email protected]>

* Test that openvino backend is used

Signed-off-by: Kazantsev, Roman <[email protected]>

* Revert testing change in excluded test files list

Signed-off-by: Kazantsev, Roman <[email protected]>

* Include testing group

Signed-off-by: Kazantsev, Roman <[email protected]>

* Include legacy test group

Signed-off-by: Kazantsev, Roman <[email protected]>

* Exclude legacy group of tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Include initializers tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Skip tests for initializers group

Signed-off-by: Kazantsev, Roman <[email protected]>

* Remove export test group from ignore

Signed-off-by: Kazantsev, Roman <[email protected]>

* Include dtype_policies test group

Signed-off-by: Kazantsev, Roman <[email protected]>

* Reduce ignored tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix ops.cast

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add decorator for custom_gradient

Signed-off-by: Kazantsev, Roman <[email protected]>

* Shorten line in custom_gradient

Signed-off-by: Kazantsev, Roman <[email protected]>

* Ignore dtype_policy_map test

Signed-off-by: Kazantsev, Roman <[email protected]>

* Include callback tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Switch on backend tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Exclude failing tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Correct paths to excluded tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Switch on some layers tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Remove pytest.mark.openvino_backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Register mark requires_trainable_backend

Signed-off-by: Kazantsev, Roman <[email protected]>

* Ignore test files in a different way

Signed-off-by: Kazantsev, Roman <[email protected]>

* Try different way to ignore test files

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix GHA yml

Signed-off-by: Kazantsev, Roman <[email protected]>

* Support tuple axis for logsumexp

Signed-off-by: Kazantsev, Roman <[email protected]>

* Switch on some ops tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Switch on some callbacks tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add openvino export

Signed-off-by: Kazantsev, Roman <[email protected]>

* Update sklearn tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add a comment to skip numerical_test

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add custom requirements file for OpenVINO

Signed-off-by: Kazantsev, Roman <[email protected]>

* Add reqs of openvino installation for api changes check

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix types of Variables and switch on some variables tests

Signed-off-by: Kazantsev, Roman <[email protected]>

* Fix nightly code check

Signed-off-by: Kazantsev, Roman <[email protected]>

---------

Signed-off-by: Kazantsev, Roman <[email protected]>

* Make sklearn dependency optional (#20657)

* Add a condition to verify training status during image processing (#20650)

* Add a condition to verify training status during image processing

* resolve merge conflict

* fix transform_bounding_boxes logic

* add transform_bounding_boxes test
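The bounding-box flip transform at the heart of this PR can be sketched in NumPy (a minimal sketch assuming "xyxy" corner format, not the actual Keras implementation):

```python
import numpy as np

def flip_boxes_horizontal(boxes, image_width):
    # boxes: (N, 4) in "xyxy" format. A horizontal flip mirrors the
    # x-coordinates across the image, and the min/max roles swap:
    # the old right edge becomes the new left edge.
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    return np.stack(
        [image_width - x2, y1, image_width - x1, y2], axis=-1
    )

flipped = flip_boxes_horizontal(np.array([[10.0, 20.0, 30.0, 40.0]]), 100)
```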

* Fix recurrent dropout for GRU. (#20656)

The simplified implementation, which used the same recurrent dropout masks for all the previous states, didn't work and caused training not to converge with large enough recurrent dropout values.

This new implementation is now the same as Keras 2. Note that recurrent dropout requires "implementation 1" to be turned on.

Fixes https://github.com/keras-team/keras/issues/20276
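The idea — one independent recurrent dropout mask per gate, each reused across timesteps — can be sketched in NumPy (illustrative only, not the Keras code; `recurrent_dropout_masks` is a hypothetical helper):

```python
import numpy as np

def recurrent_dropout_masks(rng, units, rate, count=3):
    # One independent mask per gate (e.g. z, r, h for a GRU), each kept
    # fixed across timesteps, rather than one mask shared by all gates.
    keep = 1.0 - rate
    return [
        (rng.random(units) < keep).astype(np.float32) / keep
        for _ in range(count)
    ]

masks = recurrent_dropout_masks(np.random.default_rng(0), 8, 0.5)
```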

* Fix example title in probabilistic_metrics.py (#20662)

* Change recurrent dropout implementation for LSTM. (#20663)

This change is to make the implementation of recurrent dropout consistent with GRU (changed as of https://github.com/keras-team/keras/pull/20656 ) and Keras 2.

Also fixed a bug where the GRU fix would break when using CUDNN with a dropout and no recurrent dropout. The solution is to create multiple masks only when needed (implementation == 1).

Added coverage for the case when dropout is set and recurrent dropout is not set.

* Never pass enable_xla=False or native_serialization=False in tests (#20664)

These are invalid options in the latest version of jax2tf, they
will just immediately throw.

* Fix `PyDatasetAdapterTest::test_class_weight` test with Torch on GPU. (#20665)

The test was failing because arrays on device and on cpu were compared.

* Fix up torch GPU failing test for mix up (#20666)

We need to make sure to get any tensors placed on CPU before using
them in the TensorFlow backend during preprocessing.

* Adjust value_range for random_contrast and random_hue (#20671)

* Adjust value_range for random_contrast and random_hue

* Add value_range description

* Correct failed test cases

* Remove duplicate example title in OneHotMeanIoU function (#20669)

Removed the duplicated example title in the OneHotMeanIoU function.

* Add random_color_jitter processing layer (#20673)

* Add implementations for random_saturation

* change parse_factor method to inner method.

* Add implementations for random_color_jitter

* Add random_color_jitter processing layer

* Add random_color_jitter test

* Update test cases

* Correct failed test case

* Correct failed test case

* Correct failed test case

---------

Signed-off-by: Kazantsev, Roman <[email protected]>
Co-authored-by: IMvision12 <[email protected]>
Co-authored-by: Enrico <[email protected]>
Co-authored-by: Marco <[email protected]>
Co-authored-by: Roman Kazantsev <[email protected]>
Co-authored-by: Matt Watson <[email protected]>
Co-authored-by: hertschuh <[email protected]>
Co-authored-by: Jasmine Dhantule <[email protected]>

* Add training status condition during image processing (#20677)

* Add training status condition during image processing

* Revert "Add training status condition during image processing"

This reverts commit 8fc5ae2c28c239663fe0f2e8ac7fa15037f41a7d.

* Reapply "Add training status condition during image processing"

This reverts commit 25a4bd1332c7a5794dc872f5aa6ddddf6ed1606b.

* Revert center_crop

* Import `pydot` first before trying backups (#20682)

* Fix: Return Attention Scores when `return_attention_scores=True` (#20684)

* Fix: Ensure Attention Layer Returns Attention Scores when `return_attention_scores=True`

This pull request addresses an issue in the Attention layer where the return_attention_scores parameter wasn't correctly handled in the compute_output_shape method. This fix ensures that attention scores are returned when return_attention_scores=True.

## Changes Made
Modified compute_output_shape method to return the shape of both the attention output and the attention scores when return_attention_scores=True.
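The shape relationship the fix has to report can be sketched with a minimal NumPy attention function (illustrative only, not the Keras layer):

```python
import numpy as np

def attention(q, k, v, return_attention_scores=False):
    # scores/weights shape: (Tq, Tv); output shape: (Tq, dim).
    # When scores are requested, the output shape computation must
    # report both shapes, not just the output's.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v
    return (out, weights) if return_attention_scores else out

q, k, v = np.ones((2, 4)), np.ones((3, 4)), np.ones((3, 4))
out, w = attention(q, k, v, return_attention_scores=True)
```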

* Formatting

* Fixed score return and added unit tests for return_attention_scores=True

* Removed debug print statement

* Add random_color_degeneration processing layer (#20679)

* Add random_color_degeneration processing layer

* Fix mistypo

* Correct failed test case

* fix attention output with symbolic tensors and attention scores (#20689)

* minor: Fix Functional API guide (#20694)

Add an empty line so the list is rendered as a list, not as a single line of text
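For illustration (the list content here is invented, not from the guide), the fix is a single blank line between the paragraph and the list:

```markdown
The model expects:

- an input tensor
- an optional mask
```

Without the blank line, many Markdown renderers fold the items into the preceding paragraph as one line of text.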

* Introduces support for exporting `SavedModel` in the torch backend using `torch-xla` (#20685)

* Add support for exporting savedmodel in the torch backend

* Fix `actions.yml`

* Fix CI

* Remove unused `_mangle_tf_root_scope_name` and add `import_error_msg` to `LazyModule`

* Ignore `export_lib_test` in torch GPU CI

* Add random_posterization processing layer (#20688)

* Add random_posterization processing layer

* Add test cases

* correct failed case

* Fix torch gpu CI (#20696)

* Add random_sharpness processing layer (#20697)

* Add random_sharpness.py

* Update random_sharpness

* Add test cases

* Fix failed test case

* Add random_shear processing layer (#20702)

* Add random_shear processing layer

* Update method name

* Fix failed test case

* Fix failed test case

* Fix failed test case

* Fix the aggregation in the codebase (#20703)

* Bump the github-actions group with 2 updates (#20707)

Bumps the github-actions group with 2 updates: [actions/upload-artifact](https://github.com/actions/upload-artifact) and [github/codeql-action](https://github.com/github/codeql-action).


Updates `actions/upload-artifact` from 4.4.3 to 4.5.0
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882...6f51ac03b9356f520e9adb1b1b7802705f340c2b)

Updates `github/codeql-action` from 3.27.5 to 3.28.0
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/f09c1c0a94de965c15400f5634aa42fac8fb8f88...48ab28a6f5dbc2a99bf1e0131198dd8f1df78169)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix: Torch MPS backend failing test (#20709)

* implement transform_bounding_boxes for random_shear (#20704)

* Fix torch GPU CI

* Update BackupAndRestore class example (#20714)

* Update BackupAndRestore class example

* Update backup_and_restore.py

---------

Co-authored-by: François Chollet <[email protected]>

* Update version number

* Refactor `keras/src/export/export_lib` and add `export_onnx` (#20710)

* Refactor export_lib and add export_onnx

Add tf2onnx requirements

* Add onnxruntime dep

* Update numpy dep

* Resolve comments

* Patch `tf2onnx` to ensure compatibility with `numpy>=2.0.0` (#20725)

* Patch tf2onnx to support numpy 2

* Fix warnings

* Update export_onnx

* Add build method to suppress warning (#20729)

* Specify window_length dtype requirement in tf.keras.ops.istft in math.py (#20728)

The `window_length` parameter in `tf.keras.ops.istft` requires `tf.int32` dtype, but this isn't documented. This can cause an unexpected `ValueError` when using `tf.int64` or `tf.int16`.

Here is an example case:
```
import tensorflow as tf

input_dict = {
    'stfts': tf.constant([[-0.87817144+1.14583987j, -0.32066484+0.25565411j]], dtype=tf.complex128),
    'frame_length': tf.constant(256, dtype=tf.int16),
    'frame_step': tf.constant(5120,dtype=tf.int64)
}
result = tf.signal.inverse_stft(**input_dict)
print(result)
```
The code throws the following error:
```
ValueError: window_length: Tensor conversion requested dtype int32 for Tensor with dtype int64
```

* Add rand_augment processing layer (#20716)

* Add rand_augment init

* Update rand_augment init

* Add rand_augment

* Add NotImplementedError

* Add some test cases

* Fix failed test case

* Update rand_augment

* Update rand_augment test

* Fix random_rotation bug

* Add build method to suppress warning.

* Add implementation for transform_bboxes

* Fixing batch_dim_name attribute (#20674)

* fixing wrong trainer assumption that batch dim is always the first one in the mesh

* need functools partial

* lint

* fix test failure when distribution=None

* lint2

* fix for test failure

* added data sharding for 3D+ meshes

* lint3

* added @property for batch_dim_name + refactoring

* fix typo

* Add support for `dtype` / `DTypePolicy` to `JaxLayer` and `FlaxLayer`. (#20732)

The `dtype` / `DTypePolicy` is applied to all float variables.

* Allow dynamic shape in `STFTSpectrogram` layer. (#20736)

by simply using `ops.shape(x)` instead of `x.shape`.

* Remove duplicate export tests in `model_test`. (#20735)

The same tes…