
Update output_padding argument in convolution transpose layer #20737

Open · wants to merge 2 commits into master

Conversation

sonali-kumari1 (Contributor)

Update output_padding argument in convolution transpose layer. Fixes #19870 and #20408

codecov-commenter commented Jan 8, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 29.93%. Comparing base (25d6d80) to head (9ba0806).

❗ There is a different number of reports uploaded between BASE (25d6d80) and HEAD (9ba0806).

HEAD has 8 fewer uploads than BASE:

Flag               BASE (25d6d80)   HEAD (9ba0806)
keras              5                1
keras-numpy        1                0
keras-torch        1                0
keras-tensorflow   1                0
keras-jax          1                0
Additional details and impacted files
@@             Coverage Diff             @@
##           master   #20737       +/-   ##
===========================================
- Coverage   81.99%   29.93%   -52.06%     
===========================================
  Files         555      555               
  Lines       51805    51805               
  Branches     8012     8012               
===========================================
- Hits        42475    15506    -26969     
- Misses       7378    35734    +28356     
+ Partials     1952      565     -1387     
Flag Coverage Δ
keras 29.93% <ø> (-51.89%) ⬇️
keras-jax ?
keras-numpy ?
keras-openvino 29.93% <ø> (ø)
keras-tensorflow ?
keras-torch ?

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@fchollet (Collaborator) left a comment:

Thanks for the PR! Please add a unit test for this change.

@@ -116,6 +124,7 @@ def __init__(
kernel_size=kernel_size,
strides=strides,
padding=padding,
output_padding=None,
Collaborator:

Should be propagated, no?

Contributor Author:

Hi @fchollet, when I use output_padding=0 in Keras 3.8.0, I get ValueError: The output_padding argument must be a tuple of 1 integers. Received output_padding=0, including values {0} that do not satisfy value > 0. Setting output_padding=None, however, works without error. So, should I modify the error message to raise an AttributeError, or remove this argument?

Collaborator:

Hi @sonali-kumari1,

Which backend had this error? What was the trace for the exception?

Contributor Author:

Hi @hertschuh,

I have checked with the tensorflow, torch and jax backends, and I am getting the following traceback with all three:

<ipython-input-6-1292b74f2ae3> in Model_HqTi1yjRFc7Dz1RLSJOvA9XX16R5Wp01(x)
     15     x = keras.activations.relu(x)
     16     _zeropadding_x = keras.layers.ZeroPadding1D(padding=(0, 0))(x)
---> 17     x = keras.layers.Conv1DTranspose(filters=128, kernel_size=1, strides=1, padding="valid", output_padding=0, data_format="channels_last", dilation_rate=1, use_bias=True, name="conv4_mutated")(x)
     18 
     19     x = x

/usr/local/lib/python3.11/dist-packages/keras/src/layers/convolutional/conv1d_transpose.py in __init__(self, filters, kernel_size, strides, padding, data_format, dilation_rate, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, **kwargs)
    111         **kwargs,
    112     ):
--> 113         super().__init__(
    114             rank=1,
    115             filters=filters,

/usr/local/lib/python3.11/dist-packages/keras/src/layers/convolutional/base_conv_transpose.py in __init__(self, rank, filters, kernel_size, strides, padding, output_padding, data_format, dilation_rate, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, trainable, name, **kwargs)
    109             self.output_padding = None
    110         else:
--> 111             self.output_padding = standardize_tuple(
    112                 output_padding,
    113                 rank,

/usr/local/lib/python3.11/dist-packages/keras/src/utils/argument_validation.py in standardize_tuple(value, n, name, allow_zero)
     49             f" that do not satisfy `value {req_msg}`"
     50         )
---> 51         raise ValueError(error_msg)
     52 
     53     return value_tuple

ValueError: The `output_padding` argument must be a tuple of 1 integers. Received output_padding=0, including values {0} that do not satisfy `value > 0`
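
For reference, a minimal sketch reproducing the behavior described above (assuming Keras 3.8.0 as in the traceback; the layer hyperparameters are arbitrary):

import keras

# Passing output_padding=None works: the standardize_tuple() check is skipped.
keras.layers.Conv1DTranspose(
    filters=128, kernel_size=1, strides=1, padding="valid", output_padding=None
)

# Passing output_padding=0 fails at construction time: standardize_tuple() is
# called without allow_zero=True, so the zero value raises the ValueError above.
try:
    keras.layers.Conv1DTranspose(
        filters=128, kernel_size=1, strides=1, padding="valid", output_padding=0
    )
except ValueError as e:
    print(e)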

Collaborator:

Ok. Here:
https://github.com/keras-team/keras/blob/master/keras/src/layers/convolutional/base_conv_transpose.py#L115
You should add allow_zero=True as an extra argument to standardize_tuple(); the missing flag is what causes the error with a zero output_padding.
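
For concreteness, a sketch of that change in BaseConvTranspose.__init__, paraphrasing the code visible in the traceback above (the "output_padding" name argument is assumed from the error message; exact surrounding lines may differ):

# keras/src/layers/convolutional/base_conv_transpose.py (sketch)
if output_padding is None:
    self.output_padding = None
else:
    self.output_padding = standardize_tuple(
        output_padding,
        rank,
        "output_padding",
        allow_zero=True,  # accept 0, which is a valid amount of output padding
    )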

Contributor:

Yep, the call chain looks something like this:
Conv1DTranspose (as initialized here) but applies to 2D and 3D as well → BaseConvTranspose processes the output_padding parameter using standardize_tuple() → Requires values to be greater than 0 (by default without the allow_zero = True parameter, but zero is a perfectly valid use case, and is even passed as the value to the output_padding parameter here, hence the error.

Contributor Author:

Hi @hertschuh,

The output_padding argument is defined in the base class (BaseConvTranspose), but it is not mentioned in the Conv1DTranspose documentation. Setting output_padding=None in Conv1DTranspose works fine, possibly due to the property inherited from the base class (BaseConvTranspose).

However, setting output_padding=0 raises a ValueError. So, as you suggested, I will add allow_zero=True as an extra argument to standardize_tuple().

Can we update the Conv1DTranspose documentation to include the output_padding argument, as users typically follow the documentation?
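
For illustration, one possible shape for that docstring entry, given as a sketch only (the wording loosely follows how output_padding was documented in earlier Keras versions; the final text would be settled in the PR):

output_padding: An integer or tuple/list of 1 integer specifying the
    amount of padding along the time dimension of the output tensor.
    The amount of output padding must be lower than the stride.
    If set to None (default), the output shape is inferred.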

Collaborator:

> The output_padding argument is defined in the base class (BaseConvTranspose), but it is not mentioned in the Conv1DTranspose documentation. Setting output_padding=None in Conv1DTranspose works fine, possibly due to the property inherited from the base class (BaseConvTranspose).

It is not inherited. It works fine because there is no output padding if you set output_padding=None.

> However, setting output_padding=0 raises a ValueError. So, as you suggested, I will add allow_zero=True as an extra argument to standardize_tuple().

Yes, and then you can propagate output_padding in super().__init__(...).

> Can we update the Conv1DTranspose documentation to include the output_padding argument, as users typically follow the documentation?

Yes, please update the documentation as you did here.

Thanks for updating the PR!
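
Putting the two suggestions together, a sketch of how Conv1DTranspose could accept and propagate the argument (names and the rank=1 call follow the traceback and diff above; this is an illustration, not the merged change, and most parameters are omitted for brevity):

from keras.src.layers.convolutional.base_conv_transpose import BaseConvTranspose


class Conv1DTranspose(BaseConvTranspose):
    def __init__(
        self,
        filters,
        kernel_size,
        strides=1,
        padding="valid",
        output_padding=None,
        **kwargs,
    ):
        super().__init__(
            rank=1,
            filters=filters,
            kernel_size=kernel_size,
            strides=strides,
            padding=padding,
            output_padding=output_padding,  # propagate instead of hard-coding None
            **kwargs,
        )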

Projects
Status: Assigned Reviewer
6 participants