
pd: support dpa2 #4418

Open
wants to merge 81 commits

Conversation

HydrogenSulfate
Contributor

@HydrogenSulfate HydrogenSulfate commented Nov 25, 2024

Support DPA-2 in the Paddle backend. This PR will be updated after #4414 is merged.

Training curve:

(figure: training_curves_comparison_dpa2)

Accuracy test (left: Paddle, right: PyTorch):

(figure: accuracy comparison)

Related optimizations of the Paddle framework:

Summary by CodeRabbit

  • New Features

    • Introduced new classes for molecular descriptors: DescrptDPA2, DescrptBlockRepformers, DescrptSeTTebd, and DescrptBlockSeTTebd.
    • Added new functions for tensor operations and descriptor management, enhancing the capabilities of the module.
    • Updated JSON configurations for multitask models to refine selection criteria and data paths.
  • Bug Fixes

    • Improved error handling and parameter validation across various descriptor classes.
  • Documentation

    • Enhanced test coverage for new descriptor functionalities and configurations.
  • Tests

    • Added new test classes to validate the functionality of DescrptDPA2 and multitask training scenarios.
    • Expanded test capabilities for descriptor classes based on installed dependencies.
    • Updated existing tests to support new configurations and functionalities.

HydrogenSulfate and others added 30 commits November 2, 2024 11:14
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (6)
deepmd/pd/model/descriptor/se_t_tebd.py (3)

823-827: Consider using einsum for better readability

The commented-out einsum operations have been replaced with manual tensor operations. While this might be for performance reasons, consider documenting why einsum was avoided or if it's still a valid alternative.

Original einsum operations:

# env_ij = paddle.einsum("ijm,ikm->ijk", rr_i, rr_j)
# res_ij = paddle.einsum("ijk,ijkm->im", env_ij, gg)

Also applies to: 909-913
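For reference, both einsums reduce to plain tensor operations: the first is a batched matmul, the second a broadcast multiply followed by a sum. A NumPy sketch of the identities (the shapes here are illustrative assumptions, not the descriptor's actual sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

# "ijm,ikm->ijk": batched matmul of rr_i with rr_j transposed on its last two axes
rr_i = rng.standard_normal((2, 5, 3))
rr_j = rng.standard_normal((2, 7, 3))
env_einsum = np.einsum("ijm,ikm->ijk", rr_i, rr_j)
env_matmul = rr_i @ rr_j.transpose(0, 2, 1)
assert np.allclose(env_einsum, env_matmul)

# "ijk,ijkm->im": broadcast multiply, then reduce over the j and k axes
gg = rng.standard_normal((2, 5, 7, 4))
res_einsum = np.einsum("ijk,ijkm->im", env_einsum, gg)
res_manual = (env_einsum[..., None] * gg).sum(axis=(1, 2))
assert np.allclose(res_einsum, res_manual)
```

Documenting these equivalences next to the manual operations would answer the question raised above without restoring einsum itself.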


386-386: Remove unused variable assignments

Several variables are assigned but never used in the code:

  • env_mat in the deserialize method
  • nall in multiple locations

Apply these changes:

- env_mat = data.pop("env_mat")
+ data.pop("env_mat")  # Required for compatibility

- nall = extended_coord.reshape([nframes, -1]).shape[1] // 3
+ _ = extended_coord.reshape([nframes, -1]).shape[1] // 3  # Kept for validation

- nall = extended_coord.reshape([nb, -1, 3]).shape[1]
+ _ = extended_coord.reshape([nb, -1, 3]).shape[1]  # Kept for validation

Also applies to: 454-454, 787-787

🧰 Tools
🪛 GitHub Check: CodeQL

[notice] 386-386: Unused local variable
Variable env_mat is not used.


831-848: Consider optimizing tensor operations

The tensor operations in the tebd_input_mode == "concat" branch involve multiple reshapes and concatenations. Consider:

  1. Combining multiple reshapes into a single operation where possible
  2. Broadcasting with unsqueeze + expand instead of materializing repeated copies

Example optimization:

# Reshape once, then broadcast along new axes for the i- and j-neighbor views
nlist_tebd = atype_tebd_nlist.reshape([nfnl, self.nnei, self.tebd_dim])
nlist_tebd_i = nlist_tebd.unsqueeze(2).expand([nfnl, self.nnei, self.nnei, -1])
nlist_tebd_j = nlist_tebd.unsqueeze(1).expand([nfnl, self.nnei, self.nnei, -1])
deepmd/pd/model/descriptor/repformer_layer.py (3)

41-82: Fix docstring parameter description

The docstring for _scale parameter in get_residual function describes it as "The initial scale of the residual tensor" but doesn't fully explain its usage in different initialization modes.

Add mode-specific details to the docstring:

    _scale
-        The initial scale of the residual tensor. See `_mode` for details.
+        The initial scale of the residual tensor:
+        - In "norm" mode: Used as standard deviation for normal distribution
+        - In "const" mode: Used as constant value for all elements

580-616: Improve parameter organization and input validation

The class initialization has too many parameters and uses assertions for validation.

Consider these improvements:

  1. Use a configuration class to group parameters:
@dataclass
class RepformerConfig:
    rcut: float
    rcut_smth: float
    sel: int
    ntypes: int
    g1_dim: int = 128
    g2_dim: int = 16
    axis_neuron: int = 4
    # ... other parameters with defaults
  2. Replace assertions with proper validation:
-        assert len(sel) == 1
+        if len(sel) != 1:
+            raise ValueError("sel must be a list with exactly one element")

1413-1484: Enhance error handling in deserialization

The deserialize method could benefit from more robust error handling for malformed data.

Consider adding validation for required fields and type checking:

    @classmethod
    def deserialize(cls, data: dict) -> "RepformerLayer":
        data = data.copy()
+        # Validate required fields
+        required_fields = ["@version", "@class", "linear1"]
+        missing_fields = [f for f in required_fields if f not in data]
+        if missing_fields:
+            raise ValueError(f"Missing required fields: {missing_fields}")
+
        check_version_compatibility(data.pop("@version"), 2, 1)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 64fc62e and ed80c6d.

📒 Files selected for processing (3)
  • deepmd/pd/model/descriptor/__init__.py (1 hunks)
  • deepmd/pd/model/descriptor/repformer_layer.py (1 hunks)
  • deepmd/pd/model/descriptor/se_t_tebd.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • deepmd/pd/model/descriptor/__init__.py
🧰 Additional context used
🪛 Ruff (0.8.0)
deepmd/pd/model/descriptor/se_t_tebd.py

129-129: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


136-136: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


387-387: Local variable env_mat is assigned to but never used

Remove assignment to unused variable env_mat

(F841)


454-454: Local variable nall is assigned to but never used

Remove assignment to unused variable nall

(F841)


521-521: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


528-528: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


709-713: Use ternary operator sampled = merged() if callable(merged) else merged instead of if-else-block

(SIM108)


733-733: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


787-787: Local variable nall is assigned to but never used

Remove assignment to unused variable nall

(F841)

deepmd/pd/model/descriptor/repformer_layer.py

967-967: Local variable ng2 is assigned to but never used

Remove assignment to unused variable ng2

(F841)


1160-1163: Use ternary operator gg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None instead of if-else-block

Replace if-else-block with gg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None

(SIM108)


1281-1281: Local variable nitem is assigned to but never used

Remove assignment to unused variable nitem

(F841)

🪛 GitHub Check: CodeQL
deepmd/pd/model/descriptor/se_t_tebd.py

[notice] 386-386: Unused local variable
Variable env_mat is not used.


[notice] 451-451: Unused local variable
Variable nall is not used.


[notice] 770-770: Unused local variable
Variable nall is not used.

deepmd/pd/model/descriptor/repformer_layer.py

[notice] 967-967: Unused local variable
Variable ng2 is not used.


[notice] 1277-1277: Unused local variable
Variable nitem is not used.

🔇 Additional comments (2)
deepmd/pd/model/descriptor/se_t_tebd.py (1)

129-129: ⚠️ Potential issue

Fix mutable default arguments in constructors

Using mutable objects (lists) as default arguments can lead to unexpected behavior because the same list object is shared across all function calls.

Apply these changes to all affected constructors:

# In DescrptSeTTebd.__init__
-   neuron: list = [2, 4, 8],
+   neuron: Optional[list] = None,
-   exclude_types: list[tuple[int, int]] = [],
+   exclude_types: Optional[list[tuple[int, int]]] = None,

# In DescrptBlockSeTTebd.__init__
-   neuron: list = [25, 50, 100],
+   neuron: Optional[list] = None,
-   exclude_types: list[tuple[int, int]] = [],
+   exclude_types: Optional[list[tuple[int, int]]] = None,

# In reinit_exclude
-   exclude_types: list[tuple[int, int]] = [],
+   exclude_types: Optional[list[tuple[int, int]]] = None,

Add initialization in the constructors:

def __init__(self, ...):
    if neuron is None:
        neuron = [2, 4, 8]  # or [25, 50, 100] for DescrptBlockSeTTebd
    if exclude_types is None:
        exclude_types = []

Also applies to: 136-136, 521-521, 528-528, 733-733
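The pitfall the review flags can be demonstrated in isolation (hypothetical functions, purely illustrative):

```python
def append_bad(x, acc=[]):  # the default list is created once, at function definition
    acc.append(x)
    return acc

def append_good(x, acc=None):  # sentinel pattern the review recommends
    if acc is None:
        acc = []
    acc.append(x)
    return acc

append_bad(1)
leaked = append_bad(2)  # state leaks across calls: leaked == [1, 2]
fresh = append_good(2)  # each call gets its own list: fresh == [2]
```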

🧰 Tools
🪛 Ruff (0.8.0)

129-129: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

deepmd/pd/model/descriptor/repformer_layer.py (1)

155-243: 🛠️ Refactor suggestion

Add input validation and remove unused variable

The forward method could benefit from explicit input validation and has an unused variable.

Consider these improvements:

  1. Remove unused variable:
-        nb, nloc, nnei, _ = g2.shape
-        ng2 = g2.shape[-1]
+        nb, nloc, nnei, _ = g2.shape
  2. Add input shape validation:
+        # Validate input shapes
+        if g2.ndim != 4 or h2.ndim != 4:
+            raise ValueError("g2 and h2 must be 4-dimensional tensors")
+        if h2.shape[-1] != 3:
+            raise ValueError("h2 last dimension must be 3")
+        if g2.shape[:-1] != h2.shape[:-1]:
+            raise ValueError("g2 and h2 must have compatible shapes")

Likely invalid or redundant comment.

@HydrogenSulfate HydrogenSulfate changed the base branch from master to devel December 2, 2024 03:09

codecov bot commented Dec 2, 2024

Codecov Report

Attention: Patch coverage is 88.28762% with 158 lines in your changes missing coverage. Please review.

Project coverage is 84.58%. Comparing base (cfe17a3) to head (abeae6d).

Files with missing lines Patch % Lines
deepmd/pd/model/descriptor/se_t_tebd.py 82.29% 57 Missing ⚠️
deepmd/pd/model/descriptor/repformer_layer.py 91.80% 44 Missing ⚠️
deepmd/pd/model/descriptor/dpa2.py 88.62% 33 Missing ⚠️
deepmd/pd/model/descriptor/repformers.py 92.97% 13 Missing ⚠️
deepmd/pd/utils/spin.py 0.00% 11 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4418      +/-   ##
==========================================
+ Coverage   84.41%   84.58%   +0.16%     
==========================================
  Files         670      675       +5     
  Lines       62150    63498    +1348     
  Branches     3488     3487       -1     
==========================================
+ Hits        52465    53709    +1244     
- Misses       8559     8663     +104     
  Partials     1126     1126              


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
source/tests/pd/model/test_descriptor_dpa2.py (2)

24-118: Add docstring and improve test data organization.

While the test setup is thorough, consider these improvements for better maintainability:

  1. Add a docstring explaining the test setup and the meaning of the test data
  2. Consider moving the hardcoded test values to separate data files
  3. Add comments explaining the significance of the reference values

Example docstring:

 class TestDPA2(unittest.TestCase):
     def setUp(self):
+        """Initialize test data for DPA2 descriptor testing.
+        
+        Sets up:
+        - Simulation cell parameters
+        - Atomic coordinates
+        - Atom types
+        - Reference descriptor values
+        - Paths to model parameters and embeddings
+        """

119-186: Enhance test coverage and improve test organization.

The test covers the happy path but could be improved:

  1. Split into separate test methods for different configurations
  2. Add error cases and edge cases
  3. Add input validation tests

Consider refactoring like this:

def test_descriptor_basic_config(self):
    """Test DPA2 descriptor with basic configuration."""
    # Current test logic with concat_output_tebd=False

def test_descriptor_with_concat_output(self):
    """Test DPA2 descriptor with concatenated output."""
    # Current test logic with concat_output_tebd=True

def test_descriptor_invalid_inputs(self):
    """Test DPA2 descriptor with invalid inputs."""
    # Add tests for invalid coordinates, types, etc.

def test_descriptor_edge_cases(self):
    """Test DPA2 descriptor with edge cases."""
    # Add tests for empty system, single atom, etc.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 617a258 and 6e5ebb3.

📒 Files selected for processing (1)
  • source/tests/pd/model/test_descriptor_dpa2.py (1 hunks)
🔇 Additional comments (1)
source/tests/pd/model/test_descriptor_dpa2.py (1)

1-23: LGTM! Well-organized imports following best practices.

The imports are properly organized (standard library → third-party → local) and use modern Python practices like Path for file operations.

@HydrogenSulfate HydrogenSulfate mentioned this pull request Dec 4, 2024
9 tasks
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6e5ebb3 and 3cfb90f.

📒 Files selected for processing (2)
  • source/tests/pd/test_multitask.py (2 hunks)
  • source/tests/pd/test_training.py (2 hunks)
🔇 Additional comments (3)
source/tests/pd/test_training.py (2)

27-27: LGTM!

The import statement follows the established pattern and is properly placed with other model imports.


199-213: LGTM! Verify test data availability.

The test class implementation follows the established pattern and maintains consistency with other test classes. The implementation looks correct and well-structured.

Let's verify the test data availability:

#!/bin/bash
# Description: Verify that the required test data exists
# Test data path: water/data/data_0

# Check if the test data directory exists
if [ -d "source/tests/pd/water/data/data_0" ]; then
    echo "Test data directory exists"
    # List contents to verify structure
    ls -la source/tests/pd/water/data/data_0
else
    echo "Warning: Test data directory not found"
fi

# Check if the configuration file exists
if [ -f "source/tests/pd/water/se_atten.json" ]; then
    echo "Configuration file exists"
else
    echo "Warning: Configuration file not found"
fi
source/tests/pd/test_multitask.py (1)

33-33: LGTM!

The import statement follows the established pattern and is properly placed with other model imports.

Code scanning / CodeQL annotations on the diff:

  • Warning — Variable defined multiple times: the assignment to env_mat is unnecessary, as env_mat is redefined before the value is used (flagged twice, near the repinit_variable.pop("embeddings_strip") and repinit_three_body_variable.pop("embeddings_strip") deserialization blocks).
  • Note — Unused local variable: env_mat is not used (near the end of deserialize, before the RepformerLayer layers are rebuilt).
  • Note — Unused local variable: ng2 is not used (nb, nloc, nnei, _ = g2.shape; ng2 = g2.shape[-1]).
  • Note — Unused local variable: nitem is not used (in the update-list accumulation loop).
  • Note (test) — Unused local variable: atol is not used.
  • Note (test) — Unused local variable: model is not used.
  • Note (test) — Unused local variable: generator is not used (generator = paddle.seed(GLOBAL_SEED)).
  • Note (test) — Unused local variable: idx_perm is not used.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
deepmd/pd/model/task/fitting.py (1)

214-215: Improve tensor initialization and device placement.

The change from the place argument to .to(device) is good for consistency with PyTorch-style device placement. One further nit: paddle.zeros documents its shape argument as a list, tuple, or Tensor, so passing the dimension wrapped in a list is the safer form:

-                paddle.zeros(self.dim_case_embd, dtype=self.prec).to(device=device),
+                paddle.zeros([self.dim_case_embd], dtype=self.prec).to(device=device),
deepmd/pd/utils/multi_task.py (2)

99-101: LGTM! Consider adding parameter type hints.

The addition of the return type annotation -> None is a good practice as it clearly indicates the function modifies the input in-place. Consider also adding type hints for the parameters to further improve code clarity.

     def replace_one_item(
-        params_dict, key_type, key_in_dict, suffix="", index=None
+        params_dict: dict,
+        key_type: str,
+        key_in_dict: str,
+        suffix: str = "",
+        index: Optional[int] = None
     ) -> None:

99-101: Document sharing levels in function docstring.

The function handles sophisticated parameter sharing with different sharing levels (e.g., "my_des1:1"), but this feature isn't well-documented. Consider adding a docstring explaining:

  • The sharing level syntax (e.g., ":1")
  • How sharing levels affect parameter sharing
  • Special handling of hybrid descriptors
def replace_one_item(
    params_dict, key_type, key_in_dict, suffix="", index=None
) -> None:
    """Replace parameters with shared values based on sharing configuration.
    
    Args:
        params_dict: Dictionary to modify with shared parameters
        key_type: Type of parameter (e.g., "descriptor", "fitting_net")
        key_in_dict: Key in shared_dict, optionally with sharing level (e.g., "my_des1:1")
        suffix: Optional suffix for hybrid models
        index: Optional index for hybrid model list
        
    Sharing levels:
        - No suffix (e.g., "my_des1"): Share all parameters (level 0)
        - With suffix (e.g., "my_des1:1"): Share subset of parameters (level 1+)
    """
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3cfb90f and 575726a.

📒 Files selected for processing (5)
  • deepmd/pd/model/task/fitting.py (1 hunks)
  • deepmd/pd/utils/multi_task.py (1 hunks)
  • source/tests/pd/model/water/multitask.json (1 hunks)
  • source/tests/pd/model/water/multitask_sharefit.json (2 hunks)
  • source/tests/pd/test_multitask.py (4 hunks)
🔇 Additional comments (5)
deepmd/pd/model/task/fitting.py (1)

214-215: Verify case embedding usage patterns across the codebase.

Let's verify how case embeddings are used across the codebase to ensure this change doesn't impact other components.

✅ Verification successful

Based on the search results, I can now generate the final response:

Change from zeros to eye initialization appears safe

The change from paddle.zeros() to commented-out paddle.eye()[0] for case embeddings initialization is safe because:

  1. The actual case embedding used during inference is overridden in the get_case_embd method (line 273) with:
self.case_embd = paddle.eye(self.dim_case_embd, dtype=self.prec).to(device)[case_idx]
  2. The initial value is only used as a placeholder during model initialization and doesn't affect the model's behavior, since it gets replaced with the proper one-hot encoded case embedding during runtime.

  3. The device placement is consistent in both versions since both use the same .to(device=device) pattern.
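The eye-indexing expression above is simply one-hot selection; a NumPy illustration (dimensions chosen arbitrarily):

```python
import numpy as np

dim_case_embd = 4
case_idx = 2
# Row case_idx of the identity matrix is the one-hot embedding for that case
case_embd = np.eye(dim_case_embd)[case_idx]
# case_embd is [0., 0., 1., 0.]
```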

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for case embedding usage patterns
echo "Searching for case embedding usage..."
rg -l "case_embd" --type py

# Search for potential device placement patterns
echo "Checking device placement patterns..."
rg "\.to\(device" --type py -A 2

Length of output: 39817

source/tests/pd/model/water/multitask_sharefit.json (1)

Line range hint 94-118: LGTM! Path updates are consistent with Paddle backend support.

The changes correctly update the data paths from PyTorch (pt) to Paddle (pd) backend.

source/tests/pd/test_multitask.py (2)

324-324: False positive: data_file variable is used.

The static analysis tool incorrectly flags data_file as unused. This variable is used in the configuration setup for both training and validation data systems.


Line range hint 239-394: LGTM! Test classes are well-structured and complete.

The new test classes (TestMultiTaskSeASharefit, TestMultiTaskDPA2, TestMultiTaskDPA2Tebd) are properly implemented with:

  • Correct inheritance from unittest.TestCase and MultiTaskTrainTest
  • Complete setUp methods with proper configuration initialization
  • Proper tearDown methods
  • Consistent test patterns with existing test classes
deepmd/pd/utils/multi_task.py (1)

99-101: Verify DPA-2 descriptor registration.

Since this PR adds DPA-2 support, we should verify that the DPA-2 descriptor is properly registered and can be handled by the parameter sharing mechanism.

✅ Verification successful

Let me gather more information to verify the DPA-2 descriptor registration and implementation.


Based on the search results, I can now provide a final response:

DPA-2 descriptor is properly integrated and registered

The codebase shows comprehensive DPA-2 descriptor implementation across multiple backends:

  • Paddle implementation in deepmd/pd/model/descriptor/dpa2.py
  • PyTorch implementation in deepmd/pt/model/descriptor/dpa2.py
  • JAX implementation in deepmd/jax/descriptor/dpa2.py
  • Base implementation in deepmd/dpmodel/descriptor/dpa2.py

The descriptor is also properly used in configuration files (*.json, *.yaml) and has corresponding test cases. The parameter sharing mechanism in multi_task.py will work correctly with DPA-2 descriptors.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify DPA-2 descriptor registration and usage in parameter sharing

# Check if DPA-2 descriptor is registered
rg -A 5 "register_descriptor.*dpa2" 

# Check for DPA-2 usage in configuration files
rg -g "*.json" -g "*.yaml" "type.*dpa2"

# Verify descriptor class implementation
ast-grep --pattern 'class DescrptDPA2(BaseDescriptor):
  $$$
'

Length of output: 807


Script:

#!/bin/bash
# Check for DPA-2 descriptor registration in registry files
rg -A 5 "register.*descriptor.*dpa" 

# Look for DPA-2 descriptor class definition
rg -A 10 "class.*Descrpt.*DPA2?"

# Check imports and usage of DPA-2 descriptor
rg "from.*descrpt.*dpa|import.*descrpt.*dpa"

Length of output: 17188

@HydrogenSulfate
Contributor Author

@njzjz This PR is ready for review

@njzjz njzjz requested review from njzjz and iProzd December 19, 2024 08:21
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
deepmd/pd/model/descriptor/dpa2.py (3)

504-504: Remove the unused loop index
The loop index is not used anywhere in the loop body, so iterate over the list directly instead of enumerating:

- for ii, descrpt in enumerate(descrpt_list):
+ for descrpt in descrpt_list:
🧰 Tools
🪛 Ruff (0.8.2)

504-504: Loop control variable ii not used within loop body

Rename unused ii to _ii

(B007)


689-689: Remove the unused assignment to 'env_mat'
Assigning to 'env_mat' serves no purpose if the variable is never referenced afterwards, so it can be removed to reduce clutter.

- env_mat = repformers_variable.pop("env_mat")
+ repformers_variable.pop("env_mat")
🧰 Tools
🪛 Ruff (0.8.2)

689-689: Local variable env_mat is assigned to but never used

Remove assignment to unused variable env_mat

(F841)


901-902: Implementation incomplete: 'enable_compression'
The method raises NotImplementedError, indicating it’s unfinished. If needed, you can delegate this to a future enhancement or provide a stub to guide the next steps.

Would you like help creating a placeholder implementation or opening a GitHub issue to track this?
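A placeholder of the kind suggested might look like this (hypothetical sketch; the class name and message are illustrative, not the project's code):

```python
class DescrptDPA2Sketch:
    """Hypothetical stand-in for DescrptDPA2; only the stub below is illustrated."""

    def enable_compression(self, *args, **kwargs) -> None:
        # Fail loudly with a targeted message until compression is implemented
        raise NotImplementedError(
            "enable_compression is not yet supported for DPA-2 in the Paddle backend"
        )
```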

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 575726a and abeae6d.

📒 Files selected for processing (1)
  • deepmd/pd/model/descriptor/dpa2.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
deepmd/pd/model/descriptor/dpa2.py

90-90: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


504-504: Loop control variable ii not used within loop body

Rename unused ii to _ii

(B007)


689-689: Local variable env_mat is assigned to but never used

Remove assignment to unused variable env_mat

(F841)

🔇 Additional comments (2)
deepmd/pd/model/descriptor/dpa2.py (2)

289-291: Rename 'g1_shape_tranform' to 'g1_shape_transform'
This is a typographical error, which was also identified in a past review. Renaming it improves clarity.

Example fix:

- self.g1_shape_tranform = Identity()
+ self.g1_shape_transform = Identity()

90-90: ⚠️ Potential issue

Avoid mutable default arguments for 'exclude_types'
Every call to this constructor shares the same list object as the default argument, which can lead to surprising behavior.

Use a sentinel value like None for the default, and assign an empty list within the constructor, for example:

- exclude_types: list[tuple[int, int]] = [],
+ exclude_types: Optional[list[tuple[int, int]]] = None,
...
    if exclude_types is None:
        exclude_types = []

Likely invalid or redundant comment.

🧰 Tools
🪛 Ruff (0.8.2)

90-90: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)

@njzjz njzjz enabled auto-merge December 23, 2024 22:03
@njzjz
Member

njzjz commented Dec 23, 2024

@coderabbitai resolve

1 similar comment
@njzjz
Member

njzjz commented Dec 25, 2024

@coderabbitai resolve

@njzjz njzjz added this pull request to the merge queue Dec 25, 2024
@HydrogenSulfate
Contributor Author

@njzjz I noticed that there might be some subtle typos in the deepmd project. In Paddle, we have introduced a tool called typos into the pre-commit process, which can automatically detect potential typos. You could try it and add it to deepmd if interested: PaddlePaddle/Paddle#69377.
