Cache removed and filelist unit test #32
Conversation
Reviewer's Guide by Sourcery

This PR removes the caching functionality and adds a unit test for the file list feature. The main changes remove cache-related code from the main module and add a new test case that verifies the file list scanning functionality.

Class diagram for ArgParser changes

```mermaid
classDiagram
    class ArgParser {
        +--file: str
        +files: str[]
    }
    note for ArgParser "Updated argument definitions for file and files"
```

Class diagram for main function changes

```mermaid
classDiagram
    class MainFunction {
        -cache: Cache
        +_result: dict
        +_importer: dict
        +results: list
    }
    note for MainFunction "Removed cache-related code and updated processing logic"
```
Hey @SeanTConrad - I've reviewed your changes - here's some feedback:
Overall Comments:
- The cache functionality appears to be commented out without explanation. Please either remove the cache code entirely if it's no longer needed, or explain why it's being disabled in the commit message.
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟡 Testing: 2 issues found
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
```python
def test_filelist_scan():
    curr_dir = os.path.dirname(os.path.abspath(__file__))
    project_root = os.path.dirname(curr_dir)
    stats_input_file = os.path.join(
        project_root, 'testfiles', 'samplefilelist.json'
    )
    stats_output_file = os.path.join(curr_dir, "test.stats.json")
    custom_args = [
        f"--file={stats_input_file}",
        f"--output={stats_output_file}"
```
suggestion (testing): Test should verify file list processing functionality more thoroughly
While the test checks basic file metrics, it should also verify: 1) handling of invalid file paths in the JSON, 2) empty file lists, 3) proper error handling for missing or malformed JSON files, and 4) relative/absolute path handling.
Suggested implementation:
```python
def create_temp_json_file(content, tmp_path):
    temp_file = tmp_path / "test_filelist.json"
    with open(temp_file, 'w') as f:
        json.dump(content, f)
    return str(temp_file)


def run_modernmetric_with_files(input_file, tmp_path):
    output_file = str(tmp_path / "output.stats.json")
    custom_args = [
        f"--file={input_file}",
        f"--output={output_file}"
    ]
    return modernmetric(custom_args=custom_args,
                        license_identifier='unit_test'), output_file


def test_filelist_scan_valid_files(tmp_path):
    """Test processing of valid file list"""
    curr_dir = os.path.dirname(os.path.abspath(__file__))
    project_root = os.path.dirname(curr_dir)
    stats_input_file = os.path.join(
        project_root, 'testfiles', 'samplefilelist.json'
    )
    _, output_file = run_modernmetric_with_files(stats_input_file, tmp_path)
    with open(output_file, 'r') as f:
        stats = json.load(f)
    files = stats['files']
    assert files is not None
    assert files["../testfiles/test.c"]["loc"] == 25
    assert files["../testfiles/test.c"]["cyclomatic_complexity"] == 0
    assert stats["overall"]["loc"] == 178


def test_filelist_scan_invalid_paths(tmp_path):
    """Test handling of invalid file paths in JSON"""
    invalid_files = {
        "files": [
            "nonexistent_file.c",
            "/invalid/path/file.cpp",
            "../../../outside/scope.py"
        ]
    }
    input_file = create_temp_json_file(invalid_files, tmp_path)
    _, output_file = run_modernmetric_with_files(input_file, tmp_path)
    with open(output_file, 'r') as f:
        stats = json.load(f)
    assert stats['files'] == {}
    assert stats["overall"]["loc"] == 0


def test_filelist_scan_empty_list(tmp_path):
    """Test handling of empty file list"""
    empty_files = {"files": []}
    input_file = create_temp_json_file(empty_files, tmp_path)
    _, output_file = run_modernmetric_with_files(input_file, tmp_path)
    with open(output_file, 'r') as f:
        stats = json.load(f)
    assert stats['files'] == {}
    assert stats["overall"]["loc"] == 0


def test_filelist_scan_malformed_json(tmp_path):
    """Test handling of malformed JSON file"""
    malformed_file = tmp_path / "malformed.json"
    with open(malformed_file, 'w') as f:
        f.write("{invalid json content")
    with pytest.raises(json.JSONDecodeError):
        run_modernmetric_with_files(str(malformed_file), tmp_path)


def test_filelist_scan_missing_file():
    """Test handling of missing input file"""
    with pytest.raises(FileNotFoundError):
        run_modernmetric_with_files("/nonexistent/input.json", Path("/tmp"))


def test_filelist_scan_relative_paths(tmp_path):
    """Test handling of relative and absolute paths"""
    curr_dir = os.path.dirname(os.path.abspath(__file__))
    files = {
        "files": [
            "./relative/path/file.c",
            "../parent/path/file.cpp",
            str(Path(curr_dir) / "existing_file.py")
        ]
    }
    input_file = create_temp_json_file(files, tmp_path)
    _, output_file = run_modernmetric_with_files(input_file, tmp_path)
    with open(output_file, 'r') as f:
        stats = json.load(f)
    assert stats['files'] == {}  # no files should be processed, as they don't exist
    assert stats["overall"]["loc"] == 0
```
You'll need to:
- Add `import pytest` at the top of the file if not already present
- Ensure the test framework supports the `tmp_path` fixture (most recent pytest versions do)
- Create appropriate test files in the testfiles directory if they don't exist
- Update any existing error handling in the main modernmetric code if it doesn't properly handle these error cases
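For reference, a minimal sketch of how the `tmp_path` fixture behaves (the file name below is illustrative): pytest injects a fresh, unique `pathlib.Path` directory into any test function that declares a `tmp_path` parameter, so each test can write scratch files without cleanup code.

```python
import json


def test_tmp_path_roundtrip(tmp_path):
    # tmp_path is a pathlib.Path to a per-test temporary directory
    cfg = tmp_path / "filelist.json"  # hypothetical file name
    cfg.write_text(json.dumps({"files": []}))
    # the file lives only for this test; read it back to confirm
    assert json.loads(cfg.read_text()) == {"files": []}
```

Because `tmp_path` is just a `pathlib.Path`, helpers like `create_temp_json_file` above can accept it directly.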
test/test_self_scan.py
Outdated
```python
    assert files is not None
    assert files["../testfiles/test.c"]["loc"] == 25
    assert files["../testfiles/test.c"]["cyclomatic_complexity"] == 0
```
suggestion (testing): Test assertions could be more specific and descriptive
Consider using more specific assertions with meaningful error messages. For example: assert len(files) > 0, 'Files list should not be empty'
and verify the exact number of expected files. Also, the cyclomatic complexity of 0 seems suspicious - is this the expected value?
Suggested implementation:
```python
    files = stats['files']
    assert files is not None, "Files dictionary should not be None"
    assert len(files) == 1, "Expected exactly 1 file in analysis"

    test_file = files["../testfiles/test.c"]
    assert test_file is not None, \
        "Test file '../testfiles/test.c' should be present in results"
    assert test_file["loc"] == 25, "Test file should have exactly 25 lines of code"
    # TODO: Verify if cyclomatic complexity of 0 is correct
    assert test_file["cyclomatic_complexity"] >= 1, \
        "Cyclomatic complexity should be at least 1 for any non-empty file"
    assert stats["overall"]["loc"] == 178, "Overall LOC count should be 178"
```
Note: I've added a comment about cyclomatic complexity and changed the assertion to expect at least 1, since a value of 0 is suspicious for any non-empty file. The developer should:
- Verify the expected cyclomatic complexity value for test.c
- Update the assertion with the correct expected value once verified
- Consider adding more assertions for other metrics if they are available in the stats output
modernmetric/license.py
Outdated
```diff
@@ -3,7 +3,7 @@
 requestx = httpx.Client(http2=True, timeout=None)


-def report(identifier: str | int, product: str, die: bool = False):
+def report(identifier: "str | int", product: str, die: bool = False):
```
do this as a union imported from typing
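A sketch of the reviewer's suggestion, with a placeholder body since the real `report()` calls the license service:

```python
from typing import Union


# Use typing.Union instead of a quoted "str | int" annotation, so the
# signature is valid on Python versions before 3.10 without a string literal.
def report(identifier: Union[str, int], product: str, die: bool = False) -> str:
    # placeholder body for illustration only
    return f"{product}:{identifier}"
```

`Union[str, int]` evaluates at definition time on all supported Python 3 versions, whereas the bare `str | int` syntax requires Python 3.10+ (or `from __future__ import annotations`).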
fixed
modernmetric/__main__.py
Outdated
```diff
@@ -147,7 +147,7 @@ def ArgParser(custom_args=None):
 # e.g. ["--file=path/to/filelist.json"]


-def main(custom_args=None, license_identifier: str | int = None):
+def main(custom_args=None, license_identifier: "str | int" = None):
```
use a union
```diff
@@ -39,11 +39,5 @@ jobs:
     python -m pip install --upgrade pip
     pip install -r requirements.txt

-      - name: Run Pytest
+      - name: Run Pytest with coverage
```
aren't you running pytest twice
Not exactly. test.yml runs pytest 8 times: macOS and Linux x 4 Python versions. I moved the Codecov upload out so it doesn't upload the file 8 times, but it still needs to run pytest once to produce the coverage file.
Summary by Sourcery
Add a unit test for file list processing and remove caching from the main function to streamline the code.