Fix tests - 3 out of 4 cases are failing #14

Open
bobleesj opened this issue Aug 13, 2024 · 4 comments

Comments

@bobleesj
Contributor

Same as the title. We should fix them while we have time. Related:

Re-visit for cookiecutting after releasing other diffpy packages with higher priority #13

@bobleesj
Contributor Author

bobleesj commented Nov 5, 2024

@bobleesj (reminding myself via the notification box to describe the issue in detail)

@bobleesj
Contributor Author

bobleesj commented Nov 5, 2024

As of now, the code lacks unit tests. It does have one test function, which primarily runs main(args=...) and then checks the captured stdout against an expected printed line such as Number of components: 3.

Here is the test function:

import os
from pathlib import Path

import pytest

# `main` and `test_map` are defined in/imported by the test module;
# `temp_dir` is a fixture from the package's conftest.


@pytest.mark.parametrize("tm", test_map)
def test_nmf_mapping_code(tm, temp_dir, capsys):
    data_dir = tm[0]  # CLI-style argument list passed to main()
    working_dir = Path(temp_dir)
    os.chdir(working_dir)
    main(args=data_dir)
    # The printed output must match the expected string exactly.
    out, err = capsys.readouterr()
    assert out == tm[2]
    results_dir = os.path.join(working_dir, "nmf_result")
    os.chdir(results_dir)
    expected_base = os.path.join(os.path.dirname(__file__), "output")
    test_specific_dir = os.path.join(expected_base, tm[1])
    # Compare every produced file against its stored counterpart.
    # NOTE: os.walk recurses, but fn1 is joined to results_dir, so this
    # assumes produced files sit directly in nmf_result/.
    for root, dirs, files in os.walk("."):
        for file in files:
            if file in os.listdir(test_specific_dir):
                fn1 = os.path.join(results_dir, file)
                with open(fn1, "r") as f:
                    actual = f.read()
                fn2 = os.path.join(test_specific_dir, file)
                with open(fn2, "r") as f:
                    expected = f.read()
                assert expected == actual

The arguments for main(args=data_dir) are taken from the first element of each test_map entry, tm[0], e.g. [data_dir, "--xrange", "5,10"]:

# Each entry: (CLI argument list, expected-output directory, expected stdout)
test_map = [
    ([data_dir, "--xrange", "5,10"], "output_1", "Number of components: 3\n"),
    ([data_dir], "output_2", "Number of components: 3\n"),
    ([data_dir, "--xrange", "5,10", "12,15"], "output_3", "Number of components: 3\n"),
]

Problem

The number of components printed by main() does not match what test_map above expects:

>       assert out == tm[2]
E       AssertionError: assert 'Improvement ...ponents: 10\n' == 'Number of components: 3\n'
E         
E         + Improvement ratio of 1E-3 not met, attempting 1E-2...
E         + Improvement ratio of 1E-2 not met. Inspect data and impose manual cutoff
E         - Number of components: 3
E         ?                       ^
E         + Number of components: 10
E         ?                       ^^

/Users/imac/Downloads/dev/bg/diffpy.nmf_mapping/tests/test_NMF_analysis_code.py:48: AssertionError
====== 3 failed, 2 passed, 19 warnings in 14.36s ======

Proposed solution

  • Instead of checking the output of the print statement, only test the expected output artifacts (.json and .png file existence and size).
  • During the pytest run, plt.show() should not actually be called, since the remaining tests are not executed until the current plot is closed manually. A sketch of both points follows below.
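
A minimal sketch of what that could look like, reusing main and test_map from the existing test module. The monkeypatch replaces plt.show with a no-op so no figure window blocks pytest; the glob patterns here are illustrative assumptions, not a list of the package's actual output files:

import os

import matplotlib.pyplot as plt
import pytest


@pytest.mark.parametrize("tm", test_map)
def test_nmf_mapping_outputs(tm, tmp_path, monkeypatch):
    # No-op plt.show so no interactive window blocks the test run.
    monkeypatch.setattr(plt, "show", lambda *args, **kwargs: None)
    os.chdir(tmp_path)
    main(args=tm[0])
    results_dir = tmp_path / "nmf_result"
    # Check existence and non-zero size instead of comparing stdout.
    for pattern in ("*.json", "*.png"):
        produced = list(results_dir.glob(pattern))
        assert produced, f"no {pattern} files written to {results_dir}"
        assert all(p.stat().st_size > 0 for p in produced)

Alternatively, selecting the non-interactive Agg backend (matplotlib.use("Agg") early in conftest.py) keeps any stray plt.show() call from opening a window at all.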

@sbillinge
Contributor

We can probably junk this test, but let's take the opportunity to populate a few tests. We don't usually test the arg parser anyway.
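
One way to start populating such tests without going through the arg parser, as a sketch only: it assumes the package's decomposition step builds on scikit-learn's NMF and exercises that on synthetic data, rather than calling any diffpy.nmf_mapping internals directly:

import numpy as np
from sklearn.decomposition import NMF


def test_nmf_recovers_rank_three_data():
    rng = np.random.default_rng(0)
    # Synthetic non-negative data built from exactly 3 components.
    W_true = np.abs(rng.normal(size=(20, 3)))
    H_true = np.abs(rng.normal(size=(3, 50)))
    X = W_true @ H_true

    model = NMF(n_components=3, init="nndsvd", max_iter=500)
    W_fit = model.fit_transform(X)

    assert W_fit.shape == (20, 3)
    assert np.all(W_fit >= 0) and np.all(model.components_ >= 0)
    # Rank-3 NMF should reconstruct rank-3 data far better than zero.
    assert model.reconstruction_err_ < 0.5 * np.linalg.norm(X)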

@bobleesj
Contributor Author

bobleesj commented Nov 12, 2024

For bookkeeping purposes: PR #32 was created to refactor our existing test function. Please carefully read the discussion in that PR before addressing this issue.
