Per the conversation in PR #22, the test cases as written in `test/two-fer/two_fer_test.py` are not syntax highlighted and are difficult to scan. In their current multi-line string form they are also brittle (an extra space or other minor error will cause a failure) and hard to maintain.
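For illustration, the brittle form looks roughly like this (a minimal sketch; the function names, solution text, and expected output here are assumptions, not copied from the actual test file):

```python
import unittest

from analyzer import analyze  # assumed entry point, not the actual API


class TwoFerTest(unittest.TestCase):
    def test_simple_solution(self):
        # Hypothetical solution under analysis.
        solution = (
            "def two_fer(name='you'):\n"
            "    return f'One for {name}, one for me.'\n"
        )
        # Expected analyzer output embedded as a multi-line string:
        # a single stray space or newline makes the comparison fail.
        expected = """{
    "status": "approve",
    "comments": []
}"""
        self.assertEqual(analyze(solution), expected)
```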
Ideally, we'd refactor the tests and testing code to read sample files from a `test/samples/` directory (or the `test/` directory itself) and compare results with "known good" output -- similar to this approach in the Python test runner, and this approach in the Ruby Analyzer. We could also model the test input as Pytest Fixtures, similar to the approach used in the C# Analyzer. Both ideas are sketched below.
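A minimal sketch of the sample-file approach (the `test/samples/` layout, file names, and `analyze` entry point are assumptions): each sample directory holds a solution plus its "known good" analysis, and a single parametrized test compares them.

```python
from pathlib import Path

import pytest

from analyzer import analyze  # assumed entry point, not the actual API

SAMPLES = Path(__file__).parent / "samples"


@pytest.mark.parametrize(
    "sample_dir",
    sorted(d for d in SAMPLES.iterdir() if d.is_dir()),
    ids=lambda d: d.name,
)
def test_sample_matches_known_good(sample_dir):
    solution = (sample_dir / "two_fer.py").read_text()
    expected = (sample_dir / "expected_analysis.json").read_text()
    # Adding a new case is just adding a new sample directory;
    # no test code changes required.
    assert analyze(solution) == expected
```

And a similarly hedged sketch of the fixture idea (the fixture name and `analyze_file` entry point are hypothetical): the fixture writes the solution to a real file, so tests consume files rather than inline strings.

```python
import pytest

from analyzer import analyze_file  # hypothetical path-based entry point


@pytest.fixture
def simple_solution(tmp_path):
    # Write a known-good solution to a temporary file so the analyzer
    # reads a real path rather than an inline string.
    path = tmp_path / "two_fer.py"
    path.write_text(
        "def two_fer(name='you'):\n"
        "    return f'One for {name}, one for me.'\n"
    )
    return path


def test_approves_simple_solution(simple_solution):
    assert analyze_file(simple_solution)["status"] == "approve"
```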
Because the current model is to have a test directory for every analyzer and an analyzer directory for every exercise, it's important we move to a clearer and more modular solution for test files.