Rework Code Evaluation #195
Comments
I propose a slight modification to Solution 1. To make Forms' unittesting as general as possible, the API could accept Python code in a `setup` field. The test template currently in Forms is then moved to the Code Jam backend, which sends a request like:

```json
{
    "setup": "...",  // essentially https://github.com/python-discord/forms-backend/blob/main/backend/routes/forms/unittesting.py#L104-L108
    "output": "5 Tests Passed"  // i.e., the expected output of the unittests patched in by setup
}
```

This maintains the current behaviour: you get to test object states and such, but it's possible for the submitter to control stdout. Not an issue, since …
For Esoteric Challenges, a … [1]

[1]: The simplest way is to run the setup within the Forms backend, but that assumes we trust people who create forms. A marginally more complicated way is to run the setup in snekbox too, with the minor downside of having two snekbox requests per test.
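A minimal sketch of how the proposed Forms-side flow might look, assuming a simplified snekbox request shape and hypothetical names (`SNEKBOX_URL`, `evaluate_submission`); the real endpoint and payload may differ:

```python
# Hypothetical sketch only: the snekbox request/response shape is simplified,
# and none of these names come from the actual forms-backend.
import httpx

SNEKBOX_URL = "http://snekbox:8060/eval"  # assumed endpoint


async def evaluate_submission(setup: str, expected_output: str, submission: str) -> bool:
    """Run the form-supplied setup together with the user's code in snekbox.

    "setup" is whatever the requesting service (e.g. the Code Jam backend)
    sent; it is responsible for patching in and running the unittests.
    The verdict comes purely from comparing stdout with "output".
    """
    code = f"{submission}\n\n{setup}"  # user code first, test harness appended
    async with httpx.AsyncClient() as client:
        response = await client.post(SNEKBOX_URL, json={"input": code})
        response.raise_for_status()
        stdout = response.json()["stdout"]
    return stdout.strip() == expected_output.strip()
```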
Running any user code directly on the backend is a no-go. The part I don't understand about your proposal is what it ends up achieving: you still can't check state the way you described, because the same issue of not being able to safely communicate between the unit test suite and the backend remains. Also, to touch on Zig's last point: the discussion yesterday was that this issue is currently sitting at low priority, and adding support for things that aren't the code jam isn't something we'll look into unless we have a good idea.
It removes the possibility of exploiting stdout in tests where we directly match the user's prints, while letting you keep the current behaviour of code jams, where you check prints via unittest. That is to say, you can decide on a per-event (or even per-test) basis whether you want (unexploitable stdout + no unittests) or (exploitable eval + unittests).
Isn't that just suggestion 1?
You don't get to keep the code jam's unit tests with suggestion 1.
Currently the code evaluation suffers from possible exploits. These stem from how code evaluation currently works on the forms app: the user's submission and the unit tests are run together in the same snekbox process, and the test results are read back from that process's stdout.
This means that if the user manages to control the stdout, they are able to control the output of the tests, as far as the forms app can understand it.
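To illustrate, a submission along these lines could fake a passing run without solving anything; the exact string the forms app looks for is an assumption here, used purely for illustration:

```python
# Hypothetical malicious submission: instead of solving the task, it prints
# the line the stdout-parsing flow expects to see and exits cleanly.
import sys

print("5 Tests Passed")  # forged "all tests passed" output
sys.exit(0)
```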
There are already several mitigations in place for this, but solving it outright ultimately requires a more thorough/drastic solution.
Solution 1
Separate user code from the tests.
This has the disadvantage of limiting what we can evaluate, since we are currently able to inspect Python objects during testing.
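A minimal sketch of what Solution 1 could look like, assuming snekbox only ever sees the user's submission and all checking stays in the forms backend; the endpoint and payload shape are the same assumptions as in the sketch above:

```python
# Sketch under assumed names: snekbox runs the submission alone, and the
# backend compares stdout against an expected value stored with the question.
# Controlling stdout is then the task itself, not an exploit.
import httpx

SNEKBOX_URL = "http://snekbox:8060/eval"  # assumed endpoint


async def run_submission(submission: str) -> str:
    """Run only the user's code in the sandbox and return its stdout."""
    async with httpx.AsyncClient() as client:
        response = await client.post(SNEKBOX_URL, json={"input": submission})
        response.raise_for_status()
        return response.json()["stdout"]


async def check_submission(submission: str, expected_stdout: str) -> bool:
    # No test code runs next to the submission, so a plain string comparison
    # in the backend is safe; nothing inside the sandbox decides the verdict.
    return (await run_submission(submission)).strip() == expected_stdout.strip()
```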
Solution 2
We can see whether we can add an additional way to get information out of snekbox, specifically for the purpose of retrieving test results, for example via an encrypted byte-stream. Either way, it should be something that can't be accessed from the evaluated code. The details can be hashed out if this is something we want to do.
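A heavily hedged sketch of the backend's side of such a channel, assuming (a) the test harness can return a results blob on a path the evaluated code cannot write to, and (b) it can sign that blob with a per-run key the evaluated code cannot read; both assumptions are exactly the details that would still need to be hashed out:

```python
# Hypothetical verification step on the forms backend. The hard part, keeping
# the key and the channel out of reach of the evaluated code, is not solved
# here; this only shows how the backend would trust the results it gets back.
import hashlib
import hmac
import json
import secrets


def new_run_key() -> bytes:
    """Generated per evaluation and never sent alongside the user's code."""
    return secrets.token_bytes(32)


def verify_results(payload: bytes, signature: bytes, key: bytes) -> dict | None:
    """Accept test results only if they carry a valid signature for this run."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return None  # forged or tampered results are discarded
    return json.loads(payload)
```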
In any case, considering the protections we already have in place against something like this, and considering that Code Jam qualifiers are meant to assess a person's Python knowledge, even if someone manages to exploit the current system it's not the worst thing in the world (although preferably it wouldn't happen anyway). Therefore, this issue should be relatively low priority.