Jason Chang's problem writing guide
Note: this document deals exclusively with code-writing problems.
I largely put my trust in four key factors that have a significant impact on the quality of a problem, particularly in distinguishing “okay” problems from “good” ones. This is not to say these are the only factors that affect problem quality; it’s just that, as someone who has spent many hours working with the CSM 61A content repo, these are the aspects I have come to value most when writing problems. Below I briefly talk about each, in no particular order.
I like to divide each problem into distinct pieces: description, doctests, skeleton, and solution. I’ll define a transition as the point where the student shifts their focus between distinct elements of a problem. In particular, I want to focus on the transition from problem description to doctests.
The description’s job is two-pronged. First, it gives the student a high-level overview of what the problem is asking for. Second, it may give subtle (or overt) hints as to how to go about the problem. On the other hand, doctests are included to give the student a better understanding of how the problem works. In essence, the doctests are overt hints, dictating what behaviors the student should be expecting, and giving them a view of the “final product.”
In a way, we can think of the transition between these two pieces as the gap between a problem’s high-level conceptualization and its more concrete visualization. It is crucial that by the time the student shifts from reading the problem description to going through the doctests, they have a clear sense of what the problem is asking for, at the very least. By looking at the doctests, the student should be seeking to reinforce their understanding of the problem by visualizing explicit outputs. It follows that if the problem description fails to provide either the foundational information necessary to solve the problem or meaningful hints as to how to develop a solution, it becomes that much harder for the student to find the doctests useful in supplementing their understanding.
I believe that the key to good transitions is ensuring that both ends share key hooks: bits of information that serve as a connection. A simple example would be to use the skeleton code’s explicit variable names in the problem description. This helps the student narrow their focus onto relevant aspects of the description and place them appropriately within the structure of the skeleton.
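To make the idea of a hook concrete, here’s a minimal made-up example (skip_mul, its doctests, and its blank are invented for illustration, not taken from any actual worksheet). Suppose the description reads: “Implement skip_mul, which returns the product of every other element of lst, starting with the first.” Because the description names the skeleton’s parameter lst directly, the student can anchor that phrase to the exact spot in the skeleton where it matters:

```python
def skip_mul(lst):
    """Return the product of every other element of lst,
    starting with the first element.

    >>> skip_mul([3, 4, 5])        # 3 * 5
    15
    >>> skip_mul([2, 9, 7, 1, 8])  # 2 * 7 * 8
    112
    """
    total = 1
    for x in lst[::2]:  # in the actual skeleton, "lst[::2]" would be the blank
        total *= x
    return total
```

The phrase “every other element of lst” appears in both the description and the docstring, so both ends of the transition hand the student the same hook.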
By focusing on the quality of your transitions (not just the one detailed above), you ease the student’s traversal of each component of the problem. Exam-level questions especially are already technically challenging enough to make it tough on students. Removing the potential for students to get unnecessarily confused by the structure of the problem streamlines their thought process and lets them focus on the task at hand.
For code-writing problems that include it, skeleton code is an essential component. It serves two seemingly paradoxical functions. First, it provides the student with a springboard to start solving the problem: if the student has a vague idea of how the problem works and can apply their understanding correctly, the skeleton helps to scaffold their thoughts. The second, related purpose is to restrict the student’s ideas to the confines of the limited available space. By explicitly dictating the structure of the expected code, the skeleton forces the student to balance developing a functioning solution with fitting their ideas to its shape. Skeleton code is a bit of a double-edged sword-- it can help to jumpstart ideas and hint at crucial components of the problem, but sometimes it acts as more of a burden on the student-- and it needs to be treated as such. The placement of key blank spaces is a crucial component that can make or break problems.
So when should you put a blank rather than leaving in the code? I don’t believe there is, or ever will be, a concrete formula that’s universally applicable, but I think the following situations cover a pretty good range:
- A student is likely to fill in the line incorrectly if they have a particular misconception
- Ex: In a self-referencing problem, the inner function calls the outer function in its return statement, and the outer function returns the inner function. Leaving both the inner and outer functions’ return statements blank checks whether the student correctly understands how self-referencing works (see the first sketch after this list).
- Key initial values, i.e. the problem has an “orientation” to it
- Ex: When traversing a linked list, the student should certainly understand that they must start at the first node. To check this understanding, have the student fill in the initial value for a pointer before they use it in a later loop (see the second sketch after this list).
- An essential line that the problem hinges on, or a line that is a potential watershed moment
- Ex: rotate on the [Sp20 Week 13 worksheet](https://drive.google.com/drive/u/1/folders/19gAu1NlIO7jcCq1CHyg2C20BN5IiAe7C) has the student fill in branch_labels, which is crucial to having access to an unchanging source for the node labels while destructively changing the tree nodes.
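Here’s a rough sketch of the first two situations (the function names, the Link class, and the blank placements are my own hypothetical examples; the blanks are shown filled in so the code runs, with comments marking what the skeleton would leave blank):

```python
# Situation 1: a misconception-revealing blank. In the skeleton, both
# return statements below would be blanks; a student who doesn't
# understand self-reference tends to fill them in swapped, or with a
# bare function name where a call is needed.
def print_all(x):
    """
    >>> f = print_all(1)(2)(3)
    1
    2
    3
    """
    print(x)
    def print_next(y):
        return print_all(y)  # BLANK: inner returns a call to the outer
    return print_next        # BLANK: outer returns the inner function


# Situation 2: a key initial value. The skeleton would blank the first
# assignment to pointer, checking that the student starts the traversal
# at the front of the linked list.
class Link:
    empty = ()

    def __init__(self, first, rest=empty):
        self.first, self.rest = first, rest

def contains(lnk, value):
    """
    >>> contains(Link(1, Link(2, Link(3))), 2)
    True
    >>> contains(Link(1, Link(2, Link(3))), 4)
    False
    """
    pointer = lnk  # BLANK: the traversal's "orientation" starts here
    while pointer is not Link.empty:
        if pointer.first == value:
            return True
        pointer = pointer.rest
    return False
```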
When talking about rhythm, I’m specifically describing the average time you’d expect the student to spend on each component of the problem. It’s a fairly standard approach for the student to read the problem description, then the doctests, before searching for easier lines in the skeleton code to fill in and finally attempting the more difficult lines. So, as we write problems, it’s important to observe tendencies like this and give each individual step the proper emphasis.
This may seem a little more hand-wavy, but the rhythm of a problem varies significantly depending on what you’re going for as the writer. For example, a problem that is complex and requires a long problem description could pair well with either super detailed, complex doctests or simpler, easier-to-digest ones, depending on the writer’s intent.
Consider the case of complex doctests paired with a complex problem description: this could work for a multi-part problem that requires the student to have an absolutely clear understanding of a function in order to feasibly approach the rest of the problem. This lends itself to the type of problem that tends to be more difficult to process at a conceptual level, testing the student’s ability to process information and successfully transform it into code.
On the other hand, if we pair a complex problem description with simpler doctests, we could make the problem more lightweight in terms of the overhead the student must go through to fully grasp the high-level concepts. That said, this could also make it more challenging for the student to come up with an explicit solution that fits the skeleton, since the problem conveys less comprehensive information upfront. So, in contrast with the prior case, this combination would test the student’s ability to extrapolate from the information provided to arrive at the correct solution.
Regardless of what you are trying to accomplish with your problem, the rhythm of your problem should match it. For example, if one of your goals is to test the student’s thoroughness and ability to consider cases outside of those provided by the doctests, it doesn’t make sense to have comprehensive doctests. On the flip side, if you want the student to focus more on the technical aspects due to the intensity required on a conceptual level, consider giving more comprehensive and easily digestible doctests as a jumping-off point, allowing the student to dive into the code faster. The sketch below illustrates the contrast.
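As a hypothetical illustration of this trade-off (the function and both doctest sets are invented), here is the same problem with a minimal doctest set and a comprehensive one; a writer would pick one depending on which rhythm they’re after:

```python
def longest_run(lst):
    """Return the length of the longest run of equal adjacent
    elements in lst.

    Minimal doctests: if the goal is to test thoroughness, stop here
    and leave edge cases (empty list, all-equal list) for the student
    to reason about on their own.
    >>> longest_run([1, 1, 2, 2, 2, 3])
    3

    Comprehensive doctests: if the goal is to move the student into
    the code quickly, spell the edge cases out up front.
    >>> longest_run([])
    0
    >>> longest_run([7])
    1
    >>> longest_run([4, 4, 4, 4])
    4
    """
    best, current = 0, 0
    previous = object()  # sentinel that compares unequal to any element
    for x in lst:
        current = current + 1 if x == previous else 1
        best = max(best, current)
        previous = x
    return best
```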
Just a forewarning, this section is more riffing than the previous sections and I don’t consider it as necessary to read through as the others. Think of it as a meta interpretation of the philosophy behind content creation in CSM 61A.
Let’s sit down for some real talk for a second, because I think what gets lost on a lot of mentors is the purpose of having content creation in CSM 61A. Within the scope of CSM 61A, we have a lot of problems (bad, good, and everywhere in between) already written by experienced and non-experienced mentors of the past and present. In theory, we could probably roll out the same review worksheets and review sessions each semester and just call it a day-- but that’s not the point. We want to continue to give students the best resources we can, and this includes leveraging our own mentors’ insights into how 61A as a class has evolved to keep our materials updated and relevant. As such, it’s important for us to revise and create problems that better serve our students as we get new mentors who offer fresh perspectives.
On the flip side, we want to give our mentors a well-rounded and meaningful experience. I can’t think of anywhere else within 61A’s pedagogy microcosm where you can experience content creation at an introductory level. We want to provide opportunities for mentors to explore different aspects of pedagogy, including content creation, through systematic institutions such as task forces. It is through these experiences that we hope to let mentors get their feet wet and do some discovery, before deciding later whether they want to pursue other pedagogical endeavors or take a deeper dive into something they’ve found they’re passionate about.
All of the above was some background before this section’s main point-- with so much pre-existing content and an environment where mentors are expected to engage with experimentation, I think creativity and originality are elements that shouldn’t be overlooked. There are plenty of people who can regurgitate the problem description asking the student to reverse a linked list, and some more who can come up with basic spinoffs. But do those kinds of problems really speak to the individual? This isn’t to say that those kinds of problems are without merit; if anything, given the right circumstances, they are honestly somewhat essential to gauging and pushing students’ understanding. What I’m getting at is that we’ve done our best to create an environment where, as you write your own problems, you have significant freedom to build something that is uniquely your creation, with infrastructure to support your ideas and improve your skills. Since you have the space to work with, try using it to the fullest extent!
Major contributors: Jason Chang