We talked about the idea of doing case studies at the 1/24 Office Hours, for example on SourceCred or IPFS, and about ways we can get feedback from users to improve SourceCred, speed user adoption, and generally be a force for good in the open-source community.
It would be helpful to formulate a plan for how we intend to carry out this type of research.
An idea I wanted to revisit from Office Hours was the case study concept, where we run SourceCred on a repo and ask contributors for feedback. What kind of questions would we ask the maintainers?
I would be interested in soliciting maintainers' opinions on why they value certain contributions: Is it the difficulty of an implementation? The amount of effort required? The ingenuity of an idea? The project's dependency on the contribution? If we were to create a survey, it would help codify our own ideas about contribution value by creating a multiple-choice list of ways that contributions might be valuable, with perhaps a text box for participants to add their own ideas.
Another idea is to ask people to list their top five pull requests and why they felt those pull requests were valuable. Then we could internally evaluate how SourceCred ranked those pull requests, inspect SourceCred's logic for doing so, and compare it against the participants' reasons for valuing those PRs. We could also ask for PRs they felt were particularly trivial.
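A rough sketch of how we might do that comparison, in Python. The score export format, field names, and example numbers are all hypothetical placeholders; the cred scores would come from however we end up exporting PR-level cred from SourceCred:

```python
# Rough sketch: compare a respondent's self-reported "top five" PRs against
# SourceCred's ranking of PRs in the same repo. The cred_scores dict and the
# example numbers below are made up for illustration.

def top_five_overlap(survey_top_five, cred_scores, k=5):
    """Return how many of the respondent's top-five PRs land in
    SourceCred's top-k PRs by cred, plus each PR's cred rank."""
    ranked = sorted(cred_scores, key=cred_scores.get, reverse=True)
    rank_of = {pr: i + 1 for i, pr in enumerate(ranked)}
    ranks = {pr: rank_of.get(pr) for pr in survey_top_five}
    overlap = sum(1 for r in ranks.values() if r is not None and r <= k)
    return overlap, ranks

# Made-up example:
cred_scores = {"#101": 12.4, "#87": 9.1, "#203": 7.7, "#55": 3.2, "#160": 1.9}
survey_top_five = ["#101", "#55", "#160", "#87", "#42"]
print(top_five_overlap(survey_top_five, cred_scores))
# -> (4, {'#101': 1, '#55': 4, '#160': 5, '#87': 2, '#42': None})
```

Even something this simple would let us see, per repo, how often the PRs people actually value show up near the top of SourceCred's ranking.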
For example, one hypothesis/intuition I'm starting to develop is that difficult contributions will have more input from the maintainers and thus accumulate cred through their comments and reviews. We could test that hypothesis by seeing whether contributions with a lot of maintainer reviews and comments were ranked highly by users because of their difficulty.
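A minimal sketch of one way to check that, assuming we have survey responses joined with maintainer comment counts per PR (all field names here are made up for illustration; the comment counts would come from the same GitHub data SourceCred already loads):

```python
# Rough sketch: do PRs that respondents flagged as "difficult" attract more
# maintainer comments/reviews than the rest? The survey_rows structure is a
# hypothetical join of survey answers with per-PR maintainer activity.

from statistics import median

def difficulty_vs_maintainer_activity(survey_rows):
    """survey_rows: list of dicts like
    {"pr": "#101", "maintainer_comments": 14, "reasons": ["difficulty"]}.
    Returns the median maintainer-comment count for PRs tagged with
    "difficulty" versus those that were not."""
    difficult = [r["maintainer_comments"] for r in survey_rows
                 if "difficulty" in r["reasons"]]
    other = [r["maintainer_comments"] for r in survey_rows
             if "difficulty" not in r["reasons"]]
    return {
        "median_difficult": median(difficult) if difficult else None,
        "median_other": median(other) if other else None,
    }
```

If the "difficult" group consistently shows higher maintainer activity across repos, that would support letting review/comment edges carry more of the cred for those PRs.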
We would also be able to adjust the weights, perhaps come up with new heuristics, and re-test. Asking for a "top five" could apply to any type of contribution: comments, issues, etc.