💡 Moderation Functionality #2787
Comments
Hi @nebojsact - I have updated the ticket with the main requirements. I'll leave it to you to add the two approaches we discussed during our call.
Hi, as we discussed, I'd like to add that we could try to automate the process using AI, but I currently have no idea about the cost of that kind of service. You have already described all the other main points very well :)
I think a code of conduct for the forum is a great idea. For moderation, an option in the comment dropdown to 'flag inappropriate comment' is probably enough for now; flagged comments would simply come to us. We could then review them weekly and bring the decision to the working group on how to act.
Great question! I just discussed that with @l-br1, and he mentioned that we could get support from the GovTool Working Group. This way, they could take ownership of the moderator role and decide how to handle each flag.
In any moderation system, it's crucial to ensure that content and comments are linked to legitimate identities. Sybils (fraudulent identities) can disrupt messaging and lead to unintended outcomes, especially in automated moderation. While manual moderation is an option, it often lacks scalability. Here are some of my first thoughts:

1. Weighted Flagging: Assign a weight to each user's flag based on their stake, as each user identity is associated with a stake key. This approach minimizes the impact of cheaply created sybils, as their flags would carry little to no weight.
2. Minimum Stake Requirement: Additionally, require each user to maintain a minimum stake of, for example, 1 ADA in their wallet. While this will not entirely prevent spam, it increases the cost for spammers to create sybils, thereby reducing spam incidents and lessening the burden on other moderators.

Since the Proposal Discussion Forum (PDF) operates off-chain, spam prevention measures must be enforced at the backend. Frontend checks using browser wallets alone will likely not be sufficient.
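The two ideas above could be combined into a single scoring rule: ignore flags from identities below the minimum stake, and weight the remaining flags by stake. A minimal sketch in Python, where the names (`Flag`, `weighted_flag_score`, `MIN_STAKE_LOVELACE`) and the 1 ADA threshold are purely illustrative assumptions, not part of any existing GovTool code:

```python
from dataclasses import dataclass

# Hypothetical constants/names for illustration only.
MIN_STAKE_LOVELACE = 1_000_000  # 1 ADA, expressed in lovelace

@dataclass
class Flag:
    stake_key: str          # identity of the flagging user
    stake_lovelace: int     # stake associated with that identity

def weighted_flag_score(flags: list[Flag]) -> float:
    """Sum flag weights in ADA; identities below the minimum stake carry no weight."""
    return sum(
        f.stake_lovelace / 1_000_000
        for f in flags
        if f.stake_lovelace >= MIN_STAKE_LOVELACE
    )

# A swarm of zero-stake sybil wallets contributes nothing to the score:
flags = [Flag("sybil1", 0), Flag("sybil2", 0), Flag("honest", 5_000_000)]
print(weighted_flag_score(flags))  # 5.0
```

Since the forum is off-chain, a check like this would run in the backend against stake data queried from the chain, rather than trusting values reported by a browser wallet.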
Hi @kickloop - Would you mind reviewing the proposed solution and adding your comments?
Summary of the Meeting
User Interaction:
Escalation Process:
Security Measures: I am also creating a new ticket for Sandip's comment above.
Area
Proposal Pillar
Is there new design needed?
Yes
### Objective
Ensure healthy, constructive, and inclusive discussions within the Proposal Discussion Forum by introducing a moderation system that promotes engagement while discouraging disruptive behavior.
Why?
How?
Key Features:
1) Flagging System:
2) Automated Warnings:
3) Provide Guidelines:
- Display forum guidelines to remind users of acceptable behavior.
NOTE: For the MVP, we aim to avoid introducing a moderator role and instead prioritize an automated system.
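For the automated, moderator-free MVP described above, the flagging system and automated warnings could reduce to simple thresholds on the flag count per comment. A hedged sketch; the function name, threshold values, and action labels are assumptions for illustration, not confirmed GovTool behavior:

```python
# Illustrative thresholds; actual values would be decided by the working group.
WARNING_THRESHOLD = 3   # flags before the author receives an automated warning
REVIEW_THRESHOLD = 5    # flags before the comment is queued for weekly review

def moderation_action(flag_count: int) -> str:
    """Map the number of flags on a comment to the next automated action."""
    if flag_count >= REVIEW_THRESHOLD:
        return "queue_for_review"   # surfaced to the working group, no moderator role needed
    if flag_count >= WARNING_THRESHOLD:
        return "send_warning"       # automated warning, displayed with the forum guidelines
    return "none"

print(moderation_action(1))  # none
print(moderation_action(3))  # send_warning
print(moderation_action(6))  # queue_for_review
```

Thresholds like these would pair naturally with the weighted-flagging idea discussed in the comments, by comparing the weighted score rather than the raw count.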