Some moderators may have particular categories of content they would prefer not to see. It would be useful to minimise this risk by flagging such content before a human moderator reads it.
What needs to be done?
- Capture which terms or criteria could be used to flag content
- Implement this so that moderators can opt out of seeing some posts
Potential options:
- filter posts with keywords (see the sketch below)
- filter posts whose author has not previously flagged triggering content
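For the keyword option, here is a minimal sketch of what the filtering could look like. This is plain Python for illustration only; the per-moderator blocklist and all names here are assumptions, not existing code:

```python
import re

def matches_blocklist(text: str, blocklist: set[str]) -> bool:
    """Return True if any blocked keyword appears in the text as a whole word."""
    lowered = text.lower()
    # Blocklist entries are assumed to be lowercase keywords.
    return any(re.search(rf"\b{re.escape(word)}\b", lowered) for word in blocklist)

def visible_queue(posts: list[str], blocklist: set[str]) -> list[str]:
    """Drop posts matching a moderator's personal blocklist from their queue."""
    return [post for post in posts if not matches_blocklist(post, blocklist)]
```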
Who can help?
Any suggestions welcome! Please add to the comments.
@helendduncan Do you mean by allowing keyword search on the experiences or by some more complex NLP to identify potentially triggering stories? My gut feeling is that it'll be really tough to reliably flag things based on this.
Instead of using keywords, I'd suggest using past publicly shared experiences as a proxy for whether a new story might or might not include triggers. In the simplest version of this we could add a bit of additional data to the moderation queue pages:
- Highlight if an experience is the first one ever submitted by a user (similar to how GH highlights first-time contributors). This would allow prioritizing newcomers to make sure they have a smooth experience, but also flag that they most likely have no prior experience with submitting (i.e. might not properly label an experience).
- Have a little flag showing how many of a user's previous experiences have had trigger labels (e.g. 3/12 stories with warnings). It's not very granular, but it lets moderators make an informed guess that isn't based on the title alone.
EDIT: The other benefit of this approach is that it's technically rather easy to implement with our existing models etc.
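For illustration, a rough sketch of what those two hints could look like, assuming Django-style models. The `Experience`, `submitted_by`, and `has_trigger_warning` names are placeholders, not our actual schema:

```python
from django.db.models import Count, Q

from .models import Experience  # hypothetical model

def submitter_stats(user):
    """Count a user's submissions and how many of them carried trigger labels."""
    stats = Experience.objects.filter(submitted_by=user).aggregate(
        total=Count("id"),
        with_warnings=Count("id", filter=Q(has_trigger_warning=True)),
    )
    return stats["total"], stats["with_warnings"]

def queue_hint(user):
    """Build the moderation-queue hint described above."""
    total, with_warnings = submitter_stats(user)
    if total == 0:
        return "First-time contributor"
    return f"{with_warnings}/{total} previous stories had trigger warnings"
```

A per-user aggregate like this needs no analysis of the story text itself, only data we already store.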