Right now spacehey is vulnerable to account creation spam. During an attack, hundreds of accounts with offensive images or names are created en masse with the goal of eliciting a reaction from the user base. Personally, I've done everything I can to show people how to limit their exposure to the content, and I've emphasized that posting blogs and bulletins simply to complain about the issue is exactly the reaction these people are looking for. There is only so much we can expect of the users, though, so this is my attempt to offer a systemic solution to the issue.
Part one: Adding a Queue
The target is typically the cool new people section because it's the easiest way to serve offensive content to the entire user base without any direct interaction. On an individual level, most people who have taken action have hidden this section on their home page using an extension. This is effectively the same as removing the feature entirely, and it doesn't discourage the attacks or limit their impact in any meaningful way.
My proposal for the simplest-to-implement and least labor-intensive solution starts with a review queue for the cool new people page. On account creation, every new account gets a "review status" flag that defaults to "not reviewed". Only accounts that are both "new" and have a review status of "reviewed" would appear as new accounts in the browse and cool new people sections.
Transitioning from "not reviewed" to "reviewed" would require some level of human intervention, but it should be less labor intensive overall than seeking out and deleting offensive accounts after the fact. The implementation could be as simple as a page displaying profile pictures and names with a checkbox next to each. A human checks the box next to any offensive profile, then clicks a button that applies "reviewed" to the unchecked profiles and "rejected" to the checked ones. Rejected profiles can be listed on a separate page for further investigation and hidden from users until another action is taken.
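To make the workflow concrete, here is a minimal sketch of that batch action in Python. Everything here is an assumption for illustration: the state names ("not reviewed", "reviewed", "rejected") come from this post, but the function and field names are hypothetical, not SpaceHey's actual code.

```python
# Hypothetical sketch of the moderator review action described above.
# State names follow the proposal; all identifiers are assumptions.

PENDING, APPROVED, REJECTED = "not_reviewed", "reviewed", "rejected"

def apply_review(accounts: dict[int, str], flagged_ids: set[int],
                 batch_ids: list[int]) -> dict[int, str]:
    """Mark checked (flagged) profiles rejected, the rest of the batch reviewed.

    accounts maps account id -> review status; only accounts still in the
    pending state can transition, so re-submitting a batch is harmless.
    """
    for account_id in batch_ids:
        if accounts.get(account_id) != PENDING:
            continue  # already reviewed or rejected; leave untouched
        accounts[account_id] = REJECTED if account_id in flagged_ids else APPROVED
    return accounts
```

The one-way transition out of "not reviewed" is deliberate: it keeps a double-click on the submit button from flipping an already-handled account.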
This would have an immediate chilling effect on spam activity, as accounts would now have to first be created and approved before an attack could begin. It would also require spammers to retain credentials for all their accounts, and to log in and change the account images and content after approval. This greatly increases the time and sophistication required to launch the same type of attack.
To summarize the work required to implement this step:
- Adding a new field to user accounts in the database
- Changing the query used in the new people section and the cool new people widget
- Creating a page to display the queue for moderators, with an action button to apply status changes
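The first two items above can be sketched in a few lines. This uses SQLite purely for illustration; the table and column names are assumptions, since I don't know SpaceHey's actual schema, and the real change would be made in whatever database the site runs on.

```python
import sqlite3

# Illustrative only: "accounts", "review_status", and the date filter are
# assumed names, not SpaceHey's real schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id INTEGER PRIMARY KEY,
        name TEXT,
        created_at TEXT,
        -- the new field: every account starts unreviewed
        review_status TEXT NOT NULL DEFAULT 'not_reviewed'
    )
""")
conn.executemany(
    "INSERT INTO accounts (name, created_at, review_status) VALUES (?, ?, ?)",
    [("alice", "2024-01-02", "reviewed"),
     ("spammer", "2024-01-02", "not_reviewed")],
)

# The "cool new people" query gains exactly one extra condition:
new_people = conn.execute(
    "SELECT name FROM accounts "
    "WHERE created_at >= '2024-01-01' AND review_status = 'reviewed'"
).fetchall()
```

Because unreviewed accounts simply never match the query, nothing about the existing page layout or widget has to change; the spam accounts just stop appearing.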
Part two: Future development
Aside from the immediate chilling effect of the first step, there is a major scalability benefit as well. With an account flagging system in place, automation* can be used to supplement human labor using any number of metrics. An endpoint that hashes profile images, for example, could flag profiles for moderator review when the hash of the profile picture matches a database of banned hashes. Another endpoint could parse text in names and profiles to flag accounts for review. Account creation times can be used to map the activity levels of offensive content and build a profile of the bad actors. These are just a few examples; there are many more options for reducing the attack surface and increasing the cost of attacking the site.
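The image-hashing idea might look something like the sketch below. I've used an exact SHA-256 match to keep it self-contained; a real deployment would more likely use a perceptual hash so that slightly altered copies of a banned image still match. The function name and banned-hash set are hypothetical.

```python
import hashlib

# Sketch of the hash-flagging endpoint described above. Exact SHA-256
# matching is used for simplicity; a perceptual hash would be more robust
# against re-encoded or cropped images. All names are assumptions.
BANNED_HASHES = {hashlib.sha256(b"known-offensive-image").hexdigest()}

def flag_for_review(image_bytes: bytes) -> bool:
    """Return True if this profile image matches a banned hash.

    A True result would only mark the account for human review,
    never take a moderation action on its own.
    """
    return hashlib.sha256(image_bytes).hexdigest() in BANNED_HASHES
```

Note that the output of this check is a flag, not a ban, which keeps it consistent with the footnote below: automation finds candidates, humans decide.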
Part three: ?????
I've done my best to lay out the simplest and least time-consuming solution to the problem I can think of. I welcome suggestions for improvement in the comments, and I'll update this blog with revisions if we come up with better ideas.
I know there are developers in the spacehey community who are both capable of and willing to help implement this sort of feature. I would be willing to build some of the endpoints mentioned, and I know of at least one skilled developer who has offered to write front-end features for the site in the past. I'm sure there are more people willing to contribute their time to help improve the site.
An, if you like this idea, please implement it. If you want help, all you have to do is ask!
*To be clear: I'm not advocating for automating any moderation actions, but rather for automating the detection of potential problems that human moderators can then resolve.