Discord details how it fights spammers in a 'never-ending game of cat and mouse'
How the company currently defines spam, categorises it, and acts against it.
Discord has been in the news for some bad reasons this year, most recently for having teased some sort of NFT integration, which it almost immediately walked back. Some of the best third-party apps for the software have been killed off, the company's sky-high valuation of $15 billion raises all sorts of questions about what its future looks like, and on the ground level it continues to face the day-to-day problems of any big social service: spammers and grifters and bots, oh my!
A recent blog post by Discord's safety team goes into some detail about how Discord combats spammers, and it's interesting because it reveals how a giant organisation like this breaks down the problem.
Discord categorises spam into one of three groups. Generated Accounts, which are created and operated by bots, make up the "bulk majority" of Discord spam. Compromised Accounts cause "the highest user-impact spam" because they use 'real' accounts to push spam to unsuspecting users. Human Operations, accounts made and operated by real people, are among the "most long-lasting spam actors on the platform."
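To make the taxonomy concrete, here's a minimal Python sketch of what a three-way bucketing could look like. To be clear, this isn't Discord's code, and the signals (`created_by_automation`, `recently_hijacked`) are hypothetical stand-ins, since the company doesn't publish its detection features.

```python
from dataclasses import dataclass
from enum import Enum, auto

class SpamCategory(Enum):
    GENERATED_ACCOUNT = auto()    # bot-created accounts; the "bulk majority" of spam
    COMPROMISED_ACCOUNT = auto()  # hijacked 'real' accounts; the highest user impact
    HUMAN_OPERATION = auto()      # real people running spam accounts; the longest-lasting

@dataclass
class Account:
    # Hypothetical signals; Discord doesn't publish its real detection features.
    created_by_automation: bool
    recently_hijacked: bool

def categorise(account: Account) -> SpamCategory:
    """Bucket a spammy account into one of the three groups the post describes."""
    if account.created_by_automation:
        return SpamCategory.GENERATED_ACCOUNT
    if account.recently_hijacked:
        return SpamCategory.COMPROMISED_ACCOUNT
    return SpamCategory.HUMAN_OPERATION

print(categorise(Account(created_by_automation=False, recently_hijacked=True)))
# SpamCategory.COMPROMISED_ACCOUNT
```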
"We have a full and growing team working on spam, but it's a never-ending game of cat and mouse that also requires us to make sure legitimate users don’t get caught in the crossfire," reads the post, before saying that Discord makes constant interventions and is always increasing its automated security measures for the simple reason that it makes spamming more expensive. "The more expensive it is for bad actors to engage in spam producing activity, the less likely they are to commit to it."
Discord claims it's currently in a place where it can automatically identify many spammers extremely fast, sometimes before they've sent a single message. There's also a 'safe server' feature on the way, currently in testing, which "monitors servers for inauthentic behavior from new members, and proactively puts the server into safe mode, requiring captchas to engage with the community for a period of time." This functionality will also integrate with Membership Screening.
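Discord hasn't said how safe mode decides a wave of new members is inauthentic, but a rate-limit-style heuristic is one plausible shape for it. The sketch below is purely illustrative: a burst of joins inside a sliding window trips a captcha-gated period, and every threshold and duration here is invented for the example.

```python
import time

# All numbers here are invented for illustration.
JOIN_WINDOW_SECONDS = 60       # look at joins over the last minute
JOIN_SPIKE_THRESHOLD = 20      # this many joins in the window looks inauthentic
SAFE_MODE_SECONDS = 60 * 60    # gate the server behind captchas for an hour

class Server:
    def __init__(self) -> None:
        self.recent_joins: list[float] = []
        self.safe_mode_until: float = 0.0

    def on_member_join(self, now: float) -> None:
        # Keep a sliding window of recent join timestamps.
        self.recent_joins = [t for t in self.recent_joins if now - t < JOIN_WINDOW_SECONDS]
        self.recent_joins.append(now)
        if len(self.recent_joins) >= JOIN_SPIKE_THRESHOLD:
            # A burst of joins looks inauthentic: proactively enter safe mode.
            self.safe_mode_until = now + SAFE_MODE_SECONDS

    def requires_captcha(self, now: float) -> bool:
        return now < self.safe_mode_until

server = Server()
for _ in range(25):                          # simulate a raid-style join burst
    server.on_member_join(time.time())
print(server.requires_captcha(time.time()))  # True
```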
A suspicious link system is now in place, warning users not to click when Discord thinks a link is dodgy. Perhaps most striking is Discord's own surprise at the results of a recently added, more prominent way for users to report spam: "Thanks to community reporting, our ability to identify bad actors has increased by 1000%, allowing us to more rapidly discover and remove spammers while also improving our automated detection models."
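Again, Discord doesn't describe how its suspicious link detection works, but a hedged sketch shows the general idea: flag known-bad domains plus crude lookalikes of the real thing. The domain lists and the character-swap check below are assumptions for illustration only; a real system would lean on domain reputation, user reports, and learned models rather than static sets.

```python
from urllib.parse import urlparse

# Both sets are stand-ins, not real data.
OFFICIAL_DOMAINS = {"discord.com", "discord.gg"}
KNOWN_BAD_DOMAINS = {"free-nitro.example"}

def is_official(host: str) -> bool:
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

def link_warning(url: str) -> str | None:
    """Return warning text if the link looks dodgy, otherwise None."""
    host = (urlparse(url).hostname or "").lower()
    # Crude lookalike check: swapping '0' for 'o' turns it into a real Discord domain.
    lookalike = is_official(host.replace("0", "o")) and not is_official(host)
    if host in KNOWN_BAD_DOMAINS or lookalike:
        return f"This link ({host}) looks suspicious. Are you sure you want to open it?"
    return None

print(link_warning("https://disc0rd.gg/free-nitro"))   # warns
print(link_warning("https://discord.com/channels/1"))  # None
```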
More detail is apparently coming over the next few months about additional tools for community moderators and the further steps Discord is taking. It's all a far cry from the talk about this platform's future, but in terms of your average experience on there, this is the kind of stuff that will hopefully make a difference. Anyway: please enjoy Discord while it's still good.