Not long ago, FACEIT announced that it is implementing an AI named Minerva to fight toxicity in CSGO. Minerva is not yet fully operational, but it can already recognize toxic chat messages and issue suitable punishments for misbehavior. To develop Minerva further, however, FACEIT has turned to the players themselves for help in increasing the AI's intelligence, GamesIndustry.biz reports.
FACEIT replicates Valve’s Overwatch
The idea sounds awfully familiar. Valve has already implemented a similar approach to catching cheaters on the official competitive CSGO servers. VACnet works alongside Overwatch, a mode in which certain eligible players act as detectives and review clips of suspected cheaters. These reviewers then submit their verdicts, and if enough of these player “detectives” agree, the suspect at hand will most likely be banned from CSGO. The difference between the two AIs is that VACnet is mainly concerned with cheats that give players an unfair advantage, such as wallhacks, aimbots, and speed scripts.
For now, Minerva mostly focuses on toxic chat abuse, voice communications, and in-game behavior. This complicates things, as context is crucial to understanding the circumstances in which toxicity takes place. This is where the help of human judgment comes into play.
According to GamesIndustry.biz, FACEIT’s upcoming Justice update “will launch an online portal where Counter-Strike players can review cases identified by Minerva”. Though Minerva has reportedly “issued 20,000 bans and 90,000 warnings” in the six weeks since launch, that is surely just a fraction of the toxic behavior occurring daily on FACEIT’s CSGO servers. How the Justice update improves Minerva’s accuracy will definitely be something to keep an eye on.
What do you think of developing an AI such as Minerva to battle toxicity? Let us know in the comments below. As always, remember to follow us at Daily Esports for all your latest CSGO news.