With OpenAI’s ChatGPT and Google’s Bard recently shaking up the technological landscape, the capabilities of artificial intelligence are greater than ever before. Though developed as tools, these systems also represent one of the greatest threats online discourse has ever faced. Trolls and propagandists who were once confined to small circles now wield unprecedented power when armed with AI, with implications that could be profound for people of all ages in every country.
Vulnerable Groups
Even though bringing up concerns about children is often a go-to tactic of fearmongers, in this instance, at least, they legitimately represent one of the vulnerable groups that AI could target. The primary avenue of attack would likely be social media, which, as a study by ExpressVPN shows, is used for an average of 21 minutes a day by children as young as four. While usage varies by country, statistics from the US place YouTube (43%) and TikTok (28%) among the most commonly used social media platforms. Toxic personalities already lean heavily on these platforms, and advancing AI will only extend their reach.
It’s not just children who will be negatively affected by AI’s ingress onto the web; users of all ages will face similar threats, delivered with slightly different tactics. Though adults can be harder to fool, nobody is entirely immune to bad information or faulty arguments, and that’s where AI will excel.
Methods of Attack
At this early point in chatbot AI, developers have placed limitations on certain topics and conversation lengths, as noted by the Independent. These limits won’t last, however, and with each iteration these systems have more data to draw from, data that can include bullying, misinformation, and propaganda tactics.
On the most fundamental level, the threat of chatbot AI to online discourse comes from removing the barriers bad actors would otherwise face in spreading hate speech and propaganda. Rather than having an open discussion, a troll could simply have an AI write their responses, exhausting someone arguing in good faith until they quit. Since many of us are never taught what constitutes an honest discussion or how to recognize bad-faith tactics, onlookers might then be convinced by the bad actor’s arguments.
This ties into many aspects of manipulative psychology in online discussion, such as the famed Gish gallop. As explained by SpeakingofResearch, the Gish gallop is when a person floods a discussion with as many incorrect or half-true points as possible, as quickly as possible. Since it can take five seconds to make a false claim and five minutes to explain why it is false, this common tactic makes an honest debater appear overwhelmed and outmatched.
Hope for the Future
Even though it could be possible for AI to detect other AI and block it out, such a solution will never be fully applied internet-wide. Instead, our best hope is to understand what constitutes honest debate and fact-checking. Rather than talking past each other, it’s imperative to stop at each point and fully discuss one issue before moving on, relying on research and academic consensus whenever possible.
“Machine Learning & Artificial Intelligence” (CC BY 2.0) by mikemacmarketing
Ultimately, though, no single solution is ever going to fully neutralize the threat new chatbot AI presents. It’s essentially the dream tool of misinformation merchants, and it’s an issue that will have profound effects on the understanding and direction of humanity’s future. It’s not exactly a robot force kicking in your door like in The Terminator, but in terms of potential social, economic, and environmental harm, AI shouldn’t be underestimated.