Believable Disinformation is Mind-Bogglingly Easy

We Must Inoculate Against It

by Jessica E. Brown, Esq.

In the Countercloud experiment, a team of only two people built an automated disinformation machine in just two months for roughly $800. The AI model churned out 20 news articles and 50 tweets daily that were convincing approximately 90% of the time. The machine also infiltrated comment sections with hate speech, generating content similar to that of Russian trolls.

Countercloud ran the experiment to prove how dangerously easy it is to spread disinformation. Experts believe that for only $4,000, bad actors could produce 200 articles per day countering 40+ news outlets, with no human intervention needed. This raises obvious concerns about election interference.

Experts suggest the following practical and important steps to counter AI-generated disinformation and mitigate its impact:

  1. Put AI Content Detection in Browsers:

    • Integrating AI-based content detection directly into web browsers would let the browser analyze web pages for signs of disinformation and alert users in real time when any is detected.

  2. Ask Platforms to Warn Users of AI-Generated Content:

    • Social media platforms and websites could implement systems that flag or label AI-generated content. This would help users distinguish human-generated from AI-generated material, promoting awareness and critical thinking.

  3. Encourage Users to Download Browser Apps that Detect Harmful AI Content:

    • Internet service providers and hosting platforms can encourage users to install browser extensions or filters that detect and restrict the dissemination of harmful AI-generated content. This would act as an additional layer of defense against disinformation.

  4. Regulate Powerful AI Use:

    • Regulating the development and use of powerful AI systems is crucial, though difficult, because it may restrict free speech. Still, licensing requirements for certain AI applications, especially those with a high potential for misuse, may be on the horizon.

  5. Educate the Public on AI-Generated Disinformation:

    • Public education is key in combating disinformation. It's important to inform the public about the existence and potential dangers of AI-generated content, including teaching people how to critically evaluate the information they encounter online.
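The detect-and-warn pipeline behind steps 1–3 can be sketched in a few lines. This is a toy illustration only: the scorer below is a placeholder heuristic that counts stock phrases, not a real AI-text detector (production systems use trained classifiers), and the `WARN_THRESHOLD` cutoff and phrase list are invented for the example.

```python
# Toy sketch of the "score the page, then warn the user" flow
# described in steps 1-3. The scorer is a stand-in heuristic,
# NOT a real AI-content detector.

from dataclasses import dataclass

WARN_THRESHOLD = 0.5  # hypothetical cutoff; a real system would tune this


@dataclass
class PageVerdict:
    score: float  # 0.0 (likely human) .. 1.0 (likely AI-generated)
    warn: bool    # whether the browser should show a warning banner
    label: str    # label a platform could attach to the content


def score_text(text: str) -> float:
    """Placeholder scorer: fraction of stock phrases present in the text.

    A trained classifier would replace this in any real deployment.
    """
    stock_phrases = ["as an ai", "in conclusion", "it is important to note"]
    hits = sum(phrase in text.lower() for phrase in stock_phrases)
    return hits / len(stock_phrases)


def check_page(text: str) -> PageVerdict:
    """Score the page text and decide whether to warn the user."""
    score = score_text(text)
    warn = score >= WARN_THRESHOLD
    label = "Possibly AI-generated" if warn else "No warning"
    return PageVerdict(score, warn, label)
```

In a browser integration, `check_page` would run over the rendered page text and a warning banner would appear whenever `warn` is true; a platform could instead attach `label` next to the post.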

Implementing these measures would require a coordinated effort between governments, tech companies, and the public. It's a complex challenge, but taking proactive steps like these can go a long way in reducing the impact of AI-generated disinformation.
