Disinformation Researchers Raise Alarms About A.I. Chatbots




In 2020, researchers at the Center on Terrorism, Extremism and Counterterrorism at the Middlebury Institute of International Studies found that GPT-3, the underlying technology for ChatGPT, had “impressively deep knowledge of extremist communities” and could be prompted to produce polemics in the style of mass shooters, fake forum threads discussing Nazism, a defense of QAnon and even multilingual extremist texts.

The Spread of Misinformation and Falsehoods

Artificial Intelligence: For the first time, A.I.-generated personas were detected in a state-aligned disinformation campaign, opening a new chapter in online manipulation.

Deepfake Rules: In most of the world, the authorities can’t do much about deepfakes, as few laws exist to regulate the technology. China hopes to be the exception.

Lessons for a New Generation: Finland is testing new ways to teach students about propaganda. Here’s what other countries can learn from its success.

Covid Myths: Experts say the spread of coronavirus misinformation, particularly on far-right platforms like Gab, is likely to be a lasting legacy of the pandemic. And there are no easy solutions.

OpenAI uses machines and humans to monitor content that is fed into and produced by ChatGPT, a spokesman said. The company relies on b …
