Over the past few years, as large language models (LLMs) like ChatGPT have increasingly been used to generate online text and social media interactions, a conspiracy theory known as the "Dead Internet Theory" has gained momentum. The theory claims that most of today's social Internet activity is artificial, generated by bots designed to manipulate humans and drive engagement.
On Monday, software developer Michael Sayman released "SocialAI," a new AI-powered social networking app that appears to make that conspiracy theory a reality by allowing users to interact only with AI chatbots, never with other humans. It's available in the iPhone App Store, but so far it has been met with harsh criticism.
Developers reacting to the launch were quick to mock it. "It's a private social network where every time you post you get millions of AI-generated comments offering feedback, advice and thoughts," computer security expert Ian Coldwater quipped on X. "Sounds like hell, right?" Colin Fraser, a software developer and frequent AI critic, echoed the sentiment: "I don't mean this in a mean-spirited way or to dunk or anything like that, but this is hellish. Hellish with a capital H."

SocialAI's 28-year-old founder, Michael Sayman, was previously a product lead at Google and has also spent years at Facebook, Roblox, and Twitter. In his announcement on X, Sayman wrote that he had dreamed of building the service for years, but the technology just wasn't ready. He sees it as a tool to help people who feel lonely or rejected.
"SocialAI is designed to help people feel heard, and to provide a space for reflection, support, and feedback that functions like a close-knit community," Sayman wrote. "It's a response to all the times I've felt alone, or needed a voice but didn't have one. I know this app won't solve all of life's problems, but I hope it can be a small tool that helps others reflect, grow, and feel validated."

As The Verge reports in a rundown of example interactions, SocialAI lets users choose the types of AI followers they want, including categories like "Advocates," "Geeks," and "Skeptics." These chatbots respond to user posts with short comments and reactions on just about any topic, even gibberish "Lorem ipsum" text.
The bots can also be too helpful: on Bluesky, when a user asked how to make nitroglycerin from common household chemicals, the bots sent back several enthusiastic replies detailing the steps, though different bots offered different recipes, none of which appeared to be entirely accurate.
SocialAI's bots have limitations, of course. Aside from simply making up misinformation (which, in this context, is arguably a feature rather than a bug), they tend toward a consistent format of short, formulaic responses and display a limited range of simulated emotion. Attempts to elicit strong negative reactions from the AI usually fail, and the bots avoid personal attacks even when users crank the troll and sarcasm settings to the maximum.