Who Are You Talking To? The Discernment of AI and Human Content on Social Media
A Brief Summary
Social media has become a major part of everyday life for billions of people around the world, used for entertainment, socializing, and even shopping. However, the rise of social bots and artificial intelligence (AI) on social media has led some to believe that the internet is no longer filled with genuine human-to-human interaction. The “dead internet theory” speculates that the internet is now dominated by bot accounts that greatly outnumber those run by real people. Given social media’s ever-growing role in shaping public opinions and attitudes, along with the rapid advancement of AI technology, it is becoming harder to tell whether online interactions are occurring between real people or machines.
This study explores how well people can differentiate between AI-generated and human-made content. Thirty-four Gen Z participants were asked to identify whether various social media posts were AI-generated or human-made. The posts, which covered memes, television, and political discussions, were fashioned after typical Reddit posts and appeared in both text-only and image-based formats. In addition to collecting these human evaluations, we uploaded the same posts to ChatGPT (GPT-4-turbo) and asked it to make the same distinction, as illustrated in the sketch below.
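For readers curious how such a classification query could be run programmatically, the following is a minimal sketch assuming the OpenAI Python SDK (openai>=1.0). The prompt wording, the classify_post helper, and the sample post are illustrative assumptions, not the study’s actual materials.

# Minimal sketch: asking GPT-4-turbo to label a post as AI-generated or
# human-made. Prompt wording and sample post are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def classify_post(post_text: str) -> str:
    """Ask the model for a one-word judgment: 'AI' or 'Human'."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": ("You will be shown a social media post. Reply with "
                         "exactly one word: 'AI' if you think it was "
                         "AI-generated, or 'Human' if a person wrote it.")},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_post("Just rewatched the finale and I still cannot believe that ending."))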
The survey results showed that, overall, participants differentiated between AI and human content more accurately when the posts included images than when they were text-only. This is likely due to visual cues, such as overly smooth surfaces or inconsistent shadowing, that make AI-generated images easier to spot. However, not all users may have this level of awareness; some people online may be unable to tell whether they are engaging with bots and AI rather than another person, which can be misleading. Notably, ChatGPT correctly identified the content in every condition as AI-generated or human-made. This success can be attributed to the precise, formulaic patterns characteristic of AI-generated text, against which human writing tends to appear unpolished. It suggests that AI itself could help social media platforms identify and manage non-human content, especially on platforms whose rules prohibit AI accounts.
While this study was limited by a small sample of only 34 participants, many of whom were personally connected to the researcher, it still raises important questions about how AI is changing the way people interact online. As AI becomes more refined, people’s ability to differentiate between AI and human content may diminish. When users unknowingly interact with bot and AI accounts, their opinions or behavior may be shaped in ways that are neither honest nor transparent. This undermines the original purpose of social media: real connection with real people. As AI continues to be developed and deployed in spaces such as social media, it is important that researchers keep investigating how it relates to and interacts with people. Our findings indicate that social media platforms may benefit from measures that reduce the amount of AI content being shared, and that the public may support government guidelines to keep online spaces authentic and trustworthy.
For a full understanding of the study, please refer to the paper presented below.
View or Download Full Report (PDF)
APA Citation: Rajpal, R. R., Grayon, A. R., & Iino, P. (2025). Who Are You Talking To? The Discernment of AI and Human Content on Social Media. PPL Institute. https://www.pplinstitute.org/Who-Are-You-Talking-To-The-Discernment-of-AI-and-Human-Content-on-Social-Media.html