“Didn’t he come out as gay?”
That’s what I heard from a friend in class, convinced after watching a TikTok of boxer and influencer Jake Paul making a personal announcement. As it turns out, it wasn’t true at all. The video was made with Sora AI, OpenAI’s new text-to-video tool that’s surging across social media with ultra-realistic videos, misleading thousands of people (including my friends) every day.
Sora AI is OpenAI’s text-to-video model and social media app, first launched in December 2024 and followed by the more advanced Sora 2 app in September 2025. It lets users create ultra-realistic short videos by simply typing a description or remixing existing clips, all shared in a feed inside the app itself. The app became a viral sensation almost instantly, with many users generating Sora videos and reposting them to TikTok, Instagram, and X.

Most of these videos carry the Sora AI watermark, making it clear that they are AI-generated, but some creators take the extra step of removing the watermark with editing tools, making it nearly impossible to tell fact from fiction. As an avid social media user who has seen plenty of AI videos, I recently came across some watermark-free Sora animal clips and believed they were real until commenters on TikTok and community notes on X pointed out the truth. Celebrities have also become popular subjects for these videos, with Jake Paul emerging as a main target: many viral videos feature him in makeup tutorials, “get ready with me” videos, and even fake announcements about his personal life, sparking widespread sharing and reactions. Jake Paul himself has played along with the trend, posting his own imitation of a Sora video of him doing his makeup and responding to it alongside his fiancée.

Reactions to Sora AI's TikTok videos have been mixed. Many viewers are amazed by how realistic the videos are, while others are seriously concerned. TikTok creator @watch.the.content posted the Jake Paul Sora AI video, which racked up 1.6 million likes and 24.8 million views and drew comments like “I’m so getting scammed when I’m older” and “AI is getting out of hand.” The biggest worry is for people who aren’t familiar with these AI trends, especially older generations, who might take the videos at face value and be misled.
But it’s not only celebrity clips that confuse people. The other day, I scrolled past a funny reel of a kangaroo kicking down a Halloween decoration, and half the comments were arguing over whether it was AI. At first glance, it looked real to me too, like something you’d see in your neighbour’s yard. But look closer and the weird details show up: the shadows don’t move right, the grass twitches, and the quality seems intentionally bad, as if someone tried to mask the fact that it’s fake.

It’s funny when it’s a kangaroo or Jake Paul doing makeup, but it gets less funny when you realize how easily this technology could be used for harm, like spreading fake news, fake crime footage, or political propaganda. If a kangaroo can fool millions, what happens when it’s a fake world leader saying something explosive?
Even though OpenAI tried to keep things under control, people figured out workarounds almost immediately. The Sora watermark was designed to be easy to spot and to move around the frame, but users are already stripping it out with simple editing tools, and some even post tutorials on how to do it. Once the watermark is gone, there’s nothing stopping a fake video from being treated as real. Social media communities have tried to step in, with people warning others in TikTok comments or adding community notes on X, but by then these videos have often already been liked and reposted thousands of times.
For a generation raised online, a shift like this is unsettling. Short videos have long been our window to the world, but as misinformation spreads faster and grows more difficult to detect, it’s getting harder to know what’s real. Maybe the best thing we can do is take a slower, more skeptical approach: question what we see instead of assuming every viral clip tells the truth.