In the U.S. alone, more than 40 AI models were created in 2024, fueling the technology's expansion and changing the way information is shared across news sites and social media platforms. AI-generated videos are increasingly popular among children, teens and adults, resulting in the widespread dissemination of misinformation.
On Sept. 14, Prestonwood Baptist Church in Plano, Texas, displayed a fully AI-generated video of the late Charlie Kirk during its morning service, complete with an entirely fabricated message. Rather than memorializing him with a compilation of his past debates, the church used AI to make Kirk appear to say things he never actually said, spreading misinformation tailored to the church's beliefs.
To stop the spread of misinformation through AI, the general public's access to AI needs to be limited, such as by offering only one AI tool online. On top of this, social media platforms and news sites must implement mandatory watermarking and fact-checking systems to combat AI-generated misinformation that threatens public trust and individual reputations.
AI is readily accessible to anyone: tools such as OpenAI's Sora and Google's Veo are open to the public and let users generate a video in seconds from any prompt of their choosing, at no charge. According to Google Cloud, more than 70 million AI videos have been generated through Veo since May 2025. The only real limitation is a daily cap on the number of videos generated, which can be bypassed by upgrading plans or buying a subscription.
Because AI is accessible to the public, users can generate content from any prompt and post it on social media. Many AI-generated videos depict real people; these so-called deepfakes can ruin the reputations of those featured in them. According to NPR, an AI-generated video of Ukrainian President Volodymyr Zelenskyy spread across social media in 2022, appearing to show him telling Ukrainian soldiers to surrender in the war against Russia.
Although officials quickly debunked the video, it spread across social media and news outlets, briefly damaging Zelenskyy's reputation worldwide and sowing mistrust between him and Ukrainians during the war.
Despite AI's advancements, there are still many ways to identify AI-generated videos. Hyper-real or misshapen body parts, distorted objects and backgrounds, and unnatural physics are all telltale abnormalities AI creates, according to Julia Feerrar, an associate professor at Virginia Tech. However, being able to identify AI-generated videos won't stop them from being created or spread across social media. To address this, social media platforms could watermark AI-generated content, implement fact-checking tools for users, or flag and remove such videos.
“As AI continues to evolve and improve, we need strategies to detect fake articles, videos, and images that don’t just rely on how they look,” Feerrar said.