Disturbing AI-generated videos depicting Black women as “Bigfoot” are rapidly spreading across social media platforms like Instagram and TikTok, drawing millions of views. These clips leverage advanced AI tools, such as Google’s Veo 3, to create caricatured figures engaged in stereotypical behaviors and speaking in exaggerated tones. This trend is sparking significant concern among experts who point to its deep roots in historical racist tropes used to dehumanize Black individuals. The ease of creating such content combined with the power of social media algorithms is accelerating its reach and impact, raising urgent questions about safeguards and accountability in the age of generative AI.
AI Tools Fueling Dehumanizing Content
The viral phenomenon known as “bigfoot baddies” relies on powerful new artificial intelligence video generation tools. Google’s Veo 3, launched in May 2025, is one such tool heavily used by creators. It allows users to generate detailed video, including scenery, characters, and spoken audio, from simple text prompts, significantly lowering the barrier to producing complex digital media.
Google itself showcased Veo’s ability to generate cryptid and surreal influencer-style videos as a selling point; creators have repurposed that capability into a vehicle for racial caricature. The resulting videos portray Black women as hybrid bigfoot-human figures, often with exaggerated physical traits such as acrylic nails or brightly colored wigs. The figures are frequently depicted wearing items like bonnets and speaking in exaggerated voices or using African American Vernacular English (AAVE) in a mocking manner.
Examples of Harmful Stereotypes
Specific examples from these viral videos highlight the offensive nature of the content. One AI character speaks directly to an imaginary audience, stating she might “go on the run” because she’s “wanted for a false report on [her] baby daddy.” Another clip features an AI character using a country accent, implying she stored Hennessy liquor in her genitals. In yet another video, an AI-generated female bigfoot dodges bombs while on vacation, saying she hopes to be “resurrected with a BBL” if she dies. These scenarios draw on harmful stereotypes and present them in a derogatory light through the lens of dehumanizing “Bigfoot” figures.
Experts emphasize that these depictions are not merely crude jokes. Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, highlights the significant historical precedent behind this specific form of offense. She explains that in the early days of slavery, illustrations frequently exaggerated the features of Black people to emphasize primal characteristics. This tactic was used deliberately to dehumanize and ridicule. Turner Lee finds it “both disgusting and disturbing” that such historical racial tropes are now easily generated and widely distributed online using modern technology.
The use of AAVE in a caricatured way further reinforces negative stereotypes linked to Black identity. This adds another layer to the offensive content, signifying the figures’ Blackness through harmful linguistic portrayals.
Virality, Monetization, and Algorithmic Spread
These racist AI videos are achieving massive reach very quickly. Some individual clips have accumulated millions of views across platforms like Instagram and TikTok. One popular Instagram account dedicated to posting these videos saw five of its posts exceed a million views within less than a month of its creation.
The trend has spawned numerous copycat accounts. Other creators are reposting the “bigfoot baddie” clips or generating similar offensive content. A repost of one video on an AI-focused meme page garnered 1 million views on Instagram. Another video on a different account attracted nearly 3 million views. An account on TikTok dedicated to similar AI-generated content has over a million likes. This rapid spread demonstrates the potent combination of easily creatable content and powerful social media distribution networks.
Alarmingly, creators are also monetizing the trend. At least one popular Instagram account sells a $15 online course teaching others how to create similar “bigfoot” videos with AI tools like Veo 3; the course description even boasts that “Veo 3 does the heavy lifting.” This points to a deliberate, for-profit effort to spread the means of generating this offensive content.
The design of social media algorithms plays a significant role in amplifying these videos. When a user engages with one “bigfoot baddie” video, recommendation systems like Instagram’s Reels feed quickly begin suggesting similar content. A test by WIRED found that watching a few of these videos led to the feed being filled with other racist AI-generated content, including a video depicting a Black man on a fishing boat catching fried chicken and referring to a chimpanzee as his son. This shows how algorithmic amplification can accelerate the consumption and spread of harmful content. As Turner Lee notes, AI not only makes such manipulation easier to produce, but algorithms also make it easier to share and consume.
The Challenge of AI Guardrails
Experts point to the difficulty in implementing sufficient safeguards within generative AI tools. Meredith Broussard, an NYU professor, compares the issues with generative AI to problems previously encountered with social media platforms. She argues that the creators of AI tools often “cannot conceive of all of the ways that people can be horrible to each other.” This lack of foresight makes it challenging to build robust guardrails against malicious use.
This phenomenon mirrors past concerns about AI bias, especially against speakers of AAVE, but extends them to generative visual and audio content. The ease of creating photorealistic yet offensive videos with tools like Veo 3 contributes to the rapid proliferation of what some describe as a “new minstrelsy.” This concept, previously discussed in relation to earlier viral AI videos (such as the famous AI-generated clip of Will Smith eating spaghetti), describes how AI can create deceptive and harmful caricatures.
The rapid advancement and increasing accessibility of AI tools mean that creating and distributing dehumanizing content is becoming easier. Without effective guardrails from AI developers and robust content moderation from platforms, trends using AI to attack minority groups are likely to continue. While Meta, Google, and TikTok were contacted for comment regarding these specific videos, they did not provide statements prior to the original article’s publication. The challenge of regulating illicit AI applications operating internationally also underscores the difficulty in controlling this type of harmful content spread.
Frequently Asked Questions
What are the “Bigfoot baddie” AI videos, and why are they offensive?
“Bigfoot baddie” AI videos are viral, AI-generated clips found on social media platforms like Instagram and TikTok. They depict Black women as ape-like figures (“Bigfoot”) engaging in stereotypical behaviors, often speaking in exaggerated voices or AAVE. Experts call them offensive because they rely on historical racist tropes used during slavery to dehumanize Black people by portraying them with exaggerated, primal characteristics for ridicule.
Which AI tools are being used for these videos, and how are they spreading?
Creators are using advanced AI video generation tools like Google’s Veo 3. These tools allow users to produce complex video scenes, characters, and audio from simple text prompts, making it easy to generate numerous clips. The videos spread virally through shares and reposts, but significantly, social media algorithms amplify them. Watching one such video prompts platforms to recommend similar racist content, rapidly increasing its reach to millions of users.
Why are experts concerned about this trend, and what does it highlight about AI development?
Experts are concerned due to the historical link to dehumanizing racist caricatures and the ease with which AI tools facilitate their creation and spread. They highlight that creators of AI tools often fail to anticipate all harmful uses, making it difficult to implement effective safeguards (“guardrails”). This trend exposes a significant challenge: powerful, accessible AI technology can easily be misused to create and distribute harmful content, echoing past problems with social media platforms and suggesting a need for better ethical considerations in AI development and deployment.
Conclusion
The emergence of viral AI videos depicting Black women as “Bigfoot baddies” serves as a stark reminder of the potential for advanced technology to be weaponized for harmful purposes. Leveraging accessible tools like Google’s Veo 3, creators are perpetuating deeply offensive, historically rooted racist tropes. The rapid spread of this content, amplified by social media algorithms and even monetized through online courses, underscores the urgent need for stronger ethical considerations in AI development and more robust content moderation on platforms. Addressing this challenge requires both technical solutions to build better safeguards into AI tools and greater accountability from the platforms that host and spread this dehumanizing material.