Platform takes action against AI-generated conflict content
X has announced new enforcement measures targeting creators who post AI-generated videos of armed conflicts without proper disclosure. According to Nikita Bier, X’s head of product, these creators will face suspension from the platform’s Creator Revenue Sharing Program for 90 days. Repeat violations, he says, will lead to permanent removal from the program.
I think this move comes at a time when distinguishing real from synthetic content has become increasingly difficult. Modern AI tools, as Bier noted, make it trivial to produce misleading material that looks authentic. During times of war, he emphasized, people need access to genuine information from the ground.
How enforcement will work
The platform plans to use several methods to identify violations. Community Notes, the feature that lets users add context to posts, will play a role, and X will also examine metadata and other signals embedded in the content itself. It's not entirely clear how this detection system will work in practice, but the intent is straightforward enough.
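One concrete metadata signal a platform could check is the IPTC "digital source type" field, which provenance-aware tools embed in a file's XMP metadata to mark content as AI-generated. X has not published how its detection works, so the snippet below is only an illustrative sketch of that general idea, not the platform's actual pipeline; the function name and the second source-type value are assumptions for illustration.

```python
# Sketch: flag media whose XMP metadata declares an AI digital source type.
# "trainedAlgorithmicMedia" is the IPTC NewsCodes value for fully
# AI-generated content; the composite value is assumed here for partly
# synthetic media. Real-world detection would parse XMP properly rather
# than substring-match, and would combine many signals.
AI_SOURCE_TYPES = (
    "trainedAlgorithmicMedia",
    "compositeWithTrainedAlgorithmicMedia",
)

def looks_ai_generated(xmp_packet: str) -> bool:
    """Return True if the XMP packet declares an AI digital source type."""
    return any(token in xmp_packet for token in AI_SOURCE_TYPES)

# Example: a minimal XMP fragment as a synthetic-media tool might write it.
sample = (
    '<rdf:Description Iptc4xmpExt:DigitalSourceType='
    '"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>'
)
```

Checks like this only catch content whose metadata was honestly written and not stripped on re-encode, which is one reason a disclosure requirement still relies on human signals such as Community Notes.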
This update specifically targets AI-generated combat content posted without disclosure. The timing coincides with ongoing geopolitical tensions in the Middle East, though the policy applies globally. X has been expanding its disclosure requirements for synthetic media over the past year, trying to maintain some level of trust in information shared on its platform.
What creators stand to lose
The Creator Revenue Sharing Program allows eligible users to earn income based on engagement metrics and interactions with X Premium subscribers. To qualify, creators need verified status and must meet certain engagement thresholds. The payouts are tied to how much X Premium users interact with their content, rather than traditional advertising revenue.
A 90-day suspension means losing access to this income stream for three months. For creators who rely on this revenue, that’s a significant penalty. Permanent removal, of course, would be even more damaging to their ability to monetize content on the platform.
The broader context
Platforms have been struggling with how to handle AI-generated content for a while now. Some require labels; others ban certain types of synthetic media outright. X's approach sits somewhere in the middle: the content is allowed, but it must be clearly disclosed.
But enforcement is always the tricky part. Automated systems can miss things, and human review takes time and resources. The mention of Community Notes suggests they’re trying to leverage their user base to help with detection, which is interesting. It creates a sort of crowdsourced moderation system, though that comes with its own challenges.
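The crowdsourced angle can be made concrete. Community Notes publicly describes a "bridging" criterion: a note is surfaced only when raters who usually disagree both find it helpful, which the real system implements via matrix factorization. The toy sketch below is a deliberately simplified stand-in for that idea, with hypothetical raters and cluster labels, not X's actual algorithm.

```python
# Toy "bridging" rule: a note counts as helpful only if a strict majority
# of raters in EACH viewpoint cluster rated it helpful, and at least two
# clusters weighed in. The production system instead learns viewpoint
# dimensions from rating history via matrix factorization.
def note_is_helpful(ratings: dict, clusters: dict) -> bool:
    """ratings maps rater -> bool; clusters maps rater -> cluster label."""
    by_cluster = {}
    for rater, helpful in ratings.items():
        by_cluster.setdefault(clusters[rater], []).append(helpful)
    return len(by_cluster) >= 2 and all(
        sum(votes) * 2 > len(votes)  # strict majority within the cluster
        for votes in by_cluster.values()
    )

# Hypothetical data: four raters split across two viewpoint clusters.
clusters = {"a": "left", "b": "left", "c": "right", "d": "right"}
unanimous = {"a": True, "b": True, "c": True, "d": True}
one_sided = {"a": True, "b": True, "c": True, "d": False}
```

With `unanimous` ratings the note surfaces; with `one_sided` ratings it does not, because the "right" cluster lacks a strict majority. That asymmetry is the point: agreement within one camp is not enough.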
What happens if someone posts an AI-generated video that’s clearly labeled as such, but it still causes confusion? The policy seems focused on the disclosure aspect rather than the content itself, which makes sense from a practical standpoint. Still, there will likely be edge cases that need clarification as the policy gets implemented.
For now, creators posting about conflicts or other sensitive topics will need to be extra careful about their content sources and disclosures. The platform appears to be drawing a line in the sand about what constitutes acceptable use of AI tools in certain contexts.