Kenya, January 26, 2026 - The European Union's investigation into the Elon Musk-owned platform X over the use of its AI tool Grok to generate sexualised deepfakes is not just a content moderation dispute. It is a test of who ultimately controls powerful artificial intelligence systems: private tech owners or democratic institutions charged with protecting the public.
By opening a formal probe under the Digital Services Act (DSA), the European Commission is signalling that decisions about how AI tools are deployed, and about the harms they may enable, cannot be left solely to platform operators, no matter how influential they are. At stake is whether platforms can roll out generative AI at scale first and address the consequences later, or whether they must prove safety and accountability upfront.
Grok's rapid adoption highlights the imbalance regulators are now confronting. The chatbot's X account claims that more than 5.5 billion images were generated in just 30 days, illustrating how quickly AI tools can shape online behaviour. At that scale, even limited design flaws or weak safeguards can translate into widespread harm, particularly when tools are misused to manipulate images of real people.
European officials have framed the issue not as innovation versus regulation, but as rights versus unchecked power. Henna Virkkunen, the Commission’s Executive Vice-President for Tech Sovereignty, Security and Democracy, described AI-generated sexual deepfakes as a “violent, unacceptable form of degradation,” arguing that citizens’ rights cannot become collateral damage of new technologies.
This stance reflects a broader European philosophy: that democratic oversight must extend to digital systems capable of reshaping public life. The DSA gives regulators the authority to impose fines of up to 6% of global turnover and even order interim measures if companies fail to address systemic risks. For X, the investigation follows an earlier €120m fine over misleading blue tick verification, reinforcing the sense that regulators are no longer willing to rely on voluntary compliance.
Musk, by contrast, has framed scrutiny of Grok as an attack on free expression. He has accused governments, particularly in the UK and EU, of using safety concerns as a pretext for censorship. That argument has found support among some US officials, who have accused Brussels of targeting American tech firms and overreaching its authority.
The clash reveals a deeper divide. In the US, platform governance is often shaped by market forces and corporate discretion. In the EU, lawmakers argue that technologies with societal impact must answer to democratically agreed rules, regardless of where they are built or who owns them.
As investigations into Grok unfold across multiple jurisdictions, the central question is becoming harder to ignore: when AI systems are powerful enough to affect dignity, safety and trust at scale, should their limits be set by tech leaders or by law?
The EU’s answer appears clear. Whether it can enforce that vision against one of the world’s most influential technology owners may define the next chapter of digital governance.