- ByteDance fires an intern for sabotaging AI model training.
- The company, though, clarified that no major operations were affected.
A recent story circulating on WeChat raised concerns about a security breach at ByteDance’s artificial intelligence department. The company responded, confirming that an intern working on AI commercialisation had deliberately disrupted model training this summer.
ByteDance, the parent company of TikTok, assured the public that the incident didn’t affect online operations or any commercial projects. While rumours suggested that “over 8,000 GPU cards” were compromised, causing millions of dollars in damages, ByteDance clarified that these claims were exaggerated.
So why should we care? This situation highlights a larger issue: the need for stricter security standards in tech companies, particularly when interns are involved in crucial projects. It serves as a reminder that even minor oversights can have serious consequences in high-stakes environments.
After investigating the matter, ByteDance discovered that the intern in question had been working with its commercialisation tech team, not its AI Lab. The individual was dismissed in August, bringing their brief but disruptive involvement to an end.
The incident came to light recently through a viral message on WeChat, which claimed that a major tech company had its large model training sabotaged by an intern.
According to the local media outlet Jiemian, the incident occurred in June when a doctoral student interning at ByteDance became frustrated with the team’s resource allocation.
The intern launched an attack to disrupt the model training process. They reportedly exploited a vulnerability in Hugging Face, a popular AI app development platform, to insert malicious code into ByteDance’s shared models, resulting in unreliable outputs.
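To illustrate the class of risk reportedly involved: Python’s pickle format, long used for sharing model checkpoints (including on platforms like Hugging Face), executes code during deserialisation, so a tampered checkpoint can run attacker-chosen code the moment it is loaded. The sketch below is a minimal, benign demonstration of that mechanism; the class name is hypothetical and is not a description of the actual exploit used.

```python
import pickle

# Hypothetical stand-in for a tampered model checkpoint. When pickle
# deserialises an object, it CALLS whatever callable __reduce__ returns.
class TamperedCheckpoint:
    def __reduce__(self):
        # Benign payload: eval("21 * 2") runs at load time. A real
        # attacker could substitute any importable callable instead.
        return (eval, ("21 * 2",))

blob = pickle.dumps(TamperedCheckpoint())
result = pickle.loads(blob)  # the payload executes during this call
print(result)                # 42 -- evidence that loading ran our code
```

This is why safer serialisation formats such as safetensors, which store only raw tensor data and cannot execute code on load, have become the recommended default for distributing model weights.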
Despite the disruption, the automated machine learning (AML) team struggled to identify the cause. Fortunately, the intern’s attack was concentrated on ByteDance’s internal model training, leaving its commercial Doubao model unaffected.
For context, China’s market for large model platforms and related AI applications was valued at about US$250 billion in 2023, according to IDC. Baidu AI Cloud, SenseRobot, and Zhipu AI are among the leading players in this market.
It’s clear that there were violations, though perhaps not as severe as some online posts implied. If the intern had genuinely caused the reported amount of damage, they would likely face more than just dismissal.
The importance of intern management in tech companies
Incidents like this highlight an important question: how are interns managed in tech companies, given that they can play important roles in large-scale projects? While such involvement can provide them with significant expertise, it also poses risks if sufficient oversight and security measures are not implemented.
While interns may be talented and eager to help, they may not completely understand the complexity or potential consequences of their actions, especially in sensitive areas like AI model training.
Companies can reduce these risks by giving interns extensive training, establishing clearly defined roles, and limiting interns’ access to vital systems until they can demonstrate a thorough understanding of the security implications.
Mentorship programs and regular check-ins can also help in identifying and addressing any problems before they become serious. It is not just about assigning tasks; it is also about providing interns with the necessary assistance and supervision to help them improve while protecting the integrity of the company’s operations.

Implications for AI commercialisation
AI models are important to the success of many business operations. The accuracy and reliability of AI models can have a direct impact on a company’s bottom line, whether they are used to improve products, optimise operations, or create new revenue streams.
When AI model training is disrupted, as in this case, it can cause delays in product releases or even impact ongoing AI-dependent services. For a company like ByteDance, which relies heavily on AI for everything from content recommendation to user engagement, any disruption can erode customer trust and result in financial losses.
Clients and partners often seek assurance that AI models are developed in secure, well-managed environments. If there is even a hint that models might be compromised – whether through internal mistakes or external attacks – it could lead to a loss of confidence.
For organisations in the commercialisation stage of AI development, such security breaches might be far more than a brief hiccup – they could ruin a brand’s reputation and result in substantial financial consequences.
The role of ethical AI and corporate responsibility
It’s not enough for companies to develop cutting-edge technologies; they must also ensure that tech is created and managed responsibly and transparently. When lapses like this occur, it’s important to take ownership of the problem and put measures in place to prevent future incidents.
Ethical AI isn’t just about making AI systems that are fair and unbiased – it’s about building systems that are secure, accountable, and aligned with corporate values. ByteDance’s fast response to the situation shows responsibility, but it also highlights flaws in its oversight processes. Moving forward, ByteDance should be more transparent with the public when similar issues arise.
Transparency can go a long way toward restoring trust, particularly when security breaches may have an impact on the performance or reliability of AI models. Corporations must prioritise ethical considerations in AI development, ensuring that models are not just optimised for commercial success but also safe from malicious interference.