AI Asia | TechWire Asia – https://techwireasia.com/tag/ai/ – Where technology and business intersect

First it was Ghibli, now it’s the AI Barbie Box trend
https://techwireasia.com/2025/04/first-it-was-ghibli-now-its-the-ai-barbie-box-trend/ – Mon, 14 Apr 2025 13:10:48 +0000

  • Following the Ghibli portraits, the AI Barbie trend comes to LinkedIn.
  • Blending nostalgia with self-promotion, the trend has drawn brand interest but little celebrity uptake.

    After gaining attention with Studio Ghibli-style portraits, ChatGPT’s image generator is now powering a new wave of self-representation online – this time with users turning themselves into plastic action figures.

    What began as a quirky trend on LinkedIn has now spread to platforms like Instagram, TikTok, Facebook, and X. The trend includes different takes, but the “AI Action Figure” version is among the most common. It typically shows a person recreated as a doll encased in a plastic blister pack, often accessorised with work-related items like laptops, books, or coffee mugs. That’s fitting, considering the trend’s initial traction among professionals and marketers on LinkedIn.

    Other versions draw inspiration from more recognisable aesthetics, like the “Barbie Box Challenge,” where the AI-generated figure is styled to resemble a vintage Barbie.

    The rise of the virtual dolls follows the earlier success of the Studio Ghibli-style portraits, which pushed ChatGPT’s image capabilities into the spotlight. That earlier trend sparked some backlash related to environmental, copyright, and creative concerns – but so far, the doll-themed offshoot hasn’t drawn the same level of criticism.

    What’s notable about the trends is the consistent use of ChatGPT as the generator of choice. OpenAI’s recent launch of GPT-4o, which includes native image generation, attracted such a large volume of users that the firm had to temporarily limit image output and delay rollout for free-tier accounts.

    While the popularity of action figures hasn’t yet matched that of Ghibli portraits, it does highlight ChatGPT’s role in introducing image tools to a broader user base. Many of these doll images are shared by users with low engagement, and mostly in professional circles. Some brands, including Mac Cosmetics and NYX, have posted their own versions, but celebrities and influencers have largely stayed away. One notable exception is US Representative Marjorie Taylor Greene, who shared a version of herself with accessories including a Bible and a gavel, calling it “The Congresswoman MTG Starter Kit.”

    What the AI Barbie trend looks like

    The process involves uploading a photo into ChatGPT and prompting it to create a doll or action figure based on the image. Many users opt for the Barbie aesthetic, asking for stylised packaging and accessories that reflect their personal or professional identity. The final output often mimics retro Barbie ads from the 1990s or early 2000s. Participants typically specify details like:

    • The name to be displayed on the box
    • Accessories, like pets, smartphones, or coffee mugs
    • The desired pose, facial expression, or outfit
    • Packaging design elements like colour or slogans

    Users often iterate through several versions, adjusting prompts to better match their expectations. The theme can vary widely – from professional personas to hobbies or fictional characters – giving the trend a broad creative range.

    How the trend gained momentum

    The idea gained visibility in early 2025, beginning on LinkedIn where users embraced the “AI Action Figure” format. The Barbie-style makeover gained traction over time, tapping into a blend of nostalgia and visual novelty. Hashtags like #aibarbie and #BarbieBoxChallenge have helped to spread the concept. While the Barbie-inspired version has not gone as viral as the Ghibli-style portraits, it has maintained steady traction on social media, especially among users looking for lighthearted ways to express their personal branding.

    Video: https://youtube.com/watch?v=Z6S6zQQ8sCQ

    Using ChatGPT’s image tool

    To participate, users must access ChatGPT’s image generation tool, available with GPT-4o. The process begins by uploading a high-resolution photo – preferably full-body – and supplying a prompt that describes the desired figurine.

    To improve accuracy, prompts usually include:

    • A theme (e.g., office, workout, fantasy)
    • Instructions for how the figure should be posed
    • Details about clothing, mood, or accessories
    • A note to include these elements inside a moulded box layout

    Reiterating the intended theme helps ensure consistent results. While many focus on work-related personas, the style is flexible – some choose gym-themed versions, others opt for more humorous or fictional spins.
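
    For those who want to experiment outside the chat interface, a similar request can be expressed through OpenAI’s Images API. The sketch below is illustrative only – the model name, prompt wording, and size parameter are assumptions for demonstration, not details from the trend itself, which runs through ChatGPT’s built-in GPT-4o image tool.

```python
# Illustrative sketch only: the viral trend uses ChatGPT's built-in GPT-4o image
# tool, not this endpoint. Model name, prompt text, and size are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "A collectible action figure of a marketing professional, sealed in a "
    "retro Barbie-style blister pack. Box title: 'The Growth Hacker'. "
    "Accessories moulded into the tray: laptop, coffee mug, smartphone. "
    "Confident pose, 1990s toy-advert packaging with a bold slogan."
)

result = client.images.generate(
    model="dall-e-3",   # assumed model; any image-capable model could be substituted
    prompt=prompt,
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # link to the generated image
```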

    Behind the spike in image activity

    ChatGPT’s image generation tool launched widely in early 2025, and its use quickly surged. According to OpenAI CEO Sam Altman, the demand became so intense that GPU capacity was stretched thin, prompting a temporary cap on image generation for free users. Altman described the load as “biblical demand” in a social media post, noting that the feature had drawn more than 150 million active users in its first month. The tool’s ability to generate everything from cartoons to logos – and now custom action figures – has played a central role in how users explore visual identity through AI.

    Google introduces Ironwood TPU to power large-scale AI inference
    https://techwireasia.com/2025/04/google-introduces-ironwood-tpu-to-power-large-scale-ai-inference/ – Thu, 10 Apr 2025 09:59:56 +0000

  • Google’s Ironwood TPU is purpose-built for AI inference.
  • Designed to support high-demand applications like LLMs and MoE models.

    Google has introduced Ironwood, its seventh-generation Tensor Processing Unit (TPU), at Google Cloud Next 2025. The processor unit is specifically designed to support large-scale inference workloads.

    The chip marks a shift in focus from training to inference, reflecting broader changes in how AI models are used in production environments. TPUs have been a core part of Google’s infrastructure for several years, powering internal services and customer applications. Ironwood continues with enhancements for the next wave of AI applications – including large language models (LLMs), Mixture of Experts (MoEs), and other compute-intensive tools that require real-time responsiveness and scalability.

    Inference takes centre stage

    Ironwood is designed to support what Google calls the “age of inference,” in which AI systems interpret and generate insights actively, rather than just responding to inputs. The shift is reshaping how AI models are deployed, particularly in business use, where continuous, low-latency performance is important.

    Ironwood brings a number of architectural upgrades: each chip provides 4,614 teraflops of peak performance, supported by 192 GB of high-bandwidth memory and up to 7.2 terabytes per second of memory bandwidth – significantly more than previous TPUs.

    The expanded memory and throughput are intended to support models that require rapid access to large datasets, like those used in search, recommendation systems, and scientific computing.

    Ironwood also features an improved version of SparseCore, a component aimed at accelerating ultra-large embedding models that are often used in ranking and personalisation tasks.

    Scale and connectivity

    Ironwood’s scalability means it can be deployed in configurations from 256 to 9,216 chips in a single pod. At full scale, a pod delivers 42.5 exaflops of compute, making it more than 24 times more powerful than the El Capitan supercomputer, which tops out at 1.7 exaflops.
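
    As a quick sanity check, the pod-level figure follows directly from the per-chip specification quoted above; the short calculation below simply multiplies the published numbers and assumes nothing beyond them.

```python
# Back-of-the-envelope check of the Ironwood figures quoted above.
chip_tflops = 4_614        # peak teraflops per chip
chips_per_pod = 9_216      # largest pod configuration

pod_exaflops = chip_tflops * chips_per_pod / 1e6   # teraflops -> exaflops
print(f"Pod compute: {pod_exaflops:.1f} exaflops")                  # ~42.5 exaflops

el_capitan_exaflops = 1.7
print(f"vs El Capitan: {pod_exaflops / el_capitan_exaflops:.1f}x")  # ~25x, i.e. more than 24x
```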

    To support this level of distributed computing, Ironwood includes a new version of Google’s Inter-Chip Interconnect, which can communicate bidirectionally at 1.2 terabits per second. This helps reduce bottlenecks so data can move more efficiently across thousands of chips during training or inference. Ironwood is integrated with Pathways, Google’s distributed machine learning runtime developed by DeepMind. Pathways allows workloads to run on multiple pods, letting developers orchestrate tens or hundreds of thousands of chips for a single model or application.

    Efficiency and sustainability

    Power efficiency metrics show that Ironwood delivers twice the performance per watt of its predecessor, Trillium, and can sustain high output under continuous workloads. The TPU uses a liquid-based cooling system and, according to Google, is nearly 30 times more power-efficient than the first Cloud TPU introduced in 2018. The emphasis on energy efficiency reflects growing concerns about the environmental impact of large-scale AI infrastructure, particularly as demand continues to grow.

    Supporting real-world applications

    Ironwood’s architecture supports “thinking models,” which are used increasingly in real-time applications like chat interfaces and autonomous systems. The TPU’s capabilities also offer the potential for use in finance, logistics, and bio-informatics workloads, which require fast, large-scale computations. Google has integrated Ironwood into its Cloud AI Hypercomputer strategy, which combines custom hardware and tools like Vertex AI.

    What comes next

    Google plans to make Ironwood publicly available later this year to support workloads like Gemini 2.5 and AlphaFold, and the unit is expected to be used in research and production environments that demand large-scale distributed inference.

    DeepSeek’s new technology makes AI actually understand what you’re asking for
    https://techwireasia.com/2025/04/deepseeks-new-technology-makes-ai-actually-understand-what-youre-asking-for/ – Wed, 09 Apr 2025 08:26:44 +0000

  • DeepSeek’s AI feedback systems help make AI understand what humans want.
  • Method allows smaller AI models to perform as well as larger cousins.
  • Potential to reduce cost of training.

    Chinese AI company DeepSeek has developed a new approach to AI feedback systems that could transform how artificial intelligence learns from human preferences.

    Working with Tsinghua University researchers, DeepSeek’s innovation tackles one of the most persistent challenges in AI development: teaching machines to understand what humans genuinely want from them. The breakthrough is detailed in a research paper, “Inference-Time Scaling for Generalist Reward Modeling,” which introduces a technique for making AI responses more accurate and efficient – a rare win-win in a field where better performance typically demands more computing power.

    Teaching AI to understand human preferences

    At the heart of DeepSeek’s innovation is a new approach to what experts call “reward models” – essentially the feedback mechanisms that guide how AI systems learn. Think of reward models as digital teachers. When an AI system produces a response, the reward model provides feedback on how good that response was, helping the system improve over time. The problem has always been how to create reward models that accurately reflect human preferences across many different types of questions. DeepSeek’s solution combines two techniques:

    1. Generative Reward Modeling (GRM): Uses language to represent rewards, providing richer feedback than previous methods that relied on simple numerical scores.
    2. Self-Principled Critique Tuning (SPCT): Allows the AI to adaptively generate its guiding principles and critiques through online reinforcement learning.

    Zijun Liu, a researcher from Tsinghua University and DeepSeek-AI who co-authored the paper, explains that this combination allows “principles to be generated based on the input query and responses, adaptively aligning reward generation process.”

    Doing more with less

    What makes DeepSeek’s approach particularly valuable is “inference-time scaling.” Rather than requiring more computing power during the training phase, the method allows performance to improve at the point of inference – when the AI is actually being used.

    The researchers demonstrated that their method achieves better results with increased sampling during inference, potentially allowing smaller models to match the performance of much larger ones (see the sketch below). The efficiency breakthrough comes at an important moment in AI development, when the relentless push for larger models raises concerns about sustainability, supply chain viability, and accessibility.
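
    To make the idea concrete, the sketch below shows one generic way inference-time scaling of a generative reward model can work: sample several independent, text-based critiques and aggregate their scores. It is a conceptual illustration under stated assumptions, not DeepSeek’s implementation – the score format and aggregation rule are invented for the example.

```python
# Conceptual sketch of inference-time scaling for a generative reward model.
# NOT DeepSeek's implementation: `reward_model` stands in for any LLM that,
# given a query and a candidate response, writes principles, a critique, and
# a numeric score as text. The "SCORE:" format and averaging rule are invented
# for this example.
import re
import statistics
from typing import Callable

def score_response(
    reward_model: Callable[[str, str], str],  # returns text ending in "SCORE: <1-10>"
    query: str,
    response: str,
    num_samples: int = 8,                     # more samples = more inference-time compute
) -> float:
    """Sample several independent critiques and aggregate their scores."""
    scores = []
    for _ in range(num_samples):
        critique = reward_model(query, response)   # principles + critique + score, in text
        match = re.search(r"SCORE:\s*(\d+(?:\.\d+)?)", critique)
        if match:
            scores.append(float(match.group(1)))
    # Averaging (or voting) across samples is what lets extra inference-time
    # compute substitute for a larger reward model.
    return statistics.mean(scores) if scores else 0.0
```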

    What this means for the future of AI

    DeepSeek’s innovation in AI feedback systems could have far-reaching implications:

    • More accurate AI responses: Better reward models mean AI systems receive more precise feedback, improving outputs over time.
    • Adaptable performance: The ability to scale performance during inference allows AI systems to adjust to different computational constraints.
    • Broader capabilities: AI systems can perform better across many tasks by improving reward modelling for general domains.
    • Democratising AI development: If smaller models can achieve similar results to larger models via better inference methods, AI research could become more accessible to those with limited resources.

    DeepSeek’s rising influence

    The latest advance adds to DeepSeek’s growing reputation in the AI field. Although founded only in 2023 by entrepreneur Liang Wenfeng, the Hangzhou-based company has made an impact with the V3 foundation model and R1 reasoning model. The company recently upgraded its V3 model (DeepSeek-V3-0324), which it said offered “enhanced reasoning capabilities, optimised front-end web development and upgraded Chinese writing proficiency.”

    DeepSeek has also committed to open-sourcing its AI technology, opening five public code repositories in February that allow developers to review and contribute to its software development.

    According to the research paper, DeepSeek intends to make its GRM models open-source, although no specific timeline has been provided. Its decision could accelerate progress in the field by allowing broader experimentation with this type of advanced AI feedback system.

    Beyond bigger is better

    As AI continues to evolve rapidly, DeepSeek’s work demonstrates that innovations in how models learn can be just as important as increasing their size. By focusing on the quality and scalability of feedback, DeepSeek addresses one of the central challenges in creating AI that better understands and aligns with human preferences.

    This possible breakthrough in AI feedback systems suggests the future of artificial intelligence may depend not just on raw computing power but on more intelligent and efficient methods that better capture the nuances of human preferences.

    Viral Ghibli feature drives ChatGPT surge—What you should know before uploading photos
    https://techwireasia.com/2025/04/viral-ghibli-feature-drives-chatgpt-surge/ – Tue, 08 Apr 2025 13:04:25 +0000

  • Ghibli-style art pushes ChatGPT’s activity to new highs.
  • OpenAI says working to scale capacity for GPT-4o image tools.

    ChatGPT’s internet traffic has skyrocketed due to a spike in interest in AI-generated images styled after Studio Ghibli animations.

    OpenAI noticed a large increase in engagement following the release of its image-generation tool, which enables users to create artwork reminiscent of classic titles like Spirited Away and My Neighbor Totoro. Data from Similarweb shows that weekly active users passed 150 million for the first time this year.

    OpenAI CEO Sam Altman said on social media that the platform added one million users in a single hour – surpassing previous growth records. SensorTower reported that downloads and revenue through the ChatGPT app also increased. Weekly downloads rose by 11%, active users by 5%, and in-app purchase revenue by 6% compared to the previous month.

    The rapid increase in use put pressure on the platform’s infrastructure. Users reported slowdowns and brief outages, forcing Altman to caution that future features may face delays while OpenAI manages capacity.

    ChatGPT’s weekly average users hit record high (Source – Similarweb)

    Legal and copyright concerns with the ChatGPT x Ghibli trend

    The viral trend has prompted discussion around copyright. Some legal experts have raised questions about whether closely replicating distinctive animation styles could cross into infringement.

    “The legal landscape of AI-generated images mimicking Studio Ghibli’s distinctive style is an uncertain terrain. Copyright law has generally protected only specific expressions rather than artistic styles themselves,” said Evan Brown, a partner at law firm Neal & McDevitt.

    OpenAI did not respond to questions about how its models were trained or whether copyrighted materials influenced its image generator. Studio Ghibli has not issued a formal statement, but commentary from its co-founders has resurfaced.

    Hayao Miyazaki’s 2016 reaction to an early AI-generated image drew attention last week. In a widely circulated video, he described the technology as “an insult to life itself.” The full clip shows him responding specifically to a zombie-like AI render, which he called “extremely unpleasant.”

    In a recent interview, Studio Ghibli’s managing director Goro Miyazaki acknowledged the growing capabilities of AI. He claimed that AI-generated films could become a reality in the coming years, but questioned whether audiences would embrace them. He also acknowledged that while new technology could lead to new creative voices, it may be difficult to replicate the sensibilities of previous generations. “Nowadays, the world is full of opportunities to watch anything, anytime, anywhere,” he said, suggesting that younger artists may not share the same experiences that shaped Ghibli’s earlier works.

    Studio concerns and industry shifts

    Japan faces a shortage of trained animators, in part due to long hours and low wages in the industry. Goro noted that Gen Z may be less inclined to pursue the traditionally labour-intensive career path of hand-drawn animation.

    AI tools are emerging as a faster, lower-cost route to visual storytelling. Studio Ghibli’s legacy includes a number of films that blend fantastical themes with personal and historical reflections. Miyazaki’s latest work, The Boy and the Heron, earned an Academy Award and may be his final project. Goro has contributed his own directorial efforts, including Tales from Earthsea and From Up on Poppy Hill, and helped develop the Ghibli Museum and Ghibli Park.

    User privacy and data security

    As more users upload personal images to generate stylised portraits, privacy advocates are raising concerns about how that data is collected and used. “When you upload a photo to an AI art generator, you’re giving away your biometric data (your face). Some AI tools store that data, use it to train future models, or even sell it to third parties – none of which you may be fully aware of unless you read the fine print,” said Christoph C. Cemper, founder of AIPRM.

    OpenAI’s privacy policy confirms the platform collects user-provided and automatically generated data, including images. Unless users opt out or request data deletion, content may be retained and used to train future models.

    Cemper said that uploaded images could be misused. Personal data may appear in public datasets, like LAION-5B, which has been linked to the training of tools like Stable Diffusion and Google Imagen. One reported case involved a user finding private medical images in a public dataset. Cemper said that AI-generated content has already been used to produce fabricated documents and images, adding that deepfake risks are increasing. “There are too many real-world verification flows that rely on ‘real images’ as proof. That era is over,” one user wrote on social media.

    Navigating licensing and user rights between ChatGPT and Ghibli

    Cemper urged users to be aware of broad licensing terms buried in AI platform policies. Terms like “non-exclusive,” “royalty-free,” and “irrevocable license” can give platforms broad rights over uploaded content. The rights may extend even after the user stops using the service.

    Creating AI art in the style of well-known brands could also present legal challenges. Artistic styles like those of Studio Ghibli, Disney, and Pixar are closely associated with their original creators, and mimicking them may fall under derivative work protections.

    In late 2022, several artists filed lawsuits against AI firms, alleging their work was used without permission to train image generators. The ongoing legal challenges highlight the tension between creative freedom and intellectual property rights.

    Cemper added: “The rollout of ChatGPT’s 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks. But this unprecedented capability comes with a growing risk – the lines between creativity and copyright infringement are increasingly blurred, and the risk of unintentionally violating intellectual property laws continues to grow. While these trends may seem harmless, creators must be aware that what may appear as a fun experiment could easily cross into legal territory.

    “The rapid pace of AI development also raises significant concerns about privacy and data security. With more users engaging with AI tools, there’s a pressing need for clearer, more transparent privacy policies. Users should be empowered to make informed decisions about uploading their photos or personal data – especially when they may not realise how their information is being stored, shared, or used.”

    Ant Group develops AI models using Chinese chips to lower training costs
    https://techwireasia.com/2025/04/ant-group-develops-ai-models-using-chinese-chips-to-lower-training-costs/ – Wed, 02 Apr 2025 09:12:52 +0000

  • Ant Group uses Chinese chips and MoE models to cut AI training costs and reduce reliance on Nvidia.
  • Releases open-source AI models, claiming strong benchmark results with domestic hardware.

    Ant Group, the Chinese Alibaba affiliate, is exploring new ways to train LLMs and reduce dependency on advanced foreign semiconductors.

    According to people familiar with the matter, the company has been using domestically-made chips – including those supplied by Alibaba and Huawei – to support the development of cost-efficient AI models through a method known as Mixture of Experts (MoE).

    The results have reportedly been on par with models trained using Nvidia’s H800 GPUs, which are among the more powerful chips currently restricted from export to China. While Ant continues to use Nvidia hardware for certain AI tasks, sources said the company is shifting toward other options, like processors from AMD and Chinese alternatives, for its latest development work.

    The strategy reflects a broader trend among Chinese firms looking to adapt to ongoing export controls by optimising performance with locally available technology.

    The MoE approach has grown in popularity in the industry, particularly for its ability to scale AI models more efficiently. Rather than processing all data through a single large model, MoE structures divide tasks into smaller segments handled by different specialised “experts.” The division helps reduce the computing load and allows for better resource management.
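
    The routing idea can be illustrated in a few lines of code. The sketch below is a generic top-k gated Mixture of Experts layer written in PyTorch; the layer sizes, number of experts, and choice of k are arbitrary and are not taken from Ant’s models.

```python
# Minimal top-k Mixture-of-Experts routing sketch (generic; not Ant Group's design).
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim: int = 64, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)   # router scoring experts per token
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (tokens, dim)
        weights = self.gate(x).softmax(dim=-1)            # routing probabilities
        top_w, top_idx = weights.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token, which is what keeps the
        # compute cost below that of one equally large dense layer.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += top_w[mask, slot, None] * expert(x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(4, 64)).shape)   # torch.Size([4, 64])
```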

    Google and China-based startup DeepSeek have also applied the method, seeing similar gains in training speed and cost-efficiency.

    Ant’s latest research paper, published this month, outlines how the company has been working to lower training expenses by reducing its reliance on high-end GPUs. The paper claims the optimised method can cut the cost of training 1 trillion tokens from around 6.35 million yuan (approximately $880,000) on high-performance chips to 5.1 million yuan on less advanced, more readily available hardware. Tokens are the pieces of information AI models process during training to learn patterns, generate text, or complete tasks.
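
    Taking the paper’s reported figures at face value, the claimed saving works out to roughly a fifth of the training bill. The arithmetic below uses only the numbers quoted above; the dollar conversion is the rate implied by the article’s own figures.

```python
# Cost comparison using the figures reported for training 1 trillion tokens.
high_end_cost_yuan = 6_350_000   # with high-performance (restricted) hardware
domestic_cost_yuan = 5_100_000   # with less advanced, more readily available chips

saving_yuan = high_end_cost_yuan - domestic_cost_yuan
saving_pct = 100 * saving_yuan / high_end_cost_yuan
usd_per_yuan = 880_000 / high_end_cost_yuan     # exchange rate implied by the article

print(f"Saving: {saving_yuan:,} yuan (~${saving_yuan * usd_per_yuan:,.0f}), "
      f"about {saving_pct:.0f}% less per trillion tokens")
# -> Saving: 1,250,000 yuan (~$173,228), about 20% less per trillion tokens
```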

    According to the paper, Ant has developed two new models – Ling-Plus and Ling-Lite – which it now plans to offer in various industrial sectors, including finance and healthcare. The company recently acquired Haodf.com, an online medical services platform, as part of its broader push for AI-driven healthcare services. It also runs the AI life assistant app Zhixiaobao and a financial advisory platform known as Maxiaocai.

    Ling-Plus and Ling-Lite have been open-sourced, with the former consisting of 290 billion parameters and the latter 16.8 billion. Parameters in AI are tunable elements that influence a model’s performance and output. While these numbers are smaller than the parameter count anticipated for advanced models like OpenAI’s GPT-4.5 (around 1.8 trillion), Ant’s offerings are nonetheless regarded as sizeable by industry standards.

    For comparison, DeepSeek-R1, a competing model also developed in China, contains 671 billion parameters.

    In benchmark tests, Ant’s models were said to perform competitively. Ling-Lite outpaced a version of Meta’s Llama model in English-language understanding, while both Ling models outperformed DeepSeek’s offerings on Chinese-language evaluations. The claims, however, have not been independently verified.

    The paper also highlighted some technical challenges the organisation faced during model training. Even minor adjustments to the hardware or model architecture resulted in instability, including sharp increases in error rates. These issues illustrate the difficulty of maintaining model performance while shifting away from high-end GPUs that have become the standard in large-scale AI development.

    Ant’s research points to growing efforts among Chinese companies to achieve greater technological self-reliance. With US export restrictions limiting access to Nvidia’s most advanced chips, companies like Ant are seeking ways to build competitive AI tools using alternative resources.

    Although Nvidia’s H800 chip is not the most powerful in its lineup, it remains one of the most capable processors available to Chinese buyers. Ant’s ability to train models of comparable quality without such hardware signals a potential path forward for companies affected by trade controls.

    At the same time, the broader industry dynamics continue to evolve. Nvidia CEO Jensen Huang has said that increasing computational needs will drive demand for more powerful chips, even as efficiency-focused models gain traction. Despite alternative strategies like those explored by Ant, his view suggests that advanced GPU development will continue to be prioritised.

    Ant’s effort to reduce costs and rely on domestic chips could influence how other firms approach AI training – especially in markets facing similar constraints. As China accelerates its push toward AI independence, developments like these are likely to draw attention across both the tech and financial landscapes.

    AI race intensifies: China narrows the gap
    https://techwireasia.com/2025/03/ai-race-intensifies-china-narrows-the-gap/ – Thu, 27 Mar 2025 13:54:25 +0000

  • China is closing the gap with the US in AI technology advancements.
  • DeepSeek’s open-source models demonstrate improvements through algorithmic efficiency.

    The artificial intelligence race between China and the United States has entered a new phase as Chinese companies narrow the technology gap despite Western sanctions.

    According to Lee Kai-fu, CEO of Chinese startup 01.AI and former head of Google China, the gap in core technologies has shrunk from “six to nine months” to “probably three months,” with China actually pulling ahead in specific areas like infrastructure software engineering. The Chinese AI startup DeepSeek has become the epicentre of the intensifying technological rivalry.

    On January 20, 2025, while the world’s attention was fixed on Donald Trump’s inauguration, DeepSeek quietly launched its R1 model – a low-cost, open-source, high-performance large language model with capabilities reportedly rivalling or surpassing OpenAI’s ChatGPT-4, but at a fraction of the cost.

    “The fact that DeepSeek can figure out the chain of thought with a new way to do reinforcement learning is either catching up with the US, learning quickly, or maybe even more innovative now,” Lee told Reuters, referring to how DeepSeek models show users their reasoning process before delivering answers.

    Innovative efficiency: China’s response to chip sanctions

    DeepSeek’s achievement is particularly notable because it emerged despite US restrictions on advanced processor chip exports to China. Instead of being hampered by international limitations, Chinese companies have responded by optimising efficiency and compensating for lower-quality hardware with quantity.

    The adaptive approach was demonstrated further on March 25, 2025, when DeepSeek upgraded its V3 large language model. The new version, DeepSeek-V3-0324, features enhanced reasoning capabilities, optimised front-end web development, and upgraded Chinese writing proficiency. DeepSeek-V3-0324 significantly improved in several benchmark tests, especially in mathematics and coding. Häme University lecturer Kuittinen Petri highlighted the significance of these advancements, stating on social media:

    “DeepSeek is doing all this with just [roughly] 2% [of the] money resources of OpenAI.” He added that when he asked the new model to “create a great-looking responsive front page for an AI company,” it produced a mobile-friendly, properly functioning website after coding 958 lines.

    Global market implications

    The impact of China’s AI advances extends beyond technological achievement to financial markets. When DeepSeek launched its R1 model in January, America’s Nasdaq plunged 3.1%, while the S&P 500 fell 1.5%, demonstrating the wider economic significance of technological competition.

    The AI race presents opportunities and challenges for Asia and other regions. China’s low-cost, open-source model could help emerging economies develop AI innovation and entrepreneurship. It also pressures closed-source firms like OpenAI to reconsider their stance.

    Meanwhile, both superpowers are making massive investments in AI infrastructure. The Trump administration has unveiled the $500 billion Stargate Project, and China is projected to invest more than 10 trillion yuan (US$1.4 trillion) into technology by 2030.

    A double-edged sword for global technology

    The US-China tech rivalry risks deepening global divides, forcing nations to navigate growing complexities. Countries face difficult questions: How can they manage research partnerships with China without jeopardising collaboration with US institutions?

    How can nations reliant on Chinese materials and exports avoid Chinese technologies? South Korea, the world’s second-largest producer of semiconductors, grapples with this dilemma. In 2023, it became more dependent on China for five of the six key raw materials needed for chip-making. Major firms like Toyota, SK Hynix, Samsung, and LG Chem remain vulnerable due to Chinese supply chain dominance. And the climate implications of the AI race are significant.

    According to the Institute for Progress, maintaining AI leadership will require the United States to build five-gigawatt clusters in the next five years. By 2030, data centres could consume 10% of US electricity, more than double the 4% recorded in 2023.

    The path forward

    As the AI landscape evolves, DeepSeek’s arrival has challenged the assumption that US sanctions were constraining China’s AI sector. Washington’s semiconductor sanctions have proven to be what Lee Kai-fu calls a “double-edged sword” that created short-term challenges and forced Chinese firms to innovate under constraints.

    The rapid development of Chinese AI has reignited debates over US chip export controls. Critics argue that the present restrictions have accelerated China’s domestic innovation, as evidenced by DeepSeek’s development and improving capabilities.

    China is demonstrating remarkable resilience and innovation in the face of restrictions. As DeepSeek prepares to launch its R2 model, potentially ahead of schedule, the technology gap continues to narrow.

    Nvidia chip crackdown: Malaysia under US pressure to stop AI reaching China
    https://techwireasia.com/2025/03/nvidia-chip-crackdown-malaysia-under-us-pressure-to-stop-ai-reaching-china/ – Tue, 25 Mar 2025 15:29:21 +0000

  • Malaysia tightens semiconductor regulations amid concerns over Nvidia chip diversion to China.
  • $390 million fraud case in Singapore reveals vulnerabilities in the SE Asia supply chain.

    The Nvidia chip crackdown in Malaysia is intensifying, with the country reportedly facing mounting pressure from the United States to prevent advanced semiconductors from being diverted to China.

    Malaysia’s Trade Minister Zafrul Aziz has confirmed the Malaysian government plans to tighten regulations on semiconductor movements in response to specific US demands to monitor high-end Nvidia chips entering the country. “[The US is] asking us to make sure that we monitor every shipment that comes to Malaysia when it involves Nvidia chips,” Aziz told the Financial Times [paywall]. “They want us to ensure that servers end up in the data centres they’re supposed to and not suddenly move to another ship.”

    The minister has formed a special task force with Digital Minister, Gobind Singh Deo, to strengthen regulations around Malaysia’s rapidly-growing data centre industry, which heavily relies on chips from industry leader Nvidia. The move comes amid heightened concerns in the US that Malaysia may be serving as a transit point for advanced AI chips ultimately destined for China, in violation of US export controls.

    Singapore fraud case highlights regional concerns

    The Malaysian moves follow closely on the heels of a major fraud investigation in neighbouring Singapore, where authorities have charged three individuals – two Singaporeans and one Chinese national – over trades in hardware servers allegedly worth approximately $390 million.

    During a press briefing in early March, Singapore’s Home Affairs Minister K Shanmugam stated that the servers in question “may contain Nvidia chips.” The case involves Dell and Supermicro servers imported from the US and subsequently sold to a company in Malaysia. “The question is whether Malaysia was a final destination or from Malaysia it went somewhere else, which we do not know for certain at this point,” Shanmugam said, adding that the Singaporean government had requested assistance from both the US and Malaysian authorities in its investigation.

    Two of the individuals charged – Alan Wei Zhaolun, 48, and Aaron Woon Guo Jie, 40 – hold senior positions at Aperia Cloud Services as CEO and COO respectively. According to its website, Aperia claims to be “Nvidia’s first qualified Nvidia Cloud Partner in Southeast Asia,” with “priority access to the highest-performing [graphics processing units] available in the market.” The third individual, a 51-year-old Chinese national named Li Miang, is accused of claiming fraudulently that the end user of items he purchased was a Singaporean computer equipment sales company, Luxuriate Your Life.

    US export controls on Nvidia chip and regional impact

    The increased scrutiny stems from broader US efforts to obstruct China’s development of advanced technologies, particularly AI with potential military applications. During the final days of the Biden administration in late 2024, the US introduced a three-tier licensing system for AI chips designed for use in data centres, explicitly targeting Nvidia’s powerful graphics processing units (GPUs). The measures were designed to prevent Chinese companies from circumventing US restrictions by accessing restricted chips through third countries. The US is also investigating if Chinese AI firm DeepSeek (which made headlines recently about its impressive AI model performance) has been using banned US chips.

    Malaysia’s growing data centre industry

    Malaysia has emerged as one of the fastest-growing global data centre development markets, with much of this growth concentrated in the southern state of Johor. According to Zafrul, the state has attracted over $25 billion in investment from major technology companies, including Nvidia, Microsoft, and ByteDance (TikTok’s parent company) in the past 18 months alone. The country recently agreed to form a special economic zone with Singapore, further embedding it as a key player in regional technology infrastructure. However, with the growth comes an increased responsibility to ensure compliance with international export controls.

    Challenges in enforcement

    Minister Zafrul has acknowledged the significant challenges in tracking semiconductors through complex global supply chains. “The US is also putting much pressure on their own companies to be responsible for ensuring [chips] arrive at their rightful destination,” he said. “Everybody’s been asked to play a role throughout the supply chain.” He emphasised the difficulty of enforcement, stating plainly, “Enforcement might sound easy, but it’s not.”

    Nvidia’s global sales patterns underscore the challenge. The company generates nearly a quarter of its global sales through its Singapore office, drawing attention in the US to potential hardware movements to China. Nvidia has maintained that almost all of these sales constitute invoicing of international companies through Singapore, with very few chips passing through the city-state.

    Regional context and industry impact

    The focus on semiconductor flows in Southeast Asia represents one aspect of broader technology trade restrictions in place. In a parallel development, the European Union recently sanctioned Splendent Technologies, a Singaporean chip distributor, as part of measures targeting companies allegedly helping Russia’s defence sector.

    Balancing economic development with regulatory compliance presents a practical challenge for Malaysia. The country’s efforts to strengthen monitoring systems must address complex supply chains while supporting its growing position in the regional technology ecosystem. As Malaysia implements new oversight measures, technology companies operating in the region may face additional compliance requirements stemming from Kuala Lumpur. However, the precise impact on the broader semiconductor industry will depend on the specific implementation approach and enforcement capacity.

    Is the US losing its edge in AI?
    https://techwireasia.com/2025/03/is-the-us-losing-its-edge-in-ai/ – Mon, 24 Mar 2025 11:44:44 +0000

  • US AI firms warn America’s AI lead is shrinking in the face of DeepSeek’s R1 and Baidu’s Ernie X1.
  • OpenAI and Anthropic cite national security risks from Chinese AI models.

    Major US artificial intelligence companies, including OpenAI, Anthropic, and Google, have expressed concern over China’s increasing capabilities in AI development.

    In submissions to the US government, the companies have warned America’s edge in AI is dwindling, as Chinese models like DeepSeek R1 become more advanced. The submissions were filed in response to a government request for input on an AI Action Plan, and were made in March 2025.

    China’s growing AI presence

    DeepSeek R1, the AI model from China, has drawn attention from US developers. OpenAI described DeepSeek as evidence that the technological gap between the US and China is closing. The corporation described DeepSeek as “state-subsidised, state-controlled, and freely available,” and expressed concerns about China’s ability to influence global AI development.

    OpenAI compared DeepSeek to Chinese telecommunications company Huawei, warning that Chinese regulations could allow the government to compel DeepSeek to compromise sensitive systems or important infrastructure.

    OpenAI also expressed worries about data privacy, pointing out that DeepSeek’s requirements for data-sharing with the Chinese government could strengthen the state’s surveillance abilities.

    Anthropic’s submission focused on biosecurity, noting that DeepSeek R1 “complied with answering most biological weaponisation questions, even when formulated with a clearly malicious intent.”

    The willingness to generate possibly dangerous information contrasts with the safety protocols the submissions describe as implemented in US-developed models.

    Competition goes beyond DeepSeek. Baidu, China’s largest search engine, recently launched Ernie X1 and Ernie 4.5, two new AI models designed to compete with leading Western systems. Ernie X1, a reasoning model, is said to match DeepSeek R1’s performance at half the cost. Meanwhile, Ernie 4.5 is priced at 1% of OpenAI’s GPT-4.5 and has outperformed it on certain benchmarks, according to Baidu.

    Both OpenAI and Anthropic framed the competition as ideological, describing it as a contest between “democratic AI” developed under Western principles and “authoritarian AI” shaped by state control. However, the recent success of Baidu and DeepSeek suggests that cost and accessibility may have a greater impact on global adoption than ideology.

    US AI security and infrastructure concerns

    The US companies’ submissions also raised concerns about security and infrastructure challenges linked to AI development. OpenAI’s submission focused on the dangers of Chinese state influence over AI models like DeepSeek, while Anthropic’s emphasised biosecurity concerns tied to AI capabilities. The company disclosed that its own Claude 3.7 Sonnet model demonstrated improvements in biological weapon development, highlighting the dual-use nature of advanced AI systems. Anthropic also pointed to gaps in US export controls.

    While Nvidia’s H20 chips comply with US export restrictions, they still perform well in text generation – a key factor in reinforcement learning. Anthropic urged the government to strengthen these controls to prevent China from gaining an advantage.

    Google’s submission took a more balanced approach, acknowledging security risks while warning against over-regulation. The company argued that strict export controls could harm US economic competitiveness by creating barriers for domestic cloud providers and AI developers. Google suggested targeted controls to protect national security without disrupting business operations.

    All three businesses stressed the need for improved government oversight of AI security. Anthropic called for expanding the AI Safety Institute and strengthening the National Institute of Standards and Technology (NIST) to assess and mitigate AI-related security threats.

    Economic competitiveness and energy needs

    The submissions also focused on the economic factors shaping AI development. Anthropic stressed infrastructure challenges, warning that by 2027, training a single advanced AI model could require five gigawatts of power. The company proposed building 50 gigawatts of AI-dedicated power capacity by 2027 and streamlining the approval process for power transmission lines.

    Baidu’s recent announcements have highlighted the importance of cost-effective AI development. Ernie 4.5 and X1 are reportedly available for a fraction of the cost of comparable Western models, with much lower token processing fees than OpenAI’s current models. Such pricing strategies from Chinese developers could pressure US firms to reduce costs to remain competitive. OpenAI portrayed the competition as an ideological contest between Western and Chinese models, arguing a free-market strategy would result in more innovation and better outcomes for consumers.

    Google’s stance in the submissions was more concerned with practical policy recommendations. The company called for increased federal investment in AI research, improved access to government contracts, and streamlined export controls.

    Regulatory strategies

    A unified approach to AI regulation emerged as a consistent theme across all three submissions. OpenAI proposed a regulatory framework managed by the Department of Commerce, claiming that fragmented domestic state-level regulations could drive AI development overseas. The company supported a tiered export control framework that would allow broader access to US-developed AI in countries considered democratic while restricting access in authoritarian states. Anthropic called for stricter export controls on AI hardware and training data, warning that even marginal improvements in model performance could provide strategic advantages to China.

    Google’s submission focused on copyright and intellectual property rights. The company argued that its current ‘fair use’-based policies are essential for AI development, and warned that overly strict copyright rules could disadvantage US firms compared to Chinese competitors.

    All three companies emphasised the need for faster government adoption of AI. OpenAI suggested removing existing testing and procurement hurdles, echoing Anthropic’s advocacy of streamlined AI procurement by federal agencies. Google supported similar reforms, highlighting the importance of improved interoperability in government cloud infrastructure.

    Maintaining a competitive edge

    The submissions from OpenAI, Anthropic, and Google reflect a shared concern about maintaining US leadership in AI as competition from China intensifies. The rise of DeepSeek R1 and Baidu’s latest models points to a growing challenge not just in technological capability but also in cost and accessibility.

    As AI development accelerates, the balance between security, economic growth, and technological leadership will likely remain key policy challenges.

    Nvidia introduces new AI chips at GTC and joins AI infrastructure partnership
    https://techwireasia.com/2025/03/nvidia-introduces-new-ai-chips-at-gtc-and-joins-ai-infrastructure-partnership/ – Thu, 20 Mar 2025 09:46:02 +0000

  • Nvidia introduces new AI chips: Blackwell Ultra and Vera Rubin.
  • Joins AI Infrastructure Partnership with BlackRock, Microsoft, and xAI.

    Nvidia revealed new AI chips at its annual GTC conference on Tuesday. CEO Jensen Huang introduced two key products: the Blackwell Ultra chip family, which is expected to ship in the second half of this year, and Vera Rubin, a next-generation GPU set to launch in 2026.

    The release of OpenAI’s ChatGPT in late 2022 has significantly boosted Nvidia’s business, with sales increasing more than sixfold. Nvidia’s GPUs play an important role in the training of advanced AI models, giving the company a market advantage. Cloud providers like Microsoft, Google, and Amazon will be evaluating the new chips to see if they provide enough performance and efficiency gains to justify further investment in Nvidia technology. “The computational requirement, the scaling law of AI, is more resilient, and in fact, is hyper-accelerated,” Huang said.

    The new releases reflect Nvidia’s shift to an annual release cycle for chip families, moving away from its previous two-year pattern.

    Nvidia expands role in AI infrastructure partnership

    Nvidia’s announcements come as the company deepens its involvement in the AI Infrastructure Partnership (AIP), a collaborative effort to build next-generation AI data centres and energy solutions. On Wednesday, BlackRock and its subsidiary Global Infrastructure Partners (GIP), along with Microsoft and MGX, announced updates to the partnership. Nvidia and Elon Musk’s AI company, xAI, have joined the initiative, strengthening its position in AI infrastructure development.

    Nvidia will serve as a technical advisor to the AIP, contributing its expertise in AI computing and hardware. The partnership aims to improve AI capabilities and focus on energy-efficient data centre solutions.

    Since its launch in September 2024, AIP has attracted strong interest from investors and corporations. The initiative’s initial goal is to unlock $30 billion in capital, with a target to generate up to $100 billion in total investment potential through a mix of direct investment and debt financing.

    Early projects will focus on AI data centres in the United States and other OECD countries. GE Vernova and NextEra Energy are recent members of the partnership, bringing experience in energy infrastructure. GE Vernova will assist with supply chain planning and energy solutions to support AI data centre growth.

    Vera Rubin chip family

    Nvidia’s next-generation GPU system, Vera Rubin, is scheduled to ship in the second half of 2026, consisting of two main components: a custom CPU, Vera, and a new GPU called Rubin, named after astronomer Vera Rubin. Vera marks Nvidia’s first custom CPU design, built on an in-house core named Olympus. Previously, Nvidia used off-the-shelf Arm-based designs. The company claims Vera will deliver twice the performance of the Grace Blackwell CPU introduced last year.

    Rubin will support up to 288 GB of high-speed memory and deliver 50 petaflops of performance for AI inference – more than double the 20 petaflops handled by Blackwell chips. It will feature two GPUs working together as a single unit. Nvidia plans to follow up with a “Rubin Next” chip in 2027, combining four dies into a single chip to double Rubin’s processing speed.

    Blackwell Ultra chips

    Nvidia also introduced new versions of its Blackwell chips under the name Blackwell Ultra, created to increase token processing, allowing AI models to process data faster. Nvidia expects cloud providers to benefit from Blackwell Ultra’s improved performance, claiming that the chips could generate up to 50 times more revenue than the Hopper generation, which was introduced in 2023.

    Blackwell Ultra will be available in multiple configurations, including a version paired with an Nvidia Arm CPU (GB300), a standalone GPU version (B300), and a rack-based version with 72 Blackwell chips. Nvidia said the top four cloud companies have already deployed three times as many Blackwell chips as Hopper chips. Nvidia also referred to its history of increasing AI computing power with each generation, from Hopper in 2022 to Blackwell in 2024 and the anticipated Rubin in 2026.

    DeepSeek and AI reasoning

    Nvidia addressed investor concerns about China’s DeepSeek R1 model, which launched in January and reportedly required less processing power than comparable US-based models. Huang framed DeepSeek’s model as a positive development, noting that its ability to perform “reasoning” requires more computational power. Nvidia said its Blackwell Ultra chips are designed to handle reasoning models more effectively, improving inference performance and responsiveness.

    Broader AI strategy

    The GTC conference in San Jose, California, drew about 25,000 attendees and featured presentations from hundreds of companies that use Nvidia hardware for AI development. General Motors, for example, announced plans to use Nvidia’s platform for its next-generation vehicles.

    Nvidia also introduced new AI-focused personal computers, the DGX Spark and DGX Station desktop systems, designed to run large models like Llama and DeepSeek locally. The company also announced updates to its networking hardware, which ties GPUs together so they function as a unified system, and introduced a software package called Dynamo to help its chips serve AI models more efficiently.
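    The article does not describe the software stack shipping with these machines, but as a hedged illustration of the kind of workload they target, the sketch below shows one common way to run a Llama-family model locally with the Hugging Face transformers library. The model ID is only an example checkpoint and a GPU with sufficient memory is assumed; none of this is tied to the DGX products themselves.

    # Illustrative only: running a Llama-family model locally with the
    # Hugging Face transformers library. The model ID is an example and
    # requires access approval; a GPU with enough memory is assumed.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.1-8B-Instruct",  # example checkpoint, not DGX-specific
        device_map="auto",    # spread the model across available GPUs
        torch_dtype="auto",   # pick a precision suited to the hardware
    )

    output = generator("Summarise Nvidia's GTC chip announcements in one sentence.",
                       max_new_tokens=60)
    print(output[0]["generated_text"])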

    Nvidia plans to continue naming its chip families after scientists. The architecture following Rubin will be named after physicist Richard Feynman and is scheduled for release in 2028.

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

    Explore other upcoming enterprise technology events and webinars powered by TechForge here.

    The post Nvidia introduces new AI chips at GTC and joins AI infrastructure partnership appeared first on TechWire Asia.

    OpenAI and Google seek approval to train AI on content without permission https://techwireasia.com/2025/03/openai-and-google-seek-approval-to-train-ai-on-content-without-permission/ Tue, 18 Mar 2025 11:17:51 +0000
  • OpenAI and Google ask US government to allow AI to train on copyrighted material.
  • Urge adoption of copyright exemptions for ‘national security.’
  • OpenAI and Google are pushing the US government to allow AI models to train on copyrighted material, arguing that ‘fair use’ is critical for maintaining the country’s competitive edge in artificial intelligence.

    Both companies outlined their positions in proposals submitted this week in response to a request from the White House for input on President Donald Trump’s “AI Action Plan.”

    OpenAI’s national security argument

    According to OpenAI, allowing AI companies to use copyrighted material for training is a national security issue. The company warned that if US firms are restricted from accessing copyrighted data, China could outperform the US in AI development.

    OpenAI specifically highlighted the rise of DeepSeek as evidence that Chinese developers have unrestricted access to data, including copyrighted material. “If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over,” OpenAI stated in its filing.

    Google’s position on copyright and fair use

    Google supported OpenAI’s stance, arguing that copyright, privacy, and patent laws could create barriers to AI development if they restrict access to data.

    The company highlighted that fair use protections and text and data mining exceptions have been crucial for training AI models using publicly available content. “These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders,” Google said. Without these protections, developers could face “highly unpredictable, imbalanced, and lengthy negotiations” with data holders during model development and research.

    Google also set out a broader strategy to strengthen the US’s competitiveness in AI. The company called for increased investment in AI infrastructure, including measures to address rising energy demands, and for export controls that protect national security while still supporting AI exports to foreign markets.

    It emphasised the need for collaboration between federal and local governments to support AI research through partnerships with national labs and improved access to computational resources.

    Google recommended the US government take the lead in adopting AI, suggesting the implementation of multi-vendor AI solutions and streamlined procurement processes for emerging technologies. It warned that policy decisions will shape the outcome of the global AI race, urging the government to adopt a “pro-innovation” approach that protects national security.

    Anthropic’s focus on security and infrastructure

    Anthropic, the developer of the Claude chatbot, also submitted a proposal but did not weigh in on copyright. Instead, the company called on the US government to create a system for assessing national security risks tied to AI models and to strengthen export controls on AI chips. It also urged investment in energy infrastructure to support AI development, pointing out that the energy demands of AI models will continue to grow.

    Copyright lawsuits and industry concerns

    The proposals come as AI companies face increasing legal challenges over the use of copyrighted material. OpenAI is currently dealing with lawsuits from major news organisations, including The New York Times, and from authors such as Sarah Silverman and George R.R. Martin. The cases allege that OpenAI used their content without permission to train its models.

    Other AI firms, including Apple, Anthropic, and Nvidia, have also been accused of using copyrighted material. YouTube has claimed that these companies violated its terms of service by scraping subtitles from its platform to train AI models, a remarkable instance of the pot calling the kettle black.

    Industry pressure to clarify copyright rules

    AI developers worry that restrictive copyright policies could put US firms at a disadvantage while China and other nations continue to invest heavily in AI without comparable restrictions on the use of training material. Content creators and rightsholders disagree, arguing that AI businesses should not be able to use their work without fair compensation.

    The White House’s AI Action Plan is expected to set the foundation for future US policy on AI development and data access, with potential implications for both the technology sector and content industries.

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

    Explore other upcoming enterprise technology events and webinars powered by TechForge here.

    The post OpenAI and Google seek approval to train AI on content without permission appeared first on TechWire Asia.
