Nvidia Asia | TechWire Asia

Ant Group develops AI models using Chinese chips to lower training costs
TechWire Asia, Wed, 02 Apr 2025

  • Ant Group uses Chinese chips and MoE models to cut AI training costs and reduce reliance on Nvidia.
  • Releases open-source AI models, claiming strong benchmark results with domestic hardware.
    Ant Group, an affiliate of China’s Alibaba, is exploring new ways to train LLMs and reduce its dependency on advanced foreign semiconductors.

    According to people familiar with the matter, the company has been using domestically-made chips – including those supplied by Alibaba and Huawei – to support the development of cost-efficient AI models through a method known as Mixture of Experts (MoE).

    The results have reportedly been on par with models trained using Nvidia’s H800 GPUs, which are among the more powerful chips currently restricted from export to China. While Ant continues to use Nvidia hardware for certain AI tasks, sources said the company is shifting toward other options, like processors from AMD and Chinese alternatives, for its latest development work.

    The strategy reflects a broader trend among Chinese firms looking to adapt to ongoing export controls by optimising performance with locally available technology.

    The MoE approach has grown in popularity in the industry, particularly for its ability to scale AI models more efficiently. Rather than processing all data through a single large model, MoE structures divide tasks into smaller segments handled by different specialised “experts.” The division helps reduce the computing load and allows for better resource management.
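
    The split-and-route idea can be sketched in a few lines. The toy below is purely illustrative (it is not Ant’s actual architecture): a gating layer scores every expert for a given input, only the top-k experts run, and their outputs are blended using the softmaxed gate scores.

```python
import math
import random

random.seed(0)

def linear(w, x):
    """Tiny dense layer: returns w @ x for a list-of-rows weight matrix."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def moe_forward(x, gate_w, experts, k=2):
    """Route input x to the top-k experts only; the rest stay idle."""
    scores = linear(gate_w, x)                            # one gating score per expert
    top = sorted(range(len(scores)), key=scores.__getitem__)[-k:]
    z = [math.exp(scores[i]) for i in top]
    probs = [v / sum(z) for v in z]                       # softmax over the chosen experts
    out = [0.0] * len(x)
    for p, i in zip(probs, top):                          # weighted sum of k expert outputs
        for j, v in enumerate(linear(experts[i], x)):
            out[j] += p * v
    return out

dim, n_experts = 8, 4
gate_w = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_experts)]
experts = [[[random.gauss(0, 1) for _ in range(dim)] for _ in range(dim)]
           for _ in range(n_experts)]
x = [random.gauss(0, 1) for _ in range(dim)]
y = moe_forward(x, gate_w, experts, k=2)  # only 2 of the 4 experts actually compute
```

    Because only k experts execute per input, compute grows with k rather than with the total number of experts, which is the efficiency property described above.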

    Google and China-based startup DeepSeek have also applied the method, seeing similar gains in training speed and cost-efficiency.

    Ant’s latest research paper, published this month, outlines how the company has been working to lower training expenses by not relying on high-end GPUs. The paper claims the optimised method can reduce the cost of training on 1 trillion tokens from around 6.35 million yuan (approximately $880,000) using high-performance chips to 5.1 million yuan using less advanced, more readily available hardware. Tokens are the units of text an AI model processes during training to learn the patterns it needs to generate text and complete tasks.
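
    Taken at face value, the paper’s figures imply a saving of roughly a fifth. A quick sanity check of the arithmetic:

```python
# Figures as quoted in the article, for training on 1 trillion tokens.
high_end_cost = 6.35e6   # yuan, using high-performance chips
domestic_cost = 5.10e6   # yuan, using less advanced, more available hardware

saving = high_end_cost - domestic_cost        # 1.25 million yuan
saving_pct = 100 * saving / high_end_cost     # just under 20%
print(f"Saving: {saving:,.0f} yuan ({saving_pct:.1f}%)")
```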

    According to the paper, Ant has developed two new models – Ling-Plus and Ling-Lite – which it now plans to offer in various industrial sectors, including finance and healthcare. The company recently acquired Haodf.com, an online medical services platform, as part of its broader push for AI-driven healthcare services. It also runs the AI life assistant app Zhixiaobao and a financial advisory platform known as Maxiaocai.

    Ling-Plus and Ling-Lite have been open-sourced, with the former consisting of 290 billion parameters and the latter 16.8 billion. Parameters in AI are tunable elements that influence a model’s performance and output. While these numbers are smaller than the parameter count anticipated for advanced models like OpenAI’s GPT-4.5 (around 1.8 trillion), Ant’s offerings are nonetheless regarded as sizeable by industry standards.

    For comparison, DeepSeek-R1, a competing model also developed in China, contains 671 billion parameters.

    In benchmark tests, Ant’s models were said to perform competitively. Ling-Lite outpaced a version of Meta’s Llama model in English-language understanding, while both Ling models outperformed DeepSeek’s offerings on Chinese-language evaluations. The claims, however, have not been independently verified.

    The paper also highlighted some technical challenges the organisation faced during model training. Even minor adjustments to the hardware or model architecture resulted in instability, including sharp increases in error rates. These issues illustrate the difficulty of maintaining model performance while shifting away from high-end GPUs that have become the standard in large-scale AI development.

    Ant’s research reflects a growing effort among Chinese companies to achieve technological self-reliance. With US export controls limiting access to Nvidia’s most advanced chips, companies like Ant are seeking ways to build competitive AI tools using alternative resources.

    Although Nvidia’s H800 chip is not the most powerful in its lineup, it remains one of the most capable processors available to Chinese buyers. Ant’s ability to train models of comparable quality without such hardware signals a potential path forward for companies affected by trade controls.

    At the same time, the broader industry dynamics continue to evolve. Nvidia CEO Jensen Huang has said that increasing computational needs will drive demand for more powerful chips, even as efficiency-focused models gain traction. Despite alternative strategies like those explored by Ant, his view suggests that advanced GPU development will continue to be prioritised.

    Ant’s effort to reduce costs and rely on domestic chips could influence how other firms approach AI training – especially in markets facing similar constraints. As China accelerates its push toward AI independence, developments like these are likely to draw attention across both the tech and financial landscapes.

Nvidia introduces new AI chips at GTC and joins AI infrastructure partnership
TechWire Asia, Thu, 20 Mar 2025

  • Nvidia introduces new AI chips: Blackwell Ultra and Vera Rubin.
  • Joins AI Infrastructure Partnership with BlackRock, Microsoft, and xAI.
    Nvidia revealed new AI chips at its annual GTC conference on Tuesday. CEO Jensen Huang introduced two key products: the Blackwell Ultra chip family, which is expected to ship in the second half of this year, and Vera Rubin, a next-generation GPU set to launch in 2026.

    The release of OpenAI’s ChatGPT in late 2022 has significantly boosted Nvidia’s business, with sales increasing more than sixfold. Nvidia’s GPUs play an important role in the training of advanced AI models, giving the company a market advantage. Cloud providers like Microsoft, Google, and Amazon will be evaluating the new chips to see if they provide enough performance and efficiency gains to justify further investment in Nvidia technology. “The computational requirement, the scaling law of AI, is more resilient, and in fact, is hyper-accelerated,” Huang said.

    The new releases reflect Nvidia’s shift to an annual release cycle for chip families, moving away from its previous two-year pattern.

    Nvidia expands role in AI infrastructure partnership

    Nvidia’s announcements come as the company deepens its involvement in the AI Infrastructure Partnership (AIP), a collaborative effort to build next-generation AI data centres and energy solutions. On Wednesday, BlackRock and its subsidiary Global Infrastructure Partners (GIP), along with Microsoft and MGX, announced updates to the partnership. Nvidia and Elon Musk’s AI company, xAI, have joined the initiative, strengthening its position in AI infrastructure development.

    Nvidia will serve as a technical advisor to the AIP, contributing its expertise in AI computing and hardware. The partnership aims to improve AI capabilities and focus on energy-efficient data centre solutions.

    Since its launch in September 2024, AIP has attracted strong interest from investors and corporations. The initiative’s initial goal is to unlock $30 billion in capital, with a target to generate up to $100 billion in total investment potential through a mix of direct investment and debt financing.

    Early projects will focus on AI data centres in the United States and other OECD countries. GE Vernova and NextEra Energy are recent members of the partnership, bringing experience in energy infrastructure. GE Vernova will assist with supply chain planning and energy solutions to support AI data centre growth.

    Vera Rubin chip family

    Nvidia’s next-generation GPU system, Vera Rubin, is scheduled to ship in the second half of 2026, consisting of two main components: a custom CPU, Vera, and a new GPU called Rubin, named after astronomer Vera Rubin. Vera marks Nvidia’s first custom CPU design, built on an in-house core named Olympus. Previously, Nvidia used off-the-shelf Arm-based designs. The company claims Vera will deliver twice the performance of the Grace Blackwell CPU introduced last year.

    Rubin will support up to 288 GB of high-speed memory and deliver 50 petaflops of performance for AI inference – more than double the 20 petaflops handled by Blackwell chips. It will feature two GPUs working together as a single unit. Nvidia plans to follow up with a “Rubin Next” chip in 2027, combining four dies into a single chip to double Rubin’s processing speed.

    Blackwell Ultra chips

    Nvidia also introduced new versions of its Blackwell chips under the name Blackwell Ultra, created to increase token processing, allowing AI models to process data faster. Nvidia expects cloud providers to benefit from Blackwell Ultra’s improved performance, claiming that the chips could generate up to 50 times more revenue than the Hopper generation, which was introduced in 2023.

    Blackwell Ultra will be available in multiple configurations, including a version paired with an Nvidia Arm CPU (GB300), a standalone GPU version (B300), and a rack-based version with 72 Blackwell chips. Nvidia said the top four cloud companies have already deployed three times as many Blackwell chips as Hopper chips. Nvidia also referred to its history of increasing AI computing power with each generation, from Hopper in 2022 to Blackwell in 2024 and the anticipated Rubin in 2026.

    DeepSeek and AI reasoning

    Nvidia addressed investor concerns about China’s DeepSeek R1 model, which launched in January and reportedly required less processing power than comparable US-based models. Huang framed DeepSeek’s model as a positive development, noting that its ability to perform “reasoning” requires more computational power. Nvidia said its Blackwell Ultra chips are designed to handle reasoning models more effectively, improving inference performance and responsiveness.

    Broader AI strategy

    The GTC conference in San Jose, California, drew about 25,000 attendees and featured presentations from hundreds of companies that use Nvidia hardware for AI development. General Motors, for example, announced plans to use Nvidia’s platform for its next-generation vehicles.

    Nvidia also introduced new AI-focused laptops and desktops, including the DGX Spark and DGX Station, designed to run large models like Llama and DeepSeek. The company also announced updates to its networking hardware, which ties GPUs together to function as a unified system, and introduced a software package called Dynamo to optimise chip performance.

    Nvidia plans to continue naming its chip families after scientists. The architecture following Rubin will be named after physicist Richard Feynman and is scheduled for release in 2028.

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

    Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Indosat becomes first mobile operator in SEA to roll out AI-RAN with Nokia and NVIDIA
TechWire Asia, Mon, 10 Mar 2025

  • Indosat Ooredoo Hutchison deploys AI-RAN in Southeast Asia with Nokia and NVIDIA.
  • The AI-RAN solution combines Nokia’s 5G Cloud RAN and NVIDIA AI Aerial.
    At MWC 2025, Indosat Ooredoo Hutchison became the first mobile operator in Southeast Asia to deploy AI-RAN (Artificial Intelligence Radio Access Network), in collaboration with Nokia and NVIDIA. The deployment integrates Nokia’s 5G Cloud RAN solution with NVIDIA AI Aerial, creating what the companies term a unified computing infrastructure that hosts both AI and RAN workloads.

    AI and telecom convergence

    Indosat is the world’s third operator to deploy AI-RAN commercially. The recent initiative combines AI and wireless connectivity to improve network performance, efficiency, and service capabilities. As part of the partnership, the companies have signed an MOU to develop, test, and deploy AI-RAN solutions. The initial focus will be on AI inferencing workloads using NVIDIA AI Aerial, followed by the full integration of RAN workloads on the same platform.

    Indosat, Nokia, and NVIDIA will work with Indonesian universities and research institutes to advance AI-driven telecom applications, support academic research and student training, and drive innovation in network optimisation, spectral efficiency, and energy management.

    AI-RAN’s role in network transformation

    The AI-RAN infrastructure is expected to change Indosat’s network strategy, letting the company share infrastructure costs across multiple applications and introduce AI-powered services. The integration aims to improve spectral efficiency and reduce energy use, laying the groundwork for future 6G improvements.

    The initiative is in line with Indonesia’s national AI strategy, establishing Indosat as an enabler of AI services rather than just a telecom provider. The company has established a ‘Sovereign AI Factory’ in Indonesia, designed to support startups, enterprises, and government organisations in developing AI applications for healthcare, education, and agriculture. With NVIDIA AI Enterprise software and serverless APIs, Indosat plans to scale AI inferencing for Indonesia’s population of 277 million, optimising AI workloads across the network.

    Bottom row, from left: Ronnie Vasishta, SVP Telecoms at NVIDIA; Tommi Uitto, President of Mobile Networks at Nokia; and Vikram Sinha, President Director and CEO of Indosat.

    Expanding AI capabilities across applications

    The provided serverless API framework, created in collaboration with NVIDIA, will allow Indosat’s AI partners, including Hippocratic.ai, Personal.ai, GoTo, and Accenture, to deploy distributed inference engines on a large scale.

    Indosat President Director and CEO Vikram Sinha made clear the broader impact of AI integration in telecom, saying, “By embedding AI into our radio access network, we’re not just enhancing connectivity – we’re building a nationwide AI-powered ecosystem that will fuel innovation across industries. This aligns with our mission to connect and empower every Indonesian.”

    Deployment roadmap

    The AI-RAN rollout will follow a phased approach:

    • Early 2025: A 5G AI-RAN lab established in Surabaya to support development, testing, and validation.
    • Second half of 2025: Launch of a small-scale commercial pilot to test AI inferencing workloads running on the NVIDIA AI-RAN infrastructure.
    • 2026: Broader expansion of AI-RAN deployment.

    Industry perspectives

    Tommi Uitto, President of Mobile Networks at Nokia, stated: “When you combine AI with RAN, you create an engine for future innovation. With our 5G Cloud RAN platform, Indosat can transform its network into a multi-purpose computing grid that uses the synergies of AI-accelerated computing. With our AI-powered products, we help Indosat augment RAN capabilities for enhanced performance, operational efficiency, advanced automation and optimised energy efficiency.”

    Ronnie Vasishta, SVP Telecoms at NVIDIA said: “The combination of Indosat’s vision for a nationwide AI grid and NVIDIA AI expertise and full-stack software and hardware platform will catalyse AI adoption and innovation across Indonesia, creating a new playbook for telecom operators worldwide.”

Nvidia offers AI model for large-scale genetic analysis
TechWire Asia, Fri, 21 Feb 2025

  • Nvidia and research partners introduce Evo 2.
  • Evo 2 can identify disease-causing mutations and assist in synthetic genome design.
    Nvidia and its research partners have developed an artificial intelligence model designed to analyse genetic sequences at an unprecedented scale.

    Announced on February 19, the Evo 2 AI is built to read and design genetic code from different life forms. By finding patterns in DNA and RNA sequences, Evo 2 can process biological data in ways that would take researchers years of manual work.

    The model was designed to detect disease-causing mutations in human genes, and it can also generate synthetic genomes as complex as those found in bacteria. Scientists believe that the model’s ability to analyse data at scale could speed research in medicine, genetics, and bio-engineering.

    Expanding AI’s role in biology

    Evo 2 builds on its predecessor, Evo 1, which focused on single-cell genomes. The newer version has been trained on 9.3 trillion nucleotides sourced from more than 128,000 whole genomes. Nucleotides are the fundamental building blocks of genetic material.

    The model also examines metagenomic data, expanding its knowledge base beyond bacteria, archaea, and phages to include genetic information from humans, plants, and multi-cellular species.

    According to the researchers, such a model can recognise complex patterns in genetic sequences that would be difficult for traditional methods to detect. One of its primary applications is to identify dangerous mutations, like those associated with genetic illnesses.

    In early tests, Evo 2 correctly identified 90% of potentially harmful mutations in BRCA1, a breast cancer-linked gene. Scientists believe that this capability could support the development of targeted gene therapies, allowing treatments to target only specific cells while lowering the risk of unintended genetic modifications.
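
    For context, “identified 90% of potentially harmful mutations” is a recall figure: of the variants known to be pathogenic, the share the model flags. A toy illustration with invented variant IDs (not real Evo 2 data or real BRCA1 variants):

```python
# Invented variant IDs, purely to show what the 90% figure measures.
truly_harmful = {f"variant_{i:02d}" for i in range(10)}  # ground-truth pathogenic set
flagged = {f"variant_{i:02d}" for i in range(9)}         # the model misses one of ten

recall = len(flagged & truly_harmful) / len(truly_harmful)
print(f"recall = {recall:.0%}")  # 90%
```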

    Patrick Hsu, co-founder of the Arc Institute and senior researcher on Evo 2, described the model as a step toward generative biology, in which AI can “read, write, and think in the language of nucleotides.” He said Evo 2 has a wide understanding of genetic structures, making it useful for tasks like identifying disease-causing mutations, and designing artificial genetic sequences for scientific research.

    Computing power behind Evo 2

    Evo 2 was trained over several months on 2,000 Nvidia H100 GPUs using Nvidia DGX Cloud AI running on AWS infrastructure. The model can process genetic sequences of up to 1 million nucleotides at once, allowing it to analyse complex relationships across entire genomes. To support this degree of processing, researchers developed a new AI architecture called StripedHyena 2, designed to handle large-scale biological datasets efficiently.

    According to the team, the architecture enabled Evo 2 to process 30 times more data than Evo 1 and analyse eight times more nucleotides. Greg Brockman, co-founder of OpenAI, worked on the project during a sabbatical, helping to optimise the AI for large-scale biological research.

    Applications beyond medicine

    While Evo 2 has shown promise in medical research, scientists believe the model could also help progress in fields such as agriculture, environmental science, and synthetic biology. Some potential applications might include:

    • Developing crops that are more resilient to climate change, with improved resistance to drought, pests, and extreme weather conditions.
    • Engineering organisms capable of breaking down environmental pollutants, offering new approaches to reducing industrial and agricultural waste.
    • Studying genetic adaptations in different species to better understand evolutionary biology and biodiversity.

    Collaborative research effort

    The project used Nvidia’s computing capabilities with research from the Arc Institute, a nonprofit organisation dedicated to addressing long-term scientific concerns. The institute was established in 2021 with $650 million in funding, and works with Stanford University, UC Berkeley, and UC San Francisco to advance research in bio-engineering, medicine, and genetics.

    Evo 2 is now freely available to researchers worldwide through Nvidia’s BioNeMo research platform, which includes various AI-powered tools for analysing and modelling biological data. By making the model accessible, the research team hopes to speed innovation in genomics, synthetic biology, and other fields that rely on large-scale genetic analysis.


Nvidia CEO Jensen Huang’s optimism amid US tech policies
TechWire Asia, Mon, 25 Nov 2024

  • Nvidia CEO Jensen Huang expresses confidence in the resilience of global collaboration.
  • President-elect Trump’s potential tariffs on Taiwanese semiconductors could disrupt supply chains.
    Nvidia’s CEO Jensen Huang believes that global tech collaboration will remain strong, even as the US considers stricter policies on advanced computing products. During a recent visit to Hong Kong, Huang addressed concerns about the evolving political landscape, emphasising that science thrives on openness.

    “Open science and global collaboration—cooperation across math and science—have been around for a very long time. They are the foundation of social and scientific advancement,” he said. “That’s not going to change.”

    His outlook comes at a critical time. President-elect Donald Trump has reignited debates over tariffs and reshoring chip manufacturing, proposing measures that could significantly impact the global semiconductor sector.

    Trump has long supported tariffs as a tool for reshaping trade and manufacturing. During a recent appearance on Joe Rogan’s podcast, he criticised the CHIPS Act, a bipartisan effort signed in 2022 to boost US semiconductor production, calling it “so bad.” Instead of subsidies, Trump suggested imposing tariffs on semiconductors from Taiwan, arguing that this would push companies such as TSMC to build more facilities in the US.

    However, experts are sceptical. William Reinsch, a senior adviser at the Center for Strategic and International Studies, pointed out that TSMC is already building a fab in Arizona. “Tariffs aren’t going to make that move any faster. If anything, they might complicate the effort,” he said.

    Potential impacts on Nvidia and the tech industry

    If Trump moves forward with tariffs, companies like Nvidia and AMD, which rely heavily on Taiwanese chips, could face rising costs. While their expenses might be passed down to customers, the ripple effects would likely be felt across the tech industry.

    Huang was measured in his response to present uncertainties. “Whatever happens, we’ll balance compliance with laws and policies, continue to advance our technology, and support customers worldwide,” he said.

    During his visit, Huang also discussed broader issues, such as the energy demands of AI technologies. “If the world uses more energy to power the AI factories of the world, we’re a better world when that happens,” he said. He suggested sustainable solutions, such as placing AI supercomputers in remote areas powered by renewable energy.

    “My hope and dream is that, in the end, we’ll all see that using energy for intelligence is the best use of energy,” Huang said, underscoring AI’s potential to address global challenges—from carbon capture to designing better wind turbines.

    The stakes of reshoring chip manufacturing

    Reshoring chip production has become a national security priority for the US, especially after the pandemic exposed vulnerabilities in global supply chains. As of 2021, 44% of US logic chip imports came from Taiwan. A major disruption in Taiwanese manufacturing could cause logic chip prices to surge by as much as 59%, according to a 2023 US International Trade Commission report.

    The CHIPS Act aims to mitigate such risks, and companies have already started building new US facilities. Still, Trump’s proposed tariffs could introduce new challenges, potentially cutting profit margins for US-based companies like Nvidia.

    Reactions to Trump’s tariff proposals are divided. Dan Newman, CEO of Futurum Group, suggests the idea may be more political posturing than a concrete plan. “Trump is unlikely to move forward with anything that hurts the economy,” he said.

    However, Columbia Business School’s Lori Yue argued there’s a high chance Trump could impose tariffs. She added that deregulation related to AI under a second Trump administration might offset some of the financial strain on chip companies.

    A new era for AI and computing

    Huang closed his trip to Hong Kong on a hopeful note. Speaking at the Hong Kong University of Science and Technology after receiving an honorary doctorate in engineering, he told graduates they are entering a transformative era.

    “The age of AI has started,” Huang said. “The whole world is reset. You’re at the starting line with everybody else. An industry is being reinvented. You now have the instruments necessary to advance science in so many different fields.”

    While uncertainties about US technology policies remain, Huang’s message was clear: innovation will continue, driven by a new generation ready to redefine what is possible.


How Japan and NVIDIA are building an AI powerhouse
TechWire Asia, Wed, 13 Nov 2024

  • Japan, NVIDIA, and SoftBank push AI with language models and digital twins.
  • NVIDIA and cloud leaders drive Japan’s leadership in telecom, robotics, and automotive.
    Becoming a global leader in AI is a top priority for Japan, and it’s all starting with AI-driven language models. Japanese tech experts are working on advanced AI models that understand the country’s unique cultural and linguistic nuances. This is opening up new possibilities for developers in industries that demand high precision, like healthcare, finance, and manufacturing.

    And it’s not just a solo mission: major consulting firms like Accenture, Deloitte, EY Japan, FPT, Kyndryl, and TCS Japan are teaming up with NVIDIA to establish innovation hubs throughout Japan. The centres aim to help companies fully embrace AI, both in enterprise use and physical applications. They’re using NVIDIA’s AI software, language models tailored to the Japanese language, and NVIDIA’s NIM microservices to build customised AI tools that suit specific industry needs. Essentially, it’s about creating a digital workforce that boosts productivity across the board.

    One of the most exciting tools here is NVIDIA’s Omniverse platform, which lets Japanese companies make digital twins (virtual copies of real-world assets) and test complex AI systems before rolling them out in the real world. For industries like manufacturing and robotics, this tech is a huge advantage, helping them refine processes and make smart tweaks without taking on real-world risks. With AI integrated into these strongholds of Japanese industry, the country’s well on its way to addressing some of its toughest challenges.

Japan is facing a shrinking workforce due to an ageing population, a serious issue for the country. But Japan’s strength in robotics and automation puts it in a good position to tackle the problem with AI-powered solutions. Japan’s government recently released a report underscoring its ambition to be “the world’s most AI-friendly country,” clearly signalling AI’s role in its future.

And the numbers are backing up this commitment. IDC reports that Japan’s AI market reached US$5.9 billion in value this year, marking solid 31.2% year-on-year growth. In Tokyo and Kansai, newly opened consulting hubs are primed to give companies hands-on experience with NVIDIA’s cutting-edge AI tech and guidance to help accelerate AI adoption. For Japan, this isn’t just about tech—it’s about solving real-world social challenges and driving long-term economic growth.

    Building Japan’s AI backbone with cloud leaders

The country’s top cloud providers—SoftBank Corp., GMO Internet Group, KDDI, Highreso, Rutilea, and SAKURA Internet—are all-in on building AI infrastructure with NVIDIA’s support, focusing on industries like robotics, automotive, healthcare, and telecom. With backing from Japan’s Ministry of Economy, Trade, and Industry (METI), the cloud providers are setting up AI data centres across the country to support both local and national development, equipped with NVIDIA’s high-performance accelerated computing.

    According to NVIDIA CEO Jensen Huang, Japan has huge potential to gain from the AI era. “Japan’s companies stand to benefit tremendously from the new industrial revolution powered by AI,” Huang said. “Employees will see their productivity soar with AI agents taking over repetitive tasks. The factories of tomorrow will operate in dual mode—AI factories generating software intelligence alongside traditional factories.”

    One standout partnership is between NVIDIA and SoftBank Corp., one that will have a big impact on fast-tracking Japan’s AI goals. During his keynote at the NVIDIA AI Summit Japan, the NVIDIA CEO announced that SoftBank is building Japan’s most powerful AI supercomputer using the NVIDIA Blackwell platform, and has plans to use the NVIDIA Grace Blackwell platform for its next big project. SoftBank isn’t just aiming for tech leadership in Japan—it’s also setting its sights on new revenue opportunities in telecommunications globally.

    SoftBank’s already tested out the world’s first combined AI and 5G telecom network using NVIDIA’s AI Aerial platform, a breakthrough that could open up new revenue streams for telecom providers worldwide. In addition, SoftBank is working on an AI marketplace to meet the growing demand for secure, local AI computing in Japan. The marketplace could turn SoftBank into a central hub for AI services, supporting businesses, consumers, and enterprises across the country.

    “Japan has a long history of pioneering technological innovations with global impact,” Huang said. “With SoftBank’s significant investment in NVIDIA’s AI, Omniverse, and 5G AI-RAN platforms, Japan is taking a leap into the AI industrial revolution. This shift is expected to benefit sectors like telecommunications, transportation, robotics, and healthcare in ways that will ultimately advance society.”

    SoftBank’s CEO Junichi Miyakawa echoed the optimism, saying, “Through our close partnership with NVIDIA, SoftBank is leading the AI-driven transformation of society. With our powerful AI infrastructure and our new AI-RAN solution ‘AITRAS,’ which reinvents 5G networks for AI, we’re accelerating innovation across Japan and beyond.”

    Japan’s vision for an AI-driven future

    SoftBank is about to receive the world’s first NVIDIA DGX B200 systems for its new DGX SuperPOD supercomputer, which will not only support SoftBank’s own AI projects but also those of universities, research centres, and businesses, right across Japan. Expected to be Japan’s most powerful AI machine yet, it’s perfect for developing large language models and managing high-performance compute tasks.

    SoftBank has even bigger plans: it’s working on a second NVIDIA-accelerated supercomputer that’s optimised for extremely intensive workloads. This system, based on the NVIDIA Grace Blackwell platform, combines NVIDIA Blackwell GPUs with energy-efficient Arm-based NVIDIA Grace CPUs to take AI in Japan to the next level, solidifying the country’s position as a leader in AI.

     

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.


    The post How Japan and NVIDIA are building an AI powerhouse appeared first on TechWire Asia.

    ]]>
    Nvidia GTC 2024 showcases the ‘world’s most powerful chip’ for AI and more! https://techwireasia.com/2024/03/nvidia-introduces-a-new-ai-chip-and-more/ Wed, 20 Mar 2024 01:00:17 +0000 https://techwireasia.com/?p=238488 The Nvidia GTC 2024 reveals the ‘world’s most powerful chip’ for AI, set to transform AI model accessibility and efficiency. Nvidia also announced significant partnerships and software tools. Nvidia also deepened its foray into the automotive industry, partnering with leading Chinese electric vehicle makers. This year’s Nvidia GTC is packed with noteworthy revelations, including the […]

    The post Nvidia GTC 2024 showcases the ‘world’s most powerful chip’ for AI and more! appeared first on TechWire Asia.

    ]]>
  • The Nvidia GTC 2024 reveals the ‘world’s most powerful chip’ for AI, set to transform AI model accessibility and efficiency.
  • Nvidia also announced significant partnerships and software tools.
  • Nvidia also deepened its foray into the automotive industry, partnering with leading Chinese electric vehicle makers.
  • This year’s Nvidia GTC is packed with noteworthy revelations, including the introduction of the Blackwell B200 GPU. Branded as the ‘world’s most powerful chip’ for AI, it is designed to make AI models with trillions of parameters accessible to a wider audience.

    Jensen Huang, the CEO of Nvidia, inaugurated the company’s annual developer conference with a series of strategic announcements aimed at solidifying Nvidia’s supremacy in the AI sector.

    Revolutionizing AI with the new Nvidia chip

Nvidia introduced the B200 GPU, boasting an impressive 20 petaflops of FP4 computing power from its 208 billion transistors. Additionally, Nvidia unveiled the GB200, which pairs two B200 GPUs with a single Grace CPU, claiming the combination can improve LLM inference workload performance by up to 30 times while significantly boosting efficiency. This advancement is said to slash costs and energy usage by as much as 25 times compared to the H100 model.

    Previously, training a model with 1.8 trillion parameters required 8,000 Hopper GPUs and 15 megawatts of power. Now, Nvidia asserts that only 2,000 Blackwell GPUs are needed to achieve the same feat, reducing power consumption to just four megawatts.
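Taken at face value, Nvidia’s reported figures imply roughly a fourfold reduction in GPU count and nearly a fourfold reduction in power for the same training job. A quick back-of-envelope check (illustrative only; all inputs are Nvidia’s own claims, not independent measurements):

```python
# Nvidia's reported figures for training a 1.8-trillion-parameter model.
hopper_gpus, hopper_mw = 8_000, 15.0        # Hopper generation: GPUs, megawatts
blackwell_gpus, blackwell_mw = 2_000, 4.0   # Blackwell generation: GPUs, megawatts

gpu_reduction = hopper_gpus / blackwell_gpus    # 4.0x fewer GPUs
power_reduction = hopper_mw / blackwell_mw      # 3.75x less total power

# Per-GPU draw actually rises slightly; the gain is at the whole-job level,
# because each Blackwell GPU delivers far more compute per watt.
kw_per_hopper = hopper_mw * 1000 / hopper_gpus        # 1.875 kW per GPU
kw_per_blackwell = blackwell_mw * 1000 / blackwell_gpus  # 2.0 kW per GPU
```

The interesting takeaway from the sketch is that the headline power saving comes from needing far fewer chips, not from each chip drawing less.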

    The introduction of the GB200 “superchip” alongside the Blackwell B200 GPU marks a significant milestone. Nvidia reports that, in tests using the GPT-3 LLM with 175 billion parameters, the GB200 delivered seven times the performance and quadrupled the training speed of the H100.

A notable upgrade The Verge highlights is the second-generation transformer engine, which enhances compute, bandwidth, and model capacity by using four bits per neuron, half the eight bits used previously. The next-gen NVLink switch is another groundbreaking feature, enabling communication among up to 576 GPUs and providing 1.8 terabytes per second of bidirectional bandwidth. This innovation necessitated a new network switch chip, boasting 50 billion transistors and 3.6 teraflops of FP8 compute capability.
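Halving the bits per weight roughly halves the memory needed just to hold a model’s parameters. As an illustrative sketch using the 1.8-trillion-parameter model cited above (ignoring activations, KV caches, and optimiser state, which add substantially more in practice):

```python
# Rough parameter-storage estimate at different numeric precisions.
# Illustrative only: real deployments need far more memory than raw weights.
def param_terabytes(n_params: int, bits_per_param: int) -> float:
    """Bytes needed for the raw weights, expressed in terabytes (10^12 bytes)."""
    return n_params * bits_per_param / 8 / 1e12

N = 1_800_000_000_000            # 1.8 trillion parameters
fp8_tb = param_terabytes(N, 8)   # 1.8 TB of raw weights at 8 bits each
fp4_tb = param_terabytes(N, 4)   # 0.9 TB at 4 bits: same model, half the space
```

Halving the footprint also halves the memory bandwidth consumed moving weights around, which is why the 4-bit format translates into higher effective compute and model capacity, not just smaller files.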

    Bridging the gap: Nvidia’s new suite of software tools

    Nvidia has also introduced a suite of software tools designed to streamline the sale and deployment of AI models for businesses, catering to a clientele that includes the globe’s tech behemoths.

    These developments underscore Nvidia’s ambition to broaden its influence in the AI inference market, a segment where its chips are not yet predominant, as noted by Joel Hellermark, CEO of Sana.

    Nvidia is renowned for its foundational role in training AI models, such as OpenAI’s GPT-4, a process that requires digesting vast data quantities, predominantly undertaken by AI-centric and large tech companies.

    However, as businesses of various sizes strive to integrate these foundational models into their operations, Nvidia’s newly released tools aim to simplify the adaptation and execution of diverse AI models on Nvidia hardware.

    According to Ben Metcalfe, a venture capitalist and founder of Monochrome Capital, Nvidia’s approach is akin to offering “ready-made meals” instead of gathering ingredients from scratch. This strategy is particularly advantageous for companies that may lack the technical prowess of giants like Google or Uber, enabling them to quickly deploy sophisticated systems.

    For instance, ServiceNow used Nvidia’s toolkit to develop a “copilot” for addressing corporate IT challenges, demonstrating the practical applications of Nvidia’s innovations.

    Noteworthy is Nvidia’s collaboration with major tech entities such as Microsoft, Google, and Amazon, which will incorporate Nvidia’s tools into their cloud services. However, prominent AI model providers like OpenAI and Anthropic are conspicuously absent from Nvidia’s partnership roster.

Nvidia’s toolkit could significantly bolster its revenue: it is sold as part of a software suite priced at US$4,500 annually per Nvidia chip in private data centers, or US$1 per hour in cloud data centers.
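At those list prices, the flat annual license only beats the hourly cloud rate at sustained usage. A simple break-even sketch (assuming the reported US$4,500/year and US$1/hour figures apply per chip, and ignoring hardware and operating costs entirely):

```python
# Break-even between the per-chip annual license and the hourly cloud price.
# Inputs are the list prices reported above; everything else is ignored.
ANNUAL_LICENSE_USD = 4_500.0   # per Nvidia chip per year, private data center
CLOUD_USD_PER_HOUR = 1.0       # per hour, cloud data center
HOURS_PER_YEAR = 365 * 24      # 8,760

breakeven_hours = ANNUAL_LICENSE_USD / CLOUD_USD_PER_HOUR  # 4,500 hours
utilization = breakeven_hours / HOURS_PER_YEAR             # ~0.51
# Above roughly 51% utilization across the year, the flat license is cheaper.
```

In other words, the pricing nudges heavy, always-on enterprise users toward the annual license while letting occasional users pay by the hour.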

    Reuters suggests that the announcements made at GTC 2024 are pivotal in determining whether Nvidia can sustain its commanding 80% share in the AI chip marketplace.

    These developments reflect Nvidia’s evolution from a brand favored by gaming enthusiasts to a tech titan on par with Microsoft, boasting a staggering sales increase to over US$60 billion in its latest fiscal year.

    While the B200 chip promises a thirtyfold increase in efficiency for tasks like chatbot responses, Huang remained tight-lipped about its performance in extensive data training and did not disclose pricing details.

    Despite the surge in Nvidia’s stock by 240% over the past year, Huang’s announcements did not ignite further enthusiasm in the market, with a slight decline in Nvidia’s stock following the presentation.

    Tom Plumb, CEO and portfolio manager at Plumb Funds, a significant investor in Nvidia, remarked that the Blackwell chip’s unveiling was anticipated but reaffirmed Nvidia’s leading edge in graphics processing technology.

    Nvidia has revealed that key clients, including Amazon, Google, Microsoft, OpenAI, and Oracle, are expected to incorporate the new chip into their cloud services and AI solutions.

    The company is transitioning from selling individual chips to offering complete systems, with its latest model housing 72 AI chips and 36 central processors, exemplifying Nvidia’s comprehensive approach to AI technology deployment.

Analysts predict a slight dip in Nvidia’s market share in 2024 as competition intensifies and major customers develop their own chips, posing challenges to Nvidia’s dominance, especially among budget-conscious enterprise clients.

    Despite these challenges, Nvidia’s extensive software offerings, particularly the new microservices, are poised to enhance operational efficiency across various applications, reinforcing its position in the tech industry.

Moreover, Nvidia is expanding into software for simulating the physical world with 3D models, announcing collaborations with leading design software companies. This move, together with Nvidia’s capability to stream 3D worlds to Apple’s Vision Pro headset, marks a significant leap forward in immersive technology.

    Nvidia’s automotive and industrial ventures

    Nvidia also unveiled an innovative line of chips tailored for automotive use, introducing capabilities that enable chatbots to operate within vehicles. The tech giant has further solidified its partnerships with Chinese car manufacturers, announcing that electric vehicle leaders BYD and Xpeng will incorporate its latest chip technology.

    Last year, BYD surpassed Tesla, becoming the top electric vehicle producer worldwide. The company plans to adopt Nvidia’s cutting-edge Drive Thor chips, which promise to enhance autonomous driving capabilities and digital functionalities. Nvidia also highlighted that BYD intends to leverage its technology to optimize manufacturing processes and supply chain efficiency.

    Additionally, this collaboration will facilitate the creation of virtual showrooms, according to Danny Shapiro, Nvidia’s Vice President for Automotive, during a conference call. Shapiro indicated that BYD vehicles would integrate Drive Thor chips starting next year.

    The announcement was part of a broader revelation of partnerships during Nvidia’s GTC developer conference in San Jose, California. Notably, Chinese automakers, including BYD, Xpeng, GAC Aion’s Hyper brand, and autonomous truck developers, have declared their expanded cooperation with Nvidia. Other Chinese brands like Zeekr, a subsidiary of Geely, and Li Auto have also committed to using the Drive Thor technology.

Nvidia extends partnership with BYD (Source – X)

    These partnerships reflect a strategic move by Chinese auto brands to leverage advanced technology, offsetting their relatively lower global brand recognition. BYD and its competitors are keen on increasing their market presence in Europe, Southeast Asia, and other regions outside China, positioning themselves against Tesla and other established Western brands in their domestic market.

    Shapiro emphasized the considerable number of Chinese automakers and highlighted the supportive regulatory environment and incentives fostering innovation and developing advanced automated driving technologies.

    Nvidia also announced several other key partnerships in the automotive and industrial sectors, including a collaboration with U.S. software firm Cerence to adapt large language model AI systems for automotive computing needs. Chinese computer manufacturer Lenovo is also working with Nvidia to deploy large language model technologies.

    Another development is Soundhound’s utilization of Nvidia’s technology to create a voice command system for vehicles, enabling users to access information from a virtual owner’s manual through voice commands, marking a step forward in enhancing user interaction with vehicle technology.

    The post Nvidia GTC 2024 showcases the ‘world’s most powerful chip’ for AI and more! appeared first on TechWire Asia.

    ]]>
    How Nvidia navigates through legal complexities and market cap dominance https://techwireasia.com/2024/03/how-nvidia-navigates-through-legal-complexities-and-market-cap-dominance/ Wed, 13 Mar 2024 01:40:44 +0000 https://techwireasia.com/?p=238454 Nvidia hits US$2 trillion market cap, outshining legal issues with AI focus. Legal hurdles can’t slow Nvidia; market value and AI dominance climb. Nvidia’s AI strategy drives market cap past rivals, despite legal fights. Legal challenges, and market dynamics presents a fascinating narrative that shapes the fortunes of leading corporations. Among these, Nvidia, a titan […]

    The post How Nvidia navigates through legal complexities and market cap dominance appeared first on TechWire Asia.

    ]]>
  • Nvidia hits US$2 trillion market cap, outshining legal issues with AI focus.
  • Legal hurdles can’t slow Nvidia; market value and AI dominance climb.
  • Nvidia’s AI strategy drives market cap past rivals, despite legal fights.
  • The interplay of legal challenges and market dynamics presents a fascinating narrative that shapes the fortunes of leading corporations. Among these, Nvidia, a titan in the field of AI, has recently been at the center of a noteworthy legal dispute while simultaneously experiencing an unprecedented surge in its market valuation.

    This complex scenario provides a rich case study for examining the broader implications for the tech industry, market competition, and the legal frameworks that govern intellectual property rights.

    Nvidia found itself embroiled in controversy when three authors—Brian Keene, Abdi Nazemian, and Stewart O’Nan—levied accusations against the company for allegedly using their copyrighted works without permission. These works were purportedly incorporated into a substantial dataset of approximately 196,640 books to advance the capabilities of Nvidia’s NeMo AI platform, a sophisticated system aimed at mimicking human language.

    The fallout from these allegations led to the dataset’s removal in October, underscoring the legal complexities surrounding copyright infringement in the digital age.

    A meteoric rise in market valuation

    Despite facing this legal hurdle, Nvidia has witnessed a surge in its market valuation, underscoring the intense investor interest in AI technologies. This rise is indicative of the broader trends in the semiconductor industry, where demand for AI chips, particularly those powering popular applications such as ChatGPT, has skyrocketed.

    Within a span of nine months, Nvidia’s market value soared from US$1 trillion to over US$2 trillion, surpassing industry giants like Amazon.com, Google’s parent company Alphabet, and Saudi Aramco in the process. This meteoric rise has positioned Nvidia as a formidable contender in the race to become the world’s second-most valuable company, trailing closely behind Apple and Microsoft.

    As reported by Reuters, Nvidia’s current market capitalization, standing at approximately US$2.38 trillion, exemplifies the fierce competition at the apex of the global corporate sector. This competitive landscape is not only defined by market valuations but also by the continuous drive for innovation and the development of high-quality products that resonate with consumers and enterprises alike. Apple’s journey to becoming the world’s most valuable company in 2011, bolstered by its array of successful products and services, highlights the critical role of brand loyalty and product innovation in achieving market dominance.

Nvidia’s mastery over legal hurdles and market cap peaks (Source – X)

    On the other hand, Microsoft’s ascendance in 2024 to claim the title of the most valuable company globally emphasizes the significance of strategic investments in technology, particularly AI. With over 70% of computers worldwide running on Windows, according to Statcounter, Microsoft’s influence extends beyond its operating system. The company’s diversified portfolio, including the Office Suite, Azure cloud platform, Xbox consoles, and Surface devices, alongside a substantial investment in OpenAI, demonstrates its commitment to shaping the future of technology.

    Nvidia’s stronghold over the high-end AI chip market, commanding 80% of the sector, combined with its significant stock performance, has propelled Wall Street to new heights this year. This success story is a testament to the investor enthusiasm for AI technologies, positioning Nvidia and Meta Platforms as leaders in a market increasingly focused on digital innovation.

    Industry experts, such as Richard Meckler of Cherry Lane Investments, attribute Nvidia’s robust market performance to the solid fundamentals underpinning its business model and the speculative support from investors. This blend of strong business practices and market speculation has facilitated Nvidia’s steady climb in stock value throughout 2024, even as it faces legal challenges and stiff competition from tech giants like Apple and Microsoft.

    Apple’s recent challenges with iPhone sales and the shift in market capitalization rankings underscore the dynamic nature of the tech industry, where companies continually vie for leadership positions. Meanwhile, Nvidia’s competitive forward price-to-earnings ratio and the insights from David Wagner of Aptus Capital Advisors suggest that Nvidia represents an attractively priced stock within the AI narrative, with the potential for significant growth in the coming years.

    Facing the peaks: The Nvidia market cap challenges

    However, as Nvidia’s stock approaches what some analysts believe to be its peak, the challenges of sustaining rapid growth in the face of increasing market capitalization become apparent. The speculative nature of stock valuations, coupled with the potential for innovation and market expansion, presents a nuanced picture of Nvidia’s future prospects. Should Nvidia continue to surpass analyst expectations, it could maintain or even enhance its market position, reflecting the intricate balance between innovation, legal challenges, and market dynamics.

    Nvidia’s recent experiences offer valuable insights into the challenges and opportunities faced by leading tech companies today. As legal disputes unfold and market valuations fluctuate, the broader implications for the tech industry, intellectual property rights, and the ongoing pursuit of innovation remain subjects of keen interest. Nvidia’s journey through these complex landscapes underscores the dynamic interplay between legal considerations, market competition, and the relentless drive for technological advancement.

    As the industry moves forward, the lessons learned from Nvidia’s story will undoubtedly influence future discussions on copyright law, market dynamics, and the role of AI in shaping the digital future.

    The post How Nvidia navigates through legal complexities and market cap dominance appeared first on TechWire Asia.

    ]]>
    Nvidia’s CEO, Jensen Huang: AI will take over coding, making learning optional https://techwireasia.com/2024/03/nvidias-ceo-jensen-huang-ai-will-take-over-coding-making-learning-optional/ Mon, 04 Mar 2024 01:30:55 +0000 https://techwireasia.com/?p=238303 AI is set to make coding accessible for everyone, reshaping how we learn to program. Huang predicts a shift from traditional coding to using AI for software development. Despite AI’s rise in coding, the journey of learning and innovating in tech continues. Nvidia’s CEO, Jensen Huang is stirring the pot again, folks. This time, he’s […]

    The post Nvidia’s CEO, Jensen Huang: AI will take over coding, making learning optional appeared first on TechWire Asia.

    ]]>
  • AI is set to make coding accessible for everyone, reshaping how we learn to program.
  • Huang predicts a shift from traditional coding to using AI for software development.
  • Despite AI’s rise in coding, the journey of learning and innovating in tech continues.
  • Nvidia’s CEO, Jensen Huang, is stirring the pot again, folks. This time, he’s stepping up with a bold claim that’s set to redefine our understanding of coding. But he’s not just throwing this out into the void; he’s delivering his message to an audience with the power to broadcast it across the tech landscape.

    Remember the good old days? Those were the times when wrestling with strings and arrays, spending hours diligently debugging, and trying to unravel complex algorithms were considered rites of passage for developers. Those challenging days are on the cusp of becoming historical footnotes, thanks to the revolutionary entrance of generative AI into the coding domain, signaling a transformative shift in the development process.

    AI is no longer just a supporting act; it’s assuming the lead role, fundamentally altering the coding narrative. The advent of AI is moving us away from the precise details of programming languages, guiding us towards a broader horizon where problem-solving and innovation take precedence. The empowerment provided by AI is democratizing software development, enabling individuals with minimal tech exposure to create applications, a scenario that once seemed far-fetched.

    The evolution from coding challenges to AI solutions

    TechRadar has recently illuminated Huang’s dialogue at the World Government Summit in Dubai, where he made a compelling argument that in the wake of AI’s rapid advancements, we may need to reassess the elevated status traditionally accorded to coding skills within the tech realm.

    For a long time, mastering coding was seen as the golden ticket in the tech industry. Huang is challenging this long-standing paradigm, suggesting it’s time for a strategic pivot in how we prioritize skills for the future.

    Huang is shaking things up, proclaiming that the era of prioritizing coding skills is over. Now, he suggests, we should focus on fields like agriculture and education. The rise of generative AI and natural language processing technologies is set to revolutionize our approach to technology, potentially redirecting the countless hours previously dedicated to learning programming languages towards gaining a deeper understanding of these critical areas.

    Huang is on a quest to render technology so user-friendly that programming becomes an innate skill for everyone, achievable through the simplicity of our native languages. He envisions a future where the magic of artificial intelligence makes everyone a programmer, without the need for specialized coding languages.

    However, Huang quickly points out that this doesn’t spell the end for coding. A foundational understanding of coding principles remains essential, particularly for leveraging AI programming effectively. He’s advocating for a shift towards upskilling, ensuring that individuals grasp the ‘how’ and the ‘when’ of employing AI in problem-solving.

    His enthusiasm for natural language processing suggests a future where the barrier to coding is not the complexity of programming languages but rather the ability to communicate ideas clearly. This could potentially open up programming to a much wider audience.

    Personal reflections on the coding journey

    Looking back on my journey into the world of coding during my university days as a network engineer, I was captivated by the magic of creating something from nothing. The simplicity and immediacy of building websites with HTML and CSS were exhilarating, offering a tangible sense of creation from mere lines of code.

    Java, on the other hand, presented a steeper learning curve. Unlike the more intuitive HTML and CSS, Java introduced a level of complexity that tested my resolve, demanding a deeper understanding and a more significant commitment from those who wished to master it.

    Yet, the challenge of Java was also its reward. It served as a gateway to a broader understanding of programming concepts such as object-oriented programming and multithreading, enriching my coding repertoire.

Java was a learning curve, but it comes with rewards (Source – Shutterstock).

    The journey through learning Java, and programming in general, taught me an important lesson: the path to mastery varies greatly depending on your objectives. For those looking to get their feet wet, the learning curve is manageable. But for those aiming for proficiency, the road is fraught with complex concepts that require dedication.

    Then came C++, a language that, in its complexity, offered a profound depth of understanding for classes, structs, memory manipulation, and foundational programming concepts. The journey to mastering C++ was a testament to the value of persistence and the transformative power of applying theoretical knowledge to practical projects.

    The future of programmers in the age of AI

The emergence of AI in coding brings us to a pivotal question: will AI render programmers obsolete? My perspective leans towards skepticism. Despite the undeniable impact of AI, the nuances of legacy code, the necessity of oversight, and the intricacies of control suggest that programming jobs will evolve rather than disappear. The prospect of AI enabling lay users to create software through conversation opens new possibilities, but it also underscores the enduring value of programming expertise: someone who builds software purely in natural language has never had to understand what went wrong, or why, when it inevitably does, and so lacks the knowledge to correct it.

    As we navigate the initial stages of AI’s integration into coding, it becomes apparent that we are interacting with the nascent stages of AI’s capabilities. These early iterations, while impressive, are placeholders for the more sophisticated, efficient solutions that are yet to emerge.

    As we stand on the precipice of this new era that Huang envisions, where AI could fundamentally alter our relationship with coding, we’re reminded of the journey that has brought us here. The core of technology—learning, applying, and exploring—remains as dynamic and exciting as ever, even as we venture into this uncharted territory.

    The post Nvidia’s CEO, Jensen Huang: AI will take over coding, making learning optional appeared first on TechWire Asia.

    ]]>
    Nvidia GeForce RTX 4070 Super game ready driver is out now https://techwireasia.com/2024/01/nvidia-reflex-hits-100-games-rtx-4070-super-gets-new-driver/ Fri, 19 Jan 2024 01:23:06 +0000 https://techwireasia.com/?p=237287 GeForce RTX 4070 Super enhances 1440p gaming with superior performance and DLSS 3, supported by Nvidia Reflex for reduced latency. Nvidia Reflex revolutionizes gaming with latency reduction across 100+ games, enhancing responsiveness in various genres. Nvidia combines GeForce RTX 4070 Super’s graphic excellence with Reflex technology for a seamless gaming experience. In online gaming, latency […]

    The post Nvidia GeForce RTX 4070 Super game ready driver is out now appeared first on TechWire Asia.

    ]]>
  • GeForce RTX 4070 Super enhances 1440p gaming with superior performance and DLSS 3, supported by Nvidia Reflex for reduced latency.
  • Nvidia Reflex revolutionizes gaming with latency reduction across 100+ games, enhancing responsiveness in various genres.
  • Nvidia combines GeForce RTX 4070 Super’s graphic excellence with Reflex technology for a seamless gaming experience.
  • In online gaming, latency and ping are critical factors that significantly influence a player’s experience. These terms, which might not always be at the forefront for casual gamers, are essential in shaping the game’s responsiveness, fairness, and overall enjoyment. Addressing these vital aspects, Nvidia Reflex emerges as a game-changing technology.

    By significantly reducing system latency on GeForce graphics cards and laptops, Nvidia Reflex ensures that players’ actions are registered and reflected on-screen more quickly. This enhancement is particularly beneficial in multiplayer matches, where having a competitive edge is crucial, and it also makes single-player titles more responsive and enjoyable.

    Since its introduction in September 2020, Nvidia Reflex has brought system latency reduction to over 100 games. It has become a widely adopted feature, with over 90% of GeForce gamers enabling Reflex in their settings. The technology is not limited to competitive shooters like Apex Legends, Call of Duty: Modern Warfare III, and Fortnite; it also extends to critically acclaimed titles across other genres, such as Cyberpunk 2077, The Witcher 3: Wild Hunt, and Red Dead Redemption 2.

    Nvidia Reflex’s widespread adoption in competitive and single-player games underscores its effectiveness in tackling the challenges of latency and ping, elevating the gaming experience to new heights.

    In 2023, gamers equipped with GeForce graphics cards devoted over 10 billion hours to playing their favorite games, enjoying a noticeable enhancement in responsiveness thanks to the implementation of Nvidia Reflex’s system latency reduction technology.

    Latest game titles embracing Nvidia Reflex

    The trend toward adopting Reflex shows no signs of slowing, and we can expect to see its integration in a growing number of eagerly awaited games throughout 2024. Since Nvidia’s previous update, games such as Layers of Fear, SCUM, and Squad have embraced Reflex technology. Additionally, at CES 2024, it was revealed that Horizon Forbidden West and NAKWON: Last Paradise will include Reflex support from their respective launches.

    Players anticipating the Horizon Forbidden West Complete Edition will find it launches with Reflex support integrated from day one. This sequel to Horizon Zero Dawn, highly regarded for its expansive world and narrative depth, invites players to explore new territories, confront formidable machines, and interact with diverse tribes. The game’s narrative centers on a land plagued by devastating storms and an unrelenting blight, posing a threat to the remnants of humanity. As the protagonist, Aloy, players will seek to unravel the mysteries behind these threats and strive to restore balance to the world. The Complete Edition also features the Burning Shores story expansion and additional content.

    The psychological horror game Layers of Fear has received an update to include Nvidia Reflex, significantly reducing system latency and enhancing the gaming experience. This title, known for its impact on the narrative-driven horror genre, combines the original Layers of Fear, its sequel, and all downloadable content into a comprehensive package. This package now benefits from the latency-reducing capabilities of Reflex, ensuring a more immersive and responsive gameplay experience.

    NAKWON: Last Paradise, an upcoming game by Mintrocket, is set in a post-apocalyptic Seoul overrun by zombies. This third-person survival game emphasizes stealth in a city where firearms are scarce but hiding spots abound. Players must rely on their wits to survive against both AI-controlled zombies and other players in a PvPvE format. When it launches, NAKWON: Last Paradise will feature Reflex support from the outset, offering players enhanced responsiveness and performance.

    SCUM, developed by Gamepires and Jagex, offers a unique survival experience where prisoners fight for survival and fame in a dystopian future. The game balances hardcore survival mechanics with optional PvP events, blending intricate planning and intense action. Recently, SCUM incorporated Reflex technology, significantly reducing system latency and improving the overall player experience.

    Offworld Industries’ Squad, a large-scale multiplayer shooter, focuses on realism and team coordination. The game was recently upgraded to include Reflex support, reducing system latency and enhancing the responsiveness of its combat gameplay. This update is particularly beneficial in Squad‘s large-scale battles, where quick reactions and precise actions are essential.

    The introduction of Reflex technology extends beyond game titles to gaming hardware. For instance, the launch of the HyperX Pulsefire Haste 2 Mini – Wireless Gaming Mouse, compatible with the Nvidia Reflex Analyzer, represents a step forward in gaming peripherals. This lightweight mouse, designed for optimal performance, can be used to measure system latency accurately, offering gamers insights into their setup’s responsiveness.

    As Nvidia Reflex continues to expand its presence in the gaming world, more games and compatible devices are expected to be announced.

    GeForce RTX 4070 Super game ready driver is out now

    The introduction of Nvidia’s GeForce RTX 4070 Super, alongside the GeForce RTX 4080 Super and 4070 Ti Super, marks a significant enhancement in the GeForce RTX 40 Series. The GeForce RTX 4070 Super, available now, is designed to deliver exceptional performance, especially in graphically demanding games. Its launch is accompanied by the availability of a new game ready driver, essential for harnessing the full potential of this advanced GPU.

    The GeForce RTX 4070 Super stands out for its significant core increase compared to the GeForce RTX 4070, making it a compelling choice for gaming at 1440p with maximum settings. Its performance, surpassing that of the GeForce RTX 3090 while being more power-efficient, is further enhanced by the inclusion of DLSS 3. This makes it an attractive option for gamers and creators upgrading from the GeForce RTX 3070 or RTX 2070, offering a significant leap in frame rates and overall gaming experience at 1440p.
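    Much of DLSS 3’s headline frame-rate gain comes from frame generation, which inserts an AI-generated frame between rendered ones. As a rough, idealised sketch of that arithmetic (the `overhead` factor is an invented illustration; real uplift varies per game and is not a claim about any specific title):

    ```python
    # Idealised arithmetic for DLSS-3-style frame generation: one generated
    # frame is inserted between every pair of rendered frames, so the
    # presented frame rate approaches 2x the rendered rate. The overhead
    # factor below is a made-up illustration of the generation pass's cost.

    def presented_fps(rendered_fps: float, overhead: float = 0.9) -> float:
        """Approximate on-screen fps with frame generation enabled."""
        return 2 * rendered_fps * overhead

    def frame_time_ms(fps: float) -> float:
        """Interval between presented frames, in milliseconds."""
        return 1000.0 / fps

    rendered = 60.0
    print(f"rendered 60 fps -> ~{presented_fps(rendered):.0f} fps presented")
    print(f"presented frame interval shrinks from {frame_time_ms(rendered):.1f} ms "
          f"to {frame_time_ms(presented_fps(rendered)):.2f} ms")
    ```

    Note that generated frames smooth motion but do not reduce input latency, which is why frame generation is typically paired with a latency reducer like Reflex.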

    GeForce RTX 4070 Super – 20% more cores than the RTX 4070 (Source – Nvidia)

    Media reviews of the GeForce RTX 4070 Super highlight its efficiency, performance, and quiet operation, underscoring its prowess as a 1440p gaming GPU. Available in both Founders Edition and custom models from various add-in card providers, the GPU caters to a range of preferences and requirements.

    In addition to the hardware advancements, Nvidia’s focus extends to gaming experiences, exemplified by the upcoming Palworld game. Set to enter early access, Palworld is a multiplayer, open-world survival and crafting game featuring a unique monster-collection element. GeForce RTX gamers will have the advantage of activating DLSS 2 to enhance game performance, ensuring an optimal gaming experience.

    GeForce Experience continues to support gamers with one-click optimal settings for over 1,000 titles, including recent additions like Apocalypse Party and Escape from Tarkov: Arena. This feature, along with the ability to capture and share gaming moments, underscores Nvidia’s commitment to enhancing the gaming experience.

    The new GeForce game ready driver is available for download, providing the necessary support for the latest hardware and games. Nvidia encourages users to provide feedback on the driver through the GeForce.com driver feedback forum, ensuring continuous improvement and addressing any technical issues.

    The post Nvidia GeForce RTX 4070 Super game ready driver is out now appeared first on TechWire Asia.
