Big Data Asia | TechWire Asia

Ready for the AI wave? Here’s your must-take course list

AI continues to transform industries, and preparing yourself with the right skills has never been more important. From developing foundational knowledge to understanding advanced AI tools, learning opportunities abound for those looking to adapt to this evolving technology.

Whether you’re a leader aiming to integrate AI strategically, a creative professional eager to streamline content creation, or someone simply interested in exploring AI’s potential, there are courses designed to match any number of career paths and expertise levels.

Staying informed on AI’s ethical considerations, mastering AI-specific tools, and learning how to navigate the shifting job landscape are key to personal and business success in this new era.

Embracing these learning opportunities can help you or your organisation innovate, make informed decisions, and keep pace with the rapid changes AI brings to the workplace. Here is a selection of online AI courses you can take:

Prompt Engineering+: Master Speaking to AI

‘Prompt engineering’ is the structuring of instructions for generative AI models, with practitioners serving as an interface between human intent and machine response. Success in this field requires understanding models’ architecture, training data, tokenisation, and available tunable parameters. Essential skills for prompt engineers include natural language processing, proficiency with AI models like ChatGPT, Google Gemini, and DALL-E, analytical skills, and Python programming.

Prompt Engineering+: Master Speaking to AI prepares learners to create effective prompts for diverse AI applications. The course covers prompt structure, one-shot, few-shot, and zero-shot learning, and best practices to enhance problem-solving and decision-making. Students gain insight into prompt engineering’s role in industry, the skills needed to tackle challenges like AI inaccuracies, and ways to control model output for safe deployment. Equipped with practical experience, learners are ready to enter this emerging field confidently.
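
To make those prompting styles concrete, here is a minimal sketch of how zero-shot and few-shot prompts are typically assembled. The task, labels, and examples are invented for illustration; the resulting string could be sent to any chat-based model.

```python
# A minimal sketch of zero-shot vs. few-shot prompt construction.
# The task, labels, and examples below are hypothetical.

def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt: zero-shot if `examples` is empty, few-shot otherwise."""
    parts = [f"Task: {task}"]
    for text, label in examples:  # each example demonstrates the expected format
        parts.append(f"Input: {text}\nOutput: {label}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: the model receives only the instruction.
print(build_prompt("Classify the sentiment as positive or negative.", [],
                   "The course was superb."))

# Few-shot: two worked examples steer the output format and style.
shots = [("I loved every module.", "positive"),
         ("The pacing was painfully slow.", "negative")]
print(build_prompt("Classify the sentiment as positive or negative.", shots,
                   "The course was superb."))
```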

What you’ll learn:

  • Fundamentals and advanced techniques in prompt engineering,
  • Lifecycle skills for prompt refinement and deployment, and more.

Generative AI for Leaders

Generative AI presents both opportunity and challenge. Its rapid advancements push business leaders to act quickly, yet uncertainties and organisational constraints can cause hesitation, even at the highest level. Although 43% of CEOs have started investing in AI with 45% planning to do so soon, most focus on quick efficiency gains rather than transformative initiatives. Currently, 90% of organisations remain at early AI maturity stages, with deployments limited to proofs-of-concept or small-scale projects.

Who should take this course?

Business leaders, executives, managers, team leaders, technology decision-makers, and professionals interested in digital transformation.

What you will learn:

  • How to leverage AI for business innovation and efficiency,
  • Strategies for building and executing an AI-driven business strategy,
  • Fostering a culture that supports AI adoption and collaboration,
  • Ethical and legal considerations, including addressing AI bias and compliance,
  • Insights into the latest AI trends and preparing your organisation for the future.

Canva AI: Master Canva AI Tools and Apps 2024

This course introduces Canva’s AI tools to simplify and enhance content creation. Participants will explore how Canva AI supports efficient design for projects like social media posts, videos, PDFs, and presentations. With practical guidance on Canva’s Magic Studio, the course delves into creating speaking avatars and converting text to images. By the end, learners will have hands-on experience and the skills to produce a wide variety of content quickly and effectively.

What you’ll learn:

  • How to use Canva AI in projects, including materials for social media posts, videos, PDFs, and presentations.
  • Use of Canva AI Magic Studio to enhance results and simplify content creation.
  • Exploration of Canva’s top AI applications, such as creating speaking avatars and text-to-image transformations.
  • Techniques to produce 100+ social media posts in minutes, boosting content creation efficiency.
  • Hands-on experience through interactive projects at the end of each section.

Artificial Intelligence: Preparing Your Career for AI

As AI becomes integral to business strategies, organisations must rethink hiring, training, and upskilling. Generative AI is expected to reshape workflows, redefine roles, and shift skill demands. The question isn’t if AI will replace jobs, but rather which skills it will affect and how organisations can refocus human priorities.

In this free course, you’ll learn five steps to prepare your career for the AI-centric workplace:

  • Educate yourself on AI fundamentals,
  • Align your career path with AI advancements,
  • Invest thoughtfully in an AI-first economy,
  • Use AI responsibly and ethically,
  • Adapt to continuous AI-driven changes.

The future will favour those who engage with AI proactively—take the steps now to stay ahead!

Who this course is for:

Ideal for beginners and general audiences interested in understanding AI’s impact on careers.

Business Analyst: Digital Director for AI and Data Science

Staying current with trends like AI and Big Data is crucial for those defining requirements for digital solutions. In AI projects, the process of requirements elicitation and analysis is as essential as in any initiative, with success based on collaboration between business stakeholders and technical experts, the latter including data scientists and machine learning engineers.

Business analysts are important in ensuring new technologies add business value by guiding and specifying data needs. As AI advances, they act as support for data science by clarifying what information is essential to generate accurate models.

What you’ll learn:

  • The role of business analysts in implementing AI solutions,
  • Techniques for requirements elicitation in conversational user experiences,
  • Differences between NLU (natural language understanding) and rule-based bots,
  • Basics of conversation flow analysis and design.

Who this course is for:

Ideal for professionals involved in requirements for Data Science, Machine Learning, or AI projects, including Business Analysts, Systems Analysts, Product Owners, Managers, and Executives.

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

How big data and AI are transforming India’s business landscape

Is the buzz phrase of yesteryear, ‘big data’, making a resurgence, or has the concept simply been underappreciated all along?

The rise of AI systems has underscored the vital role data plays in modern businesses, pushing many organisations to finally capitalise on the data they generate daily. Data has always been important, but with AI’s rising significance, businesses are taking a more strategic approach to managing it – particularly in regions such as India.

The push for AI and big data infrastructure in India

India has positioned itself as a contender to become a global leader in artificial intelligence, with major technology companies such as Microsoft and Amazon committing billions to building computing infrastructure in the country. This investment race sees companies seeking dominance of the fast-growing AI industry. The Indian government has also offered tech companies incentives to set up operations ranging from electronics manufacturing to data storage, in the hope that a thriving domestic market and a competent workforce could elevate the country to the top ranks of AI technology consumers and exporters.

Microsoft, for example, has pledged approximately US$3.7 billion to the southern state of Telangana. Local officials report that the tech giant has acquired land for data centres that will contribute an additional 660 megawatts of IT capacity – enough electricity to power roughly half a million European homes annually. Meanwhile, Amazon plans to spend US$12.7 billion on cloud infrastructure in India by 2030. The country is emerging as one of the world’s most interesting tech markets.

Interestingly, data itself is no longer regarded as “big” in the conventional sense. Although data volumes have grown, hardware capabilities have advanced even faster. The emphasis has shifted from managing data volume to using it to improve decision-making.

As more companies integrate AI into their operations, data becomes increasingly important for measuring product effectiveness and gaining insights into internal processes. In recent years, India’s big data and AI ecosystem has expanded rapidly, attracting both large and small companies. The country is expected to become one of the world’s leading markets for big data analytics, offering numerous opportunities for data scientists.

According to Mordor Intelligence, India’s big data technology and services market is projected to reach US$2.17 billion in 2024 and grow to US$3.38 billion by 2029 – a compound annual growth rate (CAGR) of roughly 9.3%.
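
As a back-of-envelope check, the growth rate implied by those two endpoint figures can be derived in a few lines (a sanity check, not Mordor Intelligence’s methodology):

```python
# Back-of-envelope check of the compound annual growth rate implied
# by the market estimates above.
start_value, end_value, years = 2.17, 3.38, 5  # US$ billions, 2024 -> 2029

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # Implied CAGR: 9.27%
```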

Unlocking potential for SMEs

Big data presents a significant opportunity for small and medium-sized enterprises (SMEs) in India to increase efficiency and drive growth. However, this potential is accompanied by challenges that require strategic planning.

SMEs are well-positioned to act as innovation leaders in the supply chain. Their agility allows them to adopt technologies like big data to identify market gaps and streamline their operations, opening the way to rapid growth. Their adaptability and growth will likely strengthen supply chain relationships and promote mutual growth in their verticals.

To fully leverage big data, SMEs should invest in IT infrastructure and improve data systems as first steps to boosting production and stakeholder trust. Advanced data analytics will assist businesses in gaining crucial insights, allowing them to make decisions based on empirical data. Alongside the take-up of refreshed approaches to big data and AI, there is an important requirement for clear data governance policies (and adherence to them) that address security and privacy.

Despite these benefits, SMEs face challenges from a lack of comprehensive data policies and of tools for extracting meaningful information. Fragmented IT systems create data silos, making it difficult to form the basis for effective decisions. Addressing these difficulties can help SMEs unlock the full potential of big data and achieve long-term growth.

The role of big data across key industries in India

A talent gap in business processes means companies can, and should, adopt analytics – a capability gaining momentum in industries such as BFSI (Banking, Financial Services, and Insurance), Retail, and Telecom. In Industry 4.0, data analytics is becoming an essential skill for sustainable manufacturing, alongside AI, machine learning, IoT, and automation.

For example, many retailers are dealing with erratic sales and need more resources to identify core problems or effectively forecast sales. Organisations generally lack the resources to establish an in-house data analytics team, so they turn to external analytics providers for answers.

Big data has the capability to transform the in-store retail experience. By analysing customer movement data, retailers can optimise store layouts, improve product placements, and create a more enjoyable shopping environment. Globally, apparel retailers have been leveraging these insights to refine their in-store experiences, and the results speak for themselves.

Revolutionising Indian agriculture with big data

Big data in agriculture is changing the way Indian farmers make decisions. Farmers can now make decisions based on data from ground-based sensors, satellites, weather forecasts, and machines, rather than guesswork. Big data enables them to manage real-time irrigation requirements, monitor soil health, estimate crop yields, and detect early plant disease signs. This transition to ‘smart farming’ is important in India, where optimal resource utilisation may considerably increase productivity and efficiency while lowering waste and environmental impact.
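
As a simple illustration of the decision logic behind ‘smart farming’, the sketch below combines a soil-moisture reading with a rain forecast to decide whether to irrigate. The thresholds, field names, and readings are hypothetical:

```python
# A toy illustration of a data-driven irrigation decision.
# Thresholds, field names, and sensor readings are invented.

def should_irrigate(soil_moisture_pct: float, rain_forecast_mm: float,
                    moisture_target_pct: float = 35.0) -> bool:
    """Irrigate only if soil is below target and no meaningful rain is expected."""
    return soil_moisture_pct < moisture_target_pct and rain_forecast_mm < 5.0

fields = [
    {"name": "north plot", "soil_moisture_pct": 28.0, "rain_forecast_mm": 1.2},
    {"name": "south plot", "soil_moisture_pct": 41.5, "rain_forecast_mm": 0.0},
]
for field in fields:
    decision = should_irrigate(field["soil_moisture_pct"], field["rain_forecast_mm"])
    print(f"{field['name']}: {'irrigate' if decision else 'hold off'}")
```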

Ultimately, big data is improving agricultural output in India while paving the way for a more sustainable future.

India’s big data technology and services market remains highly competitive and fragmented, yet the country’s robust IT services sector means relatively easy adoption of the latest in technology. In addition to established industry giants, numerous startups and mid-sized businesses are increasingly meeting the growing demand for big data solutions in multiple industries.

Conclusion

As India works to position itself as a global leader in AI and big data, the opportunities are vast. The ability to use data is critical for driving innovation, growth, and sustainability, whether in large tech companies or small and medium-sized enterprises. Big data and its use are redefining how businesses operate and make choices, from supply chain transformation to agricultural revolution. The future appears bright for enterprises of any size that are ready to embrace the ‘big data’ resurgence.

Google is making a billion-dollar bet on its first data center in the UK

  • Google is investing US$1 billion in a new UK data center to meet rising service demand, supporting Prime Minister Rishi Sunak’s tech leadership ambitions.
  • The data center will be Google’s first in the UK.

Beyond being a global technological powerhouse, Google Cloud has become the steadfast ally of governments worldwide, ushering in an era of innovation. Google’s commitment to transforming lives, modernizing public services, and revolutionizing operations within the UK has been a dynamic reality. As the sun sets on traditional computing landscapes, Google Cloud is rising, rapidly expanding its presence in the UK and reshaping the essence of cloud computing.

    One of the critical pillars of Google Cloud’s presence in the UK is its substantial investment in cutting-edge data infrastructure. Google recently announced a staggering US$1 billion investment in a new data center, a testament to its dedication to meeting the escalating demand for cloud services. This move signifies a boost for the UK’s technological infrastructure and aligns with the government’s aspirations to position the nation as a global leader in technology.

    “As more individuals embrace the opportunities of the digital economy and AI-driven technologies enhance productivity, creativity, health, and scientific advancements, investing in the necessary technical infrastructure becomes crucial. That’s why we’re investing $1 billion in a new UK data center in Waltham Cross, Hertfordshire—a 33-acre site creating jobs for the local community,” Debbie Weinstein, VP of Google and Managing Director of Google UK & Ireland, said in a statement last week.

Illustration of Google’s new UK data centre in Waltham Cross, Hertfordshire. The 33-acre site will create construction and technical jobs for the local community. Source: Google

     

    In short, this investment will provide vital computing capacity, supporting AI innovation and ensuring dependable digital services for Google Cloud customers and users in the UK and beyond. As stated on its website, the upcoming data center in the UK marks the company’s first in the country. 

    Google already operates data centers in various European locations, including the Netherlands, Denmark, Finland, Belgium, and Ireland, where its European headquarters are also situated. The company boasts a workforce of over 7,000 people in Britain.

Google Cloud’s impact extends far beyond physical infrastructure. The company’s robust suite of cloud services has become integral to businesses across various sectors in the UK. From startups to enterprises, organizations are leveraging Google Cloud’s scalable and flexible solutions to drive efficiency, enhance collaboration, and accelerate innovation.

    The comprehensive nature of Google Cloud’s offerings, including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), ensures that it caters to the diverse needs of the UK’s business landscape.

That said, the investment in Google’s Waltham Cross data center is part of the company’s ongoing commitment to the UK. It follows other significant investments, such as the US$1 billion acquisition of its Central Saint Giles office in 2022, the development in King’s Cross, and the launch of the Accessibility Discovery Centre, fostering accessible tech across the UK.

    “Looking beyond our office spaces, we’re connecting nations through projects like the Grace Hopper subsea cable, linking the UK with the United States and Spain,” Weinstein noted. However, investments by Google extend beyond infrastructure to empower communities and individuals across the UK. In fact, since 2015, Google has reached over 500 locations in the UK, providing free digital skills training to over one million individuals.

    “In 2021, we expanded the Google Digital Garage training program with a new AI-focused curriculum, ensuring more Brits can harness the opportunities presented by this transformative technology,” Weinstein concluded. 

    24/7 Carbon-free energy by 2030

    Google Cloud’s commitment to sustainability also aligns seamlessly with the UK’s environmental goals. The company has been at the forefront of implementing green practices in its data centers, emphasizing energy efficiency and carbon neutrality. “As a pioneer in computing infrastructure, Google’s data centers are some of the most efficient in the world. We’ve set out our ambitious goal to run all of our data centers and campuses on carbon-free energy (CFE), every hour of every day by 2030,” it said.

    This aligns with the UK’s ambitious targets to reduce carbon emissions, creating a synergy beyond technological innovation. In a dynamic move, Google forged a robust partnership with ENGIE for offshore wind energy from the Moray West wind farm in Scotland, adding 100 MW to the grid and propelling its UK operations towards 90% carbon-free energy by 2025. 

    Beyond that, the tech giant said it is delving into groundbreaking solutions, exploring the potential of harnessing data center heat for off-site recovery and benefiting local communities by sharing warmth with nearby homes and businesses.

Detailed Data From On-High: How Planet Views the World

From new-generation imaging satellites, rich data can inform multiple verticals, including environmental planning, farming, urban planning, utilities, mining, and much more.

Many lines have been written on sites like Tech Wire Asia describing the near-exponential growth of available data in recent years, and how organizations can best utilize this gold mine of information. But simply creating or having access to petabytes of data is of little use if the information is not objective, consistently produced, detailed where it needs to be, and – in many use cases – genuinely global.

In life-threatening disasters such as war, flood, and famine, and in life-affecting tasks like carbon emission tracking, pollution monitoring, and large-scale farming, access to rich, globally consistent data helps organizations make big decisions. From choosing the best crops for a particular soil type to uncovering renegade activities via air quality monitoring in urban areas, technology holds the answer. One solution that has proved supremely effective is hyperspectral satellite imagery.


Unlike traditional satellite photography, hyperspectral imaging captures far more than the visible spectrum, bringing new information to organizations that need to see beyond the immediately visible. By filtering data, attenuating spectra, and combining a broad range of imagery, hidden information is brought to the surface that is otherwise unavailable, even from ground-based surveying.

In addition to removing issues like surveying team costs and differences in measurement methods from place to place, high-altitude hyperspectral imaging satellites create multiple views at the required resolutions, unearthing canonical data that is accurate within smaller tolerances than any other collection method. From there, data processing can filter results to focus on any of hundreds of indicators that uncover hidden changes over time, highlight slow alterations, or expose data that is impossible to capture without a long-term view from, literally, on high.

Covering multiple use cases and active in many verticals, Planet is one of the world’s leading suppliers of this next-generation data-gathering and processing capability. Its data capture and presentation solutions are used daily in natural disaster response, utility infrastructure planning, agriculture, defense, and many other fields. Information can be presented in multiple ways or embedded into existing solutions via API calls. Whether it’s a single source of truth or used to enrich and extend existing data sets, hyperspectral data creates detailed topographical, environmental, or geological information.

    Planet is currently investing in its third and fourth-generation satellite swarms, covering the Earth’s landmass via polar orbits that, as the constellation grows, capture every inch of the world’s surface with increasing frequency.

    The farmer

    Planet’s solution for agriculture epitomizes the precision and detail of the data available. Year-round crop health can be measured from planting to plowing, with precision data fed to ground-based operations that treat low-yield or diseased areas with minimal environmental impact and cost. Predictive analysis enables proactive decision-making, ensuring results on the ground are optimized for outcomes ranging from ecological soundness to bespoke yield targets.
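
To illustrate the kind of analysis such precision data enables, the sketch below computes NDVI (Normalized Difference Vegetation Index), a standard crop-health indicator derived from red and near-infrared reflectance. The small numpy arrays stand in for real satellite bands; this is a generic remote-sensing technique, not Planet’s product code:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1; healthy, dense
# vegetation typically scores high. The 3x3 reflectance arrays below are
# hypothetical stand-ins for real red and near-infrared satellite bands.

red = np.array([[0.08, 0.10, 0.30],
                [0.09, 0.12, 0.28],
                [0.07, 0.11, 0.25]])
nir = np.array([[0.52, 0.48, 0.33],
                [0.55, 0.45, 0.31],
                [0.58, 0.47, 0.30]])

ndvi = (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero
low_yield_mask = ndvi < 0.4              # flag pixels that may need treatment
print(np.round(ndvi, 2))
print("Pixels needing attention:", int(low_yield_mask.sum()))
```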


    Over time, Planet has built an archive of imagery that shows the changing use patterns of agricultural regions worldwide, at resolutions down to 50 cm. Users can access this treasure trove of information via open APIs, making it an invaluable resource for agritech innovators, local farmers, civic and rural planners, and any organization interested in how we use land for food production, development, rewilding, or conservation.

    The increasing pressure placed on agricultural land needs a measured and precise set of responses from its stewards, and the broad-brush farming methods of the past are no longer environmentally desirable or economically viable. From the scale of the single-family farm upwards, those stewards can ensure the best use and treatments for land according to prevailing conditions, such as ensuring optimum yields using as few additives as possible, all on a highly localized basis.

    The planner

    New challenges face urban planners all over the world. While developed countries’ populations are less dependent on urban environments thanks to new work patterns, the developing world continues to see substantial population movements to urban areas where economic activity is concentrated.


Discovering what impact decisions taken on the ground have over large areas requires detailed information, and gathering it across large physical areas brings massive data processing and storage requirements. By stipulating which captured information is most relevant and sourcing only that, town planners can reach decisions about the best uses of resources without having to pay – and wait – for big data processing to unearth insights from terabytes of peripheral information.

Key indicators captured by satellite constellations ensure the relevancy of data, so faster, better decisions can be taken for the long-term good.

    The future

    Planet is working on two future constellations. One is its Pelican tasking constellation, set to replace its existing SkySat constellation. This next-generation satellite constellation will offer greater resolution, down to 30 cm at ground level. The new constellation will also provide up to 30 visits per day over target areas, allowing a huge degree of control over ground-based activities.

    The future of hyperspectral imagery is also on its way. Planet is developing its hyperspectral satellite constellation, Tanager. Data is expected to be offered at a 30-meter resolution and provide insights from over 400 spectral bands. These spectral bands can help identify “signatures” of chemicals, materials, and environmental processes normally hidden from the human eye, enabling the evaluation of occurrences such as CO2 emissions and methane leaks.
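
A sketch of how such signature matching can work in principle: compare each pixel’s spectrum across hundreds of bands against a known reference spectrum. The 400-band vectors below are synthetic illustrations, not Tanager data:

```python
import numpy as np

# Toy spectral-signature matching: score a pixel's spectrum against a
# reference signature (e.g. of a gas or mineral) using cosine similarity.
# All vectors here are synthetic.

rng = np.random.default_rng(0)
reference = rng.random(400)                         # known 400-band signature
pixel_match = reference + rng.normal(0, 0.05, 400)  # noisy occurrence of it
pixel_other = rng.random(400)                       # unrelated ground cover

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"match: {cosine_similarity(reference, pixel_match):.3f}")  # close to 1.0
print(f"other: {cosine_similarity(reference, pixel_other):.3f}")  # noticeably lower
```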

    The present

With multiple use cases and an ethos of easy access to powerful data, Planet’s work contributes positively to many industries, helping them use the planet’s resources precisely and with minimal impact.

You can find out more about how this global leader can address your specific use cases for hyperspectral imagery on Planet’s website.

Snowflake has the perfect data platform for AI

    • The Snowflake AI approach opens up generative AI to companies with all levels of understanding.
    • By democratizing technology, the company hopes to move the industry along.
    • Snowflake has always taken a problem-solving approach to the application of technology, and generative AI is no different.

    Exponential data growth requires a data platform capable of not only understanding the data but also maximizing its potential. Given today’s importance of data-driven insights, businesses need to ensure they have access to the right type of data to inform their decisions.

    But managing all this data is a challenging and complex process. Challenges include not only regulatory and compliance requirements but also the difficulty of accessing all the necessary data.

    According to Sanjay Deshmukh, senior regional vice president of ASEAN and India at Snowflake, the company’s founders were motivated to create a modern data platform due to the inadequacies of existing technologies at the time. Legacy data platforms, especially big data solutions like Hadoop, failed to fulfill their promises, primarily due to their complexity in both construction and maintenance. This complexity hindered customers from becoming truly data-driven and democratizing access to data.

    Snowflake identified and aimed to address the top challenges in data management, including legacy systems, silos, cost, and complexity. Data in silos remained a significant problem as it continued to be segregated despite the promises of big data. This segregation was partly due to technological limitations and internal organizational controls. The costs and complexities associated with traditional systems led to delays in obtaining insights. The lack of innovation in AI, ML, data collaboration, and monetization called for a solution.

    To address these issues, Snowflake made two crucial decisions. Firstly, its platform had to be a cloud service to tackle scale, complexity, and cost issues. Secondly, it had to be a fully managed service to reduce the time to insights. This approach led to the creation of a platform that disrupted the entire data warehousing industry. By separating computing and storage and offering a consumption model, Snowflake’s platform allows users to store and access vast amounts of data at high speed. It’s a platform and an approach that transformed the industry.


    Snowflake then recognized the growing need for external data and the limitations of existing methods. The company introduced a marketplace within its cloud architecture to address this, enabling secure and efficient data collaboration. This development marked the second phase of the company’s evolution, addressing the demand for external data and enhancing collaboration across different platforms globally.

    The third phase of Snowflake’s growth, which aligns with the expansion of data and the need for a modern data platform, involved empowering customers to build analytical applications. This phase focused on creating an environment conducive to the seamless development of insight-driven applications. The native app framework and other capabilities facilitated the efficient delivery of insights to end-users.

    The fourth and current phase centers on disrupting the generative AI and large language model (LLM) space to democratize access to these technologies. This phase aims to make advanced AI accessible to a broader range of customers, including small and medium enterprises lacking deep expertise. It represents a commitment to making LLMs genuinely accessible and beneficial to companies globally.

    Deshmukh highlighted the company’s expansion over the last five years in the Asia-Pacific region, beginning with Australia, and then moving to Japan and Singapore. The demand for its platform has surged across all sectors in many countries, leading to a strong regional presence. Snowflake’s recent conference in Malaysia underscored the increasing demand for data solutions and the company’s dedication to meeting the diverse needs of different industries.


    Snowflake and the perfect data platform for AI

    In 2023, the spotlight is on generative AI, a trend triggered by the launch of ChatGPT. Unlike previous innovations, generative AI has quickly become a topic in boardrooms, prompting CEOs and CIOs to strategize how their companies can benefit from this new technology. While individual consumers may not require a specific generative AI strategy, enterprises must consider potential risks associated with customer data, intellectual property, and competitive markets.

    For Deshmukh, the advice to enterprise customers is clear: before jumping on the generative AI bandwagon, establish a robust data strategy as a foundational step. The company recognizes that data powers AI and emphasizes the need for a well-thought-out data platform and governance framework. This two-step approach involves creating a solid data foundation and enabling generative AI.


    The initial step in building this foundation involves consolidating data in one place, creating a single source of truth. Here, Deshmukh highlights Snowflake’s capability to support any scale and data type, allowing customers to store structured, semi-structured, and unstructured data in one location. This seemingly simple task is challenging for customers accustomed to data spread across multiple systems.

    The second crucial aspect involves implementing a governance framework, which includes identifying personally identifiable information (PII) and determining how to protect this sensitive data. Depending on its sensitivity, protection methods for PII may include complete encryption, tokenization, or role-based access.
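
As a generic illustration of tokenization – one of the protection methods mentioned above – the sketch below replaces PII values with stable tokens derived from a keyed hash. This is not Snowflake’s implementation; the secret key and record are hypothetical:

```python
import hmac, hashlib

# Deterministic tokenization of PII using a keyed hash (HMAC).
# The same input always yields the same token, so joins and analytics
# still work, while the raw value never leaves the governed environment.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # hypothetical key

def tokenize(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"customer": "Asha Rao", "email": "asha@example.com", "spend": 1250.00}
masked = {k: (tokenize(v) if k in {"customer", "email"} else v)
          for k, v in record.items()}
print(masked)  # PII columns replaced by stable tokens; 'spend' left in the clear
```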

    The third and most vital step is ensuring that the governance framework is applied consistently to every user and use case. Deshmukh pointed out that data is often moved out of the governed environment for specific tasks, putting it at risk of compromise.

    “It is extremely important that you not just put in the governance framework, but you ensure that you do not silo the data again, because the moment you silo it, your governance rule framework is going to get defeated. This is our recommendation to our clients, in terms of building a comprehensive data strategy and a governance framework for AI,” said Deshmukh.

    Deshmukh presented Snowflake as a solution that supports all workloads. He explained that with Snowflake, clients do not need to extract data for various processes such as data warehousing, data lakes, data engineering, and AI/ML model building. Snowflake’s approach ensures that data stays secure within its governed environment. In this setup, data models and applications are brought to the data, a significant departure from traditional models where data is typically moved towards the applications.

    “Our approach resonates well with customers, emphasizing the significance of a comprehensive data strategy and governance framework before delving into generative AI. By prioritizing data organization, security, and accessibility, enterprises can lay a strong foundation for leveraging the capabilities of generative AI technologies,” Deshmukh added.


    The three principles for a generative AI strategy

    There are three key principles outlined for effective implementation of generative AI. The first principle emphasizes that data should not be sent to the model; instead, the model and AI applications should be brought to the data. This approach is crucial to maintain customer trust, as hosting a foundation model externally and sending data to it could result in the model being trained on sensitive information, potentially benefiting others and risking trust.

    The second principle underscores the importance of training Large Language Models (LLMs) on data relevant to the business. The model might provide inaccurate or irrelevant answers without proper context and training on the company’s data. Snowflake’s recently launched AI platform, Cortex, adheres to this principle by letting customers bring their LLMs and run them alongside their data within the secure Snowflake perimeter.

    The third and critical principle emphasizes a business-centric approach to generative AI. Instead of succumbing to the hype and seeking problems to solve with the technology, businesses should identify existing challenges they must address.

    For instance, a manufacturing company aiming to enhance production efficiency should pinpoint specific business problems, such as analyzing data related to plant maintenance and servicing documented by engineers. This approach ensures that generative AI is applied purposefully to solve real business issues, rather than being driven solely by technological trends.

    “These are the three principles that we recommend our customers to follow, as they are implementing a generative AI and LLM strategy. If you look at our Cortex platform, it is built with this single-minded objective or single-minded focus that we want to democratize access to AI to pretty much everyone,” said Deshmukh.

Snowflake Cortex brings powerful AI and semantic search capabilities to the Snowflake platform.

    A hypothetical scenario for AI

    The approach to implementing generative AI begins with identifying a business problem and determining the necessary data to solve it. Deshmukh shared a hypothetical yet realistic example, focusing on unstructured documents, specifically service agreements signed by engineers and service professionals, which are not currently integrated into the analytical process. The initial step involves digitizing these physical papers into the data platform by scanning and loading them into the Snowflake system.

    Next, Deshmukh pointed out that a Large Language Model (LLM) comes into play to extract intelligence from scanned documents. In 2022, Snowflake acquired Applica, which provides purpose-built LLMs for converting unstructured documents into structured content, known as Document AI. Running Document AI on PDF documents involves training the model with sample data and creating a pipeline to analyze failing parts, identify responsible suppliers, and send relevant information to the respective person for resolution.

According to Deshmukh, this purpose-built model is not a foundational model; instead, it serves the specific task of converting unstructured documents into structured data, letting users pose queries in natural language. For instance, users can inquire about the status of service reports from the last three months, identify the number of failures, pinpoint the most problematic part (e.g., an injection molding machine), and ask which supplier provided the faulty component. The LLM facilitates asking questions in English and extracting meaningful insights.

    For Deshmukh, this example emphasizes the importance of starting with a business problem, determining the required data, and selecting an appropriate model. It contrasts with a common pitfall where businesses choose models before identifying the specific issues they aim to solve.


    Snowflake aims to democratize access to AI

    While generative AI is currently generating hype, AI has been around for several years, initially dominated by a few companies with data scientists and machine learning experts. However, the landscape has since changed significantly, with AI becoming more commoditized due to the availability of open-source language models. While companies can leverage these models, Deshmukh pointed out that the challenge lies in the need for skilled individuals who understand and can effectively work with large language models.

Recognizing that many, particularly in the SME segment, lack the resources to hire data scientists, the Snowflake AI approach has addressed this gap. The company has onboarded LLMs like Meta’s Llama 2 for those without that proficiency. These pre-trained models are accessible through functions, allowing individuals without data science expertise to build simple Python code or AI applications. This approach lets SMEs harness the power of AI without requiring specialized skills in handling large language models.

    In another hypothetical scenario that closely resembles reality, Deshmukh considered a B2C company with a customer service focus. The customer service leader, tasked with understanding customer complaints and challenges, can utilize the Snowflake platform. The company can gain insights into customer concerns by loading call transcripts into the system and using a function called ‘summarize,’ which uses an LLM hosted in Snowflake. For example, it might reveal that 30% of customers complain about network issues, while 40% express concerns about other issues.
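
A sketch of what that flow might look like in code, assuming the snowflake.cortex Python interface (from the snowflake-ml-python package) and a Snowpark session; the credentials, table, and column names are invented for illustration:

```python
# Hypothetical transcript-summarization flow; interface assumed as described.
from snowflake.snowpark import Session
from snowflake.cortex import Summarize

# Placeholder credentials; in practice these come from a secrets vault.
connection_parameters = {"account": "...", "user": "...", "password": "..."}
session = Session.builder.configs(connection_parameters).create()

# The LLM runs inside Snowflake's governed perimeter; data never leaves it.
for row in session.table("CALL_TRANSCRIPTS").limit(10).collect():
    print(Summarize(row["TRANSCRIPT"]))
```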

    Deshmukh stated that the key advantage here is that enterprises lacking data scientists or individuals with LLM proficiency can simplify their processes using Snowflake. Those familiar with Python and SQL can easily use these skills to access and analyze data.

Snowflake’s goal is to cater to a spectrum of users: those with LLM proficiency can bring their own language models and build applications; those with SQL and Python skills can call functions; and those with limited expertise can use a copilot feature powered by an LLM to ask English-language questions and receive relevant insights. The platform aims to accommodate users at various proficiency levels, making AI applications more accessible and user-friendly.

    “That’s our goal, to democratize the access to generative AI, to a broad set of companies and not let it be limited to a handful of large companies who have the skills. That’s the approach that we’re taking with Cortex,” he concluded.

The latest changes at X – including a new privacy policy

Updates to users’ privacy on X are the latest in the slew of changes Elon Musk has brought to the site formerly known as Twitter.

X plans to collect biometric data on its users, in an update to its privacy policy.


    Biometric data is a broad term for any data relating to a person’s physical or behavioural characteristics including, for example, facial scanning technology. Under X’s new policy (section 1.1), which is due to come into effect on the 29th of September, the platform “may collect and use your biometric information for safety, security, and identification purposes.” The policy states that this is “based on your consent”, which suggests that users may be able to opt out.

    The news follows a proposed class-action lawsuit in the US state of Illinois this July, which alleged that X violated the Illinois Biometric Information Privacy Act (BIPA) by wrongfully capturing, storing and using the biometric data of Illinois residents without their consent.

    Although the incoming policy does not define what X considers ‘biometric information’, the company provided some clarity in a recent statement to the BBC:

    “X will give [users] the option to provide their government ID, combined with a selfie, to add a verification layer. Biometric data may be extracted from both the government ID and the selfie image for matching purposes. This will additionally help us tie, for those that choose, an account to a real person by processing their government-issued ID. This will also help X fight impersonation attempts and make the platform more secure.”

It’s speculated that this decision may also help to enable passwordless sign-ins. According to findings in the app’s code, it appears that X plans to support passkeys, which create a secure link between a user’s device and a website or app. Passkey-supported apps enable users to log in to their accounts using their device’s fingerprint scanner, facial recognition, or PIN.

    In addition to biometric information, X’s new privacy policy states that it “may collect and use your personal information (such as your employment history, employment preferences, skills and abilities, job search activity and engagement, and so on) to recommend potential jobs for you, to share with potential employers when you apply for a job, to enable employers to find potential candidates, and to show you more relevant advertising.”

According to reports, X Corp – Twitter’s parent company – acquired a tech recruiting service called Laskie in May, and there has been speculation that X may want to offer recruitment services in future. This would be unsurprising given that Elon Musk, who acquired Twitter in 2022 for US$44 billion, has previously stated that he hopes to turn X into an “everything app” à la China’s hugely popular WeChat.

Changes to privacy on X seem focused more on hiding from other users – not from the site and third parties.

    The updated privacy policy is not the only major policy change taking place at Twitter/X. The company also announced in a statement released August 29th that it will start accepting all political advertisements again, beginning in the US.

    Twitter banned all political ads globally in October 2019 amid concerns surrounding misinformation on the platform. At the time, then-CEO Jack Dorsey said of the decision: “We believe political message reach should be earned, not bought”, adding that paid political advertising “has significant ramifications that today’s democratic infrastructure may not be prepared to handle.”

    Following Musk’s takeover, the platform relaxed the policy this January to allow “cause-based ads” in the US, including ads that raise awareness of issues such as climate change, voter registration, and government programs.

    X’s statement says this move will build on their “commitment to free expression”, but the decision to allow all political ads in the United States will likely give the company a financial boost too: its ad revenues have plummeted by about 50% since Musk took over, amid concerns from advertisers that their ads could appear next to problematic content.

    Another recent attempt to reverse the revenue decline came in the form of Twitter Blue, a paid subscription service introduced this April, which charges users $8 a month to retain verified (‘blue tick’) status. The initiative was followed by a wave of imposter accounts sharing harmful misinformation, and many organizations and companies either stopped using Twitter or paused Twitter ads as a result.

    Twitter Blue was described by Insider Intelligence as “a mess” that created “more chaos and confusion for brands” who were already feeling vulnerable on the platform.

    X says that it will create a global advertising transparency center, which will enable users to see what political ads are being promoted on the platform. It additionally promises to implement robust screening processes to ensure “only eligible groups and campaigns” can purchase advertisements.

    “Starting in the U.S., we’ll continue to apply specific policies to paid-for promoted political posts,” the statement reads. “This will include prohibiting the promotion of false or misleading content, including false or misleading information intended to undermine public confidence in an election, while seeking to preserve free and open political discourse.” This is in line with X’s updated enforcement policy, ‘Freedom of Speech, Not Reach’, which aims to limit the visibility of posts that violate their policies rather than removing them.

How Successful CIOs Extract Maximum ROI of Mission-Critical ERP Systems to Fund Innovation

Are you stuck in a vicious cycle of unwanted updates, enforced feature deprecation and crazy support costs? If so, read about TPSSs (third-party support services) and the huge operational and cost advantages they bring.

IT leaders around the world are being told – not asked – by executives, board members, investors and shareholders to transform IT from a “cost center” to a “strategic profit center” for the business. Gone are the heydays of deep pockets and endless budgets; now it is about extracting the most out of every dollar and getting more for less, no matter the industry.

Companies continue to accrue the rising costs of new applications, tools, cybersecurity programs, business continuity planning, and more, while feeling the crunch of an IT talent shortage that drives higher salary demands, attrition rates, and instability for the organization. So where are IT leaders to find wiggle room in their budgets to fund innovation and meet the growing demands of the business without compromising operational excellence?

    The answer is in their mission-critical ERP systems, or rather, WHO is supporting their ERP.

ERP serves as the backbone of most enterprises, offering solutions that enable the business to operate, scale and grow. However, mistaking ERP for a business strategy rather than a tool is where costs can get out of control and work against the health of the business.

Innovative companies bring forth successful ideas, new products and services based on brilliant leadership and team vision, not on the ERP they purchased. Software vendors push upgrades, migrations and even forced deadlines to make companies move their systems to a newer product, each cycle adding cost, risk, disruption and downtime that impact the bottom line of the business. If executives are at the roundtable asking themselves, “When do we need to make the purchase, and how do we afford this?” rather than, “Why should we do this?” then it seems the vendor is running the future of the business, not its leaders.


Annual ERP software maintenance fees are often 22% of license fees, with diminishing ROI over time and fewer features and less support offered, yet they remain a steady demand on the IT budget. Typically, customizations are not supported, and tax, legal and regulatory updates are only offered on the latest release – all roads point to an upgrade or migration as the solution. This vendor-driven roadmap does not always have a clear ROI for the business.
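
A quick back-of-envelope calculation shows how those fees accumulate over a system’s life (the US$2 million license figure is hypothetical):

```python
# Cumulative maintenance spend at 22% of license fees.
# The license figure below is invented for illustration.
license_fee = 2_000_000            # one-time license cost, US$
annual_maintenance = 0.22 * license_fee

for years in (5, 10, 15):
    total = annual_maintenance * years
    print(f"{years:>2} years: US${total:,.0f} "
          f"({total / license_fee:.0%} of the original license)")
```

At that rate, maintenance alone exceeds the original license cost within five years.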

Rimini Street offers a solution to help CXOs and IT and finance leaders fund innovation while maximizing the ROI of their mission-critical ERP systems and enjoying better support and partnership than ever before. A global provider of end-to-end enterprise software support, products and services, Rimini Street offers a comprehensive family of unified solutions to run, manage, support, customize, configure, connect, protect, monitor and optimize enterprise application, database and technology software – enabling clients to achieve better business outcomes, significantly reduce costs and reallocate resources for innovation.

More than 5,200 Fortune 500, Fortune Global 100, midmarket, public sector and other organizations have relied on Rimini Street as their trusted enterprise software solutions provider, saving over US$7 billion along the way. By enabling companies to run their current software release for 15 additional years, giving them access to highly experienced senior engineers, improving support, and partnering through a global team whose interests are driven by client success, Rimini Street continues to prove that funding innovation can be as easy as saying goodbye to your current vendor support provider.

    Learn more and do more with Rimini Street: www.riministreet.com

Shut the front door! Why tolerate today’s levels of phishing attacks?

Why is it OK for hundreds of phishing attempts to get through to your users’ inboxes? This executive thinks it’s not acceptable. With Abnormal Security’s Tim Bentley.

    Organised groups of cybercriminals operate very much like any business. They won’t hesitate to deploy the latest technological advances in their pursuit of profit. In that respect, we’re seeing a huge rise in instances where artificial intelligence is used to gain users’ trust. For example, by leveraging tools like ChatGPT, even inexperienced cybercriminals can now write more sophisticated phishing attacks that better emulate actual conversational styles.

Once bad actors’ emails get into end-users’ inboxes, it seems there’s little that IT departments and cybersecurity teams can do. Clever, well-written messages are more likely to deceive their victims into acting on threat actors’ instructions and handing credentials to malicious third parties. The last line of defence is staff training to inform employees of the signs of a phishing attack; this is not always an effective strategy, particularly when the recipient isn’t concentrating or is under stress – in fact, in any number of edge cases.

    At the end of the day, phishing attacks will get through to end-user inboxes, and there’s little we can do about it, right?


    Not so, says Tim Bentley, Regional Director APAC at Abnormal Security. In an exclusive interview with Tech Wire Asia, he said of the presence of malicious content in any user’s inbox, “It’s been widely accepted that bad email like phishing emails get through to users. In turn, the last three or four years have seen a pretty much new industry – security awareness training – go from strength to strength. Technology had effectively waved the white flag because it can’t deal with the influx of malicious email.”

    But what if users never had to decipher whether their emails were legitimate or an attack? What if those phishing emails were stopped before they reached inboxes? The engine at the heart of the Abnormal Security approach to email security is behavioural artificial intelligence, which uses an organisation’s email as a learning body to baseline known ‘normal’ behaviour – including user-specific communication patterns, styles, and relationships – and detect deviations that may denote malicious activity.

    “For example, if I receive an email from a vendor that’s been compromised – I’ve got no idea that vendor has been compromised – but the source IP address is actually from Bulgaria, which doesn’t tally with how that vendor normally deals with me. There’s language in [the email] that indicates an abnormality. There’s banking information that doesn’t line up with their normal bank information[…] and so forth. All those signals can be pulled into making a more informed decision about the legitimacy of the email. […] At our fingertips now, within milliseconds, we have a mountain of evidence to be able to say, ‘well, this is abnormal!’ before it ever reaches my inbox.”
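To make the idea concrete, here is a minimal sketch of multi-signal scoring along the lines Tim describes. The signal names, weights, and threshold are invented for illustration – they are not Abnormal Security’s actual model.

```python
# Minimal sketch of multi-signal email anomaly scoring (illustrative only).
# Signals, weights, and threshold are assumptions, not Abnormal Security's model.
from dataclasses import dataclass

@dataclass
class VendorBaseline:
    usual_countries: set       # where this vendor normally sends from
    usual_bank_account: str    # account details seen in past invoices
    typical_phrases: set       # phrases common in past correspondence

def anomaly_score(email: dict, baseline: VendorBaseline) -> float:
    """Combine independent deviation signals into a single score in [0, 1]."""
    signals = {
        "unusual_source_country": email["source_country"] not in baseline.usual_countries,
        "bank_details_changed": email.get("bank_account") not in (None, baseline.usual_bank_account),
        "unfamiliar_language": not any(p in email["body"].lower() for p in baseline.typical_phrases),
    }
    weights = {"unusual_source_country": 0.4, "bank_details_changed": 0.4, "unfamiliar_language": 0.2}
    return sum(weights[name] for name, fired in signals.items() if fired)

baseline = VendorBaseline({"SG", "AU"}, "123-456-789", {"per our contract", "invoice attached"})
email = {"source_country": "BG", "bank_account": "999-999-999", "body": "Urgent: pay to our new account."}
if anomaly_score(email, baseline) >= 0.5:  # assumed quarantine threshold
    print("Abnormal - quarantine before it reaches the inbox")
```

The point of combining weighted signals rather than relying on any single check is exactly the one Tim makes: no one clue is conclusive, but together they form a mountain of evidence.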

    Companies with extended supply chains are particularly vulnerable. Many malicious actors will target smaller companies and use those, once compromised, to attack bigger companies. Tim says, “It’s not a spoofed email, it’s a compromised email, which is much more difficult to detect, because it’s going to pass all the normal authentication methods.” So spotting it, and subsequently blocking it, needs a smart system to look under the surface.


“We’ll take as much intelligence as we can about how people work, and we use that to determine behavior,” said Tim. “For internal employees, it’s not just from their email, but from Microsoft 365 as a whole, as well as any other tools that the customer has integrated, like CrowdStrike and Okta.

    “More recently, we started protecting Slack, Zoom and Teams, which give us different insights as well. So, for example, let’s say there’s a CFO based in Singapore, and she travels to Hong Kong and Sydney fairly often. She uses an iPhone 13 and her device for work is a ThinkPad. Now, for the first time, the data is telling us that she has popped up in Nairobi, on an Android device over a protocol that bypasses MFA – Abnormal can detect this anomalous behavior, determine it to be suspicious activity, and then remediate her account. Without that background knowledge, any protective shield can’t be effective.”
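The same baselining logic applies to account activity as to email content. A toy sketch of the travel-and-device check described above, with field names and rules invented for demonstration:

```python
# Illustrative sketch of the account-activity check described above.
# Field names and rules are invented for demonstration purposes.
KNOWN_LOCATIONS = {"Singapore", "Hong Kong", "Sydney"}
KNOWN_DEVICES = {"iPhone 13", "ThinkPad"}

def is_suspicious(event: dict) -> bool:
    """Flag sign-ins that deviate from a user's learned travel/device profile."""
    return (
        event["location"] not in KNOWN_LOCATIONS
        or event["device"] not in KNOWN_DEVICES
        or event.get("mfa_bypassed", False)  # legacy protocols that skip MFA
    )

event = {"location": "Nairobi", "device": "Android", "mfa_bypassed": True}
if is_suspicious(event):
    print("Anomalous sign-in - remediate the account")
```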

    Note that Abnormal Security doesn’t claim a 100% hit rate.

    As Tim says, “You can’t rest on your laurels. But we can raise the bar so high that it becomes an event if something does get through, rather than an acceptance that hundreds [of phishing emails] will get through a day. If any other cyber security defence layer let you down dozens or even hundreds of times a day, you know you’d change it, but somehow, it’s accepted with email.”

    As part of the proof of concept, the Abnormal platform spends a week learning from an organisation’s last 90 days of email activity to demonstrate the emails that it would have flagged against what was, by default, let through. It’s a “non-invasive proof-of-concept that connects via API and doesn’t interfere with current processes,” Tim said. “It’s the front door, right? It’s being left ajar at the moment. We’re talking about closing it.”
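In spirit, that retrospective proof of concept is a replay: score the last 90 days of already-delivered mail and report what would have been flagged. A sketch, reusing the illustrative anomaly scorer from earlier – again, not Abnormal’s actual pipeline:

```python
# Sketch of a retrospective proof of concept: replay historical mail through
# an anomaly scorer and collect what would have been flagged.
# Illustrative only - not Abnormal Security's actual pipeline.
from datetime import datetime, timedelta

def replay(messages: list, score_fn, threshold: float = 0.5) -> list:
    """Return messages from the last 90 days that would have been flagged."""
    cutoff = datetime.now() - timedelta(days=90)
    return [m for m in messages if m["received"] >= cutoff and score_fn(m) >= threshold]
```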

    To learn more, or start with a proof of concept, contact the local team. You can also download the CISO guide to generative AI attacks and discover how cybercriminals use generative AI tools like ChatGPT to create more effective email attacks.

    The post Shut the front door! Why tolerate today’s levels of phishing attacks? appeared first on TechWire Asia.

    The innovator of flash memory goes from strength to strength https://techwireasia.com/2023/06/nand-nvme-flash-ssd-best-toshiba-memory-drives-review/ Tue, 20 Jun 2023 04:03:39 +0000 https://techwireasia.com/?p=229860 We look at Kioxia, part of the Toshiba group, a company that innovates and continues to break new ground in memory & storage performance and reliability.

Keeping up with the ever-increasing data requirements of business can feel like an uphill struggle. Yet succeeding in the hunt for power and speed in digital products can make all the difference when it comes to besting competitors. And beyond performance, executives want to ensure their data is safe and secure.

    This is why it is so essential to get the most reliable memory products behind you from the very start.

Enter Kioxia – formerly Toshiba Memory – the inventor of revolutionary NAND flash memory 36 years ago. The technology allowed more ‘memory cells’ – the units set to either ‘0’ or ‘1’ to represent data – to be packed onto a storage device than ever before, lowering the cost per bit and raising the speed of data transfers. It also reduced erase and write times, and offered better vibration resistance because, unlike its predecessor the spinning hard disk drive, it has no moving parts.
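The cost-per-bit arithmetic is straightforward: the same wafer cost spread over more bits means each bit costs less. A back-of-the-envelope sketch, with all figures invented for illustration:

```python
# Back-of-the-envelope cost-per-bit calculation.
# All figures are invented for illustration - not Kioxia's actual numbers.
def cost_per_gigabit(wafer_cost_usd: float, dies_per_wafer: int, gigabits_per_die: int) -> float:
    """Cost of one gigabit of storage from a single wafer run."""
    return wafer_cost_usd / (dies_per_wafer * gigabits_per_die)

# Doubling density spreads the same wafer cost over twice the bits,
# halving the cost per bit.
print(cost_per_gigabit(5000, 500, 256))  # lower density
print(cost_per_gigabit(5000, 500, 512))  # double density: half the cost per bit
```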


    Since then, the Japanese company has stayed ahead of the curve when it comes to memory solutions, which have been implemented in everything from microSD cards for smartphones to data center-class SSDs.

Its Yokkaichi Plant, covering more than 694,000 square meters, is one of the world’s largest flash memory production facilities, and has been implementing artificial intelligence (AI) in its processes to maximize efficiency. This includes machine learning that spots defects and failures, and robots that automate workflows.

    In February, the company was named one of the Clarivate Top 100 Global Innovators, recognizing its ever-growing intellectual property (IP) collection.

    At the cutting edge of flash memory

    Key among Kioxia’s innovations is 3D flash memory technology, created in collaboration with Western Digital. In these storage products, the memory cells are arranged vertically, stacked one on top of the other, rather than in a two-dimensional plane. In March, the two tech giants unveiled the eighth-generation BiCS FLASH, which has 218 layers and offers greater cell capacity on a smaller die.

The CMOS circuitry wafer and the cell array wafer are manufactured separately before being bonded together, allowing for “the industry’s highest bit density”, according to Kioxia CTO Masaki Momodomi. As a result, BiCS FLASH offers fast read/write speeds of over 3.2 Gb/s, a 60 percent improvement on the previous generation. It can also operate in both quad-level and triple-level cell modes, meaning it can be implemented in premium SSDs as well as PCs and data centers.

    Keeping your data in the right hands

Kioxia has also been focusing on the robustness of its memory products, one example being the new EXCERIA PLUS Portable SSD. It comes with an SSD Utility software tool that lets users check storage health, install updates, and change settings, helping to keep their data safe. The drive can also be password-protected at firmware level.

    Storage solutions for all

    But Kioxia’s innovations don’t just have applications for the business world. A high-definition photo stored in a smartphone can recall a vivid, human memory, and all the emotions that come with it. This ability is priceless for many everyday users of Kioxia products. These include high-performance memory cards for portable devices, ensuring there are no tangible limits to how many holiday snaps you can take. But if you do run out of space, Kioxia also offers SSDs (with SATA and NVMe options) for personal computers, as well as USB thumb drives with up to 256GB of storage to allow customers to share their memories easily.


    The company’s technology has been used in developing mobile storage solutions for years, helping consumers experience their most cherished moments all over again. Moreover, many are now using the AI chatbot ChatGPT as their go-to search engine, whether to help draw up a work-related presentation or discover a new dinner recipe.

    Kioxia is active in this space too, developing so-called ‘Memory-Centric AI’, which promises to be more energy efficient and less inclined to perpetuate bias. Memory-Centric AI differs from conventional AI in that it stores its training data externally and pulls only from the relevant parts to complete tasks.

    Yasuhito Yoshimizu, from Kioxia’s Institute of Memory Technology Research and Development, said: “Conventional AI solves arithmetic problems by using a calculator to do the calculations. Memory-Centric AI, on the other hand, has an index listing all of the calculation methods and answers, so it can look up the answers and figure out which page to go to in order to solve any given problem.”
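A toy sketch of that lookup idea – answers living in an external store, with the system retrieving only the most relevant entries per query – might look like the following; the embedding function and data are placeholders, not Kioxia’s design.

```python
# Toy sketch of memory-centric lookup: answers live in an external index and
# only the most relevant entries are retrieved per query.
# Embeddings, data, and scoring are placeholders - not Kioxia's design.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in embedding: a pseudo-random unit vector derived from the text's hash."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# External memory: (key, stored answer) pairs, embedded once up front.
memory = [
    ("how to add fractions", "Find a common denominator, then add the numerators."),
    ("how to multiply matrices", "Take row-by-column dot products."),
]
keys = np.stack([embed(k) for k, _ in memory])

def lookup(query: str) -> str:
    """Return the stored answer whose key is most similar to the query."""
    sims = keys @ embed(query)  # cosine similarity, since vectors are unit length
    return memory[int(np.argmax(sims))][1]

print(lookup("how to add fractions"))
```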

Despite dropping the familiar Toshiba name in 2019, Kioxia has maintained its status as the leading developer of flash storage technology. While some tech giants, like Intel, have moved away from the area, flash memory remains the element of the IT stack that keeps both business and consumer tech working at high speed.

To learn more about Kioxia’s product range, or how the company is making waves in memory technology, click here.

    The post The innovator of flash memory goes from strength to strength appeared first on TechWire Asia.

    The data you need to sharpen your edge – Refinitiv https://techwireasia.com/2023/05/the-data-you-need-to-sharpen-your-edge-refinitiv/ Wed, 24 May 2023 09:10:26 +0000 https://techwireasia.com/?p=229014 Can companies that deliver value-added data and analytics to screens across the financial services industry make a substantive difference to corporate outcomes?

    The financial services industry (FSI) lives and dies on accurate information, rapidly and reliably delivered where it’s needed. Accuracy, speed, and reliability equals edge, and the edge is what breathes life into the FSI every day. Delivered consistently over time, edge equals insight – and that can be the difference between a good day and a bad day, a good steer and a bad steer.

Even governments, well known for being slow on technology uptake, are realising that open financial data helps drive economic growth. McKinsey & Company states, “Analysis suggests that the boost to the economy from broad adoption of open-data ecosystems could range from about 1 to 1.5 percent of GDP in 2030,” and, “research suggests that more than half the potential value remains inaccessible, particularly the value that financial institutions could gain directly through greater efficiency and reduced fraud costs.”

    But as the amount of data available grows hugely, the edge gets harder to find, because value gets harder to see in a data-blizzard. Even when big FSI companies invest heavily in technological solutions like data lakes, simply capturing data doesn’t give you the edge. It doesn’t turn data into actionable value.

That means you can frequently end up paying through the nose for technology that, while nice to have, is of little practical value to your day-to-day financial decisions. There’s undoubtedly valuable material in there somewhere – but it would take longer to sift through the massed data to find it than the insight would ultimately be worth.

    Focusing the information firehose

That’s where companies like Refinitiv come in – providing a combination of contextualized insight for the decisions you need to make day-to-day, and the most useful, valuable data as and when you need it, steering you towards profitability. Whatever your role in the FSI, quality data, targeted to your needs and delivered in the time frame you want, can cut through the data-blizzard and give you back the edge. Where data lakes become stagnant, curated data streams stir the waters.

    We spoke to Matt Eddy, Head of Real-Time Customer Managed Services at Refinitiv, an LSEG business, about how the company adds value to companies in the FSI, even if they have their own data lake.

    “A lot of our customers across the FSI have absolutely invested heavily in data lakes. Have they done it right? Possibly… possibly not. And, if you get it wrong, that’s when your data lake becomes a data swamp. If you haven’t properly governed your data inputs, you end up with lots of data of minimal actual use. In a lot of these companies, their data lake volumes aren’t shrinking, they don’t want to throw data away.”

    Refinitiv’s mission is, he said, “To simplify and standardize companies’ relationship to data.”


    What that means is that Refinitiv is ‘slicing and dicing’ through the available data, to feed only relevant, pre-normalised, value-added material to screens across the FSI – without bringing spurious information that needs to be parsed again by in-house systems.
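Conceptually, that curation step filters raw records down to the relevant instruments and normalises vendor-specific fields into one consistent schema before they reach the consumer. A minimal sketch, with field names invented for illustration rather than drawn from Refinitiv’s actual data model:

```python
# Toy sketch of feed curation: filter to relevant instruments and normalise
# raw vendor records into one schema. Field names are invented for
# illustration and do not reflect Refinitiv's actual data model.
RELEVANT = {"AAPL.O", "9984.T"}  # instruments this desk actually cares about

def normalise(raw: dict):
    """Drop irrelevant records; map vendor-specific fields to a standard shape."""
    instrument = raw.get("RIC") or raw.get("symbol")
    if instrument not in RELEVANT:
        return None  # spurious data never reaches the screen
    return {
        "instrument": instrument,
        "price": float(raw.get("TRDPRC_1") or raw["last"]),
        "currency": raw.get("ccy", "USD"),
    }

raw_feed = [
    {"RIC": "AAPL.O", "TRDPRC_1": "189.30", "ccy": "USD"},
    {"symbol": "XYZ.N", "last": "12.01"},  # filtered out as irrelevant
]
curated = [r for r in (normalise(m) for m in raw_feed) if r]
print(curated)
```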

    Developing a method of parsing data, or having it curated before consumption, is becoming hugely important. In 2021, a survey of financial professionals commissioned by Google Cloud found that 93 percent of exchanges, trading systems and data providers offered cloud-based data and services and that all planned to offer new cloud services that included derived data. Data ingestion and use is, it’s said, more common on the buy-side, but it’s still a huge influence.

But there’s more to sharpening the data edge than mere access. The point of accurate, rapid access to FSI data is that you want it with as little fuss as possible. Translating that into a good customer experience – for end-users, traders, and everyone else – has been tricky in the past, because in less tech-enabled days there was only one way to do things, whether or not one size fitted all.

    DIY – if you want to

Refinitiv has a range of access methods that give its clients flexibility not only in their payment model, but also in their access and support needs.

    “As an example, with Real-Time – Optimized, which is our AWS-hosted feed, we’re rolling out a self-serve self-authentication, self-entitlement process,” Matt explained. “The idea is that once our clients have purchased the product, they’ll get access to a portal, they’ll create their own ID, add and remove permissions as they need and be able to monitor and manage that connection without any kind of engagement if they so wish. We’re also taking that one step further and looking to roll it into ecommerce channels as well.”

In other words, if you want to define the parameters of the data you get without being handheld through an induction process, you’re free to do that. But Matt was also keen to stress that for those who want old-style, end-of-the-line customer support and guidance, that remains available too. The point is that the FSI, like the rest of the tech-enabled world, is changing – and Refinitiv aims to be there for early adopters, leave-me-be’ers, and the “What does this button do?” crew who quite like their hand held when the way they get their daily data changes for the better.
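As a purely hypothetical illustration of what that self-service flow could look like from the client side (the portal URL, endpoints, and payload fields are invented – this is not Refinitiv’s actual API):

```python
# Hypothetical sketch of self-service entitlement management over a REST API.
# The base URL, endpoints, and payload fields are invented for illustration;
# they are not Refinitiv's actual API.
import requests

BASE = "https://portal.example.com/api/v1"       # placeholder portal URL
AUTH = {"Authorization": "Bearer <your-token>"}  # credential issued after purchase

def add_permission(user_id: str, dataset: str) -> None:
    """Entitle a user to a dataset without raising a support ticket."""
    resp = requests.post(
        f"{BASE}/users/{user_id}/entitlements",
        json={"dataset": dataset},
        headers=AUTH,
        timeout=10,
    )
    resp.raise_for_status()

def usage(user_id: str) -> dict:
    """Monitor a connection's consumption, e.g. against purchased credits."""
    resp = requests.get(f"{BASE}/users/{user_id}/usage", headers=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()
```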


When it comes to value-added FSI data, the chances are high that companies will want access to everything there is, so that staff with different priorities can shape their own feeds for their own use. And just as Refinitiv tries to cover the waterfront in user support, the same is true of its pricing models.

Much like cell phone contracts, Refinitiv’s data services give customers the option either to buy the credits they think they’ll need for a given period and stay within those limits, or to buy a kind of “all you can eat” package and access any available data as many times as they like for as long as their contract runs.
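The choice comes down to a simple break-even calculation, sketched below with invented prices, since actual rates are contract-specific:

```python
# Break-even between pay-per-credit and flat-rate data plans.
# Prices are invented for illustration; actual rates are contract-specific.
CREDIT_PRICE = 2.50       # cost per data credit
FLAT_MONTHLY = 10_000.00  # "all you can eat" monthly fee

def cheaper_plan(expected_credits_per_month: float) -> str:
    metered = expected_credits_per_month * CREDIT_PRICE
    return "credits" if metered < FLAT_MONTHLY else "flat rate"

print(cheaper_plan(3_000))  # credits: 3,000 * 2.50 = 7,500 < 10,000
print(cheaper_plan(5_000))  # flat rate: 5,000 * 2.50 = 12,500 > 10,000
```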

Whether you need strong, dependable data on the latest trends, or proven analytics to help you see the market more clearly, companies like Refinitiv take you through the data-blizzard and remind you what’s actually worth knowing for your bottom line – a cure for the snow-blindness that too much data can bring.

    They’re a way of helping companies re-sharpen their edge in the financial services industry. And they’re continually evolving to meet that industry’s increasingly complex needs.

    For more info on how Refinitiv can offer you real-time data insights, click here.

    The post The data you need to sharpen your edge – Refinitiv appeared first on TechWire Asia.
