DevOps Archives | TechWire Asia
Where technology and business intersect
https://techwireasia.com/podcast_categories/devops/
Mon, 24 Feb 2025 11:31:56 +0000

Low-Code and the Smart, Future-Proof Data Fabric
https://techwireasia.com/podcast/low-code-financial-insurance-development-australia-new-zealand-apac/
Tue, 02 Jul 2024 09:19:31 +0000


Show Notes for Series 03 Episode 59
This podcast is produced in conjunction with Appian.

The business benefits of using a low-code development platform are many and varied, from faster development times to an increased involvement of business stakeholders in the development process – meaning a better alignment between desired outcomes and the emerging software.

In this episode of the Tech Means Business podcast, we talk about low-code with a focus on the insurance sector, detailing where and how insurers get maximum value from enterprise apps that are quick to build and quickly iterated on.

With increasing governance and statutory guidelines for the financial industry, low-code can help bring data policy in line with legislation, too, plus create a data fabric that’s fit for the future.

It turns out that many of the prerequisites of an effective low-code environment are the same as those needed for a meaningful implementation of AI. With Luke Thomas (Area Vice President, Asia Pacific and Japan, Appian) and Dean McIntosh (Enterprise Account Executive, Insurance), we chat about AI’s use in low-code and in the wider enterprise, too.

Appian’s website:
https://appian.com

Appian’s solutions specifically for Insurers:
https://appian.com/industries/insurance/overview

Luke Thomas on LinkedIn:
https://www.linkedin.com/in/lukeathomas/

Dean McIntosh on LinkedIn:
https://www.linkedin.com/in/dean-mcintosh-921b8815/

Host, Joe Green on LinkedIn:
https://www.linkedin.com/in/josephedwardgreen/

NFI Industries and Original Software: delivering best-practice UAT
https://techwireasia.com/podcast/the-best-uat-user-acceptance-testing-automation-collaboration-platform-devops/
Thu, 09 Nov 2023 10:17:49 +0000
User Acceptance Testing: a critical phase in the development and evolution of any software. We talk to Original Software and its client, NFI Industries.


Show Notes for Series 03 Episode 50
This podcast is produced in conjunction with Original Software.

Software not only has to work as designed; it has to be designed so it works the way users want it to. User acceptance testing is the critical part of software development that ensures a solution (or an update or addition to an application) becomes a powerful tool for its end users: functional, easy to use, and (hopefully) bug-free.

Our guests today on the Tech Means Business podcast are Original Software, a company that makes market-leading software testing solutions, and NFI, a supplier that specializes in providing powerful software to logistics organizations worldwide.

With Jenny Wilson, VP of Automation Support at NFI, and Colin Armitage, CEO of Original Software, we talk about automation, building test libraries, picking the right user testers, and some of the intricacies of UAT – user acceptance testing.

Although the presence of cloud computing platforms suggests that user testing is somehow standardized, the truth is that uses of any piece of software differ so much that every change, every feature, and every application needs to be tested in the right way by the right people, and at a resource cost that won’t alarm the finance department.

You can find out more about Original Software’s test suites here:
https://originalsoftware.com/

NFI, suppliers of technology to supply chain and logistics companies, is here:
https://www.nfiindustries.com/

Colin Armitage is on LinkedIn:
https://www.linkedin.com/in/colinarmitage/

Jenny Wilson:
https://www.linkedin.com/in/jenny-wilson-25842a173/

And Joe Green (v2.13rc4) is here:
https://www.linkedin.com/in/josephedwardgreen/

Adding secure low-code to the developer’s toolkit
https://techwireasia.com/podcast/low-code-fast-secure-compliant-developer-focus-tools-best-review-audio-podcast/
Wed, 17 May 2023 13:36:14 +0000
Quickly iterative, secure, and fully compliant: low-code platforms are the new power tools for developers. We talk about reworking and expanding legacy code, creating new applications fast, and how to start out with OutSystems.


Show Notes for Series 03 Episode 37
This podcast is produced in conjunction with OutSystems.

With the pressure on teams of developers to iterate quickly and bring projects to production status, low-code frameworks can be the difference between success and failure.

In this episode of Tech Means Business, we talk to Richard Davies, Director, Strategic Customers (APAC) of OutSystems, about how dev teams in the region can use low-code to produce secure and governance-compliant code that’s up and running faster than using traditional tooling.

We cover different use cases for low-code, too, like updating legacy code, rebuilding existing applications to be more scalable, and adding scalability to seemingly immovable, business-critical systems.

If your organization is looking at low-code, there are ways to test the water, like day-long jump start sessions, plus multiple online resources to absorb information and see what’s possible.

Get started with OutSystems in the cloud or on-premise, beginning here:
https://www.outsystems.com/

Jump start sessions for mixed teams of developers and stakeholders here:
https://www.outsystems.com/events/jump-start/

Other resources, media, and reading:
https://www.outsystems.com/learn/

Richard Davies is on LinkedIn here:
https://www.linkedin.com/in/richdavies1/

Joe “Hello World” Green is here:
https://www.linkedin.com/in/josephedwardgreen/

Design, Code, Deploy: building apps with OutSystems low-code platform
https://techwireasia.com/podcast/low-code-development-citizen-developer-enterprise-devops-enterprise-podcast-s03-e17/
Mon, 03 Oct 2022 18:31:49 +0000
The developer’s role is changing, and so is the developer, with new tools like AI code-buddies and no-code making more possible, faster.


Show Notes for Series 03 Episode 17

This podcast is produced in conjunction with OutSystems.

The big talk in enterprise software development circles is around low-code taking some of the hard work out of the development process. Some solutions out there cater to both “citizen” and “traditional” developers, and OutSystems has evolved that paradigm with high-performance low-code.

Our Tech Means Business podcast guest is Richard Davies, the Director of Digital Transformation for the APAC region. Richard is a stalwart of enterprise-scale software development and someone whose experience informs his role, advising and helping multinational companies craft production-ready apps in record time.

We talk about partnerships between development functions and other stakeholders and how high-performance low-code is the obvious evolution of Agile methodology, where producing an MVP (minimum viable product) takes a fraction of the time, thanks to (relatively) simple application development tools.

While “Harry from Accounts” may not yet be able to create an ERP in a week, those days may not be too far off. But for now, Richard throws down the gauntlet: attend one of OutSystems’ one-day boot camps where you can “build an app in a day” or, try the platform for free in your own time. Put your battered O’Reilly volumes back on the shelf!

Try out OutSystems from here:
https://www.outsystems.com/Platform/Signup

Enterprise-grade app in a day? Go for it!
https://www.outsystems.com/training/classroom-training/developing-outsystems-web-applications-boot-camp/

Richard Davies is here on LinkedIn:
https://www.linkedin.com/in/richdavies1/

Joe Green can be dragged and dropped here:
https://www.linkedin.com/in/josephedwardgreen/

Fireside Chat at NGINX SPRINT 2.1
https://techwireasia.com/podcast/nginx-f5-api-proxy-podcast-apac-sprint-two-point-one-podcast-s02-e30/
Fri, 17 Dec 2021 16:08:53 +0000


Show Notes for Series 02 Episode 30

This podcast is produced in conjunction with NGINX.

From its earliest incarnations as the first serious challenger to the Apache web server, NGINX has become the go-to platform for modern applications, used for app management, API mediation, proxying, load-balancing, security, and more.
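Two of those roles, reverse proxying and load balancing, can be shown in a minimal, illustrative nginx.conf fragment; the upstream addresses, port, and server name below are placeholders, not details from the episode:

```nginx
# Sketch: NGINX as a reverse proxy balancing traffic across two backends.
upstream app_backend {
    least_conn;                 # send each request to the least-busy backend
    server 10.0.0.11:8080;      # placeholder backend hosts
    server 10.0.0.12:8080;
}

server {
    listen 80;
    server_name example.com;    # placeholder domain

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The same server block is where API mediation and TLS termination would typically be layered on, which is why NGINX ends up in so many roles at once.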

This podcast is the audio-only version of the fireside chat, hosted by our very own Joe Green, that preceded the APAC-focused virtual conference, SPRINT 2.1, an extension of the global Sprint 2.0 held in August. Guests were Rob Whiteley, General Manager of NGINX, Burzin Engineer, Co-founder and Chief Reliability Officer of PhonePe, and Sumit Malhorta, Chief Information Officer of Times Internet.

We covered topics like hardware vs. software load balancing, the company’s commitment to the FOSS core of NGINX, application deployment, and the experiences guests had moving up to the paid tier, NGINX Plus. With today’s technology landscape dominated increasingly by containers (“the new VMs” – Rob Whiteley), service meshes, Kubernetes, and secure application delivery, NGINX already underpins many of the world’s leading platforms.

Listen in to discover why and learn more about this important component in many of today’s business technology platforms, from home testing labs up to global enterprises.

Read more about the NGINX Sprint 2.1 (virtual) here on Tech Wire Asia:

The APAC’s NGINX SPRINT 2.1 (virtual): Reserve Your Place Now

Watch the event on demand:
https://www.continuouslearningevents.com/en/event-registration

Rob Whiteley, General Manager of NGINX is here:
https://www.linkedin.com/in/rwhiteley/

You can find Burzin Engineer of PhonePe:
https://www.linkedin.com/in/burzinengineer/

Sumit Malhorta of Times Internet fame:
https://www.linkedin.com/in/buzzsumit/

And a reverse-proxied Joe Green lives here:
https://www.linkedin.com/in/josephedwardgreen/

SUSE and the business of containers
https://techwireasia.com/podcast/suse-open-source-containers-docker-rancher-harvester-podcast-s02e24/
Mon, 02 Aug 2021 09:33:36 +0000
Open-source giant SUSE is leading the pack in the advancement of microservices. We get the inside track on the open-source technology.


Show Notes for Series 02 Episode 24

This podcast is produced in conjunction with SUSE.

The great promise of containerized applications and services is one of true platform agnosticism. Microservices are quick to spin up, knit together, and scale, are less resource-intensive than full-blooded virtual machines and can be deployed on hybrid infrastructure as easily as on a single bare metal instance.
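As a concrete sketch of how lightweight such a containerized microservice can be, a minimal image is defined in a few lines; the base image and binary name here are hypothetical placeholders:

```dockerfile
# Sketch: a minimal microservice image. "my-service" is a placeholder for
# any statically-built binary; alpine keeps the image small.
FROM alpine:3.19
COPY ./my-service /usr/local/bin/my-service
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/my-service"]
```

An image like this spins up in seconds and can run unchanged on bare metal, a VM, or a hybrid cloud, which is the platform agnosticism described above.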

As organizations start to explore the options that containers might offer them, we talk to Vishal Ghariwala, CTO for APJ and Greater China, from SUSE about the advantages of the technology, the power of open-source, and the leadership that SUSE is showing in the sector.

Although the German software giant has long been active in the enterprise computing space, its acquisition of Rancher Labs in December 2020 has positioned it nicely. It’s rapidly becoming the de facto solution for containerization deployed in all areas of production, from IIoT in edge environments to large, elastic roll-outs for mixed and hybrid clouds.

Vishal has a history with Red Hat, Intalio, and IBM, and his cloud-native and open-source background serves him well as an ambassador for both containerization and the SUSE variants on Linux and open-source technology. We also touch on some projects to watch from SUSE’s development labs that are making waves, including Harvester, the first genuinely open-source, non-proprietary hyperconverged platform.

If you’re interested in how containers’ advantages might change the way you think about DevOps and enterprise software in general, this episode of the Tech Means Business podcast is for you.

SUSE Rancher:
https://www.suse.com/products/suse-rancher/

Kubernetes Management for Dummies:
https://www.suse.com/lp/kubernetes-for-dummies/

Rancher Desktop:
https://rancherdesktop.io/

Harvester:
https://harvesterhci.io/

Vishal can be found on LinkedIn here:
https://www.linkedin.com/in/vishalghariwala/

Joe Green, TMB’s host is here:
https://www.linkedin.com/in/josephedwardgreen/

Putting AI into IT Operations in 2021
https://techwireasia.com/podcast/ai-it-operations-artificial-intelligence-provisioning-development-network-compute-podcast-02e15/
Mon, 22 Feb 2021 11:44:06 +0000
In conversation with Jayanti Murty of Digitate about selling AI into IT teams.


Show Notes for Series 02 Episode 15

IT professionals have thick skins that deflect most marketing, especially when it comes to AI. After all, your phone apparently uses AI to frame the perfect picture (really?) and your TV knows and adapts to your taste in viewing thanks to “advanced machine learning” (nope).

So selling AI into IT ops must be an uphill struggle. But with the right message and — most importantly — demonstrable use-cases and proof, that’s just what Digitate does. Jayanti Murty talks to us about perceptions of AI, the verticals most ready to adopt, and how machine-speed iterative algorithms actually improve the daily lives of IT staff.

Digitate launched in 2015 and was built by software engineers and more PhDs than you’d ever expect to see in a single room at any one time. A platform that’s “made its bones”, then, but we still have questions!

Jayanti on LinkedIn can be found here:

https://www.linkedin.com/in/jayantivsnmurty/

Joe’s page is here:

https://www.linkedin.com/in/josephedwardgreen/

The cloud’s been Replicated on-premise
https://techwireasia.com/podcast/cloud-aws-on-premise-aws-azure-devops-deployments-replicated-podcast-02e14/
Fri, 05 Feb 2021 15:39:20 +0000
Want all that cloud goodness on-premise? Perhaps your apps should be Replicated.


Show Notes for Series 02 Episode 14

The technology press is full of references to “stampedes to the cloud” and other assorted hyperbole. What’s undeniable is that, although the cloud may not be ideal for your use case, all the latest DevOps tooling is focused on the cloud.

If you want hybrid, multi-cloud or vanilla on-premise deployments and still need all the latest and greatest K8s, Rancher.io and S3-styled storage, then perhaps consider Replicated.

We talk to Grant Miller about all things DevOps, cloud, and optimization, and about how many organizations just don’t want to leap cloudwards, especially when, increasingly, it’s the passport to lock-in. Security, data integrity, governance, previous investments: your reasons to keep it local may differ, but the dangers remain the same. Have you ever costed what it would take to get all those petabytes out of AWS?

Our guest’s Linkedin page is this one:

https://www.linkedin.com/in/grantlmiller/

And Joe’s only social media foray is right here:

https://www.linkedin.com/in/josephedwardgreen/

ARMing the low-power data center
https://techwireasia.com/podcast/arm-data-center-chips-x86-aws-silicon-podcast-s02-e11/
Wed, 02 Dec 2020 14:42:30 +0000
In conversation with Chris Bergey, VP of Infrastructure Line of Business at ARM Holdings, with whom Joe discusses ARM’s future in data centers, routers, washing machines, and maybe even a computer or two.


Show Notes for Series 02 Episode 11

Buy a computer chip with some serious “grunt” and you go for Intel, right? Or AMD, perhaps more likely these days. Either way, it’s x86 architecture. That’s what data centers and clouds run on, after all — these are the serious computers for grown-ups.

The little voice of dissent you hear, however, hails from Cambridge, UK: a company called ARM, recently the subject of a multi-billion-dollar acquisition by NVIDIA. The voice is telling you that not only are its chip designs as performant as those power-hungry x86 chips, but they run cooler, too.

It’s (nearly) just a case of recompiling your applications, and like magic, your cloud bill just fell by 30% overnight. Your power consumption and carbon footprint just shrank, too, and maybe your machine-learning algorithms got a boost. All in all, you look like a better IT professional.

Even Apple’s in on the act, although it’s not passing the savings it makes by no longer buying Intel chips straight on to consumers. Nevertheless, Apple now designs its own ARM chips (or rather, it uses “Apple silicon”).

With a unique licensing model, anyone can make their own ARM processors or variants thereof; in fact, as Chris Bergey of ARM says, the more the merrier! This is a guy who looks forward to reading his Twitter feed as delighted comments flow through his timeline.

There will soon be 200 billion ARM chips in the wild. What’s the fuss about? What will it mean to your business? Will every enterprise buy hundreds of Mac Minis? Joe and Chris enthuse together in this podcast.

Chris Bergey on LinkedIn:
https://www.linkedin.com/in/chrisbergey/

And Joe’s LinkedIn is here:
https://www.linkedin.com/in/josephedwardgreen/


Full transcript available.

Joe Green (host): Welcome to the Tech Means Business podcast. This is a series of conversations that I like to have with interesting people in the worlds of technology and also of business and hopefully where those two areas of industry come together.

This week, I’m absolutely delighted to be joined by Chris Bergey of ARM or ARM Holdings, I guess. They’ve been acquired for an enormous amount of money recently by Nvidia, or at least a decent portion of the company has. And so it’s a good time to sort of catch on to that wave of interest and maybe talk about all things ARM and, in this case, well, let’s see where it takes us.

Now, as you can probably tell by my dulcet tones, I’m an Englishman, terribly proud of it. And I remember, back in the 80s, a little town called Cambridge, which is probably most famous for its university. But back then, it also became known for a few technology companies that were born there; ARM is one of them. And, of course, Sinclair was another one. And Sinclair’s ZX81 was the first computer I had access to: interesting fact there. So Chris, welcome to the podcast. It’s a real pleasure having you on the Tech Means Business podcast.

Chris Bergey (guest): Joe, it’s my pleasure. It’s really exciting to be here today.

Joe Green (host): So Chris, ARM’s in the news at the moment for all sorts of good reasons, really. And I’m personally incredibly excited about the prospects of now and the future, scaling the global heights, I guess. You must be absolutely thrilled with the progress that ARM has made over the last few years?

Chris Bergey (guest): Yeah, it’s been an amazing ride. And I’ve obviously not been there for a lot of it; we’ve actually just celebrated our 30th anniversary of being founded in what we like to call it…we like to refer to as a turkey barn! It was some of the early offices that the team worked out of. And it has been an amazing ride.

And I think one of my favorite statistics is over that 30 years, it took us almost 26 years for ARM processors to ship basically 100 billion devices. And we are going to actually hit our next 100 billion in four years. So going from what took 26 years to get, just in four years accomplishing again, gives you an idea of the trajectory and how pervasive ARM has become in many of the electronics areas.

Joe Green (host): I think it’s very easy, isn’t it, to become almost blasé about the extent of technology like this? Just looking around the room now, I’m looking at an audio amp, and it’s got a glowing display on it, which probably has an ARM processor in there to show that display. Down the hallway, there’s a washing machine and the TV, and all these things have got ARM processors in them. Is that part of the marketing schtick? I mean, it’s almost like talking about Linux, isn’t it? “It’s everywhere, but you just don’t realize it”?

Chris Bergey (guest): Well, I mean, I think we sometimes do, I guess, refer to that, but I think it’s really just about the evolution of semiconductors and the world that we’re living in today. As you highlight, everything has a microcontroller in it, or something that scales a lot higher than a microcontroller, when you think about the smartphone in your pocket or whatever.

And so ARM has really ridden those waves, right. And I think that the two biggest waves that ARM has ridden are the “things”, or what we call IoT today. So all of the devices where we added microcontrollers or intelligence, and now we’re adding connectivity, whether that be… I think I have a connected Instant Pot, and my washer and dryer and my sprinkler system actually have Wi-Fi in them, right, something I would never have thought of. I worked quite a bit in Wi-Fi as that boom was happening. And then also the smartphone wave.

And I think that’s really these computing waves that occur. It’s really those billions of devices that have helped to mature the ARM architecture and make it as widespread as it is. And, like you said, it’s hard to find a device that doesn’t have some amount of ARM technology in it today.

Joe Green (host): Absolutely. And I mentioned some of those hidden devices, or at least smaller devices, which are pretty much everywhere. Is that going to be a challenge, do you think, for ARM? How are you going to break out of that mold of “we power small devices”? How are you going to make that transition from powering the little things to, for instance, powering data centers?

Chris Bergey (guest): Well, you’re absolutely right that those are not the areas that ARM is most thought of in. But we actually started over ten years ago, seriously making investments and having a desire to participate in those markets, and to participate in them in a meaningful way.

One of the examples I would give is, over ten years ago, ARM started to work with some national agencies and really looked at what it would take for supercomputers to be built on ARM, and I’m sure most of your audience is familiar with the supercomputer space. But it’s, it’s gotten some extra noise these days with COVID testing and some of the modeling and things like that, that we need these supercomputers to do.

And so we’ve been on a long journey, a ten-year journey, to try to achieve that. And we’re actually very proud that this year, the world’s number one supercomputer (they keep a list of the Top 500 supercomputers), number one by a large margin, 2.8 times faster than number two, is based on ARM. That’s the Fugaku system in Japan that has been built there by RIKEN. So, yes, it’s been something we aspired to do. It’s been a journey. We’ve had some ups and downs. But we believe that it is happening today. And we think that it’s an exciting future going forward in data centers for ARM.

Joe Green (host): Perhaps people aren’t aware of the importance of data centers. I mean, to us, we just pick up our phone and we tap on a screen, and stuff happens, in 95% of the cases, of course. Actually, what’s happening is happening distantly, in these huge data centers. So obviously, it’s a wildly important market. Why is ARM technology so well suited to the data center environment, in your opinion? Obviously, the route-one answer is that it’s low power and therefore low heat. But is there more to it than that?

Chris Bergey (guest): Well, it’s funny you bring that up, because it’s very similar, actually, to this ten-year journey I was just talking about, from when ARM aspired to get into infrastructure and to get into data centers.

Front and center was the pitch you just said: hey, we’re low power, hey, we know it’s gonna be great, right? It’s gonna be great. And, quite honestly, it wasn’t; it wasn’t met with a lot of excitement from the operators. And, of course, the cloud wasn’t what it is today, with the concentrations you talked about. But it really was: hey, we’re plugged into the wall, we’re not battery powered, what we really care about is performance. And we need to really run these workloads.

And so that was really what this ten-year journey I’ve been mentioning was about: going to attack those two things. I think we felt like we needed to do a lot of work on the software side, to make sure that the software ecosystem, and the types of software workloads that the cloud providers care about, could work well in those environments. And on the other side, we needed to have a very competitive processor core. And that was something that we knew we could get to; a lot of it was just some dollars and some focus. So that’s really what we focused on.

And I think if you look at our penetration or some of our early success, most recently, it’s the fact that we have closed the performance gap. And people are seeing on cloud-native workloads or cloud software that basically Hey, we’re getting as good or similar performance as we would get from the leading alternative processors, in the x86 world.

Ironically, once we hit that performance threshold, all of a sudden, now people are saying: hey, that power thing is cool, and something we really find valuable. Because, as you mentioned, these data centers are just amazing: multiple-football-fields type of size, and I’m talking about English football, not American football. They’re just enormous, and one of the biggest constraints they actually have is how big a power station they can build to feed these beasts. And really what you do is you start breaking that down: you maybe have 500 megawatts, or just some huge, enormous amount of power you’re providing. And of course, that gets broken down into how much power you can deliver to each rack. Well, if that’s your fixed metric, and if, because of the power density of ARM, you can offer 3,000 virtual CPUs versus, let’s say, 1,000 virtual CPUs, that’s a big deal, because that’s obviously three times the amount of computing, and maybe three times the amount of revenue dollars. It really changes the economies of scale.

And so one of the biggest things that is really getting exciting for us is that we’ve closed the performance gap, and we’ve closed the software ecosystem [gap]. And now people are able to take advantage of the innate power advantage that we have, to even increase the density.
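The rack-density arithmetic Chris walks through can be sketched as a quick back-of-envelope calculation. All the numbers below are illustrative assumptions: the 500 MW and the 1,000-vs-3,000 vCPU figures come from the conversation; the per-rack power budget is a placeholder.

```python
# Back-of-envelope sketch of the rack-density argument in the conversation.
# All figures are illustrative assumptions, not vendor data.
site_power_w = 500e6        # a hypothetical 500 MW facility, as quoted
rack_power_w = 10_000       # assumed fixed power budget per rack (10 kW)
racks = site_power_w / rack_power_w

vcpus_per_rack_baseline = 1_000   # the "let's say 1,000 virtual CPUs" case
vcpus_per_rack_dense = 3_000      # the denser, lower-power case

total_baseline = racks * vcpus_per_rack_baseline
total_dense = racks * vcpus_per_rack_dense

# Same power envelope, three times the compute (and maybe revenue).
print(f"{total_dense / total_baseline:.0f}x compute for the same power envelope")
# prints: 3x compute for the same power envelope
```

The point is that once rack power is the fixed metric, any per-rack density gain multiplies straight through to total site compute.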

Joe Green (host): Chris, you’ll have to forgive me if this is a terribly ignorant question. Is there something to be said for current “fashionable” development methods, I’m thinking about microservices and containers? Is that type of development more suited to ARM than more monolithic x86 development environments? Or is my opinion just one that’s coming from ignorance and hearsay?

Chris Bergey (guest): No, no, no, you’ve absolutely hit it on the head. So, if you look back over the last, say, 10, maybe 10-15 years, there’s been a push for what people would call cloud-native software development. And it’s actually not just a set of languages, but also a methodology, using something called continuous integration, continuous development tools. And the idea is that you’re basically creating software that is abstracted from the hardware, right? And of course, you can even go to function-as-a-service or software-as-a-service, as you’re highlighting, but even cloud-native really thinks about things like containerization.

And there is this abstraction from the instruction set. Or at least there is this ability, for example, in a CI/CD (continuous integration, continuous development) environment, when you check in your code and you do a build every night, to actually build it on ARM as much as you build it on x86.

And so, as we’ve made our investments, we’ve become a first-class citizen in that world. So yes, with cloud-native software, it’s pretty quick for you to be able to compile on ARM just like you would have compiled on x86. Cloud-native software, we estimate, is about 50% of the workloads in the cloud today. And it’s clearly the dominant growth factor if you look at workloads going forward. So cloud-native software has been a big enabler for ARM, and it’s one of the things that is really helping us make some significant inroads.
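As a rough sketch of what “build it on ARM as much as on x86” can look like in a nightly pipeline, a single multi-architecture image build does both targets in one pass. The image name and registry below are hypothetical placeholders, and a real pipeline would need Docker with the buildx plugin plus emulation or native runners.

```shell
# Nightly CI step (sketch): build the same image for x86-64 and Arm.
# "registry.example.com/myapp" is a placeholder, not a real registry.
PLATFORMS="linux/amd64,linux/arm64"
IMAGE="registry.example.com/myapp:nightly"

# Printed as a dry run here; a real pipeline would execute the command.
echo docker buildx build --platform "$PLATFORMS" --tag "$IMAGE" --push .
```

Dropping the `echo` on a machine with buildx configured runs the real build; the same source compiles for both targets, which is the abstraction from the instruction set Chris describes.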

Joe Green (host): I’ve now got this awful image in my head: a very sweaty Steve Ballmer, back in the day, running up and down the stage screaming “developers, developers, developers”. Is that not the key to it, really? I mean, are we gonna be seeing a lot of rabble-rousing, a lot of running up and down the stage on behalf of ARM? Is it that simple: just recompile, and applications run on these chips that don’t take as much power as x86 chips?

Chris Bergey (guest): Yes, I feel like we need to do that. But what’s really exciting is that we have some of the biggest industry heavyweights doing that for us. So, we have some of the industry leaders driving that story, and one of the more vocal that I can talk about is Amazon and AWS. Last year at re:Invent, their big conference, one of the key messages in the keynote was their commitment to ARM, and how they believe that their Graviton and Graviton2 offering was going to change the future of cloud computing. And that is really based on our Neoverse IP. And what they were promising to customers was basically a 40% advantage in price-performance versus the fifth generation of x86 instances that they had offered.

So they are out there promoting that as well. In China, we’ve got quite a bit of traction too, and there are other cloud providers making some inroads. But what’s exciting is what you actually see; for me, it’s exciting just to log on to my Twitter feed.

Honestly, every morning there’s the excitement that’s building around that ecosystem. Literally, I go on my Twitter, and it’s different companies saying: hey, I moved to Graviton 2, and it only took us a week; hey, we’re seeing 20% performance gains; I reduced my Amazon bill by 30%. It’s just unbelievable feedback in real-time.

And so it’s really a flywheel at this point in time. I think there was a perception that going to ARM was going to be difficult. And now you see this ecosystem of tech leaders saying: hey, I’ve done it, it’s not that hard, and I’m getting a great advantage.

It’s really starting to get great traction, and the whole ecosystem is taking a look at it. And, as you mentioned, cloud computing is an awesome power. With so many of us going through this work-from-home period, I think we’re amazed at the way infrastructure has been able to adapt, to let workloads shift so quickly. And that’s the beauty of the cloud.

The analogy I give for the cloud is that it’s like your credit card, right? It’s great that you have all this extra spending power, and you can get all this extra compute. But the challenge is, you get the bill at the end of the month, and it catches up with you.

And so for someone like Amazon to be able to offer a 40% price-performance advantage: man, for the CFOs, CEOs and CIOs, cloud spending is top of mind. And if it’s portable, if it’s a discount they can get just off the top, without having to renegotiate, and have it available worldwide from Amazon, which has over eight locations worldwide offering their ARM servers...

It is a game-changer. So I would say yes, we’re screaming, and yes, we’re jumping up and down. But the awesome thing is, this is ARM, and it’s about an ecosystem: we have so many more people jumping up and down and shouting for us. As I said, I feel like it’s Christmas morning every day when I get to look at my choice of server. Yeah, it’s great. It’s really fun.
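As an aside on the arithmetic: price-performance is performance delivered per dollar, so a 40% price-performance advantage means you pay roughly 1/1.4, about 71%, of the price for the same delivered performance, i.e. a bill reduction of around 29% at constant load. A quick back-of-envelope sketch (the instance prices below are invented placeholders, not actual AWS rates):

```python
def price_performance(perf_units: float, hourly_price: float) -> float:
    """Performance delivered per dollar spent."""
    return perf_units / hourly_price

# Hypothetical numbers chosen only to illustrate the arithmetic.
x86 = price_performance(perf_units=100, hourly_price=1.00)
arm = price_performance(perf_units=100, hourly_price=0.714)  # ~40% better

advantage = arm / x86 - 1
print(f"price-performance advantage: {advantage:.0%}")   # prints "40%"

# Equivalent bill reduction for the same delivered performance:
saving = 1 - 1 / (1 + advantage)
print(f"bill reduction at constant performance: {saving:.0%}")  # prints "29%"
```

The point Chris makes follows directly: the saving arrives "off the top", without renegotiating anything, which is why it lands on the CFO's radar.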

Joe Green (host): At present, Amazon is clearly leading the field in terms of market share for cloud computing, and over the next few years, obviously, ARM is going to be pushing further into data centers. Do you think, and I’m just trying to think of the best way to put this, do you think that ARM chips will be coexisting for quite a long time alongside x86 platforms? So that it’ll almost be two separate offerings, on two different types of chips, for enterprise users? On the one hand there’ll be, I don’t know, ARM, and on the other hand there’ll be the “serious” cloud users. How do you see it going?

Chris Bergey (guest): Well, I can focus on what we’re gonna be offering, right? And I think what we offer is choice. I think that’s really why we’re gonna do quite well: if you look at the trends, you see a lot of movement towards vertical integration.

I think that means tightly coupled hardware and software, and you see more companies, just like Amazon, building their own silicon. Well, that’s something that fits ARM’s market model quite well.

And so I think that is something the partners appreciate about us: they can build the product they want.

One thing that people maybe don’t realize is that cloud data centers today are built very differently from the way on-prem data centers were. I mentioned the 500-megawatt power figure, but it’s more than that. It’s moved away from something very monolithic, where you rack and stack 2P servers, floor to ceiling, row after row of 2P. That’s not how you build a cloud data center today; you actually have very much specialty hardware. The world used to be built around general purpose, and general purpose was great because, as big as the enterprise server market was, it really needed scale, and the right solution was a compromise, one that allowed it to have the scale that it had.

When you go to cloud scale, you really start seeing the benefits of domain-specific compute. That’s where you see things like GPUs or ML accelerators coming into these cloud data centers. You have services like S3, where storage is disaggregated; you’ve got flash disaggregated. So you really end up with racks of specialty hardware or specialty processors, not just general-purpose processors.

And again, that really fits the ARM model. As well, with some of the Moore’s Law scaling challenges we have going forward, that customization is going to be even more necessary to get the computing benefits we need, to keep up with computing demands while keeping power consumption, costs, and other factors in check.

Joe Green (host): Now, of course, at this point, if I may, I think we should make some distinctions for our listeners. We’ve mentioned company names: AMD and Intel both produce a particular type of chip, largely the same in terms of overall structure, but very different between vendors. Perhaps you could explain to our listeners quite what the difference is between the, air quotes, traditional x86 processor creators or vendors, and an organization like ARM?

Chris Bergey (guest): Sure. So ARM is, at its core, an IP development company. What that means is that we are the curators of the ARM architecture, the ARM instruction set, and we create cores, basically soft IP that implements the ARM processors.

But then we hand that to a semiconductor manufacturer. It could be people like NXP, Broadcom, Qualcomm, MediaTek, you go down the list, and they put that core into their product.

So in my example, if Broadcom is building a router chip, a Wi-Fi router chip, they’ll get a core from ARM, they’ll put it in that Wi-Fi router chip, and it will run the Linux code or whatever the router chip is doing. And they’re able to leverage that ARM ecosystem.

But if you’re a router manufacturer, you don’t buy the chip from ARM; you buy it from Broadcom. So that’s very different from Intel or AMD, where going to AMD or Intel is a one-stop shop: basically, they provide you a chip and maybe even a reference [inaudible] for a motherboard.

ARM’s approach is much more a collaboration through our ecosystem, which we think is quite powerful, because we’re able to cover so many more alternatives. If you want a big one, a fat one, a skinny one, a red one, a blue one, you can find a partner and get exactly what you want.

Joe Green (host): And that’s really the power of the ARM ecosystem. Of course, the thing about the ARM licensing model is that any OEM, any manufacturer, can buy a license and make their own chips. Do you think that’s the way forward for these big operators?

Chris Bergey (guest): I don’t think there’s going to be one size that fits all. There is clearly value in tightly coupling; there’s this move towards owning more of the stack, from hardware to software, and for certain products there are companies with a certain scale and the desire to go down that path of creating their own semiconductors.

One of the things with the Moore’s Law scaling challenge I mentioned is that not only are we no longer getting the transistor gains we used to, to reduce power or increase performance, but the costs are starting to go nonlinear. Or maybe they’ve always been nonlinear, but we’re getting to an even steeper part of the curve.

So I think there’s this natural balance: the cost to build semiconductors continues to increase, which means you’ve got to have a certain market size, and I think that’s where the traditional semiconductor players really fill that requirement. I don’t see that requirement going away.

I think it will just morph into other areas. So, as I mentioned, there is interest in customization, and it’s something that ARM supports, but it is an expensive endeavor; you need significant scale and expertise.

So I think that both models will exist; I don’t see it going one way or the other in a huge way.

Joe Green (host): …and there comes the music. And that, of course, means that we’re gonna have to leave it there. It only remains for me to say, Chris Bergey of ARM, Senior Vice President of Infrastructure Line of Business, thank you ever so much for joining us on the Tech Means Business podcast.

Chris Bergey (guest): Joe, it’s been my pleasure. I look forward to talking to you again soon.

Joe Green (host): And so I turn to you now, listeners. Thanks so much for joining me. We’ll be discussing more with other people from ARM over the next few months, I hope. I’ve got a few things lined up, so watch this space. Until that happens, and until the next episode of the Tech Means Business podcast, thanks for joining me, and I hope to hear from you soon. Bye.
[/showhide]

The post ARMing the low-power data center appeared first on TechWire Asia.

]]>
Business insights from smart monitoring? AppDynamics joins the dots https://techwireasia.com/podcast/network-infrastructure-business-processes-appdynamics-enterprise-s02e07/ Thu, 01 Oct 2020 07:50:14 +0000 https://techwireasia.com/?post_type=podcast&p=205103 Mapping complex IT systems is one thing, but how to translate what you find into how to improve the business? That's quite another. But AppDynamics can show us the way.

The post Business insights from smart monitoring? AppDynamics joins the dots appeared first on TechWire Asia.

]]>

Show Notes for Series 02 Episode 07

This podcast is produced in conjunction with AppDynamics, a Cisco company.
Many enterprises run IT systems that are necessarily complex. So complex, sometimes, that the number and variety of services, applications and connections comes as a shock, even to the IT Department.

Finding the cause of an outage or slow-down, therefore, can be frustrating and time-consuming. Additionally, it can lead to all sorts of internal “politicking” (to put it nicely).

But even once there’s a coherent “map”, correlating what the business wants with what the IT stack can provide is another headache, and one that’s not easily achieved. Unless, of course, you happen to use AppDynamics.

In this talk with Jim Cavanaugh from the company, we look at how organizations are joining up business aims and objectives with the technical nitty-gritty of networks, clouds, connections and all the whirring boxes in data centers.

Connect with Jim Cavanaugh on LinkedIn:
https://www.linkedin.com/in/jim-cavanaugh-b3157a/

Joe Green, the podcast hostess with the mostest on LinkedIn:

https://www.linkedin.com/in/josephedwardgreen/

 

Full transcript available.
[showhide type=”transcript” more_text=”Click to read.” less_text=”Click to hide” hidden=”yes”]
Joe Green (host): Welcome to the Tech Means Business podcast. Now, each episode, I like to talk to interesting individuals from companies and organizations who I might feel have got something to say and to contribute to this space, where we can talk about this thing called technology that’s at the heart of every business.
Today, we’re talking about customer experience (I know it’s something of a buzz phrase) and how that customer experience relates to, and is affected by, issues like network infrastructure; the influence customer experience takes from the choice of cloud provider; or even, at the end of the day, the choice of Ethernet cables in the company. Now, it sounds odd!
But here to explain all this, and to help us look at the effects technology can have on a business, I’m delighted to be joined by Jim Cavanaugh from AppDynamics. Jim is in charge of APAC and Japan for AppDynamics. So Jim, please tell us a little bit about yourself, what you do and what AppDynamics does.

Jim Cavanaugh (guest): Thanks, Joe, for having me, I really appreciate it. I’ve had the pleasure of living in Singapore for almost five years now with my wife, and I have two young girls at home.
On the work side, I have the responsibility and pleasure of running Asia Pacific and Japan for AppDynamics. Really, the magic of where AppDynamics plays is providing the correlation between end-user behavior and business outcomes. Think about our personal lives: whether you’re using a mobile device to get online and leverage a banking service, order food delivery, maybe order a car, or order any type of good or service, from the consumer’s point of view, what’s that experience like?
Now for the company providing that service, and for the IT group providing that service, that’s a pretty complicated set of things that happened behind that. So while we may push two, three, or four clicks on a mobile app, suddenly we have food at our doorstep, or we have a car waiting for us, so we don’t have to get in the rain, or we’ve moved money from one side of the world to the other. And we expect that to happen flawlessly! There’s a whole bunch of things that happen behind that application, that allow for those services to work in real-time. And AppDynamics helps companies to fix those and optimize those mobile apps in real-time.

Joe Green (host): So as you say, behind those services, behind the application that might be on your smartphone, or behind the website, there’s a whole stack of stuff, isn’t there? There’s a database and the web server and, I don’t know, a kind of credit-checker. And all these different components, if you like, have traditionally lived in a company’s data center. But as far as I understand it, that’s changed, hasn’t it? I mean, the topology of the network, the topology of systems, is spread out over the cloud, you know, via APIs. Have I got that right?

Jim Cavanaugh (guest): You nailed it! And the challenge is that many of the IT tools and systems were built to monitor monolithic applications, built pre-microservices and pre-cloud services. So all the things you just articulated are things that many of the systems IT people are forced to use really weren’t built to monitor and tackle.
And the second difficult thing for an IT organization is that most organizations have dozens and dozens of tools. So even if they find a tool that will work in one component of their environment, they then have to correlate across multiple tools to draw some analysis. And as we know, the world is so dynamic today, you may take out your phone, and you go to book that dinner or lunch. And if the application is at all slow, then you’ll just select another service to book that dinner or lunch.
And same thing in the ride service business. If you’re waiting, and you’re trying to beat the rain, most people have two, three, four different car services that they can potentially leverage. So the pressure on the IT group to be able to resolve problems in real-time is immense. Because from our personal experience, you and I and others are thinking literally in microseconds of: will that “application/service/ whatever I’m using” be slow? So I’m gonna go use something else.

Joe Green (host): Yeah, I think that’s a powerful analogy. And I think there’s more to it as well, I think that when you start using an app, or a website or a service, there’s a significant amount of buy-in, you know: you need to give people your email address, you need to create a password, and then you have to wait for the email to come back and confirm your account, and then you’re up and away on your new app or your new service. Now, if what you’ve bought into, if you like, through that process, this thing that you’ve committed to, if it doesn’t work, or if it’s slow? I mean, that’s incredibly frustrating. And isn’t that really what enterprises at the end of the day need to avoid?

Jim Cavanaugh (guest): Yeah, you’re right. And, you know, a couple of statistics to validate what you just said. In the APAC App Attention Index survey, 54% of consumers said they’d actually pay more for an app or service that offered a better digital experience. So in spite of all the work we go through to download that app and put in all of our information, and again, most people have multiple applications that would let them do whatever they’re trying to do, consumers are acknowledging that they’ll pay more for that experience. Another interesting stat: 85% of consumers said that over the next three years, the digital experience will actually drive their selection of the brands they choose.

Joe Green (host): So I guess that means it’s the back end, as well as the GUI, the graphical user interface of apps and websites, that will drive decisions. And it’s those elements, really, that are more effective in terms of customer experience and brand loyalty than, say, enormous ad campaigns. Because we’ve all done it: we’ve switched to a new product or a new platform because of a big ad campaign, but if the experience we get is, for want of a better word, fairly crappy, then we can certainly get turned off the whole brand. It reminds me of talking to my non-technical, if you like, family members, who will show me an app or a website and say: this is rubbish, this is garbage, because, I don’t know, I tapped here and nothing seems to be happening. And actually, it’s just one element, in what’s often a very complicated and complex back end, that taints an entire brand.

Jim Cavanaugh (guest): That’s absolutely correct. And the difficult thing, again, for that IT organization is being able to find whatever that one thing is. Is it a third-party service? Is it a service provider? Is it that line of code that was added in? Is it a security component? There are hundreds and hundreds, and many times thousands, of different things that could go wrong in that stack. Meanwhile, the consumer wants that problem resolved literally in seconds. So, back to: what do we do, and how do we help customers? By leveraging our technology, which is inclusive of AI and ML, we can actually allow our customers to anticipate a problem before it happens, so they can resolve it before you and I or other consumers actually have that experience.
So think about the concept of ordering a car in the rain, and you’ve got massive load as everybody decides they want to order a car rather than take another mode of transportation. Our technology, in real-time, can, number one, identify that there’s massive load and that there’s going to be a potential performance issue. Then, prior to that issue actually being experienced by the end-user, [the AppDynamics customer] can leverage our technology to move a workload, as an example. So now we’ve remediated the problem before the consumer even experiences that degradation. Maybe even more impactfully, we can actually quantify for our customers, again in real-time, what the financial impact of that was.
It probably makes intuitive sense that if the mobile app for that car ride service is performing poorly in the rain, they’re probably going to lose some revenue. But wouldn’t it be nice if they knew in real-time exactly how many people experienced a performance degradation, what those users then did, and what the potential revenue impact was, whether they abandoned the site or chose to do something on the site other than book that car?
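The "anticipate a problem before it happens" idea boils down to comparing live telemetry against a rolling baseline and flagging deviations early. Here is a deliberately simple sketch of that pattern; it illustrates the general technique, not AppDynamics' actual algorithm:

```python
from statistics import mean, stdev

def detect_spike(samples, window=10, threshold=3.0):
    """Flag a load spike when the newest sample deviates from the
    rolling baseline by more than `threshold` standard deviations."""
    if len(samples) <= window:
        return False                         # not enough history yet
    baseline = samples[-window - 1:-1]       # the window before the newest point
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return samples[-1] != mu             # flat baseline: any change is a spike
    return abs(samples[-1] - mu) > threshold * sigma

# Steady request rate, then a sudden surge (everyone ordering cars in the rain).
history = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100]
assert not detect_spike(history)             # normal traffic: no alarm
history.append(250)                          # the surge arrives
assert detect_spike(history)                 # flagged before users churn
```

A real system would feed this from streaming metrics and trigger remediation, such as moving a workload, when the flag fires, rather than waiting for users to feel the slowdown.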

Joe Green (host): That’s really got me thinking, actually, about the rain issue. And I know it’s a kind of silly analogy, really, but it’s almost like we’re quantifying the cost of rain, literally, to the business; we’re actually putting empirical figures on it. I wonder if you’re the person to ask, because while you were talking I was thinking about a weather app I used to run on Android, now no longer available, called Dark Sky. It taps into local weather radar. And I thought, wow, wouldn’t it be powerful to pull in that minute-by-minute view from the rainfall radar and then actually be able to predict the financial impact of rain on a business, like the ride-hailing business we were just talking about? Because essentially we’ve got everything at our fingertips, haven’t we: we know it’s going to rain, we know whether our systems are capable of coping with peaks in demand, and we know, therefore, what the next downpour will literally cost us in dollars. It’s that type of intelligent big data, I think, that has some really exciting possibilities.
It’s gathering this information together in a business sense, which is usually where things like artificial intelligence can help, because computers are good at munching through large amounts of data.
So in AppDynamics’ case, you call it the Central Nervous System. Tell us a bit more about that particular implementation, about how that works in a business.

Jim Cavanaugh (guest): Sure, some of it goes back to the real-time pressure that IT organizations face. Customers continue to want more services, they want them faster, and they want seamless performance in the delivery of those services. And the back end is more complicated: as we talked about, IT organizations now have to deal with the challenges of cloud and microservices and all the other complex things in an organization that today deliver a service. In addition to that, customers are trying to make real-time business decisions.
So in your analogy of ordering a car, and the impact of rain and other factors on that: if we rely on human interaction, if we wait until humans actually have the time to go in and analyze all that data, then very likely the opportunity to leverage it has already passed.
So technologies like AppDynamics provide customers with the ability, in real-time, to leverage AI and ML to go off, without that human interaction, and make tweaks or changes to the IT infrastructure, so that the customer doesn’t feel any degradation in performance.

Joe Green (host): Yes, given an infinite budget and 10,000 staff, of course you can tweak your infrastructure based on data, because you’ve got 10,000 people sifting through endless Excel spreadsheets. But who’s got 10,000 people at hand? And I think the point about AI is that it can do just that: it can munch through data at a much lower cost than an equivalent number of human beings. And of course, it doesn’t need the bathroom, and it works all around the world, 24/7!

Jim Cavanaugh (guest): So if you think about it, in “the old days”, people would have monitoring technology that would give them green, yellow, red, and they would have some level of performance for their application or service. If things were green, that was good. The interesting thing is, green doesn’t tell them everything: if you increased the performance, if you actually gave your consumers a better experience, would you actually bring in more revenue?
So what AppDynamics can show, in real-time, is where consumers are having a better experience than what was identified as green, or optimal, in the legacy system, and how much more revenue is coming in from those customers specifically getting that better performance.
So as an example, if the application is now returning the experience to the user 10% faster than last time, we might all think all’s well. But maybe more users will use the service, maybe they’ll click a few more times, and the company running it can monetize that. What we can do in real-time is show the customer exactly that.
Think about a bank. We can show a bank, in real-time, that users who have an experience 10% better than your “green”, your normal, actually go on to complete the online application for a mortgage, a motorcycle, a car, a boat. Or take the food delivery example: while there’s a baseline that says the performance of the app is fine, the customers that have a 12% better experience than the normal average are actually ordering 17% more items.
So what the IT organization can then do is go to the business and say: instead of being a cost center, I could potentially be a revenue center. The data shows we can increase orders by 17% for this set of customers if we deliver the application 10% faster. And then the CIO can go to the CFO, the board, and so on, and say specifically: here’s the investment I need to provide that, because certainly there might be some investment associated with providing that faster service.
But in the past, that was guesswork: I’m gonna go build a battleship, my customers are going to come to my application, and I’m going to monetize it. Now we can actually think about it as a dial that you can use to “turn revenue up and down” based on the performance of your application, with the ability to quantify that in real-time.
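The correlation behind that revenue dial can be sketched very simply: group sessions by the response time users experienced, then compare order volume across the groups. The figures and bucket boundaries below are invented for illustration; a product like AppDynamics derives the equivalent from live telemetry:

```python
from collections import defaultdict

# (response_ms, items_ordered) per user session; invented sample data.
sessions = [
    (180, 9), (200, 8), (210, 9), (350, 7), (400, 6),
    (420, 7), (800, 4), (900, 3), (950, 4), (1200, 2),
]

def bucket(ms: int) -> str:
    """Classify a session by the response time the user experienced."""
    if ms <= 250:
        return "fast"
    if ms <= 600:
        return "normal"
    return "slow"

totals = defaultdict(lambda: [0, 0])  # bucket -> [session count, items ordered]
for ms, items in sessions:
    b = bucket(ms)
    totals[b][0] += 1
    totals[b][1] += items

for b in ("fast", "normal", "slow"):
    n, items = totals[b]
    print(f"{b:6s}: {items / n:.1f} items per session over {n} sessions")
```

With data shaped like this, the "fast" bucket orders visibly more per session than the "slow" one, which is exactly the kind of evidence an IT team can take to the business when asking for performance investment.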

Joe Green (host): Yes, gone are the days, if they ever existed, of course, when the IT function could go to the boss and say: look, there’s this new box that we want, and it’s got flashing lights on the front that light up in amusing colors. And we want it because it might work, or, you know, it might help, or, at the end of the day, we just want it because it’s coooool!
But these days, what you really need to do is go to the boss, to move up to the C-suite, if you like, and say: this system, service or framework that we’re going to buy is going to create an uplift in sales, or it’s going to cut costs here and here. And the point is, I guess, that the more figures, the more empirical data you can take to your boss, the more weight your argument holds.
And in that way, you begin to move IT away from being a cost drain, an endless succession of costs leaving the department, and actually turn it into more of a strategic function.

Jim Cavanaugh (guest): Absolutely. I had the CIO of a large retailer say (his words): by leveraging your technology, you’ve changed my relationship with my CFO! So what does that mean? He said: I used to have discussions with the CFO that started with “I think”. “I think if you invest in this project, I can do that.” “I think if you give me this amount of money, we can deliver that.” He said: now I go in, and the words that I use are “I know”. “Based on this data that I’ve leveraged from AppDynamics, I know that when our customers experience X, they spend Y.” And that’s drastically changed the relationship, because he and the CFO are dealing with real data, as opposed to guesses based on extrapolation.

Joe Green (host): Yes, and you can, of course, dive deeper into customer experience here, and begin to model customer behaviors and potential customer behaviors, and therefore the quality of customer experience.
If, for instance, we have a tenfold peak in demand, say at the end of Ramadan, or the beginning of a holiday season, or Couples’ Day, or rain, as we talked about earlier on, or the trains shutting down because of a fault on the track somewhere up the way on a particular day: then we can extrapolate from empirical data. (I love that word empirical, by the way!) We can see how that data affects infrastructure and how it affects resources, and therefore how we might add new infrastructure or change resources going forward. And that leads, of course, into this correlation between gathered data, or business activity, and the physical infrastructure and the physical facilities. How do we draw those strands together? What’s the best way of going about that?

Jim Cavanaugh (guest): Sure, there are a couple of things to orient on. The first one is this concept of an end-user journey, or a user journey. As opposed to thinking about the IT organization from the IT perspective, thinking in the silos of physical gear, software, connectivity, different data centers and third-party services, we think about the world from the end-user’s perspective, even when approaching our customers.
So what our technology allows our customers to do, is look at the journey from the mobile application, or the website if someone’s coming in via a website, all the way back through that entire spaghetti web, that complicated web that the customer has.
And as I mentioned, the first thing that we provide is this correlation between what type of experience the user has and what type of behavior follows, i.e., are they spending money or selecting a service, when things aren’t optimal, when something’s going wrong.
What our technology does in real-time is identify exactly where the issue is. So if you were trying to order ice cream tonight, for your kids’ after-dinner treat, and you went to the app and it wasn’t working, we could tell that delivery service that it’s actually a specific line of code, or a service-provider challenge. Or maybe it’s even your home situation, because you’re on your Wi-Fi and someone else is watching Netflix, someone else is working from home in the other room, and the kids are studying in their rooms. Whatever it is, we can tell them in real-time exactly where the issue is.
Then, in many instances (I use the example of workload optimization), we can offload, or remediate, that problem in real-time. In cases where there are challenges within the IT organization that need remediation, where they need to upgrade things or put more storage behind something, the customer would obviously have to go off and action some things, but [it] will tell them in real-time exactly where the challenge is, so that they can go off and remediate it.

Joe Green (host): Yeah, I think that’s a good example. And let’s take it as an analogy: a contention-ratio problem, if you like, where too many people are on a limited connection. You used the example of being at home, so it might be too many people on Netflix; I mean, the situation is the same.
But of course, the trick is finding out what the issue is. And its causes amongst not just a simple home, wifi network, and ADSL, but you know, amongst a wildly complex enterprise IT system.

Jim Cavanaugh (guest): The other thing is that people have, in the past, sort of guessed at what the consumer is going to demand. So an app is designed, and in theory [the developer is] going to provide some type of SLA for that app, [and] it gets pushed out to the world. And then people love it, or they don’t. Sometimes they don’t love it because there are some challenges with the app. Sometimes it’s the situation you just articulated, where there’s a bunch of contention at home. But sometimes the company just might have missed what the consumer is really going to demand. The challenge is that consumers want things right away, resolved in real-time. If there’s an issue, the application might be performing as designed, but the company may be missing a big business opportunity, because people are looking at that application and saying: well, it works, but it doesn’t work at the speed I want it to.
The other interesting thing, when we talk about applications and performance: three-quarters of people say that if they have an application and they don’t like the performance, they delete that app; they’re not going to take the time to go and tell anyone. So we think of applications in terms of, well, an application has a poor rating, or a poor score, in an app store or online. But the reality is, very few people will actually go and complain before deleting the application.
So the challenge, obviously for the app owner, is they create an application, they push it out, they start to get some data, but they may have a lot of people leaving or deleting their application or maybe they don’t even take the time to delete it, they just stop using it and start using something else.

Joe Green (host): Now in many cases at the lower end of the market, you know, losing 1% or even 5% of business for whatever reason, well, it's probably okay. And part of that's going to be that the cost of amelioration, the cost of fixing the cause of that 1% or 5% loss, is going to be higher than the lost sales.
Okay, so that's probably fair enough, I would say. But at scale, a 1% loss due to poor customer experience, which is what we're talking about here, can spell millions in lost revenue.

Jim Cavanaugh (guest): Absolutely.

Joe Green (host): Now I want to touch on another issue, and that's that of large systems, you know, bought and developed over time; the kind of [system] well rooted into an enterprise. We're talking really about packages like NetSuite, SAP, Salesforce, and the like, you know, ERPs: enterprise resource planning software. Now, if it turns out after our investigations that it's those proprietary, more closed systems that are the problem, I'm assuming there's not much I can do about it?

Jim Cavanaugh (guest): The first challenge associated with applications, whether it's SAP or another large application, is that very often the service, the app that's tied into it, is leveraging more than just the core SAP application. So the first opportunity for us to help is identifying, if it's an application that's leveraging SAP, what else it is leveraging; and, believe it or not, whether the IT organization has become too complex.
One of the things that we provide is an end-user journey map, documenting everything from the end-user all the way back through the complex systems. But very often, when we show that to an IT organization, their response will be: well, I didn't realize that application actually calls our credit check application; I didn't realize that application actually has a call to our external third-party provider! And most of them say, I didn't realize how complicated our application was!
So the first part is just mapping the topography, to allow a customer to figure out what components the application is touching. And then the second piece is, back to my comment before, while there are tools that allow you to optimize inside of SAP, or inside of Oracle, or inside of other areas, applications are typically going across a bunch of different components.
You really want to think about a tool that allows you to traverse multiple hops. In the past, what was happening is people would have their siloed tools. So there might be a database person that looked at their tool, a server or storage person that would look at an infrastructure tool, a networking person that would look at a networking tool, and they would all have their own view of the world.
But back to you and I, and the consumer: what we really think in terms of is, well, how fast did that application work for me? Was I able to order that car in the rain in a matter of seconds? We're not thinking about whether the network's up, or whether there's contention on the server, or whether the line of code inside the database, whether that's a packaged application or a custom-written one, is perfect or not. So it really goes back to having one tool that allows the IT organization to figure out, in real time, where the issue is. That's the first piece.
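The journey map Jim describes can be approximated from distributed-trace data: each recorded call carries the service that made it, and walking those parent-child links reveals the full dependency graph. A minimal sketch of the idea in Python (the span format and service names here are hypothetical, not AppDynamics' actual data model):

```python
from collections import defaultdict

def build_dependency_map(spans):
    """Derive a service-dependency graph from trace spans.

    Each span is a dict with 'service' and 'parent_service' keys:
    a deliberately simplified, hypothetical trace format.
    """
    deps = defaultdict(set)
    for span in spans:
        parent = span.get("parent_service")
        if parent and parent != span["service"]:
            deps[parent].add(span["service"])
    # Sort callees so the map is stable and easy to diff over time.
    return {svc: sorted(callees) for svc, callees in deps.items()}

# Example: one order request quietly fans out to a credit-check service
# and an external third-party gateway, the surprise Jim mentions.
spans = [
    {"service": "web-frontend", "parent_service": None},
    {"service": "order-api", "parent_service": "web-frontend"},
    {"service": "credit-check", "parent_service": "order-api"},
    {"service": "third-party-gateway", "parent_service": "order-api"},
    {"service": "sap-core", "parent_service": "order-api"},
]
print(build_dependency_map(spans))
```

Printed for the IT team, a map like this is exactly the "I didn't realize that application calls our credit check" moment: the hidden hops show up as edges.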
The second piece is baselining. Most businesses would have an appreciation that there are some periods of time when they have a stronger load on their environment than others. A dramatic example: we have a customer in Asia Pacific, a large government organization that collects tax payments and, obviously, gives out tax refunds.
As you can imagine, they have some acute pressure around the tax filing deadline, with people using their application. A Tuesday just prior to the deadline looks very different from a normal, regular Tuesday. What we can help them understand is the baseline: on tax day, over the past three tax days, the performance of the application has been X at this time of day, with this number of people filing and this number of people hitting the application. That allows them to do some really dynamic things around anticipating issues before they come up.
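The baselining idea reduces to a simple statistical test: compare a current reading against readings from the same period on comparable past days, and flag it only when it deviates well beyond normal variation. A toy sketch (the threshold and the sample numbers are illustrative assumptions, not anything from AppDynamics):

```python
import statistics

def is_anomalous(current_value, historical_values, threshold_sigmas=3.0):
    """Flag a metric that deviates from its period-matched baseline.

    historical_values: readings from the same time of day on comparable
    past days (e.g. 10:00 on the last three tax-filing days).
    """
    mean = statistics.mean(historical_values)
    stdev = statistics.stdev(historical_values)
    if stdev == 0:
        # No historical variation at all: any change is a deviation.
        return current_value != mean
    return abs(current_value - mean) > threshold_sigmas * stdev

# Response times (ms) at 10:00 on the last three tax days:
baseline = [420, 450, 435]
print(is_anomalous(440, baseline))  # within the normal band
print(is_anomalous(900, baseline))  # far outside it
```

The key point is the period matching: a tax-day Tuesday is compared to previous tax days, not to an ordinary Tuesday, so seasonal load spikes don't trigger false alarms.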
The third thing I talked about is the ability to dynamically, in real time, kick off actions. So I gave the example of the ability to optimize workloads by moving a workload from point A to point B, so that you can remediate a challenge in real time without an end-user being impacted.

Joe Green (host): Now, just to go back to that first stage, when you're trying to pinpoint a problem. There are going to be lots of different people involved in the IT function, each of whom is carrying their own rattle-bag of Bash scripts, tools, bits of software and methods designed specifically for their role: databases, or websites, or APIs, or DevOps, and so on and so forth. And of course, this brings us to the concept of the war room, where everyone has to sit down and try to get to the heart of an issue or a problem. Let's be honest about this: I think there's a good deal of finger-pointing that goes on. And really, that's the last thing a grown-up enterprise needs, isn't it, this kind of blame game, this endless war-rooming?

Jim Cavanaugh (guest): Yeah, you're absolutely right. We had a customer who explained it well. He said that AppDynamics was their flashlight. The war room was, in his words, people from different departments all in a dark room, all positioning, and he talked about MTTI as "mean time to innocence" as opposed to identification! His task in that room was to prove that it wasn't his organization's fault. So if you think about that war room, and you think about this concept of a flashlight, what our customers can do very quickly is identify the exact nature of the problem, and then figure out what the resolution is in real time.
What we see for customers, from an identification (MTTI) and a resolution (MTTR) perspective, is drastic reductions: very often more than 75% [in times to resolution].
And another example: we have a bank in Asia Pacific that, in their words, had spent more than three months in a weekly war room with not just IT but the different components of the business: marketing, security. They would spend several hours every week trying to diagnose what they deemed a very critical business challenge. They installed our software and, within 20 minutes, they were able to find the line of code. That day, they were able to release the fix, and the problem was solved.
So the ramification of being able to identify the root cause very quickly, and remediate it, is massive for many of these companies, both in increasing the top line and in decreasing the cost associated with remediating IT problems.

Joe Green (host): Now, if people want to go and have a look, maybe get a proof of concept for themselves and see what the possibilities are, what are the next steps they need to take?

Jim Cavanaugh (guest): People can go to the AppDynamics website, https://appdynamics.com, and they can download a free trial. We allow customers to use our technology for a couple of weeks as a proof point. In addition to that, we will work with the customer to help them identify the business value associated with remediating whatever the application is that they select.

Joe Green (host): Now, as ever on this podcast, we’re running out of time probably well before we’ve talked ourselves out. So it only really remains for me to say at this point, a big thank you to Jim Cavanaugh, from AppDynamics. It’s been a really good talk, and I think it’s going to appeal actually, to both business and IT professionals, which is of course, what Tech Means Business means — it’s all in the name, it’s what it’s all about. So Jim, thank you!

Jim Cavanaugh (guest): It was a pleasure, Joe, the pleasure was mine.

Joe Green (host): And I hope you can join me, you the listener next time we take a dive into the technology that underpins just about every business, from the one man band right up to the global multinational. Until then, take care and see you next time. Bye!

The post Business insights from smart monitoring? AppDynamics joins the dots appeared first on TechWire Asia.
