Agentic Edge

What actually matters in enterprise automation in 2026?

Automation Anywhere Season 1 Episode 7


Every week brings a new AI technique, a new protocol, a new framework claiming to change everything. So how do you actually decide what to build, what to wait on, and what to ignore entirely?

In this episode, Adi Kuruganti, Chief AI and Development Officer at Automation Anywhere, joins Micah Smith and Kate Ressler to share how he thinks about separating signal from noise, what it really takes to move agentic AI from POC to production, and the two contrarian bets his team is making right now on data architecture and conversational UX. Grounded, direct, and worth your time.

Welcome and Guest Intro

SPEAKER_04

Welcome to Agentic Edge, where we explore the frontier of AI agents, enterprise orchestration, and the architectures that are shaping tomorrow's intelligent enterprises. Now, the reality is that not everything that comes out every week in AI blogs and news feeds is actually a game changer, like they want you to believe. In today's episode, we're joined by visionary executive and product leader Adi Kuruganti, Automation Anywhere's Chief AI and Development Officer, who's going to offer us some insight into how he thinks about these initiatives and how he homes in on what really matters in AI in 2026. Adi, I want to start with the same question that we've asked almost every guest so far. How do you define AI agents?

SPEAKER_05

I define them as cognitive, probabilistic bots that can reason, think, and act. Act is the big part of it. And the world of automation, obviously, is about how you drive automation outcomes. So for a lot of the unstructured cognitive tasks, this is where AI agents are super powerful.

Agentic Process Automation

SPEAKER_04

Yeah. And in the context of Automation Anywhere, we're leading the way in agentic process automation. AI agents are one part of that story. So how do you think about where AI agents sit within an agentic process, and what does that mean?

From POC to Production

SPEAKER_05

Our entire purpose here is to enable our customers to automate mission-critical operations: order management, prior authorization in healthcare, KYC, anti-money laundering. There are just so many business processes that touch lots of different financial systems, legacy applications, modern cloud-based applications, but really business-critical systems. And automating these processes will drive significant value to our customers, whether it's operational efficiency, cash flow, regulatory compliance, or even new lines of revenue growth. So that's the focus of agentic process automation. When we think about how customers are going to automate these processes, it's a combination of deterministic processes, which is what our Orchestrator handles, kind of the brains as well as the system behind it, and the agents, which are really focused on automating more of those cognitive tasks where you're dealing with, for example, a lot of unstructured content. Maybe you have to look at a bunch of contracts and, based on the customer profile, figure out the right product set to offer that customer as part of a quote-creation scenario. Or you need to find the right loan rate based on a credit check, customer profile, previous orders, and a bunch of other information about the customer. Those are where these cognitive AI agents have a lot of value. So for us, it's a mix of deterministic and cognitive. I do think it's more the 80-20 rule, maybe 80% deterministic to 20% cognitive. But the technology is evolving so fast, it could be 80-20 the other way around very soon.

SPEAKER_00

What are the things that have stood out to you as changing the most, that indicate a really optimistic future?

SPEAKER_05

The proof is our customers deploying APA processes in production. That's kind of the ultimate goal here; that's how we know it's working. We have over 1,500 live deployments in production, and lots more POCs. Obviously, a big focus for us is going from POC to production. We launched AI agents a little over 18 months ago, and we now have over 3.5 million AI agent executions in production. So we've seen amazing growth, and we're just getting started. And with the adoption, you also see the evolution of the tech. There's a lot more around agent evals, so we are rolling out evaluations of agents and how agents are actually working. Some of this is design time, but also in production: as you see drift in the agent's behavior, how do you respond to that drift? Maybe escalate it to a human for review. So there are more and more governance, evals, and other capabilities coming in that make the efficacy of these agents far better. Because I think the biggest question our customers have is: agents are cool, AI is really cool, but if I'm going to automate my mission-critical processes and operations, it needs to be that good. It needs to be real, it needs to be reliable, always on. It can't be that one day it works at 90% and another day at 60%. You can lose cash flow, impact customer NPS, and have lots of other downstream impacts. So our focus is on improving the reliability and efficacy of these AI agents as part of these business-critical workflows.
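The drift-handling idea described here, watch agent behavior in production and escalate for review when quality degrades, can be sketched in a few lines. This is purely an illustration: the class name, window size, and threshold are invented for the example, not Automation Anywhere APIs.

```python
from collections import deque


class DriftMonitor:
    """Track a rolling success rate for an agent's production runs.

    Hypothetical sketch: a real system would also track latency,
    eval scores, and route flagged runs to human review.
    """

    def __init__(self, window: int = 100, threshold: float = 0.85):
        self.results = deque(maxlen=window)  # last `window` run outcomes
        self.threshold = threshold           # minimum acceptable success rate

    def record(self, success: bool) -> bool:
        """Record one agent execution; return True if drift is detected."""
        self.results.append(success)
        if len(self.results) < self.results.maxlen:
            return False  # not enough data to judge drift yet
        rate = sum(self.results) / len(self.results)
        return rate < self.threshold  # True means: escalate for review
```

With a window of 10 and a threshold of 0.8, ten successes report no drift, but once enough failures accumulate the monitor flags the agent for review rather than letting it keep running degraded.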

SPEAKER_04

With so many platforms claiming to be agentic now (I think every single application we use says it has some kind of agent play), how do you see Automation Anywhere standing out, and what's your approach? Because you're really responsible for a lot of how we're positioning ourselves and where we're going with this.

SPEAKER_05

So again, we have the lens of those mission-critical processes. In fact, how we think about what to build next is based on what outcomes we can drive to achieve that goal. Especially here in Silicon Valley, you have new agentic AI startups coming out every day, maybe five every day, not one. Again, it's about what problem you're trying to solve. And while there might be a lot of pure-play AI-native startups, you also have enterprise vendors incorporating AI agents, as they should, into their core portfolios. For us, it's how do we automate those mission-critical processes, and what are the tools? For example, masking PII data: how do you make sure that as work flows through a process, the AI agents do not have access to customer-sensitive data? Those are tools we will build into our APA system. So that's how we differentiate. We want to build, and partner with companies that, again, address the core outcomes we drive for our customers, which is automating those mission-critical processes.

SPEAKER_04

When we think about agentic tooling, I think there's really a spectrum here, right? There's personal productivity stuff, there's SMB, and then there's enterprise. And I know you talk a lot about Automation Anywhere being an enterprise application. Talk some more about how you see that being different and what separates Automation Anywhere as an enterprise platform.


SPEAKER_05

Yeah, so there are lots of different types of agents and lots of different types of use cases. We focus on those mission-critical, long-running business processes, which typically span application boundaries, system boundaries (legacy or cloud, it doesn't really matter), and department boundaries. So these are, again, like I said, mission-critical. Those are the use cases: order management, AML (anti-money laundering), KYC in banking, financial services, and manufacturing. Those are the kinds of use cases we care about. All of us use a lot of personal productivity tools, whether it's designing a new prototype using Figma Make or using ChatGPT or Copilot. And then the enterprise vendors have their own agents for their specific application use cases, what we'd call improving productivity within those applications, whether it's Salesforce, ServiceNow, or SAP; they all have great use cases. They're all very valid in their respective spots. And what we expect customers to do, and see customers doing, is using the right tool for the right use case. We're really looking at those mission-critical business processes: having the right enterprise guardrails, the governance, the observability of those agents, as well as the ability to act when you find there's drift or the process is not behaving the way you expect. Because if a KYC process goes wrong, you have bigger implications for the business, versus summarizing a case or summarizing an email, where you can still tolerate certain mistakes in a personal productivity context.

SPEAKER_04

Adi, I want to switch gears real quickly to talk about separating signal from noise. It seems like every week I see a bunch of stuff on social media and the blogs I read about the new hottest technique. Recently it was, oh, TOON versus XML versus JSON for communicating with an LLM. How do you think about these kinds of things? And I know this kind of stuff comes up for the engineering team as well. How do you separate the signal from the noise on what seems like a daily or weekly basis?

SPEAKER_05

There's gonna be a lot of that, and every day is gonna be a new day. I just go back to first principles: there's space for experimentation and discovery. But when we're building as a product team and a technology team, we don't want to go too far before knowing there's real value there. I like to wait until the offering is a little more mature, because we're talking about an enterprise setting; it's different from personal productivity, where I'm just using it for myself. As a head of product and tech, I want to use tools and new ideas that are a little more baked, versus just a PowerPoint. Again, whatever we use, we want to try it initially, maybe as a POC, test it out for a certain outcome, and then we'll decide if we adopt it. And we won't try everything. We'll wait for some things to get over the hype phase, and then we'll see, okay, should we try it out? And a lot of this, by the way, is bottoms-up. In many cases, we're using tools not because Adi decided we shall use this tool, but because a developer or a designer or a product manager was just trying something and started sharing it with their teammates. That's how people start using it, and then we might actually make it a standard.

Context Graphs and Data Moats

SPEAKER_04

What's really interesting about that is it speaks to the momentum an advocate can have for a particular platform or piece of software they're using, because they can say, hey, this is why I'm using it, this is why it's cool. And that has influence on you as an executive and on the rest of the team for what they want to use. And doing a POC that demonstrates value is really how you get to the point of getting approval for these things within your organization. Has there been a recent release or news development that you thought was overhyped and got more attention than it deserved? Or, the other way around, something that didn't get enough attention and that you're glad the team has started to look into and implement?

SPEAKER_05

I think something new, which is the latest hype in the market, at least in Silicon Valley, is context graphs, which essentially means you have an ontology of how work actually gets done across systems of record. You have knowledge graphs, which are essentially RAG systems, right? But now, more and more, and we've seen this since we launched our Process Reasoning Engine, the beauty, the kind of secret sauce, is when you can use all the transactional data and the decision traces to basically improve the outcome, or the output, of these agents. Because you can see how an agent is performing at runtime, and then you can use that to improve the next set of runs, right? That entire system is what folks are now calling a context graph. A year ago we called it the Process Reasoning Engine, but, you know, the terminology is getting around. I do think it's definitely high in the hype cycle in terms of terminology. Not a day goes by when I'm on LinkedIn without seeing five different posts about it. But I think there's something there, because we've seen it in how we built the Process Reasoning Engine, and we're still at, say, version one of that: by using both the ontology, i.e., the knowledge graph, as well as the transactional data, we're already seeing 30% improvement in agent efficacy and response. I think that'll get even better. And I think where the differentiation will be in the market is where the data is: where's the transactional data, not necessarily the system-of-record data, because most decisions happen across systems. And we're in a good spot, because processes typically go across systems. We understand how Salesforce interacts with SAP, interacts with ServiceNow, Workday, or a legacy system. So we have those traces, and our goal is to use that data as part of our Process Reasoning Engine.
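The core idea, an ontology of how systems connect plus a log of cross-system decision traces that ground the next agent run, can be sketched as a toy data structure. This is only an illustration of the concept; the class names and fields are invented here, and a real context graph would use far richer ontologies and retrieval than this simple filter.

```python
from dataclasses import dataclass, field


@dataclass
class DecisionTrace:
    """One recorded agent decision spanning systems (illustrative fields)."""
    process: str      # e.g. "loan_rate"
    systems: tuple    # systems touched, e.g. ("Salesforce", "SAP")
    inputs: dict      # inputs the agent saw
    decision: str     # what the agent decided
    succeeded: bool   # outcome of the run


@dataclass
class ContextGraph:
    """Toy context graph: ontology edges plus transactional traces."""
    edges: dict = field(default_factory=dict)   # system -> systems it feeds
    traces: list = field(default_factory=list)  # accumulated DecisionTraces

    def link(self, src: str, dst: str):
        """Record an ontology edge: data flows from src to dst."""
        self.edges.setdefault(src, set()).add(dst)

    def record(self, trace: DecisionTrace):
        """Store a decision trace from a completed agent run."""
        self.traces.append(trace)

    def precedents(self, process: str):
        """Return successful past decisions for a process, usable as
        grounding context for the next agent run."""
        return [t for t in self.traces if t.process == process and t.succeeded]
```

The point of the sketch is the feedback loop: each run deposits a trace, and later runs retrieve successful precedents instead of reasoning from scratch.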

SPEAKER_04

I want to switch gears a little bit, because not only are you leading an engineering team, but you're also a technology executive. Our team has benefited from the use of a bunch of AI tools, from Cursor to Claude Code to ChatGPT Enterprise, and it helps the Pathfinder community learning team be more productive and put out cool, innovative things. How are you evaluating these kinds of investments? Because you've got UX, product, engineering, and the community team all reporting up to you, and all of them have a ton of asks: oh, we want to do this, we want this tool. How are you evaluating that and making those decisions? I know at the start of the year, you discussed this as an exec team.

SPEAKER_05

And everybody had their own favorite tool that they wanted to use. But when I looked at it, and after the discussion, what we agreed on was: can we have a common metric? How do we drive faster innovation cycles? Instead of having quarterly releases, can we have monthly releases? It's a challenge, because you have so many products and over 5,000 customers, so you have to maintain quality, avoid regressions, all that stuff, and you still want to innovate, right? You want to move fast. So that was an example of how we came up with a common V2MOM, or OKR essentially: go from quarterly to monthly releases, and drive as much as 20% higher productivity, because you're not adding new headcount. It's the same set of folks, and we want to deliver more. Again, that's a common OKR, right? And when we do that, we look at end-to-end life cycles. Because when you think about building and rolling out a product, it's not just engineering developing it. It starts at the very beginning, where you're figuring out the needs of the customer. Which customer are you actually addressing? What's the persona, who's the user, what outcome are they trying to drive? That's typically a conversation between design and product management. And to come up with the experience, historically, design drives the experience, builds a prototype, then goes and does some user research. The entire process might take a month or more. Now, with vibe coding, you can do the design, again using AI, as a collaboration exercise between design, product, and engineering, and put it in front of customers and users to get early feedback within a week or so. That's one piece of it. Then that same design can be converted into real code, as starter code for engineering to actually build on, again using Cursor, GitHub Copilot, or whatever it might be.
So, net net, it's looking at the entire life cycle and asking what tools you can use across it to achieve that goal of faster innovation: monthly releases and 20% higher productivity across the board. Not only for engineering; for product and design as well. And then, when we go to the community and want to talk about our products and our innovations, that's how we teach our customers and partners how to use our product, kind of evangelizing what we use. So that's the full cycle of how one needs to think about it. At least, that's what's worked for us. Because if you look at it very individually, it might improve productivity, it might be cool for that specific team, but broadly it doesn't help the business. It has to help the business, and ultimately it has to help the customer.

SPEAKER_04

Yeah. So if I take something away from that: as a leader who's coming to an executive to say, hey, I want you to invest in my program, my thing, my request, I need to be able to articulate that in terms of the business value metrics you care about. Yes. What outcomes are you driving? What outcomes am I responsible for? And how does this help me move forward more quickly? Yeah.

SPEAKER_05

It's like, what's the ROI on the budget? What can you do with it? Yeah. So it's the same process.

SPEAKER_01

We're at the start of 2026. What do you think the landscape is going to look like by the end of this year?

Bets, Rapid Fire, and Wrap Up

SPEAKER_05

I think we can all count on even more innovation and lots more hype. It was AI agents last year; this year it's already started with context graphs. Who knows what it'll be by the middle of the year. The goal is how you drive more and higher-value outcomes, using, again for us, agentic process automation. The market is obviously looking at AI agents, how to drive higher efficacy and value through AI agents, and how to actually monetize these AI agents. I think there's more and more tech coming. Obviously, there'll be work happening in models, but I personally believe there's going to be a lot of innovation at the application layer. I see a split between applications and platforms, and I see more and more applications getting velocity in the market. So I do expect innovations, and new companies, to come out in the application layer, basically using agents and being AI-native.

SPEAKER_04

Are there bets that you're making right now that seem bold, maybe even contrarian?

SPEAKER_05

I think one big bet we're making: we've always thought about ourselves as a process automation company, so it's about orchestrating these processes, and we've always been a little careful about holding customers' actual business data. But I'm more and more realizing that, with this concept of the context graph, you need a combination of business data and operational data; together is where the differentiation will be. We have the operational data, and using ontologies we can obviously get at the rest. So I think building a data layer on, or within, APA is going to be a big deal. The second big bet is more about user experience. Because we're a process automation company, it's all about process flows and things like that. One of the things we're playing with is: should the entire experience be AI-led? Are we ready for our users to interact with the entire application, both building processes and operating processes that are already running, through a conversational experience? Not only business users, but our actual developers and others. So we'll play that game and see where it goes. I think data and experience are the two areas I'm thinking of. And maybe a third, and a shout-out to you guys: what we learned last year is that it's not only about building product and tech. You have to get users to actually use it. We need our customers and partners to go on that journey with Automation Anywhere. We have the summits, the training camps; we've got to do more of those, and more of them in person. I think that's the best way to get our users on the journey with us, because there's theory and there's practice, and I think we need more and more practice with our customers.

unknown

Yeah.

SPEAKER_04

The learning, the tangible part of it, getting hands on it. Yeah, that's important.

SPEAKER_03

We hear that feedback all the time, which is why we invest so much time in it. That's right. Yeah. I think it's time for the rapid fire. All right.

SPEAKER_04

So, quick, this is time for the rapid-fire wrap-up. I just want you to give your gut reaction to these phrases in just a couple of words. Truly gut reactions, no overthinking. Multi-agent systems. Future. Organizations building their own LLMs from scratch. Waste of time. On-prem LLMs required for highly regulated industries.

SPEAKER_05

Open versus closed platforms: we believe in open. Composable workflows: still very important. The conversational, natural-language experience of building is obviously where we think things are headed, but composable workflows, pre-built workflows, and templates still matter, because customers want to move fast, and there's already IP we have; it's been done before. Why reinvent the wheel? So I think that's still important. Vibe coding automations: I think it's good, but it's not going to create mission-critical processes. It's great for prototyping workflows and design experiences; I still don't think vibe coding is great for building production-ready code. Could it be in a couple of years, or even later this year? Maybe. I know some are already vibe coding, but I don't know if it's ready for prime time. Expanded funding for community programs next year: number one priority for me. And I know it's something near and dear to your heart, so I'll make sure I keep it as priority number one. Good.

SPEAKER_04

We absolutely agree on that one.

SPEAKER_03

All right. So what did we hear today?

SPEAKER_02

AI isn't just a buzzword, but there are plenty of buzzwords out there in the market. We are really all about orchestrating knowledge and action at scale. And Automation Anywhere is cutting through the noise with our APA platform that's real, enterprise-ready, and built to deliver outcomes.

SPEAKER_04

Adi, Kate, thank you so much for joining this conversation on Agentic Edge. If you want to go deeper on any of these topics, we've got links in the show notes so you can explore demos, case studies, and some blog posts written by Adi himself. Thanks for joining, and we'll see you in the next one.