Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 1

I’ve written quite a bit recently about Oracle AI Agent Studio and what it can do at a high level, and those posts have led to some really valuable conversations. A question that keeps coming up, though, is a very practical one: “This all sounds great, but how does it actually fit into the rest of my technology landscape?” Closely followed by, “How do I build something that’s reliable and production‑ready, rather than just impressive in a demo?” This post is my attempt to answer both, drawing on the training content from the AI World partner sessions, which goes into a level of detail that genuinely changes how you think about building with AI Agent Studio, particularly when it comes to integration, robustness and real‑world use. I should say up front that this series is a little more technical than my usual blogs, but it feels important to share, because this is the detail that turns AI from an interesting idea into something you can confidently run and scale.

One announcement I haven’t covered yet is the invokeAsync REST API, introduced in 26A, which allows external applications to programmatically call published Agent Teams in Fusion, and this is a significant step forward for organisations where Fusion sits alongside other enterprise systems. It means those systems can now trigger Fusion AI agents directly, without a user ever needing to open the chat interface. The process works in two stages: an external application sends a POST request to the invoke endpoint, passing either a user prompt or structured data, and because AI agents may need time to reason or retrieve information, the call returns a job ID rather than an immediate response. That job ID is then used to poll a separate status endpoint, which returns the final output along with useful metadata such as status, conversation ID, trace ID and timing information. For development and testing, adding ?invocationMode=ADMIN to the status endpoint provides detailed debug output, including the full node execution trace, which is invaluable when you are building and troubleshooting. Authentication follows the standard OAuth 2.0 bearer token approach used across Fusion APIs, so if you are already familiar with OCI IAM and IDCS, there is nothing fundamentally new to configure.
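To make the two-stage pattern concrete, here is a minimal Python sketch. The endpoint paths, payload shape and status values are illustrative assumptions on my part, not the documented Fusion contract; the polling side is isolated behind a `fetch_status` callable so the logic is easy to test without a live environment:

```python
import time
from typing import Callable, Dict


def build_invoke_request(base_url: str, agent_team: str, prompt: str) -> Dict:
    # Hypothetical request shape: the real invokeAsync payload and path may differ.
    return {
        "url": f"{base_url}/api/ai-agents/v1/agentTeams/{agent_team}/invokeAsync",
        "json": {"prompt": prompt},
    }


def poll_for_result(job_id: str,
                    fetch_status: Callable[[str], Dict],
                    interval_s: float = 2.0,
                    timeout_s: float = 120.0) -> Dict:
    # Poll the status endpoint with the job ID until a terminal state is reached.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status.get("status") in ("SUCCEEDED", "FAILED"):
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"job {job_id} did not complete within {timeout_s}s")
```

In production, `fetch_status` would be an authenticated GET against the status endpoint carrying the OAuth bearer token; during development, the same GET with `?invocationMode=ADMIN` appended would return the detailed node execution trace mentioned above.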

For those looking for a more standardised integration approach, Oracle also introduced support for the A2A, or Agent‑to‑Agent, protocol in 26A. A2A allows a client to discover agents through a well‑known metadata endpoint, often referred to as the agent card, initiate a task by sending a message, and then poll for the result using the same job ID pattern. The agents search endpoint makes it possible to query for published agents by name, which is particularly useful when you are building orchestration layers that need to dynamically discover and delegate work to specialist agents. The agent card itself returns the agent’s capabilities and supported methods in a structured format, making it much easier to establish interoperability between Fusion agents and third‑party systems without relying on custom, point‑to‑point integrations.
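As a rough illustration of the agent-card idea, the sketch below parses a card into the fields an orchestration layer might match on when delegating work. The card structure shown (a `name` plus a list of `skills`) follows the public A2A drafts, which typically serve the card from `/.well-known/agent.json`; the exact fields Fusion returns are an assumption here:

```python
import json
from typing import Dict


def parse_agent_card(raw: str) -> Dict[str, object]:
    # Extract only the fields a discovery/orchestration layer would key on.
    card = json.loads(raw)
    return {
        "name": card.get("name", ""),
        "skills": [s.get("id") for s in card.get("skills", [])],
    }


# Hypothetical card for a specialist agent, for illustration only.
sample_card = json.dumps({
    "name": "Headcount Specialist",
    "skills": [{"id": "check-budget"}, {"id": "check-headcount"}],
})
```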

The relationship between Fusion and the outside world does not just work in one direction. In 26A, Oracle introduced Data Source Applications within the Credentials tab of AI Agent Studio, allowing administrators to configure OAuth connections to non‑Fusion systems such as EPM Cloud, WMS, or any external application with a compatible identity provider. Once a Data Source Application is set up with the base URL, IDCS URL, client ID, scope and key pair, it becomes available as a selectable source when creating a Business Object using the resource type “Other Data Source Application”. This makes it much easier for AI agents to securely reach out to external systems, bringing data back into Fusion in a controlled and repeatable way rather than relying on custom or hard‑coded integrations.

At runtime, when a Business Object function within a workflow needs to call an external system, the platform simply uses the saved configuration to obtain an OAuth token and invoke the target API behind the scenes. From the workflow designer’s point of view, it behaves exactly like any other Business Object node, with no additional complexity to manage. This opens up some genuinely powerful patterns, such as an HCM Workflow Agent that validates a job requisition in Fusion, checks headcount or budget in EPM, and then writes the outcome back to a Fusion record, all within a single, governed automation that remains transparent, secure and easy to maintain.
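Under the hood this is a standard OAuth 2.0 client-credentials exchange. As a hedged sketch (the IDCS token path and parameter names here are assumptions based on the usual OCI IAM pattern, not pulled from the Data Source Application documentation), the saved configuration fields map onto a token request roughly like this:

```python
from typing import Dict


def build_token_request(idcs_url: str, client_id: str,
                        client_secret: str, scope: str) -> Dict[str, object]:
    # Client-credentials grant: the platform exchanges the stored client
    # ID/secret for a bearer token scoped to the target external system.
    return {
        "url": f"{idcs_url.rstrip('/')}/oauth2/v1/token",
        "data": {"grant_type": "client_credentials", "scope": scope},
        "auth": (client_id, client_secret),
    }
```

The point of the sketch is simply that nothing exotic is happening at runtime: the Business Object node resolves its saved credentials, obtains a token, and calls the target API with it.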

This blog is the first in a short series of four, where I will walk through some of the latest and most important functionality in Oracle AI Agent Studio introduced in 26A. In this opening post, I’ve focused on how agents connect to the wider enterprise landscape, because integration, reliability and governance are what ultimately determine whether AI delivers real value. In the next blogs, I’ll build on this foundation and explore other new capabilities in more detail, looking at how they work in practice and how you can apply them confidently in real‑world scenarios.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

Agentic Applications for HCM Cloud

At the AI World London HCM Partner Summit, Oracle unveiled 22 new Agentic Applications across the Fusion suite, including eight designed specifically for HCM Cloud. One of the standout additions is the Workforce Operations Command Centre, which brings scheduling, time, and absence management into one coordinated hub. It highlights real‑time risks, helps managers make confident coverage decisions, and streamlines day‑to‑day operations. During the demo, we saw a live priority queue flagging shift conflicts and timecard issues by severity, with simple one‑click options to approve, reassign, or review — making it far easier to stay ahead of workforce challenges.

Oracle has also introduced a series of new workspaces designed to streamline everyday manager and employee tasks. The Hiring Workspace for Store Managers brings candidate details, interview scheduling, and urgent hiring requests together to support faster decisions, while the Manager Concierge Workspace unifies compensation, performance, talent, and absence insights with simple, policy‑backed actions. The Team Learning Workspace helps managers stay ahead of compliance risks and focus on development priorities, and the Career Advancement Command Centre connects employees to suitable roles, required skills, and training. Alongside this, the My Help Workspace offers a clear view of open requests and relevant knowledge articles, and Contracts Intelligent Counsel, also known as Agentic Compliance, provides continuous, autonomous monitoring of contract terms and policy changes to reduce compliance overhead.

Oracle also unveiled Oracle Manager Edge, a new personal AI coach designed to give managers practical, data‑driven guidance directly within Touchpoints, with suggested actions seamlessly linked to Oracle Team Touchpoints. Although it isn’t an Agentic Application, it will be available through the AI Agent Studio once released, offering organisations an accessible way to bring personalised, context‑aware coaching into everyday management without additional complexity.


Oracle also confirmed six dedicated Payroll Agents designed to cut manual effort and improve payroll accuracy. The Payslip Analyst, already live in 25D, helps employees resolve payslip queries and has been shown to reduce inquiry costs by up to 70 per cent with a rapid ROI. The Compliance Update Agent (26C) converts legislative changes into proactive configuration updates, removing up to 90 per cent of the manual workload. The Court Order Processing Assistant (26A) fully automates garnishment intake, while the Tax Calculation Statement Agent (26C), currently specific to the US and California, explains the detailed tax logic behind each payroll run. The W‑4 Compliance Agent (26B) automates US tax‑form completion, and the Pay Run Agent (26C) provides real‑time summaries and flags exceptions, reducing manual review efforts by as much as 70 per cent. For UK and global payroll teams, the Payslip Analyst and Compliance Update Agent are the most relevant today, with the remaining agents focused on US‑specific requirements.

As Oracle continues to expand its portfolio of Agentic and AI‑driven capabilities, the direction is clear: more guidance, more automation, and less friction across everyday HR and payroll operations. For organisations already using Fusion, these new applications offer a practical way to improve decision‑making, strengthen compliance, and deliver a smoother experience for managers and employees alike. And with more innovation on the horizon, now is an ideal time to explore how these tools can support your roadmap and help your teams work smarter, not harder.


Under the Hood: How Oracle’s Workflow Agents Actually Work

After sharing my initial thoughts on the Oracle AI World announcements earlier this week, I’ve since taken a closer look at what sits behind the headlines. The announcements focused on what Oracle is delivering, but what really interests me now is the how. That is where things get genuinely exciting for those of us who will be hands-on, building and configuring these new capabilities.

One thing that really helped me make sense of Oracle’s approach was the clear distinction between workflow agents and hierarchical agents. They serve very different purposes, and treating them as interchangeable would quickly lead to the wrong outcomes. Workflow Agents follow policy‑bound orchestration with contextual reasoning and are designed for predictability, auditability and stable SLAs, making them ideal for things like payroll deductions, purchase requisitions or leave approvals where governance and consistency are essential. Hierarchical Agents work differently, using LLM‑led decomposition with specialist sub‑agents, which makes them a better fit for open‑ended problems with many possible paths where multi‑domain reasoning matters more than repeatability. Oracle has intentionally designed the two to complement each other, with Workflow Agents providing the structure by defining stages, approvals, retries and SLAs, while Hierarchical Agents take on the heavier analytical or generative work within specific steps. The result is a balanced model that preserves governance while still giving teams the flexibility to tackle more complex reasoning tasks.

Oracle has outlined seven composable design patterns for building Workflow Agents, each suited to a different type of process. Chaining uses sequential intelligence to pass enriched context from one step to the next, which works well for extract‑validate‑decide‑act processes. Parallel execution allows multiple branches to run at the same time and then consolidates their outputs into a single decision, making it a strong fit for compliance or risk scenarios. Switch flows use context‑aware decisioning to route work based on intent, profile, state and policy; for example, an employee updating deductions after a new baby can trigger both Benefits and Payroll updates automatically with no handoff. Iteration supports adaptive refinement by recalculating until constraints are met, which suits planning and scheduling tasks. Looping introduces self‑correction, such as regenerating and revalidating an invoice when OCR results do not match. RAG‑assisted Reasoning retrieves the right policy information before applying thresholds or routing logic. Finally, timer‑based execution triggers actions on a schedule, such as checking invoice status and notifying the accounts payable owner before an SLA is at risk.
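The looping pattern in particular is easy to picture in code. This is a generic sketch of the idea, not Oracle's implementation: a generate step is retried with the validator's feedback until the output passes or a retry budget is exhausted, which is essentially what the regenerate-and-revalidate invoice example does:

```python
from typing import Any, Callable, Tuple


def run_with_self_correction(generate: Callable[[str], Any],
                             validate: Callable[[Any], Tuple[bool, str]],
                             max_attempts: int = 3) -> Any:
    # Loop node in miniature: regenerate using the validator's feedback
    # until the result passes or the retry budget runs out.
    feedback = ""
    for _ in range(max_attempts):
        result = generate(feedback)
        ok, feedback = validate(result)
        if ok:
            return result
    raise RuntimeError(f"no valid result after {max_attempts} attempts: {feedback}")
```

In a real Workflow Agent, `generate` might be an LLM or OCR extraction node and `validate` a Business Object check; the cap on attempts is what keeps the self-correction governed rather than open-ended.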

The Workflow Agent canvas in AI Agent Studio groups its building blocks into four areas that shape how an automation behaves. AI nodes include LLM, Agent, Workflow and the RAG Document Tool. Data nodes cover things like the Document Processor, Business Object Function, External REST, Tool and the Vector DB Reader or Writer. Logic nodes provide Code and Set Variables, while the Workflow Control nodes handle governance through Human Approval, If Condition, For Loop, While Loop, Switch, Run in Parallel, Wait and Return. At workflow level, the Triggers tab supports Webhook, Email and Schedule triggers, and the Error Handling section lets you notify recipients by email if a workflow reaches a permanent failure, using context expressions such as $context.$workflow.$traceId. For image-related tasks, the Vision LLM node is the correct choice, although it is classed as a premium tool and comes with associated pricing considerations.

METRO, Oracle’s monitoring layer for Measurement, Evaluation and Testing for Real‑time Observability, gives teams a clear view of what their Workflow Agents are doing across inbound emails, approvals and scheduled runs. From the 26C release, it will also surface AI Unit consumption, which becomes increasingly important as organisations scale their use of agents and need tighter visibility and cost control.

Pricing has been a major consideration for customers exploring AI Agents, and the new structure aims to simplify things through the introduction of AI Units, or AUs. Oracle is expected to publish the full details in April or May, but the core concept is that an AU costs roughly $0.01 and is calculated as: AU consumption = CEILING((Input Tokens + Output Tokens) / 10,000) × Action Value Factor. The Action Value Factor varies depending on the action type and the LLM tier being used. General actions such as Q&A, approvals and reasoning have a 0x factor on the Basic LLM, while Premium and Bring Your Own apply higher factors. Artifact creation and audio generation sit in higher tiers again, with video generation marked as coming soon. Every Fusion customer receives 20,000 AUs per month at no charge, pooled across all pillars with unused units rolling over to the end of the contract term. Additional AUs are available in $1,000 increments.
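The arithmetic is simple enough to sanity-check yourself. Here is a small sketch using the figures above; note that the roughly $0.01 per-AU price and the Action Value Factors are the provisional numbers quoted, pending Oracle's full pricing publication:

```python
import math


def au_consumption(input_tokens: int, output_tokens: int,
                   action_value_factor: float) -> float:
    # AU consumption = CEILING((input + output tokens) / 10,000) x Action Value Factor
    return math.ceil((input_tokens + output_tokens) / 10_000) * action_value_factor


def au_cost_usd(aus: float, price_per_au: float = 0.01) -> float:
    # Assumes the quoted ~$0.01 per AU.
    return aus * price_per_au
```

For example, a call with 12,000 input and 3,000 output tokens at a factor of 2 consumes CEILING(15,000 / 10,000) × 2 = 4 AUs, roughly $0.04, while a 0x-factor action on the Basic LLM consumes nothing regardless of token count.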

What I find most compelling about this architecture is that it’s built for the realities of enterprise work rather than an idealised version of it. The self‑correction loops, governance controls, evaluation framework and hybrid agent pattern all acknowledge that real business processes can be messy and that auditability is essential. The 22 new agentic applications arriving in 26B across ERP, HCM, SCM and CX give us a clear benchmark for what good looks like in practice. If you’re interested in exploring how Workflow Agents could support your organisation’s processes, now is a great time to start that conversation.

In the meantime, why not check out my earlier post covering the Oracle AI World announcements? You can find it here.


A New Chapter for Fusion Cloud: Oracle Debuts Agentic Applications

I’m writing this from Oracle AI World in London today, where Oracle has just unveiled its new Agentic Applications for Oracle Fusion Cloud. It’s genuinely one of the biggest announcements we’ve seen in quite some time. These aren’t early ideas or future promises, they’re real applications, with names, availability details, defined process areas and clear outcomes for customers. Here’s a quick look at what matters and why it’s worth paying attention to.

Oracle’s message today is pretty clear: traditional enterprise systems may hold vast amounts of information, but they don’t genuinely understand it. They capture data, store it, and generate reports, but it’s always been down to people to interpret what’s happening and drive the next step. Agentic Applications change that. They bring reasoning, context awareness and prioritisation into the flow of work, acting within your existing governance and security boundaries without needing constant, step‑by‑step direction.

In this updated architecture, Oracle positions Agentic Applications in a new, composable layer above the existing Fusion transactional systems, ERP, HCM and CX. Beneath that, they can tap into a broad set of LLMs from OpenAI, Cohere, Meta, Anthropic, xAI and Google. At the centre is Oracle AI Agent Studio, which provides the development and configuration layer that connects everything together. Oracle also introduced a helpful maturity model: GenAI Assisted, delivering around a 5–10% productivity uplift; Agent Optimised, offering 10–30% efficiency gains; and Agent Re‑invented, where processes are re‑designed around autonomous agents and Oracle is seeing improvements of 40% or more in operational agility.

Oracle was also keen to emphasise that not all AI in Fusion is created equal. There’s a spectrum that runs from AI Workflows, logic‑driven automation using LLM nodes, loops and if/then conditions, through to Workflow Agents, which introduce real‑time reasoning and greater autonomy. Above that sit AI Agents, which are goal‑based, specialised and able to operate more independently, and AI Agent Teams, where a lead agent coordinates several specialist agents working together. Each step up the ladder trades a little predictability for far greater capability. For anyone who’s spent years configuring Oracle, the workflow layer will feel familiar, but the agentic layers above it represent genuinely new behaviour.

Oracle also announced a set of new capabilities today, including an Agentic App Builder, interoperability through MCP (Model Context Protocol) and A2A (Agent‑to‑Agent) protocols, Contextual Memory, Content Intelligence, multi‑modal support, an Agent ROI Dashboard, and a full suite of security, audit and governance controls. The standout is the Agentic App Builder: you simply describe your objective in natural language and the system assembles reusable agents and workflows into a composable agentic application.

One final announcement that will please a lot of customers: Oracle has simplified its agent pricing. The old split between Seeded Agents (Oracle‑built) and Custom Agents has been removed. If you use Oracle’s standard LLMs, every agent, regardless of who built it, is now free. Premium third‑party LLMs such as OpenAI or Anthropic come with a straightforward consumption‑based charge. Every Fusion customer also receives a monthly allocation of 20,000 AI Units (AUs), with unused units rolling over until the end of the contract term. For organisations that have held back on AI because of cost uncertainty, that barrier has effectively disappeared.


Why Embedded AI Is the Real Differentiator in Enterprise Automation

AI is everywhere at the moment; there’s really no getting away from it, but not all AI is created equal. Plenty of platforms promise co‑pilots, assistants and automation, yet very few can actually take action inside core business systems. That’s why Oracle’s approach with AI Agent Studio for Fusion Applications really stands out.

Most enterprise AI tools today act as co‑pilots: brilliant for drafting content or answering questions, but far less capable when it comes to genuine process automation. Oracle has taken a different approach. Its AI agents are embedded directly within Fusion Applications, giving them the power to:

  • Understand business data in real time
  • Make decisions with full context
  • Write back into transactional systems securely
  • Automate tasks end‑to‑end without relying on extra tools

This isn’t AI “bolted on”, it’s AI woven right into the core of the application. And yes, I realise I’m starting to sound like an Oracle salesperson, but the difference between Oracle and other providers is hard to ignore. The image below highlights just how significant that gap is, comparing what Oracle delivers with five other major service providers.

Across the market, vendors are running into the same familiar obstacles:

  1. Added AI costs. Whether it’s Copilot capacity, ServiceNow’s AI tiers or Salesforce Flex Credits, AI is frequently positioned as an optional extra rather than something included as standard.
  2. Co‑pilots, not agents. Microsoft, Salesforce, Workday and others are excellent at generating content and offering recommendations, but they seldom deliver true autonomous action within transactional systems.
  3. Fragmented platforms. Many providers depend on several data models, clouds and integration layers. Their AI often relies on separate analytics environments or copied data, which strips away workflow context and makes automation far more difficult.

You may have already come across Oracle’s One Platform, and this is where Oracle really pulls ahead of the pack. Oracle keeps things refreshingly straightforward: Fusion Applications, data and AI all sit within a single, unified ecosystem. That brings three key advantages:

  • Embedded intelligence — agents work directly inside the applications employees use every day.
  • A unified security and data model — consistent governance and safer, more reliable automation.
  • True write‑back — agents can update transactions natively, without middleware or separate AI clouds getting in the way.

So you might be wondering, “That all sounds impressive, but what does it actually mean for me?” At its core, agentic AI is about cutting down manual effort, improving accuracy and boosting operational efficiency. Oracle’s embedded approach ensures that automation is reliable, properly governed and able to scale as your business does. Crucially, it isn’t stitched together from multiple products. With Oracle, AI is built in, a fundamental part of the system that runs your business, not an optional extra layered on top. It marks a shift from AI that simply talks, to AI that genuinely delivers.


METRO – The Jewel in the Crown in Oracle AI Agent Studio

Have you heard about METRO? METRO is the embedded measurement and observability framework within Oracle AI Agent Studio. It stands for Measurement, Evaluation and Testing for Real‑time Observability.

Oracle introduced METRO as part of release 25D. It is a brand-new set of monitoring and evaluation tools built right into Oracle AI Agent Studio. Think of METRO as your control centre for AI agents in production. It gives you everything you need to keep an eye on accuracy, compliance, performance, costs, latency and even token usage. In short, it helps you make sure your agents are doing exactly what you expect, without any surprises.

The world of AI agents isn’t straightforward: responses can vary, workflows aren’t always predictable, and traditional checks don’t always work. That’s where METRO steps in. It’s designed for this new reality, offering smart ways to check semantic correctness, track detailed performance metrics and keep guardrails in place against risks like prompt injection. Whether you’re running complex multi-agent setups or more structured workflows, METRO makes it easier to stay on top of quality and reliability.

One of the best parts? The dashboard. It gives you a clear view of key stats like latency, error rates, token counts and correctness scores. And if you want to dig deeper, you can trace every single step an agent took, from LLM calls to tools used and outputs generated. This level of detail means you can troubleshoot quickly and optimise performance without the guesswork.

METRO also helps with testing, using the industry-standard LLM-as-a-Judge approach to score responses and provide feedback you can act on. Combine that with new features in 25D, like deterministic workflow agents and support for models such as GPT‑5 mini, and you’ve got flexibility to build agents for any scenario.

METRO isn’t just another feature, it’s a game-changer for anyone looking to manage AI agents with confidence. By combining deep insights, flexible evaluation tools and full traceability, it gives you everything you need to keep your AI ecosystem running smoothly. And with the added power of new models and workflow options in 25D, you’ve got all the tools to innovate faster and smarter. The future of AI governance starts here and METRO makes it simple!