Reinventing How Work Works: The Business Case for Oracle Fusion Agentic Applications

Oracle has been making a clear and increasingly consistent argument over the past few months: enterprise software has reached the limits of what a system of record can do. I’ve written before about the introduction of Agentic Applications at AI World London, and about the specific HCM applications that were announced alongside them. But there’s a broader story here that I haven’t fully explored yet, and it’s one that I think matters for every Fusion customer, not just those focused on HCM.

This post draws on the “Reinventing How Work Works” webinar, which stepped back from individual applications and made the architectural and commercial case for why the shift to agentic is happening, what it actually looks like in practice, and where Oracle is taking this next. If you’re trying to build internal momentum for agentic adoption, or if you’re trying to explain to a leadership team why this is different from previous AI announcements, this is the post to share.

The framing that Oracle used throughout this webinar is, I think, one of the clearest explanations of what has actually changed. Traditional enterprise systems, including Fusion as it has historically operated, are systems of record. They follow fixed rules, capture what happened, retrieve information when asked, and complete transactions. They document the business. What they don’t do is run the business.

Agentic Applications represent a move to what Oracle calls systems of outcomes. Rather than waiting for a person to interpret data and decide what to do next, a system of outcomes works toward objectives, makes things happen, solves problems, and achieves results. The underlying system of record doesn’t go away. The data, governance, approval hierarchies, role-based access control, and audit history are still there, and in Oracle’s case, they’re still the source of truth for every transaction. What changes is the layer operating on top of that foundation.

This architecture diagram is worth studying if you haven’t seen it. Agentic Applications sit in a new composable layer above the existing ERP, HCM, and CX transactional applications. That layer is powered by teams of AI agents coordinated through Oracle AI Agent Studio, drawing on the full enterprise data model, security model, and process history that already exists in Fusion. Beneath all of this, Oracle Cloud Infrastructure (OCI) provides the AI data platform, and a range of large language models (LLMs), including those from OpenAI, Cohere, Meta, Anthropic, xAI, and Google, are available depending on the task and preference.

Every Fusion Agentic Application is built around four core dynamic areas. Understanding these is useful when you’re evaluating a specific application or explaining the concept to stakeholders.

The first is the Advisor, which is the “Ask Oracle” conversational interface. This is where a user can ask natural language questions and get contextual, data-aware responses rather than navigating to a report. The second is the Information Summary, which provides an intelligent, prioritised view of what’s happening right now in that area of the business, surfaced automatically rather than requiring the user to run queries. The third is Priority Actions, a curated queue of recommended next steps that the agents have identified based on current conditions, risk signals, and business objectives. The fourth is Communications, which handles notifications, responses, and outbound actions within the appropriate governance boundaries.

These four areas appear consistently across all 22 applications, which is deliberate. Oracle’s position is that once a user understands the structure in one application, they can navigate any other agentic application without relearning the interface.

One of the most practically useful concepts introduced in this webinar is what Oracle calls the Autonomy Dial. It’s a spectrum with three positions, and it addresses one of the most common concerns I hear from customers and consultants: how much control do we give up?

At the “Human in the Loop” end, the agent assists and a person decides. The agent drafts, recommends, and prepares; the human reviews and approves. This builds trust, improves speed and consistency, and keeps people firmly in control. The business impact is described as immediate productivity gains.

In the middle is “Human in the Lead”, where the agent executes and a person monitors. The agent handles routine work and manages to policy; a person steps in for genuine exceptions. This scales output without adding headcount and frees teams for higher-value work. The impact here is scaled operations.

At the “Autonomous Execution” end, the agent drives and a person owns. End-to-end execution happens within policy, continuous real-time optimisation takes place, and human involvement is reserved for true exceptions. The impact is described as business transformation.

What I find compelling about this model is that it isn’t prescriptive. Oracle isn’t saying every organisation should start at one end or aim for the other. Each position on the dial represents a valid operating model depending on the process, the risk tolerance, and the maturity of the organisation. A payroll close process might comfortably sit at Human in the Lead. A workforce scheduling decision for a critical shift might warrant Human in the Loop until confidence is established. A high-volume procurement matching task might be a good candidate for Autonomous Execution relatively quickly.

My earlier posts covered the eight HCM applications in detail. The full announcement of 22 applications in releases 26B / 26C spans ERP/SCM, HCM, and CX, and it’s worth understanding the breadth of this, because it signals how Oracle is positioning agentic across the entire Fusion suite rather than as an HCM-specific capability.

On the ERP and SCM side, the applications include Design-to-Source Workspace, Product Readiness Workspace, Production Shift Operations Workspace, Sales Order Command Centre, Batch Process Manufacturing Workspace, Logistics Execution Command Centre, Maintenance Operations Workspace, Warehouse Operations Workspace, Cost Accounting Close Workspace, Sourcing Command Centre, Collectors Workspace, and Security Command Centre.

The Design-to-Source Workspace is a useful example of the transformation logic. Previously, product design and bill of materials work happened in separate systems. Sourcing relied on items entered manually. Negotiation delays accumulated when information was missing or unresolved. With the agentic application, product specifications translate automatically into qualified supplier lists, bills of materials are generated directly from CAD files, at-risk negotiations are flagged automatically, and bids are evaluated across cost, lead time, quality, and risk in a single view. The outcome is faster time to market and improved sourcing cycle times.

On the CX side, three applications have been announced: Cross-Sell Program Workspace, Contract Compliance Workspace, and Sales Command Centre. For CX teams, the Sales Command Centre in particular brings together the kind of deal health monitoring, risk flagging, and next-step recommendation that previously required significant manual analysis across multiple reports.

I’ve written in detail about Oracle AI Agent Studio in previous posts, but the webinar highlighted several new capabilities that are worth calling out specifically, because some of them genuinely change what’s possible for teams building custom agentic applications.

The most significant new addition is the Agentic App Builder, which is released in 26C. This is what Oracle describes as a “no-code agentic brain”: you describe your objective in natural language, the system explains and builds the workflow, generates agents and the underlying code automatically, and allows you to diagnose and fix issues in real time. In the demo, a user types a description of a sales opportunity health and risk management app, and within moments a structured agentic application is assembled from reusable agents, with a Deal Summary Agent, a Risk Agent, a Customer Insights Agent, and a Process Agent already in place and connected. It’s a significant step forward from the existing builder experience.

Alongside this, several other capabilities have been marked as new in the current release: Workflow Orchestration, Content Intelligence, Contextual Memory, Multi-Modal support, an Agent ROI Dashboard, and enhanced Security, Auditability, and Governance controls. Contextual Memory in particular is worth paying attention to, because it allows agents to retain information across interactions, which is what enables genuinely personalised, continuous support rather than stateless responses to each individual query.

The studio now also supports full interoperability through MCP (Model Context Protocol) and A2A (Agent-to-Agent) protocols, which means agents built in Fusion can exchange context with agents or tools running outside the Fusion estate, provided the appropriate governance controls are in place.

One thing the webinar made very clear is that Oracle isn’t building this alone. The Fusion AI ecosystem now includes 73,400 certified builders, 10,000 developers actively building agents, and over 100 pre-built agent templates in the AI Agent Marketplace, which is now open to all partners for submissions. Open standard support includes native MCP integration across connectors and an agent-to-agent registry within Oracle AI Agent Studio itself.

For customers, this matters because it means the pool of available agents and expertise is growing rapidly. You don’t need to build everything from scratch, and you don’t need to rely solely on Oracle to extend the platform. The open partner submission model for the marketplace is a meaningful shift, and it’s one that will accelerate the availability of domain-specific and industry-specific agents over the coming months.

The summary that Oracle closed with is a useful way to frame internal conversations: Fusion is moving from systems of record to systems of outcomes. Agentic Applications get work done. Oracle AI Agent Studio lets you build, deploy, and scale agents specific to your organisation. OCI AI Advantage runs it all securely at scale.

What I’d encourage any Fusion customer to take from this is that the window to start is now. The pricing model has already been simplified significantly (covered in my earlier post), the tooling to build and extend has matured substantially, and the evidence base from production deployments is solid. Starting with one application in one process area, positioned at Human in the Loop on the autonomy dial, is a low-risk, high-value entry point that builds organisational confidence while delivering measurable results.

If you’re thinking about where to start or how to make the case internally, I’m happy to talk it through. In the meantime, why not check out my earlier post on the HCM-specific Agentic Applications announced at AI World London? You can find it here. And if you missed the original announcement post covering the architecture, the maturity model, and the updated pricing, that’s a useful starting point too, and you can find it here.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 4

Over the last three blogs, I’ve explored how AI Agent Studio connects to the wider enterprise, how agents are triggered and interacted with, and how workflows are designed to be reliable and production‑ready. In this final part of the series, I want to pull those threads together and focus on the capabilities that help agents scale safely and operate with confidence over time. This is where governance, control and operational discipline really come into play, and where the newer 26A and 26B features start to show how Oracle is shaping AI Agent Studio for long‑term, enterprise use rather than short‑lived experimentation.

Choosing the right document or memory node is an area where I see a lot of confusion in conversations with clients, so it is worth being very clear about what each one is designed to do. The Document Processor node is intended for runtime documents, attachments that arrive as part of a specific workflow execution, such as a supplier quote received by email, an invoice uploaded through chat, or a UCM attachment linked to a Fusion business object. Its job is to retrieve the file, extract the text, and pass that content on to the next node in the workflow. It is not designed for querying a stable or long‑lived corpus of documents, such as policy or reference material that you want to reuse and search repeatedly over time.

The RAG Document Tool node is designed for exactly that stable, reusable collection of information. You curate a set of documents within an Oracle AI Agent Studio Document Tool, move them through the lifecycle from Ready to Publish to Published, and the RAG node then performs semantic retrieval against that content to ground downstream LLM reasoning in your own policies, playbooks or manuals. To get the best results, it is important to use specific queries with clear discriminators such as module, process area, country or version, which helps improve retrieval precision. It is also good practice to include an explicit “no results” fallback path in your workflow, rather than allowing the LLM to guess when retrieval confidence is low.

The Vector DB Reader and Writer nodes serve a different purpose again, providing durable semantic memory that persists across workflow runs. They are best used to store normalised, reusable knowledge units such as validated resolution summaries, previous exception details, or extracted entity representations. Entries should be kept short and semantically focused, enriched with meaningful metadata to support filtering, and assigned stable document IDs to avoid duplicates. Raw PII or permission‑restricted data should never be stored without a deliberate access control design. When reading from the vector store, metadata filters should always be applied, and low‑confidence matches should be treated the same as no result at all, routing the workflow to a deterministic fallback rather than continuing on uncertain ground.
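To make that read-side discipline concrete, here is a minimal sketch of the selection logic described above: apply metadata filters first, then treat a low-confidence match exactly like no match at all. The entry shape, field names and threshold are illustrative assumptions, not the actual Vector DB Reader output format.

```javascript
// Sketch of the read-side discipline described above: filter on metadata,
// then discard low-confidence matches so the workflow can route to a
// deterministic fallback. Entry shape and scores are invented for illustration.
function selectMemoryMatch(results, filters, minScore) {
  const filtered = results.filter((r) =>
    Object.entries(filters).every(([key, value]) => r.metadata[key] === value)
  );
  // Take the highest-scoring match that survives the filters.
  const best = filtered.sort((a, b) => b.score - a.score)[0];
  // null signals "no usable result" — the same route as an empty read.
  return best && best.score >= minScore ? best : null;
}

// Hypothetical stored knowledge units with metadata for filtering.
const results = [
  { score: 0.9, metadata: { module: "AP", country: "UK" }, text: "Resolution summary A" },
  { score: 0.95, metadata: { module: "AR", country: "UK" }, text: "Resolution summary B" },
];
```

The point of returning `null` rather than the best available match is that an If Condition node downstream can then branch to the deterministic fallback path, rather than letting uncertain retrieval leak into LLM reasoning.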

One theme that came through strongly in the partner training sessions, and one I think represents genuinely good discipline, is treating Workflow Agent testing as a first‑class concern rather than something bolted on at the end. Oracle’s evaluation framework for Workflow Agents, often referred to as Workflow Evals, is based on supplying structured JSON test inputs and asserting expected outputs. These evaluations are intended to be run as a regression suite whenever you change a prompt, adjust a node configuration, swap a tool, or update a policy, helping you catch unintended side effects early and keep agent behaviour stable as it evolves.

A good starting point is to define around five core paths through the workflow: the happy path, two or three of the most common exception scenarios, and at least one case that deals with missing or poor‑quality input data. From there, you should be tracking things like overall pass rate, branch accuracy, schema validity, and retry or escalation behaviour. The aim is not simply to prove that the workflow reaches an end state, but to make sure it routes correctly and predictably under every condition that genuinely matters in production.

For anyone building more complex workflows, the full context variable reference is well worth bookmarking. In practice, a small set of variables tends to do a lot of the heavy lifting, such as $context.$nodes.<nodecode>.$status to check whether a preceding node succeeded or failed, and $context.$nodes.<human_node_code>.$actionPerformed to capture whether a Human Approval step resulted in APPROVE, REJECT or REQUEST_CHANGES. You can also use $context.$nodes.<human_node_code>.$feedbackReceived to pick up any comments provided by the approver, and $context.$workflow.$traceId to generate idempotency keys or include trace references in error notifications. For conversational workflows, $context.$system.$chatHistory is particularly useful, as it exposes the full session history and allows the agent to reason about what has already been discussed.
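To make this concrete, here is a minimal sketch of the kind of routing logic those variables enable, written as a plain function over a simplified mock of the `$context` shape. The node code (`manager_approval`) is an invented example, and the real context structure is richer than shown here.

```javascript
// Hypothetical routing over the context variables described above.
// The node code "manager_approval" and the mock shape are invented examples.
function routeAfterApproval(ctx) {
  const approval = ctx.$nodes["manager_approval"];
  if (approval.$status !== "SUCCEEDED") {
    return { route: "ESCALATE", reason: "Approval step did not complete" };
  }
  switch (approval.$actionPerformed) {
    case "APPROVE":
      return { route: "CONTINUE", reason: approval.$feedbackReceived || "" };
    case "REQUEST_CHANGES":
      return { route: "REWORK", reason: approval.$feedbackReceived || "" };
    default: // REJECT, or anything unexpected
      return { route: "STOP", reason: approval.$feedbackReceived || "" };
  }
}

// Simplified mock of the runtime context a workflow might see.
const mockContext = {
  $workflow: { $traceId: "wf-123" },
  $nodes: {
    manager_approval: {
      $status: "SUCCEEDED",
      $actionPerformed: "REQUEST_CHANGES",
      $feedbackReceived: "Please attach the revised quote",
    },
  },
};
```

In a real workflow this branching would typically live in If Condition nodes rather than a single function, but the shape of the decision is the same.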

The 26A roadmap also includes several upcoming capabilities that will significantly extend what is possible in the near term. Support for the Model Context Protocol, or MCP, means Workflow Agents will be able to invoke tools exposed by MCP servers, broadening the integration landscape well beyond traditional REST APIs. The Agent Studio Help Assistant, an AI‑driven guide embedded directly within the studio, should also make agent design far more accessible, particularly for practitioners who are new to the tooling. Alongside this, multi‑modal enhancements, including end‑user Q&A over images and documents uploaded in chat and semantic search across non‑text assets, open up an entirely new set of document understanding and reasoning use cases.

Looking a little further ahead, the roadmap includes capabilities such as breakpoint‑style debugging, automated prompt engineering, multi‑user development environments, and a Bring Your Own LLM option, alongside additional interaction channels including WhatsApp, SMS and telephony. Taken together, these signal a sustained level of investment in the platform and a clear focus on making AI Agent Studio more powerful, more accessible, and more suitable for enterprise‑scale use. The overall direction is a positive one, and it is clear that Oracle is building towards a mature, long‑term agent platform rather than a short‑term experiment.

The partner training sessions that informed this post covered a lot of practical ground, and I genuinely believe they will save teams a significant amount of time as they start building in earnest. If you are already exploring AI Agent Studio and would like to talk through any of these patterns in more detail, I would be very happy to continue the conversation. And if you have not yet read the earlier posts in this series, it is worth starting at the beginning with the overview of how Workflow Agents are structured, which sets the context for everything covered here.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 3

In the first two blogs, I looked at how AI Agent Studio connects to the wider enterprise landscape and how agents are triggered and engaged, whether by systems, schedules or users. In this third part, I want to step back slightly and focus on what happens inside the agent itself, specifically how workflows are structured, how context is managed, and how you start designing for reliability rather than experimentation. This is the point where agent design shifts from “can we make it work?” to “can we trust it to run consistently in production?”, and the 26A capabilities give you far more control here than many people realise. To check out the previous blog, please click here.

The Wait node, which is being introduced as part of the 26B release, addresses a long‑standing gap in workflow design, where there was no clean way for a workflow to pause and resume later without either completing immediately or blocking indefinitely. When a Wait node is reached, the workflow moves straight into a Waiting state and pauses execution for a configured period of time, up to a maximum of 60 minutes. Once that wait period expires, the workflow can optionally loop back to an earlier point before continuing, allowing it to re‑evaluate conditions or check for updates. This looping behaviour is controlled through two simple settings: the Loop Back Node, which defines where execution returns to, and Maximum Iterations, which limits how many times the workflow can loop before it continues forward regardless.

In practice, this enables a clean polling pattern that is otherwise difficult to model. For example, imagine a workflow that creates a receipt request in Fusion and then needs to confirm that the receipt has been posted before it can move on. By using a Wait node configured for five minutes and looping back to a Business Object read node up to ten times, the workflow effectively gives itself a 50‑minute window to detect the receipt posting automatically before either continuing or escalating. During each wait cycle, the node outputs ORA_USER_INPUT_REQUIRED, and once all iterations are exhausted it returns WAIT_TIME_EXPIRED_AND_MAX_ITERATIONS_REACHED, both of which can be evaluated in downstream If Condition nodes to route the flow appropriately.
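The branching a downstream If Condition node would express over those two outputs can be sketched as a simple function. This is an illustration of the routing pattern only, with the "receipt posted" flag standing in for the result of the Business Object read.

```javascript
// Sketch of the routing over the Wait node outputs described above.
// receiptPosted stands in for the Business Object read result; any status
// other than the two documented values is treated as "keep polling" here.
function routeWaitStatus(waitStatus, receiptPosted) {
  if (receiptPosted) return "CONTINUE"; // receipt detected: move on
  if (waitStatus === "WAIT_TIME_EXPIRED_AND_MAX_ITERATIONS_REACHED") {
    return "ESCALATE"; // e.g. the 50-minute window (5 min x 10 loops) is exhausted
  }
  return "LOOP_BACK"; // ORA_USER_INPUT_REQUIRED: still inside a wait cycle
}
```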

The Code node is one of the most powerful building blocks in a Workflow Agent, and also one of the most commonly underestimated. It executes JavaScript and returns a single value, whether that is an array, boolean, number, object or string. Its real value lies in handling the deterministic work that you should never push into an LLM node, such as data normalisation, threshold calculations, schema validation, array filtering and payload shaping. Used well, it provides a clean separation between predictable logic and probabilistic reasoning, which is a key ingredient in building workflows that behave consistently and are easier to trust in production.

There are a few important constraints to be aware of when designing logic for the Code node. Execution is limited to five seconds, with an upper limit of 100,000 statement executions, and functions cannot be defined within the code, which means recursion is not supported. Most built‑in JavaScript methods are available, but there is no external access, so no REST calls, file system operations, console logging or library imports. The code can read from $context, $currentItem and $currentItemIndex, but it cannot modify the $context object directly. Instead, it simply returns a value, and that returned output is the sole result of the node.

Some of the most effective patterns I’ve seen make particularly good use of the Code node for this kind of deterministic work. Common examples include normalising inconsistent date strings and currency values into canonical formats before passing them to a Business Object write node, or calculating variance percentages for three‑way match validation so that an If Condition node receives a simple boolean rather than needing to express complex arithmetic. Other strong patterns include generating idempotency keys using a combination of $context.$workflow.$traceId and object identifiers to prevent duplicate writes during retries, and filtering arrays returned from Business Object reads so that only active or primary records are passed into a For Loop for further processing.
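To illustrate, here are simplified versions of three of those patterns. One important caveat: as noted above, a real Code node cannot define functions, so each body below would be written inline in its own node; functions are used here purely for readability, and the normalisation rules are illustrative assumptions rather than production-grade parsing.

```javascript
// Illustrative sketches of the deterministic Code node patterns described
// above. A real Code node body is written inline (no function definitions);
// functions are used here only so each pattern can be read in isolation.

// 1. Normalise a currency string such as "$1,234.56" or "1.234,56" to a
//    number, treating the last separator as the decimal point.
function normaliseAmount(raw) {
  const cleaned = String(raw).replace(/[^0-9.,-]/g, "");
  const lastSep = Math.max(cleaned.lastIndexOf("."), cleaned.lastIndexOf(","));
  if (lastSep === -1) return Number(cleaned);
  const intPart = cleaned.slice(0, lastSep).replace(/[.,]/g, "");
  const decPart = cleaned.slice(lastSep + 1);
  return Number(intPart + "." + decPart);
}

// 2. Variance check for three-way match: hand the If Condition node a
//    simple boolean instead of making it express the arithmetic.
function withinTolerance(poAmount, invoiceAmount, tolerancePct) {
  const variancePct = (Math.abs(invoiceAmount - poAmount) / poAmount) * 100;
  return variancePct <= tolerancePct;
}

// 3. Idempotency key from the trace ID plus object identifiers, to guard
//    against duplicate writes during retries.
function idempotencyKey(traceId, objectType, objectId) {
  return [traceId, objectType, objectId].join(":");
}
```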

For workflows that are triggered through the AI chat interface, 26A also introduced support for file uploads during conversations with an agent, allowing users to attach up to five files with a combined size of 50 MB. A wide range of formats is supported, including PDF, DOCX, XLSX, PPTX, PNG, JPEG, HTML, Markdown, JSON, XML, CSV and ZIP. In 26A, working with these attachments inside a Workflow Agent required adding the delivered MultiFileProcessor tool to an agent and then including that agent within the main workflow. This capability significantly expands what chat‑driven workflows can handle, particularly when dealing with documents, structured data and supporting evidence provided directly by the user.

In 26B, this has been simplified significantly. Rather than introducing a separate agent, you can now add a Tool node directly into your Workflow Agent and select Chat Attachments Reader as the tool type. This keeps the workflow much cleaner and removes an unnecessary orchestration step. The tool reads the files uploaded in the current chat session and exposes the extracted content directly to downstream nodes, making it easier to act on user‑provided documents without additional plumbing or indirection.

Support is also in place for third‑party file storage, allowing users to upload files directly from Google Drive, Dropbox or Microsoft OneDrive, provided those credentials are configured under the Chat Experience tab in Credentials. Enabling this involves registering an OAuth application with the relevant provider, obtaining the client credentials, configuring the account in Credentials, and then switching on the option to allow users to upload files from connected cloud storage accounts on the agent’s Chat Experience tab. Once configured, this gives users a seamless way to bring external documents into agent‑driven workflows without needing to download and re‑upload files manually.

This third blog has focused on what really makes Workflow Agents robust in practice, from pausing and polling patterns, through deterministic logic in Code nodes, to handling documents and attachments cleanly inside workflows. These are the building blocks that move agents beyond experimentation and into something you can rely on day to day. In the final post in this four‑part series, I’ll bring everything together and look at the remaining 26A and 26B capabilities that round out the platform, focusing on how they support governance, scale and long‑term operational confidence when running AI agents in production.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 2

In the first blog, I focused on how AI Agent Studio connects to the wider enterprise landscape, but once those connections are in place, the next question is how and when agents are actually set in motion. I touched briefly on triggers in an earlier post, but the depth available here really deserves a closer look. In AI Agent Studio, a published Workflow Agent can be kicked off in three distinct ways: via a webhook, through email, or on a schedule. Each option supports very different use cases, from event‑driven automation to time‑based controls, and understanding how to use them effectively is key to building agents that fit naturally into day‑to‑day operations rather than feeling bolted on.

The webhook trigger is the mechanism behind the invokeAsync API call discussed earlier, but it also supports a more flexible and powerful pattern. When configuring a webhook trigger, you can define named input variables, which are passed into the REST call as part of the parameters object. Within the workflow, those values are exposed via $context.$triggers.REST.$input.<InputName>, allowing you to build parameterised workflows that adapt their behaviour based on what the calling system provides. This is particularly useful when you want a single workflow to handle multiple variations of a process, with the external system supplying the context that determines how the agent responds.
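The two sides of that contract can be sketched briefly: the caller places named inputs inside the `parameters` object, and the workflow reads the same values back under `$context.$triggers.REST.$input`. The input names (`supplierId`, `urgency`) and the surrounding payload shape are invented for illustration; the documented API defines the exact structure.

```javascript
// Hypothetical sketch of the webhook input contract described above.
// Input names and payload shape are invented examples.

// Caller side: named inputs travel inside the parameters object.
function buildWebhookPayload(inputs) {
  return { parameters: { ...inputs } };
}

// Workflow side: the same values surface under
// $context.$triggers.REST.$input.<InputName>.
function readTriggerInput(ctx, name) {
  return ctx.$triggers.REST.$input[name];
}

const payload = buildWebhookPayload({ supplierId: "S-1001", urgency: "HIGH" });

// Simplified mock of the runtime context the workflow would see.
const ctx = { $triggers: { REST: { $input: payload.parameters } } };
```

This is what makes a single published workflow reusable across process variations: the calling system supplies the discriminating context, and the workflow branches on it.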

The email trigger is one I find particularly practical. You configure a Google or Microsoft email account under Credentials, set the account type to Inbound, and from that point on, any new email arriving in the inbox automatically kicks off the workflow. The email body, sender address, subject, headers and even attachment content are all exposed as context variables, such as $context.$triggers.EMAIL.$input.content, $context.$triggers.EMAIL.$input.fromAddress, and processed attachment text via $context.$triggers.EMAIL.$input.attachments[0].context. This makes document ingestion workflows genuinely straightforward to build. For example, a supplier can email a quote to a monitored inbox, the email trigger fires, a Document Processor node extracts the line items, and the workflow creates a purchase requisition in Fusion, with no human involvement unless an exception is identified.

The schedule trigger supports two distinct patterns, depending on how you want your workflow to run. Interval scheduling fires on a repeating, time‑based cadence, configured in seconds, minutes, hours or days from a defined anchor point, while recurrence scheduling uses more familiar calendar‑based patterns, either one‑off or repeating, such as weekly on specific days. One practical point to be aware of is that the user creating a scheduled workflow must be assigned the FAI Batch Job Manager Duty role (ORA_DR_FAI_BATCH_JOB_MANAGER_DUTY) for the scheduling job to be created successfully. It is an easy detail to miss during initial setup, but one that is worth flagging early to your security or roles team to avoid unnecessary delays.

The 26A release also introduced native channel integrations for both Microsoft Teams and Slack, and I expect these to become the primary way many organisations interact with AI agents, rather than relying on the embedded Fusion chat widget. At a high level, the Microsoft Teams setup involves configuring the channel under Credentials, supplying the Teams bot or app details, generating and downloading the app manifest, and then uploading it to Teams as a custom application. Once this is in place, users can discover and select available agents directly within Teams and interact with them in exactly the same way they would through the native Fusion chat experience, but in a collaboration tool they already use every day.

One final point worth calling out is that a new duty role has been introduced to group all channel‑related permissions for both Microsoft Teams and Slack. This role includes permissions such as ChannelManifest and ExternalChatCorrelation, and it is required for any user who needs to configure channel integrations or interact with agents through Teams or Slack. As with any new security object, it is worth factoring this into your role review and security planning early, so it does not become a blocker when 26A goes live in your environment.

This blog is the second in a series of four exploring the latest capabilities in Oracle AI Agent Studio introduced in 26A. In this part, I’ve focused on how Workflow Agents are triggered and how those triggers shape real‑world usage, from event‑driven integrations through to scheduled and collaborative interactions. In the next post, I’ll move on to another key area of the platform, building on these foundations and looking at how the newer features work together to support robust, production‑ready agent solutions.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 1

I’ve written quite a bit recently about Oracle AI Agent Studio and what it can do at a high level, and those posts have led to some really valuable conversations. A question that keeps coming up, though, is a very practical one: “This all sounds great, but how does it actually fit into the rest of my technology landscape?” Closely followed by, “How do I build something that’s reliable and production‑ready, rather than just impressive in a demo?” This post is my attempt to answer both, drawing on the training content from the AI World partner sessions, which goes into a level of detail that genuinely changes how you think about building with AI Agent Studio, particularly when it comes to integration, robustness and real‑world use. I should say up front that this series is a little more technical than my usual blogs, but it feels important to share, because this is the detail that turns AI from an interesting idea into something you can confidently run and scale.

One announcement I haven’t covered yet is the invokeAsync REST API, introduced in 26A, which allows external applications to programmatically call published Agent Teams in Fusion, and this is a significant step forward for organisations where Fusion sits alongside other enterprise systems. It means those systems can now trigger Fusion AI agents directly, without a user ever needing to open the chat interface. The process works in two stages: an external application sends a POST request to the invoke endpoint, passing either a user prompt or structured data, and because AI agents may need time to reason or retrieve information, the call returns a job ID rather than an immediate response. That job ID is then used to poll a separate status endpoint, which returns the final output along with useful metadata such as status, conversation ID, trace ID and timing information. For development and testing, adding ?invocationMode=ADMIN to the status endpoint provides detailed debug output, including the full node execution trace, which is invaluable when you are building and troubleshooting. Authentication follows the standard OAuth 2.0 bearer token approach used across Fusion APIs, so if you are already familiar with OCI IAM and IDCS, there is nothing fundamentally new to configure.
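The invoke-then-poll shape of that API is worth seeing in outline. The sketch below injects a stand-in HTTP function so the endpoint paths and response field names, which are illustrative here rather than the documented Fusion shapes, stay out of the way; a real integration would call the documented endpoints with an OAuth 2.0 bearer token as described above.

```javascript
// Sketch of the two-stage invoke-then-poll pattern described above.
// httpFn stands in for a real HTTP client; the paths ("/invoke",
// "/status/...") and response fields are illustrative, not the real API.
async function invokeAndPoll(httpFn, prompt, { maxPolls = 10, delayMs = 0 } = {}) {
  // Stage 1: POST the prompt; the call returns a job ID, not a final answer.
  const { jobId } = await httpFn("POST", "/invoke", { prompt });

  // Stage 2: poll the status endpoint until the job completes.
  for (let i = 0; i < maxPolls; i++) {
    const status = await httpFn("GET", `/status/${jobId}`);
    if (status.state === "COMPLETED") return status.output;
    if (status.state === "FAILED") throw new Error("Agent run failed");
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Timed out waiting for agent result");
}

// Mock HTTP function simulating a job that completes on the second poll.
function makeMockHttp() {
  let polls = 0;
  return async (method) => {
    if (method === "POST") return { jobId: "job-1" };
    polls += 1;
    return polls < 2
      ? { state: "RUNNING" }
      : { state: "COMPLETED", output: "Requisition created" };
  };
}
```

The job-ID indirection is the important part: because agents may reason or retrieve for some time, the caller never blocks on the initial request.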

For those looking for a more standardised integration approach, Oracle also introduced support for the A2A, or Agent‑to‑Agent, protocol in 26A. A2A allows a client to discover agents through a well‑known metadata endpoint, often referred to as the agent card, initiate a task by sending a message, and then poll for the result using the same job ID pattern. The agents search endpoint makes it possible to query for published agents by name, which is particularly useful when you are building orchestration layers that need to dynamically discover and delegate work to specialist agents. The agent card itself returns the agent’s capabilities and supported methods in a structured format, making it much easier to establish interoperability between Fusion agents and third‑party systems without relying on custom, point‑to‑point integrations.
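As a rough illustration of the discovery step, the sketch below reads an agent card and pulls out the fields an orchestration layer would use for delegation. The `/.well-known/agent.json` path follows the general A2A convention and the card fields shown (`name`, `skills`) are assumptions, so confirm what your Fusion release actually publishes.

```python
import json
import urllib.request

def fetch_json(url):
    """Fetch and parse a JSON document over HTTP."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def discover_agent(agent_base_url, fetch=fetch_json):
    """Read an A2A agent card from the well-known metadata endpoint.

    The path and card fields follow the general A2A convention and are
    assumptions here -- verify them against your Fusion release.
    """
    card = fetch(f"{agent_base_url}/.well-known/agent.json")
    # The card advertises the agent's capabilities and supported methods,
    # which an orchestrator can use to decide where to delegate work.
    return {
        "name": card.get("name"),
        "skills": [s.get("name") for s in card.get("skills", [])],
    }
```

Once an orchestrator can enumerate cards like this, delegating to a specialist agent becomes a lookup rather than a hard-coded, point-to-point integration.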

The relationship between Fusion and the outside world does not just work in one direction. In 26A, Oracle introduced Data Source Applications within the Credentials tab of AI Agent Studio, allowing administrators to configure OAuth connections to non‑Fusion systems such as EPM Cloud, WMS, or any external application with a compatible identity provider. Once a Data Source Application is set up with the base URL, IDCS URL, client ID, scope and key pair, it becomes available as a selectable source when creating a Business Object using the resource type “Other Data Source Application”. This makes it much easier for AI agents to securely reach out to external systems, bringing data back into Fusion in a controlled and repeatable way rather than relying on custom or hard‑coded integrations.

At runtime, when a Business Object function within a workflow needs to call an external system, the platform simply uses the saved configuration to obtain an OAuth token and invoke the target API behind the scenes. From the workflow designer’s point of view, it behaves exactly like any other Business Object node, with no additional complexity to manage. This opens up some genuinely powerful patterns, such as an HCM Workflow Agent that validates a job requisition in Fusion, checks headcount or budget in EPM, and then writes the outcome back to a Fusion record, all within a single, governed automation that remains transparent, secure and easy to maintain.
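For anyone curious what the platform is doing on your behalf, the exchange it automates is essentially the standard OAuth 2.0 client-credentials grant. The sketch below is a generic illustration of that grant, not Oracle's internal implementation; the URLs and credentials are placeholders.

```python
import json
import urllib.parse
import urllib.request

def client_credentials_token(idcs_token_url, client_id, client_secret, scope,
                             fetch=None):
    """Illustrative OAuth 2.0 client-credentials grant (RFC 6749, section 4.4).

    A Data Source Application stores these values once; at runtime the
    platform performs the equivalent of this exchange before invoking the
    target API, so workflow designers never handle tokens directly.
    """
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode("utf-8")
    if fetch is not None:  # injectable for testing without a live IDCS
        return fetch(idcs_token_url, body)
    req = urllib.request.Request(idcs_token_url, data=body, headers={
        "Content-Type": "application/x-www-form-urlencoded",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

The point of the Data Source Application feature is precisely that you configure this once and never write it yourself, but knowing the underlying flow makes it much easier to diagnose authentication failures when they occur.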

This blog is the first in a short series of four, where I will walk through some of the latest and most important functionality in Oracle AI Agent Studio introduced in 26A. In this opening post, I’ve focused on how agents connect to the wider enterprise landscape, because integration, reliability and governance are what ultimately determine whether AI delivers real value. In the next blogs, I’ll build on this foundation and explore other new capabilities in more detail, looking at how they work in practice and how you can apply them confidently in real‑world scenarios.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

Agentic Applications for HCM Cloud

At the AI World London HCM Partner Summit, Oracle unveiled 22 new Agentic Applications across the Fusion suite, including eight designed specifically for HCM Cloud. One of the standout additions is the Workforce Operations Command Centre, which brings scheduling, time, and absence management into one coordinated hub. It highlights real‑time risks, helps managers make confident coverage decisions, and streamlines day‑to‑day operations. During the demo, we saw a live priority queue flagging shift conflicts and timecard issues by severity, with simple one‑click options to approve, reassign, or review — making it far easier to stay ahead of workforce challenges.

Oracle has also introduced a series of new workspaces designed to streamline everyday manager and employee tasks. The Hiring Workspace for Store Managers brings candidate details, interview scheduling, and urgent hiring requests together to support faster decisions, while the Manager Concierge Workspace unifies compensation, performance, talent, and absence insights with simple, policy‑backed actions. The Team Learning Workspace helps managers stay ahead of compliance risks and focus on development priorities, and the Career Advancement Command Centre connects employees to suitable roles, required skills, and training. Alongside this, the My Help Workspace offers a clear view of open requests and relevant knowledge articles, and Contracts Intelligent Counsel, also known as Agentic Compliance, provides continuous, autonomous monitoring of contract terms and policy changes to reduce compliance overhead.

Oracle also unveiled Oracle Manager Edge, a new personal AI coach designed to give managers practical, data‑driven guidance directly within Touchpoints, with suggested actions seamlessly linked to Oracle Team Touchpoints. Although it isn’t an Agentic Application, it will be available through the AI Agent Studio once released, offering organisations an accessible way to bring personalised, context‑aware coaching into everyday management without additional complexity.


Oracle also confirmed six dedicated Payroll Agents designed to cut manual effort and improve payroll accuracy. The Payslip Analyst, already live in 25D, helps employees resolve payslip queries and has been shown to reduce inquiry costs by up to 70 per cent with a rapid ROI. The Compliance Update Agent (26C) converts legislative changes into proactive configuration updates, removing up to 90 per cent of the manual workload. The Court Order Processing Assistant (26A) fully automates garnishment intake, while the Tax Calculation Statement Agent (26C), currently specific to the US and California, explains the detailed tax logic behind each payroll run. The W‑4 Compliance Agent (26B) automates US tax‑form completion, and the Pay Run Agent (26C) provides real‑time summaries and flags exceptions, reducing manual review efforts by as much as 70 per cent. For UK and global payroll teams, the Payslip Analyst and Compliance Update Agent are the most relevant today, with the remaining agents focused on US‑specific requirements.

As Oracle continues to expand its portfolio of Agentic and AI‑driven capabilities, the direction is clear: more guidance, more automation, and less friction across everyday HR and payroll operations. For organisations already using Fusion, these new applications offer a practical way to improve decision‑making, strengthen compliance, and deliver a smoother experience for managers and employees alike. And with more innovation on the horizon, now is an ideal time to explore how these tools can support your roadmap and help your teams work smarter, not harder.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines.

Under the Hood: How Oracle’s Workflow Agents Actually Work

After sharing my initial thoughts on the Oracle AI World announcements earlier this week, I’ve since taken a closer look at what sits behind the headlines. The announcements focused on what Oracle is delivering, but what really interests me now is the how. That is where things get genuinely exciting for those of us who will be hands-on, building and configuring these new capabilities.

One thing that really helped me make sense of Oracle’s approach was the clear distinction between workflow agents and hierarchical agents. They serve very different purposes, and treating them as interchangeable would quickly lead to the wrong outcomes. Workflow Agents follow policy‑bound orchestration with contextual reasoning and are designed for predictability, auditability and stable SLAs, making them ideal for things like payroll deductions, purchase requisitions or leave approvals where governance and consistency are essential. Hierarchical Agents work differently, using LLM‑led decomposition with specialist sub‑agents, which makes them a better fit for open‑ended problems with many possible paths where multi‑domain reasoning matters more than repeatability. Oracle has intentionally designed the two to complement each other, with Workflow Agents providing the structure by defining stages, approvals, retries and SLAs, while Hierarchical Agents take on the heavier analytical or generative work within specific steps. The result is a balanced model that preserves governance while still giving teams the flexibility to tackle more complex reasoning tasks.

Oracle has outlined seven composable design patterns for building Workflow Agents, each suited to a different type of process. Chaining uses sequential intelligence to pass enriched context from one step to the next, which works well for extract‑validate‑decide‑act processes. Parallel execution allows multiple branches to run at the same time and then consolidates their outputs into a single decision, making it a strong fit for compliance or risk scenarios. Switch flows use context‑aware decisioning to route work based on intent, profile, state and policy; for example, an employee updating deductions after a new baby can trigger both Benefits and Payroll updates automatically with no handoff. Iteration supports adaptive refinement by recalculating until constraints are met, which suits planning and scheduling tasks. Looping introduces self‑correction, such as regenerating and revalidating an invoice when OCR results do not match. RAG‑assisted Reasoning retrieves the right policy information before applying thresholds or routing logic. Finally, timer‑based execution triggers actions on a schedule, such as checking invoice status and notifying the accounts payable owner before an SLA is at risk.
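To show the shape of one of these patterns, here is a toy sketch of the Looping (self-correction) pattern described above. The `extract` and `validate` callables stand in for the OCR/LLM and business-rule nodes a real Workflow Agent canvas would provide; nothing here is Oracle API, just the control flow.

```python
def self_correcting_extraction(document, extract, validate, max_attempts=3):
    """Looping pattern: regenerate and revalidate until the result passes.

    `extract` and `validate` are stand-ins for the OCR/LLM and validation
    nodes of a real Workflow Agent -- this sketch only shows the control flow.
    """
    for attempt in range(1, max_attempts + 1):
        result = extract(document, attempt)
        if validate(result):
            # Constraints met: hand the clean result to the next node.
            return {"status": "VALIDATED", "attempts": attempt,
                    "result": result}
    # Self-correction exhausted: route to a human review step instead of
    # silently passing bad data downstream.
    return {"status": "NEEDS_REVIEW", "attempts": max_attempts, "result": None}
```

The bounded retry plus explicit escalation is what makes the pattern safe in a governed process: the agent gets a chance to fix its own output, but a human always sees the cases it cannot.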

The Workflow Agent canvas in AI Agent Studio groups its building blocks into four areas that shape how an automation behaves. AI nodes include LLM, Agent, Workflow and the RAG Document Tool. Data nodes cover things like the Document Processor, Business Object Function, External REST, Tool and the Vector DB Reader or Writer. Logic nodes provide Code and Set Variables, while the Workflow Control nodes handle governance through Human Approval, If Condition, For Loop, While Loop, Switch, Run in Parallel, Wait and Return. At workflow level, the Triggers tab supports Webhook, Email and Schedule triggers, and the Error Handling section lets you notify recipients by email if a workflow reaches a permanent failure, using context expressions such as $context.$workflow.$traceId. For image-related tasks, the Vision LLM node is the correct choice, although it is classed as a premium tool and comes with associated pricing considerations.

METRO, Oracle’s monitoring layer for Measurement, Evaluation and Testing for Real‑time Observability, gives teams a clear view of what their Workflow Agents are doing across inbound emails, approvals and scheduled runs. From the 26C release, it will also surface AI Unit consumption, which becomes increasingly important as organisations scale their use of agents and need tighter visibility and cost control.

Pricing has been a major consideration for customers exploring AI Agents, and the new structure aims to simplify things through the introduction of AI Units, or AUs. Oracle is expected to publish the full details in April or May, but the core concept is that an AU costs roughly $0.01 and is calculated as: AU consumption = CEILING((Input Tokens + Output Tokens) / 10,000) × Action Value Factor. The Action Value Factor varies depending on the action type and the LLM tier being used. General actions such as Q&A, approvals and reasoning have a 0× factor on the Basic LLM, meaning they consume no AUs at all, while Premium and Bring Your Own apply higher factors. Artifact creation and audio generation sit in higher tiers again, with video generation marked as coming soon. Every Fusion customer receives 20,000 AUs per month at no charge, pooled across all pillars with unused units rolling over to the end of the contract term. Additional AUs are available in $1,000 increments.
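The formula is simple enough to sanity-check in a few lines of Python. This is a direct transcription of the published formula with the roughly-$0.01 unit price as a placeholder; treat the output as an estimate until Oracle publishes the final pricing details.

```python
import math

def au_consumption(input_tokens, output_tokens, action_value_factor):
    """AU consumption = CEILING((Input + Output Tokens) / 10,000) x AVF."""
    return math.ceil((input_tokens + output_tokens) / 10_000) * action_value_factor

def estimated_cost_usd(aus, price_per_au=0.01):
    """Oracle has indicated roughly $0.01 per AU; final pricing is still to
    be published, so treat this as an estimate only."""
    return round(aus * price_per_au, 2)
```

For example, a run with 12,000 input tokens and 3,000 output tokens rounds up to two blocks of 10,000 tokens; with an Action Value Factor of 2 that is 4 AUs, or around $0.04. A 0× factor on the Basic LLM makes those general actions free regardless of token volume, which is worth remembering when estimating monthly consumption against the 20,000 free AUs.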

What I find most compelling about this architecture is that it’s built for the realities of enterprise work rather than an idealised version of it. The self‑correction loops, governance controls, evaluation framework and hybrid agent pattern all acknowledge that real business processes can be messy and that auditability is essential. The 22 new agentic applications arriving in 26B across ERP, HCM, SCM and CX give us a clear benchmark for what good looks like in practice. If you’re interested in exploring how Workflow Agents could support your organisation’s processes, now is a great time to start that conversation.

In the meantime, why not check out my earlier post covering the Oracle AI World announcements? You can find it here.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

A New Chapter for Fusion Cloud: Oracle Debuts Agentic Applications

I’m writing this from Oracle AI World in London today, where Oracle has just unveiled its new Agentic Applications for Oracle Fusion Cloud. It’s genuinely one of the biggest announcements we’ve seen in quite some time. These aren’t early ideas or future promises, they’re real applications, with names, availability details, defined process areas and clear outcomes for customers. Here’s a quick look at what matters and why it’s worth paying attention to.

Oracle’s message today is pretty clear: traditional enterprise systems may hold vast amounts of information, but they don’t genuinely understand it. They capture data, store it, and generate reports, but it’s always been down to people to interpret what’s happening and drive the next step. Agentic Applications change that. They bring reasoning, context awareness and prioritisation into the flow of work, acting within your existing governance and security boundaries without needing constant, step‑by‑step direction.

In this updated architecture, Oracle positions Agentic Applications in a new, composable layer above the existing Fusion transactional systems (ERP, HCM and CX). Beneath that, they can tap into a broad set of LLMs from OpenAI, Cohere, Meta, Anthropic, xAI and Google. At the centre is Oracle AI Agent Studio, which provides the development and configuration layer that connects everything together. Oracle also introduced a helpful maturity model: GenAI Assisted, delivering around a 5–10% productivity uplift; Agent Optimised, offering 10–30% efficiency gains; and Agent Re‑invented, where processes are re‑designed around autonomous agents and Oracle is seeing improvements of 40% or more in operational agility.

Oracle was also keen to emphasise that not all AI in Fusion is created equal. There’s a spectrum that runs from AI Workflows, logic‑driven automation using LLM nodes, loops and if/then conditions, through to Workflow Agents, which introduce real‑time reasoning and greater autonomy. Above that sit AI Agents, which are goal‑based, specialised and able to operate more independently, and AI Agent Teams, where a lead agent coordinates several specialist agents working together. Each step up the ladder trades a little predictability for far greater capability. For anyone who’s spent years configuring Oracle, the workflow layer will feel familiar, but the agentic layers above it represent genuinely new behaviour.

Oracle also announced a set of new capabilities today, including an Agentic App Builder, interoperability through MCP (Model Context Protocol) and A2A (Agent‑to‑Agent) protocols, Contextual Memory, Content Intelligence, multi‑modal support, an Agent ROI Dashboard, and a full suite of security, audit and governance controls. The standout is the Agentic App Builder: you simply describe your objective in natural language and the system assembles reusable agents and workflows into a composable agentic application.

One final announcement that will please a lot of customers: Oracle has simplified its agent pricing. The old split between Seeded Agents (Oracle‑built) and Custom Agents has been removed. If you use Oracle’s standard LLMs, every agent, regardless of who built it, is now free. Premium third‑party LLMs such as OpenAI or Anthropic come with a straightforward consumption‑based charge. Every Fusion customer also receives a monthly allocation of 20,000 AI Units (AUs), with unused units rolling over until the end of the contract term. For organisations that have held back on AI because of cost uncertainty, that barrier has effectively disappeared.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

Why Embedded AI Is the Real Differentiator in Enterprise Automation

AI is everywhere at the moment; there’s really no getting away from it. But not all AI is created equal. Plenty of platforms promise co‑pilots, assistants and automation, yet very few can actually take action inside core business systems. That’s why Oracle’s approach with AI Agent Studio for Fusion Applications really stands out.

Most enterprise AI tools today act as co‑pilots: brilliant for drafting content or answering questions, but far less capable when it comes to genuine process automation. Oracle has taken a different approach. Its AI agents are embedded directly within Fusion Applications, giving them the power to:

  • Understand business data in real time
  • Make decisions with full context
  • Write back into transactional systems securely
  • Automate tasks end‑to‑end without relying on extra tools

This isn’t AI “bolted on”, it’s AI woven right into the core of the application. And yes, I realise I’m starting to sound like an Oracle salesperson, but the difference between Oracle and other providers is hard to ignore. The image below highlights just how significant that gap is, comparing what Oracle delivers with five other major service providers.

Across the market, vendors are running into the same familiar obstacles:

  1. Added AI costs. Whether it’s Copilot capacity, ServiceNow’s AI tiers or Salesforce Flex Credits, AI is frequently positioned as an optional extra rather than something included as standard.
  2. Co‑pilots, not agents. Microsoft, Salesforce, Workday and others are excellent at generating content and offering recommendations, but they seldom deliver true autonomous action within transactional systems.
  3. Fragmented platforms. Many providers depend on several data models, clouds and integration layers. Their AI often relies on separate analytics environments or copied data, which strips away workflow context and makes automation far more difficult.

You may have already come across Oracle’s One Platform, and this is where Oracle really pulls ahead of the pack. Oracle keeps things refreshingly straightforward: Fusion Applications, data and AI all sit within a single, unified ecosystem. That brings three key advantages:

  • Embedded intelligence — agents work directly inside the applications employees use every day.
  • A unified security and data model — consistent governance and safer, more reliable automation.
  • True write‑back — agents can update transactions natively, without middleware or separate AI clouds getting in the way.

So you might be wondering, “That all sounds impressive, but what does it actually mean for me?” At its core, agentic AI is about cutting down manual effort, improving accuracy and boosting operational efficiency. Oracle’s embedded approach ensures that automation is reliable, properly governed and able to scale as your business does. Crucially, it isn’t stitched together from multiple products. With Oracle, AI is built in, a fundamental part of the system that runs your business, not an optional extra layered on top. It marks a shift from AI that simply talks, to AI that genuinely delivers.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

METRO – The Jewel in the Crown in Oracle AI Agent Studio

Have you heard about METRO? METRO is the embedded measurement and observability framework within Oracle AI Agent Studio. The name stands for Measurement, Evaluation and Testing for Real‑time Observability.

Oracle have introduced METRO as part of release 25D. It is a brand-new set of monitoring and evaluation tools built right into Oracle AI Agent Studio. Think of METRO as your control centre for AI agents in production. It gives you everything you need to keep an eye on accuracy, compliance, performance, costs, latency and even token usage. In short, it helps you make sure your agents are doing exactly what you expect, without any surprises.

The world of AI agents isn’t straightforward: responses can vary, workflows aren’t always predictable, and traditional checks don’t always work. That’s where METRO steps in. It’s designed for this new reality, offering smart ways to check semantic correctness, track detailed performance metrics and keep guardrails in place against risks like prompt injection. Whether you’re running complex multi-agent setups or more structured workflows, METRO makes it easier to stay on top of quality and reliability.

One of the best parts? The dashboard. It gives you a clear view of key stats like latency, error rates, token counts and correctness scores. And if you want to dig deeper, you can trace every single step an agent took, from LLM calls to tools used and outputs generated. This level of detail means you can troubleshoot quickly and optimise performance without the guesswork.

METRO also helps with testing, using the industry-standard LLM-as-a-Judge approach to score responses and provide feedback you can act on. Combine that with new features in 25D, like deterministic workflow agents and support for models such as GPT‑5 mini, and you’ve got flexibility to build agents for any scenario.

METRO isn’t just another feature, it’s a game-changer for anyone looking to manage AI agents with confidence. By combining deep insights, flexible evaluation tools and full traceability, it gives you everything you need to keep your AI ecosystem running smoothly. And with the added power of new models and workflow options in 25D, you’ve got all the tools to innovate faster and smarter. The future of AI governance starts here and METRO makes it simple!