Reinventing How Work Works: The Business Case for Oracle Fusion Agentic Applications

Oracle has been making a clear and increasingly consistent argument over the past few months: enterprise software has reached the limits of what a system of record can do. I’ve written before about the introduction of Agentic Applications at AI World London, and about the specific HCM applications that were announced alongside them. But there’s a broader story here that I haven’t fully explored yet, and it’s one that I think matters for every Fusion customer, not just those focused on HCM.

This post draws on the “Reinventing How Work Works” webinar, which stepped back from individual applications and made the architectural and commercial case for why the shift to agentic is happening, what it actually looks like in practice, and where Oracle is taking this next. If you’re trying to build internal momentum for agentic adoption, or if you’re trying to explain to a leadership team why this is different from previous AI announcements, this is the post to share.

The framing that Oracle used throughout this webinar is, I think, one of the clearest explanations of what has actually changed. Traditional enterprise systems, including Fusion as it has historically operated, are systems of record. They follow fixed rules, capture what happened, retrieve information when asked, and complete transactions. They document the business. What they don’t do is run the business.

Agentic Applications represent a move to what Oracle calls systems of outcomes. Rather than waiting for a person to interpret data and decide what to do next, a system of outcomes works toward objectives, makes things happen, solves problems, and achieves results. The underlying system of record doesn’t go away. The data, governance, approval hierarchies, role-based access control, and audit history are still there, and in Oracle’s case, they’re still the source of truth for every transaction. What changes is the layer operating on top of that foundation.

This architecture diagram is worth studying if you haven’t seen it. Agentic Applications sit in a new composable layer above the existing ERP, HCM, and CX transactional applications. That layer is powered by teams of AI agents coordinated through Oracle AI Agent Studio, drawing on the full enterprise data model, security model, and process history that already exists in Fusion. Beneath all of this, Oracle Cloud Infrastructure (OCI) provides the AI data platform, and a range of large language models (LLMs), including those from OpenAI, Cohere, Meta, Anthropic, xAI, and Google, are available depending on the task and preference.

Every Fusion Agentic Application is built around four core dynamic areas. Understanding these is useful when you’re evaluating a specific application or explaining the concept to stakeholders.

The first is the Advisor, which is the “Ask Oracle” conversational interface. This is where a user can ask natural language questions and get contextual, data-aware responses rather than navigating to a report. The second is the Information Summary, which provides an intelligent, prioritised view of what’s happening right now in that area of the business, surfaced automatically rather than requiring the user to run queries. The third is Priority Actions, a curated queue of recommended next steps that the agents have identified based on current conditions, risk signals, and business objectives. The fourth is Communications, which handles notifications, responses, and outbound actions within the appropriate governance boundaries.

These four areas appear consistently across all 22 applications, which is deliberate. Oracle’s position is that once a user understands the structure in one application, they can navigate any other agentic application without relearning the interface.

One of the most practically useful concepts introduced in this webinar is what Oracle calls the Autonomy Dial. It’s a spectrum with three positions, and it addresses one of the most common concerns I hear from customers and consultants: how much control do we give up?

At the “Human in the Loop” end, the agent assists and a person decides. The agent drafts, recommends, and prepares; the human reviews and approves. This builds trust, improves speed and consistency, and keeps people firmly in control. The business impact is described as immediate productivity gains.

In the middle is “Human in the Lead”, where the agent executes and a person monitors. The agent handles routine work and manages to policy; a person steps in for genuine exceptions. This scales output without adding headcount and frees teams for higher-value work. The impact here is scaled operations.

At the “Autonomous Execution” end, the agent drives and a person owns. End-to-end execution happens within policy, continuous real-time optimisation takes place, and human involvement is reserved for true exceptions. The impact is described as business transformation.

What I find compelling about this model is that it isn’t prescriptive. Oracle isn’t saying every organisation should start at one end or aim for the other. Each position on the dial represents a valid operating model depending on the process, the risk tolerance, and the maturity of the organisation. A payroll close process might comfortably sit at Human in the Lead. A workforce scheduling decision for a critical shift might warrant Human in the Loop until confidence is established. A high-volume procurement matching task might be a good candidate for Autonomous Execution relatively quickly.

My earlier posts covered the eight HCM applications in detail. The full announcement of 22 applications in releases 26B / 26C spans ERP/SCM, HCM, and CX, and it’s worth understanding the breadth of this, because it signals how Oracle is positioning agentic across the entire Fusion suite rather than as an HCM-specific capability.

On the ERP and SCM side, the applications include Design-to-Source Workspace, Product Readiness Workspace, Production Shift Operations Workspace, Sales Order Command Centre, Batch Process Manufacturing Workspace, Logistics Execution Command Centre, Maintenance Operations Workspace, Warehouse Operations Workspace, Cost Accounting Close Workspace, Sourcing Command Centre, Collectors Workspace, and Security Command Centre.

The Design-to-Source Workspace is a useful example of the transformation logic. Previously, product design and bill of materials work happened in separate systems. Sourcing relied on items entered manually. Negotiation delays accumulated when information was missing or unresolved. With the agentic application, product specifications translate automatically into qualified supplier lists, bills of materials are generated directly from CAD files, at-risk negotiations are flagged automatically, and bids are evaluated across cost, lead time, quality, and risk in a single view. The outcome is faster time to market and improved sourcing cycle times.

On the CX side, three applications have been announced: Cross-Sell Program Workspace, Contract Compliance Workspace, and Sales Command Centre. For CX teams, the Sales Command Centre in particular brings together the kind of deal health monitoring, risk flagging, and next-step recommendation that previously required significant manual analysis across multiple reports.

I’ve written in detail about Oracle AI Agent Studio in previous posts, but the webinar highlighted several new capabilities that are worth calling out specifically, because some of them genuinely change what’s possible for teams building custom agentic applications.

The most significant new addition is the Agentic App Builder, which is released in 26C. This is what Oracle describes as a “no-code agentic brain”: you describe your objective in natural language, the system explains and builds the workflow, generates agents and the underlying code automatically, and allows you to diagnose and fix issues in real time. In the demo, a user types a description of a sales opportunity health and risk management app, and within moments a structured agentic application is assembled from reusable agents, with a Deal Summary Agent, a Risk Agent, a Customer Insights Agent, and a Process Agent already in place and connected. It’s a significant step forward from the existing builder experience.

Alongside this, several other capabilities have been marked as new in the current release: Workflow Orchestration, Content Intelligence, Contextual Memory, Multi-Modal support, an Agent ROI Dashboard, and enhanced Security, Auditability, and Governance controls. Contextual Memory in particular is worth paying attention to, because it allows agents to retain information across interactions, which is what enables genuinely personalised, continuous support rather than stateless responses to each individual query.

The studio now also supports full interoperability through MCP (Model Context Protocol) and A2A (Agent-to-Agent) protocols, which means agents built in Fusion can exchange context with agents or tools running outside the Fusion estate, provided the appropriate governance controls are in place.

One thing the webinar made very clear is that Oracle isn’t building this alone. The Fusion AI ecosystem now includes 73,400 certified builders, 10,000 developers actively building agents, and over 100 pre-built agent templates in the AI Agent Marketplace, which is now open to all partners for submissions. Open standard support includes native MCP integration across connectors and an agent-to-agent registry within Oracle AI Agent Studio itself.

For customers, this matters because it means the pool of available agents and expertise is growing rapidly. You don’t need to build everything from scratch, and you don’t need to rely solely on Oracle to extend the platform. The open partner submission model for the marketplace is a meaningful shift, and it’s one that will accelerate the availability of domain-specific and industry-specific agents over the coming months.

The summary that Oracle closed with is a useful way to frame internal conversations: Fusion is moving from systems of record to systems of outcomes. Agentic Applications get work done. Oracle AI Agent Studio lets you build, deploy, and scale agents specific to your organisation. OCI AI Advantage runs it all securely at scale.

What I’d encourage any Fusion customer to take from this is that the window to start is now. The pricing model has already been simplified significantly (covered in my earlier post), the tooling to build and extend has matured substantially, and the evidence base from production deployments is solid. Starting with one application in one process area, positioned at Human in the Loop on the autonomy dial, is a low-risk, high-value entry point that builds organisational confidence while delivering measurable results.

If you’re thinking about where to start or how to make the case internally, I’m happy to talk it through. In the meantime, why not check out my earlier post on the HCM-specific Agentic Applications announced at AI World London? You can find it here. And if you missed the original announcement post covering the architecture, the maturity model, and the updated pricing, that’s a useful starting point too, and you can find it here.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 4

Over the last three blogs, I’ve explored how AI Agent Studio connects to the wider enterprise, how agents are triggered and interacted with, and how workflows are designed to be reliable and production‑ready. In this final part of the series, I want to pull those threads together and focus on the capabilities that help agents scale safely and operate with confidence over time. This is where governance, control and operational discipline really come into play, and where the newer 26A and 26B features start to show how Oracle is shaping AI Agent Studio for long‑term, enterprise use rather than short‑lived experimentation.

Choosing the right document or memory node is an area where I see a lot of confusion in conversations with clients, so it is worth being very clear about what each one is designed to do. The Document Processor node is intended for runtime documents, attachments that arrive as part of a specific workflow execution, such as a supplier quote received by email, an invoice uploaded through chat, or a UCM attachment linked to a Fusion business object. Its job is to retrieve the file, extract the text, and pass that content on to the next node in the workflow. It is not designed for querying a stable or long‑lived corpus of documents, such as policy or reference material that you want to reuse and search repeatedly over time.

The RAG Document Tool node is designed for exactly that stable, reusable collection of information. You curate a set of documents within an Oracle AI Agent Studio Document Tool, move them through the lifecycle from Ready to Publish to Published, and the RAG node then performs semantic retrieval against that content to ground downstream LLM reasoning in your own policies, playbooks or manuals. To get the best results, it is important to use specific queries with clear discriminators such as module, process area, country or version, which helps improve retrieval precision. It is also good practice to include an explicit “no results” fallback path in your workflow, rather than allowing the LLM to guess when retrieval confidence is low.
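To make that concrete, here is a minimal sketch of the querying discipline described above. Everything in it is an assumption for illustration: the helper names, the score field, and the threshold are not part of the delivered RAG Document Tool; only the ideas of discriminators and a deterministic no-results fallback come from the guidance itself.

```javascript
// Illustrative only: buildRagQuery, routeRagResults and the score field are
// hypothetical, standing in for whatever retrieval interface you wire up.

// Compose a retrieval query with explicit discriminators (module, country,
// version) so semantically similar but out-of-scope passages rank lower.
function buildRagQuery(question, discriminators) {
  const qualifiers = Object.entries(discriminators)
    .map(([key, value]) => `${key}: ${value}`)
    .join("; ");
  return `${question} [${qualifiers}]`;
}

// Route to a deterministic fallback branch instead of letting the LLM guess
// when retrieval returns nothing, or nothing above a confidence threshold.
function routeRagResults(results, minScore) {
  const confident = results.filter((r) => r.score >= minScore);
  return confident.length > 0
    ? { branch: "GROUNDED_ANSWER", passages: confident }
    : { branch: "NO_RESULTS_FALLBACK", passages: [] };
}

const query = buildRagQuery(
  "What is the approval threshold for contingent hires?",
  { module: "HCM", country: "UK", version: "26A" }
);
const routing = routeRagResults([{ text: "policy excerpt", score: 0.42 }], 0.6);
```

The point of the fallback branch is that a downstream If Condition node can route on `routing.branch` deterministically, rather than passing thin retrieval results into an LLM node and hoping for the best.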

The Vector DB Reader and Writer nodes serve a different purpose again, providing durable semantic memory that persists across workflow runs. They are best used to store normalised, reusable knowledge units such as validated resolution summaries, previous exception details, or extracted entity representations. Entries should be kept short and semantically focused, enriched with meaningful metadata to support filtering, and assigned stable document IDs to avoid duplicates. Raw PII or permission‑restricted data should never be stored without a deliberate access control design. When reading from the vector store, metadata filters should always be applied, and low‑confidence matches should be treated the same as no result at all, routing the workflow to a deterministic fallback rather than continuing on uncertain ground.

One theme that came through strongly in the partner training sessions, and one I think represents genuinely good discipline, is treating Workflow Agent testing as a first‑class concern rather than something bolted on at the end. Oracle’s evaluation framework for Workflow Agents, often referred to as Workflow Evals, is based on supplying structured JSON test inputs and asserting expected outputs. These evaluations are intended to be run as a regression suite whenever you change a prompt, adjust a node configuration, swap a tool, or update a policy, helping you catch unintended side effects early and keep agent behaviour stable as it evolves.

A good starting point is to define around five core paths through the workflow: the happy path, two or three of the most common exception scenarios, and at least one case that deals with missing or poor‑quality input data. From there, you should be tracking things like overall pass rate, branch accuracy, schema validity, and retry or escalation behaviour. The aim is not simply to prove that the workflow reaches an end state, but to make sure it routes correctly and predictably under every condition that genuinely matters in production.
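As a sketch of that regression-suite mindset, the following shows five core paths driven through a stand-in workflow, with pass rate and branch accuracy computed over the results. The case shape, branch names, and runner are all illustrative assumptions; Oracle's Workflow Evals have their own JSON format, and this only demonstrates the discipline of asserting routing, not just completion.

```javascript
// Illustrative only: runEvals, the case shape and branch names are hypothetical.
function runEvals(cases, executeWorkflow) {
  let passed = 0;
  let branchCorrect = 0;
  for (const c of cases) {
    const result = executeWorkflow(c.input);
    const branchOk = result.branch === c.expectedBranch;
    if (branchOk) branchCorrect += 1;
    if (branchOk && result.status === c.expectedStatus) passed += 1;
  }
  return {
    passRate: passed / cases.length,
    branchAccuracy: branchCorrect / cases.length,
  };
}

// Five core paths: happy path, common exceptions, and bad or missing input.
const cases = [
  { input: { amount: 100 }, expectedBranch: "AUTO_APPROVE", expectedStatus: "COMPLETED" },
  { input: { amount: 50000 }, expectedBranch: "ESCALATE", expectedStatus: "COMPLETED" },
  { input: { amount: -5 }, expectedBranch: "INVALID_INPUT", expectedStatus: "COMPLETED" },
  { input: {}, expectedBranch: "INVALID_INPUT", expectedStatus: "COMPLETED" },
  { input: { amount: 9999 }, expectedBranch: "REVIEW", expectedStatus: "COMPLETED" },
];

// A deterministic stand-in for the workflow under test.
function fakeWorkflow(input) {
  if (typeof input.amount !== "number" || input.amount <= 0)
    return { branch: "INVALID_INPUT", status: "COMPLETED" };
  if (input.amount >= 10000) return { branch: "ESCALATE", status: "COMPLETED" };
  if (input.amount >= 1000) return { branch: "REVIEW", status: "COMPLETED" };
  return { branch: "AUTO_APPROVE", status: "COMPLETED" };
}

const report = runEvals(cases, fakeWorkflow);
```

The key design choice is that every case asserts the branch taken, not just the terminal status, which is exactly what catches the subtle routing regressions that prompt or policy changes tend to introduce.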

For anyone building more complex workflows, the full context variable reference is well worth bookmarking. In practice, a small set of variables tends to do a lot of the heavy lifting, such as $context.$nodes.<nodecode>.$status to check whether a preceding node succeeded or failed, and $context.$nodes.<human_node_code>.$actionPerformed to capture whether a Human Approval step resulted in APPROVE, REJECT or REQUEST_CHANGES. You can also use $context.$nodes.<human_node_code>.$feedbackReceived to pick up any comments provided by the approver, and $context.$workflow.$traceId to generate idempotency keys or include trace references in error notifications. For conversational workflows, $context.$system.$chatHistory is particularly useful, as it exposes the full session history and allows the agent to reason about what has already been discussed.
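To show those variables in use, here is a mock of the `$context` shape with the checks you would typically express in If Condition nodes written out as plain JavaScript. The node codes and values are invented for illustration; the variable paths themselves are the ones referenced above, and the real object is supplied by the platform at runtime.

```javascript
// Illustrative only: a hand-built mock of the runtime $context object.
const $context = {
  $workflow: { $traceId: "a1b2c3d4" },
  $nodes: {
    READ_INVOICE: { $status: "SUCCEEDED" },
    MANAGER_APPROVAL: {
      $actionPerformed: "REQUEST_CHANGES",
      $feedbackReceived: "Please attach the revised quote.",
    },
  },
};

// Did the preceding Business Object read succeed?
const readOk = $context.$nodes.READ_INVOICE.$status === "SUCCEEDED";

// Which action did the approver take, and do we need a rework branch?
const action = $context.$nodes.MANAGER_APPROVAL.$actionPerformed;
const needsRework = action === "REQUEST_CHANGES";

// The trace ID doubles as a building block for idempotency keys and as a
// reference to include in error notifications.
const errorRef = `ERR-${$context.$workflow.$traceId}`;
```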

The 26A roadmap also includes several upcoming capabilities that will significantly extend what is possible in the near term. Support for the Model Context Protocol, or MCP, means Workflow Agents will be able to invoke tools exposed by MCP servers, broadening the integration landscape well beyond traditional REST APIs. The Agent Studio Help Assistant, an AI‑driven guide embedded directly within the studio, should also make agent design far more accessible, particularly for practitioners who are new to the tooling. Alongside this, multi‑modal enhancements, including end‑user Q&A over images and documents uploaded in chat and semantic search across non‑text assets, open up an entirely new set of document understanding and reasoning use cases.

Looking a little further ahead, the roadmap includes capabilities such as breakpoint‑style debugging, automated prompt engineering, multi‑user development environments, and a Bring Your Own LLM option, alongside additional interaction channels including WhatsApp, SMS and telephony. Taken together, these signal a sustained level of investment in the platform and a clear focus on making AI Agent Studio more powerful, more accessible, and more suitable for enterprise‑scale use. The overall direction is a positive one, and it is clear that Oracle is building towards a mature, long‑term agent platform rather than a short‑term experiment.

The partner training sessions that informed this post covered a lot of practical ground, and I genuinely believe they will save teams a significant amount of time as they start building in earnest. If you are already exploring AI Agent Studio and would like to talk through any of these patterns in more detail, I would be very happy to continue the conversation. And if you have not yet read the earlier posts in this series, it is worth starting at the beginning with the overview of how Workflow Agents are structured, which sets the context for everything covered here.


Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 3

In the first two blogs, I looked at how AI Agent Studio connects to the wider enterprise landscape and how agents are triggered and engaged, whether by systems, schedules or users. In this third part, I want to step back slightly and focus on what happens inside the agent itself, specifically how workflows are structured, how context is managed, and how you start designing for reliability rather than experimentation. This is the point where agent design shifts from “can we make it work?” to “can we trust it to run consistently in production?”, and the 26A capabilities give you far more control here than many people realise. To check out the previous blog, please click here.

The Wait node, which is being introduced as part of the 26B release, addresses a long‑standing gap in workflow design, where there was no clean way for a workflow to pause and resume later without either completing immediately or blocking indefinitely. When a Wait node is reached, the workflow moves straight into a Waiting state and pauses execution for a configured period of time, up to a maximum of 60 minutes. Once that wait period expires, the workflow can optionally loop back to an earlier point before continuing, allowing it to re‑evaluate conditions or check for updates. This looping behaviour is controlled through two simple settings: the Loop Back Node, which defines where execution returns to, and Maximum Iterations, which limits how many times the workflow can loop before it continues forward regardless.

In practice, this enables a clean polling pattern that is otherwise difficult to model. For example, imagine a workflow that creates a receipt request in Fusion and then needs to confirm that the receipt has been posted before it can move on. By using a Wait node configured for five minutes and looping back to a Business Object read node up to ten times, the workflow effectively gives itself a 50‑minute window to detect the receipt posting automatically before either continuing or escalating. During each wait cycle, the node outputs ORA_USER_INPUT_REQUIRED, and once all iterations are exhausted it returns WAIT_TIME_EXPIRED_AND_MAX_ITERATIONS_REACHED, both of which can be evaluated in downstream If Condition nodes to route the flow appropriately.
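The arithmetic and routing behind that pattern can be sketched in a few lines. The helper names are assumptions; the two output codes are the ones documented above, and the window calculation is simply the wait interval multiplied by the iteration cap.

```javascript
// Illustrative only: helper names are hypothetical; the output codes are the
// Wait node's documented values.

// Total polling window = wait interval x maximum iterations.
function pollingWindowMinutes(waitMinutes, maxIterations) {
  return waitMinutes * maxIterations;
}

// How a downstream If Condition might route on the Wait node's output.
function routeWaitOutput(output, receiptPosted) {
  if (receiptPosted) return "CONTINUE";
  if (output === "WAIT_TIME_EXPIRED_AND_MAX_ITERATIONS_REACHED") return "ESCALATE";
  if (output === "ORA_USER_INPUT_REQUIRED") return "KEEP_WAITING";
  return "ERROR";
}

const windowMins = pollingWindowMinutes(5, 10); // the 50-minute window above
const decision = routeWaitOutput(
  "WAIT_TIME_EXPIRED_AND_MAX_ITERATIONS_REACHED",
  false
);
```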

The Code node is one of the most powerful building blocks in a Workflow Agent, and also one of the most commonly underestimated. It executes JavaScript and returns a single value, whether that is an array, boolean, number, object or string. Its real value lies in handling the deterministic work that you should never push into an LLM node, such as data normalisation, threshold calculations, schema validation, array filtering and payload shaping. Used well, it provides a clean separation between predictable logic and probabilistic reasoning, which is a key ingredient in building workflows that behave consistently and are easier to trust in production.

There are a few important constraints to be aware of when designing logic for the Code node. Execution is limited to five seconds, with an upper limit of 100,000 statement executions, and functions cannot be defined within the code, which means recursion is not supported. Most built‑in JavaScript methods are available, but there is no external access, so no REST calls, file system operations, console logging or library imports. The code can read from $context, $currentItem and $currentItemIndex, but it cannot modify the $context object directly. Instead, it simply returns a value, and that returned output is the sole result of the node.

Some of the most effective patterns I’ve seen make particularly good use of the Code node for this kind of deterministic work. Common examples include normalising inconsistent date strings and currency values into canonical formats before passing them to a Business Object write node, or calculating variance percentages for three‑way match validation so that an If Condition node receives a simple boolean rather than needing to express complex arithmetic. Other strong patterns include generating idempotency keys using a combination of $context.$workflow.$traceId and object identifiers to prevent duplicate writes during retries, and filtering arrays returned from Business Object reads so that only active or primary records are passed into a For Loop for further processing.
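Pulling several of those patterns together, here is what a Code node body might look like. It is deliberately straight-line code with no function definitions, reflecting the constraint above, and the input values, tolerance threshold, and business keys are illustrative assumptions. The trace ID is shown as a literal; in the real node it would come from `$context.$workflow.$traceId`.

```javascript
// Illustrative Code node body: straight-line, deterministic, no functions.
// Input values, threshold and keys are assumptions for the example.

const invoice = { amount: "1,250.50", poAmount: 1200, receiptAmount: 1200 };

// Normalise a formatted currency string into a number.
const amount = Number(invoice.amount.replace(/,/g, ""));

// Three-way match variance as a percentage of the PO amount.
const variancePct = Math.abs((amount - invoice.poAmount) / invoice.poAmount) * 100;

// A simple boolean for the downstream If Condition node, instead of making
// that node express the arithmetic itself.
const withinTolerance = variancePct <= 5;

// Idempotency key combining the workflow trace ID with the business key, so
// a retried write targets the same record rather than creating a duplicate.
const traceId = "a1b2c3d4"; // in the node: $context.$workflow.$traceId
const idempotencyKey = traceId + ":" + "INV-10023";

// The Code node returns exactly one value, so package everything into one
// object; in the node itself this would end with: return result;
const result = { amount, variancePct, withinTolerance, idempotencyKey };
```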

For workflows that are triggered through the AI chat interface, 26A also introduced support for file uploads during conversations with an agent, allowing users to attach up to five files with a combined size of 50 MB. A wide range of formats is supported, including PDF, DOCX, XLSX, PPTX, PNG, JPEG, HTML, Markdown, JSON, XML, CSV and ZIP. To work with these attachments inside a Workflow Agent, 26A required the delivered MultiFileProcessor tool to be added to an agent and that agent then included within the main workflow. This capability significantly expands what chat‑driven workflows can handle, particularly when dealing with documents, structured data and supporting evidence provided directly by the user.

In 26B, this has been simplified significantly. Rather than introducing a separate agent, you can now add a Tool node directly into your Workflow Agent and select Chat Attachments Reader as the tool type. This keeps the workflow much cleaner and removes an unnecessary orchestration step. The tool reads the files uploaded in the current chat session and exposes the extracted content directly to downstream nodes, making it easier to act on user‑provided documents without additional plumbing or indirection.

Support is also in place for third‑party file storage, allowing users to upload files directly from Google Drive, Dropbox or Microsoft OneDrive, provided those credentials are configured under the Chat Experience tab in Credentials. Enabling this involves registering an OAuth application with the relevant provider, obtaining the client credentials, configuring the account in Credentials, and then switching on the option to allow users to upload files from connected cloud storage accounts on the agent’s Chat Experience tab. Once configured, this gives users a seamless way to bring external documents into agent‑driven workflows without needing to download and re‑upload files manually.

This third blog has focused on what really makes Workflow Agents robust in practice, from pausing and polling patterns, through deterministic logic in Code nodes, to handling documents and attachments cleanly inside workflows. These are the building blocks that move agents beyond experimentation and into something you can rely on day to day. In the final post in this four‑part series, I’ll bring everything together and look at the remaining 26A and 26B capabilities that round out the platform, focusing on how they support governance, scale and long‑term operational confidence when running AI agents in production.


Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 2

In the first blog, I focused on how AI Agent Studio connects to the wider enterprise landscape, but once those connections are in place, the next question is how and when agents are actually set in motion. I touched briefly on triggers in an earlier post, but the depth available here really deserves a closer look. In AI Agent Studio, a published Workflow Agent can be kicked off in three distinct ways: via a webhook, through email, or on a schedule. Each option supports very different use cases, from event‑driven automation to time‑based controls, and understanding how to use them effectively is key to building agents that fit naturally into day‑to‑day operations rather than feeling bolted on.

The webhook trigger is the mechanism behind the invokeAsync API call discussed earlier, but it also supports a more flexible and powerful pattern. When configuring a webhook trigger, you can define named input variables, which are passed into the REST call as part of the parameters object. Within the workflow, those values are exposed via $context.$triggers.REST.$input.<InputName>, allowing you to build parameterised workflows that adapt their behaviour based on what the calling system provides. This is particularly useful when you want a single workflow to handle multiple variations of a process, with the external system supplying the context that determines how the agent responds.
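As a sketch of both sides of that contract, here is what a calling system's payload and the corresponding in-workflow reads might look like. The input names and the exact payload shape are assumptions for illustration; the `parameters` object and the `$context.$triggers.REST.$input` path are the pattern described above.

```javascript
// Illustrative only: input names and payload shape are assumptions; the
// runtime provides the real $context object.

// What a calling system might send to the webhook trigger: named input
// variables inside the parameters object.
const requestBody = {
  parameters: {
    OrderNumber: "SO-48211",
    Priority: "HIGH",
  },
};

// Inside the workflow, those values surface as trigger inputs (mocked here).
const $context = {
  $triggers: { REST: { $input: requestBody.parameters } },
};

const orderNumber = $context.$triggers.REST.$input.OrderNumber;
const isUrgent = $context.$triggers.REST.$input.Priority === "HIGH";
```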

The email trigger is one I find particularly practical. You configure a Google or Microsoft email account under Credentials, set the account type to Inbound, and from that point on, any new email arriving in the inbox automatically kicks off the workflow. The email body, sender address, subject, headers and even attachment content are all exposed as context variables, such as $context.$triggers.EMAIL.$input.content, $context.$triggers.EMAIL.$input.fromAddress, and processed attachment text via $context.$triggers.EMAIL.$input.attachments[0].context. This makes document ingestion workflows genuinely straightforward to build. For example, a supplier can email a quote to a monitored inbox, the email trigger fires, a Document Processor node extracts the line items, and the workflow creates a purchase requisition in Fusion, with no human involvement unless an exception is identified.

The schedule trigger supports two distinct patterns, depending on how you want your workflow to run. Interval scheduling fires on a repeating, time‑based cadence, configured in seconds, minutes, hours or days from a defined anchor point, while recurrence scheduling uses more familiar calendar‑based patterns, either one‑off or repeating, such as weekly on specific days. One practical point to be aware of is that the user creating a scheduled workflow must be assigned the FAI Batch Job Manager Duty role (ORA_DR_FAI_BATCH_JOB_MANAGER_DUTY) for the scheduling job to be created successfully. It is an easy detail to miss during initial setup, but one that is worth flagging early to your security or roles team to avoid unnecessary delays.

The 26A release also introduced native channel integrations for both Microsoft Teams and Slack, and I expect these to become the primary way many organisations interact with AI agents, rather than relying on the embedded Fusion chat widget. At a high level, the Microsoft Teams setup involves configuring the channel under Credentials, supplying the Teams bot or app details, generating and downloading the app manifest, and then uploading it to Teams as a custom application. Once this is in place, users can discover and select available agents directly within Teams and interact with them in exactly the same way they would through the native Fusion chat experience, but in a collaboration tool they already use every day.

One final point worth calling out is that a new duty role has been introduced to group all channel‑related permissions for both Microsoft Teams and Slack. This role includes permissions such as ChannelManifest and ExternalChatCorrelation, and it is required for any user who needs to configure channel integrations or interact with agents through Teams or Slack. As with any new security object, it is worth factoring this into your role review and security planning early, so it does not become a blocker when 26A goes live in your environment.

This blog is the second in a series of four exploring the latest capabilities in Oracle AI Agent Studio introduced in 26A. In this part, I’ve focused on how Workflow Agents are triggered and how those triggers shape real‑world usage, from event‑driven integrations through to scheduled and collaborative interactions. In the next post, I’ll move on to another key area of the platform, building on these foundations and looking at how the newer features work together to support robust, production‑ready agent solutions.


Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 1

I’ve written quite a bit recently about Oracle AI Agent Studio and what it can do at a high level, and those posts have led to some really valuable conversations. A question that keeps coming up, though, is a very practical one: “This all sounds great, but how does it actually fit into the rest of my technology landscape?” Closely followed by, “How do I build something that’s reliable and production‑ready, rather than just impressive in a demo?” This post is my attempt to answer both, drawing on the training content from the AI World partner sessions, which goes into a level of detail that genuinely changes how you think about building with AI Agent Studio, particularly when it comes to integration, robustness and real‑world use. I should say up front that this series is a little more technical than my usual blogs, but it feels important to share, because this is the detail that turns AI from an interesting idea into something you can confidently run and scale.

One announcement I haven’t covered yet is the invokeAsync REST API, introduced in 26A, which allows external applications to programmatically call published Agent Teams in Fusion, and this is a significant step forward for organisations where Fusion sits alongside other enterprise systems. It means those systems can now trigger Fusion AI agents directly, without a user ever needing to open the chat interface. The process works in two stages: an external application sends a POST request to the invoke endpoint, passing either a user prompt or structured data, and because AI agents may need time to reason or retrieve information, the call returns a job ID rather than an immediate response.

That job ID is then used to poll a separate status endpoint, which returns the final output along with useful metadata such as status, conversation ID, trace ID and timing information. For development and testing, adding ?invocationMode=ADMIN to the status endpoint provides detailed debug output, including the full node execution trace, which is invaluable when you are building and troubleshooting.

Authentication follows the standard OAuth 2.0 bearer token approach used across Fusion APIs, so if you are already familiar with OCI IAM and IDCS, there is nothing fundamentally new to configure.
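The two-stage shape of that interaction can be sketched as follows. Note the heavy caveats: the endpoint paths, payload fields and response shapes below are placeholders I've invented for illustration, not the documented invokeAsync contract; only the POST-then-poll job-ID pattern and the ?invocationMode=ADMIN debug flag come from the description above.

```javascript
// Illustrative only: paths and field names are hypothetical stand-ins.

// Stage 1: build the invoke request for a published Agent Team.
function buildInvokeRequest(baseUrl, token, prompt) {
  return {
    url: `${baseUrl}/api/agents/invoke`, // hypothetical path
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`, // standard OAuth 2.0 bearer token
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ prompt }), // prompt or structured data
    },
  };
}

// Stage 2: build the status URL for polling the returned job ID; ADMIN mode
// adds the full node execution trace for debugging.
function buildStatusUrl(baseUrl, jobId, debug) {
  const suffix = debug ? "?invocationMode=ADMIN" : "";
  return `${baseUrl}/api/agents/status/${jobId}${suffix}`; // hypothetical path
}

const req = buildInvokeRequest(
  "https://fusion.example.com",
  "TOKEN",
  "Summarise open POs"
);
const statusUrl = buildStatusUrl("https://fusion.example.com", "job-123", true);
```

In a real integration you would issue the first request, read the job ID from the response, then poll the status URL with backoff until the job reports completion.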

For those looking for a more standardised integration approach, Oracle also introduced support for the A2A, or Agent‑to‑Agent, protocol in 26A. A2A allows a client to discover agents through a well‑known metadata endpoint, often referred to as the agent card, initiate a task by sending a message, and then poll for the result using the same job ID pattern. The agents search endpoint makes it possible to query for published agents by name, which is particularly useful when you are building orchestration layers that need to dynamically discover and delegate work to specialist agents. The agent card itself returns the agent’s capabilities and supported methods in a structured format, making it much easier to establish interoperability between Fusion agents and third‑party systems without relying on custom, point‑to‑point integrations.
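The discovery side of A2A is easiest to picture with a small sketch of an orchestration layer reading agent cards and delegating by capability. The card fields below follow the general A2A pattern (a well-known metadata document describing an agent's skills) but are illustrative, not Oracle's exact schema.

```python
# A hypothetical agent card, as returned from a well-known metadata endpoint.
AGENT_CARD = {
    "name": "ExpenseAuditAgent",
    "description": "Audits expense reports against policy",
    "capabilities": {"streaming": False},
    "skills": [{"id": "audit", "name": "Expense audit"}],
}

def supports_skill(card, skill_id):
    """Check whether an agent card advertises a given skill."""
    return any(s["id"] == skill_id for s in card.get("skills", []))

def pick_agent(cards, skill_id):
    """Orchestration-layer helper: delegate to the first published
    agent that advertises the required skill."""
    for card in cards:
        if supports_skill(card, skill_id):
            return card["name"]
    raise LookupError(f"no published agent offers skill {skill_id!r}")

print(pick_agent([AGENT_CARD], "audit"))  # -> ExpenseAuditAgent
```

Once an agent is selected, sending it a task and polling for the result follows the same job-ID pattern as invokeAsync, which is what makes the two approaches easy to wrap behind one client.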

The relationship between Fusion and the outside world does not just work in one direction. In 26A, Oracle introduced Data Source Applications within the Credentials tab of AI Agent Studio, allowing administrators to configure OAuth connections to non‑Fusion systems such as EPM Cloud, WMS, or any external application with a compatible identity provider. Once a Data Source Application is set up with the base URL, IDCS URL, client ID, scope and key pair, it becomes available as a selectable source when creating a Business Object using the resource type “Other Data Source Application”. This makes it much easier for AI agents to securely reach out to external systems, bringing data back into Fusion in a controlled and repeatable way rather than relying on custom or hard‑coded integrations.

At runtime, when a Business Object function within a workflow needs to call an external system, the platform simply uses the saved configuration to obtain an OAuth token and invoke the target API behind the scenes. From the workflow designer’s point of view, it behaves exactly like any other Business Object node, with no additional complexity to manage. This opens up some genuinely powerful patterns, such as an HCM Workflow Agent that validates a job requisition in Fusion, checks headcount or budget in EPM, and then writes the outcome back to a Fusion record, all within a single, governed automation that remains transparent, secure and easy to maintain.
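Behind the scenes this is a standard client-credentials flow: obtain a token from the identity provider, cache it, and refresh only when it expires. The sketch below shows that caching pattern with a stubbed token fetcher; the IDCS token endpoint, field names and expiry margin are illustrative assumptions, not the platform's actual internals.

```python
import time

class TokenCache:
    """Cache an OAuth access token and refresh it only when it expires."""
    def __init__(self, fetch_token):
        self._fetch = fetch_token
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or time.monotonic() >= self._expires_at:
            tok = self._fetch()  # e.g. a POST to the IDCS token endpoint
            self._token = tok["access_token"]
            # refresh slightly early so in-flight calls don't race expiry
            self._expires_at = time.monotonic() + tok["expires_in"] - 30
        return self._token

# Stubbed fetcher so the example runs without a real identity provider.
fetches = {"n": 0}
def fake_fetch():
    fetches["n"] += 1
    return {"access_token": f"tok-{fetches['n']}", "expires_in": 3600}

cache = TokenCache(fake_fetch)
assert cache.get() == cache.get()  # second call reuses the cached token
print(fetches["n"])  # -> 1
```

The point of the Data Source Application feature is that Fusion manages exactly this plumbing for you, so the workflow designer never sees tokens at all.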

This blog is the first in a short series of four, where I will walk through some of the latest and most important functionality in Oracle AI Agent Studio introduced in 26A. In this opening post, I’ve focused on how agents connect to the wider enterprise landscape, because integration, reliability and governance are what ultimately determine whether AI delivers real value. In the next blogs, I’ll build on this foundation and explore other new capabilities in more detail, looking at how they work in practice and how you can apply them confidently in real‑world scenarios.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

Oracle Enterprise Data Management Cloud: Where AI Readiness Actually Starts

If you’ve been working in Oracle Cloud for any length of time, you’ll know that the quality of your data governance determines the quality of everything downstream. Reports, forecasts, consolidations, AI outputs: they’re all only as good as the master data behind them. Following the EDM London Spotlight I attended, here’s where the product stands and where it’s heading.

What EDM is (and isn’t)

The naming matters. EDM is “Enterprise Data” Management, not Enterprise “Data Management.” It’s not a database tool or storage layer. It’s a governed system of reference for managing the data that describes your enterprise: hierarchies, dimensions, master data, reference data, data maps, taxonomies, reporting structures.

The problem it solves is one most Oracle customers will recognise. Without EDM, a chart of accounts change request starts with an email to IT, fans out to half a dozen application admins, and ends somewhere between a hierarchy mismatch in Planning and a data kickout in Financial Consolidation and Close (FCC). No audit trail, no systematic workflow, no single point of control. EDM replaces that with a governed, collaborative process where changes are requested, validated, approved, and propagated in a controlled sequence.

EDM versus Oracle DRM

DRM was built for waterfall implementation: gather requirements, build the model, deliver, train, repeat. EDM is designed for an agile, incremental approach. Turn it on, start using it, add rules and policies as they become evident. The traditional MDM big-bang approach has a well-documented failure rate, and EDM’s application-centric model sidesteps it. You start with one application, demonstrate value, and grow from there. New applications are onboarded incrementally without disrupting what’s already in place. For organisations still on DRM, the migration path is practical: users continue in DRM while it’s registered inside EDM as an application, and the legacy system is archived once the transition is complete.

Implementation design patterns

The London session was clear on which pattern works best. Nominate an originating application rather than using a master application as the front door to all changes. The originating application pattern keeps data, objects, and validations scoped to the application that owns them. Downstream applications subscribe to changes. This avoids the problem where a single undifferentiated data model makes it impossible to isolate which rules belong to which application. The master application pattern can work if you reduce it to canonical properties only, but it adds complexity and makes onboarding new applications more disruptive.

EDM and AI

Oracle’s AI approach in EDM operates at two levels.

Internal assistants work within EDM’s existing request and approval model. The Registration Assistant (25.12) generates application metadata and configuration artefacts from a sample data file, accelerating new application setup considerably. The Conversational Request Assistant lets users query master data in natural language, ask questions about existing requests, and generate bulk update actions, all within normal governance controls. Future internal assistants on the roadmap include a Data Profiling Assistant and a Data Matching Assistant using hybrid string, fuzzy, and semantic match rules.

Foundational data governance for AI is arguably the most consequential angle. When enterprise data objects lack clear intent in their descriptions, AI models infer incorrectly. Conflicting hierarchies across ERP, EPM, SCM, and HCM produce inconsistent answers. EDM’s governed descriptions, properties, hierarchies, and cross-application mappings become the ground truth that AI models rely on, reducing hallucination risk and making outputs auditable. If your organisation is investing in enterprise AI, getting master data governance right isn’t optional preparation: it’s what determines whether your AI outputs are trustworthy.

Multi-domain MDM and the roadmap

EDM was built domain-agnostic from day one, which is a genuine competitive differentiator. Competitors largely started in a single domain and expanded. EDM covers Party, Product, Location, Finance, and other domains natively. For Fusion ERP customers, CDM (Customer Data Management) remains the right starting point for mastering customer party records. EDM enriches those with alternate hierarchies, data maps, and cross-application alignment before distributing to EPM and Analytics. For heterogeneous environments with multiple Salesforce instances across regions, EDM can act as the central master customer data hub.

If your Oracle Cloud implementation hasn’t included an EDM conversation yet, it probably should. And if you’re planning an AI initiative on top of Oracle Fusion, EDM is where the trusted data foundation that makes AI outputs reliable actually gets built.

Unlocking Real‑World Value with Oracle’s AI Factory

AI is moving at an incredible pace, and it can feel hard to know where to start, or how to get real results quickly. That’s exactly where Oracle’s AI Factory comes in. It gives organisations a clear, practical way forward, with step‑by‑step guidance, useful tools, and expert support to help you adopt, scale, and make the most of AI across Oracle Cloud Infrastructure, Oracle Database, the Fusion Applications Suite, and industry solutions.

Instead of trying to figure everything out on your own, you can tap into prebuilt use cases, clear guidance, and tried‑and‑tested patterns that cut down on both effort and risk. The framework is all about driving real, measurable improvements, whether that means modernising legacy systems, unlocking real‑time insights, or making the most of the embedded AI already available across Oracle’s application suite.

Many organisations don’t realise just how deeply AI is already built into Oracle’s technology stack. Once they see what’s possible, the same questions usually come up: Where do we begin? What does success look like? How do we make sure we see real value? Oracle’s AI Factory is designed to answer exactly those questions and guide customers through every step of the journey. With support from Oracle’s experts and the AI Customer Excellence Centres, you can validate ideas, shape use cases, and run pilots with confidence, all while keeping innovation moving without taking on unnecessary risk.

The first tool in the new AI Factory toolkit is one I already use, and genuinely love. Oracle’s Cloud Success Navigator is an AI‑powered platform that supports customers through every stage of their Oracle Cloud journey, from best‑practice guidance to migration support, feature updates, and resources that help speed up adoption. It brings everything together in one place: useful tools, expert insights, and clear, structured guidance to help organisations get implementations right, stay up to date with new capabilities, and get more value from their investments. By cutting through the complexity that often comes with cloud decisions, it helps teams move faster, reduce risk, and take full advantage of Oracle’s latest innovations across infrastructure and applications.

Oracle’s AI Customer Excellence Center gives organisations a practical, low‑risk way to explore and validate advanced AI and multicloud architectures by working directly with Oracle experts. It’s essentially a global hub where customers can try out proof‑of‑concepts, refine complex designs, and make sure their AI solutions perform reliably at scale. With this expert support on hand, teams can move faster, de‑risk big ideas, and make well‑informed decisions as they modernise and adopt AI across cloud, hybrid, or multicloud environments. It’s all about giving customers the confidence to innovate, without the unnecessary guesswork.

Oracle’s Operate services help organisations run their Oracle technology more efficiently by giving them hands‑on expertise across infrastructure, databases, and applications, while also opening the door to AI‑powered capabilities that streamline day‑to‑day work and support better decision‑making. The idea is simple: free up your internal teams to focus on innovation and higher‑value work, while Oracle takes care of the smooth running, optimisation, and ongoing evolution of your cloud environment. It’s a practical way to stay agile, reduce operational risk, and get more from your Oracle investment.

Oracle’s Innovate services are all about helping organisations adopt new Oracle capabilities quickly and confidently. They combine expert guidance, proven best practices, and time‑saving automation to speed up transformation and make the whole process feel far more manageable. Because these services are closely connected with Oracle Product Development, and delivered in collaboration with partners, customers get the support they need to embrace AI‑powered features, modern cloud technologies, and continuous improvement across their Oracle estate. The goal is simple: reduce risk, shorten implementation timelines, and ensure organisations can unlock long‑term value and innovation from their Oracle investments, all while keeping pace with fast‑changing business demands.

I love that Oracle has pulled together a complete toolkit across Applications, Data, and Infrastructure to support organisations on their AI journey. I keep a close eye on what others in the market are doing, and time and again it feels like Oracle is ahead, not just in delivering the technology, but in giving customers the tools and guidance they need to get real value from it. I’m genuinely excited to hear more about AI Factory and see what comes next. If you’re interested in exploring how it could help your organisation, now’s the perfect time to start the conversation.

Why Embedded AI Is the Real Differentiator in Enterprise Automation

AI is everywhere at the moment; there’s really no getting away from it. But not all AI is created equal. Plenty of platforms promise co‑pilots, assistants and automation, yet very few can actually take action inside core business systems. That’s why Oracle’s approach with AI Agent Studio for Fusion Applications really stands out.

Most enterprise AI tools today act as co‑pilots: brilliant for drafting content or answering questions, but far less capable when it comes to genuine process automation. Oracle has taken a different approach. Its AI agents are embedded directly within Fusion Applications, giving them the power to:

  • Understand business data in real time
  • Make decisions with full context
  • Write back into transactional systems securely
  • Automate tasks end‑to‑end without relying on extra tools

This isn’t AI “bolted on”; it’s AI woven right into the core of the application. And yes, I realise I’m starting to sound like an Oracle salesperson, but the difference between Oracle and other providers is hard to ignore. The image below highlights just how significant that gap is, comparing what Oracle delivers with five other major service providers.

Across the market, vendors are running into the same familiar obstacles:

  1. Added AI costs. Whether it’s Copilot capacity, ServiceNow’s AI tiers or Salesforce Flex Credits, AI is frequently positioned as an optional extra rather than something included as standard.
  2. Co‑pilots, not agents. Microsoft, Salesforce, Workday and others are excellent at generating content and offering recommendations, but they seldom deliver true autonomous action within transactional systems.
  3. Fragmented platforms. Many providers depend on several data models, clouds and integration layers. Their AI often relies on separate analytics environments or copied data, which strips away workflow context and makes automation far more difficult.

You may have already come across Oracle’s One Platform, and this is where Oracle really pulls ahead of the pack. Oracle keeps things refreshingly straightforward: Fusion Applications, data and AI all sit within a single, unified ecosystem. That brings three key advantages:

  • Embedded intelligence — agents work directly inside the applications employees use every day.
  • A unified security and data model — consistent governance and safer, more reliable automation.
  • True write‑back — agents can update transactions natively, without middleware or separate AI clouds getting in the way.

So you might be wondering, “That all sounds impressive, but what does it actually mean for me?” At its core, agentic AI is about cutting down manual effort, improving accuracy and boosting operational efficiency. Oracle’s embedded approach ensures that automation is reliable, properly governed and able to scale as your business does. Crucially, it isn’t stitched together from multiple products. With Oracle, AI is built in, a fundamental part of the system that runs your business, not an optional extra layered on top. It marks a shift from AI that simply talks, to AI that genuinely delivers.

METRO – The Jewel in the Crown of Oracle AI Agent Studio

Have you heard about METRO? Introduced as part of release 25D, METRO is the embedded measurement and observability framework within Oracle AI Agent Studio: a brand-new set of monitoring and evaluation tools built right into the product.

Think of METRO as your control centre for AI agents in production. It gives you everything you need to keep an eye on accuracy, compliance, performance, costs, latency and even token usage. In short, it helps you make sure your agents are doing exactly what you expect, without any surprises.

The world of AI agents isn’t straightforward: responses can vary, workflows aren’t always predictable, and traditional checks don’t always work. That’s where METRO steps in. It’s designed for this new reality, offering smart ways to check semantic correctness, track detailed performance metrics and keep guardrails in place against risks like prompt injection. Whether you’re running complex multi-agent setups or more structured workflows, METRO makes it easier to stay on top of quality and reliability.

One of the best parts? The dashboard. It gives you a clear view of key stats like latency, error rates, token counts and correctness scores. And if you want to dig deeper, you can trace every single step an agent took, from LLM calls to tools used and outputs generated. This level of detail means you can troubleshoot quickly and optimise performance without the guesswork.

METRO also helps with testing, using the industry-standard LLM-as-a-Judge approach to score responses and provide feedback you can act on. Combine that with new features in 25D, like deterministic workflow agents and support for models such as GPT‑5 mini, and you’ve got flexibility to build agents for any scenario.

METRO isn’t just another feature; it’s a game-changer for anyone looking to manage AI agents with confidence. By combining deep insights, flexible evaluation tools and full traceability, it gives you everything you need to keep your AI ecosystem running smoothly. And with the added power of new models and workflow options in 25D, you’ve got all the tools to innovate faster and smarter. The future of AI governance starts here, and METRO makes it simple!

AI Use in HCM Cloud

Oracle has just introduced a new set of AI agents within its Fusion Cloud Applications, and they’re set to make a real difference for HR teams. These agents are designed to support HR leaders throughout the entire employee journey—from hiring to retirement—by streamlining processes, improving the employee experience, and freeing up time for more strategic work. Whether you’re in recruitment, management, or part of the wider workforce, these tools are here to make everyday tasks easier and more efficient.

Built into Oracle Fusion Applications and running on Oracle Cloud Infrastructure, the AI agents are secure, fast, and included at no extra cost. They’re designed to work seamlessly within your existing workflows, so there’s no need to learn a new system. From helping employees discover internal job opportunities to assisting recruiters with interview scheduling, these agents are all about making HR more intuitive and responsive.

When it comes to career development, Oracle’s new agents offer some genuinely helpful features. Managers can get support with setting and tracking team goals, while employees receive tailored advice on roles that match their skills and aspirations. There’s even a Learning Tutor agent to help employees get more out of training courses, and a Talent Advisor agent that helps managers plan promotions and career growth using real performance data.

Core HR tasks are also getting a boost. Employees can quickly get answers to questions about pay, leave, or benefits through the Employee Concierge agent, while managers have their own version to help with team-related queries. There’s also a Positions Assistant agent that helps HR leaders make smarter staffing decisions by analysing organisational data and policies.

Finally, the agents support the full employee lifecycle, including onboarding, development, and offboarding. The Succession Planning Advisor helps HR teams stay ahead of leadership gaps, and the Payroll Run Analyst keeps payroll running smoothly by flagging anomalies and explaining any issues. Altogether, these AI agents mark a big step forward in making HR more proactive, personalised, and data-driven.