Reinventing How Work Works: The Business Case for Oracle Fusion Agentic Applications

Oracle has been making a clear and increasingly consistent argument over the past few months: enterprise software has reached the limits of what a system of record can do. I’ve written before about the introduction of Agentic Applications at AI World London, and about the specific HCM applications that were announced alongside them. But there’s a broader story here that I haven’t fully explored yet, and it’s one that I think matters for every Fusion customer, not just those focused on HCM.

This post draws on the “Reinventing How Work Works” webinar, which stepped back from individual applications and made the architectural and commercial case for why the shift to agentic is happening, what it actually looks like in practice, and where Oracle is taking this next. If you’re trying to build internal momentum for agentic adoption, or if you’re trying to explain to a leadership team why this is different from previous AI announcements, this is the post to share.

The framing that Oracle used throughout this webinar is, I think, one of the clearest explanations of what has actually changed. Traditional enterprise systems, including Fusion as it has historically operated, are systems of record. They follow fixed rules, capture what happened, retrieve information when asked, and complete transactions. They document the business. What they don’t do is run the business.

Agentic Applications represent a move to what Oracle calls systems of outcomes. Rather than waiting for a person to interpret data and decide what to do next, a system of outcomes works toward objectives, makes things happen, solves problems, and achieves results. The underlying system of record doesn’t go away. The data, governance, approval hierarchies, role-based access control, and audit history are still there, and in Oracle’s case, they’re still the source of truth for every transaction. What changes is the layer operating on top of that foundation.

This architecture diagram is worth studying if you haven’t seen it. Agentic Applications sit in a new composable layer above the existing ERP, HCM, and CX transactional applications. That layer is powered by teams of AI agents coordinated through Oracle AI Agent Studio, drawing on the full enterprise data model, security model, and process history that already exists in Fusion. Beneath all of this, Oracle Cloud Infrastructure (OCI) provides the AI data platform, and a range of large language models (LLMs), including those from OpenAI, Cohere, Meta, Anthropic, xAI, and Google, are available depending on the task and preference.

Every Fusion Agentic Application is built around four core dynamic areas. Understanding these is useful when you’re evaluating a specific application or explaining the concept to stakeholders.

The first is the Advisor, which is the “Ask Oracle” conversational interface. This is where a user can ask natural language questions and get contextual, data-aware responses rather than navigating to a report. The second is the Information Summary, which provides an intelligent, prioritised view of what’s happening right now in that area of the business, surfaced automatically rather than requiring the user to run queries. The third is Priority Actions, a curated queue of recommended next steps that the agents have identified based on current conditions, risk signals, and business objectives. The fourth is Communications, which handles notifications, responses, and outbound actions within the appropriate governance boundaries.

These four areas appear consistently across all 22 applications, which is deliberate. Oracle’s position is that once a user understands the structure in one application, they can navigate any other agentic application without relearning the interface.

One of the most practically useful concepts introduced in this webinar is what Oracle calls the Autonomy Dial. It’s a spectrum with three positions, and it addresses one of the most common concerns I hear from customers and consultants: how much control do we give up?

At the “Human in the Loop” end, the agent assists and a person decides. The agent drafts, recommends, and prepares; the human reviews and approves. This builds trust, improves speed and consistency, and keeps people firmly in control. The business impact is described as immediate productivity gains.

In the middle is “Human in the Lead”, where the agent executes and a person monitors. The agent handles routine work and manages to policy; a person steps in for genuine exceptions. This scales output without adding headcount and frees teams for higher-value work. The impact here is scaled operations.

At the “Autonomous Execution” end, the agent drives and a person owns. End-to-end execution happens within policy, continuous real-time optimisation takes place, and human involvement is reserved for true exceptions. The impact is described as business transformation.

What I find compelling about this model is that it isn’t prescriptive. Oracle isn’t saying every organisation should start at one end or aim for the other. Each position on the dial represents a valid operating model depending on the process, the risk tolerance, and the maturity of the organisation. A payroll close process might comfortably sit at Human in the Lead. A workforce scheduling decision for a critical shift might warrant Human in the Loop until confidence is established. A high-volume procurement matching task might be a good candidate for Autonomous Execution relatively quickly.

My earlier posts covered the eight HCM applications in detail. The full announcement of 22 applications in releases 26B / 26C spans ERP/SCM, HCM, and CX, and it’s worth understanding the breadth of this, because it signals how Oracle is positioning agentic across the entire Fusion suite rather than as an HCM-specific capability.

On the ERP and SCM side, the applications include Design-to-Source Workspace, Product Readiness Workspace, Production Shift Operations Workspace, Sales Order Command Centre, Batch Process Manufacturing Workspace, Logistics Execution Command Centre, Maintenance Operations Workspace, Warehouse Operations Workspace, Cost Accounting Close Workspace, Sourcing Command Centre, Collectors Workspace, and Security Command Centre.

The Design-to-Source Workspace is a useful example of the transformation logic. Previously, product design and bill of materials work happened in separate systems. Sourcing relied on items entered manually. Negotiation delays accumulated when information was missing or unresolved. With the agentic application, product specifications translate automatically into qualified supplier lists, bills of materials are generated directly from CAD files, at-risk negotiations are flagged automatically, and bids are evaluated across cost, lead time, quality, and risk in a single view. The outcome is faster time to market and improved sourcing cycle times.

On the CX side, three applications have been announced: Cross-Sell Program Workspace, Contract Compliance Workspace, and Sales Command Centre. For CX teams, the Sales Command Centre in particular brings together the kind of deal health monitoring, risk flagging, and next-step recommendation that previously required significant manual analysis across multiple reports.

I’ve written in detail about Oracle AI Agent Studio in previous posts, but the webinar highlighted several new capabilities that are worth calling out specifically, because some of them genuinely change what’s possible for teams building custom agentic applications.

The most significant new addition is the Agentic App Builder, released in 26C. This is what Oracle describes as a “no-code agentic brain”: you describe your objective in natural language, the system explains and builds the workflow, generates agents and the underlying code automatically, and allows you to diagnose and fix issues in real time. In the demo, a user types a description of a sales opportunity health and risk management app, and within moments a structured agentic application is assembled from reusable agents, with a Deal Summary Agent, a Risk Agent, a Customer Insights Agent, and a Process Agent already in place and connected. It’s a significant step forward from the existing builder experience.

Alongside this, several other capabilities have been marked as new in the current release: Workflow Orchestration, Content Intelligence, Contextual Memory, Multi-Modal support, an Agent ROI Dashboard, and enhanced Security, Auditability, and Governance controls. Contextual Memory in particular is worth paying attention to, because it allows agents to retain information across interactions, which is what enables genuinely personalised, continuous support rather than stateless responses to each individual query.

The studio now also supports full interoperability through MCP (Model Context Protocol) and A2A (Agent-to-Agent) protocols, which means agents built in Fusion can exchange context with agents or tools running outside the Fusion estate, provided the appropriate governance controls are in place.

One thing the webinar made very clear is that Oracle isn’t building this alone. The Fusion AI ecosystem now includes 73,400 certified builders, 10,000 developers actively building agents, and over 100 pre-built agent templates in the AI Agent Marketplace, which is now open to all partners for submissions. Open standard support includes native MCP integration across connectors and an agent-to-agent registry within Oracle AI Agent Studio itself.

For customers, this matters because it means the pool of available agents and expertise is growing rapidly. You don’t need to build everything from scratch, and you don’t need to rely solely on Oracle to extend the platform. The open partner submission model for the marketplace is a meaningful shift, and it’s one that will accelerate the availability of domain-specific and industry-specific agents over the coming months.

The summary that Oracle closed with is a useful way to frame internal conversations: Fusion is moving from systems of record to systems of outcomes. Agentic Applications get work done. Oracle AI Agent Studio lets you build, deploy, and scale agents specific to your organisation. OCI AI Advantage runs it all securely at scale.

What I’d encourage any Fusion customer to take from this is that the window to start is now. The pricing model has already been simplified significantly (covered in my earlier post), the tooling to build and extend has matured substantially, and the evidence base from production deployments is solid. Starting with one application in one process area, positioned at Human in the Loop on the autonomy dial, is a low-risk, high-value entry point that builds organisational confidence while delivering measurable results.

If you’re thinking about where to start or how to make the case internally, I’m happy to talk it through. In the meantime, why not check out my earlier post on the HCM-specific Agentic Applications announced at AI World London? You can find it here. And if you missed the original announcement post covering the architecture, the maturity model, and the updated pricing, that’s a useful starting point too, and you can find it here.

Please note all screenshots are the property of Oracle and are used according to their Copyright Guidelines

A Day with Oracle: AI Success Navigator and Guided Learning Partner Enablement

Today I had the opportunity to attend and present at a partner enablement event hosted by the Oracle AI Success Navigator product team, focused on how partners like Version 1 can best use Oracle’s tooling to bring genuine, measurable value to our customers. The session brought together presentations, product demos, hands-on labs, and open discussion, covering Oracle Cloud Success Navigator and Oracle Guided Learning (OGL). It was a useful day, and I wanted to share some of the key takeaways while they’re fresh.

If you haven’t come across Cloud Success Navigator yet, it’s Oracle’s digital engagement platform, provided free to Oracle Fusion Cloud customers, designed to help organisations design, implement, and accelerate their cloud and AI roadmaps. It sits at the centre of Oracle’s broader AI Factory offering, which Oracle launched as a bundled set of partner and customer services aimed at speeding up AI adoption.

At its core, Cloud Success Navigator gives customers a single place to discover new features, plan adoption, track key milestones, and access Oracle Modern Best Practice (OMBP) guidance. The sunburst visualisation is particularly useful: it surfaces relevant features based on your production profile, so your team isn’t wading through capabilities that don’t apply to your configuration. You can tag features across Now, Next, and Later columns, which gives a clean, structured view of your innovation roadmap.

A significant addition to the platform is AI Assist, which was made generally available in late 2025. AI Assist is a generative AI-enabled assistant embedded throughout Navigator. It goes beyond a standard chatbot: it provides tailored recommendations, surfaces relevant documentation, highlights release roadmap changes based on your context, and flags project milestone risks. For partners, the practical implication is that our customers now have a self-service layer of intelligent guidance that can accelerate feature discovery and planning without always needing to raise a support request or wait for a consultant touchpoint.

How should Partners be using Success Navigator? This was, for me, the most valuable part of the day. The Oracle product team was clear that Navigator is not just a tool for customers to log into independently. The expectation is that partners should be actively bringing Navigator into their delivery model, whether that’s during implementation, post go-live optimisation, or ongoing managed service.

In practice, that means a few things. During implementation, your partner should be walking you through Navigator as part of onboarding, not treating it as a nice-to-have that gets mentioned at the end of a project. Feature planning sessions are more productive when they’re anchored in Navigator’s release data and OMBP content, rather than relying on spreadsheets or static documentation that goes out of date.

Post go-live, Navigator becomes a continuous value tool. The AI Assist agents can help customer teams stay ahead of quarterly release content, plan for Redwood migration milestones, and identify AI features that fit their production profile. Partners who actively guide their customers through this put them in a much stronger position than those who leave customers to self-serve without direction.

One thing to note: Oracle has indicated that the platform continues to evolve, with enhancements planned around streamlined account management for customers with multiple accounts and improved programme management views. It’s worth keeping an eye on the in-application release announcements for Navigator itself.

The second major focus of the day was Oracle Guided Learning (OGL), Oracle’s digital adoption platform (DAP) built natively for Oracle Cloud applications. OGL delivers in-application guidance, directly overlaid onto the Oracle Fusion interface, so users get real-time, contextual help without having to leave the system or refer to separate documentation. The core capabilities OGL brings to a customer environment are worth spelling out clearly, because I still encounter organisations that underestimate what the platform can do.

Process guides provide step-by-step walkthroughs for complex transactions, walking a user through the exact steps required to complete a task within the application. Smart tips and beacons offer contextual pop-up hints and visual cues at key points in the UI. The Help Panel gives users access to self-service guidance and documentation from within the application. In-app messaging allows administrators to send announcements, policy updates, and maintenance communications directly to users as they work, rather than relying on email campaigns that often go unread. Analytics then close the loop: OGL captures how users are engaging with content, where they’re dropping off, and which features or processes need additional guidance investment.

What’s particularly relevant for customers right now is the AI integration within OGL. The OGL 26A release introduced generative AI capabilities into the content authoring experience: content developers can use an AI assistant within the Full Editor to generate and rephrase step text for process guides, smart tips, beacons, and messages. This significantly reduces the time needed to build and maintain a library of guides, which has historically been a barrier to adoption on smaller or resource-constrained engagements.

OGL also extends beyond Oracle applications. It can be deployed across third-party applications including Salesforce, ServiceNow, Microsoft SharePoint, and others, which is useful context for customers running a mixed application estate.

A thread running through both topics today was change management, and it’s one that I think partners sometimes treat as a soft add-on rather than a structural part of delivery. The reality is that both Navigator and OGL exist precisely because technology adoption is a change management problem as much as a technical one.

Navigator gives you the roadmap visibility and planning structure to keep customers engaged with what’s coming and why it matters. OGL gives you the in-application mechanism to reinforce new behaviours, communicate changes, and support users at the moment of need. Used together, they cover a significant portion of the adoption lifecycle: from feature discovery and prioritisation, through to in-system guidance and analytics-driven optimisation.

The enablement message from Oracle today was straightforward: partners who embed these tools into their delivery model are better placed to demonstrate continuous value to customers. Customers who have a structured adoption programme, supported by Navigator and OGL, tend to see higher feature utilisation and lower support overhead than those who treat go-live as the end of the engagement.

It was a practical and well-structured day. The Oracle AI Success Navigator product team clearly has a strong vision for how the platform should be used within the partner ecosystem, and the investment Oracle has made in AI Assist and the broader AI Factory infrastructure is evident. For those of us working in Oracle Fusion Cloud implementations and managed services, the message is clear: these tools are available, they’re free as part of the Oracle subscription, and using them well is increasingly a differentiator in how we position value to our customers.

If you’re currently working on an Oracle Fusion Cloud engagement and you haven’t had a detailed look at what Cloud Success Navigator and OGL can offer, now is a good time to start that conversation.


Oracle AI Success Navigator and OGL: A Partnership That’s Changing How We Adopt Oracle Fusion

Oracle has rebranded Oracle Cloud Success Navigator as Oracle AI Success Navigator, and while a name change might sound like a cosmetic exercise, what’s happening underneath is far more interesting. Oracle is actively strengthening the partnership between AI Success Navigator and Oracle Guided Learning (OGL), and for those of us who have long championed both products, this new direction is very exciting!

Oracle AI Success Navigator (formerly Oracle Cloud Success Navigator, or CSN) is Oracle’s platform for helping customers plan, implement, and continuously innovate with Oracle Cloud Applications. It’s included as part of your Oracle Cloud subscription, so if you’re not using it, you’re missing a trick to get more from your Fusion instance.

The AI Success Navigator platform gives you four key areas to work with: Latest Feature Innovation, a consolidated view of release readiness materials across your product pillars; Adoption Roadmaps, a personalised and prioritised feature backlog managed directly in the platform; Adoption Centres, theme-based content hubs covering topics like AI and Redwood; and AI Assist, an OCI Generative AI-powered chat interface that I’ll come back to in some detail.

AI Success Navigator, OGL, and MyLearn are all interconnected, with Oracle’s Customer Success Services sitting at the centre of all three. AI Success Navigator is the planning and intelligence layer; OGL is the point-of-need delivery mechanism inside the application. During implementation, OGL is primarily the concern of the project team and partner; post-go-live, it becomes relevant to all users. MyLearn remains the key mechanism for users to learn Oracle Fusion, so it belongs in the same conversation.

What’s changing is that these products are no longer operating in isolation. OGL content is now surfaced within AI Success Navigator in the Oracle Modern Best Practice (OMBP) area as job aids, and within Starter Configuration. AI Assist is also being increasingly trained on OGL best practices and project success indicators, meaning the recommendations it produces are grounded in what good OGL adoption actually looks like.

Are you aware that Success Navigator’s AI Assist can help produce OGL content? On a recent webinar the presenter asked AI Assist to produce a prioritised list of Recruiting 26B features ranked by end-user impact, with a recommendation on which should have an OGL strategy assigned. The output was a ranked list classifying features as high, medium, or low impact, with a clear rationale for each. Features like Career Coach Enhancements (Interview Management Agents) and the Redwood Experience changes to candidate data management were flagged as high impact, with specific reasoning around setup requirements and workflow changes for end users.

The next step was even more useful. Having identified that the Interview Management Agent feature needed OGL coverage, the presenter asked AI Assist to produce a sample OGL flow. The output was a structured, step-by-step guide covering navigation path, UI element locations, and accessibility notes. When the presenter asked for it in an Excel-ready format, AI Assist reformatted the output into a table with columns for Step Number, Step Title, Step Instruction, UI Element/Location, and Notes/Accessibility, ready for an OGL developer to pick up directly.

So what does this mean in practice? An OGL team no longer has to start from a blank page when a quarterly release drops. AI Success Navigator can triage features, identify which ones need OGL attention, and produce a first-draft flow that a developer can then validate and publish. That’s a material reduction in the time between a feature dropping and users having contextual guidance in the application.

One thing to note: AI-generated flows still need validation against the actual application UI and tailoring to your specific user roles and configuration. The AI is a starting point, not a finished product. But it’s a very good starting point.

The webinar also covered the Testing Agent, which I think gets overlooked. It lets you create test cases from scratch using AI, upload existing test scripts for conversion, and refine them through AI Assist. The connection to OGL is practical: well-structured test cases describe real user workflows, and those workflows are exactly the raw material you need to build accurate OGL guides. If your testing and OGL content creation are happening in silos today, AI Success Navigator gives you a way to bring them closer together.

I’ve always felt that AI Success Navigator and OGL were solving related problems without talking to each other enough. What Oracle is doing now is starting to close that gap, and it’s a direction I’m very happy about.

If you’re not already using Oracle AI Success Navigator and you have an Oracle Cloud subscription, start exploring it. If you’re an OGL practitioner, the AI Assist capability is worth your attention specifically. And if you want to understand how the two products can work together in your programme, now is a good time to start that conversation.


Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 4

Over the last three blogs, I’ve explored how AI Agent Studio connects to the wider enterprise, how agents are triggered and interacted with, and how workflows are designed to be reliable and production‑ready. In this final part of the series, I want to pull those threads together and focus on the capabilities that help agents scale safely and operate with confidence over time. This is where governance, control and operational discipline really come into play, and where the newer 26A and 26B features start to show how Oracle is shaping AI Agent Studio for long‑term, enterprise use rather than short‑lived experimentation.

Choosing the right document or memory node is an area where I see a lot of confusion in conversations with clients, so it is worth being very clear about what each one is designed to do. The Document Processor node is intended for runtime documents, attachments that arrive as part of a specific workflow execution, such as a supplier quote received by email, an invoice uploaded through chat, or a UCM attachment linked to a Fusion business object. Its job is to retrieve the file, extract the text, and pass that content on to the next node in the workflow. It is not designed for querying a stable or long‑lived corpus of documents, such as policy or reference material that you want to reuse and search repeatedly over time.

The RAG Document Tool node is designed for exactly that stable, reusable collection of information. You curate a set of documents within an Oracle AI Agent Studio Document Tool, move them through the lifecycle from Ready to Publish to Published, and the RAG node then performs semantic retrieval against that content to ground downstream LLM reasoning in your own policies, playbooks or manuals. To get the best results, it is important to use specific queries with clear discriminators such as module, process area, country or version, which helps improve retrieval precision. It is also good practice to include an explicit “no results” fallback path in your workflow, rather than allowing the LLM to guess when retrieval confidence is low.
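To make the retrieval discipline above concrete, here is a minimal sketch of how a workflow might compose a discriminator-prefixed query and route on a “no results” outcome deterministically. The function names, result shape, and score field are illustrative assumptions, not the actual AI Agent Studio API.

```javascript
// Hypothetical sketch: a discriminating RAG query plus a deterministic
// fallback route. Names and shapes are assumptions for illustration.
function buildRagQuery(question, filters) {
  // Prefix the question with explicit discriminators (module, country,
  // version) so semantically similar but irrelevant documents rank lower.
  const discriminators = Object.entries(filters)
    .map(([key, value]) => `${key}=${value}`)
    .join(" ");
  return `[${discriminators}] ${question}`;
}

function routeRagResults(results, minScore) {
  // Treat an empty result set or low retrieval confidence as "no results"
  // and route to a deterministic fallback instead of letting the LLM guess.
  const confident = (results || []).filter(r => r.score >= minScore);
  return confident.length > 0
    ? { route: "GROUNDED_ANSWER", passages: confident }
    : { route: "NO_RESULTS_FALLBACK", passages: [] };
}
```

The key design point is the second function: low-confidence retrieval is handled by branching logic, never left for the LLM to paper over.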

The Vector DB Reader and Writer nodes serve a different purpose again, providing durable semantic memory that persists across workflow runs. They are best used to store normalised, reusable knowledge units such as validated resolution summaries, previous exception details, or extracted entity representations. Entries should be kept short and semantically focused, enriched with meaningful metadata to support filtering, and assigned stable document IDs to avoid duplicates. Raw PII or permission‑restricted data should never be stored without a deliberate access control design. When reading from the vector store, metadata filters should always be applied, and low‑confidence matches should be treated the same as no result at all, routing the workflow to a deterministic fallback rather than continuing on uncertain ground.
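As a sketch of what a well-formed vector store entry might look like under those guidelines, the snippet below shapes a short, metadata-rich knowledge unit with a stable document ID. The field names are my own illustration; the actual Vector DB Writer node schema may differ.

```javascript
// Hypothetical shape of a normalised knowledge unit before it is written
// to the vector store. Field names are assumptions for illustration.
function makeKnowledgeUnit(caseId, summaryText, meta) {
  return {
    // Stable, deterministic ID so re-runs update rather than duplicate.
    documentId: `resolution-${meta.module}-${caseId}`.toLowerCase(),
    // Keep the body short and semantically focused: one resolution, one entry.
    text: summaryText.trim(),
    // Metadata supports filtered reads later (module, country, validated, ...).
    metadata: {
      module: meta.module,
      country: meta.country,
      validated: true
    }
  };
}
```

Note what is deliberately absent: no raw PII, no permission-restricted payloads, and nothing that a metadata-filtered read could not safely return.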

One theme that came through strongly in the partner training sessions, and one I think represents genuinely good discipline, is treating Workflow Agent testing as a first‑class concern rather than something bolted on at the end. Oracle’s evaluation framework for Workflow Agents, often referred to as Workflow Evals, is based on supplying structured JSON test inputs and asserting expected outputs. These evaluations are intended to be run as a regression suite whenever you change a prompt, adjust a node configuration, swap a tool, or update a policy, helping you catch unintended side effects early and keep agent behaviour stable as it evolves.

A good starting point is to define around five core paths through the workflow: the happy path, two or three of the most common exception scenarios, and at least one case that deals with missing or poor‑quality input data. From there, you should be tracking things like overall pass rate, branch accuracy, schema validity, and retry or escalation behaviour. The aim is not simply to prove that the workflow reaches an end state, but to make sure it routes correctly and predictably under every condition that genuinely matters in production.
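The eval structure described above can be sketched as follows. The case shape, branch names, and routing logic here are stand-ins I have invented for illustration; the real Workflow Evals schema will differ, but the pattern of structured inputs, expected outputs, and a tracked pass rate is the same.

```javascript
// Illustrative eval cases: happy path, an exception, and missing data.
// Field names and branch labels are assumptions, not the real schema.
const evalCases = [
  { name: "happy_path", input: { amount: 120, status: "MATCHED" },
    expected: { branch: "AUTO_APPROVE" } },
  { name: "over_threshold", input: { amount: 50000, status: "MATCHED" },
    expected: { branch: "HUMAN_APPROVAL" } },
  { name: "missing_data", input: { amount: null, status: null },
    expected: { branch: "DATA_EXCEPTION" } }
];

// A stand-in for the workflow routing logic under test.
function routeUnderTest(input) {
  if (input.amount == null || input.status == null) {
    return { branch: "DATA_EXCEPTION" };
  }
  return input.amount > 10000
    ? { branch: "HUMAN_APPROVAL" }
    : { branch: "AUTO_APPROVE" };
}

// Pass rate is the metric to track across every prompt or config change.
function passRate(cases, fn) {
  const passed = cases.filter(c => fn(c.input).branch === c.expected.branch).length;
  return passed / cases.length;
}
```

Run as a regression suite, a drop in this pass rate after a prompt tweak is exactly the early warning the evaluation framework is meant to provide.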

For anyone building more complex workflows, the full context variable reference is well worth bookmarking. In practice, a small set of variables tends to do a lot of the heavy lifting, such as $context.$nodes.<nodecode>.$status to check whether a preceding node succeeded or failed, and $context.$nodes.<human_node_code>.$actionPerformed to capture whether a Human Approval step resulted in APPROVE, REJECT or REQUEST_CHANGES. You can also use $context.$nodes.<human_node_code>.$feedbackReceived to pick up any comments provided by the approver, and $context.$workflow.$traceId to generate idempotency keys or include trace references in error notifications. For conversational workflows, $context.$system.$chatHistory is particularly useful, as it exposes the full session history and allows the agent to reason about what has already been discussed.
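To show how those resolved values typically get used, here is a small routing sketch in the style of a Code node. In a real workflow the arguments would be bound from expressions such as $context.$nodes.&lt;human_node_code&gt;.$actionPerformed; here they are plain function parameters, and the route labels are my own illustration.

```javascript
// Sketch of branching on resolved context values after a Human Approval
// step. Route names are illustrative; the context expressions that would
// feed these parameters are described in the surrounding text.
function routeAfterApproval(nodeStatus, actionPerformed, feedbackReceived, traceId) {
  // If the preceding node failed, notify with the trace reference attached.
  if (nodeStatus !== "SUCCESS") {
    return { route: "ERROR_NOTIFICATION", reference: traceId };
  }
  switch (actionPerformed) {
    case "APPROVE":
      return { route: "CONTINUE", reference: traceId };
    case "REQUEST_CHANGES":
      // Carry the approver's comments forward into the rework path.
      return { route: "REWORK", comments: feedbackReceived, reference: traceId };
    default: // REJECT, or anything unexpected, terminates deterministically.
      return { route: "TERMINATE", reference: traceId };
  }
}
```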

The 26A roadmap also includes several upcoming capabilities that will significantly extend what is possible in the near term. Support for the Model Context Protocol, or MCP, means Workflow Agents will be able to invoke tools exposed by MCP servers, broadening the integration landscape well beyond traditional REST APIs. The Agent Studio Help Assistant, an AI‑driven guide embedded directly within the studio, should also make agent design far more accessible, particularly for practitioners who are new to the tooling. Alongside this, multi‑modal enhancements, including end‑user Q&A over images and documents uploaded in chat and semantic search across non‑text assets, open up an entirely new set of document understanding and reasoning use cases.

Looking a little further ahead, the roadmap includes capabilities such as breakpoint‑style debugging, automated prompt engineering, multi‑user development environments, and a Bring Your Own LLM option, alongside additional interaction channels including WhatsApp, SMS and telephony. Taken together, these signal a sustained level of investment in the platform and a clear focus on making AI Agent Studio more powerful, more accessible, and more suitable for enterprise‑scale use. The overall direction is a positive one, and it is clear that Oracle is building towards a mature, long‑term agent platform rather than a short‑term experiment.

The partner training sessions that informed this post covered a lot of practical ground, and I genuinely believe they will save teams a significant amount of time as they start building in earnest. If you are already exploring AI Agent Studio and would like to talk through any of these patterns in more detail, I would be very happy to continue the conversation. And if you have not yet read the earlier posts in this series, it is worth starting at the beginning with the overview of how Workflow Agents are structured, which sets the context for everything covered here.


Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 3

In the first two blogs, I looked at how AI Agent Studio connects to the wider enterprise landscape and how agents are triggered and engaged, whether by systems, schedules or users. In this third part, I want to step back slightly and focus on what happens inside the agent itself, specifically how workflows are structured, how context is managed, and how you start designing for reliability rather than experimentation. This is the point where agent design shifts from “can we make it work?” to “can we trust it to run consistently in production?”, and the 26A capabilities give you far more control here than many people realise. To check out the previous blog, please click here.

The Wait node, which is being introduced as part of the 26B release, addresses a long‑standing gap in workflow design, where there was no clean way for a workflow to pause and resume later without either completing immediately or blocking indefinitely. When a Wait node is reached, the workflow moves straight into a Waiting state and pauses execution for a configured period of time, up to a maximum of 60 minutes. Once that wait period expires, the workflow can optionally loop back to an earlier point before continuing, allowing it to re‑evaluate conditions or check for updates. This looping behaviour is controlled through two simple settings: the Loop Back Node, which defines where execution returns to, and Maximum Iterations, which limits how many times the workflow can loop before it continues forward regardless.

In practice, this enables a clean polling pattern that is otherwise difficult to model. For example, imagine a workflow that creates a receipt request in Fusion and then needs to confirm that the receipt has been posted before it can move on. By using a Wait node configured for five minutes and looping back to a Business Object read node up to ten times, the workflow effectively gives itself a 50‑minute window to detect the receipt posting automatically before either continuing or escalating. During each wait cycle, the node outputs ORA_USER_INPUT_REQUIRED, and once all iterations are exhausted it returns WAIT_TIME_EXPIRED_AND_MAX_ITERATIONS_REACHED, both of which can be evaluated in downstream If Condition nodes to route the flow appropriately.
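The mechanics are easier to see written out. Below is a plain JavaScript sketch of the loop the Wait node effectively implements. The node itself is configured on the canvas rather than coded, the two status strings are the documented outputs, and everything else here (the stubbed receipt check, the "RECEIPT_POSTED" label) is illustrative.

```javascript
// Illustrative simulation of the Wait node's loop-back behaviour.
// checkReceipt stands in for the Business Object read node; this stub
// reports the receipt as posted on the third poll.
let pollCount = 0;
const checkReceipt = () => ++pollCount >= 3;

const maxIterations = 10; // the Maximum Iterations setting
let status = "ORA_USER_INPUT_REQUIRED";

for (let i = 0; i < maxIterations; i++) {
  if (checkReceipt()) {          // loop back to the read node
    status = "RECEIPT_POSTED";   // illustrative label, not an Oracle status
    break;
  }
  // in the real workflow, the configured 5-minute wait happens here
}
if (status !== "RECEIPT_POSTED") {
  status = "WAIT_TIME_EXPIRED_AND_MAX_ITERATIONS_REACHED";
}
console.log(status, "after", pollCount, "polls");
```

The key design point is that the loop bounds live in configuration (Loop Back Node and Maximum Iterations), so the 50‑minute window can be tuned without touching any logic.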

The Code node is one of the most powerful building blocks in a Workflow Agent, and also one of the most commonly underestimated. It executes JavaScript and returns a single value, whether that is an array, boolean, number, object or string. Its real value lies in handling the deterministic work that you should never push into an LLM node, such as data normalisation, threshold calculations, schema validation, array filtering and payload shaping. Used well, it provides a clean separation between predictable logic and probabilistic reasoning, which is a key ingredient in building workflows that behave consistently and are easier to trust in production.

There are a few important constraints to be aware of when designing logic for the Code node. Execution is limited to five seconds, with an upper limit of 100,000 statement executions, and functions cannot be defined within the code, which means recursion is not supported. Most built‑in JavaScript methods are available, but there is no external access, so no REST calls, file system operations, console logging or library imports. The code can read from $context, $currentItem and $currentItemIndex, but it cannot modify the $context object directly. Instead, it simply returns a value, and that returned output is the sole result of the node.
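Within those constraints, typical Code node logic ends up as a flat sequence of statements that builds towards a single value. This sketch normalises a currency string without defining any functions; the input literal stands in for a value that would really come from $context.

```javascript
// Flat, function-free logic of the kind the constraints allow:
// normalise a currency string like "£1,234.50" into a number.
// In a real Code node this input would be read from $context.
const raw = "£1,234.50";

// strip everything except digits, minus sign and decimal point
const cleaned = raw.replace(/[^0-9.\-]/g, "");
const amount = Number.parseFloat(cleaned);

// round to 2 decimal places deterministically
const normalised = Math.round(amount * 100) / 100;

// in a real Code node, this final value would be the node's returned output
console.log(normalised);
```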

Some of the most effective patterns I’ve seen make particularly good use of the Code node for this kind of deterministic work. Common examples include normalising inconsistent date strings and currency values into canonical formats before passing them to a Business Object write node, or calculating variance percentages for three‑way match validation so that an If Condition node receives a simple boolean rather than needing to express complex arithmetic. Other strong patterns include generating idempotency keys using a combination of $context.$workflow.$traceId and object identifiers to prevent duplicate writes during retries, and filtering arrays returned from Business Object reads so that only active or primary records are passed into a For Loop for further processing.
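Three of those patterns are easy to sketch in plain JavaScript. The $context fragment, record shapes and five per cent tolerance below are all illustrative, not Oracle's actual schema or any published threshold.

```javascript
// Illustrative $context fragment; real trace IDs and record shapes differ.
const $context = {
  $workflow: { $traceId: "wf-7f3a" },
  records: [
    { id: "A1", status: "ACTIVE", primary: true },
    { id: "A2", status: "INACTIVE", primary: false },
    { id: "A3", status: "ACTIVE", primary: false },
  ],
};

// Idempotency key: trace ID plus object identifier, so a retried
// write can be recognised and skipped rather than duplicated.
const idempotencyKey = `${$context.$workflow.$traceId}:${$context.records[0].id}`;

// Variance percentage for a three-way match, reduced to a simple
// boolean so the downstream If Condition node stays trivial.
const invoiceAmount = 1050;
const poAmount = 1000;
const variancePct = (Math.abs(invoiceAmount - poAmount) / poAmount) * 100;
const withinTolerance = variancePct <= 5; // hypothetical 5% tolerance

// Filter to active records only before handing off to a For Loop node.
const activeRecords = $context.records.filter(r => r.status === "ACTIVE");

console.log(idempotencyKey, withinTolerance, activeRecords.length);
```

In each case the Code node hands a downstream node something small and unambiguous (a key, a boolean, a filtered array) instead of asking an LLM node to reason about raw data.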

For workflows that are triggered through the AI chat interface, 26A also introduced support for file uploads during conversations with an agent, allowing users to attach up to five files with a combined size of 50 MB. A wide range of formats is supported, including PDF, DOCX, XLSX, PPTX, PNG, JPEG, HTML, Markdown, JSON, XML, CSV and ZIP. To work with these attachments inside a Workflow Agent, 26A required the delivered MultiFileProcessor tool to be added to an agent, with that agent then included within the main workflow. This capability significantly expands what chat‑driven workflows can handle, particularly when dealing with documents, structured data and supporting evidence provided directly by the user.

In 26B, this has been simplified significantly. Rather than introducing a separate agent, you can now add a Tool node directly into your Workflow Agent and select Chat Attachments Reader as the tool type. This keeps the workflow much cleaner and removes an unnecessary orchestration step. The tool reads the files uploaded in the current chat session and exposes the extracted content directly to downstream nodes, making it easier to act on user‑provided documents without additional plumbing or indirection.

Support is also in place for third‑party file storage, allowing users to upload files directly from Google Drive, Dropbox or Microsoft OneDrive, provided those credentials are configured under the Chat Experience tab in Credentials. Enabling this involves registering an OAuth application with the relevant provider, obtaining the client credentials, configuring the account in Credentials, and then switching on the option to allow users to upload files from connected cloud storage accounts on the agent’s Chat Experience tab. Once configured, this gives users a seamless way to bring external documents into agent‑driven workflows without needing to download and re‑upload files manually.

This third blog has focused on what really makes Workflow Agents robust in practice, from pausing and polling patterns, through deterministic logic in Code nodes, to handling documents and attachments cleanly inside workflows. These are the building blocks that move agents beyond experimentation and into something you can rely on day to day. In the final post in this four‑part series, I’ll bring everything together and look at the remaining 26A and 26B capabilities that round out the platform, focusing on how they support governance, scale and long‑term operational confidence when running AI agents in production.


Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 2

In the first blog, I focused on how AI Agent Studio connects to the wider enterprise landscape, but once those connections are in place, the next question is how and when agents are actually set in motion. I touched briefly on triggers in an earlier post, but the depth available here really deserves a closer look. In AI Agent Studio, a published Workflow Agent can be kicked off in three distinct ways: via a webhook, through email, or on a schedule. Each option supports very different use cases, from event‑driven automation to time‑based controls, and understanding how to use them effectively is key to building agents that fit naturally into day‑to‑day operations rather than feeling bolted on.

The webhook trigger is the mechanism behind the invokeAsync API call discussed earlier, but it also supports a more flexible and powerful pattern. When configuring a webhook trigger, you can define named input variables, which are passed into the REST call as part of the parameters object. Within the workflow, those values are exposed via $context.$triggers.REST.$input.<InputName>, allowing you to build parameterised workflows that adapt their behaviour based on what the calling system provides. This is particularly useful when you want a single workflow to handle multiple variations of a process, with the external system supplying the context that determines how the agent responds.
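A minimal sketch of both sides of that contract is below. The input names ("supplierId", "priority") and the exact payload shape are assumptions for illustration; only the $context.$triggers.REST.$input path comes from the documented pattern.

```javascript
// Hypothetical body an external system might POST to the webhook
// trigger; the named input variables sit inside the parameters object.
const requestBody = {
  parameters: {
    supplierId: "SUP-0042",
    priority: "HIGH",
  },
};

// Inside the workflow, the same values surface under
// $context.$triggers.REST.$input.<InputName>. Mocked here:
const $context = {
  $triggers: { REST: { $input: requestBody.parameters } },
};

const supplierId = $context.$triggers.REST.$input.supplierId;
const priority = $context.$triggers.REST.$input.priority;
console.log(supplierId, priority);
```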

The email trigger is one I find particularly practical. You configure a Google or Microsoft email account under Credentials, set the account type to Inbound, and from that point on, any new email arriving in the inbox automatically kicks off the workflow. The email body, sender address, subject, headers and even attachment content are all exposed as context variables, such as $context.$triggers.EMAIL.$input.content, $context.$triggers.EMAIL.$input.fromAddress, and processed attachment text via $context.$triggers.EMAIL.$input.attachments[0].context. This makes document ingestion workflows genuinely straightforward to build. For example, a supplier can email a quote to a monitored inbox, the email trigger fires, a Document Processor node extracts the line items, and the workflow creates a purchase requisition in Fusion, with no human involvement unless an exception is identified.

The schedule trigger supports two distinct patterns, depending on how you want your workflow to run. Interval scheduling fires on a repeating, time‑based cadence, configured in seconds, minutes, hours or days from a defined anchor point, while recurrence scheduling uses more familiar calendar‑based patterns, either one‑off or repeating, such as weekly on specific days. One practical point to be aware of is that the user creating a scheduled workflow must be assigned the FAI Batch Job Manager Duty role (ORA_DR_FAI_BATCH_JOB_MANAGER_DUTY) for the scheduling job to be created successfully. It is an easy detail to miss during initial setup, but one that is worth flagging early to your security or roles team to avoid unnecessary delays.

The 26A release also introduced native channel integrations for both Microsoft Teams and Slack, and I expect these to become the primary way many organisations interact with AI agents, rather than relying on the embedded Fusion chat widget. At a high level, the Microsoft Teams setup involves configuring the channel under Credentials, supplying the Teams bot or app details, generating and downloading the app manifest, and then uploading it to Teams as a custom application. Once this is in place, users can discover and select available agents directly within Teams and interact with them in exactly the same way they would through the native Fusion chat experience, but in a collaboration tool they already use every day.

One final point worth calling out is that a new duty role has been introduced to group all channel‑related permissions for both Microsoft Teams and Slack. This role includes permissions such as ChannelManifest and ExternalChatCorrelation, and it is required for any user who needs to configure channel integrations or interact with agents through Teams or Slack. As with any new security object, it is worth factoring this into your role review and security planning early, so it does not become a blocker when 26A goes live in your environment.

This blog is the second in a series of four exploring the latest capabilities in Oracle AI Agent Studio introduced in 26A. In this part, I’ve focused on how Workflow Agents are triggered and how those triggers shape real‑world usage, from event‑driven integrations through to scheduled and collaborative interactions. In the next post, I’ll move on to another key area of the platform, building on these foundations and looking at how the newer features work together to support robust, production‑ready agent solutions.


Going Deeper with Oracle AI Agent Studio: Connecting, Triggering, and Building with Confidence – Part 1

I’ve written quite a bit recently about Oracle AI Agent Studio and what it can do at a high level, and those posts have led to some really valuable conversations. A question that keeps coming up, though, is a very practical one: “This all sounds great, but how does it actually fit into the rest of my technology landscape?” Closely followed by, “How do I build something that’s reliable and production‑ready, rather than just impressive in a demo?” This post is my attempt to answer both, drawing on the training content from the AI World partner sessions, which goes into a level of detail that genuinely changes how you think about building with AI Agent Studio, particularly when it comes to integration, robustness and real‑world use. I should say up front that this series is a little more technical than my usual blogs, but it feels important to share, because this is the detail that turns AI from an interesting idea into something you can confidently run and scale.

One announcement I haven’t covered yet is the invokeAsync REST API, introduced in 26A, which allows external applications to programmatically call published Agent Teams in Fusion, and this is a significant step forward for organisations where Fusion sits alongside other enterprise systems. It means those systems can now trigger Fusion AI agents directly, without a user ever needing to open the chat interface. The process works in two stages: an external application sends a POST request to the invoke endpoint, passing either a user prompt or structured data, and because AI agents may need time to reason or retrieve information, the call returns a job ID rather than an immediate response. That job ID is then used to poll a separate status endpoint, which returns the final output along with useful metadata such as status, conversation ID, trace ID and timing information. For development and testing, adding ?invocationMode=ADMIN to the status endpoint provides detailed debug output, including the full node execution trace, which is invaluable when you are building and troubleshooting. Authentication follows the standard OAuth 2.0 bearer token approach used across Fusion APIs, so if you are already familiar with OCI IAM and IDCS, there is nothing fundamentally new to configure.
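The two‑stage shape of that call is worth seeing in code. This sketch uses a stubbed synchronous transport so the submit‑then‑poll logic is visible on its own; the endpoint paths, response fields and output text are all assumptions, and a real client would make authenticated HTTP calls with an OAuth 2.0 bearer token instead.

```javascript
// Stubbed transport standing in for authenticated REST calls; paths
// and response shapes here are illustrative, not Oracle's actual API.
let pollAttempts = 0;
const transport = (method, path) => {
  if (method === "POST" && path === "/invoke") {
    return { jobId: "job-123" };               // stage 1: job ID only
  }
  pollAttempts += 1;                           // stage 2: status polling
  return pollAttempts < 3
    ? { status: "RUNNING" }
    : { status: "COMPLETED", output: "Requisition 4711 created" };
};

// Stage 1: submit the request and capture the job ID.
const { jobId } = transport("POST", "/invoke"); // body would carry the prompt

// Stage 2: poll the status endpoint until the job completes.
let result = transport("GET", `/status/${jobId}`);
while (result.status !== "COMPLETED") {
  result = transport("GET", `/status/${jobId}`);
}
console.log(result.output);
```

In production you would add a delay and an upper bound on poll attempts, but the job‑ID handshake itself is exactly this simple.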

For those looking for a more standardised integration approach, Oracle also introduced support for the A2A, or Agent‑to‑Agent, protocol in 26A. A2A allows a client to discover agents through a well‑known metadata endpoint, often referred to as the agent card, initiate a task by sending a message, and then poll for the result using the same job ID pattern. The agents search endpoint makes it possible to query for published agents by name, which is particularly useful when you are building orchestration layers that need to dynamically discover and delegate work to specialist agents. The agent card itself returns the agent’s capabilities and supported methods in a structured format, making it much easier to establish interoperability between Fusion agents and third‑party systems without relying on custom, point‑to‑point integrations.
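The discovery side of A2A can be sketched like this. The agent card shape below is illustrative; the method names mirror the A2A protocol's message/send and tasks/get pattern, but the capability strings and agent name are made up.

```javascript
// Stubbed agent card, as an orchestrator might receive it from the
// well-known metadata endpoint. Shape is illustrative only.
const agentCard = {
  name: "InvoiceMatcher",
  capabilities: ["three-way-match", "exception-routing"],
  methods: ["message/send", "tasks/get"],
};

// An orchestration layer can filter discovered agents by capability
// before delegating a task to one of them.
const canMatch = agentCard.capabilities.includes("three-way-match");
console.log(agentCard.name, canMatch);
```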

The relationship between Fusion and the outside world does not just work in one direction. In 26A, Oracle introduced Data Source Applications within the Credentials tab of AI Agent Studio, allowing administrators to configure OAuth connections to non‑Fusion systems such as EPM Cloud, WMS, or any external application with a compatible identity provider. Once a Data Source Application is set up with the base URL, IDCS URL, client ID, scope and key pair, it becomes available as a selectable source when creating a Business Object using the resource type “Other Data Source Application”. This makes it much easier for AI agents to securely reach out to external systems, bringing data back into Fusion in a controlled and repeatable way rather than relying on custom or hard‑coded integrations.

At runtime, when a Business Object function within a workflow needs to call an external system, the platform simply uses the saved configuration to obtain an OAuth token and invoke the target API behind the scenes. From the workflow designer’s point of view, it behaves exactly like any other Business Object node, with no additional complexity to manage. This opens up some genuinely powerful patterns, such as an HCM Workflow Agent that validates a job requisition in Fusion, checks headcount or budget in EPM, and then writes the outcome back to a Fusion record, all within a single, governed automation that remains transparent, secure and easy to maintain.

This blog is the first in a short series of four, where I will walk through some of the latest and most important functionality in Oracle AI Agent Studio introduced in 26A. In this opening post, I’ve focused on how agents connect to the wider enterprise landscape, because integration, reliability and governance are what ultimately determine whether AI delivers real value. In the next blogs, I’ll build on this foundation and explore other new capabilities in more detail, looking at how they work in practice and how you can apply them confidently in real‑world scenarios.


Agentic Applications for HCM Cloud

At the AI World London HCM Partner Summit, Oracle unveiled 22 new Agentic Applications across the Fusion suite, including eight designed specifically for HCM Cloud. One of the standout additions is the Workforce Operations Command Centre, which brings scheduling, time, and absence management into one coordinated hub. It highlights real‑time risks, helps managers make confident coverage decisions, and streamlines day‑to‑day operations. During the demo, we saw a live priority queue flagging shift conflicts and timecard issues by severity, with simple one‑click options to approve, reassign, or review, making it far easier to stay ahead of workforce challenges.

Oracle has also introduced a series of new workspaces designed to streamline everyday manager and employee tasks. The Hiring Workspace for Store Managers brings candidate details, interview scheduling, and urgent hiring requests together to support faster decisions, while the Manager Concierge Workspace unifies compensation, performance, talent, and absence insights with simple, policy‑backed actions. The Team Learning Workspace helps managers stay ahead of compliance risks and focus on development priorities, and the Career Advancement Command Centre connects employees to suitable roles, required skills, and training. Alongside this, the My Help Workspace offers a clear view of open requests and relevant knowledge articles, and Contracts Intelligent Counsel, also known as Agentic Compliance, provides continuous, autonomous monitoring of contract terms and policy changes to reduce compliance overhead.

Oracle also unveiled Oracle Manager Edge, a new personal AI coach designed to give managers practical, data‑driven guidance directly within Touchpoints, with suggested actions seamlessly linked to Oracle Team Touchpoints. Although it isn’t an Agentic Application, it will be available through the AI Agent Studio once released, offering organisations an accessible way to bring personalised, context‑aware coaching into everyday management without additional complexity.


Oracle also confirmed six dedicated Payroll Agents designed to cut manual effort and improve payroll accuracy. The Payslip Analyst, already live in 25D, helps employees resolve payslip queries and has been shown to reduce inquiry costs by up to 70 per cent with a rapid ROI. The Compliance Update Agent (26C) converts legislative changes into proactive configuration updates, removing up to 90 per cent of the manual workload. The Court Order Processing Assistant (26A) fully automates garnishment intake, while the Tax Calculation Statement Agent (26C), currently specific to the US and California, explains the detailed tax logic behind each payroll run. The W‑4 Compliance Agent (26B) automates US tax‑form completion, and the Pay Run Agent (26C) provides real‑time summaries and flags exceptions, reducing manual review efforts by as much as 70 per cent. For UK and global payroll teams, the Payslip Analyst and Compliance Update Agent are the most relevant today, with the remaining agents focused on US‑specific requirements.

As Oracle continues to expand its portfolio of Agentic and AI‑driven capabilities, the direction is clear: more guidance, more automation, and less friction across everyday HR and payroll operations. For organisations already using Fusion, these new applications offer a practical way to improve decision‑making, strengthen compliance, and deliver a smoother experience for managers and employees alike. And with more innovation on the horizon, now is an ideal time to explore how these tools can support your roadmap and help your teams work smarter, not harder.


Under the Hood: How Oracle’s Workflow Agents Actually Work

After sharing my initial thoughts on the Oracle AI World announcements earlier this week, I’ve since taken a closer look at what sits behind the headlines. The announcements focused on what Oracle is delivering, but what really interests me now is the how. That is where things get genuinely exciting for those of us who will be hands-on, building and configuring these new capabilities.

One thing that really helped me make sense of Oracle’s approach was the clear distinction between workflow agents and hierarchical agents. They serve very different purposes, and treating them as interchangeable would quickly lead to the wrong outcomes. Workflow Agents follow policy‑bound orchestration with contextual reasoning and are designed for predictability, auditability and stable SLAs, making them ideal for things like payroll deductions, purchase requisitions or leave approvals where governance and consistency are essential. Hierarchical Agents work differently, using LLM‑led decomposition with specialist sub‑agents, which makes them a better fit for open‑ended problems with many possible paths where multi‑domain reasoning matters more than repeatability. Oracle has intentionally designed the two to complement each other, with Workflow Agents providing the structure by defining stages, approvals, retries and SLAs, while Hierarchical Agents take on the heavier analytical or generative work within specific steps. The result is a balanced model that preserves governance while still giving teams the flexibility to tackle more complex reasoning tasks.

Oracle has outlined seven composable design patterns for building Workflow Agents, each suited to a different type of process. Chaining uses sequential intelligence to pass enriched context from one step to the next, which works well for extract‑validate‑decide‑act processes. Parallel execution allows multiple branches to run at the same time and then consolidates their outputs into a single decision, making it a strong fit for compliance or risk scenarios. Switch flows use context‑aware decisioning to route work based on intent, profile, state and policy; for example, an employee updating deductions after a new baby can trigger both Benefits and Payroll updates automatically with no handoff. Iteration supports adaptive refinement by recalculating until constraints are met, which suits planning and scheduling tasks. Looping introduces self‑correction, such as regenerating and revalidating an invoice when OCR results do not match. RAG‑assisted Reasoning retrieves the right policy information before applying thresholds or routing logic. Finally, timer‑based execution triggers actions on a schedule, such as checking invoice status and notifying the accounts payable owner before an SLA is at risk.
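The chaining pattern in particular is easy to picture as a fold over steps, each one enriching a shared context before passing it on. This is purely illustrative; in AI Agent Studio the steps are nodes on the canvas, not functions, and the extract/validate/decide fields here are made up.

```javascript
// Each "step" enriches the context it receives, standing in for
// extract -> validate -> decide nodes in a chained workflow.
const steps = [
  ctx => ({ ...ctx, extracted: { amount: 1200 } }),
  ctx => ({ ...ctx, valid: ctx.extracted.amount > 0 }),
  ctx => ({ ...ctx, decision: ctx.valid ? "APPROVE" : "REJECT" }),
];

// Chaining: the output of one step becomes the input of the next.
const finalContext = steps.reduce((ctx, step) => step(ctx), { invoiceId: "INV-9" });

console.log(finalContext.decision);
```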

The Workflow Agent canvas in AI Agent Studio groups its building blocks into four areas that shape how an automation behaves. AI nodes include LLM, Agent, Workflow and the RAG Document Tool. Data nodes cover things like the Document Processor, Business Object Function, External REST, Tool and the Vector DB Reader or Writer. Logic nodes provide Code and Set Variables, while the Workflow Control nodes handle governance through Human Approval, If Condition, For Loop, While Loop, Switch, Run in Parallel, Wait and Return. At workflow level, the Triggers tab supports Webhook, Email and Schedule triggers, and the Error Handling section lets you notify recipients by email if a workflow reaches a permanent failure, using context expressions such as $context.$workflow.$traceId. For image-related tasks, the Vision LLM node is the correct choice, although it is classed as a premium tool and comes with associated pricing considerations.

METRO, Oracle’s monitoring layer for Measurement, Evaluation and Testing for Real‑time Observability, gives teams a clear view of what their Workflow Agents are doing across inbound emails, approvals and scheduled runs. From the 26C release, it will also surface AI Unit consumption, which becomes increasingly important as organisations scale their use of agents and need tighter visibility and cost control.

Pricing has been a major consideration for customers exploring AI Agents, and the new structure aims to simplify things through the introduction of AI Units, or AUs. Oracle is expected to publish the full details in April or May, but the core concept is that an AU costs roughly $0.01 and is calculated as: AU consumption = CEILING((Input Tokens + Output Tokens) / 10,000) × Action Value Factor. The Action Value Factor varies depending on the action type and the LLM tier being used. General actions such as Q&A, approvals and reasoning have a 0x factor on the Basic LLM, while Premium and Bring Your Own apply higher factors. Artifact creation and audio generation sit in higher tiers again, with video generation marked as coming soon. Every Fusion customer receives 20,000 AUs per month at no charge, pooled across all pillars with unused units rolling over to the end of the contract term. Additional AUs are available in $1,000 increments.
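The formula is easy to sanity‑check with a small calculation. The Action Value Factor of 2 below is a made‑up example for illustration, not a published rate; the ceiling step and the roughly $0.01 per AU figure come from the description above.

```javascript
// AU consumption = CEILING((input + output tokens) / 10,000) × factor.
const inputTokens = 3_000;
const outputTokens = 18_500;
const actionValueFactor = 2; // hypothetical factor for illustration

// 21,500 tokens / 10,000 = 2.15, which rounds UP to 3 blocks.
const auConsumed =
  Math.ceil((inputTokens + outputTokens) / 10_000) * actionValueFactor;

// At roughly $0.01 per AU, the approximate cost in dollars:
const approxCostUsd = auConsumed * 0.01;

console.log(auConsumed, approxCostUsd);
```

Note that the ceiling applies before the factor, so even a one‑token overrun into a new 10,000‑token block is charged as a whole block.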

What I find most compelling about this architecture is that it’s built for the realities of enterprise work rather than an idealised version of it. The self‑correction loops, governance controls, evaluation framework and hybrid agent pattern all acknowledge that real business processes can be messy and that auditability is essential. The 22 new agentic applications arriving in 26B across ERP, HCM, SCM and CX give us a clear benchmark for what good looks like in practice. If you’re interested in exploring how Workflow Agents could support your organisation’s processes, now is a great time to start that conversation.

In the meantime, why not check out my earlier post covering the Oracle AI World announcements? You can find it here.


Oracle ERP Cloud Financials 26B

Don’t worry, I haven’t abandoned the world of HCM for ERP just yet. My enthusiasm for Oracle AI is very much alive, and with four new AI agents landing in Financials this release, I simply couldn’t ignore it. I’d never claim to be a Financials expert, but I do know how long ERP users have been asking for meaningful AI capabilities, and this release feels like a real response to that demand. Oracle has clearly leaned in, and there’s plenty here worth getting excited about.

The long‑awaited Ledger Agent brings an intelligent, AI‑powered experience to General Ledger, helping finance teams work more efficiently and proactively. It continuously monitors balances, journals, and transactions using configurable prompts, surfacing clear, contextual insights only when attention is needed. Accountants can ask natural language questions about balances, variances, journals, and process statuses, and receive precise, easy‑to‑understand explanations backed by correlated ledger and subledger data. By combining proactive monitoring, root‑cause insight, and seamless access to related ledger actions in a single guided experience, the Ledger Agent reduces time spent navigating multiple screens or compiling information manually, supports earlier detection and resolution of issues, and helps teams maintain accurate, up‑to‑date financial positions while respecting existing security and access controls.

The Payables Agent delivers a modern, AI‑driven approach to invoice processing, helping organisations move towards a truly touchless Payables experience. It automates invoice ingestion, compliance, and control across multiple sources and formats, using GenAI to reduce manual effort, improve data accuracy, and surface only the exceptions that need attention. With unified capture, automated attribute defaulting, intelligent anomaly detection, and a single, streamlined view for managing invoices, teams gain full visibility and control across the invoice‑to‑pay lifecycle. The result is faster processing, stronger compliance, reduced risk of errors or fraud, and improved supplier satisfaction, allowing Payables to shift from a reactive cost centre to a value‑generating function that supports better financial outcomes.

The Payments Agent introduces a smarter, more strategic approach to supplier payments by helping organisations optimise how and when they pay, rather than simply executing scheduled runs. Using AI‑driven insights and conversational guidance, it supports users across the full payment lifecycle, from evaluating payment options such as dynamic discounting and virtual cards, through creating and managing supplier offers, to executing and monitoring payments securely. By assessing the financial impact of different payment programmes in real time and translating decisions seamlessly into action, the Payments Agent improves cash flow, generates incremental financial benefits, and strengthens operational control. The result is a more proactive, insight‑led Payables function that reduces manual effort, highlights exceptions early, and enables finance teams to focus on working capital optimisation and stronger supplier relationships.

The Expenses Agent simplifies expense reporting by allowing employees to complete and submit expenses entirely through email, using natural language. Employees can forward receipts directly to the agent, which automatically creates the expense and prompts for any missing details, such as justifications, attendee information, or cost centres, via a simple email reply. Once all required information is captured, the expense is ready for submission or can be auto‑submitted in line with company policy. This conversational, email‑based approach reduces manual data entry, minimises errors, and cuts down on back‑and‑forth, accelerating reimbursements while improving compliance and delivering a far more intuitive experience for both employees and finance teams.

To wrap up, this has been my first step into writing about ERP Cloud Financials, and I’ve genuinely enjoyed exploring what Oracle is doing in this space, particularly around AI. I’d really welcome your feedback on this post, whether it’s what resonated, what you’d like to see more of, or where I could go deeper. If there’s interest, I’d be more than happy to write further blogs on Financials and continue sharing my perspective as these capabilities evolve.