By Ashley Vassell, Senior Product Manager, Hydrolix
Agentic AI makes decisions; it doesn’t just answer questions. Many financial companies have been using AI to answer questions and calling that transformative, but they’re missing how much more powerful, autonomous, and potentially damaging AI becomes once it makes decisions on its own.
To see how this works, consider a simple portfolio management workflow (a code sketch of this pipeline follows the list):
- An agent analyzes your current allocations.
- Another agent looks at a benchmark of similar portfolios in today’s market.
- A third agent takes both of those analyses and uses them to create a plan to rebalance your portfolio.
- A fourth agent takes that plan and executes the necessary trades.
- A final agent reviews the results and prepares a report for you.
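In code, a hand-off like this might look something like the minimal sketch below. The agent functions, data shapes, and sample tickers are hypothetical stand-ins for real model calls and trading APIs; the point is that each agent's output feeds the next with no human in the loop.

```python
from dataclasses import dataclass

# Hypothetical data shapes passed between agents.
@dataclass
class Portfolio:
    holdings: dict[str, float]   # ticker -> current weight

@dataclass
class RebalancePlan:
    trades: dict[str, float]     # ticker -> weight change to apply

def analyze_allocations(portfolio: Portfolio) -> dict[str, float]:
    """Agent 1: summarize current allocations."""
    return dict(portfolio.holdings)

def benchmark_similar_portfolios(allocations: dict[str, float]) -> dict[str, float]:
    """Agent 2: produce target weights from a market benchmark (stubbed as equal weights)."""
    return {ticker: 1.0 / len(allocations) for ticker in allocations}

def plan_rebalance(current: dict[str, float], target: dict[str, float]) -> RebalancePlan:
    """Agent 3: turn the gap between current and target weights into a plan."""
    return RebalancePlan({t: target[t] - current[t] for t in current})

def execute_trades(plan: RebalancePlan) -> dict[str, float]:
    """Agent 4: execute the trades (in reality, the high-risk step)."""
    return plan.trades

def report(results: dict[str, float]) -> str:
    """Agent 5: review the results and prepare a report."""
    return "\n".join(f"{ticker}: {delta:+.2%}" for ticker, delta in results.items())

# The pipeline: no human decision-maker between analysis and execution.
portfolio = Portfolio({"AAPL": 0.50, "MSFT": 0.30, "BND": 0.20})
current = analyze_allocations(portfolio)
target = benchmark_similar_portfolios(current)
plan = plan_rebalance(current, target)
results = execute_trades(plan)
print(report(results))
```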
While many people view this type of decision-making as a positive development – it means less time spent on manual tasks – the reality is that it scares risk managers in the banking industry.
That is because traditional AI told users what to do; it didn’t act on its own. There was always a human decision-maker somewhere along the way. When the AI recommended a course of action and that recommendation turned out to be wrong, there was always someone accountable – someone there to say, “I’m sorry.”
With Agentic AI, that safety net no longer exists. An agent that acts independently and rejects loan applicants without reason or explanation is not a theoretical concept. It’s already happening. An agent that executes trades in high-volatility situations before a human is even aware of it is also becoming more common.
Controls Are the New Safety Net
Because of these concerns, the response cannot simply be to slow down the implementation of Agentic AI. Instead, we need to rethink our strategy. We need to build systems that inherently include the controls required to allow users to trust them – auditable, transparent, and capable of providing explanations for decisions. Systems that cannot provide explanations should not be deployed.
Tier your agents based on the risk associated with their actions. Agents performing higher-risk functions, such as trading, require gateways to prevent unauthorized activity. While strong controls are necessary, they are useless without the right foundation.
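As a minimal illustration of what tiering and gating could look like, here is a sketch with hypothetical risk tiers and an illustrative approval gateway. It is not a prescription for any particular platform, but it shows two of the controls above in one place: every action must carry an explanation (so decisions are auditable), and high-risk actions wait for human approval.

```python
from enum import Enum
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

class RiskTier(Enum):
    LOW = "low"        # e.g. read-only analysis and reporting
    MEDIUM = "medium"  # e.g. drafting a rebalancing plan
    HIGH = "high"      # e.g. executing trades, rejecting applications

def gateway(action: str, tier: RiskTier, explanation: str, human_approved: bool = False) -> bool:
    """Allow or block an agent action based on its risk tier.

    Every decision is logged with its explanation so it is auditable;
    high-risk actions additionally require an explicit human approval flag.
    """
    if not explanation:
        log.warning("BLOCKED %s: no explanation provided", action)
        return False
    if tier is RiskTier.HIGH and not human_approved:
        log.warning("BLOCKED %s: high-risk action awaiting human approval", action)
        return False
    log.info("ALLOWED %s (%s tier): %s", action, tier.value, explanation)
    return True

# A low-risk analysis passes; an unapproved trade is held back.
gateway("summarize_allocations", RiskTier.LOW, "routine portfolio analysis")
gateway("execute_trades", RiskTier.HIGH, "rebalance toward benchmark weights")
```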
The Data Infrastructure Mismatch
While everyone agrees that AI requires data, the idea persists that you can point an agentic system at your current data infrastructure and expect it to work. Unfortunately, most of today’s data infrastructure wasn’t created with agentic systems in mind.
Traditional data infrastructure was primarily designed to serve humans generating reports. Agentic AI requires a fundamentally different approach – agents do not ask questions sporadically. They interact with your data continuously, calling your database multiple times a second. As a result, there is often a significant gap between the performance traditional data systems were built to deliver and the performance agentic systems require.
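A back-of-the-envelope sketch makes that mismatch concrete. The query counts and latencies below are assumptions chosen for illustration, not measurements of any particular system:

```python
# Illustrative only: how per-query latency compounds for an agent.
calls_per_decision = 10  # an agent touches the data store repeatedly per task

for store, latency_ms in [("batch warehouse", 3_000), ("low-latency store", 50)]:
    decision_latency_s = calls_per_decision * latency_ms / 1_000
    print(f"{store:>16}: ~{decision_latency_s:.1f} s per agent decision")

# A human waiting on a monthly report never notices a 3-second query;
# an agent making thousands of decisions a day pays that cost on every call.
```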
In many organizations developing Agentic AI systems, similar patterns repeatedly emerge as major hurdles:
- Poor Data Quality: Organizations cut costs by sampling their data or aggregating it. This approach was sufficient for monthly reporting but woefully inadequate for real-time decision-making. Fraud-detection algorithms based on sampled historical transactions may fail to detect anomalous behavior — particularly when those anomalies were never part of the original samples.
- Limited Access to Data: Agents can respond only as quickly as they can retrieve relevant information. If your transaction history resides in cold storage that requires 12 hours of rehydration before use, it effectively does not exist for the agent’s decision-making. The same is true of any data that storage or retrieval limitations keep from arriving within seconds. Data the agent cannot see is, for practical purposes, data that does not exist.
- Data Fragmentation: Critical data elements reside in disparate systems that cannot communicate with one another. An agent assessing creditworthiness, for example, may need data from your CRM system, transaction processing system, and customer service ticket tracking system, all at once. If these sources are siloed, the agent cannot accurately assess the applicant’s creditworthiness, which is a clear liability for the organization (see the sketch after this list).
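Here is a minimal sketch of how those access and fragmentation problems look from the agent’s side. The source systems, retrieval times, and latency budget are all hypothetical:

```python
# Hypothetical view of where the pieces of a creditworthiness decision live.
SOURCES = {
    "crm":             {"reachable": True,  "retrieval_seconds": 0.2},
    "transactions":    {"reachable": True,  "retrieval_seconds": 43_200},  # cold storage: ~12 h rehydration
    "support_tickets": {"reachable": False, "retrieval_seconds": None},    # siloed system, no API
}

MAX_RETRIEVAL_SECONDS = 5  # assumed budget: data must arrive within seconds to matter

def usable_sources() -> list[str]:
    """Return only the sources an agent can actually use for a live decision."""
    usable = []
    for name, info in SOURCES.items():
        if not info["reachable"]:
            print(f"{name}: unreachable - effectively does not exist for the agent")
        elif info["retrieval_seconds"] > MAX_RETRIEVAL_SECONDS:
            print(f"{name}: {info['retrieval_seconds']:.0f}s to retrieve - too slow for a live decision")
        else:
            usable.append(name)
    return usable

print("Sources the agent can actually draw on:", usable_sources())
# With only the CRM visible, the creditworthiness assessment rests on a fraction
# of the relevant data - exactly the liability described above.
```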
Additionally, many organizations still hold misconceptions about data and Agentic AI systems. More data without adequate quality merely adds noise to the analysis. More data you cannot access is moot. AI agents are not immune to garbage in, garbage out. Volume sets a minimum threshold; quality determines whether an AI agent rapidly produces reliable decision support or consistently makes confident errors.
Cost Considerations
Finally, there are numerous cost considerations for implementing data and Agentic AI architectures. Traditional AI inference costs fractions of a cent per call. Agentic systems, however, generate 10–15 separate model calls for each task, with each call accessing data, validating it, and determining the next course of action. Depending on the hardware, each call can cost anywhere from $0.10 to $1.00. Multiply that by thousands of daily transactions, add monitoring and governance tooling and the growing demand for highly skilled professionals who can design and maintain AI-based systems, and the total price tag becomes daunting.
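A quick back-of-the-envelope calculation shows how fast that adds up. The calls-per-task and per-call cost ranges come from the estimates above; the daily task volume is a hypothetical assumption:

```python
# Rough daily inference cost for an agentic workload, using the ranges above.
tasks_per_day = 5_000                # hypothetical transaction volume
calls_per_task_range = (10, 15)      # separate model calls per task
cost_per_call_range = (0.10, 1.00)   # dollars per call, hardware dependent

low = tasks_per_day * calls_per_task_range[0] * cost_per_call_range[0]
high = tasks_per_day * calls_per_task_range[1] * cost_per_call_range[1]
print(f"Model calls alone: ${low:,.0f} - ${high:,.0f} per day")
# At 5,000 tasks/day that is roughly $5,000 to $75,000 per day,
# before monitoring, governance tooling, or engineering headcount.
```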
The challenge for financial services is clear: to safely and effectively utilize Agentic AI, the time to prepare your data, establish rigorous controls, and plan for architectural transformation is now.

Ashley Vassell is Senior Product Manager for Innovation Labs at Hydrolix, where she drives the company’s most forward-thinking initiatives beyond the core platform and console. Her work focuses on emerging capabilities like Anomaly Detection and MCP (Model Context Protocol), as well as integrations into third-party ecosystems including Spark, Splunk, Kibana, and Grafana.