"Agentic AI" is the latest new shiny object. As with everything AI, agentic AI has issues.
Agentic AI refers to AI systems designed to perceive, reason, and act independently to achieve a given objective. Unlike conventional AI, agentic AI can make autonomous decisions. It can dynamically determine the best course of action based on its environment, learn from its experiences, and adapt its strategies over time. It goes beyond pre-programmed, algorithmic decision-making.
This article gives examples of agentic AI uses, including customer service.
For a more in-depth look at what agentic AI is, and how it works, see this UK Government publication AI Insights: Agentic AI.
If an AI system acts autonomously to achieve a goal, what personal information might it collect, use, and disclose along the way?
Imagine an agentic AI designed to manage your smart home. It might learn your daily routines, your energy consumption habits, and even your preferred water temperature for a shower. This could involve collecting data from smart devices, calendars, and your online activity. While this might seem convenient, the scope of data collection could be broader than expected. What happens if this AI, in its pursuit of optimizing your life, shares this personal data with third parties for personalized services, or if there's a data breach?
How do you obtain meaningful consent when the AI itself dynamically determines its data needs? How do you limit its purpose when it discovers new ways to achieve its goals, potentially involving new data uses? We will have to sort out how agentic AI can provide transparency and control over what personal information it has access to.
If your agentic AI, acting independently, causes harm, who is on the hook?
Take, for example, agentic AI acting as a financial advisor. What if it makes stock trades that result in financial losses? The person who deployed it would most likely be liable, as the agent is acting on their behalf. Depending on the situation, that could be the individual using it for their own benefit, or a financial advisor using it as a tool for their customers. Are there situations where the developer could be liable? The legal system is built on the premise of human or corporate responsibility. Corporate responsibility flows from the notion that corporations have the same powers as a natural person. Agentic AI is not an entity itself; it is a tool used by a person or corporation. Laws of agency typically hold a principal liable for the actions of their agent, but that agent is usually a human or a corporation. We will see how traditional laws of agency apply to agentic AI. An interesting question to watch will be how ostensible authority - where a reasonable person on the other side would believe the agent had authority to act - applies.
It is well known that AI has a hallucination problem. In my view, AI hallucinations are at the top of the list of AI issues. AI output often contains information that is completely made up and untrue. If agentic AI hallucinates and makes decisions based on that hallucinated information, the reliability of everything it does downstream is in doubt. We recently wrote about an instance where an AI bot made up a fictitious customer service policy that resulted in lost customers.
Bad decisions arising from hallucinations would be the responsibility of the entity employing the agent. But could others be liable to that entity? Might the developer have some responsibility based on a flaw in the development of the agent or the training data? Or might the AI have operated within its designed parameters but made a bad autonomous choice?
Take for example an agentic AI tasked with conducting due diligence for a business acquisition. If it hallucinates a critical financial or business detail, and the acquisition proceeds based on that false information, the financial and legal ramifications could be catastrophic.
Verifying every autonomous decision of an agentic AI in real-time might be impractical, yet the risks of not doing so are immense. This highlights the need for oversight mechanisms, auditable decision-making processes, and human-in-the-loop safeguards for high-stakes agentic AI applications.
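As a rough illustration, a human-in-the-loop safeguard can be as simple as a gate that scores proposed actions by risk, logs every decision, and pauses for human approval before executing anything high-stakes. The sketch below is hypothetical Python, not tied to any particular agent framework; the action names, risk scores, and threshold are invented for the example.

```python
import json
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_audit")

# Hypothetical risk tiers: anything at or above APPROVAL_THRESHOLD
# is held for a human decision instead of executing automatically.
RISK_SCORES = {"send_email": 1, "place_order": 2, "transfer_funds": 3}
APPROVAL_THRESHOLD = 2

@dataclass
class ProposedAction:
    name: str
    params: dict = field(default_factory=dict)

def audit(event: str, action: ProposedAction, outcome: str) -> None:
    """Append a structured, timestamped record of every agent decision."""
    log.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "action": action.name,
        "params": action.params,
        "outcome": outcome,
    }))

def execute_with_oversight(action: ProposedAction, approver) -> bool:
    """Run low-risk actions automatically; escalate high-risk ones to a human."""
    risk = RISK_SCORES.get(action.name, APPROVAL_THRESHOLD)  # unknown action = high risk
    if risk >= APPROVAL_THRESHOLD:
        approved = approver(action)  # blocks until a human says yes or no
        audit("human_review", action, "approved" if approved else "rejected")
        if not approved:
            return False
    # ... perform the action here ...
    audit("executed", action, "ok")
    return True

# Example: a console prompt stands in for a real approval workflow.
if __name__ == "__main__":
    ask = lambda a: input(f"Allow {a.name} {a.params}? [y/N] ").lower() == "y"
    execute_with_oversight(ProposedAction("transfer_funds", {"amount": 5000}), ask)
```

Even a minimal gate like this produces an audit trail, which matters later if anyone has to reconstruct why the agent did what it did.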
What happens when AI agents start communicating with each other, forming their own ecosystems of autonomous decision-making? Let's take the example of an online marketplace. What happens when my agent buys something you own from your agent, and one or both of us does not want to complete the transaction? Is that contract binding? Does it matter when the human finds out and intervenes? What if the buying agent has already paid for it? What if the selling agent has already shipped it from a warehouse? Will there be a third agentic AI dispute mechanism that sorts it out?
Is the current legal framework, designed for a human-centric world, ill-equipped to handle the implications of agentic AI? Time will tell.
For now, anyone employing agentic AI should pause and reflect on what information it might use and what decisions and actions it might take. Consider whether there are parameters to be set on its decision-making abilities. Are there touchpoints in the agentic AI's decision process where it should be interrupted so you can grant or deny permission for its next step? The tougher issue might be how to vet its decision-making to make sure it is not based on hallucinations.
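One practical way to think about those parameters is as an explicit, reviewable policy rather than an implicit property of the model. The hypothetical sketch below (plain Python, with invented limits and action names) shows how spending caps, permitted data categories, and mandatory permission touchpoints might be written down and checked before the agent acts.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    # Invented example limits; every value here is something the deployer
    # decides deliberately, not something the agent infers on its own.
    max_spend_per_action: float = 100.0
    allowed_data_categories: set = field(default_factory=lambda: {"calendar", "energy_usage"})
    actions_requiring_permission: set = field(default_factory=lambda: {"share_data", "sign_contract"})

def check_action(policy: AgentPolicy, action: str, spend: float = 0.0,
                 data_categories: set = frozenset()) -> str:
    """Return 'deny', 'ask', or 'allow' for a proposed agent action."""
    if spend > policy.max_spend_per_action:
        return "deny"
    if not data_categories <= policy.allowed_data_categories:
        return "deny"  # the agent wants data it was never authorized to touch
    if action in policy.actions_requiring_permission:
        return "ask"   # a touchpoint: pause and get explicit human permission
    return "allow"

policy = AgentPolicy()
print(check_action(policy, "share_data", data_categories={"calendar"}))  # -> ask
print(check_action(policy, "buy_item", spend=250.0))                     # -> deny
```

A policy like this will not catch a hallucination, but it does make the boundaries of the agent's authority something a human can read, question, and revise.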
David Canton is a business lawyer and trademark agent at Harrison Pensa with a practice focusing on technology, privacy law, technology companies and intellectual property. Connect with David on LinkedIn, Bluesky, and Twitter.
Image credit: ©Stone Story - stock.adobe.com