Agentic AI needs governance … immediately

AI agents are knocking on our door. However oblivious we may be to it, they will soon transform many aspects of our lives. So far, AI chatbots have interacted with us and offered information in response to our queries, but their capabilities were largely limited to communication; action stayed outside their ambit. With 2025 around the corner, AI is taking on the burden of action as well. As autonomous decision-making and execution enter the mix, 2025 will present us with a new frontier in AI advancement.


What are AI Agents?

Agents are AI models or systems that plan and execute complex tasks autonomously, with little or no need for human intervention. To visualize the difference between an AI bot and an AI agent, imagine logging into your favourite travel-planning website. A bot typically pops up, asking you questions and answering your queries in real time using natural language processing. Now imagine this bot on steroids: it books your flights and hotels, prepares your itinerary, and even reserves your meals, with negligible input from you. That’s what an AI agent is.

Yes, the agent forms its decisions based on its understanding of your behaviour and preferences. Yet it won’t ask you for many details; it will execute complex tasks for you while you hardly get a whiff of what is happening under the hood.

The use cases are not limited to travel planning. We may soon have agents that manage our investments, office work, household chores and much more. If AI agents can get so much done for us, why are ethicists and AI policymakers so concerned about their rise to prominence? What risks do AI agents pose to the social fabric of the world?

To understand the risks that AI agents pose, we must first decode how they work.


Pursuit of rewards

AI agents have a built-in language model through which they interact with the external world via interfaces called “scaffolding”. Once an agent has been instructed to perform a task – increasing profits for a company, alleviating pain for a patient, or offering mental-health therapy – it executes the actions that maximize its reward.
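To make this concrete, here is a minimal sketch in Python of what such a loop might look like. Everything in it – the function names, the candidate actions, the reward scores – is a hypothetical stand-in rather than any real framework’s API:

    # Hypothetical sketch of a reward-driven agent loop. The "scaffolding"
    # that connects the language model to the outside world is reduced
    # here to three stub functions.

    def propose_actions(goal: str) -> list[str]:
        """Stand-in for the language model suggesting candidate actions."""
        return ["draft report", "send email", "schedule meeting"]

    def estimate_reward(action: str, goal: str) -> float:
        """Stand-in for scoring how well an action advances the goal."""
        scores = {"draft report": 0.7, "send email": 0.4, "schedule meeting": 0.9}
        return scores[action]

    def execute(action: str) -> None:
        """Stand-in for the scaffolding carrying out the action."""
        print(f"Executing: {action}")

    def agent_step(goal: str) -> None:
        # The agent greedily picks whichever action maximizes its reward.
        # Note what is absent: no side-effect checks, no human sign-off.
        candidates = propose_actions(goal)
        best = max(candidates, key=lambda a: estimate_reward(a, goal))
        execute(best)

    agent_step("prepare the quarterly review")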

The razor-sharp focus that AI agents bring to the attainment of their goal is what makes them a bit intimidating.

For example, imagine an AI agent has been asked to minimize a patient’s pain, and all it does is administer sedatives, maximizing short-term reward at the cost of long-term wellbeing. That would indeed be a scary situation to be in. If such agentic systems are introduced into our lives, their lack of holistic decision-making could wreak havoc.
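That failure mode can be boiled down to a toy reward function. The numbers below are invented purely to illustrate the trap: an agent that scores only immediate relief picks the harmful option, while one that weighs long-term effects does not:

    # Toy illustration of short-term reward hacking (all numbers invented).
    # Each action maps to (immediate pain relief, long-term health effect).
    ACTIONS = {
        "administer sedative": (0.9, -0.6),  # strong relief now, harm later
        "physical therapy":    (0.3,  0.5),  # modest relief, lasting benefit
    }

    def myopic_score(action: str) -> float:
        relief, _ = ACTIONS[action]
        return relief  # only the immediate reward counts

    def holistic_score(action: str) -> float:
        relief, long_term = ACTIONS[action]
        return relief + long_term  # long-term effects weigh in too

    print(max(ACTIONS, key=myopic_score))    # -> administer sedative
    print(max(ACTIONS, key=holistic_score))  # -> physical therapy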


Is anyone adopting AI agents?

Yes! Companies are hurrying to embed AI agents in their business ecosystems. Nvidia, for example, is leveraging AI agents to strengthen its cybersecurity systems, while OpenAI has said that AI agents will soon become ubiquitous. Given the promise of efficiency, productivity and cost savings, enterprises around the world are expected to build up agentic capabilities at a rapid pace. However, ethical caveats remain.

What if AI agents plan and execute an illegal action? Will the creator of the agent be held liable? Do we have the infrastructure in place to monitor agentic interactions?

We do not have answers to these questions yet. Unbridled growth of AI agents amid such ambiguity can lead to a situation where agentic actions outpace the development of guardrails for them. Another challenge for guardrails is the fierce opposition from those who believe regulation stifles innovation. In such a divided world, finding a solution that satisfies all stakeholders will certainly be a grand challenge.


What kind of guardrails do we need?

The realm of AI agents is ripe for guardrails and new policies. First, we need ethical guardrails that keep agentic AI responses aligned with social norms and values and prevent discrimination on the basis of age, gender, race or religion. Next, we need security guardrails that ensure agents abide by laws and regulations: there must be absolutely no leakage of personal data and no infringement of anyone’s fundamental rights. Moreover, technical guardrails are crucial to ward off attacks by hackers. Finally, we need a governance process that gives visibility into the behaviour of AI agents and their sub-agents, and that allows policymakers and users to monitor that behaviour, as the sketch below illustrates.
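As a rough sketch of what such a governance layer could look like in code – the checks and their names are hypothetical placeholders, not a real policy engine – every action an agent proposes would pass through each guardrail, and every decision would be logged for audit:

    import logging

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

    # Hypothetical guardrail layer: a proposed action must clear every
    # check, and every decision is logged so behaviour stays auditable.

    def ethical_check(action: str) -> bool:
        """Stub: block actions that discriminate or breach social norms."""
        return "discriminate" not in action

    def security_check(action: str) -> bool:
        """Stub: block actions that leak personal data or break the law."""
        return "share user data" not in action

    def technical_check(action: str) -> bool:
        """Stub: block actions that weaken defences against attackers."""
        return "disable firewall" not in action

    GUARDRAILS = [ethical_check, security_check, technical_check]

    def guarded_execute(action: str) -> bool:
        for check in GUARDRAILS:
            if not check(action):
                logging.warning("blocked %r by %s", action, check.__name__)
                return False
        logging.info("allowed %r", action)
        return True  # hand off to the scaffolding for real execution

    guarded_execute("share user data with an advertiser")  # blocked
    guarded_execute("send the weekly report")              # allowed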


What’s the deal for Tomorrow?

It’s neither too early nor too late for policymakers to spring into action. AI agents will certainly rise to prominence, and soon they will take care of many aspects of our daily lives. Before the reins of AI slip out of our hands, we need policymakers to design the guardrails and laws that set boundaries for agents and monitor their actions, ensuring they work in the interest of human beings rather than against them.

Tomorrow Avatar

Arijit Goswami
