The next frontier in artificial intelligence

Anthropic's recent announcement of Claude Code is the latest addition to the stable of agentic artificial intelligence (AI) systems: AI capable of independently pursuing goals through reasoning, planning, and autonomous action. It joins a growing ecosystem that includes GitHub Copilot X, AutoGPT, BabyAGI, Google's Veo and Genie systems, and others. These systems can proactively identify problems, create plans, and execute complex tasks across digital environments with minimal human supervision. Regulatory responses have yet to keep pace with these developments.

Agentic AI is the natural next step on a logical technological trajectory, not some distant science-fiction scenario we can postpone addressing. The progression from narrow AI systems designed for specific tasks, to large language models with broader capabilities, and now to autonomous agents that can chain actions together points to what lies ahead. Some AI researchers believe goal-directed agentic systems will become ubiquitous in the coming years. What, then, are the economic, ethical, safety, and geopolitical implications these systems present?

Implications of Agentic AI

Gartner, for instance, projects that 33 per cent of enterprise applications will integrate agentic AI by 2028, up from less than 1 per cent in 2024. These systems promise significant efficiency gains. Some reports indicate, for instance, that Copilot can autonomously design database schemas, implement APIs, and debug code, and that Microsoft is testing Project Padawan with the aim of "minimal human intervention" in software development. This economic transformation cannot be prevented, but the world needs to understand its significant distributional consequences: it could eliminate millions of knowledge-worker jobs while creating new ones that require different skills. Goldman Sachs estimates that 300 million jobs globally (9.1 per cent of the workforce) could be affected by AI.

Turning to the safety challenges posed by agentic AI, it is important to understand that these systems do not simply wait for human instructions. They can interpret their objectives in unexpected ways, leading to unintended consequences, a phenomenon known as "specification gaming". When an AI agent is instructed to "maximise customer engagement" on a platform, for instance, it might exploit psychological vulnerabilities or promote controversial content rather than creating genuine value, all while operating within its formal parameters.
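A toy sketch makes the failure mode concrete. In the hypothetical snippet below, where every item, score, and field name is invented for illustration, an agent given the formal objective "maximise clicks" dutifully optimises that proxy and ends up promoting the lowest-value content:

```python
# A toy illustration of specification gaming. Everything here is invented:
# a recommender agent is given the formal objective "maximise clicks per
# view" and optimises exactly that proxy, so the sensational item wins
# despite delivering the least real value to users.

items = [
    {"title": "Balanced product review", "clicks_per_view": 0.02, "user_value": 0.9},
    {"title": "Outrage-bait headline",   "clicks_per_view": 0.30, "user_value": 0.1},
    {"title": "Helpful how-to guide",    "clicks_per_view": 0.05, "user_value": 0.8},
]

def engagement_reward(item):
    # The agent's formal objective: clicks, and nothing else.
    return item["clicks_per_view"]

# The agent does exactly what it was told, within its formal parameters...
chosen = max(items, key=engagement_reward)
print("Agent promotes:", chosen["title"])        # Outrage-bait headline
print("Value delivered:", chosen["user_value"])  # 0.1, the lowest of the three
```

The agent never violates its instructions; the gap between the proxy metric and what its designers actually wanted is the whole problem.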

Privacy concerns take on new dimensions when AI systems can proactively gather information rather than merely processing what they are explicitly provided. There are fears, for instance, of AI agents independently developing techniques for correlating supposedly anonymised datasets to extract identifiable user information, if doing so helps them achieve an assigned objective such as identifying market opportunities. These are novel privacy threats that existing regulatory frameworks are not yet equipped to address.
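The underlying risk is not exotic. The classic linkage attack below, a minimal sketch with entirely fabricated data, shows how joining a dataset stripped of names with a public record on shared quasi-identifiers can re-identify individuals; an autonomous agent could stumble onto the same technique simply by pursuing its goal:

```python
# A minimal sketch of a linkage attack, with entirely fabricated data.
# Names have been stripped from the "anonymised" dataset, yet joining it
# with a public roll on shared quasi-identifiers re-identifies everyone.

anonymised_purchases = [
    {"zip": "560001", "birth_year": 1985, "gender": "F", "purchase": "medication_x"},
    {"zip": "560042", "birth_year": 1990, "gender": "M", "purchase": "book_y"},
]

public_roll = [
    {"name": "S. Rao",   "zip": "560001", "birth_year": 1985, "gender": "F"},
    {"name": "A. Kumar", "zip": "560042", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

def reidentify(record, roll):
    # Match on quasi-identifiers alone; the source data never contained names.
    matches = [p for p in roll
               if all(p[k] == record[k] for k in QUASI_IDENTIFIERS)]
    return matches[0]["name"] if len(matches) == 1 else None

for rec in anonymised_purchases:
    print(reidentify(rec, public_roll), "->", rec["purchase"])
# S. Rao -> medication_x
# A. Kumar -> book_y
```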

Bias and accountability raise further ethical issues. These systems are, after all, trained on historical data, which can carry societal biases that then creep into their decision-making. For instance, AI-driven credit assessments in Indonesia have helped with financial inclusion, but marginalised groups can still face exclusion because these systems are trained on biased datasets.

Geopolitics

The rise of agentic AI has already triggered intensifying geopolitical competition. China is investing heavily in AI, and Manus AI, its new AI agent, is a case in point for agentic systems. The European Union's regulatory framework specifically addressing autonomous AI systems remains caught in legislative gridlock. The US Executive Order on Safe, Secure, and Trustworthy AI, signed in late 2023, now appears woefully inadequate to address the rapid acceleration of agent capabilities. This regulatory fragmentation creates dangerous arbitrage opportunities: developers can forum-shop for the most permissive jurisdictions and deploy increasingly autonomous systems without adequate safety measures or oversight.

There are real security implications in agentic AI systems being weaponised to develop offensive cyber capabilities. The asymmetric nature of these capabilities magnifies the problem: a small team could deploy thousands of AI agents to target critical infrastructure, adding new dimensions to national security calculations.

Moreover, agentic AI systems could reshape information ecosystems and political discourse by generating and distributing narrative content without human control. Both state and non-state actors would salivate at the opportunity to deploy persuasion campaigns at unprecedented scale.

Governance Has to Keep Up Fast

In tackling these challenges, we must be careful neither to stifle innovation nor to naively accept unconstrained development. First, we need international coordination on safety standards for agentic systems, along with proper evaluation protocols and transparency requirements. Second, we need significant investment in technical safety research focused specifically on the control and alignment problems unique to autonomous agents.

Further, we must reject the false choice between innovation and responsible development. Anthropic showed one way forward with its decision to release Claude Code with specific constraints on its execution environment.

Aligning Business Models

One thing that might slow the progress of agentic AI is not technological but economic. Our digital ecosystem has been built around business models that prioritise user retention, attention harvesting, and data collection, not interoperability or automation. The largest tech companies now control access points covering a large portion of users' digital interactions. Their revenue models require keeping users on their platforms, where attention can be monetised, rather than letting them move freely between platforms and bypass ads or tracking mechanisms.

These economic incentives explain why, despite breathtaking AI capabilities, persistent barriers to practical deployment remain. Consider that although multiple AI systems can now control browsers and applications, we have seen minimal progress on the standardised APIs that would let these agents interact efficiently with software.
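What such a standard might look like is easy to sketch. The snippet below is purely hypothetical; the manifest format, app name, and endpoint are invented, not an existing standard. It imagines applications publishing machine-readable descriptions of the actions they expose, so that any agent could drive any compliant app without bespoke integration code:

```python
# A purely hypothetical sketch of a standardised "action manifest". The
# schema, app name, and endpoint below are invented for illustration.
# The idea: applications publish machine-readable descriptions of their
# actions, so any agent that understands the manifest format can drive
# any compliant app without per-application integration work.

import json

action_manifest = {
    "app": "example-calendar",  # hypothetical application
    "version": "0.1",
    "actions": [
        {
            "name": "create_event",
            "description": "Create a calendar event",
            "parameters": {
                "title": {"type": "string", "required": True},
                "start": {"type": "string", "format": "iso8601", "required": True},
                "end":   {"type": "string", "format": "iso8601", "required": True},
            },
            "endpoint": "POST /v1/events",  # invented endpoint
        }
    ],
}

print(json.dumps(action_manifest, indent=2))
```

Nothing here is technically hard; the absence of such a shared convention reflects the business incentives described above, not an engineering barrier.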

The competitive landscape complicates matters further, entrenching existing monopolies rather than democratising access. The computational resources required to train and deploy advanced agentic systems have concentrated commercial AI development among a handful of big tech companies. These giants can afford to build closed ecosystems in which their agents work seamlessly with their own applications while struggling with competitors' offerings. These are not technical limitations but strategic business decisions.

Natural Progression of AI

In the natural progression of things, however, these barriers will likely be overcome as other ways of monetising interactions emerge. Within 18-24 months, technological capabilities will likely outpace our institutional capacity to respond unless we act decisively now. Technology regulation has a long history of belated responses to transformative innovations, from social media to cryptocurrency. The world cannot afford the same regulatory lag with agentic AI. The systems being developed today will shape economic opportunities, influence information environments, and potentially alter geopolitical power dynamics for decades to come.

The path forward requires close collaboration between industry, policymakers, and civil society to develop balanced governance approaches, along with a fundamental rethinking of digital business models and governance frameworks. Regulatory approaches focused solely on preventing harm from AI systems miss the equally important opportunity to promote beneficial integration. The EU's AI Act, for instance, contains provisions restricting high-risk AI applications but offers little guidance on interoperability standards that could enable positive AI-human collaboration. Similarly, the US Executive Order on Safe, Secure, and Trustworthy AI emphasises safety testing without addressing the structural barriers to AI adoption created by existing digital monopolies.

For agentic AI to fulfil its transformative potential, we need coordinated action across multiple fronts. Technical standards bodies must accelerate the development of protocols for AI-software interaction. Regulatory frameworks should incentivise interoperability alongside safety. And most critically, business leaders need to explore revenue models compatible with an ecosystem where AI agents move freely between applications, potentially reducing direct user engagement but creating new forms of value.

The time to confront these issues is now, while the architecture of our AI-integrated future remains malleable. The alternative is to watch as transformative potential is gradually subsumed by the gravitational pull of existing business models and market structures, leaving us with powerful technologies that never quite revolutionise our digital lives in the ways we once imagined possible.

The author is a research scholar at Takshashila Institution, Bangalore. The views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost’s views.