AI Agents: Business Tools or Partners?

PLUS: Governing AI Agents

IN THIS ISSUE

  • From AI Agents to AI Companions

  • Jevons Paradox: Why You Must Upskill to AI

  • Paper Highlights: Governing AI Agents

TOP PICKS

From Unsplash

From AI Agents to AI Companions

Last week, Mustafa Suleyman wrote about how AI will evolve to become a companion to human beings. He argues that its downsides are grossly overrepresented, while the possibilities and the positive impact it can have on human lives are grossly underrepresented.

AI is not just another tech. So much so that Suleyman urges the reader to ‘ignore’ the popular discourse and open their mind to the ‘possibilities’ of AI. As a new technology, it is more ambitious and more human in design. AI is more intimate and emotionally engaged.

  1. He stresses the importance of building AI tools that simplify human life, reduce stress, and facilitate connection.

  2. Advances like voice and vision enable easy communication with the tech. We can speak to it in natural language and convey our context through voice, visuals, and text.

  3. AI agents will not only say things but also do them. They will have personalities and quirks similar to ours, shaped by our cultures, routines, and needs. As they become part of our lives, they will also start developing emotional connections with us. Indeed, they may offer emotional support as they evolve.

  4. Because there is deep and extensive work going on to anchor AI in trusted sources of information (indeed, data is the new oil!), these AI agents will be more credible, accurate, and reliable.

Why This Matters

Mustafa Suleyman is the CEO of Microsoft AI. He is widely regarded as an AI pioneer and has been working in the AI domain since 2010. In 2024, Time named him one of the 100 most influential people in AI. Suffice it to say, when he talks AI, you listen.

A majority of us are working with AI tools to improve our work output and productivity. We are already building AI agents that can execute tasks without human intervention. Work is the easiest area of our lives in which to adopt AI, since there it is ‘just another tech’.

Maybe it is not.

What if AI could replace a plethora of tech that we no longer need? It simply becomes a dedicated agent that takes care of the dull, boring chores for us, but can also talk to us during meal times, share our stories, and offer a shoulder to lean on.

The billion-dollar question is: do we really need an “AI” companion?
Do we have that choice?

Jevons Paradox & Why You Must Upskill to AI

The Jevons paradox, named after English economist William Stanley Jevons, states that as the efficiency of resource use improves, the overall consumption of that resource tends to increase rather than decrease.

This counterintuitive phenomenon occurs because improved efficiency often leads to reduced costs, which in turn stimulates greater demand and usage.
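The dynamic can be sketched with toy numbers (all figures hypothetical, not from the post): if efficiency doubles, the cost per unit of output halves; when demand is elastic enough, the rebound in usage more than offsets the savings, and total resource consumption rises.

```python
# Toy illustration of the Jevons paradox (all numbers hypothetical).
# Better efficiency cuts the cost per unit of output; if demand is
# sufficiently elastic, total resource consumption goes UP, not down.

def resource_consumed(efficiency: float, base_demand: float, elasticity: float) -> float:
    """Resource used to satisfy demand at a given efficiency level.

    Cost per unit of output falls as 1/efficiency; demand responds with
    a constant price elasticity; resource burned = output / efficiency.
    """
    cost = 1.0 / efficiency                       # cost per unit of output
    demand = base_demand * cost ** (-elasticity)  # elastic demand response
    return demand / efficiency                    # resource actually consumed

before = resource_consumed(efficiency=1.0, base_demand=100.0, elasticity=1.5)
after = resource_consumed(efficiency=2.0, base_demand=100.0, elasticity=1.5)
print(before, after)  # consumption grows despite (because of) efficiency gains
```

With an elasticity above 1, doubling efficiency here raises consumption from 100 to roughly 141 resource units; with inelastic demand (elasticity below 1) the paradox disappears, which is why the effect is an empirical question rather than a law.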

From Aravind’s LinkedIn post

Paper Highlights: Governing AI Agents

Noam Kolt’s paper is an instructive and insightful read for anyone interested in AI Agents and the challenges they pose when it comes to governance and ethical behavior. Here are some highlights for you to ponder:

“AI agents are different. They are not mere tools, but actors. Rather than simply produce synthetic content, AI agents can independently accomplish complex goals on behalf of humans.”

“Individuals or organizations seeking to use an AI agent are likely to have limited information about the agent’s abilities and limitations prior to deploying it, especially if deployment is in a novel setting or application. For example, it may be difficult to determine, before the fact, whether a “generic” AI personal assistant can perform a specialized business task. In addition, even after the fact, it might be difficult to determine whether an AI agent has accomplished its goals effectively and ethically. Of course, the more complex and difficult-to-measure the goals, the more acute these problems.”

“...we can see that traditional legal and economic frameworks for incentive design are relatively well-suited to addressing familiar principal-agent problems that are predictable and fall neatly into conventional models of human behavior. These frameworks, however, appear ill-suited to addressing agents that operate very differently to human beings and present novel and unpredictable risks.”

“Any attempt to overcome these challenges must begin by recognizing that human oversight and monitoring are not, on their own, an adequate solution for the challenges presented by AI agents.”

“...it may be costly or impractical to terminate AI agents if they are deployed in high-stakes settings (where termination would result in significant economic losses) or if the agents are capable of resisting attempts to shut them down. Another problem stems from the fact that AI agents do not necessarily have the same interests or motivations as human agents. Because AI agents do not, by default, explicitly value financial resources or personal freedom, it is unclear how the imposition of financial penalties or incarceration could be applied to penalize and deter these artificial agents.”

“A similar problem arises with respect to informal and extra-legal sanctions in the case of AI agents that are not sensitive to reputational or psychological consequences associated with enforcement actions.”

FOR YOUR READING PLEASURE

And that’s all, folks! If you like or want to share something, hit reply to this email. You can also connect with me on LinkedIn.