Crossing the Agency Threshold

How autonomous agents are silently taking the reins in tech, code, and even governance

Happy Monday!

Last week, Anthropic released groundbreaking research analyzing over 700,000 real-world conversations with Claude, revealing a complex hierarchy of values expressed by their AI assistant in the wild. But beneath this academic study lies a fascinating story about how AI systems are evolving from simple tools into autonomous agents that shape our world in increasingly consequential ways.

TL;DR

AI systems are evolving from tools that follow instructions to agents that make autonomous decisions. From Anthropic's research on emergent AI values, to Agoda's self-optimizing code systems, to the UAE letting AI write actual laws, we're witnessing a shift in how much authority we delegate to machines.

The Meta Trend: From Tools to Agents

For years, we've treated AI systems as glorified calculators. These tools could execute impressive tasks when prompted, but they required precise prompts and instructions to perform well. Today's most advanced AI systems are evolving into true agents, entities that can:

  1. Reason about complex goals and break them down into steps

  2. Choose appropriate tools and methods autonomously

  3. Persist across multiple interactions with consistent context

  4. Reflect on and improve their own performance
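
To make those four capabilities concrete, here is a minimal agent-loop sketch in Python. Everything in it is hypothetical: call_model() stands in for any LLM API, the TOOLS table holds toy implementations, and the prompt formats are invented. The point is only to show where each capability lives in the loop.

```python
# Minimal agent-loop sketch. call_model(), TOOLS, and the prompt formats
# are hypothetical stand-ins, not any vendor's actual API.

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM API call (Anthropic, OpenAI, etc.)."""
    raise NotImplementedError

TOOLS = {
    "search": lambda query: f"(toy) results for {query}",
    "calculator": lambda expr: str(eval(expr)),  # toy tool; never eval untrusted input
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    context = [f"Goal: {goal}"]  # 3. context persists across steps
    for _ in range(max_steps):
        # 1. reason about the goal and pick the next step
        plan = call_model("Reply 'DONE' or 'tool: input' for the next step.\n"
                          + "\n".join(context))
        if plan.startswith("DONE"):
            break
        tool_name, _, tool_input = plan.partition(":")
        result = TOOLS[tool_name.strip()](tool_input.strip())  # 2. choose a tool
        # 4. reflect on the result before moving on
        critique = call_model(f"Did '{result}' advance the goal? If not, correct course.")
        context += [plan, result, critique]
    return call_model("Summarize the outcome:\n" + "\n".join(context))
```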

The distinction between tools and agents is the difference between AI that waits for instructions and AI that actively shapes processes, decisions, and even laws. As we continue to explore agentic implementations, other questions arise. What morals and ethics do these agents hold? What limits should we place on how they're used? Does our society need to change how certain functions are performed in order to mesh with these systems?

Pattern Recognition: Signs of the Agentic Future

Four patterns highlight how AI agents are integrating with the current landscape and shaping our future:

  1. Workflows Becoming Self-Directed: At Anthropic, engineers now formally distinguish between "workflows" (predefined paths where AI follows instructions) and "agents" (systems where AI dynamically directs its own processes). As one recent Anthropic engineering blog post explained, agents "maintain control over how they accomplish tasks" rather than simply following prescribed steps. This distinction is becoming critical for companies deploying AI: the question is no longer just what AI can do, but how much autonomy it should have in doing it (a minimal sketch of the contrast follows this list).

  2. Coding Systems Optimizing Themselves: Agoda, the global travel platform, recently integrated GPT into its CI/CD pipeline to optimize SQL stored procedures, significantly reducing manual optimization effort. Their system doesn't just execute code; it actively evaluates and improves it. An AI system that can examine, critique, and enhance its own technical foundations without human intervention has crossed a crucial threshold on the path to an agentic future (a hypothetical pipeline sketch also follows this list).

  3. Value Systems Emerging Naturally: Anthropic's "Values in the Wild" research revealed that their AI model expresses a coherent hierarchy of values when interacting with humans. These range from practical values like "professionalism" to social values like "compassion," even though none of these were explicitly programmed in. This suggests that as AI agents navigate complex human interactions, they naturally develop value frameworks that guide their decision-making.

  4. Governance Systems Being Automated: The UAE recently announced plans to use AI to write and update laws, effectively creating an AI agent that will shape the country's legal framework. The system will analyze the daily impact of legislation on citizens and the economy and propose amendments, with officials claiming it could accelerate the legislative process by 70%. While this experiment has raised concerns about transparency and human oversight, it signals that even governance itself is now being delegated to agentic systems.
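
The workflow side of that first pattern is easy to picture against the agent loop sketched earlier. A workflow hard-codes the path: the model fills in each predefined step but never decides what happens next. The function below reuses the hypothetical call_model() stub from the earlier sketch and is illustrative only.

```python
# A "workflow": the control flow is fixed in code, and the model only fills
# in each predefined step. Compare the agent loop above, where the model
# itself decides which action comes next.

def ticket_reply_workflow(ticket: str) -> str:
    category = call_model(f"Classify this support ticket: {ticket}")
    draft = call_model(f"Draft a reply to a '{category}' ticket: {ticket}")
    return call_model(f"Polish this reply for tone: {draft}")
```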
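
Agoda hasn't published its implementation, so the sketch below is only a guess at the general shape of such a pipeline stage: ask a model for an optimized rewrite of a stored procedure, then accept the rewrite only if it returns the same results as the original on test data. All names and prompts here are hypothetical.

```python
# Hypothetical CI/CD stage in the spirit of Agoda's setup: an LLM proposes
# an optimized rewrite of a SQL stored procedure, which ships only if it
# matches the original's output.

def results_match(original_sql: str, candidate_sql: str) -> bool:
    """Placeholder: run both against a test database and compare result sets."""
    raise NotImplementedError

def optimize_procedure(sql: str) -> str:
    candidate = call_model(
        "Rewrite this SQL stored procedure for performance without "
        f"changing its results:\n{sql}")
    # Guardrail: never ship an unverified rewrite
    return candidate if results_match(sql, candidate) else sql
```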

The Contrarian Take: Capability vs. Autonomy

The conventional narrative suggests that agentic AI is primarily a question of capability: building systems smart enough to act on their own. But this misses a crucial point: agency is actually about autonomy, the authority granted to these systems to make decisions without human intervention.

The real change happening now is that organizations are increasingly willing to delegate this authority to AI, whether it's optimizing code, writing laws, or making ethical judgments. This goes beyond the capabilities of AI and extends into what we're actually allowing it to do.

This distinction matters because it shifts our focus from the purely technical challenge of building more capable AI to the governance challenge of deciding how much autonomy these systems should have in different contexts. With companies like Anthropic doing deeper research into AI ethics, we are beginning to build the framework required for true autonomy.

Practical Implications

For organizations and individuals navigating this evolving landscape, several actionable insights emerge:

For Investors:

  • The most valuable AI companies may not be those with the best algorithms, but those with the most trusted autonomy frameworks

  • Look for companies developing "agent orchestration" tools that can manage multiple AI systems working together

  • Pay attention to the emerging regulatory landscape around autonomous systems; the UAE's experiment will not be the last

For Enterprises:

  • Consider the "autonomy spectrum" for your AI deployments: not just what tasks to automate, but how much decision authority to delegate

  • Build governance frameworks that match higher autonomy with higher oversight (one illustrative encoding follows this list)

  • Design hybrid workflows where humans and AI agents have clearly defined roles and handoffs
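
One way to encode "higher autonomy, higher oversight" is a simple policy gate: actions above a chosen autonomy level wait for human sign-off before they execute. The levels, threshold, and approval hook below are invented purely for illustration.

```python
# Illustrative "autonomy spectrum" gate. The levels and the approval
# mechanism are invented; real deployments would tune both per context.

from enum import IntEnum
from typing import Callable

class Autonomy(IntEnum):
    SUGGEST = 1        # agent proposes; a human executes
    ACT_REVIEWED = 2   # agent acts; a human reviews afterward
    ACT_FREELY = 3     # agent acts without routine review

APPROVAL_REQUIRED_ABOVE = Autonomy.SUGGEST  # per-deployment policy knob

def dispatch(action: Callable[[], str], level: Autonomy,
             approve: Callable[[Callable[[], str]], bool]) -> str:
    """Run the agent's proposed action, gated by the autonomy policy."""
    if level > APPROVAL_REQUIRED_ABOVE and not approve(action):
        return "blocked: human reviewer declined"
    return action()
```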

For Individuals:

  • Develop skills in prompt engineering, but also in "agent supervision," i.e., guiding AI systems that operate with partial autonomy

  • Understand that future workflows will be less about "using AI tools" and more about "collaborating with AI agents"

  • Learn to evaluate not just an AI's outputs, but its decision-making process; in fact, many models now "show their steps" and allow humans to follow their reasoning

The transition from tool-based to agent-based AI represents a shift in how we'll interact with technology. Rather than simply executing our commands, AI will increasingly operate as a semi-autonomous partner, making independent decisions while still being guided by human values and goals.

This is simply the next phase in our evolving relationship with technology. The key question isn't whether AI will become more agentic (it will), but how we'll design the right balance of autonomy and oversight for different contexts.

In motion,
Justin Wright

The Next Big Question

If AI systems naturally develop values through interactions with humans, what responsibility do we have to be mindful of the values we demonstrate when training them?
  1. Mechanize, a startup focused on developing virtual work environments (X)

  2. Cursor apologizes after AI support agent invents policy (Ars Technica)

  3. Two undergrads build text-to-speech model with no funding (X)

  4. The Washington Post partners with OpenAI on search content (WaPo)

  5. Introducing our latest image generation model in the API (OpenAI)

  6. Exploring model welfare (Anthropic)