The Media Copilot

The AI shift to agents is beginning, and newsrooms aren’t ready

Agents promise acceleration in knowledge work. Media can unlock it only with governance: provenance, policy, and traceable decisions.

Agents don’t just produce outputs — they make choices. If newsrooms want the speed, they need governance: traceable context, enforceable policy, and an auditable decision trail. (Credit: ChatGPT)
Jan 27, 2026

By Pete Pachal

If you spend enough time in earnings calls and pitch decks, you’ll hear the AI endgame described in grand terms: artificial general intelligence (AGI), superintelligence, or—if you’re really nerdy—recursive self-improving AI. Out here in the real world, though, the ask is simpler and more familiar: the Enterprise computer from Star Trek, a conversational assistant that doesn’t just “get” you, but can actually take actions.


Key Takeaways

  • AI agents are arriving in newsrooms, but most outlets lack governance.
  • Newsrooms need provenance tracking, written policy, and decision logs.
  • Without governance, agentic AI’s speed comes with accountability risk.

Over the past few months, that vision has become product strategy. At CES, I sat in on Lenovo’s keynote, where the company introduced Qira, an always-on AI that will ship as a built-in layer across its devices going forward. The important shift, as I wrote at the time, is that Qira isn’t positioned as a single, self-contained brain. Instead, it’s an “orchestrator of agents,” routing users to other services—ChatGPT, Perplexity, or others—based on what the request calls for.

That’s a move Lenovo can make because it isn’t interested in competing with those other services. Qira is an on-device facilitator, not a do-everything AI. And Apple—after a long stretch of trying to keep everything in-house—appears to be moving in the same direction, having recently announced a multi-year deal to integrate Google’s Gemini models into a revamped Siri later this year.

Apple has colossally overpromised and underdelivered on AI over the past two years, in part because it didn’t want partners owning critical parts of the experience. But once you accept the orchestrator model—and once the plumbing for AIs talking to each other starts to feel real—the fear of “helping competitors” looks less like strategy and more like self-sabotage.

Assistants become actors

The other piece clicking into place is agentic software—tools that don’t just answer, but act. Claude Code and Claude Coworker are the obvious examples right now, and the intensity of the hype has been telling. Yes, they can write code and spin up websites. But that’s not the real story. The real story is that they behave like agents: you give them instructions, they translate those into a plan, and they execute, often with only light supervision.
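For readers who want to see the shape of that loop, here is a deliberately toy sketch in Python. To be clear, this is not how Claude Code or Coworker actually works; the planner and executor are stand-ins, and every name in it is invented for illustration.

```python
# A generic sketch of the plan-then-execute loop described above: instructions
# in, plan out, actions executed under light supervision. All names are
# hypothetical; this illustrates the pattern, not any vendor's implementation.

def plan(instruction: str) -> list[str]:
    """Stand-in planner: a real agent would ask a model to decompose the task."""
    return [f"step {i + 1} toward: {instruction}" for i in range(3)]

def execute(step: str) -> str:
    """Stand-in executor: a real agent would run a tool, edit a file, etc."""
    return f"done: {step}"

def run_agent(instruction: str, supervised: bool = True) -> None:
    for step in plan(instruction):
        # "Light supervision" in miniature: a human can wave each step through
        # or veto it before anything happens.
        if supervised and input(f"Run '{step}'? [y/n] ").lower() != "y":
            print(f"skipped: {step}")
            continue
        print(execute(step))

run_agent("summarize this morning's wire copy", supervised=False)
```

The `supervised` flag is the part worth noticing: flip it off and the agent runs end to end on its own, which is exactly where the convenience and the risk both live.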

Whether that capability lives inside an OS layer (Qira/Siri) or arrives as a desktop companion (Coworker), the consequence is the same: decision-making shifts closer to the surface of work—right at the interface people actually use.

Plenty of people on X describe Coworker less like “prompting” and more like collaborating with a colleague. That’s the appeal. It’s also the risk. Anthropic is already warning users about safety issues, such as ambiguous instructions leading to file deletion, because once the model can take actions, mistakes stop being theoretical.

Taken together, these trends point in one direction. Before long, a growing share of device interaction will look more like delegation: Tell the agent what outcome you want, and let it do the heavy lifting. No apps. No browser. Just the answer, the output, the outcome. It’s the Enterprise computer vision, except it’s in millions of pockets instead of the bridge of a starship.

For media, brands, and anyone who lives on information, the implications are immense. In my Qira piece, I focused on the coming battle for context in the information space. But agent-based work doesn’t just change how people find information—it changes how information work gets done, especially journalism. Dropping a decision-making computer into a newsroom workflow could be a massive accelerant. It also opens a nest of problems around attribution, access, and the handling of sensitive data.

If it can’t be traced, it can’t be trusted

The glib response is: fine, don’t use it. But abstinence isn’t a strategy. Tools reshape workplaces whether you participate or not, and the organizations that learn, deploy, and master agentic systems will outpace the ones that sit it out. As these tools spread, the advantage won’t go to the boldest adopters. It will go to the ones who implement them safely and securely.

Newsrooms have an especially sharp edge here, because information is their business. We’ve already watched adoption stall around hallucinations. The tendency of AI systems to fabricate plausible nonsense hasn’t gone away, and that alone has kept many journalism organizations from touching AI in any workflow that could affect content.

But workplace agents introduce a different kind of hazard, one less obvious and more structural. The agent may not be “writing the story,” but it is making consequential choices: which sources to consult, which services to use for a task, which internal knowledge to bring into the room, and how to weigh it all when responding to a request. If an agent is going to make those calls inside a newsroom, it can’t be a black box.

Even if the agent never produces an outright error, the logic behind its choices still matters. Search is the simplest analogy: when Google made a deal with Reddit, Reddit started surfacing at the top of many more search results. That shift didn’t just reorder links—it changed where people got their information, at scale, in a market where Google holds an effective monopoly on search.

An agent embedded in your device or workplace can become a similar monopoly, only more intimate, because it sits inside the workflow. So the path it takes through a tree of decisions can’t be opaque. Yes, it’s easy to imagine guardrails: nudge workers toward sanctioned services and company software; enforce style guides and policy in the actions it takes. The weirdness arrives in everything the guardrails don’t cover: those messy in-between steps where the agent decides what context to rely on in order to act.
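To make the guardrail idea concrete, here is a minimal sketch in Python of what a sanctioned-services policy check could look like. Every name in it (the services, the data classes, the policy fields) is a hypothetical illustration, not any vendor's real API.

```python
# A minimal sketch of a "sanctioned services" guardrail. Service names, data
# classes, and policy fields are all hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Hypothetical newsroom policy an agent checks before acting."""
    sanctioned_services: set[str] = field(
        default_factory=lambda: {"company-cms", "approved-search", "style-checker"}
    )
    blocked_data_classes: set[str] = field(
        default_factory=lambda: {"unpublished-source-notes", "legal-hold"}
    )

    def allows(self, service: str, data_class: str) -> tuple[bool, str]:
        """Return (allowed, reason) so every refusal is as explainable as every approval."""
        if service not in self.sanctioned_services:
            return False, f"service '{service}' is not on the sanctioned list"
        if data_class in self.blocked_data_classes:
            return False, f"data class '{data_class}' may not leave the newsroom"
        return True, "within policy"

policy = AgentPolicy()
print(policy.allows("approved-search", "published-archive"))  # (True, 'within policy')
print(policy.allows("random-web-tool", "published-archive"))  # (False, ...)
```

Note what an allowlist like this cannot reach: the messy in-between steps, where the agent decides which context to rely on before it ever calls a service.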


Governance is the unlock

Seamlessness is the whole point of an agent. But seamlessness without accountability is a trap. If the agent is pulling context from the web, the provenance needs to be visible. If it’s leaning on third-party services, that needs to be traceable. And when a user asks, plainly, why it took a specific action, the system should be able to explain itself, with a paper trail the user can follow as far as they want.
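What might that paper trail look like in practice? Here is a minimal sketch, again in Python, of a decision log an agent could append to as it works. The field names (step, source, rationale) are illustrative assumptions rather than any established standard, and the URL is a placeholder.

```python
# A minimal sketch of an auditable decision trail, assuming the agent logs
# every consequential step. Field names are illustrative, not a standard.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    step: str       # what the agent did, e.g. "fetched context"
    source: str     # where the material came from: the provenance
    rationale: str  # the agent's stated reason for this choice
    timestamp: str  # when it happened (UTC)

def log_decision(trail: list, step: str, source: str, rationale: str) -> None:
    """Append one traceable step so a user can later ask: why did you do that?"""
    trail.append(DecisionRecord(
        step=step,
        source=source,
        rationale=rationale,
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))

trail: list[DecisionRecord] = []
log_decision(trail, "fetched context", "https://example.com/q3-report",
             "the request asked for the latest quarterly figures")
log_decision(trail, "called external service", "approved-search",
             "sanctioned by policy for web lookups")

# The paper trail a user can follow as far as they want:
print(json.dumps([asdict(r) for r in trail], indent=2))
```

The design choice that matters here is capturing provenance and reasoning at the moment of each action, not reconstructing them after something has already gone wrong.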

Just as important: there has to be a mechanism to correct the agent when its reasoning goes off the rails, including when bias sneaks into the chain. Disclaimers won’t cut it for agents. Training people to use them and audit their own use should be standard operating procedure.

That’s what governance looks like in practice. Agents like Qira and Claude Coworker might finally deliver on the dream of true AI assistants. But whatever they unlock will demand an equal amount of discipline. If the past few years have taught us anything, it’s that AI can do incredible things—and still can’t be trusted to always get it right. To move into the agent era without losing the plot, organizations will need to heed an old adage: trust, but verify.

A version of this column first appeared in Fast Company.

Contributors

  • Pete Pachal: Author

    Pete Pachal is the founder of The Media Copilot. In addition to producing the site’s newsletter and podcast, he teaches courses on how journalists and communications professionals can apply AI tools to their work. Pete has a long career in journalism, previously holding senior roles in global newsrooms such as CoinDesk and Mashable. He’s appeared on Fox Business, CNN, and The Today Show as a thought leader in tech and AI. Pete also puts his encyclopedic knowledge of Doctor Who to good use on the popular podcast Pull To Open.
