AI Avatars as a Journalist’s Shield

Could AI avatars become a go-to solution for risky reporting? Credit: Midjourney

Journalists sometimes keep sources anonymous to protect them from potential reprisals. Less common is for the journalists themselves to remain anonymous in order to report sensitive information. But it happens, especially when reporting from countries with repressive governments.

The problem then becomes one of trust. While it’s certainly understandable for a reporter to put a mask on when the threat to their life is real, how do you know if you can believe what they’re saying if there’s no way to tie it to who they are? Add to that the lack of one-to-one connection: there’s no byline to follow, there’s no Twitter handle to reply to — there’s no there there.

With AI, now you can put something there. Some reporters covering the political tumult in Venezuela this summer have been using AI avatars to report their stories so they can remain anonymous. Created by the Colombia-based organization Connectas, the male and female avatars — which have Spanish names that translate to “friend” and “the girl” — relate their stories via short videos shared on social media platforms like TikTok and Instagram under the label “Operation Retweet.”


The videos have been coming since the contested election in July, and Connectas began putting out English versions a few days ago. The avatars don’t always directly declare that they’re AI, but the accompanying captions disclose their synthetic nature — which will be fairly obvious to anyone looking closely, but on the whole the avatars are pretty good imitations. If you’re casually scrolling, you might not immediately catch that they’re AI.

Giving a synthetic face to anonymous reporting doesn’t entirely solve the trust problem, but it goes a long way toward creating a more tangible and ongoing connection between an audience and a story that may be dangerous to even talk about. Now there’s someone to follow, a voice to listen for, and a familiar face in your feed that may make you stop and watch.

It’s also very similar to the practice of pseudonymity, common in online communities around crypto and gaming. At CoinDesk, where I used to work, the ethics policy holds strong respect for pseudonymity, since the person or persons behind a pseudonym typically invest a lot of social capital in that invented identity. That doesn’t mean all pseudonyms are trustworthy, but the opposite is also true: just because a name is fake doesn’t mean it can’t be a source of credible information.

When the teaser for the AI-driven TV network Channel 1 dropped last year, I was skeptical that people would want to follow AI avatars to get their news. There’s undoubtedly a trust and familiarity component that contributes to the popularity of TV news anchors and reporters. You can get your information from anywhere, but the reason you watch Wolf Blitzer or Jesse Watters is that perceived connection. I didn’t believe people would respond the same way to on-air personalities who were wholly synthetic.

While I still think that point of view is generally correct, Operation Retweet shows the idea has its place. For reporting that is important and in demand, but is also dangerous or otherwise difficult to attribute, attaching it to an AI avatar will embody the stories in a way that can connect with audiences. Instead of wisps of anonymous information, now you have something solid.

Certainly, using AI as the face of your reporting won’t ever be a best practice in journalism. Standing by your words is a fundamental value in the profession, and attaching your name to stories is part of that. But for the rare cases where you have no choice but to sever that connection, AI’s synthetic solution might be the next best thing.


The Chatbox

All the AI news that matters to media

Maybe Try Some Tennis Shoes with that AI Summary: We have more information about how Perplexity’s ad play is going to work after Digiday got hold of the pitch deck the company is shopping around to advertisers. The AI-powered “answer engine” has made no secret that it plans to pivot to an ad-based model, which its Perplexity Publishers’ Program — announced earlier in the summer — is a part of. The deck describes multiple ad units, including sponsored related questions appearing in the follow-up queries that Perplexity suggests at the end of answers.

With chatbots essentially becoming commodities, it’s been clear for a while that AI platforms would eventually adopt an ad model, and it’s probably just a matter of time before the same thing happens to OpenAI’s search engine, SearchGPT (which is still a prototype). The key part of getting this right is how that model interacts with the answers you get, and Perplexity appears to have good instincts on this, with Chief Business Officer Dmitry Shevelenko saying there would be no pay-to-play, which echoes what he told me back in July.

Concern about deepfakes is on the rise as election season heats up. Credit: DALL-E

California Takes Aim at Deepfakes: Election season is in full swing, and so are concerns about deepfakes and AI-driven disinformation injecting poison into the electoral process. The California legislature is sending a series of bills to Governor Gavin Newsom’s desk that deal with AI, according to the AP, including one that would ban deepfakes related to elections and require social networks to publicly disclose when ads contain material altered by AI.

It’s all very well-meaning stuff, though the place of deepfakes with regard to political speech gets gray very quickly. An outright ban would also sweep up material clearly labeled as AI-generated, as well as images created as satire. Disclosure is certainly less controversial, and perhaps the new laws, which Newsom has until Sept. 30 to sign, will speed the adoption of the C2PA standard, which would make any image’s origin and alterations available with a click but has been slow to catch on. We can hope.

Alexa, Tell Me the News as Only You Can: Amazon Alexa may be getting its long-awaited AI upgrade this October, per Reuters. I can’t claim to be unbiased — I have Alexa devices all over my house — and my family has been increasingly frustrated at the intelligence gap between Amazon’s assistant and LLM-powered chatbots like ChatGPT. My hope is that I never hear the words, “Here’s something I found on the web…” again, but I wonder how Alexa will summarize current events. Its current Flash Briefing model was always flawed and, like many platform solutions for news, was dependent on relationships with publishers. How will Alexa decide whose news to prioritize, and how will it attribute the information? There’s lots to think through when you’re not just relaying publisher-provided audio, but there’s a Perplexity-like opportunity here for audio news if Amazon is willing to put its thinking cap on.

Worrying About AI in Journalism: Poynter published a long list of aspirations, worries, and practical tips on using AI in journalism from a panel of experts (my invitation must have gone to spam). There are the usual worries about disinformation and pink slime, but I found this thought from Katie Sanders of PolitiFact to be particularly resonant: “I worry that some journalists will dismiss AI out of hand, and that will prevent everyday Americans from becoming more familiar with it and that will just promote more fear. I think it’s incumbent on journalists to be familiar enough with the technology that we are decoding how it works.”

This reflects what journalists in many major newsrooms tell me: that, outside of high-level projects, there’s still a lot of skepticism on an individual level about even touching the technology, and several newsrooms still don’t permit much in the way of casual use outside of general research. That tells me there needs to be more information about how to use AI in journalism in ways that are helpful, preserve privacy, and don’t compromise integrity. In other words, there’s still a need for The Media Copilot, and I’ve got projects in the works that will address exactly that. Stay tuned.
