
Shining a Light on Media Bias With AI

Credit: DALL-E

Bias in our society always seems to be on the rise: bias in our institutions, bias in our media, bias in AI. There doesn’t seem to be much we can do about it, or even agreement on whether we want to. But few would argue against increasing awareness of it, and AI may be able to help with that.

More on that in a minute. Lots going on this week with respect to the media and AI, in particular the breaking news from Axios that several publications — including the Financial Times, the Atlantic and Fortune — have signed deals with ProRata.ai, whose platform is meant to enable “fair compensation and credit for content owners in the age of AI.”

I’ll have more to say about that and other recent headlines in Thursday’s newsletter, but for now, I’ve got a couple of quick updates: This week, I was excited to be a guest on the most recent episode of the Better Radio Websites podcast. With host Jim Sherwood, I explored how AI tools can empower a small team to do the work of a bigger one, and why adopting the right mindset about AI matters before you even start. Along the way, I also picked winners in the whole ChatGPT-Claude forever war, so it might be worth a listen just for that.

Speaking of podcasts, my recent interview with Perplexity’s Dmitry Shevelenko has officially become The Media Copilot’s most successful podcast to date. If you missed it, you can check it out right here on Substack, on our YouTube channel, or wherever you find podcasts.

Finally, don’t forget that the next cohorts for The Media Copilot’s AI training classes begin soon. AI Quick Start, our 1-hour basics class, is happening Aug. 22, and AI Fundamentals for Marketers, Media, and PR arrives on Sept. 4. This being summer, it’s the perfect time to upskill yourself with AI tools specific to your work so you can hit the ground running on cybernetic legs in September. Reserve your spot today, and don’t forget the discount code AIHEAT for 50% off at checkout.

One more thing, then let’s dive in.

Keep Your SSN Off The Dark Web

Every day, data brokers profit from your sensitive info — phone number, DOB, SSN — selling it to the highest bidder. And who’s buying it? Best case: companies target you with ads. Worst case: scammers and identity thieves.

It’s time you check out Incogni. It scrubs your personal data from the web, confronting the world’s data brokers on your behalf. And unlike other services, Incogni helps remove your sensitive information from all broker types, including those tricky People Search Sites.

Help protect yourself from identity theft, spam calls, and health insurers raising your rates. Plus, just for The Media Copilot readers: Get 55% off Incogni using code COPILOT.


Can AI Help Uncover Bias in Media?

The problem of media bias is a particularly vexing one. While the idea of a fully objective media, with zero bias, is obviously a fantasy, our current environment often feels like it’s embraced the other extreme, with a seemingly endless supply of slanted stories that present facts through thick ideological lenses, both left and right.

One could make the case that we asked for this — that the tangled incentives of media (and social media), where outrage and engagement drive clicks, led us to a place where many, if not most, media brands today have some kind of reputation for bias. But it’s harder to make the case that this is somehow good for the average news consumer. Decoding bias involves considering how a particular story supports some kind of narrative, whether the language used aligns with a point of view, the reputation of the publication, and sometimes the record of the individual author.

Most would agree that process can be exhausting. But it also sounds like something an algorithm could do. That appears to be the thinking behind this AI-powered bias checker, unveiled on Tuesday by AllSides, a website that analyzes bias in media. If you suspect an article you read is slanted to favor either the left or the right, you can paste the URL into the tool, and it’ll tell you which way it leans and how much. It’ll also give you a helpful summary of the precise signals it used to arrive at its conclusion.

How the AllSides bias checker works.
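To make the idea concrete, here’s a toy sketch of what a scorer like this might do under the hood. The cue lists and scoring formula are purely illustrative — AllSides hasn’t published its method, and a real checker would lean on an LLM rather than keyword counts — but it shows the basic shape of the task: look for loaded-language signals, then map the balance to a lean score.

```python
# Toy bias scorer: counts loaded-language cues on each side and maps the
# balance to a lean score in [-1, 1]. The cue lists are made up for this
# example; they are not AllSides' actual signals.

LEFT_CUES = {"far-right", "extremist", "corporate greed", "climate crisis"}
RIGHT_CUES = {"radical left", "woke", "open borders", "mainstream media"}

def bias_score(text: str) -> float:
    """Return a lean score: negative = leans left, positive = leans right."""
    t = text.lower()
    left = sum(t.count(cue) for cue in LEFT_CUES)
    right = sum(t.count(cue) for cue in RIGHT_CUES)
    total = left + right
    if total == 0:
        return 0.0  # no loaded-language signals found
    return (right - left) / total
```

A sentence stuffed with right-coded cues scores 1.0, one full of left-coded cues scores -1.0, and neutral copy scores 0.0 — a crude stand-in for the direction-and-magnitude readout the real tool gives you.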

Is this helpful? As a tool for readers, it may have some utility. But anyone deliberately using it is probably already pretty media-savvy, so AllSides’ bias checker has a few steps to go before it puts a dent in the overall problem. A logical next step would be to create a Chrome extension. After installation, your browser could alert you to bias automatically — on any article page you happen to land on. Maybe from there certain browsers eventually start to include the feature as an opt-in setting.


What LLMs Are Good At

But the real value in this exercise today is to show that AI is actually quite good at this. Analyzing text, detecting patterns within it (some potentially subtle), and then producing an overall assessment against a set of rules — that’s at the core of what large language models (LLMs) do.
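That “assessment against a set of rules” pattern is easy to picture as a prompt. The rubric and output format below are invented for illustration — no real bias checker’s prompt is public — but this is roughly how you’d hand an LLM a scoring rubric and an article in one shot.

```python
# Illustrative prompt template: a scoring rubric plus the article text.
# The rubric, signals, and JSON schema are assumptions for this sketch,
# not any real product's prompt.

RUBRIC = """You are a media-bias analyst. Rate the article on each signal:
1. Loaded language (emotionally charged word choices)
2. Story framing (which facts lead, which are buried)
3. Source balance (who gets quoted, and who doesn't)
Return JSON: {"lean": "left"|"center"|"right", "strength": 1-5, "signals": [...]}"""

def build_bias_prompt(article_text: str) -> str:
    """Combine the rubric and the article into a single prompt string."""
    return f"{RUBRIC}\n\nArticle:\n{article_text}"
```

The interesting work is all in the rubric: whoever writes those rules decides what counts as bias, which is exactly the point about AllSides below.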

Of course, it matters who is creating that set of rules. AllSides, which describes itself as “a public benefit corporation,” has been analyzing and rating the bias of media sites since 2012. You might quibble with any specific rating, but its media bias chart that maps where the top publications (and their opinion sections) land on a continuum of right to left looks generally accurate.

This isn’t the first time someone has thought to use AI to uncover media bias. A similar partnership was announced last fall between Seekr, a company that aspires to create “trustworthy” AI, and the now-defunct publication The Messenger. The effort intended to surface the bias in articles, but it never got off the ground before The Messenger’s now-infamous flameout. Still, turning a bias checker on your own work was an arguably brave move, one suited to a young publication trying to make a name for itself.

For established media, there’s probably less of a desire to highlight their own biases, either because they’re already explicit, or at least generally assumed. That’s why a media bias checker likely won’t gain traction as an idea: There may be general agreement that bias exists at a publication, but there isn’t agreement on whether that’s a bad thing.

Addressing Bias From Within

But let’s run with the idea. If, in theory, a publication wanted to eliminate bias in its news or storytelling, the place to integrate a bias checker isn’t at the reader level — it should be part of the news production process. Once a story is written and uploaded into the CMS, it could run an automatic bias check. If the story falls outside of a certain range, it would be kicked back to the reporter and editor, probably with suggestions from the LLM to make it more balanced. For opinion, where bias is encouraged, the checker could simply count the number of pieces that lean one way vs. the other. If the count skews too far in one direction, it would alert the opinion editor to commission more pieces from the other side.
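The workflow above can be sketched in a few lines. Everything here is an assumption for illustration — the threshold, the `Story` shape, and the idea that a scorer already exists — but it shows both gates: kick slanted news stories back to the desk, and nudge the opinion editor when the section’s mix skews.

```python
# Sketch of the CMS-level workflow described above. The 0.3 threshold and
# the score scale (-1 = left, 1 = right) are assumptions, not a standard;
# the score itself would come from whatever LLM-backed checker is plugged in.

from dataclasses import dataclass, field

BIAS_LIMIT = 0.3  # stories scoring outside [-0.3, 0.3] go back for revision

@dataclass
class Story:
    slug: str
    score: float                # lean score from the bias checker
    notes: list[str] = field(default_factory=list)

def review_on_upload(story: Story) -> str:
    """Gate a news story at CMS upload time based on its bias score."""
    if abs(story.score) <= BIAS_LIMIT:
        return "publish"
    side = "left" if story.score < 0 else "right"
    story.notes.append(f"Leans {side} ({story.score:+.2f}); please rebalance.")
    return "return_to_editor"

def opinion_balance(scores: list[float]) -> str:
    """For opinion, just count which way recent pieces lean."""
    left = sum(s < 0 for s in scores)
    right = sum(s > 0 for s in scores)
    if abs(left - right) <= 1:
        return "balanced"
    return "commission_more_" + ("right" if left > right else "left")
```

Note the two checks deliberately behave differently: news gets gated piece by piece, while opinion is only balanced in aggregate, since bias in an individual op-ed is the whole point.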

This sounds straightforward in theory, but it would be extremely thorny in practice. Publications interested in implementing a bias checker might face resistance from their staff, who may not be comfortable with an LLM giving them editorial feedback. Moreover, many staffers may not see any problem with a publication having an overall bias. And they might be right — telling an audience what they want to hear is arguably a reliable editorial strategy, whether the publication is honest about its slant or not, at least in today’s click-driven ecosystem.

Still, AI’s power to analyze language brings a new tool that may shine some light onto the thorny issue of bias in the media. It’ll take more than a single bias checker to untangle the problem, or even make clear that we should want to. But anything that might be a step towards more trust in media — currently historically low and getting lower — is probably worth a try.

The Media Copilot is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
