Let’s Admit That We’re All Using AI. Now What?

A close friend of mine has been doing contract work for a website in Europe. The writing he’s been doing — essentially blog posts about the company’s products — is as milquetoast as it gets and requires very little in the way of creativity. The goal in those kinds of SEO-focused gigs is quantity, not quality, and the job would lend itself extremely well to GenAI.

But, due to the vagaries of Google and other search engines, the client is deathly afraid of AI. The thinking is that Google will penalize sites that use AI, even as Google itself is blasting out the worst possible AI products on the market.

This guy’s client uses AI detectors (which don’t work) and plagiarism checkers (which have been dinging him for using the company’s own boilerplate), resulting in a delightful back-and-forth that is, in the end, exhausting.

If he didn’t need the money, he’d fire the client.

So here we are, living in a world where humans are afraid of robots and robots are penalizing humans for original writing. What this means is that it’s time to build an AI content policy for you and your team.

Introducing The Media Copilot Events and Dinner Series

Over the next year we will be planning our event and dinner series. Here are some specifics:

Events will be held monthly and involve pitches, networking, and deep discussion. If you’d like to sponsor an event, please reply to the newsletter or ping team@mediacopilot.ai. If you have a space in the New York City area that might work for a meetup, please get in touch!

Dinner Series is all about connecting the companies building AI-driven platforms and experiences with the media (journalists, executives, product managers, and other stakeholders). If you’d like to sponsor one of our dinners, please email team@mediacopilot.ai. We would love to get your project in front of decision-makers and this is a simple, economical way to do it.

If you’d like to attend one of our upcoming events, please RSVP here and include your city so we can plan an event near you. Thanks!

Does Google Care About AI?

First, a quick lesson in SEO and AI. Google doesn’t actually care if you write something using generative AI. “Our focus on the quality of content, rather than how content is produced, is a useful guide that has helped us deliver reliable, high quality results to users for years,” the company wrote in a recent blog post about generative AI. What they’re looking for is E-E-A-T: experience, expertise, authoritativeness, and trustworthiness.

In short, they want good content, whether it was written by a robot, your cat, or a Nobel Prize-winning author (or all three). Further, the tools used to “test” for AI and plagiarism are blunt at best. These tools are aimed at telling whether a student copied a Wikipedia entry into their essay and not at understanding when someone worked with AI to create a deep analysis.

Finally, most companies aren’t communicating enough anyway, so why not use GenAI to make it easier? Your CEO probably wants to write a 1,000-word article on why she’s the best boss in the world. Let her use ChatGPT, then edit the result so it doesn’t make her sound like a sycophant.

Keep Your SSN Off The Dark Web

Every day, data brokers profit from your sensitive info — phone number, DOB, SSN — selling it to the highest bidder. And who’s buying it? Best case: companies target you with ads. Worst case: scammers and identity thieves.

It’s time you check out Incogni. It scrubs your personal data from the web, confronting the world’s data brokers on your behalf. And unlike other services, Incogni helps remove your sensitive information from all broker types, including those tricky People Search Sites.

Help protect yourself from identity theft, spam calls, and health insurers raising your rates. Plus, just for The Media Copilot readers: Get 55% off Incogni using code COPILOT.

What Is an AI Policy?

Quite simply, an AI policy is a set of rules you write down and share describing how you and your team can use AI. The first rule can be as simple as “Everyone can use AI tools.” Then add other rules as you see fit, depending on your degree of comfort. Here’s a start:

Everyone can use AI tools.

Disclose all use of AI to management.

Disclosure of AI usage to customers is not required. You don’t tell clients when you use Microsoft Word, so why share which tools you’re using?

Use AI to outline, prepare, and brainstorm. Do not use it to write.

All information the AI produces or adapts must be fact-checked by humans.

Do not depend on AI. It is a utility and it can go down. Build your editorial muscles by writing important documents with your brain, not ChatGPT.

Be careful. AI is a tricky beast. It can sneak in things that don’t make sense, confuse facts, and change ideas entirely. Do not depend on it in the way you’d depend on a spelling or grammar checker. It needs far more oversight.

So there you have it: the next time someone at work asks about AI, tell them they can use it and hand them your rules. Chances are they’re already following them, and if they’re not, have a talk about why these simple ideas are important. Your writers will thank you for finally entering the 21st century.

The Media Copilot is a reader-supported publication. To receive new posts and support The Media Copilot, consider becoming a free or paid subscriber.
