Meta’s Image-Creation Tool Shows How the AI Sausage Is Made

Image prompt: imagine snoopy flying a biplane in 1757

“imagine snoopy flying a biplane in 1757”

That’s what I asked Meta’s AI to imagine for me this morning, and the results ranged from an OK-looking Snoopy tooling through the sky to whatever hellish, Boschian vision it created, above.

The Media Copilot is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

What’s most interesting, however, is how Meta shows us what the model is “thinking.” In fact, the company is spending millions to prove that its model is faster and more capable than the rest.

Meta’s AI tool does a lot of the work right out in the open. As you prompt it, it begins showing images immediately. When you ask for a donkey, it shows you a donkey and then changes the donkey every few seconds until you add to the prompt. When I tried it, it drew the donkey, added a funny hat, and then gave the old boy a pipe and finally a stylized Nintendo controller. It won’t show you warplanes, however, for whatever reason. I couldn’t put Zuckerberg in a WWI biplane, for example.

The interstitial images are the most interesting part. Watching the AI weave through its models, bopping from image to image until it homes in on your exact request, is quite clever and truly fascinating. As much as I hate AI-generated art — I no longer use it for these newsletters, preferring human-shot images (although, arguably, AI art is getting difficult to distinguish from photography) — this little gimmick shows us how the AI wends its way through the corpus, adding and subtracting visual information as it goes.

Take our goat friend here. You’ll notice that the generator moves from a standard goat scene to something far weirder with the addition of one word. This is the model branching off into an entirely new sphere of imagery as you type. The first goat image is bog-standard, something that probably existed on a website somewhere, but the rest of the images are generated on the fly.

Doing this requires massive amounts of computing power, and Meta isn’t pulling any punches here. It’s honestly wild how much they’re throwing at this “problem” just so people can make goat images. Showing the process is very expensive — the equivalent of running a massive server farm just to produce a single beep.

This is the biggest problem with AI right now: it’s really good at doing stuff that you may or may not really care about. Nobody is going to lose their job because they can’t create Snoopy in a biplane really quickly, nor will they lose their job if they can’t write 3,000 words on the future of ATMs in Indonesia. But here’s the problem: according to OpenAI’s billing system, it costs me about 48 cents to produce 3,000 words of SEO-ready copy. The words, arguably, are mostly garbage and require editing, but the fact remains that I have content where before there was none.
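That 48-cent figure is easy to sanity-check with back-of-the-envelope math. The sketch below is a hypothetical estimate, not OpenAI’s actual billing logic: the per-token prices, the tokens-per-word ratio, and the prompt size are all assumed values for illustration (real rates vary by model and change over time).

```python
# Rough cost model for generating an article with a token-billed API.
# All constants below are ASSUMPTIONS for illustration, not real prices.
INPUT_PRICE_PER_1K = 0.01    # assumed $ per 1,000 prompt tokens
OUTPUT_PRICE_PER_1K = 0.03   # assumed $ per 1,000 completion tokens
TOKENS_PER_WORD = 1.33       # rough rule of thumb for English text

def article_cost(words, prompt_tokens=500):
    """Estimate the dollar cost of generating an article of `words` words."""
    output_tokens = words * TOKENS_PER_WORD
    prompt_cost = (prompt_tokens / 1000) * INPUT_PRICE_PER_1K
    output_cost = (output_tokens / 1000) * OUTPUT_PRICE_PER_1K
    return prompt_cost + output_cost

print(f"Estimated cost for 3,000 words: ${article_cost(3000):.2f}")
```

Under these assumed prices a single 3,000-word draft comes out to pennies; a few rounds of regeneration and editing passes can plausibly get you to the 48-cent neighborhood. The point is less the exact number than the order of magnitude: the marginal cost of “content” is now measured in cents.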

Knowing how that content is made, even by simply watching our goat generator, we see how intensely Meta and the rest want to grab that 48 cents. They’ll spend millions to convince me that their goat generator is the best out there. It would be obscene if it weren’t so sad.
