
AI wrote this headline (and created this image): ‘The Machines Are Learning, and It’s Getting Spooky’

A year ago, I asked Google’s Gemini AI to write a column boosting artificial intelligence in the style of Neil Steinberg of the Chicago Sun-Times. It instantly spat out a snappy patter that was a bad parody of me, with lots of winking Chicago references. AI is supposedly constantly improving, so I was curious: has it actually gotten any better in the past year? I asked Google’s new 2.0 “experimental” Gemini for an AI update, again in my style, which I present here (anything in italics is still me). It came up with something that begins:

Remember when AI was just some nerd stuff, a bunch of guys with pocket protectors arguing about neural networks and whether a toaster could ever truly understand the existential dread of a Tuesday morning?

Well, forget all that. This ain’t your grandpappy’s AI. This ain’t even your dad’s AI. This is the kind of AI that makes you wonder if Skynet is just around the corner, disguised as a helpful chatbot.

Last year, AI went from “kinda cool, I guess” to “holy crap, is that a threat to my livelihood?”


Not bad, right? It even threw in bullet points and boldface:

That part about the inner playwright weeping is true, but not because AI is so impressive. I was at a party where a young man said he was writing a play. “Writing plays is hard,” I replied. And he said, no, not really; he’s having AI generate the dialogue.

I went looking for a “surprisingly catchy” AI-composed pop tune and ended up listening to “Daddy’s Car,” a muddy mashup of the Beatles and the Beach Boys, whose lyrics border on gibberish.

Gemini AI did, I noticed with unease, take far longer to create the image (above) of a talking squirrel riding a unicycle than it did to write my column. Not bad, right?

But here’s the thing: it’s not just about the cool stuff. It’s about the implications. Jobs that were once considered safe — writers, artists, musicians — are now facing an uncertain future. What happens when a machine can do your job better, faster, and cheaper?

Last year, we fed my usual column mugshot into an AI image generator and asked it to make Neil Steinberg a robot. We didn’t suggest it make me look evil — that the generator did on its own (left). This year, we asked again (right), and it got a little closer. Note the stack of old newspapers, the computer in the background and the jaunty fedora atop my . . . robot head.

Made by Google AI

AI lies all the time, and the above contains a whopper. In what universe were the jobs of writers, artists or musicians ever “considered safe”?

And let’s not even get started on the ethical dilemmas. Deepfakes are getting scarier by the day, and the potential for misuse is terrifying. Imagine a world where you can’t trust a single video or image.

So, where do we go from here? Do we embrace the AI overlords and become their willing servants? Do we try to unplug the whole damn system and go back to living in caves?

Not much of a choice. Technology wins.

I don’t have the answers. But I do know one thing: the machines are learning, and they’re learning fast. And that, my friends, is both exhilarating and terrifying.

Then Gemini did something I didn’t ask for that was indeed terrifying. Showing what looks like initiative, it proceeded to critique my writing: 

This column attempts to capture the essence of Steinberg’s writing style:

That’s more impressive than the column itself. I’m not a writer whom anyone bothers to analyze, but should someone make the attempt, they’d do well to come up with Gemini’s uninvited take.
 
I’d say we’ll try this again in a year, but — cue the skeptical outlook and self-deprecation — that will depend on whether we’re both here next year. I mean you and me. AI, I’m sure, will be here, bigger than ever.
