At this point, it's clear to me that the real purpose of generative artificial intelligence (gen AI) is the creation of video memes involving Donald Trump making America great again, either by replacing immigrants with schoolchildren, or by waddling around wearing a dog collar, with a leash held by either Vladimir ("I Still Got the Pee Tape") Putin or Xi ("I Still Got $760 Billion of T-bills") Jinping.
But there are thousands of CEOs who think gen AI will revolutionize their businesses, or cause them to lose their jobs because they can't figure out how to make AI revolutionize their businesses.
Companies are scrambling to "implement gen AI solutions", although if you read between the lines of statements from even the most relentless boosters, it's clear that very few people have any idea what they're doing.
This McKinsey report, for example, tells us that although "organizations’ use of AI has accelerated markedly in the past year", and "71 percent of organizations regularly use gen AI in at least one business function", only "27 percent of respondents whose organizations use gen AI say that employees review all content created by gen AI before it is used – for example, before a customer sees a chatbot’s response or before an AI-generated image is used in marketing materials." A similar share of respondents say that 20 percent or less of gen AI-produced content is checked before use.
What could possibly go wrong?
Earlier this month, Tobi Lütke, CEO of Shopify, issued a directive to employees:
“Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI. What would this area look like if autonomous AI agents were already part of the team? This question can lead to really fun discussions and projects.”
Sure, dude. I'm picking my kids up from day care after six o'clock and working an extra job on weekends to cover my mother's assisted-living expenses, but I'll absolutely find the time to teach myself how to restructure my business unit's processes to better leverage gen AI, all in the name of streamlining the operations of a company that last year pulled in $8.8 billion in revenue and cleared over $1 billion in profit.
Meanwhile, in academia, teachers are desperately battling against the tsunami of student-submitted gen AI coursework.
Everyone I know who works in education – at all levels, from middle school through university – says students are outsourcing huge chunks of their work to chatbots, despite clear instructions that gen AI can only be used as a research tool or to check grammar.
Fortunately (or unfortunately) for teachers, most gen AI-produced writing still lives in the "uncanny valley": it immediately looks wrong to anyone who knows what natural writing looks like. Consider: "ChatGPT, write a response to the question 'Could empirical psychology show that humans are necessarily selfish?' in the style of Ernest Hemingway."
If you had to read 50 philosophy essays on that topic, all written mostly by ChatGPT, you might be tempted to shoot yourself, in the style of Ernest Hemingway.
Two years ago, at the beginning of the gen AI feeding frenzy, I wrote that I think "it will simplify and greatly speed up the generation of mediocre content. Or even slightly-better-than-mediocre content, which, let's face it, is 'good enough' for almost everyone."
At the time (in an interview with Politico headlined "Being smart isn't what it used to be"), Coursera CEO Jeff Maggioncalda had just said, "I’ve been going deep on Chat GPT: I believe it’s going to fundamentally change education and work."
He went on to proclaim the end of writing as a useful skill.
"Good writing has been a signal of education, a signal of ability to think. But once writing becomes like a calculator is in math, then everybody has the ability to write well. The demand for cognitive skills will be decreasing, except at the very top."
So, YOUR job is okay, Jeff, but the robots are coming for everyone else's.
A few years ago, The New York Times published an article headlined “The Robots Are Coming for Phil in Accounting”, in which the writer quoted Raul Vega, the chief executive of Auxis, a firm that helps companies automate their operations.
“Automation is more politically acceptable now,” said Vega, who explained that before the COVID pandemic (temporarily) tilted the labor relations scales away from management and toward workers, many CEOs were reluctant to go all-in on automation for fear of scaring off the workers they still needed.
But now, Vega said to The Times in 2021, “they don’t really care.”
And that’s today’s lesson, kids.
The robots are coming for your job.
And your CEO is so clueless he can’t tell you when.
My suggestion? You may as well use your time productively.
At $200 a month, ChatGPT Pro is not cheap, so you should use the company's subscription to write that novel (in the style of Ernest Hemingway), or figure out a "system" to beat the dealer in blackjack, or … make enough Trump memes to generate income for yourself as a clickbait influencer!
Embrace the New Economy.
Mexican professor Juan Miguel Zunzunegui (who studies the links between history, philosophy, and religion) argues that artificial intelligence, despite its name and its reputation, is not intelligent at all. He describes it, essentially, as a compiler of data published online, one that cannot tell whether that data is correct. A contact of mine publishes articles in which he challenges an AI to describe historical events. Predictably, the AI produces the most "official" or widely circulated version. He then confronts it with the erroneous data it used, supplies the real data, and gets it to issue a correction acknowledging the inaccuracy of its earlier account. But if someone else then asks that same AI the same question, it reverts to the first answer. It has the same biases as humans!
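For what it's worth, the revert-to-the-first-answer behavior has a mundane technical explanation: a deployed model's weights are frozen, and an in-chat "correction" lives only in that one session's message history. Here's a toy Python sketch of the idea; the FrozenModel class, its answer method, and the CORRECTION convention are my own illustrative inventions, not any vendor's actual API:

```python
# Toy illustration (not a real LLM or vendor API) of why an in-chat
# correction doesn't stick: the model's weights are frozen, and each
# session only "remembers" what is in its own message history.

class FrozenModel:
    """Stands in for a deployed LLM: fixed weights, so a fixed default answer."""

    def answer(self, question: str, history: list[str]) -> str:
        # If a correction appears in THIS session's history, echo it back...
        for message in reversed(history):
            if message.startswith("CORRECTION: "):
                return message.removeprefix("CORRECTION: ")
        # ...otherwise fall back to the most common version in the training data.
        return "the official version"


model = FrozenModel()

# Session A: the historian corrects the model, and the fix "takes" -- locally.
session_a: list[str] = []
print(model.answer("What happened in 1521?", session_a))  # -> the official version
session_a.append("CORRECTION: the real data")
print(model.answer("What happened in 1521?", session_a))  # -> the real data

# Session B: a different user, a fresh history, the same frozen weights.
session_b: list[str] = []
print(model.answer("What happened in 1521?", session_b))  # -> the official version
```

Nothing in session B changes unless the vendor retrains or fine-tunes the model, and your chat correction does neither.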