The other day a friend sent me a link to a post showing images of a “fashion show for the seniors”, as the caption described it.
The comment on the post my friend sent to me read:
An #AI-generated runway show is generating huge, positive buzz across #digital and #socialmedia. These super-stylish older models may be computer-generated, but they have opened our eyes as to how innovative and powerful a #fashion show featuring diverse, older models would be. Well done Malik Afegbua for truly bending the rules and extending our reality 👏
Then (inevitably) the hashtags: #extendedreality #artificialintelligence #artificialinspiration #diversity #innovation
I appreciated the images, which were beautiful: attractive fashions and dignified, handsome models.
But I didn’t understand how creating “an #AI-generated runway show” had “opened our eyes as to how innovative and powerful a #fashion show featuring diverse, older models would be.”
What would really have “opened my eyes” would have been a #fashionshow featuring actual “older models”.
That, of course, would not have been of great interest to the person who created the images, Malik Afegbua, who is the CEO of Slick City Media, a media production company with a strong tilt toward XR (extended reality). The fake “fashion show”, in other words, was a calling card (of the sort that media agencies produce all the time).
I’m throwing no shade at Malik Afegbua; he and his team created an impressive (fake) “campaign” (“Hire us! Hire us!”) and showed themselves to be very good at what they do.
And of course, our society, thanks mostly to the media, is fetishizing AI right now, and if you want to get people’s attention, you use – or even just mutter the words three times under your breath – AI.
What was eye-opening to me were the comments under the post: a handful of observers expressed the same skepticism and disappointment that had been my reaction.
Others wrote that they wished they hadn’t seen the (skeptical, disappointed) comments, because they had been so excited to see (i.e. to have been fooled by the computer-generated imagery) “fabulously fierce members of an older generation being celebrated, and walking a show.”
A third group either did not realize the images had been computer-generated, or were informed and did not care. This group, you may be unsurprised to hear, comprised the majority of commenters.
The first issue of The Jaded Cynic, published way back in September 2021, focused on #fakenews and how to identify it. In it, I noted that many people curate news feeds for themselves that reinforce their pre-existing biases (e.g. that "Guns don't kill people, people do.").
There’s no question that technology (#AI!) has made it easier to create #fakenews.
Last month, the artificial intelligence company OpenAI launched ChatGPT, a “chatbot” aimed at “optimizing language models for dialogue.” It turns out that ChatGPT is pretty good at optimizing language models for dialogue, and as The New York Times reported, “it can write jokes (some of which are actually funny), working computer code and college-level essays. It can also guess at medical diagnoses, create text-based Harry Potter games and explain scientific concepts at multiple levels of difficulty.”
Perhaps the most famous “test” came when someone asked ChatGPT to “write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR.”
More frequent applications, however, are likely to come from college students looking for quick-and-dirty essays, and political propagandists looking to sow disinformation.
In September, someone won the emerging digital artists category in the Colorado State Fair’s annual art competition with an AI-generated “artwork”. The winner, Jason Allen, used Midjourney, an artificial intelligence program that turns lines of text into hyper-realistic graphics, and said, “I’m not going to apologize for it. I won, and I didn’t break any rules.”
Very 21st century.
Allen continued, “The ethics isn’t in the technology. It’s in the people.”
True.
And he said, “This isn’t going to stop. Art is dead, dude. It’s over. A.I. won. Humans lost.”
Very, very sad if true.
Earlier this year, technologist Andy Baio wrote about Opening the Pandora’s Box of AI Art: “A common argument I’ve seen is that training AI models is like an artist learning to paint and finding inspiration by looking at other artwork, which feels completely absurd to me. AI models are memorizing the features found in hundreds of millions of images, and producing images on demand at a scale unimaginable for any human—thousands every minute. There’s no question it takes incredible engineering skill to develop systems to analyze that corpus and generate new images from it, but if any of these systems required permission from artists to use their images, they likely wouldn’t exist.”
Baio pointed out some of the concerns raised by AI-generated art and design:
• Is it ethical to train an AI on a huge corpus of copyrighted creative work, without permission or attribution?
• Is it ethical to allow people to generate new work in the styles of the photographers, illustrators, and designers without compensating them?
• Is it ethical to charge money for that service, built on the work of others?
The ethics are the thing, as Colorado State Fair art contest winner Allen noted, and unfortunately, it seems to me we are becoming less ethical as a society (I wrote about this in August, in a post titled “Ethics, Schmethics”).
The question is, do (enough) people care?
As is usually the case with this kind of question, the answer is no: not enough people care. Many people read only the headlines (often mere clickbait) and comment without reading further, while the rest split into opposing camps (sadly, some don't care whether artists' work is used without permission, as long as the AI entertains them). That's why fake news is a thing: when a fake image appears to prove what the text claims (confirming a biased opinion), few people bother to look any further. Sad but true.
AI is the new trend; we are curious to know what AI "thinks" (much as a child would be), and it obliges with surprising, funny, and weird output. The media is fixated on it, even when there is no actual news. A few days ago I read "Chilling prediction: AI answers who will rule the world in the next century," and the entire payoff was four disappointing images (which don't answer the question), taken from the internet and modified by AI: 1. a bomb; 2. an army; 3. something falling from space; 4. someone in a suit looking at planet Earth. What a waste of time. It even forgot the zombie apocalypse (that's on the internet too).
What will make humanity disappear will be the end of Natural Intelligence.