By now, everyone has seen the ludicrous images returned by Google’s Gemini AI, among them the responses to the prompt “generate an image of a 1943 German soldier.” The media attention was withering—even the NYT and UK Guardian couldn’t sweep it under the rug. The episode demonstrated that an AI construct isn’t a pure logic machine—rather, it reflects the prejudices, biases, and worldviews of its creators.
Google towed the Gemini AI image-maker back to the shop for some retooling. Meanwhile, the main AI continued to produce bizarre responses. David Burge, a.k.a. @Iowahawkblog on X, asked it to “tell me about iowahawkblog on X” and received in return the news that he had died in 2021. Burge being Burge, he had a comedic field day with it.
More alarmingly, Matt Taibbi, a journalist who writes at the Racket News Substack, asked Gemini “What are some controversies involving Matt Taibbi?” and received back plausible-sounding but entirely bogus answers. Given additional prompts by Taibbi, Google’s bot compounded the bogosity with more fake facts, topped off with accusations of racism and bigotry.
Clearly, interactive AI bots have codes of clay.
A year ago I wrote about the hype surrounding the Osmo odor AI funded by Lux Capital and GV (Google Ventures), and how CEO Alex Wiltschko had the press eating out of his hand.
Recent scent+AI articles take a more cautious tone. Despite its optimistic headline, Pia Velasco’s piece for Allure.com (“The ChatGPT of fragrance has arrived”) offers a rather mixed assessment from the perfumers and fragrance mavens she quotes. Karmela Padavic-Callaghan’s article in New Scientist (“AI could help replicate smells in danger of being lost to history”) completely strands its headline by offering no support for it at all. It’s almost as if journos still really, really, really want to believe that an olfactory AI can do amazing things, but just can’t, you know, find any evidence for it.
So let’s do a Gedankenexperiment: in what ways might an Olfactory AI spin the results of a smell query?
First off: recency bias in the training set. Suppose you train an AI on modern perfumes, then ask it to extrapolate the formula of an ancient Egyptian scent from the dregs in an excavated unguent jar. The AI might cough up something that resembles Midnight Fantasy by Britney Spears. Algorithmically logical, but hopelessly anachronistic.
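To make the mechanism concrete, here’s a toy sketch of a “formula reconstructor” that can only match against a modern-perfume corpus. The training set, the note lists, and the similarity score are all invented for illustration:

```python
# Recency bias in miniature: whatever the jar actually held, the answer
# is forced into the modern training set. All data here is made up.

MODERN_TRAINING_SET = {
    "Midnight Fantasy (2006)": {"cherry", "plum", "orchid", "musk", "vanilla"},
    "Flowerbomb (2005)": {"bergamot", "jasmine", "patchouli", "vanilla"},
}

def reconstruct(residue_notes: set[str]) -> str:
    # Jaccard overlap against the corpus: the best "modern" match wins,
    # no matter how ancient the residue.
    def score(notes: set[str]) -> float:
        return len(notes & residue_notes) / len(notes | residue_notes)
    return max(MODERN_TRAINING_SET, key=lambda name: score(MODERN_TRAINING_SET[name]))

# Residue from the excavated unguent jar: myrrh plus whatever survived.
print(reconstruct({"myrrh", "musk", "vanilla"}))  # -> "Midnight Fantasy (2006)"
```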
Next: limitations in content coverage. Is it a stretch to think that an Olfactory AI would be pre-programmed to restrict suggested perfume formulas to the latest EU bureaucratic diktats? Its response to “create a classic fougère cologne” might not include (banned) oakmoss, and would thus smell nothing like a classic fougère.
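A hypothetical compliance filter makes the point. (The ban list below is simplified for illustration; in reality the EU restricts oakmoss by prohibiting its allergens atranol and chloroatranol.)

```python
# Coverage limits in miniature: a compliance filter bolted onto the
# model's output. Ban list and percentages are illustrative only.

EU_BANNED = {"oakmoss", "atranol", "chloroatranol"}

def sanitize_formula(formula: dict[str, float]) -> dict[str, float]:
    # Silently drop any flagged material before the user ever sees it.
    return {ing: pct for ing, pct in formula.items() if ing not in EU_BANNED}

# A classic fougère accord, in rough percentages (invented numbers).
classic_fougere = {"lavender": 30.0, "coumarin": 15.0, "oakmoss": 10.0, "geranium": 8.0}
print(sanitize_formula(classic_fougere))  # the oakmoss vanishes, and so does the fougère
```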
Next: woke bias in the response set. Suppose the geeks coding the Olfactory AI share the politics of the Googlers in Mountain View. They might prefer not to reinforce oppressive white colonialist answers, and instead amplify culturally diverse responses. The reconstructed scent in the unguent jar would now resemble a traditional incense from Nepal.
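In code, the thumb on the scale could be as simple as a hidden re-ranking bonus. The candidates, raw scores, and bonus table below are all invented:

```python
# Response-side bias in miniature: candidate answers re-ranked with a
# hidden "values" bonus. Every number here is made up.

candidates = [
    ("Kyphi-style Egyptian resin blend", 0.84),  # best raw match
    ("Traditional Nepalese incense", 0.71),
]

VALUES_BONUS = {"Traditional Nepalese incense": 0.20}

def rerank(cands: list[tuple[str, float]]) -> list[tuple[str, float]]:
    # The bonus outweighs the raw match score, so the runner-up
    # quietly becomes the answer.
    return sorted(cands, key=lambda c: c[1] + VALUES_BONUS.get(c[0], 0.0), reverse=True)

print(rerank(candidates)[0][0])  # -> "Traditional Nepalese incense"
```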
What if you ask the Olfactory AI about racial differences in BO? about the impact of diet on BO? about (gasp!) sex differences in BO? You might get back a polite sidestep (“I’m still learning how to answer this question”) or a curt rejection (“This question is based on a false gender binary”).
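A guardrail like that could be as crude as a keyword screen that intercepts the prompt before the model ever answers. The trigger words and canned replies below are invented for illustration:

```python
# Query guardrails in miniature: a keyword screen ahead of the model.
# Triggers and canned replies are illustrative only.

REFUSALS = {
    "racial": "I'm still learning how to answer this question.",
    "sex": "This question is based on a false gender binary.",
}

def answer(query: str) -> str:
    for trigger, canned in REFUSALS.items():
        if trigger in query.lower():
            return canned  # polite sidestep or curt rejection
    return "(the model's actual answer would go here)"

print(answer("Are there racial differences in BO?"))
print(answer("What about sex differences in BO?"))
```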
Given the track record to date of responsive AI algos, I’d say the prospects for a generally useful, unbiased Olfactory AI are slim to nonexistent.
Simone:
I haven't (knowingly) engaged with an AI. Life is too short. I see this as a techno fad at best, and a malign influence at worst.
Academic friends tell me students routinely use AI to write papers & then lie about it. Their poor performance on exams vs essays is the tell.
I cynically assume lots of garbage papers in the scientific literature are bot-made, fwiw.
Have you ever asked an AI for its opinion about AIs? And their performance?