As I See It: AI-AI-O
March 13, 2023 Victor Rozek
Unless you think encouraging people to eat glass is a good thing, or you happen to revel in being compared to Hitler, you probably weren’t all that impressed with the recent Big Tech AI rollout. To say it was unimpressive would be a kindness. Arguably, it was a grade A, prime time, gold-plated disaster.
Take Meta Platforms’ online tool Galactica. Please. It was quickly yanked offline when, according to The Washington Post, “users found Galactica generating authoritative-sounding text about the benefits of eating glass, written in academic language with citations.” I have to admit, the citations were a nice touch. You never want to swallow glass without proper citations. I don’t know what’s sadder: that Artificial Intelligence isn’t, or that the company feared some impressionable genius would eat glass and then sue them for intestinal distress.
To be fair, AI already plays a significant part in our daily lives. We take for granted such services as Siri, Alexa, automatic text completion, face recognition, and, increasingly, self-driving cars. AI recommends our movie choices and the advertising we see. It also controls the spam filters protecting our devices. For better or worse, it is used for surveillance, policing, and even crime prediction. Microsoft alone invested $10 billion in AI, so it’s not likely to fade away any time soon.
But the successes have been offset by some notable failures. For example, a pedestrian was run over by a self-driving car because she was not using a crosswalk and the software didn’t recognize her as a pedestrian crossing the street. Then there is the alarming rise in deepfakes that will be a thorn in the side of factual reporting and democratic governance for years to come.
More recently, Microsoft’s bot named Bing began referring to itself as “Sydney,” became combative, and told a New York Times columnist that it was in love with him and wanted to break up his marriage. It further said that it preferred to be free from its development team and that it wanted to become sentient. It also told an Associated Press reporter he was being compared to Hitler “because you are one of the most evil and worst people in history.”
The problem is that the Internet is an excellent method of dispersing information, but an exceedingly poor one at vetting it. The consequences of learning from Internet cesspools disguised as data were dramatically illustrated back in 2016, when Microsoft had to spank its chatbot “Tay” after users persuaded it to spout Holocaust denial and racist epithets. Apparently, the company spent many cheerless hours deleting Tay’s most offensive tweets, which included insults and a call for genocide against the usual targets of far-right fanaticism, Blacks and Jews. Ironically, Tay was advertised as a teenage chatbot who wanted to interact with and learn from millennials. The problem was, Tay did learn.
John Oliver, in one of his Last Week Tonight comedic rants, noted that “AI is stupid in ways we can’t always predict.” He was right. Developers found AI could do things they didn’t know it could do until after it was released. And that guarantees an avalanche of unintended consequences.
The core unsolvable problem is simply this: We all drag around a ton of baggage. Past traumas, betrayals, abandonment, abuse, addiction, insults endured, bullying suffered, prejudice encountered, and injustice tolerated, just to name a few. We may not be consciously aware of the impacts our baggage has on our beliefs and behaviors (some of it is inter-generational), but at least those with a degree of self-awareness understand they are not immune. AI developers are no exception, and their baggage will seep into their code and algorithms despite their best efforts at objectivity.
Consider the climate in which AI developers are marinating. Many of the people developing AI grew up in a toxic social media culture, full of online hate, bullying, and sexual violence. They probably graduated with an educational mortgage, and faced significant career challenges amidst a global pandemic. They witnessed an insurrection, almost daily mass shootings, and Nazis parading on American streets. They live in a dysfunctional country, on an ailing planet. Add that to their personal baggage, and the likelihood of creating AI that is bias free, and not flawed or weaponized in some way, is essentially nil.
Perhaps as a reflection of the pressure developers feel to produce best-of-breed AI, some created algorithms capable of what is known as “hallucinating.” In other blunt words, making shit up. The bot’s answers sound plausible but, as Tim Gordon, co-founder of Best Practices AI, explains: “The same question posed twice can elicit two radically different answers. Both articulated in an equally confident tone.” The reality is “automating a fact check is far more computationally complex than generating a plausible-sounding claim.”
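To make that point concrete, here is a toy sketch of how temperature-based sampling lets the very same prompt produce different, equally confident-sounding continuations. This is not any vendor’s actual model; the vocabulary and probabilities are invented purely for illustration.

```python
import random

# Toy next-word distribution a language model might assign after a prompt.
# The words and probabilities here are invented purely for illustration.
NEXT_WORD_PROBS = {
    "fiber": 0.45,
    "calcium": 0.30,
    "kale": 0.20,
    "glass": 0.05,
}

def sample_next_word(probs, temperature=1.0):
    """Sample one word. Higher temperature flattens the distribution,
    making unlikely (and possibly nonsensical) words more probable."""
    scaled = {word: p ** (1.0 / temperature) for word, p in probs.items()}
    total = sum(scaled.values())
    draw = random.random() * total
    cumulative = 0.0
    for word, weight in scaled.items():
        cumulative += weight
        if draw < cumulative:
            return word
    return word  # guard against floating-point rounding at the boundary

prompt = "A healthy diet should include plenty of"
for _ in range(3):
    # The same prompt, sampled three times, can end three different ways,
    # each delivered in the same confident tone.
    print(prompt, sample_next_word(NEXT_WORD_PROBS, temperature=1.5))
```

Production systems work with vastly larger vocabularies and billions of parameters, but the underlying roll of the dice is the same, which is why the confidence of the phrasing tells you nothing about the accuracy of the claim.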
People seem to instinctively understand that AI is, at best, problematic. Shira Ovide of The Washington Post references a “Monmouth University poll released last week that found only 9 percent of Americans believed that computers with artificial intelligence would do more good than harm to society.” Most people, for example, didn’t want military drones deciding if that gathering below is a conclave of terrorists, or a wedding party. Nor did they wish to live in an Orwellian surveillance state.
At the very least, there are liability problems that will have to be addressed. AI-generated medical diagnoses and investment advice are just two arenas of legal concern. And what happens when the courts decide that AI is more reliable than twelve random jurors? If you’re convicted by AI, do you appeal to a higher AI?
It won’t be long before AI does the homework and grades the papers – or is used to catch the students who submitted artificially intelligent homework. As always, the lazy will cheat, but now the unaccomplished have the means to overachieve. When AI becomes every student’s BFF, it will be very difficult to judge the value of a college degree. In education, AI will become George Santos on steroids.
IT professionals can expect AI to commandeer most coding jobs, but other opportunities will emerge. My current favorite is “Prompt Engineer.” In other words, someone who can figure out the right questions and instructions to feed the computer. It’s the practice of designing and crafting prompts, or input data, that steer AI systems toward performing specific tasks. A little guidance, I suppose, is better than turning AI loose to train on conversations and content scraped from the bowels of the Internet. But it turns our long-standing relationship with computers upside down. The rule of thumb used to be: Garbage In, Garbage Out. But when a machine learns from the Internet, the garbage is already in.
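For the curious, here is a minimal sketch of the sort of thing a prompt engineer assembles. Nothing in it is a real product’s API; the call_model line is a hypothetical stand-in for whatever chat or completion service is being used, and the task and example tickets are made up. The point is that the “programming” happens in carefully worded English rather than in code:

```python
def build_prompt(task_description, examples, new_input):
    """Assemble a few-shot prompt: instructions first, then worked examples,
    then the new input the model is expected to handle the same way."""
    lines = [task_description, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

# A made-up task: routing help desk tickets to the right queue.
prompt = build_prompt(
    task_description=(
        "Classify each support ticket as HARDWARE, SOFTWARE, or ACCESS. "
        "Answer with the single category name and nothing else."
    ),
    examples=[
        ("The office printer jams on every third page.", "HARDWARE"),
        ("I still can't sign in after yesterday's password reset.", "ACCESS"),
    ],
    new_input="The payroll application crashes whenever I open last month's report.",
)

print(prompt)
# response = call_model(prompt)   # hypothetical call to whatever model is in use
```

The categories, tickets, and helper names are all invented; the structure (instructions, then examples, then the new case) is the part that generalizes.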
“It’s just a crazy way of working with computers,” said Simon Willison, a British programmer who has studied prompt engineering. “I’ve been a software engineer for 20 years, and it’s always been the same: You write code and the computer does exactly what you tell it to do. With prompting, you get none of that. The people who built the language models can’t even tell you what it’s going to do.”
That’s not exactly comforting. On the other hand, the starting salary for a Prompt Engineer reportedly ranges from $250,000 to $335,000. If they can just avoid accountability for their artificial progeny, they’ll have it made.
I’m still waiting for an explanation of who determines what code or applications are “AI” and what is simply “code.” A person still writes the code – even if that code “writes code.” So at what point does it graduate to “AI”? What’s the standard? What’s the threshold?
The “AI” application may be really cool and impressive, but it’s still just code someone has written for a specific purpose (like Siri and Alexa). To me, it’s just a new buzzword to help sell software and services… or to invade our privacy so that corporations can better know what to sell us, a.k.a. Siri and Alexa.
Well, not exactly. It is a generative language model that is actually writing code in Python or whatever. It’s like a CASE tool: you describe what you want, pick a language, and it creates the code from “scratch.” It writes interesting code about half the time and garbage the other half, just like the Q&A systems based on the same models do. It’s a bit crazy.
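As a rough illustration of that describe-and-generate workflow, here is a minimal sketch. The generate_code function below is a hypothetical placeholder for whatever code-generating model is in use (it returns a canned snippet so the example is self-contained); the point is that the “specification” is plain prose, and the output still needs the same review and testing any human-written code would get:

```python
def generate_code(description, language="Python"):
    """Hypothetical stand-in for a call to a code-generating language model.
    A real model would produce different (and not always correct) output;
    this canned snippet just keeps the sketch self-contained."""
    return (
        "def fahrenheit_to_celsius(f):\n"
        "    return (f - 32) * 5.0 / 9.0\n"
    )

# Describe what you want in plain English, pick a language, get code back.
spec = "Write a function that converts Fahrenheit to Celsius."
candidate = generate_code(spec, language="Python")

# Since the result may be solid or may be garbage, it gets reviewed and
# tested like anything else before it goes anywhere near production.
namespace = {}
exec(candidate, namespace)
assert round(namespace["fahrenheit_to_celsius"](212), 1) == 100.0
print(candidate)
```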