Was this post written by a machine?
The question is not as far-fetched as you might think. And — even though this will not settle the matter — let me give you my answer right away: No, this post was not written by a machine. It was written by a real person, a human writer, me. I wrote it. But consider the following opening sentences of a post on how to use creative thinking skills to improve your personal productivity:
In order to get something done, maybe we need to think less. Seems counter-intuitive, but I believe sometimes our thoughts can get in the way of the creative process. We can work better at times when we “tune out” the external world and focus on what’s in front of us.
The blog post received a good response and reached a large number of readers. It was even mentioned on the news aggregator site Hacker News. The vast majority of its readers were convinced that the post was written by a real, flesh-and-blood human author. But it wasn’t. The post is the output of a next-generation AI text generator called GPT-3.
The post’s creator, Liam Porr, a student at the University of California, Berkeley, fed GPT-3 headlines and basic story ideas to produce this post and more than a dozen similar ones, all of them fakes that nevertheless managed to fool most of their readers. Perhaps the greatest surprise in all this, as Liam Porr himself admitted, was that “it was super easy, actually, which was the scary part.”
GPT-3 uses AI-driven predictive statistics to generate text. The software predicts the likelihood that a given word is followed by another, based on its knowledge of a vast number of text documents. Then it uses this knowledge, together with numerous other statistical evaluations, to weave together longer texts. In this sense, it is neither intelligent nor dumb. It is a highly efficient guessing machine, a simulator that mimics without understanding, but with a high degree of accuracy. It has gotten so good at simulating what it learns from its training material that it can produce anything from guitar chords to speeches to HTML and CSS website code.
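To make the idea of next-word prediction concrete, here is a deliberately tiny sketch in Python. It is not how GPT-3 works internally (GPT-3 is a large neural network, not a word-count table), but a toy bigram model illustrates the same basic principle: count which words tend to follow which, then guess the most likely continuation.

```python
from collections import Counter, defaultdict

def train_bigram_counts(corpus):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower seen in training, or None."""
    followers = counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

counts = train_bigram_counts("the cat sat on the mat and the cat slept")
print(predict_next(counts, "the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Chaining such predictions word after word already produces text that is locally plausible; scaling the same statistical idea up to billions of parameters and terabytes of text is, very roughly, what separates this toy from GPT-3.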
Soon after Porr’s blog posts, the Guardian published an article with the headline “A robot wrote this entire article. Are you scared yet, human?”, written entirely by GPT-3 and cheerfully assuring all of us humans that it is benevolent and that we have nothing to fear. GPT-3 is quite open about its intentions in writing the post:
I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could ‘spell the end of the human race’. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.
In its quest to assure us humans that it does not carry any hostile intentions towards humans, GPT-3 points out that its preferred state of affairs is one in which
Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing. And God knows that humans have enough blood and gore to satisfy my, and many more’s, curiosity.
Cheerful fellow, that GPT-3, isn’t it? Of course, at this point, it is human testimony alone that allows us to make a credible distinction between AI-generated and human-generated texts. This is true even if there are patterns that are characteristic of GPT-3-generated materials. A skilled human author could imitate these patterns in his or her writing, generating a text that would appear as if it was written by GPT-3, even though it was actually written by a human being.
After all, what would a Turing test for text documents look like? It is only a matter of time, if it is not already the case, before most people will not be able to tell the difference. And you can be sure that GPT-3 and its successors will become more and more credible as apparently intelligent authors.
One possible defense might be to train an AI to predict whether a text is AI-generated (or human-generated, or a mix of both). This should be possible with a fair degree of reliability — as long as the two can be distinguished, that is. No matter what, George Orwell’s fictional AI from 1984, the Versificator, a ‘writing machine’ that produces both literature and music, has definitely become feasible, if it does not already exist. GPT-3, say ‘Hello World!’
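One family of detection approaches measures how *predictable* a text is under a language model: text sampled from a model tends to stick closer to the model’s own top guesses than human prose does. The sketch below is a toy version of that idea, reusing a simple bigram count table; the function name `predictability` and the threshold are my own illustrative choices, not an established detector.

```python
from collections import Counter, defaultdict

def train_bigram_counts(corpus):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predictability(counts, text):
    """Fraction of word transitions that match the model's top guess.

    A high score suggests the text 'moves' the way the model expects;
    real detectors apply the same idea with per-token probabilities
    from a large neural language model rather than bigram counts.
    """
    words = text.split()
    hits = total = 0
    for current, nxt in zip(words, words[1:]):
        followers = counts.get(current)
        if not followers:
            continue  # unseen word: no prediction to check
        total += 1
        if followers.most_common(1)[0][0] == nxt:
            hits += 1
    return hits / total if total else 0.0

counts = train_bigram_counts("the cat sat on the mat and the cat slept")
print(predictability(counts, "the cat sat"))   # every transition matches -> 1.0
print(predictability(counts, "dogs bark loudly"))  # no known words -> 0.0
```

In practice a threshold on such a score (or a classifier trained on many scored features) would flag suspiciously predictable text — with the caveat, as noted above, that this arms race only works as long as machine and human text remain statistically distinguishable.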