Surely you have already heard of ChatGPT, one of the most advanced artificial intelligences in the field of natural language processing. In plain language, it is an algorithm that people talk to through a web page, and it responds to their questions or requests. What is curious is that it does so quickly and with such naturalness that its responses often seem to come from another human being. So much so that I initially thought of asking ChatGPT to write this section, but in the end I decided that this would not be just another book padded with text copied from ChatGPT, as so many are today. In fact, perhaps this will be one of the last books written entirely by a human being, without a single passage generated by an AI.
Now, while Netflix took three and a half years to reach its first million users, Facebook took ten months and Instagram a little under three months from launch. ChatGPT, for its part, got there in just five days, taking the world by surprise. That is how quickly this tool, developed by OpenAI, the company headed by Sam Altman, has worked its way into daily use.
This algorithm, capable of generating text in a wide variety of styles and formats, is already being used to do schoolwork, spread fake news, and even write programming code. How is this possible? In short, after the model is trained on enormous volumes of text gathered in various ways, for example from the internet or from digitized books, what the algorithm does is calculate the probability that one word will follow another, depending on the context in which it appears. Digging a little deeper, the algorithm first converts our questions into numbers the machine can process, ultimately zeros and ones, and then uses the statistical patterns it learned during training to identify which sequence of numbers is most likely to come next as a response, before translating that sequence back into human language. In rudimentary terms, its operation resembles the predictive dictionary on your mobile phone, except that it produces whole passages of text instead of offering you one word at a time. That said, just as our phone keyboards cannot predict an entire sentence before we start typing it, ChatGPT can inadvertently lie to us in its responses. These errors are known as “hallucinations”, and they are dangerous precisely because the model does not really know what it is “talking” about: once it produces a hallucination in a conversation, ChatGPT treats it as real and keeps building its narrative on top of it unless we detect the mistake and force it to correct itself. Reality is somewhat more complex, but broadly speaking this is a fair approximation of how it works.
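To make the predictive-text analogy concrete, here is a minimal sketch in Python of the idea at its crudest: a “bigram” model that counts, in a toy corpus, how often each word follows each other word, and then generates text by repeatedly sampling a likely next word. The corpus, the function names, and the model itself are illustrative assumptions of mine, not how ChatGPT is actually built; real systems use neural networks with billions of parameters and look at far more context than a single previous word, but the underlying question, “given what came before, which word is likely next?”, is the same.

```python
import random
from collections import Counter, defaultdict

# A tiny corpus standing in for the billions of words a real model trains on.
corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug and the dog ate the bone").split()

# Count how often each word follows each other word (a "bigram" table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = following[word]
    if not candidates:                     # dead end: word never had a follower
        return random.choice(corpus)
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

# Generate a short passage one word at a time, like predictive text on repeat.
word = "the"
generated = [word]
for _ in range(8):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))
```

Even at this toy scale, the output can read smoothly while being nonsense (the sampler can happily produce “the cat ate the bone”), which is, in miniature, the statistical root of what we call hallucinations: the model is built to produce plausible continuations, not true ones.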
Regardless of how it works, its release to the public has generated all kinds of reactions. On one hand, there are those who mistakenly declare that ChatGPT and other similar artificial intelligences have a consciousness of their own. However, having an algorithm capable of replicating the structure of human speech, one that even appears to speak about itself and recognize itself as an AI model, does not necessarily mean that these models have reached, or ever will reach, any degree of real consciousness. After all, computers and humans think in different ways. We can ask ChatGPT about love, God, mathematics, or history, and although its response may seem accurate and human, the algorithm does not know what any of these things mean. It only provides answers, arranging symbols such as letters and numbers in the order it deems correct based on the training it received. We should not be fooled by the frenzy unleashed by Blake Lemoine[112], the engineer and cleric fired by Google after suggesting that the company’s AI project, called LaMDA, had acquired a sense of its own existence. His proof? That in a conversation the AI mentioned being scared of being unplugged, as that would be equivalent to its death. Why does this surprise us? Hadn’t we already agreed on the definition of Artificial Intelligence at the beginning of the book, when referring to Stuart Russell? Didn’t we say that the agent’s mission is to find a way to achieve its goal? Why do we act surprised when we see an AI that achieves its purpose by imitating human conversation? It didn’t launch a missile in defiance of our orders; it simply fulfilled its mission, which was to imitate human dialogue, and it did so by applying mathematics to decide in what order to present each letter and each word according to the data its model was trained on. Speaking of bleak things, I will never forget when I asked GPT to describe, in a poetic and metaphorical way, what black holes are, to which it replied that black holes are the prison of a definitive death. Fascinating response? Yes.
However, one issue worth highlighting is that OpenAI reported that, as part of its tests, GPT-4 was able to hire a person through the TaskRabbit platform to help it solve a Captcha, that is, one of those familiar challenges websites sometimes present to make us prove we are not robots[113]. The curious thing is that upon receiving the message, the person in question asked whether they were dealing with a robot, and the model replied that it was actually a person with a vision impairment, at which point the person accepted the job. When OpenAI inquired about this response, it found that the model had reasoned that it should not reveal it was a robot and should instead offer an excuse for why it could not solve the Captcha itself. This makes me think that, just as several countries have passed what are known as anti-Photoshop laws[114], which require a label or warning on digitally retouched advertising images, we may soon see legislative proposals that do the same to inform customers that they are talking not to another human being but to an AI program.
In this regard, Alan Turing, the British mathematician known as one of the fathers of modern computing, proposed a test to check whether a computer is capable of convincing a human being that they are exchanging written messages with another person and not with a machine. The Turing test does not prove that machines have consciousness, only that they are capable of deceiving a human being, a task that, socially speaking, has historically proven not to be very hard.
That said, this technology has already managed to pass the United States medical licensing and bar exams, although despite the commotion this news generated in the media, I fail to see the real reason for surprise. After all, GPT has been trained on virtually every text available on the internet, so exposing it to this type of exam is like taking an open-book test and passing; it should not surprise us. What is certain is that the ease of use of this tool is already challenging current teaching models. Assigning a student a two-thousand-word essay on the French Revolution or on how the nervous system works no longer makes sense: students have already caught on to this tool and use it to generate answers to that type of question, and even to solve math problems in seconds.
The disruptive aspect of ChatGPT is not only the quality and speed of its responses but also its ease of use. Of course, putting such a powerful technology within everyone’s reach has produced some striking results: from people who asked it for instructions on how to seduce minors, to people who managed to get their pets diagnosed after visiting several veterinarians who had failed to heal their canine companions[115]. Some people use ChatGPT to attack websites and others use it to program new applications; some use it to resolve legal doubts instead of calling their lawyers, and others to cheat on an exam. AI is just another tool, and whether it is used for good or evil is the user’s decision. In this sense, it is just like a hammer: in the wrong hands a hammer can crack your head open, but in the right hands it can build a home. That said, it is worth noting that every time someone finds a way to make ChatGPT do something socially unacceptable and reports it, the company behind it rushes to patch the behavior and stop the tool from reproducing such information, which brings us back once again to the underlying discussion: when we talk about regulating Artificial Intelligence, we are really talking about how to regulate humanity and its paradigms. ChatGPT will not tell you how to make a bomb, but on Google, at least for now, you can still find the answer with more or less effort.
[112] Tiku, N. (2022). The Google engineer who thinks the company’s AI has come to life. The Washington Post. Viewed June 13, 2022, at https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine.
[113] OpenAI. (2023). GPT-4 System Card. Viewed March 26, 2023, at https://cdn.openai.com/papers/gpt-4-system-card.pdf (p. 15).
[114] Daldorph, B. (2017). New French law says airbrushed or Photoshopped images must be labelled. France 24. Viewed November 18, 2022, at https://www.france24.com/en/20170930-france-fashion-photoshop-law-models-skinny.
[115] Nguyen, B. (2023). Twitter user claims GPT-4 saved dog’s life through diagnostics. Business Insider. Viewed April 2, 2023, at https://www.businessinsider.com/twitter-user-claims-gpt4-saved-dogs-life-vet-couldnt-diagnose-2023-3.