If you search the internet today for anything related to AI, articles and conversations about ChatGPT and its counterparts dominate the landscape. There is plenty to be said about language-model AIs using some combination of algorithms and human-set parameters to respond in a way that sounds human. What we will attempt to do instead is to go further back than that: to establish the basics of how AI works.
To begin with, why should you really care? Well, you shouldn’t necessarily, but there is value in having some context when so many people are going to talk about a given topic. There is even more value to that context when many of those people are going to try to sell you on the idea of its value, contribution, or primacy. There are some serious grifters with a “we have AI that does X,” or “AI will do Y,” pitch right now. Most of the time there is some degree of AI involved, but the gap between their pitch and reality is often measured in yards, not inches. So, let’s dig in and consider AI.
What is AI?
Artificial Intelligence is, for the most part, what it sounds like: an algorithm that makes decisions intelligently (intelligence) without human action in the moment (artificial). There is a lot in that simple description that can be opaque. For example, why add “in the moment” as a modifier to the lack of human action? It’s because humans wrote the script of what the AI can consider, the parameters the AI can operate within, and the procedural method for acting in response to anticipated inputs. The AI is not thinking, it is not independent; it is an automation tool. So, while a human did not take the action, a person gave the AI the instructions on how to act within a defined realm of possibilities. No thinking, no independence, no doing a thing it was not built specifically to do – the technical term for this is narrow AI.
Many people have asked the question, “Is ChatGPT a precursor to independent AI, to humanity reconsidering what it means to be human?” No, it’s not. ChatGPT (and almost all AI) is what we will call second-generation from this point on. That just means it is making yes/no or if/then decisions in succession. There are some third-generation AIs out there already, but they are few and far between. Why neither it nor even a third-gen AI is the thing that makes us reconsider personhood definitions requires some base understanding of AI. We will come back to this question, but first let’s cover more of the fundamentals.
“An algorithm that makes decisions intelligently without human action in the moment.” AI makes decisions intelligently; it is considering potentially millions of inputs and selecting – usually instantly – the appropriate response or action. The way it makes decisions is fascinating and worth a whole conversation, but for this piece we are going to keep it short. The AIs we have today can make decisions in one of three ways: 1) direct – think yes or no, if and then; 2) decision tree – yes/no, if/then decisions in a consecutive series; and 3) looped decisioning – a decision tree with a return flow that brings the response to the action back to append the data set. This is what creates the generational reference from earlier – 1st, 2nd, and 3rd, respectively. For perspective, it will be at least a fifth generation that becomes independent. This idea is better illustrated in the ChatGPT section, where we can add some practical applications to the generations.
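The three decisioning styles can be sketched in a few lines of code. This is a minimal illustration with hypothetical function names and a made-up thermostat scenario, not any real AI system’s implementation:

```python
# Hypothetical sketch of the three decisioning styles described above:
# direct (1st gen), decision tree (2nd gen), and looped (3rd gen).

def direct_decision(temp: float) -> str:
    """1st gen: a single if/then decision."""
    return "heat on" if temp < 68 else "heat off"

def decision_tree(temp: float, occupied: bool) -> str:
    """2nd gen: yes/no decisions in a consecutive series."""
    if not occupied:
        return "heat off"  # first decision: nobody home, stop here
    return direct_decision(temp)  # second decision in the series

history: list[float] = []  # the data set that grows over time

def looped_decision(temp: float, occupied: bool) -> str:
    """3rd gen: a decision tree with a return flow that appends the data set."""
    action = decision_tree(temp, occupied)
    history.append(temp)  # the loop: outcomes feed back into the data
    # Future decisions could consult `history` to adjust thresholds.
    return action
```

Each generation simply layers more structure onto the one before it: the tree chains direct decisions, and the loop wraps the tree with a feedback step.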
“An algorithm that makes decisions intelligently without human action in the moment.” An algorithm is just a word describing the instructions a human coded in on how to respond to specific inputs. “Specific,” because if an input falls outside of those possibilities, most AIs have no mechanism for responding and will return an error. Think of it like this: you have a fully factory-restored 1967 Corvette 427 L88, and you need to fuel it up. You have a thousand gigawatts ready to go… doesn’t work, does it? It is an input that the Corvette was never given the ability to process. This is why this type of AI is called “narrow” – an input it is not prepared to deal with shuts down the decisioning capability.
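In code, that narrowness looks like a fixed table of handlers: anything not in the table has no procedure behind it. A hypothetical sketch of the Corvette example, with invented names throughout:

```python
# Hypothetical sketch of "narrow" behavior: inputs outside the
# anticipated set have no handler and produce an error.

FUEL_HANDLERS = {
    # The only input the system was built to process.
    "gasoline": lambda amount: f"fueled with {amount} gallons",
}

def fuel(input_type: str, amount: float) -> str:
    """Respond to an input, but only if a procedure was written for it."""
    handler = FUEL_HANDLERS.get(input_type)
    if handler is None:
        # No mechanism for this input: the decisioning shuts down.
        raise ValueError(f"no procedure for input: {input_type}")
    return handler(amount)
```

`fuel("gasoline", 20)` works; `fuel("gigawatts", 1000)` raises an error, because no human ever wrote that procedure in.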
Some will say that we could write in procedures for dealing with new inputs, or ambiguity, and this is certainly true; we almost certainly will as AI continues to evolve. When we get to a point where we have done this and it does not just error out, we are getting closer to that conversation about humanity. Before we can go there, though, we must push AI’s boundaries – and I do not mean the “boundaries of what is possible,” but literally its boundaries. AI can do some amazing things, but it cannot do what is not within its boundaries. As an example, if you ask ChatGPT to do your taxes, or predict the weather tomorrow, or buy media, it can tell you how, but it cannot do it itself. AIs do all those things today, but not ChatGPT – because it is outside the boundaries for which it was built.
ChatGPT’s primary contribution on the march to independent AI is synthesizing information. It considers what is written or created across the web on a given topic (a feat of sheer processing capability without a doubt), and then compresses that information into responses to questions from the public. This is the combination of two important things. First, it has made a huge leap in input processing. It is considering a mountain of information and then distilling that information based on updatable parameters. ‘Write me an article about the next evolution in self-driving cars.’ This would consider available content from across the web and respond with an article – a task that is mostly straightforward, but again, a feat of processing.
Now we add “- in the style of Maya Angelou.” From here it is considering two types of input that influence the outcome. We could also add “- with fewer than a thousand words.” The parameters can just keep growing, and in this we can see the second innovation: a set of instructions for dealing with inputs rather than the inputs themselves. The durability of this part of the claim has come under some fire, however, due to the examples of bias in responses that have been shown. There have also been some strange linear rabbit trails these language-based AI models can be sent down, like the recent, well-publicized conversation the Bing AI had with Kevin Roose.
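The distinction between inputs and instructions about inputs can be pictured as a request object whose parameter set keeps growing. This is a hypothetical sketch – the names and structure are invented for illustration, not how ChatGPT actually represents requests:

```python
# Hypothetical sketch: a topic (the input) bundled with a growing
# set of instructions for how to handle it (the parameters).

def compose_request(topic: str, **params: object) -> dict:
    """Combine the input itself with instructions about the input."""
    request = {"topic": topic}
    request.update(params)  # the parameters can just keep growing
    return request

req = compose_request(
    "the next evolution in self-driving cars",
    style="Maya Angelou",  # a second type of input: an instruction
    max_words=1000,        # another layered constraint
)
```

Each added keyword is not more content to synthesize but another rule shaping how the synthesis is performed.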
We said earlier in this conversation that ChatGPT is just a second-generation AI. To expand on that statement, let’s further explore the idea of second-gen AI, or “direct decisioning plus a decision tree.” Your coffee maker is probably a first-gen AI – if clock = 5:00 a.m., then start brew cycle. A second-generation AI would layer at least one more sequential step into the process. So let’s add that you forgot to empty the pot last night when you set the auto-brew. If your coffee pot had a second-gen AI, it would immediately change the brew cycle to off if coffee pot = full. This is what we are seeing with ChatGPT: it is processing a number of tasks and responding to them in a series. What it is not doing – and what would make it a third-gen – is considering how the responses it gives are received, and then taking that input to adjust the response the next time it answers the same question. But even if it were doing that, it would still not be on its way to being a person. In the last section we can explore why.
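The coffee-maker example maps cleanly to code. A minimal sketch with hypothetical names, showing how the second generation just layers one more sequential check onto the first:

```python
# Hypothetical coffee-maker sketch: 1st gen is a single rule;
# 2nd gen adds one more sequential decision before acting.
import datetime

BREW_TIME = datetime.time(5, 0)  # the human-set parameter

def first_gen(now: datetime.time) -> str:
    """1st gen: if clock = 5:00 a.m., then start brew cycle."""
    return "brew" if now == BREW_TIME else "idle"

def second_gen(now: datetime.time, pot_full: bool) -> str:
    """2nd gen: the same rule, behind one additional check."""
    if pot_full:
        return "off"  # a full pot overrides the timer
    return first_gen(now)
```

Note that nothing here learns: the second generation is still just human-written rules evaluated in order, which is the point of the section.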
When Is AI a Person?
Most people are familiar with the Turing test, which essentially asks: can a human tell it is talking to a machine? This idea of considering humanity for AI has been explored in many books and movies – I, Robot; Do Androids Dream of Electric Sheep? (Blade Runner); Ex Machina; Terminator; etc. – but in Ex Machina the tech genius Nathan makes this statement: “If I hid Ava from you so you could just hear her voice, she would pass for human. The real test is to show you that she’s a robot and then see if you still feel she has consciousness.” His sentiment that we are going to get to a place where we can fool a human is undeniable; we are almost there now. The interesting thing is that his commentary is beginning to show how limited this type of test is in determining a machine’s personhood.
The really pertinent question will be, “Can we get the AI to see itself as a person?” Self-awareness is the final test because it implies several leaps we have yet to make: 1) that the AI is not bound by its original set of parameters (that it is “general” AI [vs. narrow]); 2) that the AI can update and overwrite its own decisioning processes*; 3) that the AI can pass as a person (not by how it looks); and 4) that the AI has defined ‘person’ and adds itself to the category – that it has the capacity for thoughtful consideration. When an AI is doing all of these things is when we really must ask ourselves, “What have we created?” The future of creating a new type of person is not a question simply for science, but one that includes science, philosophy, and, in some respects, morality.
*Note: #2 does not exactly correlate to “black box” development of AI, where the AI is writing new sections of its own, or another AI’s, code that is opaque – even to its creators. This is worth discussing at length, but again, it is not the intent of this piece.
So, AI is not ‘there’ yet, but it is an incredibly useful tool that permeates our daily lives to a degree to which most of us are oblivious. It is growing and evolving as we speak, working its way through generational improvements like a truly viewable Darwinian experiment – or would that be intelligent design given that there are designers involved… yet another idea we can hold for a separate piece.
Understanding AI is what makes it both less and more concerning, more useful, and maybe less awe-inducing when someone shows up in our inbox or at our businesses selling us on the value of their AI. It is an automation tool for now, and it should be treated as such. Considering the personhood question can hold until we can communicate with an AI that believes itself to be a person.