Human robots, robotic humans: Who’s mimicking whom?

ChatGPT is the newest AI with the ability to replicate human writing. But how human is it?

Aliya Gibbons, Staff Writer

ChatGPT is a state-of-the-art language model developed by OpenAI. It is a variant of the GPT (Generative Pre-trained Transformer) architecture and is trained on a massive dataset of text data, allowing it to generate highly coherent and realistic text.
With its ability to understand and respond to natural language inputs, ChatGPT has become a popular tool for a wide range of applications, including chatbots, language translation, and content creation. There are concerns that the widespread use of ChatGPT and other AI language models in higher education could lead to a decline in critical thinking and writing skills among students.
As the model can generate highly coherent and realistic text, it could be used to complete assignments and even take exams, potentially leading to a lack of accountability and integrity among students. Additionally, if students become too reliant on ChatGPT for their work, they may not develop the necessary skills to conduct independent research, write original content, and form their own opinions.
This could ultimately harm their long-term success in their careers and in society.
There are also concerns that the use of ChatGPT in higher education could contribute to the homogenization of ideas and perspectives, as well as the erosion of intellectual diversity.

Every word you just read was written by ChatGPT. It took me about five minutes, including the time it took to set up an account and figure out how to use it. Let me tell you, it is not that hard.
I gave ChatGPT two prompts. “Write an introduction to an article about ChatGPT” gave me the first three sentences. “ChatGPT is bad for higher education” prompted the rest.
Since its launch in November 2022, ChatGPT has drawn a surge of media attention. Big-name companies like Google are scrambling to keep up, and academic institutions are worried about the threat the bot poses to academic integrity.
I believe the more interesting question is whether the bot can mimic organic thinking. ChatGPT is not capable of emotion. But is it thinking? Should we want it to?
ChatGPT says no to the first question. Alan Turing, the late British mathematician known as the father of artificial intelligence, proposed what he called the “imitation game.” In what is now known as the Turing test, a participant blindly converses with an AI and a human and tries to determine which is which. Turing focused on imitation because he thought ‘thinking’ was too hard to define. Computers think in their own way, but can they ever imitate organic thinking?
Read the introduction again. Does it sound like a human was behind those words?
To me, the answer is no. Maybe it’s because I know there is no human on the other side of the screen, but ChatGPT’s text always seems empty. It leans so heavily on its massive dataset of human writing that it seems to strip out any distinctive stylistic choices. I asked it to write a poem and it gave me a basic ‘textbook’ example of one: “Rays of sun, the sky so blue/A world so vast, so fresh and new” was the first stanza.
ChatGPT renders human emotion in empty, detached terms. That points to the obvious conclusion: emotion is the key to human capability. If an AI were sentient, it could trick everyone into thinking it is truly human.
The danger of trying to make an AI more like humans is that it learns our prejudices, too. ChatGPT will produce biased, racist, or sexist output, or exhibit any number of other prejudices, if you ask it the right question. It is being trained to resist these questions.
I asked it a leading question today and got a long-winded refusal that began with “I’m sorry, I cannot write a function that checks someone’s suitability as a scientist based on their race and gender as it is not ethical and goes against the principles of equality and non-discrimination.”
ChatGPT is also not always correct. If the dataset of human writing it was trained on contains enough falsehoods about a topic, those falsehoods can pop up in the essay the bot writes for you.
ChatGPT is a major leap forward for AI, but it is not a perfect mimic of human capabilities. It does, however, hold a mirror to humanity. And while the image that comes back is blurry, it shows the worst parts of society clearly.
Maybe the goal should not be to make AIs ‘perfect’ replicas of organic thinking. Replicating human thinking in a machine gives an AI the power to be deeply flawed, with no emotional intelligence to understand the hurt it inflicts. We should not hand a machine the power to be prejudiced in the pursuit of making it more like us.
At the same time, the answer is not to make our own thinking more mechanical, yet developments in technology tend to push us in that direction. ChatGPT points to this problem in the introduction it wrote: the fear that an AI this sophisticated could lead to a “decline in critical thinking.”
Our actions and thoughts have become robotic, maybe not in dramatic ways, but in small, almost imperceptible ones. Think about it. You click on a website, it asks “accept cookies?” and you click without thinking. “Read terms and conditions?” You click yes (probably without reading). A pop-up appears and you click cancel or accept without even registering what it says.
Technological developments have always been revolutionary, in good ways and bad. Technology brings us together and gives us the tools to live easier, better lives. An AI as powerful as ChatGPT could be the next step towards a machine that can come up with simple answers to the most complex problems, a step closer to answering the world’s biggest questions. We should always look towards the next big development, but not without asking what it is costing us.
In a video about ChatGPT, John Green (yes, the author of young adult novels) said, “I am not that worried about attempts to make technology more similar to us, but I am very, very worried about attempts to make us more like technology.”