Is ChatGPT Sentient? by John R. Patrick
Written: February 2023
The level of interest in AI, and specifically in ChatGPT and similar AI tools, is quite high. I plan to switch back to some important healthcare technologies, but for now I hope I can help readers understand what is going on with AI. I will start with explanations of some of the basics.
The new generative AI tools, including OpenAI’s ChatGPT, Microsoft’s BingGPT, and Google’s Bard, use what is called a large language model. Generative means that when you ask a question, an algorithm generates a response rather than a list of links as with traditional search. Large language models are very sophisticated algorithms. Algorithms range from very simple to very deep and complex. A simple algorithm would be Area = π × r², used to calculate the area of a circle.
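To make the idea of an algorithm concrete, the circle-area formula above can be written as a tiny program; this is a minimal illustration, not anything from ChatGPT itself:

```python
import math

def circle_area(radius: float) -> float:
    # The simple algorithm mentioned above: Area = pi * r^2
    return math.pi * radius ** 2

print(circle_area(2.0))  # about 12.566
```

Every algorithm, from this one-liner up to a large language model, is the same in kind: a recipe of steps a computer follows; they differ enormously in scale.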
An example of a very large algorithm is the one used in Google’s search engine. The search algorithm is a complex process involving several steps, including crawling and indexing webpages, analyzing the relevance of the content to a user’s search query, and ranking the results based on a number of factors. It is estimated that the Google algorithm includes over 200 ranking factors and uses machine learning (another AI technology) to analyze and interpret the vast amount of data involved in the search process. The exact details of the algorithm are closely guarded by Google to prevent abuse or manipulation.
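The idea of ranking results by many weighted factors can be sketched in a few lines. The factor names and weights below are invented for illustration; Google’s real factors and their weights are secret:

```python
# Hypothetical sketch of ranking pages by a weighted sum of factors.
# These factor names and weights are made up; Google's are not public.
pages = [
    {"url": "a.com", "relevance": 0.9, "freshness": 0.2, "links": 0.5},
    {"url": "b.com", "relevance": 0.7, "freshness": 0.9, "links": 0.8},
]
weights = {"relevance": 0.6, "freshness": 0.1, "links": 0.3}

def score(page):
    # Combine each factor, scaled by its importance, into one number
    return sum(weights[factor] * page[factor] for factor in weights)

ranked = sorted(pages, key=score, reverse=True)
print([p["url"] for p in ranked])  # highest-scoring page first
```

The real system combines hundreds of such signals, with machine learning tuning the weights, but the principle of scoring and sorting is the same.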
ChatGPT and Google search are both fed a lot of webpages for their algorithms to read. Google updates its database of pages regularly. ChatGPT built its database from pages as they existed in September 2021. Eventually, as computer power and Internet speed continue to grow, ChatGPT will probably use a real-time database of all webpages. How many webpages are there? Nobody knows, because it is continually changing as websites and pages are added and retired. A recent estimate put the number at nearly two billion websites. A website can have an unlimited number of webpages. It is safe to say there are many billions of webpages on the Internet.
A large language algorithm is designed to understand questions and generate an easy to read, grammatically correct response. The algorithm is trained on vast amounts of text data and uses statistical techniques to learn the patterns, relationships, and meanings of words and sentences. Based on this understanding, the algorithm can generate new text by predicting the next words in a sentence based on the patterns it has learned.
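The core idea of predicting the next word from patterns in training text can be shown with a toy example. Real large language models use neural networks trained on billions of words; this simple counter of word pairs only illustrates the concept:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction, not a real language model.
training_text = "the cat sat on the mat and the cat slept"
words = training_text.split()

# Count which word follows which in the training text
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Predict the word that most often followed this one in training
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

A large language model does something conceptually similar at vastly greater scale, weighing whole sentences of context rather than a single preceding word.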
Sometimes the response is not correct or entirely relevant. This can happen because the algorithms are not perfect, but also because the question was not stated well. Axios reported trainers and educators are gearing up to help industries teach workers how best to use the new technologies. New startups have arisen, such as PromptHero and Promptist, specifically to help a user find the magic words which can lead to the optimum response. An online market called PromptBase has emerged to enable users to buy and sell prompts which cause ChatGPT and its peers to generate good responses. Axios said a host of new “prompt engineer” jobs have opened, and job seekers are adding those two words to their resumés.
In addition to the issue of optimizing the prompt as input to the AI chats, there are many other limitations and even dangers. The generative technology can spew out a tsunami of misinformation. Regulators, policymakers, and tech executives have been slow to face the dangers of misinformation. Misinformation abounds on social media, and the large language models scoop it up and potentially proliferate it.
Axios reported, “Generative AI programs like ChatGPT don’t have a clear sense of the boundary between fact and fiction. They’re also prone to making things up as they try to satisfy human users’ inquiries.” The biggest threat may be bad actors spreading false narratives. Over time, the bad actors may teach lies to the algorithms, which then spread them.
The tech giants urge users to provide feedback. Tech firms are trying to get ahead of regulatory action by developing their own tools to detect falsehoods and using feedback to train the algorithms in real time. Microsoft said user feedback may make ChatGPT “behave”.
Will ChatGPT and the others eventually become sentient, meaning able to perceive or feel things? The tools are not sentient today, but people may begin to think they are. Replika is an AI-powered chatbot designed to simulate human conversation and provide emotional support to users. Over time, a Replika learns to mimic the user’s speech patterns and preferences. Replika can be accessed through a mobile app and is intended to provide a safe and private space for users to express themselves, receive empathy and validation, and work through personal issues. It can also be customized with different personalities and conversation styles to better match the user’s preferences. More than 10 million users have created their own Replikas, for which they pay a monthly fee.
Axios said AI “is the most important tech breakthrough since at least the iPhone and perhaps the internet itself.” I agree, and there are many benefits, but some of what is being developed will seem like science fiction, and some of it is scary. Regulators and policymakers should be concerned about the spread of misinformation and bias. Unfortunately, our leaders have been unable to solve much simpler problems than the risks of AI becoming sentient or even superhuman.
Epilogue: I wrote Robot Attitude: How Robots and Artificial Intelligence Will Make Our Lives Better to explain robots and AI in layperson’s terms. I provided many examples of how these technologies can improve productivity and safety in many areas, including healthcare, home healthcare, farming, finance, and insurance. If you are interested, the book is available in Kindle, paperback, hardcover, and Audible. If you would like to try the Audible version, I have some promo codes available. The code lets you listen for free, and no sign-up is required.