The Chatbot That Is Dangerously Good
ChatGPT—the AI-powered chatbot—has taken the tech world by storm. Launched as a prototype and made available for public testing on 30 November 2022, it has generated quite a buzz. In less than a week, it gathered one million users. People have been amazed and amused by its almost human responses on a wide range of topics. It has produced poetry, Shakespeare-like prose, software code and medical prescriptions. Teachers and educators are alarmed at students using ChatGPT for their assignments. News outlets have excitedly announced that ChatGPT has passed law, medicine and management exams (though the last is hardly a sign of intelligence).
ChatGPT’s abstracts for medical research journals have fooled scientists into believing humans wrote them. Scores of articles have announced the coming demise of professions ranging from journalism and content creation to law, teaching, software programming and medicine. Companies such as Google and Baidu, feeling threatened by ChatGPT, have rushed to announce their own AI-powered chatbots.
The core technology behind ChatGPT is an AI model called GPT-3, or Generative Pre-trained Transformer 3. The “generative” qualifier means GPT belongs to a class of AI algorithms capable of generating new content, such as text, images, audio, video and software code. “Transformer” refers to a new type of AI model first described in a 2017 paper by Google researchers.
Transformer models learn the context of words by tracking statistical correlations in sequential data, such as the words in a sentence. They represent a seismic shift in the AI field and have led to significant advances. In a 2021 paper, researchers at Stanford called these “foundation models” and wrote that their “sheer scale and scope...over the last few years have stretched our imagination of what is possible”.
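To make the idea concrete, here is a minimal Python sketch of the “self-attention” operation at the heart of transformer models, which lets every word in a sequence weigh its relationship to every other word. The dimensions are toy values chosen for illustration; real models add learned projection matrices, many attention “heads” and many stacked layers:

import numpy as np

def self_attention(X):
    """X: a (sequence_length, embedding_dim) matrix, one row per word."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # how strongly each word relates to every other word
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row becomes a probability distribution
    return weights @ X                             # each word becomes a weighted mix of all the words

X = np.random.randn(3, 4)                          # three "words", each a 4-dimensional vector
print(self_attention(X).shape)                     # (3, 4): same shape, now context-mixed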
Currently, the most popular AI models use “neural networks”, a term that conjures images of an artificial brain built out of computers. In reality, even with massive advances in computer hardware and chip density, we are nowhere near simulating the human brain. Instead, artificial neural networks are best thought of as a series of mathematical equations whose “weights”, or constants, are tweaked to effectively perform logistic regressions. In a way, AI models use “training data” to carry out elaborate curve-fitting exercises. Once trained, the equations of the “curves” that fit the training data are used to predict or classify new data.
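The toy sketch below illustrates this curve-fitting view: gradient descent repeatedly nudges two “weights” until a straight line fits some noisy data. The data and learning rate are invented for illustration; a real neural network runs essentially the same loop over billions of weights:

import numpy as np

x = np.linspace(0, 10, 50)
y = 3.0 * x + 7.0 + np.random.randn(50)    # noisy "training data" scattered around y = 3x + 7

a, b = 0.0, 0.0                            # start with arbitrary weights
lr = 0.01                                  # learning rate: how big each nudge is
for _ in range(2000):
    error = (a * x + b) - y
    a -= lr * (2 * error * x).mean()       # nudge each weight in the direction that shrinks the error
    b -= lr * (2 * error).mean()

print(f"fitted: y = {a:.2f}x + {b:.2f}")   # ends up close to the true 3x + 7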
Before transformers, AI models had to be “trained” on datasets labelled by humans. For example, a vision AI model would be trained on millions of images, each manually labelled as showing, say, a cat, a person, a mountain or a river. This labour-intensive process limits the data on which an AI can be trained. Transformer models escape this limitation through unsupervised or self-supervised training, i.e., they do not need labelled datasets. This means they can be trained on the vast troves of images and text available on the Internet. Such AI language models are also called “large language models” due to the sheer volume of data on which they are trained. The models develop statistical correlations between words and parts of sentences that appear together.
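A toy sketch shows why no human labelling is needed for language models: the training “label” for each context is simply the next word, which raw text supplies for free (the sentence here is, of course, just an example):

text = "the cat sat on the mat".split()

training_pairs = [
    (text[:i], text[i])            # (context so far, next word)
    for i in range(1, len(text))
]
for context, target in training_pairs:
    print(f"context={context!r} -> target={target!r}")

Every sentence scraped from the Internet yields such pairs automatically, which is what lets these models train on enormous unlabelled corpora.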
GPT-3 is a transformer-based, generative, large language model. Once trained, given a sequence of words, it can predict the next likely word. More precisely, it predicts a probability distribution over possible next words, and the next word is picked randomly according to this distribution. The more data the model is trained on, the more coherent its output becomes, to the point where it produces passages that are not just grammatically correct but sound meaningful. That does not, however, mean it can produce factually accurate or appropriate passages.
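The sketch below illustrates this generation step with a made-up four-word vocabulary and made-up probabilities; because the draw is random, the same prompt can yield different continuations on different runs:

import numpy as np

vocab = ["mat", "moon", "sofa", "equation"]
probs = np.array([0.70, 0.05, 0.20, 0.05])   # the model's distribution after, say, "the cat sat on the"

rng = np.random.default_rng()
next_word = rng.choice(vocab, p=probs)       # usually "mat", occasionally something else
print(next_word)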
GPT-3.5 is a derivative of GPT-3 that uses supervised learning to fine-tune its output. Humans rate and correct the output of GPT-3, and this feedback is incorporated back into the model. It has been reported that OpenAI outsourced the work of labelling tens of thousands of text snippets. The model thus gets trained to produce outputs that do not contain obvious falsehoods or inappropriate content—at least so long as humans have rated or corrected the general category of topics the model is asked to respond to.
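As a highly simplified sketch of how human ratings can steer a model, the toy code below trains a small “reward model” so that outputs humans preferred score higher than outputs they rejected. The features and numbers are invented for illustration, and this shows only the general pairwise-preference idea, not OpenAI’s actual pipeline:

import numpy as np

def reward(weights, features):
    """Toy reward model: a weighted sum of hand-made output features."""
    return weights @ features

# Each comparison: (features of the output a human preferred, features of the rejected one)
comparisons = [
    (np.array([1.0, 0.2]), np.array([0.1, 0.9])),
    (np.array([0.8, 0.1]), np.array([0.3, 0.7])),
]

w = np.zeros(2)
lr = 0.5
for _ in range(100):
    for preferred, rejected in comparisons:
        # Probability the reward model agrees with the human preference
        p = 1 / (1 + np.exp(-(reward(w, preferred) - reward(w, rejected))))
        w += lr * (1 - p) * (preferred - rejected)   # push the preferred output's score above the rejected one's

print(w)   # these weights now score human-preferred outputs higher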
ChatGPT is the adaptation of GPT-3.5 for chatting with humans. It is a significant advance over previous generations of AI-powered chatbots, which explains the excitement it has generated. But ChatGPT is just one transformer model to have been deployed. Google, for example, has been using a transformer model called BERT to understand user queries in Google Search since 2019. In the future, transformer-based models could assist in content creation and curation, in search, and as research aids in fields from software programming to drug discovery (through AI-based protein-folding simulations). We must, however, understand that we are in the infancy of this new technological leap, and much more research is needed to bring some of these promises and possibilities to fruition.
OpenAI has proclaimed it has a path to the holy grail of AI—Artificial General Intelligence or AGI—which refers to machines developing human-like intelligence. Though transformer models are a significant advance in AI, we should be wary of such a tall claim. Many have reported that ChatGPT provides incorrect responses, or gibberish that sounds superficially meaningful. A doctor has reported ChatGPT making an impressive medical diagnosis alongside an odd claim that seemed wrong. When asked, it provided a reference from a reputed journal, naming authors who had contributed to that journal. The only problem was that the reference didn’t exist; ChatGPT had made it up out of thin air. This means it is not just unreliable but incapable of actually “understanding” as we humans do. It simply generates content according to its statistical model, which is very good at fooling us into believing it understands. We should guard against the hype about a path to AGI. Our intelligence and instincts are the result of hundreds of millions of years of evolution and more than a hundred thousand years of human societal development. It is unlikely that even very powerful machines with sophisticated learning algorithms, consuming all the text, images and sound humans have produced, would develop understanding or intelligence comparable to ours in any way. Machines can, however, learn to do specific tasks very well using the methods described above.
With this broad understanding, we can look at some issues with such models. Their power, to begin with, comes from the vast amounts of data they are trained on and the massive size of their neural networks. GPT-3 has 175 billion parameters, up from 1.5 billion for GPT-2. Running such massive models requires huge amounts of hardware: OpenAI’s training setup is estimated to have more than 10,000 GPUs and 285,000 CPU cores. Microsoft, which has invested around US $3 billion in OpenAI, has boasted that this is one of the largest supercomputer clusters, and much of its funding went towards the cost of using this setup. Microsoft is now in talks to invest another US $10 billion in the company. Such research is out of reach for most academic institutions and even most companies. Only the biggest digital monopolies, the likes of Google, Microsoft, Amazon and Facebook, or the companies they fund, can afford such a price tag and command access to the vast data from varied sources required to train these models. The development of, and the economic benefits from, such future technologies would accrue only to such companies, further entrenching their sprawling digital monopolies. Not just that: these companies are unlikely to share the data used to train the models or the rules used to filter that data. This raises ethical questions, since these models can become biased depending on the choices made with the data. We have examples from the past of AI models developing racial or gender biases. It is unlikely that these companies and their technology teams have the capability or the inclination to deal with such issues.
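A back-of-the-envelope calculation shows why the hardware bill is so large. Assuming 16-bit precision, i.e., two bytes per parameter (a common choice, though actual deployments vary), merely holding GPT-3’s weights in memory takes hundreds of gigabytes:

params = 175e9                    # GPT-3's reported parameter count
bytes_per_param = 2               # assumed 16-bit precision
memory_gb = params * bytes_per_param / 1e9
print(f"~{memory_gb:.0f} GB just to hold the weights")   # ~350 GB

A typical high-end GPU of the era held 40–80 GB, so even running the trained model requires many GPUs working in concert; training it requires far more.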
Beyond training-data selection, the models themselves are opaque. Even the people working on them do not fully understand how they work, what the parameters stand for, or what the parameter values correspond to in real life. These are giant statistical regression models, built on humongous amounts of data and allowed to “figure out” things unsupervised. Moreover, as the Stanford researchers noted, the models are foundational in that many different adaptations and applications could be built on them in unrelated fields. So even once fully developed, the models may work in the majority of cases, yet even these big companies will not anticipate, let alone account for, the fatal mistakes they can cause in any given field. Most academic institutions cannot afford such expensive hardware setups, and these companies are unlikely to make them available for academic research, so it will not be possible for third parties to test and review these models. So far, science and technology have progressed through peer collaboration. Considering the great harm social media algorithms have unleashed on democracies and societies through their filter bubbles and the proliferation of fake news and hate speech, we shudder to think of the harm these new models could do to communities.
But the more immediate concern is dealing with the extremely short-term outlook of the tech monopolies and of startups like OpenAI that they fund.
Transformer models are a critical leap in AI technology, but they are still in the research phase. Much more about how they work needs to be understood, and they need many generations of improvement before they can be deployed responsibly. Yet we already see efforts to commercialise ChatGPT immediately. OpenAI has announced a US $42-per-month “professional plan” for ChatGPT and is opening the model to developers through an API, paving the way for commercial offerings built on it. To alleviate educators’ fears, OpenAI has announced a service that would identify ChatGPT-produced writing. So, after creating the problem of plagiarism, OpenAI could make a killing by selling the solution to schools and universities worldwide!
Given the hype and the greed of venture capital, we already have half-baked offerings that promise to make customer-support operators, writers, artists, content creators, journalists, lawyers, educators and software programmers redundant in the near future. These will bring job losses and suffering to people working in these fields, and, given the premature deployment, the offerings will do a great disservice to the professions themselves. Journalism, for example, has been hollowed out worldwide, with news organisations making deep cost cuts to survive the digital era; AI-generated news and opinion may exacerbate this trend. Since these models generate superficially reasonable content, and many professions face cost pressures, managers will be tempted to bypass human oversight of AI-generated content, resulting in serious mistakes.
There are also ethical concerns around AI-generated art, literature and films. While such AI produces seemingly novel pieces of art, in effect it learns existing artistic styles and reproduces them without copying the content. It thereby bypasses the charge of plagiarism, but such artwork amounts to high-tech plagiarism nonetheless, as the models are incapable of creativity: they generate content based on learned prediction algorithms.
So, how should society address such concerns?
While transformer-based AI models represent a substantial advance in machine learning, they are still in the research phase, and there are many unanswered questions about the ethics of, and regulations for, their use. The unseemly haste and greed of their creators and funders can cause great damage to society. Governments must act quickly to prevent yet another boom-bust tech cycle and the harm such cycles bring. Given their foundational nature, governments should treat these technologies as public goods and set up public initiatives to fund their research and development, so that these promising technologies are safely and ethically developed and deployed for humanity’s greater good.