ChatGPT from OpenAI is being hailed as a revolutionary AI tool. But specialists say that is far from the truth.

ChatGPT, a new AI chatbot from OpenAI, a software company specialising in artificial intelligence (AI), has ignited a new discussion about what this technology really is.

ChatGPT from OpenAI is seen as a turning point for conversational AI. But experts warn that there is still more to be done. (Screenshot courtesy of ChatGPT)

“ChatGPT is an AI chatbot created to be a thoughtful conversation partner. Despite its outstanding capabilities, it is not the best AI tool currently on the market. In this post, we will examine ChatGPT’s shortcomings and the potential merits of alternative AI technologies.” These opening sentences were written by ChatGPT itself, the popular AI chatbot developed by OpenAI, a technology company specialising in artificial intelligence, which has reignited the conversation about what this technology means.

Some claim that ChatGPT signals the end of Google and schoolwork. However, experts in large language models (LLMs) and AI warn that this is far from the case.

Believing in and relying on information produced by ChatGPT can be risky. “At its core, this tool (or many others like it) don’t really have an understanding or knowledge of an area like a human expert would,” Chirag Shah, Professor at the University of Washington and Co-Director of Responsibility in AI Systems & Experiences (RAISE), said in an email to IndianExpress.com.

ChatGPT’s evaluation of the Xiaomi 12 Pro. Some of the details, such as those about the processor and camera, are incorrect.

This becomes clear when you start looking for context, and even accuracy, in some of ChatGPT’s responses. As an illustration, this author asked ChatGPT to produce a review of a device like the Xiaomi 12 Pro. The piece itself reads well; however, much of the information it contains is inaccurate.
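For readers who want to repeat this kind of test programmatically rather than through the chat interface, here is a minimal sketch using OpenAI’s Python SDK. The model name and prompt are illustrative, and an `OPENAI_API_KEY` environment variable is assumed:

```python
from openai import OpenAI

# The SDK reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# Ask the model to write a product review, as in the experiment above.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Write a review of the Xiaomi 12 Pro."}
    ],
)

# The output reads fluently, but every factual claim (processor, camera,
# pricing) still has to be checked against official specifications.
print(response.choices[0].message.content)
```

The verification step is the point of the exercise: fluent output is not evidence of factual accuracy.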

Shah noted that while ChatGPT is a “major improvement” and “able to use the newest breakthroughs in generative models (particularly, GPT-3.5) and natural language generation,” a number of difficulties remain. ChatGPT cannot yet replace “people for writing occupations — not while we question purpose, authoritativeness, and provenance,” he said, though it can serve as “a starting point for many similar projects.”

Arvind Narayanan, a professor of computer science at Princeton University, advises users to keep in mind that material should not be taken at face value just because it sounds authoritative. Checking sources has always been crucial, and AI tools have made it even more so.

In a blog post, Narayanan went further, calling ChatGPT a “bulls***-generator” and warning that “you can’t identify when it’s wrong unless you already know the answer.” He cited tests he conducted with co-author Sayash Kapoor, in which they asked ChatGPT “some basic information security questions”; the responses sounded plausible but were completely unreliable.

This “parroting” of ideas is also clear when you ask the chatbot to write about topics such as why a parliamentary democracy or a republican system is superior. The essays (screenshot below) may seem flawless, but on closer reading they lack authority: the reasoning was the same for both subjects.

It used the same justifications for both republican and parliamentary systems when asked to compare them in an essay.

According to Emily M. Bender, a professor in the University of Washington’s department of linguistics, the system is designed simply to generate more words based on the words in the prompt. “Making these models larger and larger while asserting that the increases in apparent coherence and fluency are steps towards ‘real AI’ or ‘linguistic understanding’ is analogous to going faster and faster while asserting that teleportation is on the horizon,” she said.
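Bender’s description refers to autoregressive generation: at each step, the model samples a plausible next word given the words so far, with no notion of truth anywhere in the loop. The toy sketch below makes the mechanism concrete; the bigram table is invented purely for illustration, and a real LLM learns such distributions over subword tokens with a neural network rather than a lookup table:

```python
import random

# Toy next-word probabilities, invented purely for illustration.
bigram = {
    "the":   {"phone": 0.6, "camera": 0.4},
    "phone": {"has": 0.7, "is": 0.3},
    "has":   {"a": 1.0},
    "a":     {"great": 0.5, "fast": 0.5},
    "great": {"camera": 1.0},
    "fast":  {"processor": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Extend the prompt one word at a time, sampling from the table."""
    words = prompt.split()
    for _ in range(max_words):
        dist = bigram.get(words[-1])
        if dist is None:  # no known continuation: stop generating
            break
        # Sample the next word in proportion to its probability. Nothing
        # here checks whether the resulting sentence is actually true.
        candidates, weights = zip(*dist.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the phone"))  # e.g. "the phone has a fast processor"
```

Whether the generated claim is accurate sits entirely outside this loop, which is why fluency and truthfulness come apart.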

While adding context to conversations with AI may someday be possible, RAISE’s Shah asserts that it will be considerably more difficult to add accountability to these responses. “Alexa challenged a 10-year-old by asking him to touch a live plug with a penny. A person would have recognised that was improper behaviour. It’s unclear whether and when AI systems will be able to have a general feeling of responsibility,” he said.

The most serious ethical issues

Additionally, there are worries about biases in the bot’s responses, problems that plagued earlier systems such as Microsoft’s Tay chatbot and Meta’s Galactica. In both instances the models failed, and the companies had to pull them from public access. Tay, in particular, gained notoriety for abusive tweets and remarks that were both racist and misogynistic.


In a Twitter thread, Steven T. Piantadosi, professor at UC Berkeley and director of the Computation and Language Lab (Colala), shared examples of how ChatGPT exhibited many of these biases; in many of the responses, the bot appeared to value white men more highly than people of other races. OpenAI’s founder also posted in the thread, encouraging people to downvote such responses.

“I believe that the assumptions made by those who create the training objectives and algorithms, as well as the training data sets themselves, are what lead to the biases in these systems,” Piantadosi said, adding that although it is a very challenging issue to resolve, it must be addressed before these systems are deployed in practical applications.

In addition, there is a risk that these tools could be used by dishonest people to spread false information, such as by “populating message boards for recruiting extremists, producing fake reviews (positive or negative, on demand), and generally polluting information sources,” according to Professor Bender of the University of Washington.

However, as others have highlighted, OpenAI’s approach to developing the model differs from that of other companies. For instance, Shah cited how the company relied on manual annotations, with its own staff members serving as the chatbot’s AI trainers, though he cautioned that there is no guarantee of the accuracy or objectivity of the responses. And while OpenAI has trained the bot to refrain from saying inappropriate things, Narayanan claimed that the filter it uses “frequently fails.”

Additionally, there is the issue of “credit,” and the question of how these models use data. Unlike Google, ChatGPT does not link back to the websites or articles it draws on for training. This is made trickier by the fact that generative models like ChatGPT and DALL-E produce new material rather than presenting existing content; except, as Shah pointed out, that this capacity for invention relies heavily on what already exists.

~Solvingdad.com
