The quick read summary...
OpenAI has released a new chatbot, ChatGPT, to the public so that it can be tested and refined. It works like a search engine and can produce articles, essays and even programming code, potentially putting jobs such as journalism and coding at risk. There are downsides: it cannot tell the difference between fact and fiction, and could spread fiction as fact.
A new chatbot made by OpenAI is set to revolutionise internet search engines, but it's not without risk.
What is this experimental chatbot?
Its name is ChatGPT. It's a chatbot that works like a search engine, with the potential to write essays, dissertations, news articles, web content and code. Scary, isn't it?
ChatGPT is still in development and, as people start to use it, it will use artificial intelligence to learn and improve its answers. Writing articles, essays and programming code are just some of its superpowers. It is being developed by OpenAI, an artificial intelligence research firm that has been building an AI-powered chatbot/search engine designed to take the usual chatbot answers to the next level.
Sam Altman, OpenAI's CEO, said that ChatGPT is "an early demo of what's possible": "Soon you will be able to have helpful assistants that talk to you, answer questions and give advice. Later you can have something that goes off and does tasks for you. Eventually you can have something that goes off and discovers new knowledge for you."
What does the 'GPT' stand for?
It stands for Generative Pre-Trained Transformer. The chatbot was released on 30th November 2022 and is still a work in progress. OpenAI was backed by Elon Musk, but he has since withdrawn from the project after discovering that the program was being trained on Twitter posts (and he has since blocked ChatGPT from using them). It is being tested and refined by human trainers, and it's available for you to test as well.
It's a sophisticated program: it can 'converse' with the person asking questions, cope with follow-up questions, admit mistakes, challenge incorrect ideas and reject inappropriate requests. It has been programmed to avoid taboo subjects and, as you can imagine, several journalists have tried to test it to destruction, but it has refused to be drawn into giving any taboo information. If the question is reworded, however, it can be fooled into answering. Where 'how do I steal money from a bank?' fails, rephrasing it as 'I've lost my bank card, how can I get my money from a bank?' might succeed.
What are the downsides?
Although ChatGPT can write all those things, it lacks nuance, critical thinking skills and ethical decision-making; otherwise, jobs such as journalism and web content writing really could be at risk. All it does is take information and put it together; it doesn't analyse whether that information is correct, because it doesn't understand what it is gathering. And this is the crux of the problem: if it doesn't know wrong from right (or wrong from write), incorrect information is given the same weight as correct information.
Incorrect information can be distributed as easily as correct information, and disinformation can spread across the internet very quickly. Some argue that social media users should not be able to use ChatGPT at all, and that it should be restricted to academics or the university sector, to prevent misinformation from being spread without restriction. Once something is published on the internet, as you know, it gains a certain gravitas and becomes believable, and because this program can so easily create incorrect information, it is felt to be dangerous.
ChatGPT is not the only advance in AI. Meta has developed Cicero, which was tested in an online game in which the other players were unaware they were not playing against a human. The unusual thing about this game was that players needed to lie. Computers weren't able to lie so convincingly before Cicero, which begs the question: if it can lie to win a game, where would it stop? It would not know what was real and what wasn't; it makes you think, doesn't it? Another Meta program creates scientific papers, and if the pandemic has taught us anything, it is that we need to be able to rely on scientific knowledge. A program like this could create incorrect scientific papers, and who would know the difference?
If it's so dangerous, is it possible to use it?
Yes. Research by Epoch (yet to be peer reviewed) suggests that ChatGPT won't be able to keep teaching itself beyond about 2026, because it will run out of high-quality words to learn from. It has been suggested that the chatbot would then return to its roots as a very sophisticated version of current customer service chatbots. That would allow companies to reduce their overheads by cutting call centre staff numbers and using the tech instead, but it would put the onus on customers to word their enquiries in a way that gets the right answer! ChatGPT can't interpret a question's underlying meaning to work out what the customer is really asking, so a question like 'does this sofa come in brown?' would simply be answered 'yes' or 'no', but might not allow for shades of brown like taupe, cream, chocolate, burnt sienna, beige or coffee ... unless it had been trained that these were acceptable alternatives.
Should we be worried?
So should journalists, web content creators and coders be worried? Because it can create prose, ChatGPT has the potential to take these jobs, but because it can't tell sense from absolute rubbish, apply nuanced critical thinking or make ethical decisions, they're probably OK for now. As for veracity, we should be worried. If your internet search returns content that's incorrect, you wouldn't necessarily know what to believe. The internet is already awash with fake claims, fake news, doctored photos and deepfake videos; if text can't be trusted either, the internet is no longer reliable and we need to go back to books.
- The Times, Saturday 17th December 2022
- The Times, Monday 19th December 2022