OpenAI’s ChatGPT, the artificial intelligence-powered chatbot that has gone viral, has crossed one million users in less than a week since it was officially made available to the public. ChatGPT was made available for public testing last Wednesday. OpenAI CEO Sam Altman confirmed the milestone via a tweet. His post also attracted questions on whether the company plans to keep ChatGPT free forever, to which he replied that they “will have to monetize it somehow at some point,” adding that the computing costs of running it are “eye-watering”.
Twitter CEO Elon Musk also asked Altman what the average cost per chat is for OpenAI. Altman said it is “probably single-digits cents per chat; trying to figure out more precisely and also how we can optimise it.”
Interestingly, Musk also tweeted about how Twitter’s database is used for training by OpenAI, and said he has put a pause on this for now. According to him, OpenAI used to be an open-source, non-profit company, but this has changed, and he “needs to understand more about governance structure & revenue plans going forward.”
Meanwhile, others have pointed out that ChatGPT is not without flaws and still falls victim to old racist and sexist biases. Steven T Piantadosi, a professor at UC Berkeley who heads the ‘Computation and language lab (colala)’ at the university, wrote that while ChatGPT is amazing, it is not without biases.
He put out a thread showcasing the issues with the chatbot. Incidentally, Altman also replied to the larger thread, asking users to “hit the thumbs down on these” offensive replies to help the AI improve.
ChatGPT is OpenAI’s conversational chatbot, which can talk back almost as another human being would; many are saying that ChatGPT could soon be writing mundane daily emails, articles, code, how-to guides and even college essays. Once a user signs up for ChatGPT, they can use the chatbot to have a conversation, and the expectation is that it will give reasonably intelligent answers in the form of an essay. Some have also used it to write fiction, though, in our experience, this remains one of its limitations for now.
A user can just go to the OpenAI website and tap on the Try it Now option next to the ChatGPT banner, which is right on top. Or you can scroll down to ChatGPT and tap on it. You will have to sign up and create an OpenAI account.
Keep in mind that you might not be able to sign up right now. We tried signing up with a new account and it said, “ChatGPT is currently overloaded. Please check back later.”
If you do manage to get in, you will see the chatbot interface. As you start, ChatGPT has some examples, capabilities and warnings listed out for you. For instance, the examples include the kinds of questions you can ask, such as “explain quantum computing in simple terms” or “got any creative ideas for a 10-year-old’s birthday?”
The ‘Capabilities’ tab mentions that the chatbot can remember what the user “said earlier in the conversation” and allows a “user to provide follow-up corrections.” It is also trained to decline inappropriate requests. Its listed limitations are that it may occasionally generate incorrect information, may occasionally produce harmful instructions or biased content, and has limited knowledge of the world and events after 2021.
According to OpenAI, this model was trained using “Reinforcement Learning from Human Feedback (RLHF)” and is similar to an earlier model they created called InstructGPT.
“We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides—the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses.” The blog post adds, “ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022. ChatGPT and GPT-3.5 were trained on an Azure AI supercomputing infrastructure.”
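At a very high level, the RLHF recipe OpenAI describes involves a reward model, trained on human preference rankings, that scores candidate responses so the better-ranked ones can be reinforced. The toy Python sketch below illustrates only that ranking-and-selection idea; the heuristic scoring function and the example responses are invented for illustration and bear no resemblance to OpenAI’s actual learned reward model or its PPO-based policy updates.

```python
# Toy illustration of the reward-model ranking step in RLHF.
# In a real system, the reward model is a neural network trained on
# human preference rankings, and the chatbot's policy is updated with
# reinforcement learning (e.g. PPO). Here, a simple hand-written
# heuristic stands in for the learned reward, purely to show the flow.

def reward_model(prompt: str, response: str) -> float:
    """Stand-in for a learned reward model: prefers responses that
    share vocabulary with the prompt (a crude on-topic signal)."""
    prompt_words = set(prompt.lower().split())
    response_words = set(response.lower().split())
    return float(len(prompt_words & response_words))

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Best-of-n sampling: return the candidate the reward model
    ranks highest (a stand-in for reinforcing preferred outputs)."""
    return max(candidates, key=lambda r: reward_model(prompt, r))

prompt = "explain quantum computing in simple terms"
candidates = [
    "Quantum computing uses qubits, which mix the states 0 and 1.",
    "The weather today is sunny.",
]
print(best_of_n(prompt, candidates))  # picks the on-topic answer
```

The point of the sketch is only the control flow: generate several candidates, score each with the reward signal, and prefer the highest-scoring one.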
The post also acknowledges that the chatbot has limitations. For instance, it could write “plausible-sounding but incorrect or nonsensical answers,” a challenge the company is still hoping to fix. It is also “sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.” It can also overuse certain phrases due to “biases in the training data…”