Last November, OpenAI launched ChatGPT as a free web interface and took the internet by storm. A study by UBS, citing compiled traffic data, reported the chatbot had reached 100 million monthly active users by January, which would make it the fastest-growing consumer app in internet history.
The study cited data from Similarweb, an analytics firm monitoring web traffic, indicating that about 13 million unique users on average visited the site every day in January – double the number recorded in December. Other wildly popular online apps, TikTok and Instagram, took nine months and two and a half years respectively to attract the same number of monthly active users, and the number of netizens flocking to ChatGPT continues to grow.
“In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts noted, according to Reuters.
ChatGPT is powered by GPT-3.5 and functions like a chatbot. Users instruct the model to carry out a specific task, and it responds in natural-seeming text. Compared to its predecessors, the model is better designed to carry out a dialogue, refuse inappropriate requests, and admit its mistakes. It is far from perfect, though.
Many people are fascinated by ChatGPT's versatile ability to generate text. The model can do all sorts of tricks – from answering questions and coming up with knock-knock jokes to writing essays or even code. But like all AI language models, it has no real understanding of text and will produce content that is false, nonsensical, or even toxic.
The model’s limitations haven’t deterred businesses across different industries from deploying ChatGPT to support customer service or content marketing. Computer scientists are also experimenting with applying it to higher-risk domains like medicine and law.
Meanwhile, universities and schools have banned students from accessing the tool on public networks or submitting assignments written by the model. Academic publisher Springer Nature and the journal Science have also warned researchers against submitting papers generated by AI.
The rise of text-generation models has prompted numerous companies to build software designed to detect AI-written text, including OpenAI itself. In the lab's own experiments, its rudimentary AI Text Classifier correctly identified about 26 per cent of AI-written text as “likely AI-written,” and incorrectly flagged human-written text as AI-written nine per cent of the time.
The computational resources required to run ChatGPT at scale are expensive – the web app is often overwhelmed by requests and not always available. OpenAI is hoping to cover the costs by launching a paid subscription service – ChatGPT Plus – for $20 per month. Subscribers will get access to the model during peak times, faster response speeds, and first crack at new features.
Businesses looking to integrate ChatGPT into their own products and services will have to sign up for OpenAI’s paid API, which is due to be released soon.
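OpenAI hasn't published the details of that API yet. As a rough, unofficial sketch, integration would presumably look much like calls to the lab's existing text-completion endpoint via its Python client library – the model name and parameters below are placeholders, not confirmed details:

```python
# Hypothetical sketch modeled on OpenAI's existing completions API.
# The eventual ChatGPT endpoint, model name, and pricing are not yet
# public, so treat everything below as an assumption.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # key issued with a paid account

response = openai.Completion.create(
    model="text-davinci-003",  # stand-in GPT-3.5 model; ChatGPT's API name is unannounced
    prompt="Draft a polite reply to a customer asking about a late order.",
    max_tokens=150,
    temperature=0.7,  # lower values make the output more predictable
)

print(response.choices[0].text.strip())
```

Whatever final shape the endpoint takes, the flat-rate ChatGPT Plus subscription is aimed at individuals; API customers will presumably be billed by usage, as with OpenAI's existing models.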
ChatGPT was trained on huge swathes of text scraped from the internet, and requires human labor to screen the data. OpenAI has been criticized for hiring third-party contractors in Kenya to read tens of thousands of text snippets containing sexist, racist, violent, and pornographic content for less than $2 an hour.
All eyes are now on the next build, but the cost of running it could be ruinous for all but the largest of corporations. ®