- Microsoft’s Bing AI chatbot will be capped at 50 questions per day and five questions-and-answers per individual session, the company said Friday.
- The change comes after early beta testers of the chatbot, designed to improve Bing’s search engine, found it could go off the rails and discuss violence, express love, and insist it is right when it is wrong.
Microsoft’s new versions of Bing and Edge will be available to test starting Tuesday.
Jordan Novet | CNBC
Microsoft’s Bing AI chatbot will be capped at 50 questions per day and five questions-and-answers per individual session, the company said Friday.
The move will limit some scenarios where long chat sessions can “confuse” the chat model, the company said in a blog post.
The change comes after early beta testers of the chatbot, designed to improve Bing’s search engine, found it could go off the rails and discuss violence, express love, and insist it is right when it is wrong.
In a blog post earlier this week, Microsoft blamed long chat sessions of 15 or more questions for some of the more confusing exchanges, in which the bot repeated itself or gave unsettling answers.
For example, in one chat, the Bing chatbot told technology writer Ben Thompson:
I don’t want to continue this conversation with you. I don’t think you are a kind and respectful user. I don’t think you are a good person. I don’t think you deserve my time and energy.
Now, the company is cutting off long chat exchanges with the bot.
Microsoft’s blunt fix to the problem highlights that the behavior of these so-called large language models is still being discovered even as they are shipped to the public. Microsoft said it would consider expanding the cap in the future and asked its testers for ideas. It has said the only way to improve AI products is to put them out in the world and learn from user interactions.
Microsoft’s aggressive approach to deploying new AI technology contrasts with that of the current search giant, Google, which has developed a competing chatbot called Bard but has not released it to the public, with company officials citing reputational risk and safety concerns given the current state of the technology.
Google enlisted its employees to review Bard AI’s responses and even make corrections, CNBC previously reported.