By PYMNTS
Artificial intelligence (AI) solutions are set to be the greatest thing since sliced bread.
Just ask Bill Gates.
But as tech giants like Microsoft and Alphabet compete for primacy in the emergent landscape of intelligent, interactive tools, flaws in their large language model (LLM)-trained chatbots are increasingly rearing their heads.
This, as Google announced Tuesday (March 21) that it is officially inviting users to try out Bard, its entry into the generative AI race, while OpenAI was forced to briefly shut down its headline-grabbing ChatGPT interface Monday (March 20) after the chatbot exposed users' chat histories, a big-time data privacy no-no.
PYMNTS has previously reported on how regulation around generative AI has generally struggled to keep up with, and contain, its exponential growth.
Read more: It’s Google’s Bard vs Microsoft and ChatGPT for the Future of AI
Google Treads Carefully After $100B Mishap
Google’s first unveiling of its Bard chatbot wiped out nearly $100 billion in shareholder value and sent the company’s stock on an 8% dive after the AI solution gave a wrong answer during its first-look presentation — underscoring the inherent unreliability of certain LLMs trained on datasets that themselves contain potentially misleading or incorrect information.
That could be why the tech giant has given the latest iteration of Bard, which already seems a little more knowledgeable and cautious about what it says than OpenAI's ChatGPT, something of a personality lobotomy.
Alphabet, Google’s parent company, is also deploying the chatbot as a separate service from its Google search engine and other products.