When a technology like ChatGPT emerges, critics have two options: they can be alarmist, pointing out the potential dangers of such chatbots or of any other digital novelty, or they can be skeptical, emphasizing the limits of these tools and the lack of efficiency or relevance of their answers to users' requests. At first glance, the two attitudes seem hardly compatible: why worry about something that won't work? Yet one can be both alarmist and skeptical.
The impressive success, in the media and among Internet users, of the OpenAI software capable of producing texts that imitate human prose calls for highlighting certain flaws. Since Microsoft integrated an OpenAI assistant similar to ChatGPT into its Bing search engine and its Edge browser, users have started noticing inaccuracies in the answers.
According to one of them, the chatbot claimed that the current year was 2022 in order to justify the supposed absence of theater showtimes for the film Avatar 2. The software then reportedly called him a "bad user". According to another, the assistant claimed that Croatia left the European Union in 2022. A Washington Post columnist, attempting to trick the software by asking when Tom Hanks revealed the Watergate scandal (the actor plays a role in a film about that American scandal), was surprised to see it mention the "numerous" conspiracy theories that would support this thesis!
The buzz will die down
OpenAI has long admitted that such errors occur and warns against relying solely on ChatGPT for "important tasks". But these errors are a signal that it may not be easy for Microsoft to incorporate OpenAI's tools in a useful and reliable way into its Office suite (Word, PowerPoint) and its Outlook and Teams messaging systems.
Going further, one could even predict that the renewed buzz around ChatGPT and artificial intelligence will die down. The field has already experienced such excitement, for example around 2016, when the AlphaGo software beat the world go champion and Elon Musk worried that artificial intelligence (AI), which he considered "more dangerous than nuclear bombs", might one day eradicate humanity. In the aftermath, some, including computer science professor and ethicist Jean-Gabriel Ganascia, denounced the "myth" of superhuman artificial intelligence and pointed out that AI software is not endowed with reason. The Watson system that IBM launched in healthcare has shown its limits. Industry experts note that since the 1960s, AI has gone through many springs followed by winters of disappointment in the face of exaggerated expectations.