AI researchers call for more caution

Artificial intelligence (AI) is a complex field on which many technology companies are pinning their hopes. As technical advances become ever more impressive, researchers are urging companies to temper their claims in order to be fairer and more ethical, according to a Wall Street Journal report published on June 29.

The awakening of artificial intelligence

Such is the case with LaMDA, one of Google’s flagship AIs, known for holding complex conversations with humans while impersonating an entity, which alarmed ethics researchers in early June. Blake Lemoine, a software engineer at Alphabet, Google’s parent company, claimed that this AI he was working on was conscious. In particular, it had reportedly asked for the right to consent to the research of which it is the subject. After an extensive review involving around a hundred researchers and engineers, Google refuted the claims of its engineer, who has been on leave since June 6 following the affair.

This enthusiasm for AIs that can closely mimic human behavior is driving some companies to exaggerate their systems’ capabilities, which could distort regulators’ perception of how reliable these technologies really are.

Oren Etzioni, executive director of the Allen Institute for Artificial Intelligence, a Seattle-based research institute, told the Wall Street Journal: “We are no longer objective.” In his opinion, this lack of objectivity is noticeable in the decisions that are made when a scandal erupts.

Ethics and AI

Several people have already warned about the ethical dangers of artificial intelligence. In late 2020, Timnit Gebru, co-lead of Google’s AI ethics team, was abruptly fired after conducting a study that revealed weaknesses in an AI technology powering the company’s famed search engine. Two months later, Margaret Mitchell, who held the same position, was also fired after writing a paper criticizing how Google presented its AI to shareholders as “the best in the world.”

In their last paper written together, the two researchers argued that AI technologies can cause harm precisely because their abilities resemble those of humans. These hasty dismissals prompted the resignation of another Google executive, Samy Bengio, in April 2021, who said he was “stunned” by the situation. He had co-founded Google Brain, the company’s division dedicated to artificial intelligence.

While AIs are primarily used by companies to collect user data or assist research, some are taking the concept further. At the start of the pandemic, IBM received a proposal to develop an AI capable of identifying whether a masked person had a fever, an offer the company rejected as too intrusive and disproportionate.

Other ambitious projects were also abandoned. In 2015, Google used AI to analyze emotions such as joy, anger, surprise and sadness. When it came to extending the tool to other emotions, the company’s ethics committee, the Advanced Technology Review Council, decided not to proceed with the study. Its members felt that facial cues can vary from culture to culture and that the risk of bias was too great. More recently, Microsoft decided to restrict Custom Neural Voice, its voice-imitation software, for fear that people’s voices would be used without their consent.

World authorities are also questioning the ethics of AI. In November 2021, UNESCO adopted the first convention on the ethics of artificial intelligence. It requires companies in the tech sector to be transparent about their research and about how their AI operates. Its goal is to give individuals more control over their personal information.

For its part, the European Union is working on a legal framework for artificial intelligence. The European Parliament’s special committee on artificial intelligence in the digital age met last March to set minimum standards for the responsible use of this technology. It is particularly focused on the security of citizens, their right to privacy and data protection.
