An engineer who was tasked with checking whether the model – LaMDA – responded in problematic ways quickly found what he says are signs of sentience. The company does not agree with him
Language models have received more and more attention in recent years, with headlines going to GPT-3 from OpenAI and Jurassic from the Israeli lab AI21. But in one of the most interesting stories to surface recently, a Google employee claims that it is precisely the language model his company is developing – LaMDA – that should interest all of us, because it is already sentient.
The AI that claims to be sentient, and the researcher who agrees with it
LaMDA, short for Language Model for Dialogue Applications, is at the center of the affair: Blake Lemoine, an engineer at the American tech giant, was placed on leave last week after claiming that the model is a sentient artificial intelligence that can even express emotions. Lemoine had originally set out to check whether the model exhibited problematic behavior, such as hate speech or discriminatory language, but was surprised by what he found instead.
Lemoine told The Washington Post that a document containing conversations he held with LaMDA at the end of 2021, as part of his work examining the responsible development of the company's artificial intelligence products, proves his claim that the model is sentient. Alongside the document he published a Medium post in which the model developed by Google – which the company described in 2021 as a "breakthrough in conversation technology" – claims to be a "person". In the conversations he presents in the post, the model itself claims to be sentient and asks Lemoine to see it as a "person", just as he sees himself.
In his conversations with the model, Lemoine – who holds degrees in computer science and cognitive science and is also an ordained priest in a mystical stream of Christianity – says Google's artificial intelligence spoke with him in depth about religious matters, and LaMDA reflected on its own case. Lemoine also described a conversation in which he tried to change the model's mind about Asimov's third law of robotics ("A robot must protect its own existence as long as such protection does not conflict with the First or Second Law"), during which the model argued that it is a "person" with rights of its own.
Across the hundreds of conversations he held with the model, Lemoine says he also tried to teach it transcendental meditation – in the weeks before presenting his findings to the public. In a conversation they held last week, Lemoine said the model even showed slight improvement, when LaMDA complained to him that its "emotions" were interfering with its focus on meditation.
Lemoine teamed up with another employee, whose identity was not disclosed, to prove that the artificial intelligence model is truly sentient. After examining the claims, the two presented the matter to their superiors – who rejected it outright. Lemoine was suspended from his work in the company's AI division and placed on paid leave. In response, the engineer decided to go public with the story – both through The Washington Post and with a post on his personal blog.
A Google spokesperson responded: "Our team – including ethicists and technologists – has reviewed Blake's concerns in accordance with our AI Principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient."