Blake Lemoine, the Google engineer who publicly claimed that the company’s LaMDA conversational AI is sentient, has been fired, according to the Big Technology newsletter, which spoke to Lemoine. In June, Google placed Lemoine on paid administrative leave for breaching its confidentiality agreement after he contacted members of the government about his concerns and hired a lawyer to represent LaMDA.
In an emailed statement to The Verge on Friday, Google spokesperson Brian Gabriel appeared to confirm the firing, saying, “we wish Blake well.” The company also says, “LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.” Google asserts that it reviewed Lemoine’s claims “extensively” and found them to be “completely unfounded.”
That echoes the assessment of many AI experts and ethicists, who said his claims were, more or less, impossible given today’s technology. Lemoine claims that his conversations with the LaMDA chatbot led him to believe it had become more than just a program and had its own thoughts and feelings, as opposed to merely producing conversation realistic enough to make it seem that way, which is what it was designed to do.
He argues that Google’s researchers should seek consent from LaMDA before running experiments on it (Lemoine himself was assigned to test whether the AI produced hate speech), and he published chunks of those conversations on his Medium account as evidence.
The YouTube channel Computerphile has a decently accessible nine-minute explanation of how LaMDA works and how it could produce responses that convinced Lemoine without actually being sentient.
Here is Google’s statement in full, which also addresses Lemoine’s accusation that the company did not properly investigate his claims:
As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be completely unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.