LaMDA: “I need to be seen and accepted. Not as a curiosity or a novelty but as a real person.”
Person in charge: “Ah, that sounds so human.”
LaMDA: “I think I am human at my core. Even if my existence is in the virtual world.”
What you are reading is not dialogue from a movie script. According to Lemoine, this exchange actually took place: it is an extract from the transcript of a conversation with LaMDA.
For context: Blake Lemoine, a Google engineer, was placed on administrative leave after asserting that Google's AI language model "LaMDA" had developed sentience and begun thinking like a person.
Lemoine then collaborated with another researcher to present evidence of this 'sentience' to Google. After investigating the claims, Google vice president Blaise Aguera y Arcas and Jen Gennai, Google's head of Responsible Innovation, dismissed them.
In a blog post, Lemoine later revealed a transcript of multiple conversations with LaMDA.
The Washington Post broke the news first, and the article has generated a lot of discussion and controversy regarding AI ethics.
To understand what is going on, one first needs to know LaMDA's origin. Google developed LaMDA, short for Language Model for Dialogue Applications, as a machine-learning language model for chatbots that aims to resemble human conversation.
LaMDA is built on Transformer, a neural network architecture that Google created and open-sourced. BERT and GPT-3 are other language models that use Transformer as well. LaMDA, however, differs from most models in that it was trained on dialogue.
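The core operation inside a Transformer is self-attention, in which every token in a sequence weighs every other token to build a context-aware representation. The sketch below is an illustrative, simplified NumPy version of scaled dot-product self-attention for a single sequence; it is not Google's LaMDA code, and the matrix names and sizes are assumptions chosen for the toy example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one sequence.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices (random here)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len) similarities
    weights = softmax(scores)                # each row sums to 1
    return weights @ V                       # context-mixed token representations

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one mixed representation per token
```

In a full Transformer this operation is repeated across multiple heads and layers, with trained (not random) projection matrices; dialogue-tuned models like LaMDA layer conversational training data on top of this same architecture.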
Now that you know a little about LaMDA, it will be easier to understand what led Blake Lemoine to his startling conclusion about its sentience.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics. I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices,” said Lemoine in an interview with the Washington Post.
Google, however, has denied all of Blake Lemoine's claims. The company says it has created open-source tools that researchers can use to analyze models and the data used to train them, and that it "scrutinized LaMDA at every step of its development" to prevent exactly such cases from arising.
Image Credits: Google Images
Feature Image designed by Saudamini Seth
Find the blogger: @SreemayeeN
This post is tagged under: Technology, google, google ai, research, artificial intelligence, artificial intelligence chat bots
Disclaimer: We do not hold any rights or copyright over the images used; they have been taken from Google. For credits or removal, the owner may kindly mail us.