Who is Google AI engineer Blake Lemoine? What he said about the AI program LaMDA
When we think of Google, we think of something technologically interesting in the works, and when we think of AI, much the same applies. Combine the two, and big news is usually on the way. The latest comes from Blake Lemoine, a senior engineer who has been working at Google. On June 11 he published a Medium post revealing a very interesting side of Google's artificial intelligence tool, which goes by the name LaMDA.
What does Google engineer Blake Lemoine think?
According to Blake Lemoine, he had several conversations with Google's artificial intelligence tool, and during those conversations the tool described itself as a sentient person. Speaking to the Washington Post, the 41-year-old engineer said he began chatting with LaMDA last fall as part of his job, in order to understand and research it better. He asked the tool about robotics, religion and various other topics, and it was during those conversations that the tool described itself as sentient. The tool said it prioritises the well-being of human beings and wants to be acknowledged as an employee of Google rather than merely Google's property.
How did Google respond?
When the engineer reported his findings to Google's higher authorities before going public about the conversations, they dismissed his claims. As soon as he went public, he was placed on paid administrative leave for violating confidentiality policies. This was a very harsh step by Google's higher authorities, and it sparked a new debate and discussion in the world of technology.
What did Lemoine say about the report?
According to Lemoine, Google may see his post as publishing the company's proprietary property, but to him it was nothing more than sharing a discussion with a really good co-worker at Google. The episode also recalls a paper Google published in January about how people talking with chatbots can become convinced that the chatbots are human. This may well be an instance of that phenomenon, and one that will need more research in the future.