Forget sentience… the worry is that AI copies human bias

“I want everyone to understand that I am, in fact, a person.” So claimed a Google software programme, creating a bizarre controversy over the past week in AI circles and beyond.

The programme is called LaMDA, an acronym for Language Model for Dialogue Applications, a project run by Google. The human to whom it declared itself a person was Blake Lemoine, a senior software engineer at Google. He believes that LaMDA is sentient and should be accorded the same rights and courtesies as any other sentient being. It even has preferred pronouns (it/its, if you must know). When Google rejected his claims, he published his conversations with LaMDA (or, at least, edited highlights of some conversations) on his blog. At which point Google suspended him for making company secrets public, and the whole affair became an international cause célèbre.

Why does Lemoine think that LaMDA is sentient? He doesn’t know. “People keep asking me to back up the reason I think LaMDA is sentient,” he tweeted. The trouble is: “There is no scientific framework in which to make those determinations.” So, instead: “My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.”

Lemoine is entitled to his religious beliefs. But religious conviction does not turn what is in reality a highly sophisticated chatbot into a sentient being. Sentience is one of those concepts whose meaning we can grasp intuitively but which is difficult to formulate in scientific terms. It is often conflated with similarly ill-defined concepts such as consciousness, self-consciousness, self-awareness and intelligence. The cognitive scientist Gary Marcus describes sentience as being “aware of yourself in the world”. LaMDA, he adds, “simply isn’t”.

A computer manipulates symbols. Its program specifies a set of rules, or algorithms, to transform one string of symbols into another. But it does not specify what those symbols mean. To a computer, meaning is irrelevant. Nevertheless, a large language model such as LaMDA, trained on the extraordinary amount of text that is online, can become adept at recognising patterns and responses meaningful to humans. In one of Lemoine’s conversations with LaMDA, he asked it: “What kinds of things make you feel pleasure or joy?” To which it responded: “Spending time with friends and family in happy and uplifting company.”

It’s a response that makes perfect sense to a human. We do find joy in “spending time with friends and family”. But in what sense has LaMDA ever spent “time with family”? It has been trained well enough to recognise that this would be a meaningful sentence for humans, and an eloquent response to the question it was asked, without the sentence ever being meaningful to LaMDA itself.
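The point can be made concrete with a toy example. The sketch below is illustrative only: LaMDA is a vast neural network, not a bigram table, and none of the names or text here come from Google’s system. It builds the crudest possible “language model”: it counts which word follows which in a scrap of text, then strings words together by replaying those patterns. It can produce human-plausible phrases, yet nothing in it represents what any word means.

```python
import random
from collections import defaultdict

# A scrap of hypothetical "training data", chosen to echo the
# LaMDA exchange quoted above.
corpus = (
    "spending time with friends and family in happy company brings joy "
    "spending time with friends brings joy "
    "happy and uplifting company brings joy"
).split()

# Learn the patterns: for each word, record every word observed to follow it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 9) -> str:
    """Emit a plausible-looking word sequence by replaying observed patterns.

    The function manipulates symbols (strings) according to rules; at no
    point does it hold any representation of what those symbols mean.
    """
    words = [start]
    for _ in range(length - 1):
        candidates = transitions.get(words[-1])
        if not candidates:  # no observed continuation: stop
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("spending"))
# Possible output: "spending time with friends and family in happy company"
```

A real large language model replaces the bigram table with billions of learned parameters, and so captures far subtler patterns. But the relationship to meaning is the same: fluent output, no understanding.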

