Why AI systems should disclose that they’re not human


Otherwise, they could mislead and deceive people.

We are nearing a new age of ubiquitous AI. Between your smartphone, computer, car, smart home, and social media, you might interact with some sort of automated, intelligent system dozens of times every day. For most of your interactions with AI, it will be obviously and intentionally clear that the text you read, the voice you hear, or the face you see is not a real person. However, other times it will not be so obvious. As automated technologies quickly and methodically climb out of the uncanny valley, customer service calls, website chatbots, and interactions on social media may become progressively less evidently artificial.

This is already happening. In 2018, Google demoed a technology called Duplex, which calls restaurants and hair salons to make appointments on your behalf. At the time, Google faced a backlash for using an automated voice that sounds eerily human, even employing vocal tics like “um,” without disclosing its robotic nature. Perversely, today’s Duplex has the opposite problem. The automated system does disclose itself, but at least 40% of its calls have humans on the phone, and it’s very easy for call recipients to confuse those real people with AI.

As I argue in a new Brookings Institution paper, there is clear and immediate value to a broad requirement of AI disclosure in this case and many others. Mandating that companies explicitly note when users are interacting with an automated system can help reduce fraud, improve political discourse, and educate the public.

The believability of these systems is driven by AI models of human language, which are rapidly improving. This is a boon for applications that benefit society, such as automated closed-captioning and language translation. Unfortunately, corporations and political actors are going to find many reasons to use this technology to duplicitously present their software as real people. And companies have an incentive to deceive: A recent randomized experiment showed that when chatbots did not disclose their automated nature, they outperformed inexperienced salespeople. When the chatbot revealed itself as artificial, its sales performance dropped by 80%.


Article Credit: FC
