By Samantha Murphy Kelly | CNN
We’ve grown accustomed to asking virtual assistants like Siri and Alexa to do small tasks for us and provide basic information. But if the CEO of a Samsung-backed startup has his way, “artificial humans” will become your teachers, doctors, financial advisers and possibly your closest friends.
It’s a polarizing concept that became the most talked-about topic at the CES tech show in Las Vegas this week. When we pulled back the curtain on the existing technology, we found it’s mostly hype. But the concept still raises questions about how such tech would actually play out in real life, and whether it’s a future we should want.
Star Labs, an innovation lab backed by Samsung, displayed its AI-powered lifeforms, called Neons, at CES in videos on giant TVs. Shown at human scale, one is a yoga instructor who can help you perfect your downward-facing dog; another is a local news anchor who can deliver the news in your preferred language, tailored to your interests; and a financial adviser Neon can help get your retirement plan in order.
These videos tried to depict how someone might interact or even form a relationship with such realistic avatars in the future. But in a demo with only one working Neon, the technology today was wonky and plagued with delays. Her emotions and expressions were far less believable in real time and were controlled by the company’s CEO, Pranav Mistry, via a nearby app. Onlookers were able to ask questions, but the answers most often missed the mark. For example, when asked what her favorite gadget at CES was, she responded: “Las Vegas.”
Its CES debut came on the heels of a mysterious social media push leading up to the conference, sparking rumors and generating buzz that the next big thing in AI might be coming. But soon after the debut, it was clear the speculation was overblown.
Mistry admits the technology still needs work (“It’s just a baby right now”), but his vision is for “the digital species” to one day be everywhere — in your favorite chat applications, home or stores. Instead of ordering from kiosk buttons in a fast food restaurant, you could have a natural conversation with a realistic-looking AI human.
If properly executed, the creations, which are addressed by names like Frank and Hanna (rather than something like “Hey Google”), could present an intriguing yet uncomfortable glimpse into what human-like AI lifeforms could mean for our future.
“The marketing rhetoric around the Neons is quite extreme at a time when AI generates lots of confusion and anxiety [with topics such as] machines replacing humans, AI ethics issues and deep fakes,” said Thomas Husson, a principal analyst at Forrester Research. “But if they’re able to successfully express emotions, they would help enhance interactions between consumers and brands, and more broadly humanize technology.”
It’s a tall order for a company that Mistry says has only been working on Neon for four months. Its core technology, a blend of behavioral neural networks and algorithms, has clear limitations, but Mistry said it will soon be able to support original content, expressions, emotions, movements and eventually memory on its own.
“Right now, Neon doesn’t have any intelligence per se,” he said. “They are behaving intelligently, but they don’t have the concept of learning or memory. [Eventually], she will remember that you like pizza or something you’re reading.”
Despite Samsung’s backing, Neon is not related to any Samsung products or its Bixby voice assistant. A Star Labs spokesperson told CNN Business that Samsung knew few details about the concept ahead of its CES debut.
Neon plans to launch later this year but has not yet landed on a business model. Mistry said a subscription service is a possibility and it’s also working to secure business partnerships.
The idea of a “digital species” is undoubtedly controversial. Big names in tech, including Elon Musk and Bill Gates, have warned about the development of powerful artificial intelligence. Gates called AI both “promising and dangerous.” These concerns typically revolve around what’s known as artificial general intelligence, or AI that can, for the most part, do the things a human can do.
“As demonstrated by Neon, we are still very far from a commercially ready AGI solution,” principal analyst Lian Jye Su of ABI Research said. “The best AI nowadays are narrow [ones] that perform singular tasks very well, such as the camera AI in our smartphones, the defect inspection camera AI on an assembly line, and the facial recognition AI in payment terminals.”
According to Su, we should “always question the intention and financial rationale behind attempts to make artificial general intelligence a reality.”
Other companies are developing AI that can better converse with us but without a human-like interface. Two years ago, Google showed off Duplex, which allows AI to make human-like phone calls, while Microsoft is growing its Cortana platform to be increasingly responsive.
Mistry said Neon is aware of the concerns about developing human-like AI.
“There’s always good and bad [sides] of any technology and how we use it,” he said. “That applies to not only AI, but any technology. We believe that it’s our human responsibility, and this generation’s responsibility, that … if we [build something] today, we want to ensure that from the ground up from the architecture level, from the design level, that it’s not misused in a wrong place.”
Neon’s concept also comes at a time when companies including Facebook, Google and Amazon are working to gain back consumer trust after a series of data sharing scandals. In 2019, both Amazon and Apple were under fire for using third-party contractors to listen in and transcribe user requests made through their personal assistants. Putting a human-like AI in your home, one that learns your preferences for pizza, behaviors or finances, raises concerns about where intimate information could land.
“Our future can come without compromising our privacy,” Mistry said. “And that is what we are designing — an architecture [that ensures that, for] any interaction between you and your Neon or you and any Neon, no one has, including me, as a CEO of this company, access to that information.”
At this stage, a Neon remains a simulated human assistant that merely aims to give intelligent, human-like responses.
“But potential implications, such as if such an avatar was embodied into a humanoid robot or could have a true conversation with you, will generate more discussions about AI ethics and regulation,” Husson said.