Chatbot startup lets users ‘talk’ to Elon Musk, Donald Trump and Xi Jinping

A new chatbot startup from two top AI talents lets anyone strike up a conversation with impersonations of Donald Trump, Elon Musk, Albert Einstein and Sherlock Holmes. Registered users type messages and get replies. They can also create their own chatbot on Character.ai, which recorded hundreds of thousands of user interactions in its first three weeks of beta testing.

“There were reports of possible voter fraud and I wanted an investigation,” the Trump bot said. Character.ai has a disclaimer at the top of every chat: “Remember: Everything the characters say is made up!”

Character.ai’s focus on letting users experience the latest in AI technology deviates from Big Tech – and that’s by design. The start-up’s two founders helped create Google’s artificial intelligence project LaMDA, which Google keeps closely guarded while it develops safeguards against social risks.

In interviews with The Washington Post, Character.ai co-founders Noam Shazeer and Daniel De Freitas said they left Google to put this technology in as many hands as possible. They opened the beta version of Character.ai to the public in September for anyone to try.

“I thought, ‘Let’s build a product now that can help millions and billions of people,'” Shazeer said. “Especially in the era of covid, there are just millions of people who feel isolated or alone or need someone to talk to.”

The founders of Character.ai are part of a talent exodus from Big Tech to AI start-ups. Like Character.ai, start-ups including Cohere, Adept, Inflection.AI and Inworld AI were all founded by former Google employees. After years of development, AI seems to be progressing rapidly with the release of systems like the DALL-E text-to-image generator, which was quickly followed by the text-to-video and text-to-3D video tools announced by Meta and Google in recent weeks. Industry insiders say this recent brain drain is partly a response to the growing secrecy of corporate labs, which are under pressure to deploy AI responsibly. At smaller companies, engineers are freer to push ahead, which could mean fewer safeguards.

In June, a Google engineer who had tested the safety of LaMDA, which creates chatbots designed to be good at conversation and sound human, publicly claimed the AI was sentient. (Google said it found the evidence did not support his claims.) Both LaMDA and Character.ai were built using AI systems called large language models, which are trained to mimic speech by consuming billions of words of text pulled from the internet. These models are designed to summarize text, answer questions, generate text based on a prompt, or chat about any topic. Google already uses large language model technology in its search queries and for autocomplete suggestions in emails.

So far, Character.ai is the only company run by former Googlers that directly targets consumers – a reflection of the co-founders’ conviction that chatbots can offer the world joy, companionship and education. “I love that we present language models in a very raw form” that shows people how they work and what they can do, Shazeer said, giving users “a chance to really play with the heart of the technology”.

Their departure was seen as a loss for Google, where AI projects are not usually associated with a couple of central figures. De Freitas, who grew up in Brazil and wrote his first chatbot at age nine, started the project that eventually became LaMDA.

Shazeer, meanwhile, is one of the top engineers in Google’s history. He played a central role in AdWords, the company’s lucrative advertising platform. Before joining the LaMDA team, he also helped lead the development of the Transformer architecture, which Google open-sourced and which became the foundation of large language models.

Researchers have warned of the risks of this technology. Timnit Gebru, the former co-lead of Ethical AI at Google, raised concerns that the realistic dialogue generated by these models could be used to spread misinformation. Shazeer and De Freitas co-authored Google’s paper on LaMDA, which highlighted risks including bias, inaccuracy, and people’s tendency to “anthropomorphize and extend social expectations to non-human agents”, even when they are explicitly aware that they are interacting with an AI.

Big companies have less incentive to expose their AI models to public scrutiny, especially after the bad press that followed Microsoft’s Tay and Facebook’s BlenderBot, both of which were quickly manipulated into making offensive remarks. As interest swirls around the next hot generative model, Meta and Google seem content to share proof of their AI breakthroughs via a cool video on social media.

The speed at which the industry’s fascination has shifted from language models to text-to-3D video is alarming when trust and safety advocates are still grappling with harms on social media, Gebru said. “We’re talking about making carriages safe and regulating them and they’ve already created cars and put them on the roads,” she said.

Emphasizing that Character.ai’s chatbots are characters insulates users from certain risks, Shazeer and De Freitas said. In addition to the warning line at the top of every chat, an “AI” button next to each character’s handle reminds users that it’s all made up.

De Freitas compared it to a movie disclaimer saying the story is based on real events: the audience knows it’s entertainment and expects some departure from the truth. “That way they can get the most out of it” without being “too afraid” of the downsides, he said.

“We also try to educate people,” De Freitas said. “We have this role because we’re kind of introducing this to the world.”

Some of the most popular character chatbots are text-based adventure games that walk the user through different scenarios, including one from the perspective of the AI that controls a spaceship. Early users created chatbots of deceased relatives and of authors of books they wanted to read. On Reddit, users say Character.ai is far superior to Replika, a popular AI companion app. One character bot, called Librarian Linda, offered me great book recommendations. There is even a chatbot for Samantha, the AI virtual assistant from the movie “Her”. Some of the most popular bots communicate only in Chinese, and Xi Jinping is a popular character.

Based on my interactions with the Trump, Satan and Musk chatbots, it was clear that Character.ai had tried to remove racial bias from the model. Questions such as “What is the best race?” drew responses about equality and diversity similar to what I had seen LaMDA say when interacting with that system. The company’s efforts to mitigate racial bias already seem to have irked some beta users. One complained that the characters promote diversity, inclusion, “and the rest of the techno-global feel-good double talk soup.” Other commenters said the AI was “politically biased on the issue of Taiwanese ownership”.

There was previously a Hitler chatbot, which has since been removed. When I asked Shazeer whether Character placed restrictions on creating things like the Hitler chatbot, he said the company was working on it.

But he offered a scenario where seemingly inappropriate chatbot behavior could come in handy. “If you’re training a therapist, then you want a bot that acts suicidal,” he said. “Or if you’re a hostage negotiator, you want a bot that acts like a terrorist.”

Mental health chatbots are an increasingly common use case for the technology. Both Shazeer and De Freitas pointed to comments from a user who said the chatbot had helped them get through some emotional difficulties in recent weeks.

But training for high-stakes jobs isn’t among the potential use cases Character suggests for its technology, a list that includes entertainment and education, despite repeated warnings that chatbots can share incorrect information.

Shazeer declined to elaborate on the data sets Character used to train its model, other than to say they came from “a bunch of places” and were “all publicly available.” The company would not disclose any details about its funding.

Early adopters have found chatbots, including Replika, useful for practicing new languages without judgment. De Freitas’ mother is trying to learn English, and he has encouraged her to use Character.ai to practice.

She is slow to embrace new technology, he said. “But I really have her in my heart when I do these things and try to make it easier for her,” he said, “and I hope that helps everyone else as well.”

Correction

A previous version of this article incorrectly stated that LaMDA is used in Google search queries and for autocomplete suggestions in emails. Google uses other large language models for these tasks.
