
Google is developing AI products that will teach foreign languages

Google CEO Sundar Pichai last month previewed an artificial intelligence model that he said would enable people to have open-ended conversations with technology. But current and former employees who have worked with the language model say enabling coherent, free-flowing and accurate dialogue between humans and technology remains a tall order.

As a result, Google is taking a more incremental step in conversational AI by preparing to teach foreign languages through Google Search, according to people involved in the work. The project, referred to internally as Tivoli, grew out of the company's Google Research unit and is likely to roll out later this year.

It will initially work over text, and the exact look and feel of the instruction couldn’t be learned.

Google employees are also discussing ways to eventually add the functionality to the company's voice assistant and YouTube product lines. In YouTube, for example, the technology could generate language quizzes in which viewers record themselves after watching a video and the AI assesses how they performed.

A Google spokesperson did not have a comment.

Teaching foreign languages allows Google to move more fluid, conversational AI beyond silly exchanges to a practical but low-stakes use case, the people said. Using the wrong tense or phrase would be unlikely to cause serious harm to users.

AI researchers have for decades worked to foster dialogue between computers and humans that feels real, picks up on the nuances of how people communicate and simplifies tasks. Such aspirational technology has been featured in movies like “Her,” in which a man communicates with—and falls in love with—a virtual assistant.

In a big bet that people will want to access tech in the future with their voice, not their fingers, Google, Amazon, Apple, Microsoft and Samsung have all developed their own virtual assistants. Today, they are embedded in smartphones, speakers, TV controllers and cameras. Some assistants, like Google’s Assistant, Samsung’s Bixby and Amazon’s Alexa, power cars and appliances such as smart refrigerators, ovens and laundry machines.

But most of those virtual assistants can complete only one task at a time unless users go out of their way to program shortcuts and other sequences. Otherwise, complex requests and follow-up questions often confuse the assistants. They also struggle to mirror the seriousness or tone of requests and to grasp their context.

Topic of Conversation

Google has had a leading position in AI for years, consistently drawing top industry talent for initiatives ranging from Google Brain to DeepMind. LaMDA started within the Google Brain research unit and is the language model that will power the new search tool.

But Google faces major competition from other tech companies, including OpenAI, a Microsoft-backed team that has published significant breakthroughs, such as GPT-3. A wide range of companies is using the model—which returns answers to queries in natural language—to develop conversational AI tools.
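
As an illustration of the pattern those companies follow, here is a minimal sketch of querying GPT-3 through OpenAI's Completions API in Python. The prompt framing and parameters are illustrative assumptions, not details drawn from any product described in this article.

import os
import openai

# Minimal sketch: one conversational query against OpenAI's GPT-3
# Completions API (prompt framing and parameters are illustrative).
openai.api_key = os.environ["OPENAI_API_KEY"]

# Frame the exchange as a dialogue so the model continues it in kind.
prompt = (
    "The following is a conversation with a helpful language tutor.\n"
    "Student: How do I say 'good morning' in Spanish?\n"
    "Tutor:"
)

response = openai.Completion.create(
    engine="davinci",   # a GPT-3 engine available through the public API
    prompt=prompt,
    max_tokens=50,
    temperature=0.7,
    stop=["Student:"],  # stop before the model invents the student's next turn
)

print(response.choices[0].text.strip())

The stop sequence is the key design choice in this kind of dialogue prompt: without it, a completion model will happily generate both sides of the conversation.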

People routinely use Google Search to translate languages. That, coupled with Google’s dominance in search, raised concern among some executives that a foreign-language teaching feature could create a new antitrust problem for the company, one of the people said.

Current and former employees working on the project said they hoped that more-fluid conversational AI exchanges would make new languages easier for learners to grasp and would expand their earning potential by making them eligible for new jobs.

Tivoli’s development started about two years ago at Google with an earlier neural conversation model, Meena, which has since evolved into LaMDA. (Google renamed Meena in part because of internal concerns that the name was too gendered and might cause users to associate the technology with a person.)

LaMDA can enable free-flowing, coherent conversation, though Pichai acknowledged at Google’s developer conference that research is still in its early stages and that the technology has limitations. In one example, LaMDA technology spoke from the perspective of a paper airplane, answering questions about what it was like to be thrown in the air and what the world looked like from above.

In another aspirational example, Pichai asked a video player to fast-forward to a specific part of a movie by describing the scene.

“It doesn’t get everything right. Sometimes it can give nonsensical responses,” Pichai said at the conference. Plus, LaMDA was trained on text only, not on the images, audio and other media people use to communicate.

AI and language-model advances have moved in fits and starts in part because of the computing power required to train large models and the complexity of how people interact with each other when they speak, write and share multimedia, researchers say.

“Having conversations is what we do. To get a system that is as good as an average human is just a very high bar,” said Clément Delangue, co-founder of machine-learning platform Hugging Face, which helps AI companies build natural-language processing models.
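
For a concrete sense of what such a platform provides, here is a minimal sketch of producing one conversational reply with a pretrained dialogue model from Hugging Face's transformers library. The model named here is a publicly available example chosen for illustration, not one of the systems discussed in this article.

# Minimal sketch: one conversational turn with a pretrained dialogue
# model from Hugging Face's transformers library (model choice is
# an illustrative public example).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode one user turn, ending with the end-of-sequence token so the
# model treats the turn as complete.
inputs = tokenizer.encode(
    "How do I say 'thank you' in French?" + tokenizer.eos_token,
    return_tensors="pt",
)

# Generate a continuation; everything after the prompt is the reply.
outputs = model.generate(
    inputs, max_length=100, pad_token_id=tokenizer.eos_token_id
)
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)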

Further improving conversations between humans and technology such as digital assistants is also fraught with ethical complexity, responsible-AI researchers say, because many people are likely to take as fact the information digital tools give them in response to queries. The models themselves are also only as good as the data fed to them, which typically come from sources across the internet, including discussion forums, news articles and other sites. That means human biases and inaccuracies are baked in.

OpenAI has been criticized for generating bigoted and offensive content, for example. A spokeswoman for OpenAI said it has teams dedicated to safety and policy and has developed a process that can improve language model behavior and mitigate harmful outputs.

And Google has struggled with accusations that it has retaliated against workers who raised concerns that it is not taking AI ethics seriously enough.

Google’s AI unit has suffered a series of departures and undergone leadership changes since the high-profile firing late last year of AI researcher Timnit Gebru following a dispute over a research paper.

Employees and fellow researchers criticized Google’s firing of Gebru, a Black researcher who studied ethical AI and biases in technology, and Pichai apologized for how the company had handled the situation.

Emily M. Bender, a professor in the department of linguistics at the University of Washington, said there is a risk that consumers will believe conversational AI always delivers accurate answers.

Bender, who co-wrote the paper at the heart of Gebru’s conflict with Google, said she is also concerned that the company has prioritized LaMDA’s ability to generate sensible and coherent language over its factual accuracy.

“If the chat bot is framed as something that is explicitly fictional and for fun, then sure, that’s an interesting or OK ordering of goals. But if it’s meant to be involved in something like search or answering people’s genuine questions about information, then factual has to be first,” she said.

Source: The Information
