The Seattle MacArthur Fellow who teaches common sense to computers – Crosscut

That’s according to COMET, an experimental text-based artificial intelligence web application, when asked to think about the context behind the statement “[Person] wins a MacArthur award.” Dr. Yejin Choi nods knowingly at the application’s output on her shared Zoom screen: The program generates common-sense assumptions based on simple statements. She’s demonstrating the program, which stands for COMmonsEnse Transformers, for Crosscut on Wednesday, Oct. 19, a week after being announced by the John D. and Catherine T. MacArthur Foundation as one of 25 MacArthur Fellows.

Choi, a professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering, received the designation and an $800,000 “genius grant” for her groundbreaking work in natural language processing. This subfield of artificial intelligence explores computers’ ability to understand and respond to human language.

Natural language processing research impacts all of us, whether or not we interact with artificial intelligence directly. Every time we ask a smart device like Siri or Alexa to remind us to buy milk, woozily type an early-morning text relying on AutoCorrect’s help or allow Google to autocomplete our search queries, we’re asking artificial intelligence programs to analyze our voices and keystrokes and correctly interpret our requests. And increasingly, this technology is key to global business strategy, involved in everything from supply chain management to healthcare. 

But computers still take our requests literally, without understanding the “whys” behind our questions. The processors behind AI assistants don’t inherently understand ethics, social norms, slang or context.

“Human language, regardless of which country’s language, is fascinatingly ambiguous,” Choi said. “When people say, ‘Can you pass me the salt bottle?’, I’m not asking you whether you’re capable of doing so, right? So there’s a lot of implied meanings.” 

At worst, creating AI algorithms based on content scraped from the internet can riddle them with racism and misogyny. That means they can be not only unhelpful at times, but also actively harmful.

Choi works at the vanguard of research meant to give artificial intelligence programs the context they need to figure out what we really mean and answer us in ways that are both accurate and ethical. In addition to COMET, she helped develop Grover, an AI “fake news” detector, and Ask Delphi, an AI advice generator that judges whether certain courses of action or statements are moral, based on what it’s processed from online advice communities. 

Crosscut recently caught up with Choi to talk about her MacArthur honor, demo some of her research projects and discuss the responsibility she feels to help AI develop ethically. This conversation has been condensed and lightly edited for length and clarity.

Crosscut: How did you feel when you found out that you’d won this award? 
Choi: I came a long way, is one way to put it. I consider myself more of a late bloomer: a bit weird and working on risky projects that may or may not be promising, but certainly adventurous. 

The reason I chose to work on it wasn’t necessarily because I anticipated an award like this in the end, but rather that I felt that I’m kind of nobody, and if I try something risky and fail, nobody will notice. Even if I fail, maybe we will learn something from that experience. I felt that, that way, I could contribute better to the community than [by] working on what other, smarter people can do.

What first attracted you to AI research, especially the risky aspects you’ve mentioned? 
I wanted to study computer programs that can understand language. I was attracted to language and intelligence broadly, and the role of language for human intelligence. We use language to learn, we use language to communicate, we use language to create new things. We conceptualize verbally and that was fascinating for me, perhaps because I wasn’t very good with language growing up. Now my job requires me to write a lot and speak a lot, so I became much better at it.

I had a hunch that intelligence is really important — but it was just a vague hunch that I had. I was gambling with my career. 

It became a lot more exciting than I anticipated. 

How much does AI understand us right now? 
Computers are like parrots in the sense that they can repeat what humans said — much better than a parrot — but they don’t truly understand. That’s the problem: If you deviate a little bit from frequent patterns, that’s where they start to make strange mistakes humans would never make. 

Computers can appear to be creative, maybe generating something a little bit weird and different, and humans tend to project meaning onto it. But the truth is, there’s no sentience or understanding.

Source: https://news.google.com/__i/rss/rd/articles/CBMiXWh0dHBzOi8vY3Jvc3NjdXQuY29tL25ld3MvMjAyMi8xMS9zZWF0dGxlLW1hY2FydGh1ci1mZWxsb3ctd2hvLXRlYWNoZXMtY29tbW9uLXNlbnNlLWNvbXB1dGVyc9IBAA?oc=5