
Peter Norvig Google Research Paper

As machine learning and AI become more ubiquitous, there are growing calls for the technologies to explain themselves in human terms.

Despite being used to make life-altering decisions from medical diagnoses to loan limits, the inner workings of various machine learning architectures – including deep learning, neural networks and probabilistic graphical models – are incredibly complex and increasingly opaque.

As these techniques improve, often by themselves, revealing their inner workings becomes more and more difficult. They have become a ‘black box’, according to growing numbers of scientists, governments and concerned citizens.

According to some, there is a need for these systems to expose their decision-making process and be ‘explainable’ to non-experts: an approach known as explainable artificial intelligence, or XAI.

But efforts to crack open the black box hit a snag yesterday, as the research director of arguably the world’s biggest AI powerhouse, Google, cast doubt on the value of explainable AI.

After all, Peter Norvig suggested, humans aren’t very good at explaining their decision-making either.

Frontier psychology

Speaking at an event at UNSW in Sydney on Thursday, Norvig – who at NASA developed software that flew on Deep Space 1 – said: “You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you’re not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation.”

Just as humans make sense of and explain their actions after the fact, a similar approach could be adopted in AI, Norvig explained.

“So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say – given the input of this first system, now it’s your job to generate an explanation.”
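
As an illustration of that two-model pattern, here is a minimal sketch of one common post-hoc approach: fit a simple, readable surrogate model to the predictions of an opaque one. This is a sketch of the general idea only, not of any system Norvig or Google has built; the scikit-learn classes and the synthetic data are assumptions chosen purely for illustration.

```python
# Minimal sketch of the two-model idea (illustrative assumption, not a Google system):
# system 1 is trained to get the answer; system 2 is trained on the same inputs to
# reproduce system 1's answers in a form a person can read.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

black_box = GradientBoostingClassifier().fit(X, y)       # system 1: makes the decision
explainer = DecisionTreeClassifier(max_depth=3).fit(     # system 2: explains the decision
    X, black_box.predict(X))

# The explainer's rules are readable, but they describe the black box's behaviour;
# as with a human's after-the-fact account, they may not be its true reasoning process.
print(export_text(explainer, feature_names=[f"x{i}" for i in range(6)]))
```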

Although explainability is a relatively new field of study, progress is already being made. Researchers at the University of California and the Max Planck Institute for Informatics published a paper in December on a system that generates human-readable explanations for machine learning-based image recognition decisions.

Although explanations were justified “by having access to the hidden state of the model”, they “do not necessarily have to align with the system’s reasoning process”, researchers said.

Besides, Norvig added yesterday: “Explanations alone aren’t enough, we need other ways of monitoring the decision making process.”

Output checks

A more reliable way of checking AI for fairness and bias, Norvig said, was to look not at its inner workings but at its outputs.

“If I apply for a loan and I get turned down, whether it’s by a human or by a machine, and I say what’s the explanation, and it says well you didn’t have enough collateral. That might be the right explanation or it might be it didn’t like my skin colour. And I can’t tell from that explanation,” he said.

“…But if I look at all the decisions that it’s made over a wide variety of cases then I can say you’ve got some bias there – over a collection of decisions that you can’t tell from a single decision. So it’s good to have the explanation but it’s good to have a level of checks.”
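
To make the output-checking idea concrete, here is a minimal sketch with invented data: no single rejection reveals anything, but approval rates aggregated over many decisions can expose a skew. The groups and decisions below are hypothetical.

```python
# Illustrative sketch only: compare outcomes over a collection of decisions,
# because (as Norvig notes) a single decision cannot reveal bias.
from collections import defaultdict

decisions = [  # (applicant group, was the loan approved?) -- invented data
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

for group, n in totals.items():
    print(f"group {group}: approval rate {approvals[group] / n:.0%} over {n} decisions")
# A large gap between groups (here 75% vs 25%) is the kind of aggregate signal
# that no single explanation could have surfaced.
```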


"I already have the best job in the world at the best company in the world," says a note on Peter Norvig's personal website warning recruiters not to bother contacting him. The job: director of research. The company: Google.

You don't have to be a Google fan to see why Norvig would be happy there at this point in time. In every generation there is a handful of labs where gangs of smart people cluster. In this generation, for this moment, one of them is Google, which seems to have recruited half the smart people in the known world.

Say that to Norvig, and like a flash he'll ask for the names of the other half; but of course even he has limits. "We don't get our pick of everyone because there are some things we can't offer," he says. It's no fit for someone who wants to start their own company or work in a small one, and other than its work on self-driving cars, Google doesn't fund research on hardware, although it does give grants to some university projects.

"We still have to make choices internally. There's a little more freedom than at a startup: bad choices won't drive you immediately out of business, but you can't say, here's something I want to do and here are 20 spare engineers." Instead, the reality of having to move people around to pursue new ideas forces the setting of priorities, just like anywhere else. Some of these priorities sound just plain weird: besides the much-reported self-driving cars and augmented reality glasses there are rumours of space elevators and robots.

In the 1960s and 1970s in the US the famous clusters were at Bell Labs, IBM's Watson Research Lab, and Xerox PARC. All three were famed for developing things that had nothing to do with their company's core business – and also failing to exploit their inventions successfully. PARC in particular became famous for having multiple future industries born right there in its lab and letting them all escape to make other companies rich: graphical interfaces, personal computers, desktop publishing, the ethernet networking standard.

Norvig is conscious of this past, and when it's mentioned, he brings up the 1999 book Fumbling the Future: How Xerox Invented, then Ignored, the First Personal Computer, by Douglas K Smith and Robert C Alexander, in order to argue with its interpretation of events.

"The book says they fumbled the future, but in a way they invented the future," he says. "I think they rented the future." He goes on to outline his idea of the train of thought: "There will be a day in which people can afford PCs, but we're not quite there yet. So take $200,000 and give researchers personal computers so we can see what the future is going to be like. In a sense, we're doing the same thing at Google." That would be the cars … the glasses … the 16,000 computers thrown together across 1,000 servers and set to examining 10 million 200x200 pixel single-frame images taken from YouTube videos recognise cat to see what they come up with. "Sometimes that's the hard part – imagining what's going to be possible and saying, how might it be done?"

But Norvig is also conscious that those labs often produced research their companies could not exploit. Google, he says, doesn't work that way: its research is more closely integrated into the rest of the company.

"In some ways we're similar to something more like Intel, where it has research groups that try to start new businesses, and if they kickstart something and somebody else makes most of the profit from that new business, they're fine with that as long as the industry buys Intel chips. We're similar – if we invent something new, even if we don't own it, if it brings two new people to use the internet that didn't before, the odds are that at least one of them will become our customer. So it's a success for us if we launch a new industry."

This explains the cars and glasses. "We think of them as extending from a strength we already have - cars, from our mapping capability, and glasses similarly, from communications and location services," he says. "We have to make a plausible case that it connects to strengths we have." Acceptance of these technologies, he thinks, will come faster than we may expect: his teenaged kids are frustrated that self-driving cars won't be on the market soon enough to excuse them from the need to learn how to park.

A defining moment for Google's mapping services came with 9/11. For one thing, that was a moment when the shift from TV to the web for breaking news became apparent. Both 9/11 and Hurricane Katrina, which devastated New Orleans in 2005, showed Google something it didn't know about its own services: "We thought we were building an atlas that you buy once a decade and look up stuff, but people were asking us, 'How does New Orleans look today that's different from yesterday?' and we realised there was a time component." Norvig says that third dimension, time, will continue to become more important to mapping as the company's coverage of different parts of the world grows. "People will demand more up-to-date coverage."

A common theme throughout Norvig's career is work on artificial intelligence. He began as a mathematician, but moved to computers when he found them easier. As early as the mid-1980s, he began moving toward probabilistic reasoning and dealing with uncertainty. The theorems this kind of work is built on, the work of the 18th-century English mathematician and minister Thomas Bayes, are in use everywhere now, but at that time were still regarded with great suspicion, even in the AI community. For one thing, to work effectively, Bayesian systems need a lot of numbers and statistics to draw on, and no one could yet see where these were going to come from. For another, the mode of thinking seemed too different from the way human brains operate: people don't think through problems by using numbers, the argument ran, so programs shouldn't either. Both these objections have been answered in the decades since. The first, because huge amounts of data are now available. As for the second, while people don't do large blocks of arithmetic in their heads, there are analogies that can be drawn between the electrical and chemical processes in our brains and probabilistic reasoning.
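
For readers who have not met it, the rule at the heart of this work is Bayes' theorem, which turns a prior belief plus observed evidence into an updated belief. A minimal worked sketch follows; the spam-filter framing and all three input numbers are invented, standing in for the statistics such systems estimate from data.

```python
# Bayes' theorem: P(hypothesis | evidence) =
#     P(evidence | hypothesis) * P(hypothesis) / P(evidence)
# The numbers below are made up; real systems estimate them from data,
# which is why probabilistic methods need so many statistics to draw on.
p_spam = 0.2                 # prior: fraction of mail that is spam
p_word_given_spam = 0.6      # likelihood: the word "free" appears in spam
p_word_given_ham = 0.05      # likelihood: "free" appears in legitimate mail

p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(f"P(spam | message contains 'free') = {p_spam_given_word:.2f}")  # 0.75
```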

"And," says Norvig, "people built systems and they worked, which is the best way to convince somebody."

An example of the kind of system Norvig is talking about here is Google's Translate facility, which was built by researchers who in many cases had no knowledge of the languages (other than English) they were working with. A baby learns languages by hearing and imitation; total immersion. A student wishing to understand a new language learns vocabulary lists. A linguist studies grammar and literature or conversations. Google's computers, on the other hand, did none of these. Instead, Google took advantage of the web, where it's easy to find large numbers of matched pairs of already translated documents. These were statistically analysed to find billions of word pairs that could then be used to learn how to map phrases to phrases, like solving a jigsaw puzzle, in Norvig's analogy. The phrases in turn help disambiguate meanings of words that are commonly used in multiple ways. Eliminate the ones you know, see what's left and what new correspondences you can find.
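
Below is a toy sketch of the co-occurrence counting that underlies this approach, using three invented sentence pairs in place of the web-scale corpus of translated documents; it illustrates the general statistical idea rather than Google Translate's actual pipeline.

```python
# Illustrative sketch: count which target words co-occur with which source words
# across aligned sentence pairs; frequent pairings become translation candidates.
from collections import Counter
from itertools import product

parallel = [  # (French, English) sentence pairs -- invented toy data
    ("la maison bleue", "the blue house"),
    ("la maison", "the house"),
    ("la voiture bleue", "the blue car"),
]

pair_counts = Counter()
for src, tgt in parallel:
    for s, t in product(src.split(), tgt.split()):
        pair_counts[(s, t)] += 1

# With only three pairs the counts are noisy, but at the scale of billions of
# pairs the true correspondences (maison/house, bleue/blue) swamp the spurious ones.
for (s, t), n in pair_counts.most_common(6):
    print(f"{s} -> {t}: {n}")
```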

Back to the 16,000 processors, built into a neural network with a billion connections. After three days, it identified cats with an accuracy rate of 74.8%. This is what huge amounts of data will do for you: give a powerful enough computer enough stuff to work on to develop its own concepts it can then use for pattern recognition. Yet the results, which Norvig described at this year's Singularity Summit, haven't made him more of a believer that the steady exponential increase in computing power is leading us to the Singularity, the moment when artificial intelligence matches human intelligence - and then passes it. Norvig is more dubious than you might expect about this prospect, given that he's an adviser at the Singularity University.

"My biggest concern is people who are too specific about dates," he says. Even Oxford's Stuart Armstrong, who pinned it down at this year's summit to between 10 to 100 years from now, seemed to him too specific.

"I support the Singularity Institute because I think its message that there is a lot of change happening and accelerating, and it's going to have effects on society and people should be aware of that, is a good message."

Even so, he sympathises with the late John McCarthy, who worked on artificial intelligence for more than 50 years and even late in his life (he died in 2011) dismissed the Singularity robustly as "nonsense". In preparing for his talk at the 2007 Summit, Norvig did some research to answer the question, "Are we at a specific point today that's different from the past?"

Using keywords and phrases such as "AI" and "unlike past" to pull out likely candidate papers, he read through abstracts and sorted them by decade.

"Every decade there were a couple of new ones, and then some of the same ideas came back again. I couldn't see anything that said that this decade is distinct from the previous ones. They all seemed like some old ideas, some new ideas, and we think the new ones will help. I didn't see anything about now we've got it. So I guess I'm with John. We're not at a privileged point in time." In sum, "We're inventing new stuff, but it doesn't seem that different today than it did in the past."

Certainly, we have systems that help us design complex things, from bridges to new types of computer chips, but it's still a partnership between human and machine. "This idea that intelligence is the one thing that amplifies itself indefinitely, I guess is what I'm resistant to. Intelligence can let you solve harder problems, but some problems are just resistant, and you get to a point that being smarter isn't going to help you at all, and I think a lot of our problems are like that. Like in politics - it's not like we're saying that if only we had a politician who was slightly smarter all our problems would go away."

This is the more subtle problem: do smart people overestimate the value of intelligence?

"Kevin Kelly [the founding executive editor of Wired] and I talked about this; he calls it 'intelligentism' – this prejudice that intelligence is the only attribute that matters. We think intelligence is important – we call our species after it – but if we were elephants maybe we'd be trying to have super strength, or if we were cheetahs super speed. There are these societal problems that are hard because of the way they are, and it's not just that we're not smart enough to solve them."

• This article was amended on 27 November 2012. The original incorrectly described Peter Norvig as a fellow of the Singularity Institute. He is an adviser at a different organisation, the Singularity University.
