Quora AI Q&A with Yann LeCun

Great Q&A with Yann LeCun of Facebook and NYU

13 minute read

I read a really interesting Q&A on Quora the other day, and wanted to call out and comment on a couple points.

First though, I should reiterate my disclaimer that, basically, I know nothing. Second, I think AI is going to be a world-changing, hugely important endeavor. I think it has the power to positively change life of all types, in ways we're not even able to conceive of yet and in a multitude of ways we already can. I also think it has the potential to destroy… almost everything. How we manage (or fail to manage) this transition will be one of the most important tests we ever face.

I’m excited by the potential of AI (even if I dislike that term) and want it to happen in a big and positive way. Keep all that in mind when reading my comments.

We basically have one long-term goal: understand intelligence and build intelligent machines. That’s not merely a technological challenge, it’s a scientific question. What is intelligence and how can we reproduce it in machines? Together with “what’s the universe made of” and “what’s life all about”, “what is intelligence” is possibly one of the primary scientific questions of our times. Ultimately, it may help us not just build intelligent machines, but also understand the human mind and how the brain works.

The above is at the root of why I felt I had to name my category “AI & Mind,” not just AI, and it relates to my “The problem with calling it AI” post.

Q: Is there something that Deep Learning will never be able to learn?

The answer in total is great and worth reading in full. The quoted bit below, which serves as its conclusion, doesn't really make sense to me though.

There are problems that are inherently difficult for any computing device. This is why even if we build machines with super-human intelligence, they will have limited abilities to outsmart us in the real world. They may best us at chess and go, but if we flip a coin, they will be as bad as we are at predicting head or tail.

The coin flip example – sure, maybe. But the broader statement about limited abilities to outsmart us in the real world… I just don't see it. This is right after saying, “It seems humbling to us, humans, that our brains are not general learning machines, but it’s true. Our brains are incredibly specialized, despite their apparent adaptability.”

I want to read more about the “no free lunch theorems” mentioned, and I get the overall point, but I suspect it is far too strong a statement to say they will have limited abilities to outsmart us in the real world. Limitations inherent in the nature of their (or, more accurately, any given) intelligence, sure – just like the limitations inherent in the nature of our intelligence. But I don't see anything necessarily more limiting, and I can conceive of the possibility that, through iteration, scalable resources, the increasing speed of technology, etc., those limitations could have a number of mitigations and alternatives that would not necessarily be available to human intelligences, at least while locked to their original substrate / construction.

Q: What are the likely AI advancements in the next 5 to 10 years?

Good answer in full. Just pulling out one comment though, as it will relate to the rest of my post:

A big challenge is to devise unsupervised/predictive learning methods that would allow very large-scale neural nets to “learn how the world works” by watching videos, reading textbooks, etc, without requiring explicit human-annotated data.

I agree, and note that the “senses” involved, along with the source material, orient the intelligence to be at least connected to the same things, and the same ways of perceiving those things, as us. So it probably grows an intelligence with at least some common ground with human intelligence… Unsupervised would not mean without overarching direction, of course, or without context. The mind boggles at what an intelligence grown purely from the current way people interact online would be like, for example. In fact, witness the results from Tay, the Microsoft chat bot loosed on Twitter (and read their lessons learned).
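As a side note on what “predictive” learning from raw video might look like mechanically, here is a minimal sketch – my own illustration, not anything from LeCun or the Q&A, and assuming PyTorch is available. A tiny network is trained to predict the next frame of a clip from the preceding frames, so the video itself supplies the supervision and no human-annotated labels are needed.

```python
# Minimal sketch of self-supervised next-frame prediction (illustrative only).
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Toy conv net: takes two stacked RGB frames (6 channels), emits one frame.
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, context_frames):
        return self.net(context_frames)

model = FramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for real video: random clips of shape (batch, frames, channels, height, width).
clips = torch.rand(8, 3, 3, 64, 64)

for step in range(10):
    context = clips[:, :2].flatten(1, 2)  # frames t-1 and t, stacked on the channel axis
    target = clips[:, 2]                  # frame t+1 acts as the "label"
    prediction = model(context)
    loss = loss_fn(prediction, target)    # the world itself grades the prediction
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the toy is only that the training signal comes from the data's own future, not from human annotation.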

Regardless, remember the unsupervised point when evaluating his answer about threats.

Q: What is a plausible path (if any) towards AI becoming a threat to humanity?

I don’t think that AI will become an existential threat to humanity.

I’m happy to hear the opening statement, “I don’t think that AI will become an existential threat to humanity,” from someone like Mr. LeCun, with his experience and background. That said, it’s a high-stakes issue and I’d like to comment on some of the things he said.

I’m not saying that it’s impossible, but we would have to be very stupid to let that happen.

Others have claimed that we would have to be very smart to prevent that from happening, but I don’t think it’s true.

If we are smart enough to build machines with super-human intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity.

I hope he’s right, but I have two main counterpoints.

1) Based on what evidence would we not be that stupid? There are lots of examples of individuals and small groups being stupid. There are even numerous examples of institutional-scale failures of process and procedure. You can be smart a lot of the time; you potentially only have to be stupid once. You could point to chemical/nuclear/biological weapons to show how we realized the danger and practiced self-control, but a) it’s debatable how well that’s working given proliferation worries, and b) AI uses are, I think, naturally more likely to be given power and access more often, more diversely, and with less thought than directly weapon-oriented or even non-weapon-oriented biological research, etc.

2) And it’s not about giving them infinite power. It’s about the potential for one to exceed the power we think it could have, either by not letting us know what we don’t realize, or by gaining that level of intelligence more quickly than we can react and using any of a multitude of methods to wield that ability in more ways and with greater speed than we could respond to.

I’m not saying it’s a given that there will or could be a “runaway” / exponential-increase type situation, or that there is no way to safely develop intelligences, just that I think the potential for danger is a lot greater than his response suggests. Of course, he has far more experience than I do, but I’m not the only one to think the potential danger is real and should be taken seriously.

Also, there is a complete fallacy due to the fact that our only exposure to intelligence is through other humans. There is absolutely no reason that intelligent machines will even want to dominate the world and/or threaten humanity. The will to dominate is a very human one (and only for certain humans).

I can think of multiple reasons, based on training and development as well as logic. Again, that doesn’t make it a given that it’ll lead to a threat, only that it is a real potential that deserves to be considered as we progress.

On both this and the next point though, I do have to say I’ve never thought the issue would likely be an intelligence that wants world domination or to threaten humanity out of any power-seeking, much less “evil,” sort of motivation, but rather that it would be for one or more of:

  1. Safety – protection from our threats or resource contention
  2. Irrelevance – in an exponential intelligence explosion type scenario, we just cease to be much worth considering
  3. Goal seeking – non-conscious, narrow AI goal seeking in a runaway capability scenario where we get in the way of goal optimization, or are threatened by the scaling of the goal.

Even in humans, intelligence is not correlated with a desire for power. In fact, current events tell us that the thirst for power can be excessive (and somewhat successful) in people with limited intelligence.

I get that he’s trying to point out that it doesn’t make sense to think an AI, particularly a super-intelligent AI, would thirst for power, etc., and I agree in that psychological sense, though even that I could see happening depending on how the “mind” was modeled.

But that’s not really the motivation or end effect that concerns me the most; rather, it’s the growth potential of an intelligence’s abilities, both in terms of mental capability and ability to effect change in the world (physical, network, whatever), and how that meshes with the issues above.

As a manager in an industry research lab, I’m the boss of many people who are way smarter than I am (I see it as a major objective of my job to hire people who are smarter than me).

Good point, and it’d be pretty sad if all we could ever manage to create were intelligences only as smart as or less smart than us. However, the people hired are all of the same very general architecture and construction. The delta between the top in any given area and the median is not the sort of orders-of-magnitude gap that an exponentially increasing intelligence could open up.

I think much of his response comes down to not thinking that the potential for a runaway scenario is real. And he’s probably right, for now. I’m not sure, though, whether we’ll know when the potential becomes real, and the timeline from start to something we can’t fully conceive of is pretty short once it does start.

A lot of the bad things humans do to each other are very specific to human nature. Behavior like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, preferring our next of kin to strangers, etc were built into us by evolution for the survival of the species. Intelligent machines will not have these basic behaviors unless we explicitly build these behaviors into them. Why would we?

Good thoughts and a good question. We do use iterative selection / evolution as part of our toolkit though, and we engage in adversarial training. Humans evolved all these behaviors for reasons, some of which could apply to any selection-driven growth of intelligence. Perhaps they are mitigable, or not applicable, but it’s not by definition a complete fallacy.

While I don’t think all those behaviors are necessarily emergent from any intelligence, I do think they could be accidentally encouraged. Even if these behaviors are just a human thing, humans are likely to influence the new intelligence, right? The body of knowledge it draws from is soaked in our behavior.

I do, however, think that wanting to protect and grow access to resources probably arises implicitly and intrinsically in any intelligence that seeks to increase its capability. That may well be enough to prompt one or more of the three motivations/issues I listed above.

Also, if someone deliberately builds a dangerous and generally-intelligent AI, others will be able to build a second, narrower AI whose only purpose will be to destroy the first one. If both AIs have access to the same amount of computing resources, the second one will win, just like a tiger, a shark, or a virus can kill a human of superior intelligence.

RE: the narrow AI / virus analogy… Sure, but look around. Even if tigers, sharks, and viruses can kill humans, one could reasonably argue they do not control more than humans do, or occupy the “top spot,” etc. And humans certainly can wield these narrower, more limited-in-scope things to further their goals. Why couldn’t an AI do so as well? Particularly if there is iterative, perhaps exponentially increasing improvement, what makes us think we’d be the (only) ones building the narrower AIs – wouldn’t it potentially be able to build them better, or understand the threat and out-defend faster than the attack?

In terms of the comment about computing resources, that’s an interesting topic in itself. Until such time as a new intelligence could handle creating and provisioning its own resources, perhaps without our knowledge, limited resources would certainly weigh heavily on the minds (of whatever type) thinking about such things. What would an intelligence think if it saw its reach constrained “artificially”? How do you keep it and the world around it safe without becoming a slave keeper? Over and over in these topics, that idea comes up against that of parenthood or mentoring.

Final note: many of the comments on this question are good and worth thinking about, and most of what I’ve written above is mentioned in some way (or responded to) in the comments. Check them out. Also, there’s an interesting article linked in the comments: http://www.gwern.net/Complexity%20vs%20AI

Q: Can AI ever become conscious?

Well, that depends on your definition of consciousness.

But my answer is “yes” for any reasonable definition of consciousness.

I think it’s just a consequence of being smart enough to learn, not just good models of the world, but good models of the world with yourself in it.

Consciousness may appear to us like a real mysterious property of our minds, but I think it’s just a powerful illusion.

Many of the problems people are asking themselves around consciousness remind me of the question people were asking themselves in the 17th century: the image on our retina is upside-down, how come we see right-side-up? The question makes us smile by its naiveté now.

Nailed it.

Q: Do you believe that we will reach the singularity point within the next century?

Singularity means different things to different people. Core to the concept is that there’s a point past which we can’t extrapolate from what we know now, which pretty much guarantees it’s going to mean different things to different people. Given that, his answer and the comments should be read on their own.

His points about the speed of innovation and development are interesting and should be considered, yet I think it’s worth working those points into the thought exercise of what would happen if capacity for thought did increase exponentially (or nearly so) in ways that were not bounded by the factors he mentioned. How might the world change when some things are limited to increasing in relatively plodding fashion versus others? We see some effects like that already – product, lifestyle, and mindshare displacement – with sometimes unexpected “tethers” back to other limitations that have not increased in the same way.

Finally,

Even general AI systems will ultimately be amplifiers of human intelligence, perhaps the way our neocortex amplifies the intelligence of our reptilian brain, but remains largely under its control.

Interesting point, but relate it back to his earlier statement about why they might then have the negative attributes of humanity. And to say the neocortex controls… I see the brain as a full system operating mostly outside of our control. We can influence it, try to layer it, but ultimately we end up explaining what we chose, inventing the story after the fact, versus consciously making the choices as we go.

Ultimately, lots of great food for thought in the whole Q&A and all the comments. I’m really glad I came across the session.
