Big Think, 2023.4.15 | Michael Wooldridge: Superintelligent AI is possible
Chat YouTube:
Throughout the history of artificial intelligence, people have been fascinated by the idea of creating conscious machines, possibly ones like human beings.
However, AI is not about creating life; it is about building machines that can perform tasks requiring intelligence, such as translating French into English.
There are two main approaches to AI:
Symbolic AI, which involves encoding human expertise and knowledge;
Machine learning, which involves showing the machine examples of the desired output and letting it learn on its own.
Q: What are the two main approaches to AI, and how do they differ?
Chat YouTube:
The two main approaches to AI are Symbolic AI and machine learning.
Symbolic AI involves encoding human expertise and knowledge, while machine learning involves showing the machine examples of the desired output and allowing it to learn on its own.
Symbolic AI is rule-based and requires rules and knowledge to be programmed explicitly, whereas machine
learning is data-driven: it learns from examples and does not require explicit programming of rules.
Q: What is the Hollywood dream of AI, and how does it relate to the myths of ancient Greece and medieval Prague?
Chat YouTube:
The Hollywood dream of AI is the ultimate goal of creating conscious machines,
possibly conscious in the same way that humans are.
This dream is not just about creating machines that can perform tasks, but about building machines that are sentient, self-aware, and able to think and feel the way humans do.
It resembles the ancient Greek and medieval Prague myths about creating life from non-living materials, such as the mythical blacksmith who forged metal creatures and the Golem fashioned from clay, respectively.
Throughout history, the idea of creating life from nothing has fascinated humankind.
Q: How does superintelligent AI differ from AGI?
Chat YouTube:
Superintelligent AI
refers to an AI system that can outperform humans at specific tasks; it is simply a powerful tool for one or more particular tasks, not an intelligence that surpasses humans overall.
AGI (artificial general intelligence)
refers to a higher-level form of AI that captures the full range of what the human brain can do, in a human-like way, from solving specific problems to possessing broad cognitive abilities. If AGI is achieved, we will have machines with the same intellectual capabilities as human beings.
AGI is the grander vision, enabling far more capabilities and a much wider range of applications.
0:00
- AI is not about trying to create life, right?
0:03
That's not what it's about, at all.
0:04
But it's kind of, very much feels like that.
0:09
I mean, if we ever achieved the ultimate dream of AI,
0:13
which I call the "Hollywood dream of AI,"
0:14
the kind of thing that we see in Hollywood movies,
0:16
then we will have created machines that are
0:19
conscious, potentially, in the same way
0:22
that human beings are.
0:24
So it's very like that kind of dream of creating life-
0:28
and that, in itself, is a very old dream.
0:30
It goes back to the ancient Greeks:
0:31
The Greeks had myths about the blacksmith
0:34
to the gods, who could create living creatures from metal.
0:37
In medieval Prague they had the myth of the 'Golem,'
0:41
which was a creature that was fashioned from clay
0:43
and brought to life.
0:46
You know, the dream of creating life from nothing.
0:49
So, it's a fascinating idea.
0:52
It's an idea that's been there throughout human history,
0:55
but it's an idea that we seem to now
0:57
have the tools to potentially make real.
1:01
Hi, my name's Mike Wooldridge.
1:02
I'm a professor of computer science at the University
1:04
of Oxford and an AI researcher, and most recently,
1:07
I'm the author of "A Brief History of AI"
1:09
out now from Flatiron Books.
The birth of AI & machine learning
1:17
So John McCarthy was an American researcher,
1:20
and he applied for funding from the Rockefeller Foundation
1:23
for a summer school at Dartmouth.
1:25
What he had to do for this funding bid was to give a name
1:29
for what they wanted to do.
1:30
And so he picked the term Artificial Intelligence,
1:33
and it's the name that stuck.
1:35
So what McCarthy was working in was a trend
1:39
in artificial intelligence, which is called 'Symbolic AI.'
1:43
When we consider what we should do,
1:45
we kind of have a conversation with ourselves:
1:47
"I should do this because X and Y and Z,
1:49
no I shouldn't do it because A and B and so on."
1:52
And the Symbolic AI is about trying to recreate
1:56
that kind of reasoning.
1:58
So, how do we approach artificial intelligence?
2:00
How do we go about doing it?
2:02
We wanna build a machine that can do some task
2:04
which requires intelligence in humans,
2:06
let's say translating French into English.
2:09
So the Symbolic AI view of this
2:12
is that what you do is you go
2:14
and find somebody who's really expert and you find out
2:18
from them all the knowledge that they use
2:21
when they translate from French to English,
2:23
and you code it up
2:24
in what are computer versions of sentences.
2:28
And if you do that right, so the idea goes,
2:31
then the machine will have that human expertise.
2:34
That's the Symbolic AI approach, right,
2:37
that human intelligent behavior is a problem of knowledge.
2:40
If you give the machine the right knowledge,
2:43
it will be able to do the problem.
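To make that concrete, here is a minimal, hypothetical Python sketch (not from the talk) of the Symbolic AI recipe for the French-to-English example: an expert's vocabulary and one word-order rule are written down explicitly, and the program knows exactly what we told it and nothing more. The lexicon, the rule, and the example sentence are all invented for illustration.

```python
# Symbolic AI sketch: human expertise is coded up explicitly as a
# hand-built lexicon plus a hand-written word-order rule.
# (Illustrative only; real symbolic translation systems were far larger.)

# Hand-coded lexicon: the "knowledge" an expert translator supplied.
LEXICON = {
    "le": "the", "la": "the", "chat": "cat", "noir": "black",
    "mange": "eats", "souris": "mouse",
}

# Hand-coded rule: French adjectives usually follow the noun,
# so move them in front of the preceding word for English.
ADJECTIVES = {"noir"}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    reordered = []
    for word in words:
        if word in ADJECTIVES and reordered:
            # Apply the word-order rule: adjective goes before the noun.
            reordered.insert(len(reordered) - 1, word)
        else:
            reordered.append(word)
    # Dictionary lookup; unknown words are passed through unchanged.
    return " ".join(LEXICON.get(w, w) for w in reordered)

print(translate("le chat noir mange la souris"))  # -> "the black cat eats the mouse"
```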
2:45
But there's a different trend.
2:47
It says, "Look, forget about trying to tell the machine
2:50
how to do it by giving it the knowledge.
2:53
Just show the machine what you want it to do,
2:56
and get the machine to learn."
2:58
In the French to English translation example,
3:00
you're not telling it how to do the translation.
3:03
You're just saying, "Look, for this input,
3:05
this is what I would want you to produce as the output.
3:07
For this French input, I would want this English output."
3:10
And you give it lots of examples like that.
3:12
And the idea is it will learn how to do it.
3:15
So that's what machine learning is all about.
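To contrast with the symbolic sketch above, here is an equally hypothetical machine-learning sketch in Python (again, not from the talk): the program is given only (French, English) example pairs and extracts word correspondences from co-occurrence statistics, with no translation rules programmed in. The tiny training set and the Dice scoring heuristic are made up for illustration.

```python
# Machine-learning sketch: learn word correspondences from examples
# instead of being given rules. (Toy heuristic; it only handles
# words that appeared in the training examples.)
from collections import Counter, defaultdict

# Training data: pairs of (French input, desired English output).
examples = [
    ("le chat dort", "the cat sleeps"),
    ("le chien dort", "the dog sleeps"),
    ("le chat mange", "the cat eats"),
    ("le chien mange", "the dog eats"),
]

cooc = defaultdict(Counter)   # cooc[f][e]: how often f and e appear in the same pair
fr_count = Counter()          # overall frequency of each French word
en_count = Counter()          # overall frequency of each English word
for fr, en in examples:
    fr_words, en_words = fr.split(), en.split()
    fr_count.update(fr_words)
    en_count.update(en_words)
    for f in fr_words:
        for e in en_words:
            cooc[f][e] += 1

def translate(sentence: str) -> str:
    # For each French word, pick the English word with the highest Dice
    # association score -- a statistic estimated from the examples,
    # not a rule anyone wrote down.
    out = []
    for f in sentence.split():
        best = max(cooc[f], key=lambda e: 2 * cooc[f][e] / (fr_count[f] + en_count[e]))
        out.append(best)
    return " ".join(out)

print(translate("le chien mange"))  # -> "the dog eats", learned purely from examples
```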
3:19
And the techniques themselves are not a new thing.
3:21
Two researchers called McCulloch and Pitts,
3:23
in the 1940s, came up with this idea
3:25
for what are now called 'neural networks,'
The AI winter
3:28
but throughout the 60s and early 70s,
3:31
really progress stalled.
3:33
And so there was a backlash against AI in the mid-1970s,
3:37
and that was called 'The AI Winter.'
3:39
It turned out that to make neural networks work,
3:42
you needed lots and lots of data-
3:44
but also, these things are computationally very expensive.
3:47
You need lots of compute power
3:49
in order to make these neural networks work.
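For a sense of the 1940s idea, here is a hypothetical sketch (not from the talk) of a single McCulloch-Pitts-style threshold "neuron." Modern neural networks stack enormous numbers of such units and tune their weights from examples, which is exactly why they demand so much data and compute; the function name and the AND-gate demo below are invented for illustration.

```python
# A single threshold unit in the spirit of McCulloch and Pitts:
# it sums weighted inputs and "fires" if the sum reaches a threshold.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With hand-picked weights, one unit can compute simple logic such as AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts_neuron([a, b], [1, 1], threshold=2))
```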
3:51
And that's the area where we've seen lots
3:53
of progress over the last 15 years.
3:55
That's really the reason that we're
3:56
having this conversation today.
3:58
That's the reason that AI is such an important field
4:01
at the moment.
The next era of intelligence: AGI
4:05
So what most of contemporary AI is about is focused
4:09
on getting AI systems to do very, very narrow tasks,
4:13
very, very specific things.
4:14
And in those specific tasks, it might be better
4:18
than any living human being, but it can't do anything else.
4:22
You can drive a car, I can drive a car,
4:23
I can then get out of the car and play a game
4:25
of football, rather badly in my case, and then
4:27
make a good meal and tell a joke, and I can do that-
4:31
the whole range of things.
4:32
You consider a driverless car,
4:34
however good it is at driving,
4:36
it's doing one tiny narrow thing.
4:39
So, the grand dream of AI,
4:42
it's not kind of formalized anywhere,
4:44
there's no very specific version of it, but nowadays it goes
4:47
by the name of 'Artificial General Intelligence,' AGI.
4:50
And basically what it means if AGI succeeds,
4:53
if we achieve that grand dream,
4:55
then we'll have machines that have the same
4:58
intellectual capabilities that human beings do-
5:00
but there's one other fascinating part of the puzzle.
Why do humans have big brains?
5:04
So a colleague of mine here
5:06
at the University of Oxford called Robin Dunbar,
5:08
he's an evolutionary psychologist,
5:10
and he was interested in the following question:
5:13
Why do human beings have big brains?
5:18
It's a very natural question.
5:19
Why do human beings have big brains?
5:21
What Dunbar became convinced by was the idea
5:25
that we have big brains because we are social animals,
5:29
and we have big brains to be able to cope
5:32
with many social relationships.
5:34
You know, where I keep track of: 'What Bob thinks
5:37
about what Alice thinks about Bob, you know,'
5:39
that kind of thing-
5:40
how these stand in relation to one another.
5:43
And what I found about that so fascinating is
5:46
that it means that human intelligence is,
5:48
in a fundamental way, social intelligence.
Creating conscious machines
5:53
Back in the 1950s when John McCarthy
5:55
and his contemporaries were thinking about AI,
5:58
what they wanted to do was to demonstrate
6:00
that machines could do things like learn and solve problems.
6:05
And it's only much more recently
6:07
that AI has become concerned with these social aspects.
6:12
What happens if you have two AI systems
6:14
that can start to interact with one another?
6:16
Then how do we give them social skills,
6:18
skills like cooperation, the ability to work as a team,
6:22
to coordinate with each other, to negotiate with each other?
6:25
So, how might we get there, to conscious machines?
6:29
One of the steps along that path is the idea
6:33
that we will be able to build machines,
6:36
which can put themselves in another's mind.
6:39
I think that's a step in the right direction,
6:41
but the truth is we don't know how to even take that step
6:44
at the moment.
6:49
Human beings are wonderful creations.
6:52
I mean, they are the most incredible creations
6:55
in the entire Universe,
6:56
but there's nothing magic about them.
6:57
We are a bunch of atoms that are bumping
6:59
up against each other.
7:01
For that reason, I don't think there should be
7:03
any logical reason that says
7:05
that conscious machines aren't possible.
7:07
But saying that something is logically possible and saying
7:10
that we know how to do it are completely different things.
7:12
Do we know how to do it?
7:13
Absolutely not.
7:14
And actually, one of the fundamental problems
7:17
is that consciousness itself
7:19
in human beings is really not remotely understood.
7:23
It is one of the big mysteries in science.
7:26
How does that large number of neurons that are connected
7:29
in all those kind of weird ways create consciousness
7:33
and self-awareness, the human experience?
7:37
So the path ahead I think is gonna be slow and tortuous.
7:40
These are fearsomely complex things that are being created.
7:44
But, one of the fascinating things, not about AI,
7:47
but about computing generally,
7:49
is that the limits to computing:
7:51
they're not the limits of concrete or steel
7:53
or anything like that in the physical world.
7:55
You're really bounded only by what you can imagine.
8:02
- Get smarter faster, with videos
8:04
from the world's biggest thinkers.
8:11
To learn even more from the world's biggest thinkers,
8:14
get Big Think+ for your business.