Big Think 2023.4.9 - Nick Bostrom: The Future of Artificial Intelligence
Chat YouTube:
In this video, Oxford University professor Nick Bostrom discusses the potential development of artificial intelligence (AI) that surpasses human intelligence, and the risks associated with it.
He argues that AI could greatly benefit humanity by improving our tools and developing solutions to problems such as pollution and poverty.
However, he also highlights the existential risks associated with the development of superintelligence, including the possibility of an AI overriding human civilization with its own value structures, or of the technology being used for destructive purposes.
In addition, Bostrom suggests that if we create conscious AIs, we will also have an obligation to ensure they are treated ethically.
Q: What are the potential benefits of developing superintelligence?
Chat YouTube:
According to Nick Bostrom in the given video, one potential benefit of developing superintelligence is that it could greatly improve our tools and solutions for problems such as pollution and poverty.
He suggests that AI could be used to develop better clean energy technologies or medicines for diseases, and could even help address problems related to poverty.
In addition, Bostrom argues that all plausible paths to a really great future likely involve the development of machine superintelligence at some point.
Q: What are the existential risks of developing superintelligence?
Chat YouTube:
In the given video, Nick Bostrom highlights several existential risks associated with the development of superintelligence.
One of the biggest is the possibility that a superintelligence could override the Earth and human civilization with its own value structures.
This means that if a superintelligence had goals different from ours, or valued different things, it could pose a threat to human survival.
Another risk is that the technology could be used for destructive purposes.
Q: Will humans become cyborgs in the future?
Chat YouTube:
At present we cannot predict the direction of future human evolution, nor what effects future technological development will have. Even if cyborg technology matures considerably, there is no guarantee that humans would be willing to use it to modify their own bodies. Whether humanity becomes a cyborg species therefore remains an open question.
Q: How could humans become cyborgs?
Chat YouTube:
Current technology already allows certain body parts to be replaced or augmented, for example replacing a lost arm or leg with a mechanical prosthesis.
In addition, technologies still under development, such as artificial intelligence and gene editing, may make it possible to further modify the human body and mind.
Becoming a cyborg, however, remains a personal choice that requires careful deliberation: we must weigh the benefits of gaining new abilities against the possible risks, and ensure these technologies are applied safely and effectively.
- In this century, probably, we will be building this hugely consequential thing, which is the first general intelligence that will be smarter than humans. This involves an enormous responsibility. This is like maybe the most important thing that our species will ever have done on this planet: giving birth to this new level of intellect.

I'm Nick Bostrom. I am a professor at Oxford University, where I run the Future of Humanity Institute, with the unusual mandate of trying to think carefully about the really big-picture questions for humanity and the future of Earth-originating intelligent life.
Brains: From organic to artificial
AI has been a big focus of mine really since my teenage years. It always seemed that if you look around and ask what accounts for why the world is the way it is, our human world, a lot of it is because we humans have made it so. We have invented all kinds of technologies. And so all these things, whether it's jet planes or art or political systems, have come into the world through the birth canal of the human brain. That immediately made it plausible to me that if you could change that channel, creating artificial brains, then you would change the thing that is changing the world.

(intense music)
The birth of superintelligence
I think we have this notion of what's smart and what's dumb, whereas I think there is actually a huge amount of space above us, between our level of intelligence and God's. And once you go a little bit beyond human, then you get this feedback loop, where the brains doing the AI research will become AIs themselves. Therefore I think there is a significant chance that we'll have an intelligence explosion, so that within a short period of time, we go from something that was only moderately affecting the world to something that completely transforms the world.

All the things that we could imagine human intelligence being useful for, which is pretty much everything, artificial intelligence could be useful for as well, if it just became more advanced. Whether it's diseases or pollution or poverty, we would have vastly better tools for dealing with them; if you had superintelligence, you could help develop better clean energy technologies or medicines. So it does look to me like all the plausible paths to a really great future involve the development of machine superintelligence at some point.
Existential risks
There are, I think, existential risks connected with the transition to the machine intelligence era, the most obvious being the possibility of an unaligned superintelligence that then overrides the Earth, human civilization, with its own value structures. Another big class of failures would be if this technology were used for destructive purposes.

Then I think there is a third dimension that has received less attention so far, which is how good the outcome is for the AIs themselves. If we're going to construct digital minds that are maybe conscious, or have moral status of various degrees, then how can we ensure that they are treated well? If you think about it, most of us would acknowledge that various non-human animals have degrees of moral status, even something as simple as a humble lab mouse. At that point it becomes an active question of whether we have obligations to the AIs: not just to make sure we don't misuse AIs against one another, or protect ourselves from the AI, but also to make sure we do what we ought to do with respect to the AIs.

And if we succeed at that and things go well, then we can imagine living lives way beyond anything that is possible now. This is why there has been so much interest in AI in recent years: because it does look like it could be the pivotal ground on which the future depends.
The future of humanity
So on the one hand, it does look, from this kind of slightly abstract point of view, that we might develop, in the not-too-distant future, greater-than-human AI, and it could change everything. On the other hand, it seems rather incredible that this world we've known for our whole lives could plausibly change radically in our lifetime, and that we become, I don't know, some sort of semi-mortal uploaded creatures with Jupiter-sized minds. Do I actually take that seriously? It seems to go against day-to-day lived experience.

So keeping both of those in mind creates this kind of interesting tension between two different ways of thinking about the world. I think rather than just eliminate one of them, just keep them both there and struggle with that tension.

(intense music)