Nick Bostrom: What happens when our computers get smarter than we are?

Here’s a solid talk from Nick Bostrom on how negligence in the design of superintelligent AI could have adverse effects on humanity.

From the talk I especially like this sentence: “If you create a powerful optimization process to maximise objective x, you better make sure your definition of x incorporates everything you care about.” If we set an intelligent AI loose on a problem, its objective would need to take into account every variable that aligns with the values of humanity.

This paperclip maximizer thought experiment from Nick helps illustrate the dangers of complacency, albeit in a humorous manner.
http://wiki.lesswrong.com/wiki/Paperclip_maximizer
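To make the quote concrete, here is a toy sketch of my own (not anything from Nick’s talk, and all names and numbers are hypothetical): a greedy optimizer told only to maximise paperclips will happily consume every resource it can reach, including ones we care about, while the same optimizer given a fuller objective stops.

```python
# Toy illustration of a misspecified objective x. A greedy optimizer
# converts resources into paperclips; nothing in the naive objective
# says farmland matters, so it gets consumed too.

def run_optimizer(objective, steps=10):
    """Greedy optimizer: each step, convert one unit of whichever
    resource most increases the objective into a paperclip."""
    state = {"paperclips": 0, "iron_ore": 5, "farmland": 5}
    for _ in range(steps):
        best = None
        for resource in ("iron_ore", "farmland"):
            if state[resource] == 0:
                continue
            trial = dict(state)
            trial[resource] -= 1
            trial["paperclips"] += 1
            if best is None or objective(trial) > objective(best):
                best = trial
        if best is None or objective(best) <= objective(state):
            break  # no move improves the objective
        state = best
    return state

# Objective x as literally stated: count paperclips, nothing else.
naive = lambda s: s["paperclips"]

# Objective x that "incorporates everything you care about":
# paperclips have some value, but farmland (food) is worth far more.
careful = lambda s: s["paperclips"] + 100 * s["farmland"]

print(run_optimizer(naive))    # {'paperclips': 10, 'iron_ore': 0, 'farmland': 0}
print(run_optimizer(careful))  # {'paperclips': 5, 'iron_ore': 0, 'farmland': 5}
```

Note the fix here isn’t making the optimizer any smarter; it’s making the objective actually say what we value, which is exactly Nick’s point.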

EDIT:

I should point out that neither Nick’s talk nor this post is about deterring us from the goal of creating superintelligence; instead, they highlight the threats we would likely face if we implemented intelligent AI without sufficient consideration for our safety.
