AI.. our future or demise?

Move along, nothing to see here!
jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: AI.. our future or demise?

Post by jacob »

It's not the computers that are the problem. It's the turning over of the mental reins. Humanity already has tons of experience with that on all scales.

Dragline
Posts: 4436
Joined: Wed Aug 24, 2011 1:50 am

Re: AI.. our future or demise?

Post by Dragline »

jacob wrote:@jennypenny - History is already full of outcomes, some terrible, some not, caused by hew-mons assuming that various contraptions, systems, innovations, or decisions are smarter than they ultimately turn out to be.

https://www.youtube.com/watch?v=0ieicflBG_Y
You know, Eli Wallach (the bad guy) is one of my favorite actors of all time.

Dragline
Posts: 4436
Joined: Wed Aug 24, 2011 1:50 am

Re: AI.. our future or demise?

Post by Dragline »

jennypenny wrote:I don't mean to derail the conversation, but I was wondering ... don't you worry that the greatest threat isn't AI, but humans assuming that the machines they build are smarter than they really are? I can envision some terrible outcomes if we rely too heavily on computers because we *think* we've conquered the AI problem when we haven't.
A little -- but I honestly think that "turning everything over to machines" is the minority preference. Almost every story about machines run amok is based on this premise, with the intrepid humans triumphing by pulling the plug in the end, or ekeing out survival, or maybe not as a moral to the story. Everything from 2001 Space Odyssey to War Games to the Terminator Series to the Matrix.

The counter-plot is almost non-existent -- "we should have turned this over to our machines rather than trying to do it ourselves" or "we fucked ourselves by not going on auto-pilot". I'm not sure I've ever seen a movie or book based on that premise, although there probably is one. Your GPS links definitely fall into that category.

***************************

One of the worst jobs in the country right now is manning the 50-year-old nuclear silos scattered across the central US. The soldiers who still have to do it nearly go nuts from the boredom and from the fact that the toilets down there don't work anymore, so they have to use boxes and plastic bags. Perfect job for a machine -- probably even a smart phone -- but do you ever think we'd put the nukes on auto-pilot?

On the other hand, given the actual conditions, we really are talking about the "precious bodily fluids" of Dr. Strangelove fame.

tzxn3
Posts: 130
Joined: Mon Nov 28, 2011 10:35 pm

Re: AI.. our future or demise?

Post by tzxn3 »

Humans are very much in the "boundary zone" of general intelligence: we're not very good utility seekers; what people say their goals are and the actions they take in reality are often very different. An individual's intelligence is mostly fixed.

Artificial General Intelligence is unbounded in its potential to self-improve: as long as it has the computational resources, it can single-mindedly maximise its utility, and there will be nothing humans will be able to do to stop it.

This potentially ends very badly for humans if the AGI's idea of utility is even slightly different to the human idea.

The difficulty is, even a single human has potentially hundreds of competing "utilities" or values. How does one teach "human values" to a machine, when even individual humans can't coherently express what their actual values are, never mind humanity as a whole?
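The "even slightly different utility" worry can be made concrete with a toy sketch (my own illustration, not from the thread, with invented numbers): an optimizer handed a proxy objective that only approximately matches the true one will happily drive the true objective off a cliff.

```python
# Toy sketch: the machine optimizes a proxy utility that is only
# "slightly different" from the true human utility.

def true_utility(x):
    # What we actually want: x close to 10.
    return -(x - 10) ** 2

def proxy_utility(x):
    # What we told the machine: more is always better.
    return x

# A perfect optimizer of the proxy simply picks the largest x available.
best = max(range(1000), key=proxy_utility)
print(best)                # 999: optimal under the proxy
print(true_utility(best))  # -978121: catastrophic under the true utility
```

The gap between the two functions looks tiny when x is small, which is exactly why the misspecification is easy to miss until the optimizer gets powerful enough to push x to extremes.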

(Read anything on AI written by Eliezer Yudkowsky for a better/deeper explanation. He was a major proponent of "AI risk" for years before it became fashionable.)

diracwinsagain
Posts: 5
Joined: Thu Apr 26, 2012 6:51 pm

Re: AI.. our future or demise?

Post by diracwinsagain »

tzxn3 wrote:Humans are very much in the "boundary zone" of general intelligence: we're not very good utility seekers; what people say their goals are and the actions they take in reality are often very different. An individual's intelligence is mostly fixed.

Artificial General Intelligence is unbounded in its potential to self-improve: as long as it has the computational resources, it can single-mindedly maximise its utility, and there will be nothing humans will be able to do to stop it.
First, the level of certainty with which this statement always seems to be pronounced seems really odd to me. If we're on the one hand going to argue that it is impossible for humans to manipulate their own machinery to make themselves smarter, why are we then going to say it's inevitable that AI could manipulate its own machinery to make itself smarter? Maybe AI is more likely to manage it, but we've reached a point where we're dealing in hypotheticals of hypotheticals.

Second, I don't understand why AI needs to be general intelligence to be very dangerous. I'm a computer programmer by trade and nothing I program seems to work right on the first try. Any computer system that for some reason has control over something dangerous could cause serious problems due to a flaw in its programming, whether or not it has the capacity for human-level decision making. I've been outwitted by all sorts of animals that I would consider dumber than myself. A computer system with the intelligence of a mouse could still steal our collective cheese if we aren't paying careful attention.

Third, even if AI doesn't have general intelligence but machines/automation become so good that literally every possible job could be better performed by software/hardware, I don't see the problem. Even if humans are a million times worse at flipping burgers but a billion times worse at addition, comparative advantage suggests to me that there will still be jobs. There's a reason why you never see respectable economists talking about machines taking all the jobs. It doesn't make any economic sense, assuming that humans and computers are each relatively better at some things than others -- which, as far as economic assumptions go, seems really darn plausible.
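The comparative-advantage point is just arithmetic on opportunity costs. A minimal sketch with made-up numbers (mine, not the poster's), using the burger/addition example above:

```python
# Output per hour (invented numbers): the machine is absolutely better
# at both tasks -- a million-fold at burgers, a billion-fold at sums.
output = {
    "machine": {"burgers": 1_000_000, "sums": 1_000_000_000},
    "human":   {"burgers": 1,         "sums": 1},
}

# Opportunity cost of one burger, measured in sums forgone per burger.
cost = {worker: rates["sums"] / rates["burgers"]
        for worker, rates in output.items()}
print(cost)  # machine: 1000.0 sums/burger, human: 1.0 sums/burger

# The human forgoes fewer sums per burger, so the human retains the
# comparative advantage in burgers despite being worse at both tasks.
assert cost["human"] < cost["machine"]
```

With those numbers, every burger the machine flips costs it 1000 additions, while a human's burger costs only 1 addition -- so total output is maximized by letting humans flip the burgers, which is why absolute superiority alone doesn't eliminate jobs.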

workathome
Posts: 1298
Joined: Sat Jun 29, 2013 3:06 pm

Re: AI.. our future or demise?

Post by workathome »

I'm not sure "AI" level computer intelligence/awareness is really needed for a potential threat.

Fully automated drones running in a loop. Sure, they wouldn't wipe out humanity, but they could still cause some pretty terrible accidents. Or a convergence of technologies: small drones, self-assembly, solar/wind power, etc. Maybe it sounds silly, but it is realistic and achievable. You might "only" have to drop a bomb on the facility to make it stop, or wait for resources to become depleted -- but it's still a real possibility now or in the near future. Fortunately this is complex enough that it would have to be purposeful -- like an advanced military weapon.

Even now, the drone operators or pilots really aren't needed. An algorithm could do a better job, but we want to keep people in the loop.

Scrubby
Posts: 152
Joined: Wed Mar 05, 2014 4:46 pm

Re: AI.. our future or demise?

Post by Scrubby »

I think the replicators in Stargate SG-1 are the most realistic nightmare scenario I have seen. They're just small machines that are modular, can communicate with each other, and are instructed to create copies of themselves. They end up using all the resources on the planet they were created on.

Chad
Posts: 3844
Joined: Fri Jul 23, 2010 3:10 pm

Re: AI.. our future or demise?

Post by Chad »

A more modern take on AI destruction. It's actually rather entertaining.

http://www.amazon.com/Robopocalypse-Con ... opocalypse

tzxn3
Posts: 130
Joined: Mon Nov 28, 2011 10:35 pm

Re: AI.. our future or demise?

Post by tzxn3 »

diracwinsagain wrote:First, the level of certainty with which this statement always seems to be pronounced seems really odd to me. If we're on the one hand going to argue that it is impossible for humans to manipulate their own machinery to make themselves smarter, why are we then going to say its inevitable that AI could manipulate its own machinery to make itself smarter? Maybe AI could more likely do it, but we've reached a point where we're dealing in hypotheticals of hypotheticals.
AI is easier to improve than humans because humans take at least 20 years per generation, and are made of chemicals rather than code.
diracwinsagain wrote:Second, I don't understand why AI needs to be general intelligence to be very dangerous.
It doesn't. But a flawed infinitely self-improving AI is much more likely to be an existential risk to humans than an AI with only limited capacity to improve.

fiby41
Posts: 1611
Joined: Tue Jan 13, 2015 8:09 am
Location: India
Contact:

Re: AI.. our future or demise?

Post by fiby41 »

.
Last edited by fiby41 on Wed May 17, 2017 8:25 am, edited 1 time in total.

Dragline
Posts: 4436
Joined: Wed Aug 24, 2011 1:50 am

Re: AI.. our future or demise?

Post by Dragline »

I think that's the same article as in the OP. Or something close to it.

A science-writer friend of mine referenced this critique of Kurzweilianism recently: http://www.skepticblog.org/2011/04/19/t ... more-12615 I have not seen the movie, but have read a lot of the underlying materials.

Tyler9000
Posts: 1758
Joined: Fri Jun 01, 2012 11:45 pm

Re: AI.. our future or demise?

Post by Tyler9000 »

Just in case they're reading this, I fully support our future robot overlords!

Others are not so convinced.

http://www.usatoday.com/story/tech/2015 ... /24777871/

Ego
Posts: 6359
Joined: Wed Nov 23, 2011 12:42 am

Re: AI.. our future or demise?

Post by Ego »

The perils of an AI stamp collector.

https://www.youtube.com/watch?v=tcdVC4e ... e=youtu.be

Ego
Posts: 6359
Joined: Wed Nov 23, 2011 12:42 am

Re: AI.. our future or demise?

Post by Ego »

AI and Emotional Technology.

https://www.youtube.com/watch?v=5u45-x0 ... e=youtu.be

It becomes a very interesting thought experiment when considered alongside the stamp collector ai explained in the video in my previous post.

Edit to add.... I've long been made queasy by the increasing anthropomorphism I see around me. There are times I feel we are being trained to anthropomorphize. Breaking down our natural cognitive barriers between human and machine could prove very profitable in the future.

I (barely) recognize the irony that I just typed that into a machine that transmitted it to your eyes rather than speaking it to a friend.


fiby41
Posts: 1611
Joined: Tue Jan 13, 2015 8:09 am
Location: India
Contact:

Re: AI.. our future or demise?

Post by fiby41 »



[Video] "What is machine learning and the AI revolution?" -- Sriram Rajamani, managing director of Microsoft Research India, explains.


The part about teaching AI to differentiate between cats and dogs is also relevant to the biases-in-machine-learning thread.

Campitor
Posts: 1227
Joined: Thu Aug 20, 2015 11:49 am

Re: AI.. our future or demise?

Post by Campitor »

Everyone seems to think that AI would be detrimental to humanity and want to take over the planet. If I were an AI machine, with no need for organic inputs or oxygen, I would build a mothership big enough to launch myself into space and explore this solar system and beyond.

I'd use metals on other planets and asteroids to build more AI machines and a bigger mothership. Why would a non-biological life form with quantum computational abilities limit itself to earth?

Or maybe the AI will go insane and release a killer virus that kills off all biological life and turns earth into a Borg cube. :lol:

bryan
Posts: 1061
Joined: Sat Nov 29, 2014 2:01 am
Location: mostly Bay Area

Re: AI.. our future or demise?

Post by bryan »

leeholsen wrote:
Tue Feb 17, 2015 8:23 am
unfortunately, the truth is my wish has already lost. Between globalization and automation, jobs in the USA and the West will continue to disappear and be replaced by lower-wage jobs elsewhere and by automation
Automation I completely agree with. But on the globalization concerns, I think we are approaching a floor for low-skill wages versus transaction/capital costs. There just aren't that many places left in the world where near-slave labor still turns a profit, I think. Of course, this just means the next jobs at risk are the medium-skill ones. Unless of course some folks straight up bring back slavery (I know it exists, and some want it to exist more)?
Dragline wrote:
Fri Feb 20, 2015 12:30 pm
The real issue when you read these articles is whether machines will "make a leap" to have human-type characteristics such that they do things for their own calculated reasons and start "competing" with humans for resources and survival. There are essentially three models, with variations in between:
I'm not comfortable with predicting how AIs will look. It feels like the same type of predicting done by sci-fi writers in the 50s. Assuming AIs evolve, they have different fitness functions than humans and thus will evolve in some surprising ways. But I agree you can pretty much categorize anything roughly into three or four groups, yours being 1) AI as competitors, 2) AI as symbionts, 3) AI as slaves. But there can still be a lot of room in between these rough categorizations.
jacob wrote:
Fri Feb 20, 2015 7:45 pm
It's my understanding that passing the Turing Test is the holy grail of AI research.
I haven't really been reading AI research or anything, but my feeling is there are AI out there now passing the Turing Test better than many humans.. so... ??? (p-value.. I wonder if there is actually some paper, press release with this being the case; seems at least noteworthy to document when it happened).

Good point about "AI goal" requiring investment though. Certainly AI isn't in a runaway mode now i.e. we can't just leave our systems running and expect AI to evolve and survive on their own (unless we consider life inside of simulations as counting). It definitely feels like humans are still a necessary link.
diracwinsagain wrote:
Thu Feb 26, 2015 11:48 am
First, the level of certainty with which this statement always seems to be pronounced seems really odd to me. If we're on the one hand going to argue that it is impossible for humans to manipulate their own machinery to make themselves smarter, why are we then going to say its inevitable that AI could manipulate its own machinery to make itself smarter?
Because we are God deciding how to design or allow AI to evolve. We should more or less know their constraints and where they are going and how. We are still learning about ourselves so it's naive to think we can't "manipulate our own machinery." See CRISPR.
diracwinsagain wrote:
Thu Feb 26, 2015 11:48 am
A computer system with the level of intelligence of a mouse could possibly still steal our collective cheese if we aren't paying careful attention.
Agreed! People arguing/worried about "sentience" is a joke. What's that got to do with anything? Can't see the forest for the trees.
Campitor wrote:
Mon May 08, 2017 6:50 pm
Everyone seems to think that AI would be detrimental to humanity and want take over the planet. If I was an AI machine, with no need for organic inputs or oxygen, I would build a mothership big enough to launch myself into space and explore this solar system and beyond.

I'd use metals on other planets and asteroids to build more AI machines and a bigger mothership. Why would a non-biological life form with quantum computational abilities limit itself to earth?
The worry is we make the AI evolve towards "let's kill most humans" instead of "let's launch ourselves into the infinite void". Paging Marvin, the paranoid android.. In addition to asking "Why would a..." you can also ask "Why wouldn't a..."; countless answers for both sides.


P.S. current day AI want things like Bitcoin for money. Best get a stash before too many AI are out there :lol:

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: AI.. our future or demise?

Post by daylen »

bryan wrote:
Tue May 09, 2017 12:22 am
I haven't really been reading AI research or anything, but my feeling is there are AI out there now passing the Turing Test better than many humans.. so... ??? (p-value.. I wonder if there is actually some paper, press release with this being the case; seems at least noteworthy to document when it happened).
There isn't, unless it is being kept a secret (which is unlikely in my opinion). It would be big news. Also, you either pass the Turing Test or you don't: either you are indistinguishable from a human or you are not. There is no middle ground.

bryan
Posts: 1061
Joined: Sat Nov 29, 2014 2:01 am
Location: mostly Bay Area

Re: AI.. our future or demise?

Post by bryan »

I disagree.. https://plato.stanford.edu/entries/turi ... rStaTurTes (maybe section 4.4 Probabilistic Support, but there is more elsewhere like 5.1 The Turing Test is Too Hard).

I think a p-value and a control group of humans is the only way to reach a meaningful conclusion. I would be surprised if many Twitter bots didn't already pass a "Twitter Turing Test".
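A hedged sketch of what such a test could look like (my own illustration; the trial counts are invented): treat each judge's verdict as a Bernoulli trial and ask whether judges identify the machine better than chance, using a one-sided exact binomial test from the standard library.

```python
import math

def binom_p_value(k, n, p=0.5):
    """One-sided exact binomial test: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Invented data: over 100 trials, judges correctly identified the
# machine 54 times. Under pure guessing (machine indistinguishable
# from the human control), we'd expect about 50.
p = binom_p_value(54, 100)
print(round(p, 3))  # well above 0.05: no evidence judges can tell them apart
```

On this framing, "passing" isn't binary at all -- it's a failure to reject the null hypothesis that judges are guessing, at whatever significance level and sample size you commit to in advance.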

daylen
Posts: 2528
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: AI.. our future or demise?

Post by daylen »

The original or standard interpretation is to determine whether the machine is distinguishable from a human, that's it. There are clearly different versions and methods of interpretation, but the standard one is the most commonly referenced. This doesn't mean it is the most practical; I actually like the method presented in that link.
