AI.. our future or demise?

Move along, nothing to see here!
northman
Posts: 74
Joined: Fri Mar 18, 2011 2:48 pm

AI.. our future or demise?

Post by northman »

I read a very good article over at Waitbutwhy.com about AI.

I for one enjoyed the piece, and it really cemented my belief that I should quit my job and enjoy the remaining years we have left (tongue in cheek).

Part 1:
http://waitbutwhy.com/2015/01/artificia ... ion-1.html

Part 2:
http://waitbutwhy.com/2015/01/artificia ... ion-2.html


What are your thoughts?

leeholsen
Posts: 325
Joined: Tue Apr 16, 2013 6:38 pm

Re: AI.. our future or demise?

Post by leeholsen »

i say AI and automation are our demise. i realize this is actually very anti-ere, since being for labor, wages and manufacturing gets away from the ere mindset of living on your needs and controlling your instant-gratification self; but i also know that most people in the usa are so bought into the consumer culture that they need a growing workforce to support it.

unfortunately, the truth is my wish has already lost. between globalization and automation, jobs in the usa and the west will continue to disappear, replaced by lower-wage jobs elsewhere and by automation; but i still wish it, because there are going to be some tough days in the usa when it can no longer attract enough jobs to keep wages increasing, and people have to cut expenses and downsize, and a lot of people who are used to the good life will not take that change easily.

i have a book recommendation for you if you liked this article: Future Files. it's a bunch of predictions for the next 50 years, mostly about how the world will work, shop and live.

Dragline
Posts: 4436
Joined: Wed Aug 24, 2011 1:50 am

Re: AI.. our future or demise?

Post by Dragline »

This is the Kurzweilian thesis that has been out there for some time now. What he is really seeking is some kind of immortality wherein his "essence" can be transferred from his brain to an electronic device. He also hopes to resurrect his father's essence in some way.

There are always gaps in futurists' predictions, but sometimes you have to look closely for them. In the articles we read things like this: "This is the icky part. The truth is, no one really knows how to make it smart—we’re still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work."

That last assumption -- that "one of these far-fetched strategies will work" -- is simply a leap of faith about transferring what we would call "consciousness" to machines. Maybe one will and maybe one won't -- we really don't know.

Then we get to the inevitable creation of God on Earth, which is the stock-in-trade of utopian and dystopian visions alike: "As far as we’re concerned, if an ASI comes to being, there is now an omnipotent God on Earth—and the all-important question for us is: Will it be a nice God?"

Philosophy-wise, this is a funny "modern" issue. As many have pointed out, futurists are often in effect trying to create new quasi-religions out of science -- you have to ask yourself why there is any mention of "God" at all in those articles -- was that really necessary? A general critique of these conceits (http://www.bbc.co.uk/news/magazine-14944470):

"Science hasn't enabled us to dispense with myths. Instead it has become a vehicle for myths - chief among them, the myth of salvation through science. Many of the people who scoff at religion are sublimely confident that, by using science, humanity can march onwards to a better world.

But "humanity" isn't marching anywhere. Humanity doesn't exist, there are only human beings, each of them ruled by passions and illusions that conflict with one another and within themselves.

Science has given us many vital benefits, so many that they would be hard to sum up. But it can't save the human species from itself.

Because it's a human invention, science - just like religion - will always be used for all kinds of purposes, good and bad."

*****************************************

In my experience, futurists tend to be quite vain and often wrong, but the errors are buried by survivorship bias: with lots and lots of predictions, some of them a bit vague, SOMEBODY will make the right guess and can then point back and convince you how brilliant they were/are. But think about all the things that were predicted some time ago and have not happened -- usable fusion reactors, nuclear planes, improved propulsion systems that would let us easily go to Mars, and of course, the flying car.

The other thing futurists always get wrong is the implicit assumption that all new technologies will somehow be accessible to, and affect, human societies equally. There are never any people living in horrible conditions in the futurist world. But even today, something like half of the world's population has never accessed the internet. I can only imagine that the super technologies will be available only to certain people in certain places, who may wish to restrict their use to exploit their own advantages -- humans are like that. (See the nascent cryonics industry.) I don't think any of these things will change the way humans generally behave, but they will magnify their goodness and badness as the case may be.

*****************************************

So where I come out is that I am sure that some of what is described in the articles will come to pass. And some of it will not, or be available only to a relatively few people. And I would not be so vain as to "bank" on any particular forecast, but will try to be prepared for the possibilities.

As an aside, I also think of the peculiar legal issues some of this would raise -- if your essence is transferred to a machine, are you still technically alive and allowed to keep managing all of your investments? Or maybe you transfer them to some kind of trust or corporate entity that effectively allows you to keep control.

jennypenny
Posts: 6851
Joined: Sun Jul 03, 2011 2:20 pm

Re: AI.. our future or demise?

Post by jennypenny »

This is the topic of John Brockman's annual question for 2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Lots of interesting responses.

-------

This EconTalk podcast with Kevin Kelly (KK from the Technium) also touches on the subject in the second half. It starts out slowly, but Kelly warms up a bit and it's pretty good.

disparatum
Posts: 61
Joined: Sun Mar 30, 2014 3:07 pm

Re: AI.. our future or demise?

Post by disparatum »

Dragline wrote: As an aside, I also think of the peculiar legal issues some of this would raise -- if your essence is transferred to a machine, are you still technically alive and allowed to keep managing all of your investments? Or maybe you transfer them to some kind of trust or corporate entity that effectively allows you to keep control.
For some other implications, watch the show Black Mirror, specifically the latest episode: https://en.wikipedia.org/wiki/White_Chr ... _Mirror%29

The series as a whole is pretty fascinating and dark, and I think it seeks to contradict the sort of specious assumption that with technology we'd somehow overcome or improve upon human weaknesses. Its counter-argument, I guess, is that technology simply allows us to carry our humanity to greater extremes, and that it's a mistake to conflate the two.

akratic
Posts: 681
Joined: Thu Jul 22, 2010 12:18 pm
Location: Boston, MA

Re: AI.. our future or demise?

Post by akratic »

Dragline wrote:That last assumption -- that "one of these far-fetched strategies will work" -- is simply a leap of faith about transferring what we would call "consciousness" to machines. Maybe one will and maybe one won't -- we really don't know.
I don't think you need a leap of faith to think that as technology advances, artificial intelligence will surpass human intelligence. I mean, we already have a blueprint for a sophisticated thinking machine (the human being); we just haven't finished understanding and reverse engineering it yet. Our technological intelligence is increasing monotonically -- and exponentially -- while our human intelligence is approximately fixed.

The only things I can really imagine standing in the way:
1) technological progress halts
or
2) that which makes us smart is something non-physical and not part of the material world, something that can never be studied or understood or built by humans, something like an immaterial soul

I find 2) a bit too silly, but there are a number of disasters that could lead to 1), such as humans blowing up the planet with weapons, or running out of resources first (oops!).

Who knows the timeline, but barring disaster it really just seems like a matter of time to me until our creations are smarter than we are in every way. What those creations do at that point is anybody's guess.

henrik
Posts: 757
Joined: Fri Apr 13, 2012 5:58 pm
Location: EE

Re: AI.. our future or demise?

Post by henrik »

There is also the aspect of ever-increasing connectedness. You might accidentally create a vacuum cleaner with an IQ of 300 and never know. If it's connected to the Internet, though, the whole thing becomes much more unpredictable. Those mysterious Chinese hackers? Maybe they're actually smart TVs and thermostats in disguise, making their first attempts at taking over? :)

Dragline
Posts: 4436
Joined: Wed Aug 24, 2011 1:50 am

Re: AI.. our future or demise?

Post by Dragline »

akratic wrote:
Dragline wrote:That last assumption -- that "one of these far-fetched strategies will work" -- is simply a leap of faith about transferring what we would call "consciousness" to machines. Maybe one will and maybe one won't -- we really don't know.
I don't think you need a leap of faith to think that as technology advances, artificial intelligence will surpass human intelligence. I mean, we already have a blueprint for a sophisticated thinking machine (the human being); we just haven't finished understanding and reverse engineering it yet. Our technological intelligence is increasing monotonically -- and exponentially -- while our human intelligence is approximately fixed.

The only things I can really imagine standing in the way:
1) technological progress halts
or
2) that which makes us smart is something non-physical and not part of the material world, something that can never be studied or understood or built by humans, something like an immaterial soul

I find 2) a bit too silly, but there are a number of disasters that could lead to 1), such as humans blowing up the planet with weapons, or running out of resources first (oops!).

Who knows the timeline, but barring disaster it really just seems like a matter of time to me until our creations are smarter than we are in every way. What those creations do at that point is anybody's guess.
The question is not whether machines will be able to think faster than humans. I'm taking that as a given. The internet does a lot of that right now.

The real issue when you read these articles is whether machines will "make a leap" to have human-type characteristics such that they do things for their own calculated reasons and start "competing" with humans for resources and survival. There are essentially three models, with variations in between:

1. The Terminator or Matrix model: Machines become fully conscious and take over. If they like humans, great (some Star Trek episodes are premised on this -- primitive cultures run by machine-gods); if not, watch out.

2. The Cyborg model: Humans and machines become integrated in some way that allows transference of human-like consciousness to machines. This is more like what Ray Kurzweil talks about.

3. The Wall-E model (at the beginning of that movie): Machines are still slaves. But they can do so much that most humans do very little or nothing for themselves. Some humans live like blobs and don't even move by themselves. And some areas of the earth are uninhabitable due to human activities, but humans don't care too much because they have technology to survive.

Right now we're at number 3 -- including humans becoming techno-addicts and physical blobs. Even getting to 2 is still theoretical -- we can put mechanical parts on humans, but we have not been able to put a human mind into a machine. I'm not convinced that we can, although I'm willing to entertain the thought. I'm not sure we really want to do that either, although I am sure some people do.

Getting to 1 is in the complete leap of technological religious faith category.

akratic
Posts: 681
Joined: Thu Jul 22, 2010 12:18 pm
Location: Boston, MA

Re: AI.. our future or demise?

Post by akratic »

Okay, I'm trying to understand the skepticism about 1 and 2 eventually happening.

As a thought experiment, imagine if we traveled through time and dropped a working NASA spaceship on the humans of the 1800's. How the spaceship works is so far above their heads that it seems alien or like magic, but regardless, the spaceship works and is theirs to fiddle with and try to understand.

A discussion ensues:

Engineer: "I don't understand how this thing works yet, but I'm working hard on understanding it, and I believe that eventually I'll be able to build something like it, a creation of my own making that can also fly us to the moon. Already I've figured out all these interesting things, but I haven't put it all together yet."

Skeptic: "You'll never be able to understand it or ever build something like it. Thinking that you could is tantamount to a complete leap of technological religious faith."

It just seems obvious to me in this thought experiment that the Engineer is right, not the Skeptic. He has a long way to go, but of course he or his descendants will figure it out eventually (unless the world gets blown up or politicians take away the spaceship or something).

Now I'm not trying to be snarky here or make a faulty comparison, but I can't figure out the difference between the AI discussion and the spaceship.

I mean, we already have an example of a bunch of biological junk that, arranged in a very specific way, produces thinking and consciousness. Unless you want to lean on our consciousness coming from something more than the biological junk, something immaterial like a soul, what exactly is going to stop us from understanding how our brains work and recreating them? I mean, even if we do have to resort to building our computers out of biological junk for some reason.

Now I do understand the skeptics that say "not anytime soon" but I just can't understand the thought process behind "not anytime, ever".

jacob
Site Admin
Posts: 15906
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: AI.. our future or demise?

Post by jacob »

@akratic - It may be that comprehending the detailed workings of the human mind is sui generis to the actual operating/using of the human mind. That there's something akin to Gödel's incompleteness theorem at work: specifically, if you require a complete description of the mind, you cannot reduce that description to something that is simpler and thus small enough to contain in that mind.

All technology (the kind of technology that humans build) is based on abstraction and on reducing the degrees of freedom to a manageable number. Our very most complicated technological achievements have only 20-30 such degrees of freedom (variables) in their most basic [non-abstractable] components.

Essentially, the human mind is really not that powerful at all.

What if understanding the human mind cannot be reduced [by abstraction] to parts that require fewer than 500 degrees of freedom? Then we're SOL, because that is so far outside our abilities that it's impossible.

In short, we may not be sufficiently complex to understand our own complexity.
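
A rough back-of-the-envelope illustration of the argument above (my own toy numbers built on jacob's 30-vs-500 figures, not anything from the post): even if each degree of freedom were merely a binary variable, the joint state space grows exponentially, so a 30-variable system is enumerable in principle while a 500-variable one is astronomically out of reach.

```python
# Sketch: how fast the joint state space of a system grows with its degrees
# of freedom, assuming (simplistically) each variable is binary. The 30 and
# 500 cutoffs echo the figures in the post above; the checking speed is an
# invented, optimistic assumption.
def joint_states(degrees_of_freedom: int, values_per_variable: int = 2) -> int:
    """Number of joint configurations of the system."""
    return values_per_variable ** degrees_of_freedom

CHECKS_PER_SECOND = 10**9            # assume one configuration per nanosecond
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_enumerate(dof: int) -> float:
    """Years needed to brute-force every joint configuration."""
    return joint_states(dof) / CHECKS_PER_SECOND / SECONDS_PER_YEAR

print(joint_states(30))              # 1073741824 -- about a second of checking
print(f"{years_to_enumerate(500):.3e} years")   # on the order of 1e134 years
```

The point isn't the exact numbers -- it's that "a few hundred irreducible variables" puts exhaustive analysis permanently beyond brute force, which is what gives the abstraction argument its teeth.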

akratic
Posts: 681
Joined: Thu Jul 22, 2010 12:18 pm
Location: Boston, MA

Re: AI.. our future or demise?

Post by akratic »

@jacob - thanks, that helps.

JamesR
Posts: 947
Joined: Sun Apr 21, 2013 9:08 pm

Re: AI.. our future or demise?

Post by JamesR »

People keep making the mistake of conflating "human-level intelligence" with other human characteristics like consciousness or sentience.

There is no reason for Strong AI to have any sort of consciousness or sentience or other human-type characteristics.

It'll be easy enough to have a Strong AI fake whatever is required for a smooth, human-friendly interface. But fundamentally it'll just be a tool, a really powerful generalized solver & optimizer.
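
To make the "tool, not mind" picture concrete, here's a minimal sketch of a generalized optimizer (my own toy, not anything from the post): a hill climber that minimizes whatever objective it is handed. It has no goals of its own -- the objective is supplied entirely from outside, which is the sense in which such a system can be powerful without being sentient.

```python
import random

def hill_climb(objective, start, step=0.1, iterations=10_000, seed=0):
    """Minimize `objective` by repeatedly trying small random perturbations
    and keeping only the ones that improve the score."""
    rng = random.Random(seed)        # seeded for reproducibility
    best_x, best_val = start, objective(start)
    for _ in range(iterations):
        candidate = best_x + rng.uniform(-step, step)
        value = objective(candidate)
        if value < best_val:         # accept only improvements
            best_x, best_val = candidate, value
    return best_x

# The same solver happily minimizes any function you hand it; it neither
# knows nor cares what the objective "means".
x = hill_climb(lambda x: (x - 3) ** 2, start=0.0)
print(round(x, 2))                   # close to 3.0, the true minimum
```

Real generalized solvers (SAT solvers, gradient descent, evolutionary search) are vastly more sophisticated, but they share this shape: objective in, solution out, no inner life required.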

Dragline
Posts: 4436
Joined: Wed Aug 24, 2011 1:50 am

Re: AI.. our future or demise?

Post by Dragline »

Agreed -- that's basically my point.

And there is no reason to suppose that a smart machine would want to emulate humans in any way, even if it could. Including anthropomorphizing itself, other machines, animals or objects -- that's just a dumb thing we humans do, because it makes for entertaining stories and cartoons, and dogs take advantage of it.

jacob
Site Admin
Posts: 15906
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: AI.. our future or demise?

Post by jacob »

It's my understanding that passing the Turing Test is the holy grail of AI research. We already have superhuman performance in some fields: digit recognition (handwritten zip codes) is performed more accurately by computers, and expert systems are also often superior. Another field that has made a lot of progress is speech-to-text technology. Also see IBM's Watson.
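
For a flavor of what's under the hood of digit recognition, here is a deliberately tiny sketch of the template-matching idea behind early recognizers (entirely my own toy bitmaps, not from the post; real zip-code readers use far more robust methods such as convolutional networks): classify an input bitmap as the digit whose template it differs from in the fewest pixels.

```python
# Toy 5x3 bitmap templates for a few digits ('#' = ink, '.' = blank).
TEMPLATES = {
    "0": ("###", "#.#", "#.#", "#.#", "###"),
    "1": ("..#", "..#", "..#", "..#", "..#"),
    "7": ("###", "..#", "..#", "..#", "..#"),
}

def distance(a, b):
    """Hamming distance: number of differing pixels between two bitmaps."""
    return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def classify(bitmap):
    """Return the digit whose template differs in the fewest pixels."""
    return min(TEMPLATES, key=lambda d: distance(TEMPLATES[d], bitmap))

# A smudged "7" (one pixel flipped in the top row) is still recognized:
smudged_seven = ("##.", "..#", "..#", "..#", "..#")
print(classify(smudged_seven))   # 7
```

Nearest-template matching like this degrades quickly on messy handwriting; the jump to superhuman accuracy came from learning the features from data rather than hand-drawing templates.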

jacob
Site Admin
Posts: 15906
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: AI.. our future or demise?

Post by jacob »

Another reason why the AI goal might not be reached is if it's such a big project that humanity simply cannot afford the spare brainpower to tackle it. E.g., we're currently rich enough to afford educating a great number of string theorists despite their having made little progress in the past 30 years. Essentially, we've taken some of the best minds and spent them on a so-far nonproductive lark. It takes at least 30 years of education before one can even begin to do cutting-edge research on the string theory problem.

What if the number of years required for AI is 50 or 70? Is the AI problem of a scale that a human can [not?] possibly learn enough to advance the edge before dying of old age?

jennypenny
Posts: 6851
Joined: Sun Jul 03, 2011 2:20 pm

Re: AI.. our future or demise?

Post by jennypenny »

I don't mean to derail the conversation, but I was wondering ... don't you worry that the greatest threat isn't AI, but humans assuming that the machines they build are smarter than they really are? I can envision some terrible outcomes if we rely too heavily on computers because we *think* we've conquered the AI problem when we haven't.

jacob
Site Admin
Posts: 15906
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: AI.. our future or demise?

Post by jacob »

@jennypenny - History is already full of outcomes, some terrible, some not, caused by hew-mons assuming that various contraptions, systems, innovations, or decisions are smarter than they ultimately turn out to be.

https://www.youtube.com/watch?v=0ieicflBG_Y

jennypenny
Posts: 6851
Joined: Sun Jul 03, 2011 2:20 pm

Re: AI.. our future or demise?

Post by jennypenny »

@jacob -- I was thinking of stuff like this, but scaled up to a global level. People seem so eager to turn over the mental reins to computers.
