@Ego/7wb5 - My worry is not T vs F but the trend toward diminishing intellectualism and replacing intellect with algorithms, two factors that seem to daisy-chain and reinforce each other. It is likely that I will change my mind [about whether that's bad] when some program passes the Turing test. My concern is that this might happen not only because computers get smarter but also because humans are getting dumber.
There are two reasons why I think the average human is getting dumber.
1) The first is deintellectualization, which promotes beliefs and opinions over facts and reason. We're currently at a stage where it's perfectly acceptable to dismiss a claim that's only slightly more complex than 2+2=4 simply by saying "I don't believe that". We're not at the point where one can "not-believe" 1st grade math, but we're certainly at the point where it's socially acceptable to "not-believe" 8th grade math results.
In many ways we're surrounded by the Dunning-Kruger effect, in that public discourse has degenerated to a level where the people making the dumb statements aren't even aware that they're making them, because all the standards [of reasoning] have been removed. One can, without any shame, say: "I googled it, so I've researched my facts. Maybe you guys should do the same."
2) The second is increasing faith in the algo, or in big data. First and foremost, there hasn't been much of a fundamental breakthrough. What's driving this is mostly the same old techniques, just applied to vastly larger amounts of data and with much more speed than before. This has resulted in some idiot-savant successes: we have computers that are good at chess, 1980s computer games, closed captioning, medical diagnostics, ... However, I think it's important to acknowledge that putting ten idiot savants (or specialists) into the same room or the same computer does NOT make for genius insights, because they lack the ability to draw connections between their fields. Any complex insight is limited by the average person's (or program's) ability to handle said complexity, and that ability is far lower for computers than it is even for humans.
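To make the "same old techniques, just more data and speed" point concrete, here's a toy sketch (the task and numbers are invented for illustration): nearest-neighbor classification is an idea from the 1950s, and nothing about the algorithm changes between the runs below; the only thing that changes is how many stored examples it brute-forces through.

```python
# A minimal sketch of the "same old technique, just more data" point.
# Nearest-neighbor classification dates back to the 1950s; the algorithm
# below is identical for every run. Only the number of stored examples
# (and the brute-force speed needed to search them) differs. The toy task
# is made up for illustration.
import random

def true_label(x):
    """Ground truth for the toy task: ten alternating stripes on [0, 1]."""
    return int(x * 10) % 2

def predict(examples, x):
    """Copy the label of whichever stored example is closest to x."""
    _, label = min(examples, key=lambda ex: abs(ex[0] - x))
    return label

def accuracy(n_examples, trials=1000):
    examples = []
    for _ in range(n_examples):
        x = random.random()
        examples.append((x, true_label(x)))
    hits = 0
    for _ in range(trials):
        x = random.random()
        hits += predict(examples, x) == true_label(x)
    return hits / trials

random.seed(1)
for n in (5, 50, 5000):
    print(f"{n:>5} stored examples -> accuracy {accuracy(n):.2f}")
```

The "smarts" scale with the size of the haystack and the speed of searching it, not with any new idea.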
The main reason autopilots on planes have been in use for decades is that while flying is much more complicated than driving (in the number of buttons and sticks), it's much less complex. There are no obstacles in the air or on a landing strip to bump into. If cars only had to drive on their own from A to B across a large empty parking lot, driverless cars would be even easier than pilotless planes. It's dealing with obstacles that makes the driverless car hard, not from an algorithmic perspective but because there's so much more information to deal with.
I can easily see consumers devolving into some kind of Eloi-like mental stupor (see Facebook conversations for an example) because they let algos take over. This is already happening for infotainment data: people are happily digesting sound bites dispensed by an algo optimized for ad clicks. Just as people are getting so ignorant they aren't capable of realizing how ignorant they are, it might very well be that they're not capable of realizing just how ignorant an algorithm is either. If one's basic mathematical understanding is at the 1st grade level, then something able to do linear regressions and make predictions from them may well seem like magic: "Wow, that's a really smart graph you've got there. In the future, we might have graphs that are so smart we can't even imagine it!" Indeed, given that algos are more like 16th grade math, maybe that's why a public whose non-expert understanding hovers around the 4th-8th grade level thinks it's magic. Of course, this is contingent on algos retaining the high esteem they currently hold in the mind of the public: if it comes out of a piece of technology, it must be true, whereas if it comes out of an expert, it's just an opinion (insofar as one disagrees).