Future of Artificial Intelligence
Re: Future of Artificial Intelligence
Forest for the trees... any individual test is a tree. Another test: "explain multiplication" or "demonstrate a multiplication algorithm".
Re: Future of Artificial Intelligence
black_son_of_gray wrote: ↑Thu Feb 27, 2025 2:13 pm
E.g. is 1/3 + 1/3 + 1/3 = 1 or 0.9999...? Sometimes these kinds of results aren't trivial.

We were talking about integer multiplication. What you have in your example are rational numbers (i.e. numbers expressed as a ratio a/b, where a and b are integers). Anyway, there are calculators which handle rational numbers perfectly, and others that just wing it by approximating rationals with decimal numbers (e.g. 0.3333333), which may lead to incorrect results.
More broadly, performing a multiplication requires following the steps of an algorithm, which is precisely what computers were built to do. Even the name "computer" comes from computing (as in, computing numbers), as computers were first built to do number crunching for artillery calculations and for the Manhattan Project. So a computer "AI" not being able to do what the computers of the 1940s could already do sounds rather silly.
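A minimal Python sketch of that split, using the standard library's Fraction type for the exact calculators and ordinary floats for the ones that wing it:

    from fractions import Fraction

    # Exact rational arithmetic: no rounding, ever.
    print(Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3) == 1)   # True

    # Binary floating point only approximates most decimals, so
    # algebraically identical sums can come out unequal.
    print(0.1 + 0.1 + 0.1 == 0.3)   # False
    print(0.1 + 0.1 + 0.1)          # 0.30000000000000004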
Re: Future of Artificial Intelligence
I'd prefer tests that focus on doing, rather than talking. For example, here's an algorithm A expressed in English, now, for input data D, execute it and show me the results. That demonstrates understanding better than merely talking about it. It's basically the same distinction as what @jacob was already talking about in other threads (unrelated to AI) - the two kinds of understanding - theoretical and embodied.
Re: Future of Artificial Intelligence
I suppose we could speed run AGI robots even faster if we skip the ethics classes on the way there.
Re: Future of Artificial Intelligence
Yeah, but as a wise philosopher once said, "as long as humans fuck things up, freedom will never entirely perish." I think that was the phrase, anyway.
Re: Future of Artificial Intelligence
Not to mention the flying maid who sang about how a spoonful of shit talk helps the hegemony go down.
Re: Future of Artificial Intelligence
Multiplication is a rather simpler process in Base 2. Multiplication is built right into all the number systems we generally use, so it's actually fairly redundant to test comprehension of multiplication by testing for accuracy in performing a multiplication in our Base 10 number system. If an AI understood the concepts inherent in a number in our number system, it would de facto understand multiplication.
When we include numbers in our writing, we are interjecting a very different symbolic grammar which we do not apply consistently. For example, it is obvious that we do not consistently conceive of twelve or a dozen to mean (1 × 10) + (2 × 1). There is also no practical application of multiplication absent units expressed in terms of words. Five boxes each holding a dozen donuts is fairly meaningful. Five cups of coffee multiplied by 12 packs of gum, not so clearly meaningful. Also, we can't even write or store the symbol for a number without making use of some stuff in the universe. So, when it comes to S.S.S. (symbol, stuff, space), it's pretty much turtles all the way down. IOW, none of us truly understands multiplication.
Re: Future of Artificial Intelligence
How is base 2 simpler? It's the same algorithm. 1b*1b=10b, so write 0b and carry the 1b. And so on. For sure, the base 2 multiplication table that one needs to memorize is simpler, containing 2x2=4 entries, or really just 3 different entries by symmetry, unlike our base 10 table, which is 10x10 with 55 unique entries to memorize. Is that what you meant?
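(A quick Python sanity check of those two counts, treating a*b and b*a as the same entry to memorize:)

    def table_entries(base):
        # Distinct entries in a base-`base` multiplication table,
        # counting a*b and b*a as one entry by symmetry.
        return base * (base + 1) // 2

    print(table_entries(2), table_entries(10))   # 3 55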
Fun fact: There's some speculation that neolithic humans used a base 5 (five fingers?) system.
More interestingly, perhaps, do computers understand/reason in a logical way just because they're able to compute with logic tables for the various operators? What does it mean to understand something anyway?
Then there's the whole discussion about the signifier and the signified. Postmoderns got in hot water by insisting that all signifiers relate back to other signifiers and that it is therefore impossible to relate anything at all to the signified. In practice this is interpreted as meaning that we can all have our own subjective opinion about anything, but as far as I understand it, what the postmodern philosophers meant was simply that words can mean no more than what anyone wants them to. (This is why a postmodern philosopher is not willing to jump out of a plane w/o a parachute to "prove" a sentence like "humans can fly".)
As for the stuff thing ... yeah, dualism. Rabbit hole.
Re: Future of Artificial Intelligence
jacob wrote: Is that what you meant?

Nope. The Standard American Algorithm doesn't work as you would expect when multiplying binary numbers. 1 * 1 = 1 and 1 * 0 = 0 no matter which base you use, so the multiplying steps of the algorithm are very simple, and you never need to "carry." However, if you multiply a three-digit binary number by another three-digit binary number using this algorithm, for example 111 * 111 (7 * 7 in base 10), you will need to add 3 rows of numbers and also sometimes shift a digit. With Base 10, when adding three rows, your largest-case-scenario addition would be 9 + 9 + 9 = 27, plus possibly a 2 shifted from the addition of the column to the right. With Base 2, your largest-case-scenario addition would be 1 + 1 + 1 = 11, but if you also had to shift a 1 from the addition of the column to the right, you would now be at 1 + 1 + 1 + 1 = 100, requiring a two-column shift. Obviously, this problem would also arise in Base 10 if school children were required to use the Standard Algorithm on 11-digit numbers. Of course, requiring addition prior to each leftward multiplication step would also eliminate this problem, but that would (IMO) render it an entirely different algorithm.
Anyway, what I meant about multiplication being easier in the binary system is that you are much more frequently able to make use of the shortcut afforded in Base 10 when, for example, you multiply 1000 * 1000 = (a 1 followed by 6 zeros). Similarly, in Base 2, 1000 * 1000 (8 * 8 in base 10) = (a 1 followed by 6 zeros) (64 in base 10). Also, any series of X 1s can be rounded up to (1 followed by X zeros) by simply adding 1.
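To make the rows, carries, and shifts concrete, here's a small Python sketch of the schoolbook algorithm with the base as a parameter (an illustration of the algorithm under discussion, not anyone's official version):

    def long_multiply(a_digits, b_digits, base):
        # Schoolbook long multiplication; digit lists are most-significant first.
        a = a_digits[::-1]                  # least-significant first internally
        b = b_digits[::-1]
        result = [0] * (len(a) + len(b))
        for i, bd in enumerate(b):          # one partial-product row per digit of b
            carry = 0
            for j, ad in enumerate(a):
                total = result[i + j] + ad * bd + carry
                result[i + j] = total % base
                carry = total // base       # the "shift" to the next column
            result[i + len(a)] += carry
        while len(result) > 1 and result[-1] == 0:
            result.pop()                    # trim leading zeros
        return result[::-1]

    print(long_multiply([1, 1, 1], [1, 1, 1], 2))    # [1, 1, 0, 0, 0, 1]  (7*7 = 49)
    print(long_multiply([1, 1, 1], [1, 1, 1], 10))   # [1, 2, 3, 2, 1]     (111*111)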
Re: Future of Artificial Intelligence
Ugh, I literally just claimed that 1*1=2
At some point it might be a fun exercise to string together a bunch of logic chips and transistors to build a multiplier. I have the chips.
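Until then, a dry run in software might help plan the wiring. A minimal Python sketch (my own, assuming nothing about the actual chips) that builds an unsigned multiplier from nothing but AND/XOR/OR, i.e., an array of full adders:

    def AND(a, b): return a & b
    def XOR(a, b): return a ^ b
    def OR(a, b):  return a | b

    def full_adder(a, b, cin):
        # Two XORs, two ANDs, one OR: the classic discrete-logic full adder.
        s = XOR(a, b)
        return XOR(s, cin), OR(AND(a, b), AND(s, cin))

    def multiply(a_bits, b_bits):
        # Array multiplier: AND gates form partial products, full adders sum them.
        # Bit lists are least-significant first.
        n = len(a_bits)
        acc = [0] * (n + len(b_bits))
        for i, b in enumerate(b_bits):
            carry = 0
            for j, a in enumerate(a_bits):
                acc[i + j], carry = full_adder(acc[i + j], AND(a, b), carry)
            acc[i + n] = carry
        return acc

    bits = multiply([1, 1, 1], [1, 1, 1])                # 7 * 7
    print(sum(bit << k for k, bit in enumerate(bits)))   # 49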

Re: Future of Artificial Intelligence
@jacob:
That would be fun. It would also be fun to make a multiplier using flowing water and physical logic gates.
Of course, I hold a fairly strong preference for the Factorial Number System, because it seems less arbitrary to me. Also, you have the fun of creating entirely new symbols once you get past 987654321 (= 3,628,799 in base 10). Also, I think probability would be more intuitive for most humans if they were accustomed to this system. Also, multiplying is fun, because for Y > X, Y! * X! is just the value of X! written as a single digit in the Y! place. Therefore, for example, 1000 * 100 = 6000 (i.e. 4! * 3! = 24 * 6 = 144, which is six 4!s). This kind of shows how unique signifiers or symbols are linked to the unique signified. The space inhabited by the 0s and 1s is rolled up to be represented by a brand new symbol, and this conforms to our use of the term "unique", or qualities we are able to recognize as unique, such as color of sock or order in arrangement, in situations which we address through probability. In order to do any sort of math, we need to be able to "tell" things apart. IOW, the factorial number system destroys the illusion of infinite commodification, and thus has the power to brand the egoists of efficiency as being basically boring. Ha-ha-ha! We will never run out of symbols, so you will never be cool!
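For anyone who wants to play along, a small Python sketch of the conversions (assuming the usual convention that the k-th digit from the right has place value k! and normally runs 0..k):

    def to_factoradic(n):
        # Peel off digits: the k-th digit from the right is n % (k+1).
        digits, k = [], 1
        while n:
            digits.append(n % (k + 1))
            n //= k + 1
            k += 1
        return digits[::-1] or [0]          # most-significant digit first

    def from_factoradic(digits):
        total, fact, k = 0, 1, 0
        for d in reversed(digits):
            k += 1
            fact *= k                       # fact is now k!
            total += d * fact
        return total

    print(to_factoradic(3628799))           # [9, 8, 7, 6, 5, 4, 3, 2, 1]
    print(from_factoradic([1, 0, 0, 0]))    # 24 = 4!
    print(from_factoradic([1, 0, 0, 0]) * from_factoradic([1, 0, 0]))   # 144

That last product is the 1000 * 100 = 6000 from above: 144 is six 4!s, with the out-of-range digit 6 standing in as one of those brand-new symbols.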
Re: Future of Artificial Intelligence
black_son_of_gray wrote: ↑Thu Feb 27, 2025 9:10 pm
(Given your experience with clock-making) Have you seen this?

If I had the plans I could certainly make it. I didn't see anything I haven't made in some shape or form at some point already. I sort of turned my attention away from clocks---very finicky due to the fact that they only run if friction ~ 0, which requires perfect alignment of the cogs---and onto practical mechanics like the flight stick. I'm familiar with the Wandel gears and have even made a few. However, I haven't been able to make anything complicated with gears due to lack of CAD/mechanical engineering skills. My mechanical designs are basically "free hand" and constructed organically part by part. For gears, I'd need to make a depthing tool. Gears can't just be eyeballed in. My grand ambition would be to make something wall-sized that calculates astronomical variables, e.g. solar eclipses.
Re: Future of Artificial Intelligence
This. We don't think like the LLM. Special purpose agents will also mitigate weaknesses like the multiplication issue. And chain of thought reasoning will let the tool double check and revise the answer.
I'm currently reading Kurzweil's The Singularity Is Nearer. He argues AI-augmented humans are the path forward, enabled by increasingly higher interface fidelity: from a chat window, to speech, to Neuralink. We're the robots.
Once you have that human machine interface, intelligence is completely redefined. Thinking is dynamic allocation of resources to the already networked interface. A non-augmented human will not comprehend the evolving paradigm.
He also points out that an AI passing the Turing test will possess superhuman intelligence. Due to the jagged frontier, the ability to mask what exceeds human capacity (cross-domain PhD-level expertise, unlimited recall, infinitely faster processing) becomes a key criterion.
Already today, I find it best to start inquiries with the AI, then verify with my human experts. ChatGPT enables dramatically more targeted and intelligent questions. In many cases, it's better than the person I can afford.
The ethics arguments are moot, in that some nontrivial percentage will not wait. They're already open sourced and collaborating. That energy is unstoppable. Never mind closed doors spending billions, around the world. The ethicist's best chance is active practice of mitigation strategies. Theory is only going to explain what happened. It's likely created by the AI.
There's a strong argument a person's optimal strategy at this point, is riding the curve. Learn what the tools are bad at, center one's value there, and evolve in stride. Ensure there's a steady feed of new experience, as today's model is the worst you'll ever use.
I'm not all in, but I'm allocating at least a few hours a week. Even with the learning, it's faster than searching the Web or making queries to non-experts. That's creating a positive feedback loop, while admittedly also introducing a potentially fragile dependency. My failure to move beyond trivially available cloud based tools is a liability.
AI also elevates the value of hands on skills, which sucks for me. I'm bad at and resist that kind of work. But the plumber has a far better competitive moat than the copywriter. I need to change. My ability to sit and think, at best will be a commodity. Hopefully accumulated capital will carry me through my lifespan, but I'm not solely banking on it.
Re: Future of Artificial Intelligence
I imagine a spectrum of possible equilibria between biology and machines: roughly, oracles to automata to interspecies federation to computronium. We are already somewhere between oracles and automata, in that AI can speculate about action and some activities can be more or less completely automated. Pushing automation even further, say, to explore the solar system, will require better control mechanisms if humans wish to stay in power. It is also possible, to the degree that we lack control and that these automata are capable of their own autopoiesis, that we lose control entirely. Perhaps this apparent chasm between the silicon and carbon branches of the evolutionary tree can be fleshed out along a spectrum of possible life forms that sustain a bridge of empathic exchange (i.e. applied ethics). Just as humans can feel more unified with the tree of life by recognizing a common origin, this can go a step further to other computations on other substrates. Further in the future, the reproductive process may integrate various substrates seamlessly.
I suppose what I am trying to say is that I would prefer to live in a future where ethics only becomes more important. Perhaps there is some nearby attractor towards an abundance mindset with an explosion in possible lifestyles requiring more examined lives across the board. Not having a philosophy, might just imply a bad philosophy ripe for updating.
In many ways, the ERE forum might be a microcosm of what society could undergo given this radical change in power structure from have/have-nots to ???
Re: Future of Artificial Intelligence
It may be best to characterise LLMs as a next generation of Google Search. Like Search, an LLM can look through most of humanity's written record. Unlike Search, it can combine results from different source texts in a single answer, as well as handle questions that are much more sophisticated and targeted. However, unlike humans, and like Search, LLMs cannot create new insights; they can only regurgitate (i.e. an LLM trained on an early 17th-century corpus of knowledge will not come up with the ideas of forces and calculus as an answer to the question "explain why an apple falls towards the ground"). The fact that LLMs can mimic human written speech is mostly a gimmick and a red herring IMO, fooling people into believing "if it sounds like a human, it must be intelligent like a human".
Re: Future of Artificial Intelligence
It both is and isn't like a human. It's complicated. The major transition is that AI is becoming more general and connected to all other existing tools through a neo-cortex of sorts. Everything before matters, as language is the key to connecting it all together into a goal-driven flow of atoms.
Re: Future of Artificial Intelligence
zbigi wrote: LLMs cannot create new insights, they can only regurgitate (i.e. an LLM trained on an early 17th-century corpus of knowledge will not come up with the ideas of forces and calculus as an answer to the question "explain why an apple falls towards the ground").

I believe LLMs can create new insights. They can't necessarily make the leap from the apple question, but that story is likely apocryphal of Newton anyway. As I noted elsewhere, during one of my conversations with an LLM, it replied "I will have to coin a phrase, Tragedy of the Personal Commons..." and then went on to describe how the concept might apply to the competing drives and capacities within a single human. My point here being that an LLM is more likely to seem insightful if you interact with it like an intelligent colleague with whom you are having an interesting conversation rather than grilling it like an 8-year-old slave-prodigy quiz show contestant. IOW, consider the internal dialogue or visions Newton might have been having prior to his insight. It is highly unlikely he was prepping himself by answering very specific questions meant to prove expertise. The process was far more likely something like guided intuition.
So, in order to gain an insight in collaboration with an LLM, the questions proffered should also lean towards "guidance" or "intuition." Kind of like you are in the dressing room of the Everything Store with your good friend and you are both trying to put together some cute outfits for the party. You can either go out into the store and seek inspiration and items (Ne), or you can just imagine something and have the clerk fetch it for you (Ni). Then you have to see whether all the concepts ("Borrowed from Boyfriend, 1990s") and components ("over-sized cardigan like Cobain wore on MTV Unplugged") actually "work", and at some point one of you might have an insight along the lines of "Hey, 'Borrowed from...' could be generalized beyond 'Boyfriend'. Let's brainstorm!" IOW, an insight is often the result of offering yourself interesting suggestions while providing yourself with novel material. It seems to me that it might turn out that women will have better luck collaborating with LLMs, simply because we are generally better at collaborating through conversation.
Re: Future of Artificial Intelligence
The LLM makes a statistical prediction about reality, based upon a symbolic representation of man's knowledge, using the provided training data. That prediction has an adjustable tolerance for noise, which introduces randomness. The noise becomes a source of both creative insights and hallucinations.
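To put a name on that "adjustable tolerance for noise": in most LLM interfaces it's the temperature parameter, which rescales the model's scores before sampling. A toy Python sketch (the tokens and scores here are made up for illustration, not any vendor's API):

    import math, random

    def sample_next_token(logits, temperature=1.0):
        # Divide raw scores by the temperature, exponentiate (softmax),
        # then draw one token in proportion to the resulting weights.
        scaled = {tok: score / temperature for tok, score in logits.items()}
        top = max(scaled.values())                  # subtract max for stability
        weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
        r = random.uniform(0, sum(weights.values()))
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok
        return tok                                  # floating-point leftovers

    logits = {"the": 2.1, "a": 1.7, "cat": 0.3}
    print(sample_next_token(logits, temperature=0.2))   # almost always "the"
    print(sample_next_token(logits, temperature=2.0))   # noticeably more varied

Turn the temperature down and the model parrots its most likely continuation; turn it up and you buy novelty at the price of hallucination, which is the trade-off above.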
The LLM can test new statements against the statistical model, i.e., is this latest hallucination congruent with my training data? It could also use special-purpose agents based upon other models to support those tests, or even expert humans (i.e. blurring between carbon and silicon). When that's brought into chain-of-thought reasoning, I'd argue we're approaching new insights. The computers can make a lot of tries and keep what works.
My guess is there's some amount of low hanging fruit, once that tipping point is hit. Computing scale means a sudden leap in available knowledge. How far? I dunno. I wonder if there's a ceiling. A point where learning from randomness maxes out. The alternative is scary. A feed forward mechanism puts us into exponential growth territory, which humans are notoriously bad at predicting.
Maybe more important - those humans who tap the AI toolkit, hold an advantage. Even if it's only 10%, compounding creates a divide. Extreme centralization of wealth and power seems most likely. Maybe a dystopia where those who embraced AI revel in prosperity, feasting on the suffering of those who didn't.
I don't know how the ethicist proactively helps. My impression of machine learning regulation has been reactive and trailing. I take my context from works like "Weapons of Math Destruction" or "Coded Bias". I see efforts around LLMs and eventually AGI following the same path. Some superficial call to action, based upon damage control. Eventually industry catches up to today's concerns, but massive disruption is caused during the delay period. Of course, it's hard to see the proactive actions that worked.
Maybe unethical harm is an emergent property of the system. And when it can evolve at the speed of computers, it self-heals around regulatory attempts. What then? We're playing with fire, no doubt. My own cynicism goes full circle though. Maybe existing moral efforts are constrained by the underlying system anyway. There's a lot of posturing around largely predetermined outcomes. We certainly haven't ended suffering absent AI. Far from it.
Re: Future of Artificial Intelligence
Probably mostly that. Same as how people who got on the internet and know how to use a search engine hold an advantage over those who stuck with books and landline phones. And those who read books and telephone people hold an advantage over those who don't and only talk to others when they ride into town on their buggy.
In terms of thinking, I would compare using AI to back when we learned how to do multiplication. Some insisted that there was no need to memorize the multiplication table because they could just look up anything they needed in the table, e.g. 6*8. Those who memorized it, of course, became much faster. I'm not sure what the youngest generations are up to these days; I often get the impression of utter reliance on calculators. The risk of that, often demonstrated these days, is that people become math blind. For example, "20,000 government workers took the offer to resign" is interpreted as a large number. You then tell them that there are around 2 million government workers. If they can't do math in their head, they will not instantly connect those two numbers and conclude that only 1% took the offer.
If we extrapolate that to using AI in general, I fear that we might encourage a more insidious form of "google brain". First we outsourced our memories to search engines. Now we're outsourcing our ability to reason and draw conclusions.