chatGPT

zbigi
Posts: 978
Joined: Fri Oct 30, 2020 2:04 pm

Re: chatGPT

Post by zbigi »

ChatGPT is just a village gossiper; it can only repeat what others have said (and the more people have said it, and the more confidently, the more it believes it). There's no intelligence there. I highly doubt you'd like spending time talking to such a "person".
It does well on college essays because students are mostly expected to parrot information.

jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: chatGPT

Post by jacob »

I don't see how the same argument can't be made about 90%+ of all humans, who also just repeat what others have said with a conviction proportional to who and how many have said it. When I declared that the Turing Test had been passed as far as I'm concerned, Sydney behaved like a somewhat more eloquent version of a normal human (Kegan3, ESFJ). From my perspective that's not very intelligent (it's the very definition of average intelligence), but that may merely be due to being trained on the average human. What if it trains under a different metric?

zbigi
Posts: 978
Joined: Fri Oct 30, 2020 2:04 pm

Re: chatGPT

Post by zbigi »

With humans, you at least have a chance of getting a novel thought, a surprising connection, etc. I don't think algorithms like ChatGPT will ever be capable of that. If they try to extend the algorithms to be more like free thinkers, then I suspect most of that free thought would be absolute rubbish.

For a simple example, I don't think any good and novel physics theories could be created by such an approach (just copy-pasting fragments of existing publications, without having any concept of the actual physical world). This extends to pretty much all other areas where existing texts only imperfectly reflect some underlying reality.

jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: chatGPT

Post by jacob »

zbigi wrote:
Tue Mar 14, 2023 4:00 pm
For a simple example, I don't think any good and novel physics theories could be created by such an approach (just copy-pasting fragments of existing publications, without having any concept of the actual physical world). This extends to pretty much all other areas where existing texts only imperfectly reflect some underlying reality.
Godel's incompleteness theorem aside: for some mathematics, it has been possible to brute-force theorem-proof demonstrations with computers for many decades. I asked a mathematician to what degree mathematicians are just looking through the search space and writing up whatever comes out. (This is to a large degree how the standard model is used in physics, especially in phenomenological research.) The answer was that computer proofs are often unsatisfactory because they lack elegance. A computer might give a 20-page demonstration that something is true in principle, but if the mathematician suspects it can be done in 3 pages, he is not happy about it. I don't recall asking how he felt about Wiles's proof of Fermat's Last Theorem, which is over 100 pages long but done by a human.

I actually think physics is a prime candidate for a breakthrough. Physics is basically nothing but trying to create a mathematical model of existing data as measured by some instrument in an experiment. It is literally about decoding the Rosetta stone of all experimental data and providing a dictionary and grammar for it. In the test scores, physics was somewhere in the middle. GPT4 did best in "complicated and factual" subject matter like biology and history. It had middling performance in "complex and factual" fields like physics or economics that require some reality-based understanding. It did the worst in English (chaotic and nonfactual), likely mainly for the reason you mention: existing texts are incomplete.

zbigi
Posts: 978
Joined: Fri Oct 30, 2020 2:04 pm

Re: chatGPT

Post by zbigi »

jacob wrote:
Tue Mar 14, 2023 4:21 pm

I actually think physics is a prime candidate for a breakthrough. Physics is basically nothing but trying to create a mathematical model of existing data as measured by some instrument in an experiment. It is literally about decoding the Rosetta stone of all experimental data and providing a dictionary and grammar for that.
The "dictionary and grammar" part is key. I can see how ML methods could refine existing theories, but I can't imagine them coming up with a paradigm shift. I can also see them being able to predict some phenomena better than existing physics, but in a completely black-box way, which doesn't further our understanding (similarly to those huge and ugly autogenerated proofs). As someone noted, if we somehow had computers capable of ML back in the 16th century (perhaps sent to us by nice aliens), people might just have trained ML models on gravity observations, been satisfied with the predictive power of the models, and never created physics or calculus at all :)
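The 16th-century thought experiment above can be sketched in a few lines. A black-box learner (here a trivial nearest-neighbour interpolator; the data, g = 9.8 m/s^2, and the model choice are all illustrative assumptions) predicts fall distances well inside the observed range, yet nothing in the fitted "model" contains or reveals d = (1/2)gt^2:

```python
# A black-box "model" of falling bodies: memorize (time, distance)
# observations and interpolate. It predicts well inside the observed
# range yet encodes no law of motion. All numbers are illustrative
# assumptions (g = 9.8 m/s^2, noiseless simulated data).

def make_observations(n=50, t_max=5.0):
    """Simulated Galileo-style drop data: distance fallen after t seconds."""
    g = 9.8
    times = [t_max * i / (n - 1) for i in range(n)]
    return [(t, 0.5 * g * t * t) for t in times]

def knn_predict(data, t, k=2):
    """Predict distance at time t by averaging the k nearest observations."""
    nearest = sorted(data, key=lambda pair: abs(pair[0] - t))[:k]
    return sum(d for _, d in nearest) / k

data = make_observations()
pred = knn_predict(data, 2.0)       # interpolates well...
exact = 0.5 * 9.8 * 2.0 ** 2        # ...but 0.5*g*t^2 appears nowhere in the model
print(f"predicted {pred:.1f} m vs exact {exact:.1f} m")
```

The model's "knowledge" is just the memorized table; extrapolating beyond the observed t = 5 s degrades badly, which is roughly the difference between predictive power and a theory.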

Smashter
Posts: 541
Joined: Sat Nov 12, 2016 8:05 am
Location: Midwest USA

Re: chatGPT

Post by Smashter »

jacob wrote:
Tue Mar 14, 2023 3:03 pm
Now better than the average college student (as measured by passing examinations) in most subject matter fields.
Wow, that's an understatement! I wonder if less-skilled, early career lawyers are getting worried about their jobs yet.


Ego
Posts: 6359
Joined: Wed Nov 23, 2011 12:42 am

Re: chatGPT

Post by Ego »

I remember back in the early 80s how my mind was blown when I first learned that I could instantaneously recalculate whole tables of figures using Lotus 1-2-3. Some of the features Google released today using generative AI in Google Workspace - Gmail, Docs, Sheets, Slides, Images - rival that.

https://twitter.com/benparr/status/1635684322261729282

On the other hand, it seems that AI is having trouble with theory of mind.

https://papers.ssrn.com/sol3/papers.cfm ... id=4377371

jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: chatGPT

Post by jacob »

Ego wrote:
Tue Mar 14, 2023 8:55 pm
On the other hand, it seems that AI is having trouble with theory of mind.

https://papers.ssrn.com/sol3/papers.cfm ... id=4377371
paper abstract wrote: Specifically, we have identified poor planning abilities and difficulty in recognising semantic absurdities and understanding others’ intentions and mental states. This inconsistent profile highlights how LLMs’ emergent abilities do not yet mimic human cognitive functioning. In addition, our results indicate that standardised neuropsychological batteries developed to assess human cognitive functions may be suitable for challenging ChatGPT performance.
But the lack of planning, the inability to distinguish sarcasm (Poe's Law), and the assumption that other people's minds are similar to one's own also seem to hold for many humans.

Personality-testing AIs does appeal to me, though, like how Sydney is a histrionic Kegan3 ESFJ, likely as a result of being trained on the average internet human. Since people are already making conservative chatGPTs (and requesting corresponding woke ones), the Star Trek version of going into the holodeck and requesting a conversation with Newton, Einstein, and Hawking would be an interesting accomplishment, even if it has to start in a more general sense.

Kinda reminds me of https://en.wikipedia.org/wiki/Flowers_for_Algernon

Even more interesting is finding cognitive functions that aren't human, as a kind of extra-human intelligence. For example, most humans habitually think in concrete terms, with a few learning the ability to think more rationally and abstractly, albeit only in specific contexts. Systems theory and cross-paradigmatic thinking are very rare in humans. The latter may be normal in AIs given their construction, whereas concrete thinking may be hard simply because it's too many levels away from where the AI mind actually is.

Ego
Posts: 6359
Joined: Wed Nov 23, 2011 12:42 am

Re: chatGPT

Post by Ego »

jacob wrote:
Wed Mar 15, 2023 6:59 am
But the lack of planning, the inability to distinguish sarcasm (Poe's Law), and the assumption that other people's minds are similar to one's own also seem to hold for many humans.

Even more interesting is finding cognitive functions that aren't human as a kind of extra-human intelligence.
I suspect some of the most interesting queries posed to AI in the future will revolve around how it resolved these shortcomings and improved at "being human". Trial and error, as most humans do, but at fantastic scale, or some other method. We may learn a thing or two about ourselves.

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: chatGPT

Post by 7Wannabe5 »

jacob wrote: For example, most humans habitually think in concrete terms, with a few learning the ability to think more rationally and abstractly, albeit only in specific contexts.
My experience with tutoring math informs me that only around 5% of humans are capable of solving a very practical linear-programming/optimization problem such as this (roughly corresponding to Wheaton Level 5/Algebra 2):
You need to buy some filing cabinets. You know that Cabinet X costs $10 per unit, requires six square feet of floor space, and holds eight cubic feet of files. Cabinet Y costs $20 per unit, requires eight square feet of floor space, and holds twelve cubic feet of files. You have been given $140 for this purchase, though you don't have to spend that much. The office has room for no more than 72 square feet of cabinets. How many of which model should you buy, in order to maximize storage volume?
Obviously, ChatGPT could solve this problem, but would not necessarily be "worldly" enough to conceive of the problem in practical terms. The problem may be made a level or two more complex by, for example, considering "files" as something that flows through the Cabinet system like rain into a reservoir, and/or by weighting value of storage capacity vs. cost vs. other factors such as durability or aesthetics. This would be, roughly, systems level consideration of the problem.
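For what it's worth, the cabinet problem above is small enough to brute-force rather than set up as a formal LP. A sketch, with all constants taken straight from the problem statement:

```python
# Brute-force search over the filing-cabinet problem:
# X: $10, 6 sq ft of floor, 8 cu ft of storage.
# Y: $20, 8 sq ft of floor, 12 cu ft of storage.
# Budget $140, floor limit 72 sq ft; maximize storage volume.

def best_purchase(budget=140, floor=72):
    best = (0, 0, 0)  # (count of X, count of Y, storage volume)
    for x in range(budget // 10 + 1):
        for y in range(budget // 20 + 1):
            if 10 * x + 20 * y <= budget and 6 * x + 8 * y <= floor:
                volume = 8 * x + 12 * y
                if volume > best[2]:
                    best = (x, y, volume)
    return best

x, y, volume = best_purchase()
print(f"Buy {x} of X and {y} of Y for {volume} cubic feet")
# -> Buy 8 of X and 3 of Y for 100 cubic feet
```

The optimum (8 of X, 3 of Y, 100 cubic feet) uses the entire budget and the entire floor allowance, which is the corner of the feasible region students are meant to find by graphing the constraints.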

The first Principle or Practice of permaculture (systems-level agriculture) is "Observe and Interact." If ChatGPT were given access to a garden camera and other relevant sensors, would it be able to spontaneously make connections between factors such as temperature, humidity, and mold growth? What set of Values or Ethics would guide its overall decision making? If it were "programmed" to "optimize" for the ethics of permaculture, then they would be People Care/Earth Care/Fair Share. If it is "programmed" to "optimize" the system in alignment with the ethics of Microsoft (owner of 49% of OpenAI), then they would be ???
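At its simplest, the "Observe and Interact" question reduces to whether correlations can be pulled out of a sensor log. A minimal sketch, where all the readings are made-up numbers (not real garden data):

```python
import statistics

# Hypothetical garden-sensor log: (temperature C, relative humidity %, mold index).
# The mold column is constructed to track humidity, as it often does in practice.
readings = [
    (18, 55, 0.10), (20, 60, 0.15), (22, 70, 0.30),
    (24, 80, 0.55), (23, 85, 0.70), (21, 90, 0.85),
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

temps, humidity, mold = zip(*readings)
print(f"humidity-mold correlation: {pearson(humidity, mold):.2f}")
```

Spontaneously deciding *which* variables to correlate, and recognizing that humidity drives mold rather than the reverse, is of course the hard systems-level part.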

chicago81
Posts: 307
Joined: Sat Feb 04, 2012 3:24 pm
Location: Chicago, IL

Re: chatGPT

Post by chicago81 »

If you write code, ChatGPT is excellent at writing small to medium fragments of code to solve specific problems. You have to be very clear in specifying what you want it to do... but you can have it revise a code snippet in any way that you describe. It is quite useful for that.

ZAFCorrection
Posts: 357
Joined: Mon Aug 14, 2017 3:49 pm

Re: chatGPT

Post by ZAFCorrection »

I do not consent to having my writing used in any model training sets without a prior agreement with me.

xmj
Posts: 120
Joined: Tue Apr 14, 2020 6:26 am

Re: chatGPT

Post by xmj »

zbigi wrote:
Tue Mar 14, 2023 3:14 pm
ChatGPT is just a village gossiper; it can only repeat what others have said (and the more people have said it, and the more confidently, the more it believes it). There's no intelligence there. I highly doubt you'd like spending time talking to such a "person".
It does well on college essays because students are mostly expected to parrot information.
With GPT-4 this is factually wrong. I've used it to fuse two approaches to investing (Ed Thorp style Kelly sizing and Jim Garland style cash-flow focus for endowments).

It is familiar with the works *and* can create a synthesis.

It can also give you a complete and correct (!!) investment allocation based on the fused approach. I've asked it to Kelly size the top five stocks of three different industries (tobacco, healthcare & food companies -- steady dividend players, reliable compounders) -- and that came out correct too.
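For anyone who wants to sanity-check this kind of output by hand: under the usual simplifying assumptions (independent assets, continuous returns), the Kelly fraction for each stock is its excess return over its variance, f = (mu - r) / sigma^2. A sketch with hypothetical tickers and statistics — none of these numbers come from the GPT-4 log:

```python
# Simplified Kelly sizing: per-asset fraction f = (mu - r) / sigma^2,
# assuming independent assets (correlations ignored). All tickers and
# statistics below are hypothetical illustrations.

RISK_FREE = 0.04

# ticker: (expected annual return, annual volatility)
stats = {
    "TOBACCO_A": (0.09, 0.18),
    "HEALTH_B":  (0.10, 0.22),
    "FOOD_C":    (0.08, 0.15),
}

def kelly_fractions(stats, risk_free=RISK_FREE, max_leverage=1.0):
    """Raw Kelly fractions, scaled down if they exceed the leverage cap."""
    raw = {t: (mu - risk_free) / sigma ** 2 for t, (mu, sigma) in stats.items()}
    total = sum(raw.values())
    if total > max_leverage:  # scale to an unlevered, long-only portfolio
        raw = {t: f * max_leverage / total for t, f in raw.items()}
    return raw

for ticker, frac in kelly_fractions(stats).items():
    print(f"{ticker}: {frac:.1%}")
```

Scaling the raw fractions down to a total leverage of 1 gives a conventional long-only allocation; full Kelly with correlated assets needs the covariance matrix instead.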

On ERE: it is *also* familiar with this forum's work on Wheaton Levels and, when asked, can recommend books to read for a given WL. The first time it answers, it might add books appropriate to a lower WL. Tell it that, and ask it to give only recommendations that are useful for someone who has mastered the lower WLs.

Yes, this works. It's *that* good.

Here's the log, for anyone who cares:

https://pastebin.com/b4hB1sqP

loutfard
Posts: 326
Joined: Fri Jan 13, 2023 6:14 pm

Re: chatGPT

Post by loutfard »

Interesting. I asked it for tax due in a specific scenario. Notoriously complicated for Belgium. It got the very complicated parameters correct, and calculated correct percentages for every line item, then messed up a simple primary school addition at the end. What a pity for such an algorithm...

jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: chatGPT

Post by jacob »

loutfard wrote:
Sun Mar 19, 2023 3:44 pm
Interesting. I asked it for tax due in a specific scenario. Notoriously complicated for Belgium. It got the very complicated parameters correct, and calculated correct percentages for every line item, then messed up a simple primary school addition at the end. What a pity for such an algorithm...
This concerns me somewhat, especially since chatGPT is increasingly being used to find solutions to problems. Instead of "doing your own research" or asking a human expert, we'll just ask chatGPT. I was initially impressed with its ability to beat college-level exams. However, getting 80% of the answers right while getting 20% stupendously wrong is perhaps less impressive (and more dangerous) than a hewmon getting 50% of the answers right and 50% "mostly right"/"at least not wrong".

The question is whether chatGPT knows when it is wrong or whether it's 100% confident all the way through.

xmj
Posts: 120
Joined: Tue Apr 14, 2020 6:26 am

Re: chatGPT

Post by xmj »

jacob wrote:
Mon Mar 20, 2023 6:42 am
The question is whether chatGPT knows when it is wrong or whether it's 100% confident all the way through.
Call it out, state that you know the answer is wrong, ask it to do it step by step / show the math, and you'll get the correct answer *and* the reasoning behind it.

You can also regenerate the answer until you like what you're seeing...

jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: chatGPT

Post by jacob »

xmj wrote:
Mon Mar 20, 2023 9:46 am
Call it out, state that you know the answer is wrong, ask it to do it step by step / show the math, and you'll get the correct answer *and* the reasoning behind it.
This requires [the human] knowing that the initial answer is wrong though. Uncritical humans might take the first answer at face value.
xmj wrote:
Mon Mar 20, 2023 9:46 am
You can also regenerate the answer until you like what you're seeing...
This creates confirmation bias and opens up the can of worms that is data dredging.

Using chatGPT may get wiser over time but currently there's a lot of "chatGPT says so it must be true" which is only slightly more sophisticated and perhaps somewhat more nefarious than "I saw this in my facebook newsfeed so it must be true". IOW, I fear that this may turn out to simply lever the users: The smart get smarter, the dumb get dumber.

xmj
Posts: 120
Joined: Tue Apr 14, 2020 6:26 am

Re: chatGPT

Post by xmj »

jacob wrote:
Mon Mar 20, 2023 9:58 am
This requires [the human] knowing that the initial answer is wrong though. Uncritical humans might take the first answer at face value.
[...]
IOW, I fear that this may turn out to simply lever the users: The smart get smarter, the dumb get dumber.
Ah absolutely - and I think that's been the case for most technological inventions over the last centuries, many of them work in your favor by obfuscating details you would've had to think through for yourself before.

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: chatGPT

Post by 7Wannabe5 »

@xmj:

I have read 5 of the 7 books chatGPT recommended after you prodded it for a higher level. I haven't yet searched to determine whether all of these books have been previously mentioned on the forum. What strikes me as highly interesting is that these 7 books tend towards either widening or even challenging the basis of ERE along different pathways. IOW, these recommendations are in alignment with chatGPT making the assumption that the reader is functioning at or near Stage 5 on the Kegan scale, which might lead one to the notion that chatGPT has also reached that level of development.

IOW, this is the paradigm shift. Time to just kick back and wait for the UBI deposits to come rolling in AND/OR Quo Vadis?!?!

jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: chatGPT

Post by jacob »

I just read https://pastebin.com/b4hB1sqP and it still passes the Turing test in terms of the average human although it's not "great".

If this had been written by a financial journalist, who had read up on the wikipedias (I recognize some of the almost verbatim blurbs in the response from the two respective wikipedia entries on me and the ERE Wheaton levels), I'd be impressed but request a few corrections/fact checks before publishing.

If it was part of a student report on ERE, I'd give them a B- or C+ for attributing certain things to ERE that are more reflective of the general state of the FIRE movement than of ERE. (This is more forgivable in a journalist, who likely spent far less time trying to understand what they were writing about.)

If it had been written by someone on the Boglehead or MMM forum, I'd think they're ready to level up to the ERE forum.

If it had been written by someone on the ERE forum, we'd need to have a long and boring talk, because I think there are a few fundamental misunderstandings in the making.

IOW, if there's a range of standards calibrated on humans with different levels of insight, chatGPT meets some of them, but I'm not ready for chatGPT to go on a speaking tour about ERE as my proxy.
