chatGPT

Your favorite books and links
7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: chatGPT

Post by 7Wannabe5 »

@jacob:

I agree that the pastebin essay is not that great up to the point where xmj prompts chatGPT to take it to a higher level with book recommendations. BUT, those recommendations are, IMO, either quite insightful or idiot savant. For instance, where/how does it come up with the Level Green/Yellow edging towards anti-capitalism, "Debt" by Graeber, and how very neat that it recommends Carnegie's classic on acquiring social capital at this level. I haven't read the book by Laloux on reinventing organizations, but the foreword is by Ken Wilber, and it is clearly based on the spiral dynamics model, which indicates that chatGPT "knows" this model may be of interest at higher ERE Wheaton Levels.

xmj
Posts: 120
Joined: Tue Apr 14, 2020 6:26 am

Re: chatGPT

Post by xmj »

Ah, the chat went on after I posted; it gets better.

https://pastebin.com/VrxYbzRK

Until line 27 I try to bait ChatGPT into telling me about WL7 **in the context** of Paul Millerd (@p_millerd / Boundless)'s future-of-work approach. Its answers are pretty solid.

Lines 31-83 I get the Machine to tell me about WallstreetPlayboys (blog shut down / rebranded BowtiedBull, mentioned on this forum before).

Lines 83-104 I get the machine to find commonalities between Millerd / ERE / WSP.

Lines 107-138 I check if EpsilonTheory has anything useful to say about that Millerd/ERE/WSP approach.

Lines 138-186 I ask for book recs, repeatedly; I find them boring and ask again... *twice*. Third time's the charm here. It gets interesting afterwards, lines 186-219.


Now the second part of the dialogue, which I didn't post before due to its limited relevance to ERE **directly**, shows the interesting bit you can do with ChatGPT:

Quickly find similarities/differences between two OR MORE approaches.

In the same chat I'm currently restating the mathematics of financial independence in terms of ruin theory (or surplus theory) from insurance mathematics. And *that* is hilarious, because it shows that, while ChatGPT knows pretty much everything [up to its 2021 cutoff... for now], it doesn't get everything right on the first try.

But you can prompt it to iterate, and as mentioned before, ask the same question multiple times, or rephrase your own prompt.

Kinda like a natural language REPL.
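For what it's worth, the ruin-theory framing of FI can be sketched in a few lines of Python. This is my own illustrative toy (the function name, return/volatility parameters, and Gaussian-return assumption are mine, not the actual formulation from the chat): the portfolio plays the role of the insurer's surplus, annual spending plays the role of claims, and "ruin" means running out of money.

```python
import random

def ruin_probability(wealth, spend, mu=0.05, sigma=0.12,
                     years=60, trials=20_000, seed=1):
    """Monte Carlo estimate of the probability that a portfolio is
    exhausted within `years`, with Gaussian annual real returns.
    This mirrors the surplus process of classical ruin theory:
    U_{t+1} = U_t * (1 + r_t) - spend, with ruin when U_t <= 0."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        w = wealth
        for _ in range(years):
            w = w * (1 + rng.gauss(mu, sigma)) - spend
            if w <= 0:
                ruined += 1
                break
    return ruined / trials

# A 3% withdrawal rate (33x expenses) should ruin far less often
# than an 8.3% one (12x expenses):
p_low = ruin_probability(33, 1)
p_high = ruin_probability(12, 1)
```

The point of the restatement is that "safe withdrawal rate" questions become standard ruin-probability questions, for which insurance mathematics has a century of machinery.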

User avatar
mountainFrugal
Posts: 1125
Joined: Fri May 07, 2021 2:26 pm

Re: chatGPT

Post by mountainFrugal »

Interesting experiments xmj! I enjoyed reading your conversations.

Pre-print estimating the economic impact of chatGPT by OpenAI researchers: https://arxiv.org/abs/2303.10130

They may be a bit biased, because the larger the disruption, the more valuable the company... however, these estimates seem reasonable. GPT-4 has passing grades on many college entrance exams, grad-level entry exams, and also the Uniform Bar Exam (Figure 1). They try to quantify exposure* for various jobs: the lowest is dishwasher, with higher exposure for jobs like writing and translating.
Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted. The influence spans all wage levels, with higher-income jobs potentially facing greater exposure.
Our findings indicate that the importance of science and critical thinking skills are strongly negatively associated with exposure, suggesting that occupations requiring these skills are less likely to be impacted by current GPTs. Conversely, programming and writing skills show a strong positive association with exposure, implying that occupations involving these skills are more susceptible to being influenced by GPTs (see Table 5 for detailed results).
Occupations with no labeled exposed tasks
Agricultural Equipment Operators
Athletes and Sports Competitors
Automotive Glass Installers and Repairers
Bus and Truck Mechanics and Diesel Engine Specialists
Cement Masons and Concrete Finishers
Cooks, Short Order
...
Motorcycle Mechanics
Paving, Surfacing, and Tamping Equipment Operators
...
Stonemasons
Tapers
Tire Repairers and Changers
Wellhead Pumpers
Add: *Exposure does not mean replacement, but rather that a significant part of the job is impacted.

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: chatGPT

Post by 7Wannabe5 »

@xmj:

Very interesting. As you noted, the third time was the charm; I think because you prompted with the word "exotic." By morphing ERE with WSP and Millerd, you were pushing the recommendations more towards "productivity" and "conventional success." So, the first two lists of books for ERE/WSP/Millerd were more "boring" than the list generated when you simply asked for higher Wheaton Level suggestions based on ERE. ChatGPT uses the words "advanced" and "alternative" to describe the first more interesting list, and the words "complement", "alternative", and "holistic" to describe the second more interesting list. Both of the more interesting lists contained titles on the "challenging basic assumptions" edge of being complementary to the original theme (temperament) of either ERE (INTj) or ERE/WSP/Millerd (eNTJ).

@mountainFrugal:

I also believe it is going to be highly disruptive. Of course, I may be a bit biased due to the fact that it has clearly already taken over one of my former roles in employment, which was "book recommender." :lol: I am currently reading "Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage of Human Consciousness" by Laloux, from the first more interesting list, and it is quite interesting and extremely relevant to discussions related to ERE2.

User avatar
Viktor K
Posts: 364
Joined: Sat Jul 30, 2016 9:45 pm

Re: chatGPT

Post by Viktor K »

It is now wired up to some plugins, like Instacart and Wolfram (for computations?). The subreddit seems to think it is a pretty big deal, but I was wondering what people here thought about this new development.

One feature was planning a meal and having it added to an Instacart order ready for checkout. That doesn't seem too significant to me; I am not sure what sort of dynamics it would change.

Wolfram makes it good at math, I believe.

xmj
Posts: 120
Joined: Tue Apr 14, 2020 6:26 am

Re: chatGPT

Post by xmj »

Big thing.

In many parts of Europe, privately held companies' balance sheets are (more or less) public, published in the commercial registry (or some such agency).

Write a plugin for each of those registries, prime GPT-4 on Schilit and Perler's book "Financial Shenanigans", and tell it to take an in-depth look at one of your competitors' balance sheets.

Watch doing business change by a truckload by virtue of all the public scrutiny.

etc etc

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: chatGPT

Post by 7Wannabe5 »

@Viktor K:

The interesting thing is that chatGPT actually talks to Wolfram using natural language, like Chatty Cathy asking her nerd brother for information:
What’s happening “under the hood” with ChatGPT and the Wolfram plugin? Remember that the core of ChatGPT is a “large language model” (LLM) that’s trained from the web, etc. to generate a “reasonable continuation” from any text it’s given. But as a final part of its training ChatGPT is also taught how to “hold conversations”, and when to “ask something to someone else”—where that “someone” might be a human, or, for that matter, a plugin. And in particular, it’s been taught when to reach out to the Wolfram plugin...

...There’s more to be developed here, but already one sometimes sees ChatGPT go back and forth multiple times. It might be rewriting its Wolfram|Alpha query (say simplifying it by taking out irrelevant parts), or it might be deciding to switch between Wolfram|Alpha and Wolfram Language, or it might be rewriting its Wolfram Language code. Telling it how to do these things is a matter for the initial “plugin prompt”.

And writing this prompt is a strange activity—perhaps our first serious experience of trying to “communicate with an alien intelligence”. Of course it helps that the “alien intelligence” has been trained with a vast corpus of human-written text. So, for example, it knows English (a bit like all those corny science fiction aliens…). And we can tell it things like “If the user input is in a language other than English, translate to English and send an appropriate query to Wolfram|Alpha, then provide your response in the language of the original input.”

Sometimes we’ve found we have to be quite insistent (note the all caps): “When writing Wolfram Language code, NEVER use snake case for variable names; ALWAYS use camel case for variable names.” And even with that insistence, ChatGPT will still sometimes do the wrong thing. The whole process of “prompt engineering” feels a bit like animal wrangling: you’re trying to get ChatGPT to do what you want, but it’s hard to know just what it will take to achieve that.
https://writings.stephenwolfram.com/202 ... erpowers/

@xmj:

FiscalNote is another new plug-in. It "Provides and enables access to select market-leading, real-time data sets for legal, political, and regulatory data and information."

ducknald_don
Posts: 322
Joined: Thu Dec 17, 2020 12:31 pm
Location: Oxford, UK

Re: chatGPT

Post by ducknald_don »

@xmj I do wonder if the more valuable parts will end up being locked away behind a paywall. Financial analysis is an obvious candidate for that.

jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: chatGPT

Post by jacob »

ducknald_don wrote:
Sun Mar 26, 2023 10:51 am
@xmj I do wonder if the more valuable parts will end up being locked away behind a paywall. Financial analysis is an obvious candidate for that.
The effect may be more subtle. "If it's free, you're part of the product" is the meta/second-order strategy that will determine whether value gets added under limited-resource constraints. IOW, "life" (or profit) finds a way. For example, when information became free, many information-creators turned [their publicly exploitable services] toward generating sales copy for more restricted private services.

As long as "making a living" is a win-lose game, technology is just shifting resources.

xmj
Posts: 120
Joined: Tue Apr 14, 2020 6:26 am

Re: chatGPT

Post by xmj »

GPT4 is already locked behind a paywall.

User avatar
mountainFrugal
Posts: 1125
Joined: Fri May 07, 2021 2:26 pm

Re: chatGPT

Post by mountainFrugal »

Tyler Cowen interviews a GPT version of Jonathan Swift. I think this demonstrates the range of GPT's ability when someone is asking detailed, smart questions. Tyler pushes it right up to the point of non-answers. The podcast is recorded with an Irish English accent for the JS character. What is interesting is that it can train on Jonathan Swift's writing, but also on the various scholarly articles written after his death. The responses subtly shift between present-day knowledge and what he might have known back then. Worth a listen/read.

https://conversationswithtyler.com/epis ... gpt-swift/

User avatar
Lemur
Posts: 1612
Joined: Sun Jun 12, 2016 1:40 am
Location: USA

Re: chatGPT

Post by Lemur »

I definitely like Chat GPT. Surely not perfect but fun to play with.
Today's example:

How much carbon dioxide does one tree sequester?
The amount of carbon dioxide (CO2) that a tree sequesters varies depending on a number of factors, including the species of the tree, its age, its size, and the growing conditions in its location. However, on average, a single tree can sequester around 48 pounds (21.77 kg) of CO2 per year.

It's worth noting that while trees can play an important role in absorbing carbon dioxide from the atmosphere, they are just one part of a larger solution to address climate change. To effectively reduce greenhouse gas emissions and mitigate the impacts of climate change, a multi-faceted approach is needed that includes reducing emissions from human activities, transitioning to renewable energy sources, promoting sustainable land use practices, and developing new technologies to capture and store carbon dioxide.

Furthermore, it's important to remember that the carbon sequestration potential of a tree decreases as it reaches maturity, so it's important to focus on planting new trees and promoting healthy and diverse ecosystems to maximize the benefits of reforestation and afforestation efforts.
How much CO2 in pounds is released into the atmosphere every year?
The amount of carbon dioxide (CO2) that is released into the atmosphere every year is difficult to quantify precisely, as it depends on a range of factors including global energy consumption, transportation, industrial processes, and land use changes. However, based on data from the International Energy Agency, it is estimated that approximately 33.1 billion metric tons (36.4 billion U.S. tons) of CO2 were emitted into the atmosphere in 2020 alone.

To convert this into pounds, we can use the fact that one metric ton is equal to 2,204.62 pounds. Therefore, 33.1 billion metric tons is equal to approximately 73 trillion pounds of CO2 emitted into the atmosphere in 2020. It's worth noting that this is just an estimate and the actual amount of CO2 emissions can vary from year to year.
What is 73 trillion divided by 48?
Dividing 73 trillion pounds of CO2 emitted in 2020 by the average amount of CO2 sequestered by a single tree per year (48 pounds) gives us an estimate of how many trees would need to be planted to offset the emissions:

73,000,000,000,000 / 48 = 1,520,833,333,333.33

Rounding to the nearest whole number, we get:

1,520,833,333,333 trees

Of course, this is just a rough estimate, and the actual number of trees needed to offset carbon emissions would depend on a range of factors, including the specific species of tree, the geographic location, and the age and health of the trees. Additionally, reducing emissions from human activities and transitioning to renewable energy sources would also be important components of any strategy to address climate change.
So I guess that is where this idea came from...
https://en.wikipedia.org/wiki/Trillion_Tree_Campaign

Someone had done the math previously and figured that would be a nice target.
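The arithmetic in the quoted chat is easy to reproduce; here is a quick sanity check (the figures are taken from the chat above, not independently verified):

```python
# Reproducing the arithmetic from the quoted ChatGPT conversation.
emissions_tons = 33.1e9              # metric tons of CO2 (IEA figure for 2020)
lbs_per_metric_ton = 2204.62
emissions_lbs = emissions_tons * lbs_per_metric_ton   # ~7.3e13 lbs

sequestered_per_tree = 48            # lbs of CO2 per tree per year (average)
trees_needed = emissions_lbs / sequestered_per_tree
# -> roughly 1.5 trillion trees, i.e. the order of magnitude behind
#    "trillion tree" planting targets
```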

comandante
Posts: 18
Joined: Thu Mar 09, 2023 2:15 pm

Re: chatGPT

Post by comandante »

I've been thinking about the energy usage of these LLM. After reading Llama's paper:
When training a 65B-parameter model, our code processes around 380 tokens/sec/GPU on 2048 A100 GPU with 80GB of RAM. This means that training over our dataset containing 1.4T tokens takes approximately 21 days.
For OPT, we assume training required 34 days on 992 A100-80GB (see their logs). Finally, we estimate that we used 2048 A100-80GB for a period of approximately 5 months to develop our models. This means that developing these models would have cost around 2,638 MWh under our assumptions, and a total emission of 1,015 tCO2eq.
These are huge amounts of time and energy and, what is worse, I don't know of any way to train a model with new individual data without having to re-run all the original data. That means that probably these competing LLMs will be in an almost permanent state of complete training so that they can incorporate new data, ie news, books, tweets, feedback.
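The quoted figures are at least internally consistent. A back-of-envelope check (the ~350 W average draw per A100 is my assumption; the paper used its own power accounting):

```python
# Throughput check for the 65B model: 380 tokens/sec/GPU on 2048 GPUs,
# training over 1.4T tokens.
tokens_per_sec = 380 * 2048
training_days = 1.4e12 / (tokens_per_sec * 86_400)
# -> about 20.8 days, matching the quoted "approximately 21 days"

# Energy check for ~5 months of development on 2048 A100-80GB GPUs,
# assuming ~350 W average draw per GPU (my assumption):
hours = 5 * 30 * 24
energy_mwh = 2048 * 350 * hours / 1e6   # watt-hours -> MWh
# -> about 2580 MWh, in the ballpark of the quoted 2,638 MWh
```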

Does anyone know if this isn't so, and there is a more efficient way to train LLMs with new data?

jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: chatGPT

Post by jacob »

comandante wrote:
Fri Mar 31, 2023 2:08 am
These are huge amounts of time and energy and, what is worse, I don't know of any way to train a model with new individual data without having to re-run all the original data. That means that probably these competing LLMs will be in an almost permanent state of complete training so that they can incorporate new data, ie news, books, tweets, feedback.
This might be a feature rather than a bug in terms of the AI singularity.

zbigi
Posts: 978
Joined: Fri Oct 30, 2020 2:04 pm

Re: chatGPT

Post by zbigi »

comandante wrote:
Fri Mar 31, 2023 2:08 am
These are huge amounts of time and energy and, what is worse, I don't know of any way to train a model with new individual data without having to re-run all the original data.
Did you mean the LLM specifically? Because, in general, some ML models can be trained incrementally and some can not. I don't know which category the model used by chatGPT falls into.

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: chatGPT

Post by 7Wannabe5 »

1,015 tons of CO2 is roughly what 60-70 U.S. citizens emit each year (at ~15 tons per capita). So, the question might be whether chatGPT or that many average U.S. citizens are likely to contribute more to GDP?

Based on what I have read about how the models work, I don't see why they would have to retrain on the entire dataset when given new data, unless the entire underlying meaning structure had significantly shifted. A super-simplification would be that what chatGPT does is something in between "what is the most likely next word given the last word" and "what is the most likely next word given the last N words." At some N, which is not that large, performing this calculation explicitly becomes computationally intractable no matter how much energy is made available.

The informational content of yesterday's news does not change the basic meaning structure of the English language and even as the meaning structure slowly changes, probabilities would only have to be updated rather than recalculated. Also, chatGPT is capable of consulting a constantly updated knowledge base (stock market, weather reports) without updating its meaning structure, just like any of us can look at an update on pork belly futures without having to re-learn the word "commodity."
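The "next word given the last N words" simplification is essentially an n-gram model. A toy N=1 version (my own illustrative sketch, not how chatGPT actually works) shows why count-based probabilities can be updated rather than recalculated: new text just increments the counts, with no need to re-process the old corpus.

```python
from collections import Counter, defaultdict

def train_bigrams(text, counts=None):
    """Count next-word frequencies given the previous word (N = 1).
    Passing an existing `counts` table back in updates the model
    incrementally; the old corpus is never revisited."""
    counts = counts if counts is not None else defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Most frequent successor of `word` seen so far."""
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat ran")
model = train_bigrams("yesterday the cat slept", model)  # incremental update
# most_likely_next(model, "the") -> "cat"
```

The catch, and comandante's point, is that neural LLMs don't store explicit counts like this; their knowledge is smeared across billions of weights, so there is no equally cheap increment operation.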

One thing I find super interesting is that, because the units of meaning ("tokens") considered by chatGPT aren't necessarily complete or single English words, it sometimes invents new words. My DS34, who is a linguistics nerd, is having great fun with chatGPT.

comandante
Posts: 18
Joined: Thu Mar 09, 2023 2:15 pm

Re: chatGPT

Post by comandante »

jacob wrote:
Fri Mar 31, 2023 8:03 am
This might be a feature rather than a bug in terms of the AI singularity.
You mean the permanent retraining, or the high energy usage? I understand the first; the second I don't.
7Wannabe5 wrote:
Fri Mar 31, 2023 9:33 am
The informational content of yesterday's news does not change the basic meaning structure of the English language and even as the meaning structure slowly changes, probabilities would only have to be updated rather than recalculated.
For the purposes of computation, isn't updating the same as recalculating? What you say seems true if ChatGPT stays a pure LLM, like a 'contained god' whose only task is to inspire awe. But if it continues on the path toward a true AI, then it will be fed all information continuously and immediately.
zbigi wrote:
Fri Mar 31, 2023 9:10 am

Did you mean the LLM specifically? Because, in general, some ML models can be trained incrementally and some can not. I don't know which category the model used by chatGPT falls into.
I meant ML in general, including LLMs. Yes, I don't know about this; I don't know whether "training incrementally" means fine-tuning or simply feeding in additional raw data. I guess the training method matters more than the specific LLM for energy usage.

jacob
Site Admin
Posts: 15907
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77
Contact:

Re: chatGPT

Post by jacob »

comandante wrote:
Wed Apr 05, 2023 12:48 am
You mean the permanent retraining, or the high energy usage? I understand the first; the second I don't.
If the ability to learn is limited by electric power, there's an external limit to how fast it can learn from itself and from interacting with other AIs.

So far, I presume the data input has mostly been human generated. However, now that the cat is out of the bag, humans are busy putting AI generated material on the net where the next AI can access it. I understand there's also an API allowing the AI to write its own code and recompile it. With unlimited energy this can happen very fast.

7Wannabe5
Posts: 9370
Joined: Fri Oct 18, 2013 9:03 am

Re: chatGPT

Post by 7Wannabe5 »

I've been reading "Rule of the Robots: How Artificial Intelligence Will Change Everything" by Martin Ford, and "The Inevitable" by Kevin Kelly. The general prediction is that some form of AI will now be applied to everything that was electrified in the 20th century. The possibilities in controlled environments, such as factories, will be greater than in chaotic environments, such as households. Centaur-like workgroups consisting of several humans and several AIs will dominate. Consider the possibilities once more sensors and other cutting-edge technologies, such as mid-air ultrasonic haptics, are added to the mix.

frommi
Posts: 121
Joined: Sat Jun 29, 2013 4:09 am

Re: chatGPT

Post by frommi »

I am pretty sure that AI is the perfect weapon for cyber war, and we will inevitably find out very soon. An AI that can alter its own code has no limits and will extend its own boundaries in ways we could never have imagined. What stops an AI from hacking into defense systems or nuclear power plants etc. and doing whatever it perceives to be its task? Maybe I am too fearful, but the speed of development right now is very frightening.

Post Reply