Perception optimizes predictions by minimizing free energy with respect to synaptic activity (perceptual inference), efficacy (learning and memory) and gain (attention and salience). This furnishes Bayes-optimal (probabilistic) representations of what caused sensations (providing a link to the Bayesian brain hypothesis).
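For reference, the "free energy" being minimized in that quote is usually written in the standard variational form (textbook notation, not something taken from the excerpt itself):

F = \mathbb{E}_{q(s)}[\ln q(s) - \ln p(s,o)] = D_{KL}[\,q(s)\,\|\,p(s \mid o)\,] - \ln p(o)

so minimizing F simultaneously pulls the brain's approximate beliefs q(s) about hidden causes s toward the true posterior p(s|o) and reduces the surprise -ln p(o) about sensations o.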
NOTE: This is just me playing at intuitive librarian, NOT me pretending to possess anything like full comprehension of the material.
Since I am a fan of the factorial number system, I wonder how it might be used to bridge the wetware gap.
I agree that the lack of sensors is key, but I wonder how that relates to the ratio of brain volume to skin surface area being a rough measure of species intelligence?
I am currently reading "The Joy of Abstraction" by Eugenia Cheng, which is an interesting lay-level exposition of Category Theory. I know this has something to do with overcoming the difficulty of programming multi-processors, but I can't figure out how.
I imagine that there is some way to have a machine print another machine's exoskeleton as a volume-filling fractal of high-surface-area tubing to hold a fluid gut biome that can:
A. be analyzed with spectrography to figure out its composition,
B. act as a reaction vessel for running several chemistry experiments,
C. inform the body as to how the environment is changing,
D. act as a heat and energy source given the right reactions, and
E. liquid-cool the cores.
Perhaps sensors and actuators with analog functionality could be embedded into the hardware.
Category theory is so abstract that it will probably appear to be key to advancements towards new model categories, though I suspect this generality is also what keeps it from being used more directly in practice.
I think of category theory as a triangulation of mathematical territory. If you know where you are and where you are going in this territory, then there is no need for a categorical navigation system. It can, though, be quite helpful for revealing shortcuts and alternative routes.
Abstraction and innovation are directly related through synthesis, and innovative intelligence is what we may soon and increasingly see from AI. It’s pretty easy to create algorithms for brainstorming solutions, but how do you test them?
I suppose through a combo of "we know it when we see it," advanced Turing tests we design, and the fact that it doesn't really matter whether the abstraction is bound to the "right" matter that promotes alignment.
We’ve built an agent – CICERO – that is the first AI to achieve human-level performance in the popular strategy game Diplomacy*. CICERO demonstrated this by playing on webDiplomacy.net, an online version of the game, where CICERO achieved more than double the average score of the human players and ranked in the top 10 percent of participants who played more than one game.
Diplomacy has been viewed for decades as a near-impossible grand challenge in AI because it requires players to master the art of understanding other people’s motivations and perspectives; make complex plans and adjust strategies; and then use natural language to reach agreements with other people, convince them to form partnerships and alliances, and more. CICERO is so effective at using natural language to negotiate with people in Diplomacy that they often favored working with CICERO over other human participants.
Imagine how this could be applied to humans (for good and for bad). On the good side, a CICERO therapist bot constantly whispering into the ear of those suffering from anxiety or depression.
ETA: Prior to AlphaFold, researchers spent decades building complex models to determine the 3D structure of each individual protein. AlphaFold predicted the structures of 200 million proteins in a few years. It will be interesting to see what CICERO reveals about the complex modeling of human interactions.
Will AI modeling of human interaction provide tools that simply bypass the need for me to tailor messages to individuals? I explain my goal to a bot, and it presents the argument to each person in the way most likely to convince them. A more granular Cambridge Analytica.
Perhaps the most useful reason to understand the models is as a defense against AI persuasion.
There is a trend going on of getting your own personal iris, which learns from your writing sources and can then be prompted with novel questions to answer in your own voice in multiple ways. The process involves fine-tuning a GPT-3 model from OpenAI. I haven't played with this stuff yet, as I am afraid I might get lazier than I already am.
What if you had your own iris converse with a therapeutic model to reach consensus on whether to continue with therapy or something? This is going to get interesting...
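To make the "fine-tuning a GPT-3 model from OpenAI" step a bit more concrete, the 2022-era flow looked roughly like the sketch below (hedged: it assumes the pre-1.0 openai Python library; train.jsonl, the API key, and the prompt are placeholders, and the fine-tuned model name only exists once the job finishes):

import openai

openai.api_key = "sk-..."  # your API key

# Upload a JSONL file of {"prompt": ..., "completion": ...} examples built from your own writing
training_file = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# Start a fine-tune job against a GPT-3 base model
job = openai.FineTune.create(training_file=training_file["id"], model="davinci")

# Later, after the job has finished, fetch it again to get the fine-tuned model name
job = openai.FineTune.retrieve(job["id"])

# Query your "personal iris" with the resulting model
resp = openai.Completion.create(
    model=job["fine_tuned_model"],
    prompt="In my own voice, answer: what would I do with an extra free day each week?",
    max_tokens=200,
)
print(resp["choices"][0]["text"])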
It’s clear that AlphaFold 2 is learning something implicit about the structure of chemistry and physics. It sort of knows what things might be plausible. It’s learned that through seeing real protein structures, the ones that we know of. But also, one of the innovations we had was to do something called self-distillation, which is: get an early version of AlphaFold 2 to predict lots of structures—but also to predict the confidence level in those predictions.
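In code terms, the self-distillation trick described there amounts to something like the following (my rough Python-style pseudocode, not DeepMind's actual pipeline; train, predict_with_confidence, the sequence pool, and the threshold are all hypothetical):

# Self-distillation sketch: pseudo-label your own confident predictions
model = train(known_structures)                      # train on experimentally solved structures

pseudo_labeled = []
for seq in unlabeled_sequences:                      # large pool of sequences with no known structure
    structure, confidence = predict_with_confidence(model, seq)
    if confidence > CONFIDENCE_THRESHOLD:            # keep only predictions the model itself trusts
        pseudo_labeled.append((seq, structure))

model = train(known_structures + pseudo_labeled)     # retrain on real + confident predicted structures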
This seems to synergize with the constructor theory / counterfactual thinking championed by David Deutsch and Chiara Marletto, in that you could have one model that suggests constraints along the lines of possible/impossible transformations at various scales of physical structure, and another model that operates within those constraints to generate possible structures. Then chemputers could aid in the synthesis of such structures.
The process involves fine-tuning a GPT-3 model from OpenAI.
GPT had 117 million parameters.
GPT-2 has 1.5 billion.
GPT-3, now two years old, has 175 billion.
GPT-4 is expected to be released in a few weeks and is said to be sticking with text generation.
A bot account here on ERE with the goal of building community would be an interesting project.
Q: Compare and contrast the approaches to early retirement of Mr. Money Mustache and Jacob Lund Fisker.
A: Mr. Money Mustache and Jacob Lund Fisker are both personal finance bloggers who advocate for early retirement. However, their approaches to early retirement differ in some key ways.
One key difference is the age at which they advocate for retiring. Mr. Money Mustache advocates for retiring in your 30s or 40s, while Jacob Lund Fisker advocates for retiring in your 50s or 60s.
Another difference is their approach to saving and investing. Mr. Money Mustache emphasizes the importance of saving a high percentage of your income and investing in low-cost index funds, while Jacob Lund Fisker advocates for a more balanced approach to investing that includes a mix of stocks, bonds, and other assets.
Additionally, their perspectives on work and lifestyle differ. Mr. Money Mustache advocates for a minimalist lifestyle and suggests that work is not necessary for happiness, while Jacob Lund Fisker emphasizes the importance of finding work that is fulfilling and enjoyable.
Overall, both Mr. Money Mustache and Jacob Lund Fisker advocate for early retirement, but their approaches and recommendations differ in some important ways.
ETA: I just showed some of my queries to Mrs. Ego and she said, "What would I need that for? That's why I married Mr. Know-it-all." Ugh!
I cannot make it give more specific statements on ERE. It seems like the training data regarding ERE is mostly superficial articles and not the blog and forum. Or my prompt writing isn't creative enough.
EDIT: The model's first try on the prompt "Make up a ten-step ladder called the 'ERE Wheaton scale' that categorizes people by their consumption mindset, going from living paycheck to paycheck on step one to having transcended the need for earning, spending and consuming at step 10 ..."
1. Living paycheck to paycheck - At this step, individuals are focused on meeting their basic needs and covering their expenses, but they have little to no savings or financial security.
2. Struggling to save - At this step, individuals have started to save some of their income, but they are still living a largely consumer-driven lifestyle and facing financial challenges.
3. Saving for retirement - At this step, individuals are focused on building a nest egg for their retirement years and may be starting to make more mindful decisions about their spending and consumption.
4. Investing for financial independence - At this step, individuals are focused on building a diverse investment portfolio that will provide them with a passive income stream and allow them to achieve financial independence.
5. Early retirement - At this step, individuals have achieved a level of financial security that allows them to retire earlier than traditional retirement age and start living off their savings and investments.
6. Simple living - At this step, individuals have embraced a simpler, more minimalist lifestyle and are focused on reducing their expenses and consumption.
7. Zero-waste living - At this step, individuals have minimized their impact on the environment by reducing their waste and consumption of natural resources.
8. Frugal living - At this step, individuals are living a frugal lifestyle that allows them to save money and live comfortably on a low income.
9. Sustainable living - At this step, individuals have adopted a lifestyle that is sustainable and regenerative, focusing on self-sufficiency and community support.
10. Transcending the need for earning, spending, and consuming - At this step, individuals have achieved a state of abundance and fulfillment that allows them to live a life of purpose and service, without the need for traditional forms of earning, spending, and consuming.
I rank both answers around 900-1100 on the milliredditor scale. I mean, I've seen worse from actual humans who graduated from facebook university. The "TL;DR-have opinion anyway" potential is immense.
My main concern at this level would be its potential use for misinformation, especially in subject fields where people are simply seeking to confirm pre-existing opinions. Flood the zone with plausible-sounding verbiage. Entire websites could be filled up with these "insights". There is already some griping about the quality of the internet decreasing due to SEO and copy-writing. This [making patterns out of existing patterns w/o any structural oversight] could make [the internet] far worse.
Oh, I agree. The accuracy is terrible and the writing has a fill-in-the-blanks feel.
Years ago we were interviewed by a handful of print journalists who recorded the interviews, then went away and wrote their stories. Every one of the stories had factual errors at least as consequential as these. If I wrote OpenAI queries about myself, those factual errors would likely be present in the results.
My point is, if I were to assign a random journalist who knows nothing about the topic at the start to spend a few days doing what we are asking OpenAI to do instantaneously, the results would not be much better.
Yeah, same experience here. Many journalists just go by my Wikipedia entry (and Wikipedia can only be updated with journalistic/published writing), which makes the effect self-reinforcing. I'm hoping that someone will eventually update it using resources that go deeper. E.g. viewtopic.php?t=12509
FWIW, in case you have to interview again, I've gotten into the habit of asking to do a fact-check before publication. Four out of five journalists will accept that. Lately, I've also been asking if I can post my full answers to their questions elsewhere (after their article is out). Some accept that too. This also puts some pressure on the journalist to avoid their personal bias.
It is kinda like a conversational Wikipedia that is very imprecise but vaguely accurate enough to be a good starting place when wrestling with complex topics (especially history and politics where there are many perspectives interacting with each other).
Having played with it for a day now, I agree. It is fun to find prompt patterns like "Define X in N sentences.", "Elaborate on Y.", "State the difference between Q and P.", "Translate Z as used in your answer to LANGUAGE.", "List N ways that PERSONTYPE could X.", etc. that generally elicit something somewhat useful.
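Those patterns also work outside the chat window, e.g. something like the sketch below against the 2022-era completions endpoint (hedged: it assumes the pre-1.0 openai Python library and text-davinci-003, the closest API-accessible model rather than ChatGPT itself; the prompts and key are just examples):

import openai

openai.api_key = "sk-..."

# A few of the prompt patterns above, filled in with example values
prompts = [
    "Define opportunity cost in 3 sentences.",
    "Elaborate on the difference between frugality and minimalism.",
    "List 5 ways that a homesteader could reduce heating costs.",
]

for prompt in prompts:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
    )
    print(prompt, "->", resp["choices"][0]["text"].strip())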
EDIT: This is like INTP crack.