Transdisciplinary transitions and apprenticeships

The "other" ERE. Societal aspects of the ERE philosophy. Emergent change-making, scale-effects,...
7Wannabe5
Posts: 9446
Joined: Fri Oct 18, 2013 9:03 am

Re: Transdisciplinary transitions and apprenticeships

Post by 7Wannabe5 »

zbigi wrote: It is only actionable for humans because we, unlike AI, can map the words in generated sentences onto actual real-world objects and concepts, and also can parse the propositional logic encoded in sentences.
AI has also been trained on images to the extent that it can recognize many more objects than most humans. For instance, the Picture This app can identify well over 10,000 different species of plants and determine whether the specimen is healthy or diagnose any problem. So, if somebody were good at robotics, they could build a solar-powered garden robot that could wander about with Picture This and then, for instance, perform a SNIP action whenever a "weed" or a diseased plant was encountered.
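
To make that concrete, here is a minimal sketch of the control loop such a robot might run. As far as I know, Picture This doesn't expose a public API, so identify_plant() and snip_at() below are hypothetical stand-ins for a plant-ID model and the cutting actuator:

```python
# Rough sketch of the wandering garden robot's control loop.
# identify_plant() and snip_at() are hypothetical stand-ins, not real APIs.
import time

WEEDS = {"dandelion", "bindweed", "crabgrass"}  # illustrative target list


def identify_plant(image_bytes):
    """Hypothetical classifier: returns (species, is_healthy)."""
    raise NotImplementedError("stand-in for a plant-ID model or service")


def snip_at(position):
    """Hypothetical actuator command for the SNIP action."""
    raise NotImplementedError("stand-in for the cutting tool")


def patrol(camera, gps):
    while True:
        frame = camera.capture()
        species, healthy = identify_plant(frame)
        if species in WEEDS or not healthy:
            snip_at(gps.current_position())
        time.sleep(5)  # wander a bit, then look again
```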

I read that the latest releases of AI have also been trained on video, which I would assume could give them some sense of movement being associated with verbs. Why would "Alfred, cook me some Cajun tater tots" be any more difficult to parse than "Alexa, play me some classic rock of the 1970s"?

Okay, so I asked the handiest AI, "If you were a robot, and I asked you to cook me some tater tots, what would you do?" and got a dead-end "I am not allowed to answer this kind of question, because I am not a robot." response. So, I rephrased the prompt to simply "How would a robot cook tater tots?" and this was the response:
A robot designed for cooking could potentially prepare tater tots using a pre-programmed recipe and a series of mechanical actions. Here’s a general idea of how it might work:

Ingredient Preparation: The robot would need to have the necessary ingredients for making tater tots, such as potatoes, flour, salt, and oil. It could use mechanical arms or similar devices to gather and measure these ingredients.

Cooking Process: The robot would follow the recipe’s instructions. This could involve grating the potatoes, mixing them with flour and salt, shaping the mixture into small cylinders (the typical shape of tater tots), and then frying them in oil until they’re crispy and golden brown.

Safety Measures: Throughout this process, the robot would need to take safety precautions, such as managing the heat of the oil and ensuring that the tater tots are cooked thoroughly.

Serving: Once the tater tots are cooked, the robot could use a spatula or similar tool to remove them from the oil, let them drain, and then serve them.

Remember, this is a simplified explanation. A real cooking robot would likely involve much more complex programming and machinery. Also, it’s important to note that while there are robots that can assist with certain cooking tasks, a fully autonomous cooking robot like the one described here is currently beyond our technological capabilities. However, the field of robotics is advancing rapidly, so who knows what the future holds? 😊
ETA: The prompt "Do humans possess free will?" just generated a short essay outlining various philosophical takes on the matter. However, "Does pushing on a door cause it to open?" generated the following, which is unlikely to be based directly on a human-composed article entitled "How to open a door" meant for a human audience.
Yes, pushing on a door can cause it to open, provided that the door is designed to open with a push motion and it is not locked or blocked. This is due to the application of force which results in the door moving on its hinges. However, not all doors are designed to open with a push; some doors require pulling or sliding instead. So, the effect of pushing on a door will depend on its design and current state (locked/unlocked, blocked/unblocked).

zbigi
Posts: 1002
Joined: Fri Oct 30, 2020 2:04 pm

Re: Transdisciplinary transitions and apprenticeships

Post by zbigi »

7Wannabe5 wrote:
Thu Jan 11, 2024 7:16 am
So, if somebody were good at robotics, they could build a solar-powered garden robot that could wander about with Picture This and then, for instance, perform a SNIP action whenever a "weed" or a diseased plant was encountered.
Yeah, that could be tried, but I'm guessing it could work well only in a relatively controlled environment (like a modern factory farm). Even in such circumstances, grabbing the stem of an arbitrary weed and pulling it in a way that will be effective for that particular plant is an extremely hard robotics task. Many people don't know it, but the current state of the art in robotics is pretty crappy at arbitrary object manipulation, especially if the objects aren't rigid - even folding a basket of laundry is more or less beyond current capabilities.
I read that the latest releases of AI have also been trained on video, which I would assume could give them some sense of movement being associated with verbs.
Really depends on what "training" means here. If it's just treating videos as a series of 2D stills, then it's obviously not relevant (because the robot will have to operate in a 3D world). Only if the algorithm did depth recovery, and parsed videos as 3D scenes based on that, could it be somewhat relevant. This field (reliable depth recovery from videos, not to mention training on them) is incredibly hard and still in its infancy.
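
For what it's worth, per-frame depth estimation from a single image is the relatively tractable first step; below is a minimal sketch using the publicly released MiDaS model via torch.hub. Note that this only yields a relative depth map per frame - fusing those into a consistent 3D scene a robot could act in is where it gets really hard.

```python
# Minimal sketch: relative depth from one video frame using MiDaS via torch.hub.
# Assumes torch, timm and opencv-python are installed; the output is relative
# (inverse) depth, not a metric 3D reconstruction of the scene.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

frame = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transform(frame))

depth = prediction.squeeze().cpu().numpy()  # one relative depth value per pixel
```
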
Why would "Alfred, cook me some cajun tater tots." be any more difficult to parse than "Alexa, play me some classic rock of the 1970s?"
Because the former is an arbitrary sentence pertaining to an arbitrary action, while I'm guessing the latter is not treated as a sentence at all [1], but rather is scanned for a bunch of pre-programmed trigger words, pertaining to the set of functionalities Alexa provides.

[1] Just verify by saying "Alexa, DON'T play me some classic rock", which will still trigger music - or "Alexa, if 1 is greater than 2, play classic rock. Play prog rock otherwise."
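
To make the contrast concrete, a caricature of trigger-word matching along these lines (a guess at the general shape of the problem, not Amazon's actual pipeline) behaves exactly as described in [1]:

```python
# Caricature of keyword-trigger command handling (not Alexa's real implementation).
# It fires whenever the trigger words appear, ignoring negation and conditionals.
TRIGGERS = {
    ("play", "classic rock"): lambda: print("Playing classic rock..."),
    ("play", "prog rock"): lambda: print("Playing prog rock..."),
}


def handle(utterance: str) -> None:
    text = utterance.lower()
    for keywords, action in TRIGGERS.items():
        if all(word in text for word in keywords):
            action()
            return
    print("Sorry, I don't know that one.")


handle("Alexa, DON'T play me some classic rock")  # still plays classic rock
handle("Alexa, if 1 is greater than 2, play classic rock. Play prog rock otherwise.")
# also plays classic rock, because those trigger words match first
```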

7Wannabe5
Posts: 9446
Joined: Fri Oct 18, 2013 9:03 am

Re: Transdisciplinary transitions and apprenticeships

Post by 7Wannabe5 »

zbigi wrote: Many people don't know it, but the current state of the art in robotics is pretty crappy at arbitrary object manipulation
Yes, this is true. Moving through space requires a great deal of brain activity for humans too. It would probably be easier to design the robot to do a controlled burn of the weeds.
zbigi" wrote: Only if the algorithm did depth recovery, and parsed videos as 3d scenes based on that, then it could be somewhat relevant.
Gotcha. Maybe AI's ability to always know its location could be utilized? Like how assassin drones seek out and kill particular humans based on facial recognition and likely location?
zbigi wrote: Just verify by saying "Alexa, DON'T play me some classic rock", which will still trigger music - or "Alexa, if 1 is greater than 2, play classic rock. Play prog rock otherwise."
Okay, so I prompted AI with "If cats have fur, tell me Britney Spears's greatest hit, otherwise solve sixteen divided by four." and the response was:
Cats do have fur. Britney Spears's greatest hit is often considered to be "...Baby One More Time."
followed by some more possibilities for greatest hit based on varying criteria, but NO solution for the math problem. IOW, it had no difficulty with parsing or logic.

7Wannabe5
Posts: 9446
Joined: Fri Oct 18, 2013 9:03 am

Re: Transdisciplinary transitions and apprenticeships

Post by 7Wannabe5 »

OTOH...



Prompt: Please compose a brief meta-systematic analysis of the climate crisis from the perspective of spiral dynamics and chaos theory.
The climate crisis is a complex and dynamic phenomenon that challenges the conventional ways of thinking and acting. It requires a meta-systematic analysis that can integrate multiple perspectives and levels of development, as well as account for the nonlinear and unpredictable nature of the system. Spiral dynamics is a framework that describes the evolution of human values and worldviews, from survival-oriented to holistic and integral. Chaos theory is a branch of mathematics that studies the behavior of complex systems that are sensitive to initial conditions and exhibit emergent patterns. Using these two lenses, we can understand the climate crisis as a result of the clash between different value systems, as well as the emergence of new possibilities and challenges from the interactions of multiple agents and factors. A meta-systematic analysis of the climate crisis would aim to transcend and include the diverse perspectives and values, as well as embrace the uncertainty and creativity of the system.

jacob
Site Admin
Posts: 16001
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: Transdisciplinary transitions and apprenticeships

Post by jacob »

7Wannabe5 wrote:
Thu Jan 11, 2024 9:25 am
Prompt: Please compose a brief meta-systematic analysis of the climate crisis from the perspective of spiral dynamics and chaos theory.
That answer reminds me of the kind of student who talks a lot but doesn't say much... and usually gets an A... and ends up working in investment banking. The answer was but a more elaborate description of the definitions of the words in the question.

Try: Please suggest a brief meta-systematic solution to the climate crisis from the perspective of spiral dynamics and chaos theory.

7Wannabe5
Posts: 9446
Joined: Fri Oct 18, 2013 9:03 am

Re: Transdisciplinary transitions and apprenticeships

Post by 7Wannabe5 »

@jacob:

Yeah, I was thinking GIGO. Tried yours. Not a whole lot better.

Prompt: Please suggest a brief meta-systematic solution to the climate crisis from the perspective of spiral dynamics and chaos theory.
From a meta-systematic perspective, addressing the climate crisis requires an understanding of the complex, interconnected systems that contribute to it. This involves recognizing the interplay between individual behaviors, societal structures, and global ecological patterns.

Spiral Dynamics suggests that human consciousness evolves through different stages, each with its own worldview and approach to problem-solving. To address the climate crisis, we need to foster the evolution of consciousness towards stages that recognize the interconnectedness of all life and the importance of sustainability.

Chaos Theory, on the other hand, emphasizes the sensitivity of complex systems to initial conditions. Small changes can have large, unpredictable consequences. This underscores the importance of early intervention and the potential impact of seemingly small actions. For instance, Edward Lorenz’s work on weather patterns led to the development of chaos theory and highlighted the unpredictable nature of weather systems.

Applying these perspectives to the climate crisis, a meta-systematic solution might involve:

Promoting Conscious Evolution: Encourage a shift in societal values towards stages that prioritize ecological sustainability and recognize our interconnectedness with the environment.

Leveraging Small Changes: Identify key areas where small changes could have a significant impact on reducing greenhouse gas emissions and promoting sustainability.

Embracing Complexity: Acknowledge the complexity of the climate system and the uncertainty inherent in our predictions. Use this as a call to action, recognizing that every bit of prevention and mitigation can help.

Integrating Perspectives: Develop solutions that integrate multiple perspectives and disciplines, recognizing that the climate crisis is a complex issue that cannot be solved through a single lens.

Remember, this is a high-level overview and the actual implementation of these ideas would require detailed planning and coordination at multiple levels of society. It’s also important to note that while these theories can provide valuable insights, they are just two of many lenses through which we can view and address the climate crisis.
I followed up with "What are some small changes that could have a significant impact?" and a not-too-interesting list of individual behavioral changes, such as switching to a plant-based diet and driving and flying less, was generated. Then I prompted "Why don't most humans make these small changes?" and found this response somewhat more interesting:
There are several reasons why people might not make these small changes to combat climate change:

Perception of Threat: Climate change represents a trade-off between short-term and long-term benefits; it’s a nonlinear problem; the effects of climate change are distant from most people; and the future is always more uncertain than the present.

Skepticism and Mistrust: The language describing climate change as a crisis and an urgent threat is often met with suspicion. The disconnect between crisis rhetoric and individuals’ own beliefs and experiences can sow doubt about the motivations of those making these claims.

Cost and Confusion: The cost of greener options and confusion about what actions to take can be barriers to change.

Lack of Resources: Some people feel they need more resources and equipment from public authorities to do more for the planet.

Pride in Current Actions: Some people feel proud of what they are currently doing for the planet and may not see the need to do more.

These factors highlight the complexity of the issue and the need for comprehensive solutions that address these barriers. It’s important to remember that while individual actions are crucial, systemic changes are also needed to effectively combat climate change. 🌍

zbigi
Posts: 1002
Joined: Fri Oct 30, 2020 2:04 pm

Re: Transdisciplinary transitions and apprenticeships

Post by zbigi »

@7 Yeah, as far as I know, the LLMs have no trouble processing simple logic or math. However, the original problem still remains - e.g. they would have no idea that you asked them to play the Britney Spears song (as opposed to asking them about the most popular song, which is in the realm of pure knowledge/text processing, so the LLM had no problem with that - similarly to how Google Search will return pages containing top Britney Spears songs without any understanding of the concept of Britney Spears or a song).
Alexa solves the problem of understanding what you ask of it via limited, rudimentary hacks, but those hacks aren't easy to combine with an LLM. So, Alexa can't know that 1 + 1 = 2, and LLMs can't know that you actually want to play the music. A solution that would combine the two is not obvious to me.

7Wannabe5
Posts: 9446
Joined: Fri Oct 18, 2013 9:03 am

Re: Transdisciplinary transitions and apprenticeships

Post by 7Wannabe5 »

zbigi wrote: Alexa solves the problem of understanding what you ask of it via limited, rudimentary hacks, but those hacks aren't easy to combine with an LLM. So, Alexa can't know that 1 + 1 = 2, and LLMs can't know that you actually want to play the music. A solution that would combine the two is not obvious to me.
I am about the least tech-savvy human who is also actually working on an M.S. in tech, so it took me a minute to research this and figure out whether we were communicating at cross-purposes.

Any smart device in the realm of Alexa (or similar) has a number of Skills. The invocation of specific Skill(s) is combined with varied Utterances to produce the desired action. Alexa makes use of NLU (natural language understanding) to manage the great variety of Utterances.

https://towardsdatascience.com/how-amaz ... 06004709d3
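
As a toy illustration of that Skill/Utterance/NLU split (the intent names below are made up, not Amazon's), the interaction model is basically a mapping from sample utterances to intents, with the NLU layer deciding which intent the user's actual words are closest to:

```python
# Toy version of a Skill interaction model: intents with sample utterances, plus
# a crude matcher standing in for the NLU layer. Intent names are invented here.
INTERACTION_MODEL = {
    "StartVacuumIntent": [
        "start vacuuming",
        "vacuum the living room",
    ],
    "PlayMusicIntent": [
        "play some classic rock",
        "play music from the seventies",
    ],
}


def resolve_intent(utterance: str) -> str | None:
    """Naive stand-in for NLU: pick the intent whose sample utterance shares
    the most words with what the user actually said."""
    words = set(utterance.lower().split())
    best, best_overlap = None, 0
    for intent, samples in INTERACTION_MODEL.items():
        for sample in samples:
            overlap = len(words & set(sample.split()))
            if overlap > best_overlap:
                best, best_overlap = intent, overlap
    return best


print(resolve_intent("Alexa, start vacuuming please"))  # -> StartVacuumIntent
```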

Alexa is being updated to include an LLM, so that its ability to manifest device Skills will be greatly enhanced through much wider and more nuanced interpretation of the invocation of Skills and associated Utterances.
Whenever a user request or a device interaction (e.g. touch) occurs, Alexa orchestrates a series of actions, interfacing with Alexa’s LLM to construct prompts and iteratively make calls until the completion of the task. Alexa will enhance the prompt with additional signals and data such as memory, context and user preferences before executing API/actions based on LLM predictions...

This means customers using Alexa’s LLM can say “Alexa, the floor is dirty,” and Alexa will be able to infer the intended action is ‘vacuum’, and state ‘start.’ ...

Dynamic Controller enables customers to use a wider range of utterances to control the unique features of their smart home devices in a more conversational way (e.g. a customer can request a lighting solution provider to “make the lights like a misty forest” ...The LLM will infer the colors for the misty forest, the corresponding energy level and the correct capability interface (CreateTemporaryDynamicScene) from the utterance before sending a directive to the corresponding Skill. The directive will contain the device endpoints and the requested payload that indicate how best to set the dynamic lighting scene for the customer.
https://developer.amazon.com/en-US/blog ... s-sep-2023

IOW, in this example, Alexa will "know" that the lighting device is looking for "colors" and "light level", so it can combine that with the customer's Utterance to construct a prompt for the LLM along the lines of "What colors and lighting level are associated with a misty forest?" Therefore, the range of Skills currently available in smart devices and robotics would seem to be the primary limiting factor, whereas the ability of AI to comprehend that "Suck on the rug." or "Vanquish those Cheerios crumbs." are likely equivalent to the Activate Vacuum Skill is already verging on quite advanced.
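
Here is a guess at the shape of that flow (the helper names and prompt format are invented for illustration; nothing below is Amazon's actual interface):

```python
# Sketch of the utterance -> LLM -> device directive flow described above.
# call_llm() is a hypothetical stand-in; the capability schema mirrors the
# "colors" / "light level" idea from the lighting example.
import json

LIGHT_CAPABILITY = {
    "interface": "CreateTemporaryDynamicScene",
    "fields": {"colors": "list of hex color strings", "brightness": "integer 0-100"},
}


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call, assumed to return JSON matching the requested fields."""
    raise NotImplementedError("stand-in for whatever model Alexa actually uses")


def utterance_to_directive(utterance: str) -> dict:
    prompt = (
        f"User said: {utterance!r}\n"
        f"Return JSON with these fields: {json.dumps(LIGHT_CAPABILITY['fields'])}"
    )
    payload = json.loads(call_llm(prompt))
    return {"interface": LIGHT_CAPABILITY["interface"], "payload": payload}


# utterance_to_directive("make the lights like a misty forest") might yield:
# {"interface": "CreateTemporaryDynamicScene",
#  "payload": {"colors": ["#6b7f6e", "#a9b8ab"], "brightness": 35}}  (illustrative)
```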

zbigi
Posts: 1002
Joined: Fri Oct 30, 2020 2:04 pm

Re: Transdisciplinary transitions and apprenticeships

Post by zbigi »

7Wannabe5 wrote:
Fri Jan 12, 2024 12:54 pm
https://developer.amazon.com/en-US/blog ... s-sep-2023

IOW, in this example, Alexa will "know" that the lighting device is looking for "colors" and "light level", so it can combine that with the customer's Utterance to construct a prompt for the LLM along the lines of "What colors and lighting level are associated with a misty forest?" Therefore, the range of Skills currently available in smart devices and robotics would seem to be the primary limiting factor, whereas the ability of AI to comprehend that "Suck on the rug." or "Vanquish those Cheerios crumbs." are likely equivalent to the Activate Vacuum Skill is already verging on quite advanced.
Interesting. If I understand their intentions correctly, they want to pass whatever the user said (perhaps after some processing) into the LLM, and the LLM is supposed to output the desired API call. Pretty cool idea, if they can pull it off with any degree of accuracy.

loutfard
Posts: 381
Joined: Fri Jan 13, 2023 6:14 pm

Re: Transdisciplinary transitions and apprenticeships

Post by loutfard »

This talk of AI inspired me to write a few lines. Neither legal nor illegal drugs were involved. Here it is:
I feel the wind in my hair of the AI revolution rushing by, with in its wake the hordes of the enchanted. Where will it take them? Why are they clamoring for me to start running too?

I think I'll hunker down, wait for things to settle down a bit, and on my morning walk go scavenge for what they dropped in their rush. I'm already quite excited about what I will find.

Now if only I could get rid of the madman pulling on my arm, that is! I tell him: "But solving problems myself is half of the fun to me! I'd rather try homelessness than helplessness." For a brief moment, his senses return. He gives me a shy, apologetic smile and starts running again.

7Wannabe5
Posts: 9446
Joined: Fri Oct 18, 2013 9:03 am

Re: Transdisciplinary transitions and apprenticeships

Post by 7Wannabe5 »

@loutfard:

Lovely. Truly.

However, I am still considering how amusing or money/hassle-saving it might be to further extend the capabilities of my 83-year-old mother's "smart" environment. She already has a smart speaker and orders everything she needs to be delivered using apps on her tablet.
