bryan said: Another clear segment of emergence is AI/machine learning. For instance, we now have neural nets that can classify, with quite high probability, the subject content of an image.

Right. One time I was trying out a new Plant ID app, but I hadn't enabled GPS, so it didn't work very well until I figured out that it didn't know where I was. I have played around with several of them, and I would not yet judge them accurate enough for important determinations such as edibility.
Sclass said: I'm not so sure of the human analogy. I take it you're interested in AI? A friend I used to work with described adaptive feedback as a form of AI.

I think I am interested in adaptive feedback. Your self-tuning thermostat example is pretty much on the money, except that I would be part of the mechanism.
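The "self-tuning thermostat with a human in the loop" idea can be sketched in a few lines: the setpoint drifts toward whatever the occupant keeps rating as comfortable. All of the numbers and the `tune_setpoint` helper below are invented for illustration, not any real thermostat's API:

```python
# Toy adaptive-feedback loop: the thermostat's setpoint is nudged by
# the occupant's comfort votes, so the human is part of the mechanism.
# All values here are made up.

def tune_setpoint(setpoint, feedback, rate=0.5):
    """Nudge the setpoint by a fraction of the comfort feedback.

    feedback > 0 means "too cold, warmer please";
    feedback < 0 means "too warm, cooler please".
    """
    return setpoint + rate * feedback

setpoint = 20.0  # degrees C
for feedback in [+2, +1, 0, -1, 0]:  # occupant's daily comfort votes
    setpoint = tune_setpoint(setpoint, feedback)

print(f"learned setpoint: {setpoint:.1f} C")  # drifts toward comfort
```

The point of the sketch is just that the controller's target is itself learned from feedback, rather than fixed in advance.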
What I would like to be able to do is somehow combine the capabilities of systems-simulation software, 3-D landscape design/SketchUp/animation, and the spreadsheet where I tracked my happiness level against various lifestyle factors, with continuing input of data gathered by me and from other streamed sources such as weather forecasts, stock market reports, a Fitbit, a plant database, or my calendar.
The first time I designed a garden using 3-D software, almost 20 years ago, I was rather amazed because my planting actually ended up looking very much like the program had predicted. In retrospect, though, I realize that this was because a certain covert collaboration, a shared understanding of how such a project proceeds, was going on between me and the program. For instance, the program and I were both assuming that I would clear the ground of weeds and provide adequate soil. Also, the database of 3-D plant models was limited to varieties with well-known characteristics. So, given these rather strict parameters, the program was able to project quite an accurate image of the garden's appearance in a given season and/or after several years of growth.
OTOH, when I kept a spreadsheet tracking my daily happiness level and how I spent my time, I was rather surprised by some of the resulting correlations. For instance, I determined that I enjoyed travel/novel activity even more than I had expected. However, I have not been able to do a very good job of feeding those results back into the loop, or of developing measures any more sophisticated than self-reported daily happiness.
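The correlation step above can be sketched with a plain Pearson coefficient over the daily log; the activity columns and numbers below are hypothetical stand-ins for the actual spreadsheet:

```python
import statistics

# Hypothetical daily log: hours spent on each activity and a
# self-reported happiness score (1-10). All values are invented.
days = [
    {"travel": 6, "gardening": 1, "happiness": 9},
    {"travel": 0, "gardening": 3, "happiness": 6},
    {"travel": 2, "gardening": 2, "happiness": 7},
    {"travel": 0, "gardening": 0, "happiness": 5},
    {"travel": 4, "gardening": 1, "happiness": 8},
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

happiness = [d["happiness"] for d in days]
for activity in ("travel", "gardening"):
    r = pearson([d[activity] for d in days], happiness)
    print(f"{activity}: r = {r:+.2f}")
```

With this toy data, travel hours correlate strongly with the happiness score, which is the kind of "enjoyed travel more than expected" surprise the spreadsheet turned up. (Correlation, of course, says nothing about which way the arrow points.)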
Anyways, I am making this sound very serious, but it is really my idea of a fun toy. For instance, wouldn't it be cool if I could tell my system model that I added 3 American Cranberry shrubs to my garden this morning, and there would then be a 1.2% probability that, if I requested an animation of Future Me at 7 AM on 9/5/2025, I would see myself putting American Cranberry jelly on my toast for breakfast; but if I then failed a blood glucose test on 1/26/2020, and the system learned that I had stuck to my resolve to give up jelly for 3 months, the likelihood of the 9/5/2025 animation showing me eating any sort of jelly on toast would fall to 0.25%?
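A toy version of that kind of event-driven probability update can be written as a chain of likelihood adjustments to a baseline belief. The events, the `update` helper, and every ratio below are invented to mirror the jelly example, not a real predictive model:

```python
# Toy event-driven belief update: each observed life event scales the
# probability of a future animation scene by a hand-tuned likelihood
# ratio. All events and numbers are invented for illustration.

def update(prob, likelihood_ratio):
    """Scale a probability by a likelihood ratio, capped at 1.0."""
    return min(1.0, prob * likelihood_ratio)

# Baseline chance the 9/5/2025 animation shows jelly on toast.
p_jelly = 0.001

# Event: planted 3 American Cranberry shrubs -> home-made jelly
# becomes much more likely.
p_jelly = update(p_jelly, 12)  # 0.1% -> 1.2%

# Event: failed glucose test, then kept a 3-month no-jelly resolve
# -> the scene becomes much less likely.
p_jelly = update(p_jelly, 0.25 / 1.2)  # 1.2% -> 0.25%

print(f"P(jelly on toast, 9/5/2025) = {p_jelly:.2%}")
```

In a real system those ratios would come from learned statistics rather than hand-tuning, but the shape of the computation, a prior belief repeatedly rescaled by incoming evidence, is the same.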