Future of Artificial Intelligence
Re: Future of Artificial Intelligence
On the flip side, memory and reasoning may be extended and supported further into murky waters.
Re: Future of Artificial Intelligence
My school recently had a guest seminar on using AI during job hunts. I haven't had to job hunt since before AI took off, so I don't think I was fully aware of how pervasive it is now. These were the main lessons I took from the seminar:
While your resume should still be human-readable, it is safe to assume no human will read it. If the interviewer or anyone else does read it, they will use ChatGPT to get a summary. Resumes should be written by and for AI and applicant tracking systems (ATS). Aim for a 50-75% keyword match with the job description: anything below and you will be filtered out, anything above and you will be seen as too qualified/expensive (a rough sketch of the keyword-match idea follows this list). Give the model your resume, the job description, and the company's mission statement to generate multiple resume versions. Run your resumes through various ATS programs yourself to see how they score against the job description and company info.
Because referrals have the highest success rate (roughly 10%), they should be your first method of application. This makes LinkedIn imperative; in this market, absolutely no one can afford to opt out of it, whether for moral reasons, privacy reasons, or anything else. When networking via email and the internet, you can use ChatGPT to generate every question, answer, and insight you send. Give the model the person's LinkedIn profile, company profile, Facebook profile, etc., and it will generate responses specific to that person.
At this time, AI should be used for interview practice, but not during the interview itself. There are different models out there just for this purpose. They will grade your performance and progress over time, but you should still get feedback from one human (preferably from the company) before the real interview.
During the seminar, someone asked, roughly, "How do I out-AI the company I am applying to, which is trying to out-AI me back?" His answer was (1) experiment with the AI tools they use for recruitment, and (2) use AI as much as possible without letting it show in your in-person interactions. Not a thought should cross your brain without it crossing a model too.
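For the curious, here is a minimal sketch of the keyword-match heuristic from the first lesson. To be clear, this is not any real ATS's algorithm; the tokenizer, stopword list, file names, and the 50-75% band are all assumptions for illustration:

    import re

    STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "for", "with"}

    def keywords(text):
        # Lowercase, split on non-letters, drop stopwords and short tokens.
        tokens = re.split(r"[^a-z]+", text.lower())
        return {t for t in tokens if len(t) > 2 and t not in STOPWORDS}

    def keyword_match(resume, job_description):
        # Fraction of the job description's keywords found in the resume.
        jd = keywords(job_description)
        return len(jd & keywords(resume)) / len(jd) if jd else 0.0

    # Hypothetical file names; per the seminar, aim for the 0.50-0.75 band.
    score = keyword_match(open("resume.txt").read(), open("jd.txt").read())
    print(f"match: {score:.0%}", "ok" if 0.50 <= score <= 0.75 else "adjust")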
Not only the ability to reason, but also the ability to interact with other people digitally. The seminar included a demo where the speaker hunted for a job with these tools, and it had an odd aura of detachment from humans: an attitude of "just give the model their life story and yours and let it sort everything out."
Re: Future of Artificial Intelligence
Perhaps this is true for some businesses.
That said, just recently, simply by talking with people directly and forming social networks, I have begun getting job offers. The resume and interview come basically after they already know whom they want to hire. Nearly no computer required. In my local area, I don't see this changing much.
This is coming mostly from Millennials and Gen X folks, a number of whom use basic flip phones or dumb phones.
My weirdest job application came during my third stint working for UPS. They hired me based on how quickly and accurately I could click on things in a little computer game they had made. Applicants could also play this game on a smartphone rather than a computer.
For reference: my work preference is in the skilled trades and manual labor field, also teaching.
Re: Future of Artificial Intelligence
Speculative future governance model, "AI as the engine, humans as the steering wheel":
https://vitalikblog.w3eth.io/general/20 ... umans.html
Re: Future of Artificial Intelligence
I've been considering how to use LLMs to extend our operational capacity at my newest employer. I think we are alllllmost at the point where hallucinations are infrequent enough for LLM agents to be useful in production for narrow tasks (i.e., structured decision-making on unstructured data).
What this means is I can imagine operations teams in the (very near) future being tasked with "Teach OpsBot how to make a yes/no determination on X instance of Y issue" (e.g., is this post "harmful" content that should be downranked according to our site's policies?). Ops managers continue to research and update policies as they do today, but hand their work off to OpsBots instead of ICs. The winners are those who can (a) engage in the most productive Socratic dialogue to induce "intuition" in their model-based minions and (b) create a diverse and useful curriculum for their OpsBots to learn the job in whatever domain.
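To make that concrete, here is a minimal sketch of what "teach OpsBot a yes/no determination" might look like, assuming a few-shot prompt over some completion endpoint. call_llm is a hypothetical stand-in, not a real API, and the policy text and examples are placeholders:

    import json

    def call_llm(prompt):
        # Hypothetical stand-in for whatever LLM completion API you use.
        raise NotImplementedError

    def opsbot_decide(policy, examples, post):
        # The "curriculum": labeled examples that induce the policy intuition.
        shots = "\n".join(f"Post: {p}\nHarmful: {y}" for p, y in examples)
        prompt = (
            f"Policy:\n{policy}\n\n{shots}\n\nPost: {post}\n"
            'Reply as JSON: {"harmful": true|false, "confidence": 0-1, "reason": "..."}'
        )
        return json.loads(call_llm(prompt))

    # e.g.: if opsbot_decide(policy_text, labeled_examples, new_post)["harmful"]:
    #           downrank(new_post)   # downrank() is hypothetical too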
Likely you'll see OpsBots with subspecialties and then ManagerBots which ensemble their feedback and make a final determination based on their "team's" work. In the past, the escalation tree would normally be ML model -> ops IC -> ops manager. Now I can see the escalation being traditional ML model -> LLM agent -> ops manager, or perhaps ML model -> LLM agent -> ops IC -> ops manager in the short term. Inference cost is still high enough that the agent slots in as a replacement for a human in the escalation chain, rather than as the default first pass at high volume.
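And the escalation chain itself is just confidence-gated routing. A sketch, where the stage order and thresholds are assumptions rather than anything from a real ops stack:

    def escalate(item, stages, ops_manager):
        # Walk cheap -> expensive; stop at the first confident-enough stage.
        # Each stage maps item -> (decision, confidence in [0, 1]).
        for name, decide, threshold in stages:
            decision, confidence = decide(item)
            if confidence >= threshold:
                return name, decision
        # Nothing was confident enough: the human manager gets the final call.
        return "ops_manager", ops_manager(item)

    # Assumed wiring: stages = [("ml_model", ml_model, 0.95),
    #                           ("llm_agent", llm_agent, 0.90),
    #                           ("ops_ic", ops_ic, 0.80)]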
Why now? Longer context windows for multi-shot prompting and more agent/tooling integrations as the space matures.
I can imagine, in say 1-2 years, no longer hiring entry-level folks in our ops org. I've been a skeptic of the usefulness of these models, but given the improvements seen in the last 4-ish years... I think the rubber is almost hitting the road in terms of adoption. That said, I do still think we're a long way off from utilizing agents for complex, creative, and high-context work, at least based on what I've seen to date.
Re: Future of Artificial Intelligence
Hallucinations are probably best considered bugs for final/critical decisions, though they can also be used as features and selected toward better performance at complex, creative, high-context work (essentially increasing the thinking temperature). This is also part of what drives inference scaling, where the trick is to select the best inferences, ideally from a diverse set. The analogy in evolution is how mutations can compound into increasingly sophisticated adaptations.
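A minimal sketch of that selection trick (best-of-N with a diversity bonus, i.e., "mutate at high temperature, then select"). sample and score are hypothetical stand-ins for a completion call and a verifier/reward model, and the weights and counts are made up:

    def diversity(candidate, chosen):
        # Crude lexical novelty versus already-chosen candidates (assumed metric).
        words = set(candidate.split())
        overlap = [len(words & set(c.split())) / max(len(words), 1) for c in chosen]
        return 1.0 - max(overlap, default=0.0)

    def best_of_n(prompt, sample, score, n=16, temp=1.2, k=4):
        # High temperature = more "mutations"; selection keeps the useful ones.
        candidates = [sample(prompt, temp) for _ in range(n)]
        chosen = []
        for _ in range(k):  # greedily keep a diverse shortlist...
            best = max(candidates, key=lambda c: score(c) + 0.2 * diversity(c, chosen))
            chosen.append(best)
            candidates.remove(best)
        return max(chosen, key=score)  # ...then return its best member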
Re: Future of Artificial Intelligence
On students and AI: https://thewalrus.ca/i-used-to-teach-st ... pt-cheats/
Particularly this, which also has bearing on the tendency to focus on "the degree" rather than "the education":

The best way to get people to see why education matters is to educate them; you can't really grasp that value from the outside. There are always students who are willing but who feel an automatic resistance to any effort to help them learn. They need, to some degree, to be dragged along. But in the process of completing the assigned coursework, some of them start to feel it. They begin to grasp that thinking well, and in an informed manner, really is different from thinking poorly and from a position of ignorance.

That moment, when you start to understand the power of clear thinking, is crucial. The trouble with generative AI is that it short-circuits that process entirely. One begins to suspect that a great many students wanted this all along: to make it through college unaltered, unscathed. To be precisely the same person at graduation, and after, as they were on the first day they arrived on campus. As if the whole experience had never really happened at all.

In light of recent discussions, to what degree (no pun intended) does the ability to "think for yourself" remain valued? There's already been some real-world erosion, in that the ability to work in a team while letting others do the thinking is already of equal if not greater value.

This begs another question: are results---the resulting thought---all we care about, or does the thinking process that achieved that result also matter? I'm somewhat undecided, but I do cringe a bit when AI has obviously been used in an answer/reply. But what about when it's less obvious, or even creative?
Re: Future of Artificial Intelligence
I think the thought process still matters, at least in the medium term. AI can answer questions in a knowledge-work context much more effectively than it can help you navigate the real world (for now). How you arrive at a thought does still matter, and perhaps a lot more, if you spend more of your time outside of a digital landscape.
Another thought experiment is having AI make all of your decisions (as some journalists have done). Would society as a whole be better and more rational (assuming AI is at the orange value meme) if people offloaded more of their choices to AI?
Re: Future of Artificial Intelligence
I like the idea of thought being what weaves preferences together throughout a society. Thoughts are not really owned in the sense that feelings or values are (aligns with the model in my journal). The binding of thoughts and feelings is necessary for agency, decision making, or personhood. I don't really see this going away so much as morphing into new forms.
Re: Future of Artificial Intelligence
I don't know that AI has dramatically changed the calculus here. When I took an ethics class twenty years ago, answers were still being parroted. The course had a set of dilemmas (e.g., abortion, slavery, war, gun control) with fully explored argument spaces. Typically, you could look up whatever the prompt was and find the answer. At worst, you had to translate it to a like problem, similar to mapping a mathematical proof.
Attempting to create an original argument was kinda stupid. There's no way you were going to exceed the existing corpus of knowledge, especially not in an undergraduate level class.
And of course, there were biases toward certain types of argument, reflective of the underlying environment. Philosophy classes at a private liberal arts university, populated by 18-to-22-year-old academic high achievers who can afford to take philosophy, are relatively homogeneous.
So all AI really eliminates is the time spent on lookup and translation, during which one hopes some amount of learning took place and would be retained for the long term. IMO technology is exposing weaknesses present in the author's pedagogical model. Training kids to look something up and rephrase it was always stunted.
If what's being taught is an evolved form of thought, challenge the students to use it. Create experiential situations. Don't sit them alone with paper to ponder; it's a terrible way to achieve real learning, let alone to grow someone's ability to learn. There needs to be true risk, competition, consequences, rewards.
IMO criticisms of AI are often reactions to underlying BS being exposed. Maybe a bulleted list of insights was always enough. Maybe the class was never more than resume padding. Perhaps the individuals don't have capacity for a higher level of thought, or it simply isn't a useful strategy for their context.
The author's underlying premise, that his educational frame leads to a richer life, strikes me as a deep assumption. I don't think it's true for everyone, maybe not even for most.
Re: Future of Artificial Intelligence
Even back in the olden days when I was an undergraduate, it was known that, for instance, fraternity houses would store copies of exams and essays. It was just another example of the Matthew Effect: "to those who have, more shall be given..." At this point in the progression, access to AI as a resource is still fairly democratic. The problem is that the quality of an essay written, or a project completed, in collaboration with AI has not yet been well determined. The easy solution is to freely allow and recommend AI as one writing tool among others, but return to grading on a strict curve. This will at least force the students to critically read the essays they produce with AI and consider which version is best, or experiment with variations toward improvement.
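Tangentially, "grading on a strict curve" is mechanically simple; a sketch, with percentile cutoffs that are assumptions rather than any real grading policy:

    def curve_grades(scores):
        # Strict curve: grade by rank within the class, not by absolute score.
        cutoffs = [(0.90, "A"), (0.65, "B"), (0.35, "C"), (0.10, "D"), (0.0, "F")]
        ranked = sorted(scores, key=scores.get)  # worst to best
        grades = {}
        for i, student in enumerate(ranked):
            below = i / len(ranked)  # fraction of the class below this student
            grades[student] = next(g for cut, g in cutoffs if below >= cut)
        return grades

    # Even if AI compresses raw quality, ranks still separate those who
    # critically revised their drafts from those who pasted the first output.
    print(curve_grades({"ann": 88.0, "bo": 91.5, "cy": 79.0, "di": 85.0}))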
Instructors could also experiment with setting the bar higher by making assignments more towards open-ended projects. The core project could even address the author's (linked above) central concern. For example, the first week of class the students might choose from a selection of topics in Ethics and submit their first rough proposal for a project that will display their critical thinking or original research or unique experiment, etc. etc. related to the topic. This would likely have the effect of making teaching more interesting, because the final project artifacts would likely be quite varied and rarely limited to a simple essay. The instructor might have the challenge of grading a work of performance art on the ethics related to eating meat vs. a video game designed to demonstrate different concepts of justice vs. a collection of data visualizations related to consumer acceptance of recently patented euthanasia devices, etc. etc. IOW, instead of teaching towards expertise within their narrow field, they may have to adopt a more open framework more towards 1970s Gifted Child Program, because maybe AI lends itself to tearing down the walls between departments or even the walls between the University and The Street, The Field, The Stage, etc. etc. etc.
Re: Future of Artificial Intelligence
To me, this whole situation is a war won before the battle is fought. There is a corporate hegemony with unlimited financial resources that is going to overwhelm humanity with AI. An individual's position in the "value" discussion of education is mainly hereditary, so past "value" positions will ultimately recede into oblivion. And as we are experiencing, technological revolutions now take place more quickly, so I would argue this debate is an anachronism waiting to happen. Everyone hates Elon Musk until he offers you a high paying job to geek out all fucking day.

But more importantly, this is not a linear discussion. It's a 3D discussion. I went back to school in 2011-2014. The institution I attended did not allow for an online degree. I greatly enjoyed the classroom experience, but the travel and time became overwhelming. Today it's offered online. So the question is not whether a philosophy degree is important; it's ultimately whether you opt to have your robot teach you a course on Plato's Republic while it's loading your dishwasher. And let's face it, the value of this discussion, in this context, was once a source of debate, yet here we are not even considering the question. And in a few years, this very form of communication could very well be a thing of the past.
Re: Future of Artificial Intelligence
Henry wrote: Everyone hates Elon Musk until he offers you a high paying job to geek out all fucking day.

On a higher-minded day I don't "hate" Elon, because I comprehend with compassion that he is on the autism spectrum and was raised in a morally primitive environment, but I still wouldn't accept a high paying job from him. To the extent that I enjoy geeking out, I can do it on my own dime.
Re: Future of Artificial Intelligence
Do you hold everyone you ever accepted money from, and every product you ever owned, to that standard? Did you ever own a Japanese-made electronic device? Or an article of clothing made in China? Do you know whether any of the purchasers of the books you sold held viewpoints you would find abhorrent? Henry Ford was a virulent anti-Semite. Doesn't seem to hurt the sales of F-150s. So it will be for TSLA.
Re: Future of Artificial Intelligence
jacob wrote: ↑Thu Mar 06, 2025 5:43 pm
In light of recent discussions, to what degree (no pun intended) does the ability to "think for yourself" remain valued? There's already been some real-world erosion, in that the ability to work in a team while letting others do the thinking is already of equal if not greater value.
This begs another question: are results---the resulting thought---all we care about, or does the thinking process that achieved that result also matter? I'm somewhat undecided, but I do cringe a bit when AI has obviously been used in an answer/reply. But what about when it's less obvious, or even creative?

It is obvious to me that the thinking process that achieved the result matters most. The whole Pythagorean and Euclidean world (geometry) was for me a discovery and a joy to learn (at 14/15 years old), as it was a kind of complete (axiomatic) knowledge which made me understand 360 degrees, surface and cubic measurement, etc., which I have used all my life.
To place it in context: in 1953 I entered the first class of elementary school. I learned writing on a real slate with a stylus (made of another stone) and had a little sponge to clean the slate. Last week I sat beside a girl of about 18 on an airplane. She was writing with a stylus on an iPad, doing math sums. I asked her: can the iPad read your handwritten sums? The answer was no, but she could send her written sums to her teacher. That's all that has changed...
Re: Future of Artificial Intelligence
Henry wrote: Do you hold everyone you ever accepted money from and every product you ever owned to that standard?

No, of course not, but that is different from committing to full-time contract work in a position subservient to somebody who demonstrates behavior in conflict with my values. For example, I did in the past quit a corporate management job on an occasion when I was disgusted by a secretive plan to lay off long-time employees. My disgust was primarily generated by how gleeful with their petty power some of the other managers seemed to be about being privy to the big secret. I also do not suffer from suck-up syndrome or the sort of second-grader loyal-soldier morality that would inhibit me from informing those who were going to be laid off about the "big secret." I absolutely would not hesitate to whistle-blow in any situation in which that form of behavior was warranted. Humans who grovel to those of higher rank and are shitty to those of lower rank are repulsive to me. Although, I do also empathize that it is difficult at times to be fully self-aware about this variety of behavior, especially in complex situations. It can be very difficult to sort out the line over which you have crossed from "good" to "good German," whatever your mish-mash, from ideologies to druthers.
OTOH, I also make an effort to associate, at the level of peer rather than "boss man" or "guru," with others with whom I strongly disagree on politics or religion or similar matters. And I also think it is best practice to extend the benefit of the doubt when lacking knowledge. For example, I sold rare books on a variety of topics I personally found distasteful, because, for instance, somebody might be using a book on how to hide a dead body for the purpose of writing a mystery novel. But that, IMO, is far different from eyes-wide-open hard-con pushing of swampland on commission, Glengarry Glen Ross style.
Also, the primary means by which I currently earn income is teaching disadvantaged children, and I admit that this likely pushes me a bit more towards a Goody-Two-Shoes communitarian perspective than my former career as a self-employed rare book dealer (a teeny-tiny capitalistic entrepreneurial enterprise). The Venn-diagram overlap of these professions also firmly places me in opposition to giant-bloated-monopolistic-crony-predator-style Corporate Capitalism. So, MMV considerably.
Re: Future of Artificial Intelligence
My use of "everyone" was the use of an extreme/absolute term to drive home a general point. There are always hold outs. For whatever reason. But in general "wow cool car" "Yeah and you should see my insurance savings " "Do you cut in line to get to Mars" wins the day in a consumer society. And that's how it will be because that's how it has always been. Every metal band I grew up on, their biggest hit wasn't "Axtheon Speaks Zarusthra Thrusting in Your Rotating Butthole" (despite how good a song it was) but some saccharine concoction of eternal devotion that everyone slow danced to on prom night. Products brings people together. Mark. Steve. Elon. Weiner. Asshole. Foreign invader. Distinctions without differences as all are multi-billionaires. It is only getting worse.
Re: Future of Artificial Intelligence
Yeah, I grok you Henry, but maybe there is something to be learned from the ruins that remain of the culture referenced with "When in Rome..." "Bread and Circuses" and "Render unto Caesar..."? Chainsaw is to violin as ...
I might change my moniker from 7Wannabe5 to QuoVadis.
Re: Future of Artificial Intelligence
7Wannabe5 wrote: ↑Sat Mar 08, 2025 1:05 pm
Yeah, I grok you Henry, but maybe there is something to be learned from the ruins that remain of the culture referenced with "When in Rome..." "Bread and Circuses" and "Render unto Caesar..."? Chainsaw is to violin as ...
I might change my moniker from 7Wannabe5 to QuoVadis.

Absolutely there is something to learn. But history doesn't repeat, it rhymes; but it doesn't rhyme until it repeats; but it won't repeat before we're dead, so we won't live to see the rhyme, so just suck it up and be fucking virtuous about it.
Henry Aurelius
Re: Future of Artificial Intelligence
I would genuinely love it if you wrote a book, 7. I'm thinking along the lines of Thomas More's Utopia, but set in a post-post-apocalyptic Upper Peninsula, where society thrives with a theory-of-everything understanding of human nature (maybe people wear rings according to their Spiral Dynamics colour?).
There's also a book you might be interested in called 'Against the Sexual Revolution' by Louise Perry. I've not read it, but I saw an interview with her, some of which I agreed with and some of which I disagreed with.