GPT-3 (OpenAI's newest ginormous pre-trained transformer)
I've been looking at some GPT-3 demo videos and while there's a lot of promise imo (I'm actually pretty confident I'd call what it is doing some level of "reasoning") I feel like I'm mostly seeing low-creativity applications for web designers/coders or chatbots.
What do you think about GPT-3? What would you do if you got an API key?
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
bostonimproper wrote: ↑Sat Aug 15, 2020 3:22 pm
What do you think about GPT-3? What would you do if you got an API key?

I'm pretty sure you can actually already play around with it (or a very recent version) here:
https://play.aidungeon.io/
Click 'Play'
Click 'NEW SINGLEPLAYER GAME'
Choose 6)
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Ask it to fill out my application for UBI.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
@onewayfamily FYI, AI Dungeon primarily uses GPT-2 and limits GPT-3 access within the game: https://twitter.com/nickwalton00/status ... 1478936577
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
@bostonimproper yeah thanks for pointing that out.
Although further down that thread he says:
"If you do a custom prompt then start a game it will add onto it before you even do an action. That first addition is what I mean."
...to which someone replies:
"Ah, but afterwards does the custom prompt still use gpt-3?"
...and Nick says:
"Yep. "
So now I'm slightly more confused than I was, but it seems like only the first prompt is GPT-2, and from then on it's GPT-3?
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
https://www.theguardian.com/commentisfr ... icle-gpt-3
Holy shitsnacks! Should I not be impressed? Even if it's "just transformative" of existing text, it passes the Turing test with this writer. One can only imagine the level of informational warfare this allows. Comparatively speaking, having silly humans misinform each other by sharing memes on Facebook is practically stone-age tech.
Add: It appears that a human editor improved the output somewhat. Still ...
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Yeah, I think people are really underestimating the impact here.
Note also it "only" costs a couple million dollars to train the transformer. Expensive for your average startup (which will probably license directly with OpenAI) but cheap af for a government looking to launch a misinformation campaign.
This is the thread that convinced me it is game over for humans: https://twitter.com/danielbigham/status ... 3114248194
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
I was convinced when Watson correctly identified a picture of bearberry I fed it.
TheWanderingScholar
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Shit, is my generation (Zillennials) going to feel the full effects of AI workforce displacement? Because that Guardian article was actually well written, and took less time to edit than a human's according to the editor. Considering most of the writing process is editing and rewrites, that's kind of terrifying.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Fears of "super AI" are generally overblown. Reporters and laymen don't really understand how AI works, and so this all seems more impressive than it is.
AI, at its root, is basically a fancy version of search. Classic AI was formulated as a problem over a search space: the set of all possible solutions, right or wrong, to a problem. AI works by traversing the search space and evaluating candidate solutions against a function that represents how well they solve the problem.
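A toy sketch of that search framing, using a made-up four-city round-trip problem (the city names and distances are invented for illustration):

```python
from itertools import permutations

# The search space is every possible tour; an evaluation function
# scores each candidate solution.
dist = {("A", "B"): 1, ("A", "C"): 4, ("A", "D"): 3,
        ("B", "C"): 2, ("B", "D"): 5, ("C", "D"): 1}

def tour_length(tour):
    """Evaluation function: total length of the round trip."""
    legs = zip(tour, tour[1:] + tour[:1])
    return sum(dist.get((a, b)) or dist[(b, a)] for a, b in legs)

# Classic AI: exhaustively traverse the search space (all permutations)
# and keep the best-scoring candidate.
best = min(permutations(("A", "B", "C", "D")), key=tour_length)
print(best, tour_length(best))  # -> ('A', 'B', 'C', 'D') 7
```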
The critical thing to understand about this is it's insanely slow. The solution space for complex problems often grows factorially or worse. Furthermore, not all problems can be formulated as Turing computable functions. Now obviously, AI has done some impressive things, but it's not magical and it has its limits. The No Free Lunch in Search and Optimization theorem explains this.
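The growth rate is easy to check for yourself: with n! candidate orderings of n items, exhaustive search becomes hopeless long before the problem looks "big".

```python
import math

# Number of candidate orderings for n items.
for n in (5, 10, 20):
    print(n, math.factorial(n))
# 5 120
# 10 3628800
# 20 2432902008176640000
```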
It's the same thing with machine learning. Basically, AI is an analytical way to solve a problem. As computer scientists ran into more complex problems, these became too complicated to compute directly. So they invented machine learning, which is just statistics applied to computer science. And like all statistics, machine learning is constrained by the quality of its data. It is true that they've done some impressive things with it, but again, it's no more advanced than a glorified statistical model.
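In that spirit, a minimal "machine learning" model is just parameter estimation; here is ordinary least squares on some made-up noisy data:

```python
# Fit y = a*x + b by ordinary least squares: no intelligence involved,
# just statistics, and only as good as the data going in.
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]  # roughly y = 2x + 1, plus noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope: covariance of x and y over variance of x.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(f"y = {a:.2f}x + {b:.2f}")  # -> y = 1.96x + 1.10
```

Garbage data in, garbage parameters out; the model has no way to notice.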
Also consider inherent limits to knowledge, such as Gödel's incompleteness theorems, which state that it's not algorithmically possible to prove all truths of a system. It's also not entirely clear that a system is capable of creating something more intelligent than itself. True, evolution did that, but it took billions of years. Computers are fast, but they're not that fast.
Now this isn't to say these technologies won't be disruptive, but they are disruptive for reasons of how a select group of humans are using them at their own advantage and to the disadvantage of everyone else. These algorithms are not intelligent in themselves, but they can be abused by the powers that be. It's a Fetishization of the Commodity problem. It's easy to attribute to AI itself what in fact certain humans are doing to other humans.
I would also argue that passing the Turing test is less that computers have become smart and more that people behind screens have become less intelligent vis-a-vis social media.
Thank you all for coming to my TED talk.
AI, at its root, is basically a fancy version of search. Classic AI was formulated as a problem with a search space. The search space is basically the set of all possible solutions, right or wrong, to a problem. AI works by traversing the search space and evaluating the possible solution against a function that represents how to solve the problem.
The critical thing to understand about this is it's insanely slow. The solution space for complex problems often grows factorially or worse. Furthermore, not all problems can be formulated as Turing computable functions. Now obviously, AI has done some impressive things, but it's not magical and it has its limits. The No Free Lunch in Search and Optimization theorem explains this.
It's the same thing with machine learning. Basically, AI is an analytical way to solve a problem. As computer scientists ran into more complex problems, they became too complicated to compute directly. So they invented machine learning, which is just statistics applied to computer science. And like all statistics, machine learning is constrained by quality in data. It is true that they've done some impressive things with it, but again, it's no more advanced than a glorified statistical model.
Also consider inherent limits to knowledge, such as Godel's incompleteness theorem that state it's not algorithmicly possible to prove all truths of a system. It's also not entirely clear that a system is capable of creating something more intelligent than itself. True, evolution did that, but it took billions of years. Computers are fast but they're not that fast.
Now this isn't to say these technologies won't be disruptive, but they are disruptive for reasons of how a select group of humans are using them at their own advantage and to the disadvantage of everyone else. These algorithms are not intelligent in themselves, but they can be abused by the powers that be. It's a Fetishization of the Commodity problem. It's easy to attribute to AI itself what in fact certain humans are doing to other humans.
I would also argue that passing the Turing test is less that computers have become smart and more that people behind screens have become less intelligent vis-a-vis social media.
Thank you all for coming to my TED talk.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
The faulty assumption being that super AI must work like current AI.
Computers are pretty darn fast and can communicate super fast relative to us, so even a few mediocre "general" AI's working in concert against the clear target of humanity could be our downfall. Just look at how disruptive terrorism can be.
..cough.. westworld..
Regarding the incompleteness theorems, these show a limit to knowledge representation (Ti) within a single system but not necessarily to knowledge itself (Si and Ni). This also ignores the possibility of complementary representative/formal systems.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
This short 7-minute video helped me understand the limitations of GPT-3: https://www.youtube.com/watch?v=ZNeNMTSMA5Y
The channel also has a reading group where they go through and discuss related research papers.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
TheWanderingScholar wrote: ↑Wed Sep 09, 2020 1:17 pm
Shit, is my generation (Zillennials) going to feel the full effects of AI workforce displacement?

It's already happening; automation has been displacing jobs for a long time now. At the start of the last century the UK had a million people working in coal mining. About 60% of the US population worked in agriculture in the 1850s; now it's about 1.5%.
At this point doubters would argue that despite these changes we have been fine. I'm somewhat doubtful about that and think society has become increasingly dysfunctional in an attempt to keep everybody gainfully employed. The fact that we have just shut down large parts of our economy without complete collapse shows that much of what we do isn't really that important.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Maybe mankind's only hope is to be more efficient than machines at nuclear fuel extraction in a post-oil world, so that the AI keeps us as slaves.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
AnalyticalEngine wrote: ↑Tue Sep 29, 2020 5:23 pm
I would also argue that passing the Turing test is less that computers have become smart and more that people behind screens have become less intelligent vis-a-vis social media.

One take-away from fiby41's video is that GPT-3 is not grounded [in reality] and that it doesn't have/seek structure (a sign, or perhaps the very definition, of intelligence). However, I think the same can be said of much of social media and, by extension, of many humans who just regurgitate soundbites from other social media posts. That's another way of saying that many humans act without intelligence and without any reference to reality in certain (closed) environments.
Facebook would clearly be an environment where GPT-3 would meet the bar. The question, therefore, is in which types of environment GPT-3 would pass the Turing test against a human. Those are the environments where competition is already happening. For example, I don't see anything preventing a bunch of GPT-3 bots from completely taking over 95% of online political trolling^H^H^H^Hdiscourse and drowning out any human input.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
I've been making jokes for AIs in comment sections for years, in hope that they would at least keep me as a joker.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
@Jean I once knew a basilisk named Roko that wagered some pascals on the future.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
A supreme intelligence would know that a threat in the distant future is quite an inefficient way to motivate human beings.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
jacob wrote: ↑Wed Sep 30, 2020 11:11 am
Facebook would clearly be an environment where GPT-3 would meet the bar. The question, therefore, is in which types of environment GPT-3 would pass the Turing test against a human. Those are the environments where competition is already happening. For example, I don't see anything preventing a bunch of GPT-3 bots from completely taking over 95% of online political trolling^H^H^H^Hdiscourse and drowning out any human input.

Exactly. I think that rather than moved goalposts by miffed classic-AI researchers, we are dealing with additional goalposts. The academic goal of 'passing the Turing test' in a complex environment is less interesting than the more impactful 'passing the Turing Troll Test' to rile up blue/red teams on Twitter/FB/ERE forums*.
To see another nice example, check out play.aidungeon.io. Move over Zork! We've come a long way since ELIZA.
* It wouldn't be that hard to refit GPT3 for generating 'jacob'-like posts, 'jennypenny'-like posts etc, given enough example posts.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Interesting post on GPT-3 posing as human on reddit for a week: https://www.kmeme.com/2020/10/gpt-3-bot ... r.html?m=1