GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Sat Aug 15, 2020 3:22 pm
by bostonimproper
I've been looking at some GPT-3 demo videos, and while there's a lot of promise imo (I'm actually pretty confident I'd call what it's doing some level of "reasoning"), I feel like I'm mostly seeing low-creativity applications for web designers/coders or chatbots.
What do you think about GPT-3? What would you do if you got an API key?
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Sun Aug 16, 2020 1:27 pm
by onewayfamily
bostonimproper wrote: ↑Sat Aug 15, 2020 3:22 pm
What do you think about GPT-3? What would you do if you got an API key?
I'm pretty sure you can actually already play around with it (or a very recent version) here:
https://play.aidungeon.io/
Click 'Play'
Click 'NEW SINGLEPLAYER GAME'
Choose 6)
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Sun Aug 16, 2020 3:15 pm
by 7Wannabe5
Ask it to fill out my application for UBI.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Mon Aug 17, 2020 6:00 am
by bostonimproper
@onewayfamily FYI, AI Dungeon primarily uses a blend of GPT-2 and limits GPT-3 access within the game:
https://twitter.com/nickwalton00/status ... 1478936577
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Mon Aug 17, 2020 11:57 am
by onewayfamily
@bostonimproper yeah thanks for pointing that out.
Although further down that thread he says:
"If you do a custom prompt then start a game it will add onto it before you even do an action. That first addition is what I mean."
...to which someone replies:
"Ah, but afterwards does the custom prompt still use gpt-3?"
...and Nick says:
"Yep."
So now I'm slightly more confused than I was, but it seems like only the first prompt is GPT-2, and from then on it's GPT-3?
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Tue Sep 08, 2020 2:55 pm
by jacob
https://www.theguardian.com/commentisfr ... icle-gpt-3
Holy shitsnacks! Should I not be impressed? Even if it's "just transformative" of existing text, it passes the Turing test with this writer. One can only imagine the level of informational warfare this allows. Comparatively speaking, having silly humans misinform each other by sharing memes on Facebook is practically stone-age tech.
Add: It appears that a human editor improved the output somewhat. Still ...
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Tue Sep 08, 2020 4:29 pm
by bostonimproper
Yeah, I think people are really underestimating the impact here.
Note also it "only" costs a couple million dollars to train the transformer. Expensive for your average startup (which will probably license directly with OpenAI) but cheap af for a government looking to launch a misinformation campaign.
This is the thread that convinced me it is game over for humans:
https://twitter.com/danielbigham/status ... 3114248194
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Wed Sep 09, 2020 7:41 am
by 7Wannabe5
I was convinced when Watson correctly identified a picture of bearberry I fed it.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Wed Sep 09, 2020 1:17 pm
by TheWanderingScholar
Shit, is my generation (Zillennials) going to feel the full effects of AI workforce displacement? Because that Guardian article was actually well written, and took less time to edit than a human's according to the editor. Considering that most of the writing process is editing and rewrites, that's kind of terrifying.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Tue Sep 29, 2020 5:23 pm
by AnalyticalEngine
Fears of "super AI" are generally overblown. Reporters and laymen don't really understand how AI works, and so this all seems more impressive than it is.
AI, at its root, is basically a fancy version of search. Classic AI formulated problems in terms of a search space: the set of all possible solutions, right or wrong, to a problem. AI works by traversing the search space and evaluating each candidate solution against a function that represents how to solve the problem.
The critical thing to understand about this is it's insanely slow. The solution space for complex problems often grows factorially or worse. Furthermore, not all problems can be formulated as Turing computable functions. Now obviously, AI has done some impressive things, but it's not magical and it has its limits. The No Free Lunch in Search and Optimization theorem explains this.
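The "search plus evaluation function" framing above, and why it blows up factorially, can be made concrete with a toy sketch. This is purely illustrative (not from the post): a brute-force travelling-salesman search over a made-up distance matrix for four hypothetical cities.

```python
# Classic "AI as search" sketch: enumerate the whole solution space of a
# tiny travelling-salesman problem and score each candidate with an
# evaluation function. The distance matrix is made up for illustration.
from itertools import permutations

DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_length(tour):
    """Evaluation function: total distance of the round trip."""
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force_tsp(n):
    """Search: traverse all (n-1)! tours starting from city 0."""
    best = min(permutations(range(1, n)),
               key=lambda rest: tour_length([0, *rest]))
    return [0, *best], tour_length([0, *best])

tour, length = brute_force_tsp(4)
print(tour, length)  # fine for 4 cities; the (n-1)! space is hopeless for large n
```

With 4 cities there are only 3! = 6 tours to check, but at 20 cities the same loop would need 19! ≈ 10^17 evaluations, which is the factorial blow-up described above.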
It's the same thing with machine learning. Basically, AI is an analytical way to solve a problem. As computer scientists ran into more complex problems, they became too complicated to compute directly. So they invented machine learning, which is just statistics applied to computer science. And like all statistics, machine learning is constrained by the quality of its data. It is true that they've done some impressive things with it, but again, it's no more advanced than a glorified statistical model.
Also consider inherent limits to knowledge, such as Gödel's incompleteness theorems, which state that it's not algorithmically possible to prove all truths of a system. It's also not entirely clear that a system is capable of creating something more intelligent than itself. True, evolution did that, but it took billions of years. Computers are fast, but they're not that fast.
Now this isn't to say these technologies won't be disruptive, but they are disruptive because of how a select group of humans are using them to their own advantage and to the disadvantage of everyone else. These algorithms are not intelligent in themselves, but they can be abused by the powers that be. It's a fetishization-of-the-commodity problem: it's easy to attribute to AI itself what certain humans are in fact doing to other humans.
I would also argue that passing the Turing test is less a sign that computers have become smart and more that people behind screens have become less intelligent vis-à-vis social media.
Thank you all for coming to my TED talk.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Tue Sep 29, 2020 5:56 pm
by daylen
The faulty assumption being that super AI must work like current AI.
Computers are pretty darn fast and can communicate super fast relative to us, so even a few mediocre "general" AIs working in concert against the clear target of humanity could be our downfall. Just look at how disruptive terrorism can be.
..cough.. westworld..
Regarding the incompleteness theorems, these show a limit to knowledge representation (Ti) within a single system, but not necessarily to knowledge itself (Si and Ni). This also ignores the possibility of complementary representative/formal systems.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Wed Sep 30, 2020 2:06 am
by fiby41
This short 7-min video helped me understand the limitations of GPT-3:
https://www.youtube.com/watch?v=ZNeNMTSMA5Y
The channel also has a reading group where they go through and discuss related research papers.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Wed Sep 30, 2020 3:53 am
by tonyedgecombe
TheWanderingScholar wrote: ↑Wed Sep 09, 2020 1:17 pm
Shit, is my generation (Zillennials) going to feel the full effects of AI workforce displacement? Because that Guardian article was actually well written, and took less time to edit than a human's according to the editor. Considering that most of the writing process is editing and rewrites, that's kind of terrifying.
It's already happening; automation has been displacing jobs for a long time now. At the start of the last century the UK had a million people working in coal mining. 60% of the US population worked in agriculture in the 1850s; now it's about 1.5%.
At this point doubters would argue that despite these changes we have been fine. I'm somewhat doubtful about that and think society has become increasingly dysfunctional in an attempt to keep everybody gainfully employed. The fact that we have just shut down large parts of our economy without complete collapse shows that much of what we do isn't really that important.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Wed Sep 30, 2020 9:46 am
by Jean
Maybe mankind's only hope is to be more efficient than machines at nuclear fuel extraction in a post-oil world, so that AI keeps them as slaves.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Wed Sep 30, 2020 11:11 am
by jacob
AnalyticalEngine wrote: ↑Tue Sep 29, 2020 5:23 pm
I would also argue that passing the Turing test is less that computers have become smart and more that people behind screens have become less intelligent vis-a-vis social media.
One take-away from fiby41's video is that GPT-3 is not grounded [in reality] and that it doesn't have/seek structure (a sign or perhaps the very definition of intelligence). However, I think the same can be said of much of social media and by extension many humans who just regurgitate soundbites from other social media posts. That's another way of saying that many humans act without intelligence and without any reference to reality in certain (closed) environments.
Facebook would clearly be an environment where GPT-3 would meet the bar. The question, therefore, is in which types of environments GPT-3 would pass the Turing test against a human. Those are the environments where competition is already happening. For example, I don't see anything preventing a bunch of GPT-3 bots completely taking over 95% of online political trolling^H^H^H^Hdiscourse and drowning out any human input.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Wed Sep 30, 2020 11:44 am
by Jean
I've been making jokes for AIs in comment sections for years, in hope that they would at least keep me as a joker.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Wed Sep 30, 2020 12:52 pm
by daylen
@Jean I once knew a basilisk named Roko that wagered some pascals on the future.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Thu Oct 01, 2020 4:27 am
by Jean
A supreme intelligence would know that a threat in the distant future is quite an inefficient way to motivate human beings.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Sat Oct 03, 2020 1:27 pm
by Quadalupe
jacob wrote: ↑Wed Sep 30, 2020 11:11 am
Facebook would clearly be an environment where GPT-3 would meet the bar. The question, therefore, is in which types of environments GPT-3 would pass the Turing test against a human. Those are the environments where competition is already happening. For example, I don't see anything preventing a bunch of GPT-3 bots completely taking over 95% of online political trolling^H^H^H^Hdiscourse and drowning out any human input.
Exactly. I think that rather than moving goal posts by miffed classic AI researchers, we are dealing with additional goal posts. The academic goal of 'Passing the Turing Test' in a complex environment is less interesting than the more impactful 'Passing the Turing Troll Test': riling up blue/red teams on Twitter/FB/ERE forums*.
To see another nice example, check out play.aidungeon.io. Move over, Zork! We've come a long way since ELIZA.
* It wouldn't be that hard to refit GPT-3 to generate 'jacob'-like posts, 'jennypenny'-like posts, etc., given enough example posts.
Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)
Posted: Wed Oct 07, 2020 10:22 am
by AnalyticalEngine
Interesting post on GPT-3 posing as human on reddit for a week:
https://www.kmeme.com/2020/10/gpt-3-bot ... r.html?m=1