GPT-3 (OpenAI's newest ginormous pre-trained transformer)

Move along, nothing to see here!
jacob
Site Admin
Posts: 15969
Joined: Fri Jun 28, 2013 8:38 pm
Location: USA, Zone 5b, Koppen Dfa, Elev. 620ft, Walkscore 77

Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)

Post by jacob »

the article wrote: They'd take everything we wrote as a seed and produce a nearly endless forest of new content. If even 0.01% of that is useful, that's a Wikipedia's worth of good ideas. Then what is our job? To sort through it? Except of course someday they will do that for us also.
It's my understanding that at least some mathematical research already works like this. "AIs" have been able to come up with proofs for theorems for quite a while, so I asked a prof a couple of years ago why these machines haven't mined out the mathematical universe yet. Apparently they are being used, but mainly to prospect, and human mathematicians play the role of sorting through the output to look for the usual goals of elegance, etc. To wit, human mathematicians aren't happy with a valid 100-page proof if there's an elegant way to do it in, say, 1 page. In short, theorems were seen not so much as an "end" (like they are to this physicist) but as a "means" to improve human thinking and/or look for mathematical "beauty".
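
To make the elegance point concrete, here is a minimal sketch in Lean 4 (my own toy example, not from the discussion; the theorem names are made up). Both proofs establish the same statement, but the first is the step-by-step kind a search procedure tends to emit, while the second is the one-liner a human prefers:

```lean
-- Hypothetical illustration: the same commutativity fact proved twice.
-- A "machine-style" proof, spelled out by induction on a:
theorem verbose_comm (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp                                    -- 0 + b = b + 0
  | succ n ih => simp [Nat.succ_add, Nat.add_succ, ih]

-- The "elegant" human version: appeal to the known lemma directly.
theorem elegant_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b
```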

AnalyticalEngine
Posts: 956
Joined: Sun Sep 02, 2018 11:57 am

Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)

Post by AnalyticalEngine »

It is true AI can be used to generate mathematical proofs, but honestly the single biggest limit on AI is computational complexity. AIs tend to solve math using syntax trees or other tree structures, whose search spaces grow exponentially, factorially, or worse. (See genetic programming for just one example: https://en.wikipedia.org/wiki/Genetic_p ... esentation)

So it's not even so much a 100-page proof as it is a 100-page starting point with an exponential growth factor to solve. Obviously some stuff can be solved this way, but even something with O(2^n) (let alone O(n!)) can produce solution spaces bigger than the number of particles in the known universe fairly quickly.
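
A quick back-of-the-envelope sketch (my own illustration, in Python; the 10^80 particle count is the usual rough estimate) of how fast those search spaces outgrow anything physical:

```python
from math import factorial

PARTICLES_IN_UNIVERSE = 10 ** 80  # common rough estimate, ~81 digits

for n in (50, 100, 200):
    exp_space = 2 ** n           # O(2^n) search space
    fact_space = factorial(n)    # O(n!) search space
    print(f"n={n:3d}: 2^n has {len(str(exp_space)):4d} digits, "
          f"n! has {len(str(fact_space)):4d} digits; "
          f"n! exceeds particle count? {fact_space > PARTICLES_IN_UNIVERSE}")
```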

Hence why they can be used to prospect: if you train/release them on a smaller subproblem of a bigger set, it cuts out all the wasted search space and time.

Bloat is honestly one of the biggest limitations in AI. Interesting to note too that a similar problem occurred in evolution, with most DNA being junk. I do think an inflated verbiage-to-insight ratio is one of the biggest hurdles that AI developers have to jump, and it definitely inflates what we end up with as a solution.
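
For a feel of the tree representation (the kind the Wikipedia link above describes) and why bloat bites, here's a toy sketch in Python, entirely my own illustration: a full binary expression tree of depth d already holds 2^(d+1) - 1 nodes before any recombination makes it worse.

```python
import random

OPS = ("+", "-", "*")

def random_tree(depth):
    """Grow a random arithmetic expression tree over the variable x."""
    if depth == 0:
        return random.choice(("x", str(random.randint(0, 9))))
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def size(tree):
    """Count nodes; a full binary tree of depth d has 2^(d+1) - 1 nodes."""
    if isinstance(tree, str):
        return 1
    _, left, right = tree
    return 1 + size(left) + size(right)

for d in range(0, 13, 3):
    print(f"depth {d:2d}: {size(random_tree(d)):5d} nodes")
```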

These techniques are also being used to create algorithms for drone paths that allow drones to navigate certain terrain autonomously: https://journalofbigdata.springeropen.c ... 018-0134-7

Jean
Posts: 1897
Joined: Fri Dec 13, 2013 8:49 am
Location: Switzerland

Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)

Post by Jean »

Actually, junk DNA is mostly old genes in a neutralized form, which allows them to reappear from time to time. That can prove useful if the gene was only temporarily handicapping (the handicap that led to the selection of a deactivated form in the first place).

daylen
Posts: 2535
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)

Post by daylen »

AnalyticalEngine wrote:
Wed Oct 07, 2020 11:42 am
It is true AI can be used to generate mathematical proofs, but honestly the single biggest limit on AI is computational complexity.
It depends. Many problems do not actually require a complex implementation but rather are limited by the sheer amount of data that needs to be organized, scrubbed, and merged. Computational complexity is no doubt important if you want to train a neural net, but the most transformative innovations in AI are going to require the capacity for fuzzier pattern recognition when working with unstructured data. In other words, an AI that partially thinks like a human will significantly open up possible applications as well as potential catastrophes.

The complexity of many problems can be reduced significantly at the expense of being answered with less precision.
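
As a concrete (hypothetical, Python) illustration of that tradeoff: estimate a quantity by random sampling instead of exact evaluation, and the cost scales with the sample count, so precision becomes whatever you're willing to pay for.

```python
import random

def estimate_pi(samples):
    """Monte Carlo estimate of pi: fraction of random points landing in
    the unit quarter-circle, traded off against sample count (compute)."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(samples))
    return 4 * hits / samples

for n in (10 ** 2, 10 ** 4, 10 ** 6):
    print(f"{n:>9,d} samples -> pi ~ {estimate_pi(n):.4f}")
```
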
Last edited by daylen on Wed Oct 07, 2020 8:06 pm, edited 2 times in total.

AnalyticalEngine
Posts: 956
Joined: Sun Sep 02, 2018 11:57 am

Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)

Post by AnalyticalEngine »

daylen wrote:
Wed Oct 07, 2020 4:10 pm
In other words, an AI that partially thinks like a human will significantly open up possible applications as well as potential catastrophes.
I think this would be one of the most significant developments in AI that could happen. The problem is that AI that truly thinks like humans may not be possible. Human brains are made up of cells and chemicals, and an entire human body is involved in how they work. AIs are fundamentally Turing machines. It may be that the way human cognition works is fundamentally not something that can be computed on a Turing machine. It could very well be that the chemical reactions in the human brain are absolutely critical to the process and can't really be replaced with a Turing-machine equivalent.

Of course, maybe I'm wrong too, but until we really understand the true nature of human cognition, we can't really answer this question.

AI doesn't really have to be smarter than humans to cause disruptions. I'd argue that social media has already taken us there. ;) But the critical difference is that it's humans disrupting other humans with an incredibly complex tool, more so than anything special about a bunch of bits executing on a Turing machine.

daylen
Posts: 2535
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)

Post by daylen »

AnalyticalEngine wrote:
Wed Oct 07, 2020 4:20 pm
Of course, maybe I'm wrong too, but until we really understand the true nature of human cognition, we can't really answer this question.
I'm working on it. :)

I used to be in a more skeptical position on this issue but lean more towards substrate independence these days. Perhaps a general AI can be done on a Turing machine, though I would bet more on AGI requiring a different architecture entirely. A major predicament I see is that humans want to define what an AI is too narrowly and set too many constraints on what it can become. This is for good reason, as they do not want it to fuck up, but I also think this approach has very little chance of ever producing more human-like machines. Humans scarcely know who they are or will become.
Last edited by daylen on Thu Oct 08, 2020 10:20 am, edited 4 times in total.

daylen
Posts: 2535
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)

Post by daylen »

The easiest way I see to construct a general AI is to base it on humans as much as possible. Some people like Bostrom think the first AGIs will not be in human form, but I would not be surprised if they were, considering all the humans around that could be used to train it in communication(*). We have no examples of intelligent agents in nature that do not have a community to communicate within (aside from God, if that is your thing). You could say this is only because we partially define intelligence by the ability to communicate (i.e. abstraction), and we do.

AGI needs an assortment of senses that modulate input, a series of developmental periods, parental nourishment, cultural wetware to learn a language with, a hierarchical nervous system with neurons that are simultaneously gates and containers, a few pre-configured neural nets to control involuntary and semi-voluntary movements, an innate grammar, etc. etc. etc.

(*) Also it is probably best that the AGI thinks it is somewhat like us early on. Engineering a convincing human body is the easy part.

daylen
Posts: 2535
Joined: Wed Dec 16, 2015 4:17 am
Location: Lawrence, KS

Re: GPT-3 (OpenAI's newest ginormous pre-trained transformer)

Post by daylen »

Since I am already on a rant here, I will also comment on the "general" versus "specific" dichotomy being thrown around. Clearly this is not a perfect binary, but there may exist a defining point at which intelligence is general enough to improve itself (not just its software but its hardware). Building an AI that is general enough to pass for human in a DITLTT(*) would indicate that humanity as a whole is general enough to pass this point in some sense(&). Though a single entity(^) that can pass this point without collaboration would probably get out of hand quickly. Perhaps a human-level, silicon-based machine could do it, given that it may be easier to reverse/additively engineer than our carbon-based bodies. Then again, creating the damn thing may require a degree of human reverse-engineering that is not feasible given our (likely) limited future.

(*) Day-in-the-life Turing test: a human spends a day with an AI housed in an artificial human body.

(&) ...in another sense we already have, through tech/infrastructure, but these are not internal to us in yet another sense.

(^) Sharp pocket of negative entropy, let's say.
