
In 2017, OpenAI's bot beat a professional Dota 2 player in a 1v1 match, which was a new milestone in the world of AI.
In early 2019, a learning-based technique appeared that could perform common natural language processing tasks, for instance, answering questions, completing text, reading comprehension, summarization, and more. This method was developed by scientists at OpenAI, who called it GPT-2. The goal was to perform these tasks with as little supervision as possible. This means they let the algorithm loose to read the internet, and the question is: what would the AI learn during this process?
To answer that, let's have a look at a report from 2017, where an AI was given a bunch of Amazon product reviews and the goal was to teach it to generate new ones, or to continue a review when given one. Then, something unexpected happened.
Source: openai.com
The finished neural network used surprisingly few neurons to continue these reviews, and upon closer inspection, the researchers noticed that it had built up knowledge not only of language, but of sentiment as well. In other words, the AI recognized that in order to continue a review, it not only needs to learn English, but also needs to be able to detect whether the review is positive or negative.
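As a rough illustration of that finding, one can probe a language model's internal representation for sentiment with a simple linear classifier. The sketch below is not OpenAI's original setup (that work used a character-level LSTM trained on Amazon reviews); it assumes a generic pretrained GPT-2 from the Hugging Face transformers library and a handful of made-up labeled reviews, just to show how a sentiment signal can be read out of features that were never trained on sentiment labels.

```python
# Minimal sketch: probe a pretrained language model's hidden states for sentiment.
# Assumes `transformers`, `torch`, and `scikit-learn` are installed; the tiny
# labeled dataset below is invented purely for illustration.
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import LogisticRegression

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def review_features(text):
    """Return the final hidden state of the last token as a feature vector."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden[0, -1].numpy()

reviews = [
    ("Absolutely love this product, works perfectly.", 1),
    ("Terrible quality, broke after one day.", 0),
    ("Great value for the money, highly recommend.", 1),
    ("Waste of money, very disappointed.", 0),
]

X = [review_features(text) for text, _ in reviews]
y = [label for _, label in reviews]

# A linear probe: if sentiment is linearly readable from the hidden state,
# the model has implicitly picked it up while only being trained to predict text.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict([review_features("This thing exceeded my expectations!")]))
```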

How does it do this?
GPT-2 learns whatever it needs to learn to perform sentence completion properly. To do this, it needs to learn English by itself, and that's exactly what it did! It also learned about a lot of topics so it could discuss them well. Around the same time, OpenAI also demonstrated a robotic hand that could solve a Rubik's Cube.
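To make the completion idea concrete, here is a minimal sketch of continuing a prompt with a publicly released GPT-2 checkpoint. It relies on the Hugging Face transformers library rather than OpenAI's original code, and the prompt is just an arbitrary example.

```python
# Minimal sketch: continue a piece of text with a pretrained GPT-2 checkpoint.
# Uses the Hugging Face `transformers` text-generation pipeline, not OpenAI's
# original training code; the prompt below is an arbitrary example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "This camera takes stunning photos, and the battery"
completions = generator(prompt, max_length=40, num_return_sequences=2)

for c in completions:
    print(c["generated_text"])
```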
And now, the next version has appeared, by the name of GPT-3. This version is more than 100 times bigger, so our first question is: how much better can an AI get if we simply increase the size of a neural network?

These are the results on a challenging reading comprehension test as a function of the number of parameters. As you can see, at around 1.5 billion parameters, which is roughly equivalent to GPT-2, the model learned a great deal, but its understanding is nowhere near the level of human comprehension. However, as the parameter count grows toward GPT-3's size, it nearly matches the level of humans.
This was possible before, but only with neural networks that were specifically designed for this one narrow task.
Let's look at a few practical applications of GPT-3 that people have demonstrated:
Here's a sentence describing what Google's home page should look like, and here's GPT-3 generating the code for it nearly perfectly. pic.twitter.com/m49hoKiEpR
— Sharif Shameem (@sharifshameem) July 15, 2020
GPT-3 Does The Work™️ on generating SVG charts, with a quick web app I built with @billyjeanbillyj. With a short sentence describing what you want to plot, it's able to generate charts with titles, labels and legends from about a dozen primed examples.
— ken (@aquariusacquah) July 21, 2020
cc @gdb pic.twitter.com/cBxukHIlKx
After many hours of retraining my brain to operate in this "priming" approach, I also now have a sick GPT-3 demo: English to LaTeX equations! I'm simultaneously impressed by its coherence and amused by its brittleness -- watch me test the fundamental theorem of calculus.
— Shreya Shankar (@sh_reya) July 19, 2020
cc @gdb pic.twitter.com/0dujGOKaYM
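All of these demos rely on the "priming" approach the tweet above mentions: instead of retraining the model, you show it a few input-to-output examples in the prompt and let it continue the pattern. The sketch below assumes access to the OpenAI completion API roughly as it looked at GPT-3's launch (the `openai` Python package and a `davinci` engine); the English-to-LaTeX example pairs are made up for illustration.

```python
# Minimal sketch of few-shot "priming": show GPT-3 a few English -> LaTeX pairs
# in the prompt and let it continue the pattern for a new input.
# Assumes the `openai` Python package and access to a GPT-3 "davinci" engine,
# roughly as the API looked in mid-2020; the example pairs are made up.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = """English: the integral of x squared from 0 to 1
LaTeX: \\int_0^1 x^2 \\, dx

English: the sum of one over n squared from n equals 1 to infinity
LaTeX: \\sum_{n=1}^{\\infty} \\frac{1}{n^2}

English: the derivative of sine of x with respect to x
LaTeX:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=30,
    temperature=0,
    stop="\n\n",  # stop once the answer block is complete
)
print(response.choices[0].text.strip())
```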
=GPT3()... the spreadsheet function to rule them all.
Impressed with how well it pattern matches from a few examples.
The same function looked up state populations, peoples' twitter usernames and employers, and did some math. pic.twitter.com/W8FgVAov2f
— Paul Katsen (@pavtalk) July 21, 2020
Scalable, accessible, free-to-the-tenant eviction defense. pic.twitter.com/UgvSFywQmq
— Francis Jervis (@f_j_j_) July 16, 2020
This changes everything. 🤯
With GPT-3, I built a Figma plugin to design for you.
I call it "Designer" pic.twitter.com/OzW1sKNLEC
— Jordan Singer (@jsngr) July 18, 2020