An AI That Can Compete with the Human Brain?

 


In 2017, an OpenAI bot beat a professional Dota 2 player in a 1v1 match. It was a new milestone in the world of AI.

In early 2019, a learning-based technique appeared that could perform common natural language processing tasks, for instance, answering questions, completing text, reading comprehension, summarization, and more. This method was developed by researchers at OpenAI, who called it GPT-2. The goal was to perform these tasks with as little supervision as possible. This means they unleashed the algorithm to read the internet, and the question is: what would the AI learn during this process?
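Before we answer that, here is a minimal sketch of what this kind of text completion looks like in practice, using the publicly released GPT-2 weights through the Hugging Face transformers library. This is not OpenAI's original training code, and the prompt is just an example.

```python
# A minimal sketch of text completion with the publicly released GPT-2
# weights, via the Hugging Face `transformers` library. It only
# illustrates the "continue this text" task described above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The best thing about this camera is"
outputs = generator(prompt, max_length=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```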

To answer that, we can look at a report from 2017, where an AI was given a large set of Amazon product reviews, and the goal was to teach it to generate new reviews, or to continue a review when given one. Then, something unexpected happened.

Source: openai.com

The finished neural network used surprisingly few neurons to continue these reviews, and upon closer inspection, the researchers noticed that it had built up not only a knowledge of language but also a sentiment detector. This means the AI recognized that in order to continue a review, it not only needs to learn English, but also needs to detect whether the review is positive or negative.

Source: openai.com
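To make this concrete, here is a sketch of how such a sentiment detector can be uncovered inside a language model, in the spirit of OpenAI's 2017 sentiment-neuron work. The original study trained a character-level LSTM on Amazon reviews; as a stand-in, this sketch probes GPT-2's hidden states with an L1-regularized logistic regression, whose surviving weights point at candidate "sentiment neurons". The example reviews and labels are made up for illustration.

```python
# Sketch: probe a language model's hidden states for sentiment, in the
# spirit of OpenAI's 2017 "sentiment neuron" finding. The original work
# used a character-level LSTM trained on Amazon reviews; here GPT-2's
# hidden states serve as a stand-in feature extractor.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

def hidden_state(review: str) -> np.ndarray:
    """Mean of the final layer's hidden states over the review's tokens."""
    inputs = tokenizer(review, return_tensors="pt")
    with torch.no_grad():
        output = model(**inputs)
    return output.last_hidden_state[0].mean(dim=0).numpy()

# Tiny illustrative dataset: 1 = positive review, 0 = negative review.
reviews = [
    "Great product, works perfectly and arrived on time!",
    "Absolutely love it, best purchase this year.",
    "Broke after two days, complete waste of money.",
    "Terrible quality, would not recommend to anyone.",
]
labels = [1, 1, 0, 0]

X = np.stack([hidden_state(r) for r in reviews])

# An L1 penalty drives most weights to zero; in the original study the
# surviving weight mass concentrated on a single sentiment-tracking unit.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(X, labels)
print("candidate sentiment units:", np.nonzero(probe.coef_[0])[0])
```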

How does it do this?

GPT-2 learned whatever it needed to learn to perform this sentence completion properly. To do that, it had to pick up English by itself, and that's exactly what it did! It also learned about a great many topics, well enough to discuss them. Separately, OpenAI also showed that a robotic hand could be trained to solve a Rubik's Cube, though that is a different project from GPT-2.

And now, the next version has appeared, under the name GPT-3. This version is more than 100 times bigger, so our first question is: how much better can an AI get if we simply increase the size of a neural network?


These are the results on a challenging reading comprehension test as a function of the number of parameters. At around 1.5 billion parameters, which is roughly the size of GPT-2, the model learned a great deal, but its understanding was nowhere near the level of human comprehension. As the parameter count grows toward GPT-3's 175 billion, however, the model nearly matches human-level performance.

This was possible before, but only with neural networks specifically designed for one narrow task. By comparison, GPT-3 is much more general.

Let’s look at six practical applications of GPT-3.


1. OpenAI made this AI accessible to a lucky few people, and it turns out that GPT-3 has read a lot of the internet, which contains a lot of code, so it can generate website layouts from a written description.


2. It also learned how to generate properly formatted plots from a tiny prompt written in plain English.

3. It can properly typeset mathematical equations from a plain English description as well.

4. It understands the kind of data we have in a spreadsheet, in this case, population, and fills in the missing parts correctly.

5. It can also translate complex legal text into plain language, or the other way around; in other words, it can also generate legal text from our simple descriptions (a sketch of prompting GPT-3 this way follows after this list).
6. It was able to design a UI in Figma (a UI design tool) simply from a written description of the design we want.
There are also many other projects built with GPT-3; you can find them on OpenAI's official site and around the internet.
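All of these applications boil down to writing a prompt and letting the model continue it. Below is a minimal sketch of what that looks like for application 5, assuming you have been granted API access and an API key; the engine name, prompt, and client interface shown here are illustrative assumptions, not the exact code behind any of the demos above.

```python
# Sketch: ask GPT-3 to rewrite legal text in plain language through
# OpenAI's API. Assumes API access; the engine name and prompt are
# illustrative and may differ from the current API.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

legal_text = (
    "The party of the first part shall indemnify and hold harmless "
    "the party of the second part against all claims."
)

response = openai.Completion.create(
    engine="davinci",
    prompt=(
        "Rewrite this legal text in plain English:\n\n"
        f"{legal_text}\n\nPlain English:"
    ),
    max_tokens=60,
    temperature=0.3,
)

print(response["choices"][0]["text"].strip())
```

Swapping in a different prompt ("Write an HTML layout for a landing page with...") is essentially how the website, plot, and equation demos work as well.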

