GPT-3 Playground, the AI that can write for you


An introduction to Artificial Intelligence

When it comes to artificial intelligence, there are two main ideas on the table. Some believe that artificial intelligence will eventually surpass human intelligence, while others believe that artificial intelligence will always be at the service of humans. However, there is one thing both sides can agree on: it is advancing faster and faster every year.

Artificial intelligence (AI) is still in its early stages of development, but it has the potential to revolutionize the way humans interact with technology.

A simple, general description would be that AI is a process of programming a computer to make decisions on its own. This can be done in a number of ways, but the most common is through the use of algorithms. Algorithms are a set of rules or instructions that can be followed to solve a problem. In the case of AI, algorithms are used to teach the computer how to make decisions.

In the past, artificial intelligence was mainly used for simple tasks such as playing chess or solving mathematical problems. However, artificial intelligence is now being used for more complex tasks, such as facial recognition, natural language processing, and even autonomous driving. As artificial intelligence continues to evolve, there is no telling what it will be capable of in the future. With the rapid expansion of AI capabilities, it is important to understand what it is, how it works, and its potential implications.

The benefits of AI are enormous. With the ability to make its own decisions, AI has the potential to improve the efficiency of a myriad of industries and provide opportunities for all types of people. In this article, we will talk about GPT-3.

What is GPT-3 and where does it come from?


GPT-3 was created by OpenAI, a pioneering AI research company based in San Francisco. They define their goal as “ensuring that artificial intelligence benefits all of humanity.” Their vision for creating artificial intelligence is clear: a type of AI that is not limited to specialized tasks but performs a wide variety of tasks, just like humans.

A few months ago, OpenAI released its new language model, GPT-3, to all users. GPT-3 stands for Generative Pre-trained Transformer 3, and what it does is generate text from a starting premise called a prompt. Simply put, you could call it high-level “auto-completion”: you can, for example, provide two or three sentences on any given topic and GPT-3 will do the rest. You can also hold conversations with it, and its answers will take into account the context of the previous questions and answers.

It is important to emphasize that each answer GPT-3 offers is only one possibility, not the only possible answer; if you submit the same premise several times, it may return a different, even contradictory, response. It is a model that produces answers based on what has been said so far, relating it to everything it has learned, in order to return the response that makes the most sense. That means it is not obliged to answer with real data, something we have to keep in mind. You can still provide it with data about work you have done and discuss it, but you will always need to verify the information it gives back. The better the context, the better the answers you will get, and vice versa.

OpenAI’s GPT-3 is a pre-trained language model: its training consisted of studying a huge amount of information available on the Internet. It was fed publicly available books, the whole of Wikipedia, and millions of web pages and scientific papers. In short, it has absorbed a large part of the knowledge humanity has published on the web throughout its history.


After processing and analyzing this information, the language model encoded what it learned as connections in a model of roughly 700 GB, spread across 48 GPUs with 16 GB of memory each. To put that in perspective, OpenAI’s previous model, GPT-2, weighed about 40 GB and was trained on around 45 million web pages. The gap is immense: while GPT-2 had 1.5 billion parameters, GPT-3 has 175 billion.

Shall we do a test? I asked GPT-3 to define itself and this was the result:

(Screenshot of GPT-3’s answer defining itself in the Playground.)

How to use it in the Playground

To play with GPT-3 and run our own tests, all we have to do is go to the OpenAI website, register, and fill in the usual personal information that any sign-up asks for. One of the steps asks what you are going to use the AI for; for these examples, I selected the personal use option.

I would like to point out that, in my experience, the results have been better in English. That does not mean it works badly in other languages (in Spanish, in fact, it does very well), but I prefer the results it gives in English, which is why the tests and results I show from now on will be in that language.

OpenAI gives us a welcome gift when we sign up. Once you have registered with your email address and phone number, you get $18 of credit to spend, completely free and with no need to enter a payment method. It may not sound like much, but $18 goes a long way: to give you an idea, I spent five hours testing the AI and used only $1. I will explain the pricing later so we can understand this better.

Once we are inside the website, we will have to go to the Playground section. This is where all the magic is going to happen.

(Screenshot of the Playground page.)

Prompt + Submit

To begin with, what stands out most on the page is the big text box. This is where we type our prompt (remember, these are our requests and/or instructions to the AI). It is as simple as typing something, a question in this case, and hitting the Submit button below; GPT-3 will answer us and write what we have asked for.
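The Playground is a front end for OpenAI’s completions API, so anything we type into the text box can also be sent from code. Below is a minimal sketch of the equivalent call, assuming the openai Python package (the pre-1.0 interface) and an API key stored in an OPENAI_API_KEY environment variable; the model name is the Davinci identifier current at the time of writing.

```python
import os

import openai

# Your secret key, copied from the API keys page of your OpenAI account.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Equivalent of typing a prompt in the Playground and pressing Submit.
response = openai.Completion.create(
    model="text-davinci-003",   # Davinci, the most capable model (see the Models section below)
    prompt="What is GPT-3 and where does it come from?",
    max_tokens=150,             # upper bound on the length of the completion
)

print(response["choices"][0]["text"].strip())
```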


Presets

Presets are ready-made configurations for different tasks, and they can be found at the top right of the text box. A few of them appear directly, and clicking “more examples” opens a new screen where the whole list is available. When a preset is chosen, the content of the text area is updated with a default text, and the settings in the right sidebar are updated as well. For example, if we use the “Grammar correction” preset, we should follow the structure below to get the best result.

(Screenshot: the default prompt loaded by the Grammar correction preset.)
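For reference, the structure amounts to an instruction line followed by the text to fix. A small sketch along those lines, with the same assumed setup as above (the sentence to correct is my own example, not necessarily the preset’s default, and the settings only approximate what the preset loads):

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Instruction first, a blank line, then the text we want corrected.
prompt = "Correct this to standard English:\n\nShe no went to the market yesterday."

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=0,   # grammar correction wants a deterministic, conservative answer
    max_tokens=60,
)

print(response["choices"][0]["text"].strip())
```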

Models

The massive dataset used to train GPT-3 is the main reason why it is so powerful. However, bigger does not always mean better. For that reason, OpenAI offers four main models. There are others, but OpenAI itself recommends using the most up-to-date versions, which are the ones we will talk about now.

The available models are called Davinci, Curie, Babbage, and Ada. Of the four, Davinci is the largest and most capable, as it can perform any task that any other engine can perform.

We will give an overview of each model and the types of tasks that might be best suited for each. However, keep in mind that while the smaller engines may not be trained with as much data, they are still general-purpose models and for certain tasks are very viable and cheaper.


Davinci

As mentioned above, it is the most capable model and can do everything that any other model can do, often with fewer instructions. Davinci is able to solve logical problems, determine cause and effect, understand text intent, produce creative content, explain character motives, and handle complex summarization tasks.

Curie

This model attempts to balance power and speed. It can do anything that Ada or Babbage can do, but it is also capable of handling more complex classification tasks and more nuanced tasks such as summarization, sentiment analysis, chatbot applications, and Q&A.

Babbage

It is slightly more capable than Ada but not as efficient. It can perform all the same tasks as Ada, but can also handle slightly more complicated classification tasks, and is ideal for semantic search tasks that classify how well documents match a search query.

Ada

Finally, this is usually the fastest and least expensive model. It is best for less nuanced tasks, e.g., parsing text, reformatting text, and simpler classification tasks. The more context you provide Ada with, the better it will perform.
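In the API, these models are selected by name. As a rough sketch (assuming the identifiers below are still the current ones; openai.Model.list() returns the authoritative list), the same prompt can be sent to each engine to compare quality, speed, and cost:

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Largest to smallest; the smaller engines are faster and cheaper but less capable.
MODELS = ["text-davinci-003", "text-curie-001", "text-babbage-001", "text-ada-001"]

for model in MODELS:
    response = openai.Completion.create(
        model=model,
        prompt="Explain what a prompt is in one sentence.",
        max_tokens=40,
    )
    print(f"--- {model} ---")
    print(response["choices"][0]["text"].strip())
```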


Engine

Below the most important setting of them all, the model, the right sidebar offers other parameters we can adjust to get the best response to our prompt. Let’s go through the ones that seem most interesting.

One of the most important settings to control the output of the GPT-3 engine is the Temperature. This setting controls the randomness of the generated text. A value of 0 makes the engine deterministic, meaning that it will always generate the same output for a given text input. A value of 1 makes the engine take the most risks and use a lot of creativity.
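As a quick illustration (same assumed setup as before), running the identical prompt at both ends of the range makes the difference visible: temperature 0 should return essentially the same completion every time, while temperature 1 varies from run to run.

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

for temperature in (0, 1):
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Suggest a name for a bakery.",
        temperature=temperature,  # 0 = deterministic, 1 = maximum randomness
        max_tokens=16,
    )
    print(temperature, "->", response["choices"][0]["text"].strip())
```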

You have probably noticed that, in some of the tests you have run yourself, GPT-3 stops in the middle of a sentence. To control the maximum amount of text it is allowed to generate, you can use the “Maximum length” setting, which is specified in tokens. We will explain what tokens are later.
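The API-side equivalent is the max_tokens argument; a deliberately small value, as in the sketch below (same assumed setup), will cut the answer off mid-sentence exactly as described.

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain what a language model is.",
    max_tokens=10,  # roughly seven or eight English words, so the reply will be truncated
)

print(response["choices"][0]["text"])
```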

The “Top P” setting is an alternative way to control the randomness and creativity of the text generated by GPT-3. Instead of scaling randomness directly, it restricts sampling to the tokens (roughly, words) whose cumulative probability falls within the threshold we set (0.1 would be the top 10%). The OpenAI documentation recommends adjusting either Temperature or Top P, but not both, so when using one of them, make sure the other is set to 1.
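In the API this is the top_p argument; the sketch below (same assumed setup) restricts sampling to the most probable tokens covering 10% of the cumulative probability, while leaving temperature at its default of 1 as the documentation suggests.

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a tagline for a small coffee shop.",
    top_p=0.1,      # only the tokens within the top 10% of cumulative probability are considered
    temperature=1,  # adjust either temperature or top_p, not both
    max_tokens=20,
)

print(response["choices"][0]["text"].strip())
```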

On the other hand, we have two parameters that penalize the answers GPT-3 gives us. The first is the frequency penalty, which controls the model’s tendency to repeat itself: it reduces the probability of words that have already been generated, in proportion to how many times each word has already occurred in the prediction.

The second is the presence penalty, which encourages the model to make novel predictions: it reduces the probability of a word once that word has appeared in the predicted text at all. Unlike the frequency penalty, it does not depend on how often words have appeared in past predictions.
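Both penalties are plain request arguments as well. A sketch with the same assumed setup (the values here are arbitrary illustrations; the API accepts anything between -2.0 and 2.0):

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="List ten ideas for blog posts about artificial intelligence.",
    max_tokens=200,
    frequency_penalty=0.8,  # scales with how often a token has already appeared
    presence_penalty=0.5,   # flat penalty once a token has appeared at all
)

print(response["choices"][0]["text"].strip())
```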

Finally, we have the “Best of” parameter, which generates several answers to a query; the Playground then selects the best one and returns it to us. The interface itself warns that this generates several completions for our prompt, which may make us spend more tokens than we had in mind.
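Programmatically this is the best_of argument; note that all the candidate completions are billed, which is exactly the extra token cost the warning refers to. A sketch with the same assumed setup:

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a one-line horror story.",
    max_tokens=30,
    best_of=3,  # three completions are generated (and billed); only the best-scoring one is returned
)

print(response["choices"][0]["text"].strip())
```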

History

To finish this section: the third icon next to the “Submit” button shows a history of all our requests to GPT-3. That way, if we do not remember a prompt that produced a very good response, it is much easier to find it again.

Costs and tokens

Once the free $18 runs out, OpenAI gives us a way to keep using the platform, and it is not a monthly subscription or anything of the sort. The price is directly tied to how much we use it; in other words, we are charged by tokens, the unit these models use to measure the text they read and generate. A token can be anything from a single character to a whole word, so it is hard to know in advance exactly what each use of the AI will cost. But given that the amounts are usually fractions of a cent, a little experimentation quickly shows what each kind of task costs us.


Although OpenAI only shows us a dozen examples of GPT-3 usage, we can see the tokens that have been spent on each of them to get a better idea of how it works.

These are the models and their respective prices:

  • Ada: $0.0004 per 1,000 tokens
  • Babbage: $0.0005 per 1,000 tokens
  • Curie: $0.0020 per 1,000 tokens
  • Davinci: $0.0200 per 1,000 tokens

To get an idea of what a certain number of words might cost, and to see how tokens work in practice, OpenAI provides a tool called the Tokenizer.

It explains that the GPT family of models processes text using tokens, which are common sequences of characters found in text. The models learn the statistical relationships between these tokens and use them to predict the next token in a sequence.
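If you prefer counting tokens locally instead of pasting text into the web tool, OpenAI’s open-source tiktoken library uses the same encodings. A small sketch (the sample sentence is my own, not the tool’s example):

```python
import tiktoken

# The encoding used by the GPT-3 completion models (p50k_base for text-davinci-003).
enc = tiktoken.encoding_for_model("text-davinci-003")

text = "Many words map to a single token, but some words are split into several."
tokens = enc.encode(text)

print(f"{len(tokens)} tokens for {len(text)} characters")
print(tokens[:10])                 # the first few token ids
print(enc.decode(tokens) == text)  # decoding the tokens recovers the original text
```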

All this is quite low-level, though, so let’s take the web tool’s own example and see how much it would cost us.

(Screenshot: the Tokenizer with its example text and the resulting token count.)

The example provided by the tool comes to 64 tokens, or 252 characters. This means the cost of this small task, depending on the GPT-3 model used, would be:

  • Ada: $0.0000256
  • Babbage: $0.000032
  • Curie: $0.000128
  • Davinci: $0.00128

It is a very affordable price: we would not have spent even one cent on it. That leaves plenty of room to run lots of tests and even develop our own projects, something we will talk about in the second part of this series of articles.
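To reproduce those numbers yourself, the arithmetic is simply the token count divided by 1,000, multiplied by the per-1,000-token price. A quick sketch using the prices implied by the figures above (always double-check OpenAI’s pricing page for the current rates):

```python
# Per-1,000-token prices in USD, as implied by the example costs above.
PRICE_PER_1K_TOKENS = {
    "ada": 0.0004,
    "babbage": 0.0005,
    "curie": 0.0020,
    "davinci": 0.0200,
}

tokens_used = 64  # the Tokenizer example from above

for model, price in PRICE_PER_1K_TOKENS.items():
    cost = tokens_used / 1000 * price
    print(f"{model:>8}: ${cost:.7f}")
```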

Conclusion

From my point of view, it is a tool that you have to know how to use correctly. As I mentioned, GPT-3 does not necessarily give you correct data. That means that if you want to use it for work, to answer certain questions, or to do homework, for example, you have to give the AI good context (a good prompt) so that the result is really close to what you are looking for.

One thing that some people may be concerned about is whether this is going to change education or if certain jobs that exist today related to writing are going to disappear. In my humble opinion, it’s going to happen. Sooner or later we will all be replaced by an AI. This example is about an AI related to writing, but they exist for programming, drawing, audio, etc.

On the other hand, it opens up a lot of possibilities for many jobs and projects, both personal and professional. For example, have you ever wanted to write a horror story? Well, in the same list of presets where we found the grammar checker, there is one specifically for that.

With all this, I want to say that we are at an early stage of AI, and this world still has a lot to grow and offer, but that does not mean it is not useful right now. We just have to learn how to use it and how to guide it so that it gives us the best possible response. If you are interested in learning more about GPT-3, keep an eye on Apiumhub’s blog; I will soon post the second part of this article.

