Economics, Geography and Other Arts

Some reflections on GPT

23/06/2024

This essay offers some reflections on GPT and how we can use it to speed up our workflow while minimizing mistakes and revisions.

First, even though chatting with GPT may seem like a conversation, it is not. In short, each time you write something to GPT you are requesting a prediction of what, given a certain training, an answer would look like. You give orders to GPT, and it predicts answers. GPT cannot say “I don’t know” to your request, as humans can. GPT may seem confident, but remember that predictions come with errors. Just as in your typical econometric model, the further the prediction is from the average of the observations, the faster the error grows.
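To make the econometric analogy concrete, here is a minimal sketch in Python (using simulated data and the statsmodels library; the numbers are illustrative and not from the essay) showing that the prediction interval of a simple regression widens as we move away from the average of the observations:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: y depends linearly on x, plus noise
rng = np.random.default_rng(0)
x = rng.normal(loc=10, scale=2, size=100)
y = 3 + 0.5 * x + rng.normal(scale=1, size=100)

model = sm.OLS(y, sm.add_constant(x)).fit()

# Predict at the sample mean of x and far away from it
new_x = np.array([x.mean(), x.mean() + 6 * x.std()])
pred = model.get_prediction(sm.add_constant(new_x))
print(pred.summary_frame(alpha=0.05)[["mean", "obs_ci_lower", "obs_ci_upper"]])
# The 95% prediction interval is noticeably wider at the point far from the mean
```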

If we think that asking GPT to expand on ideas in a short text we have provided will yield no mistakes, we are wrong. GPT will fill the gap with its training, and we don’t know how it was trained. Therefore, it is better not to ask GPT to expand ideas, but to summarize them. We may expect the probability of mistakes or hallucinations in a summary of a specific chunk of text we provide to be far lower than when we ask it to elaborate on ideas. A summary is like a prediction within the sample, whereas an elaboration is a prediction out of sample. Correcting mistakes is also time consuming, so elaborations may not be useful, at least not with GPT 3.5 across all topics. The same happens when we ask about something it has not been trained on: the analogy holds, since that too is like making a prediction out of sample.
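The in-sample versus out-of-sample intuition can be shown with a toy example (again a sketch with simulated data, not a claim about how GPT works internally): a model fitted on one range of data predicts well there and poorly outside of it.

```python
import numpy as np

# True relation is a sine curve; we fit a cubic polynomial on x in [0, 3]
rng = np.random.default_rng(1)
x_train = rng.uniform(0, 3, size=200)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=200)
coef = np.polyfit(x_train, y_train, deg=3)

def rmse(x):
    """Root mean squared error of the fitted polynomial against the true curve."""
    return np.sqrt(np.mean((np.polyval(coef, x) - np.sin(x)) ** 2))

print("in-sample RMSE:    ", rmse(np.linspace(0, 3, 100)))  # small
print("out-of-sample RMSE:", rmse(np.linspace(3, 6, 100)))  # much larger
```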

After GPT has given us its prediction, we should always check whether it is correct. From my short experience interacting with GPT, if I stick to the rule of not asking it to elaborate ideas, but just to summarize what I give it with specific and clear prompts (instructions), almost no mistakes appear. On the other hand, when I ask it to elaborate on an idea, or, even worse, ask it for paper recommendations, GPT makes fatal mistakes. So, this is the first principle I would like to propose: use AI to summarize, not to elaborate.
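In practice, a summarization request along these lines keeps GPT close to the text we supply. This is just one possible shape of such a prompt, sketched with the OpenAI Python SDK; the model name and the prompt wording are illustrative choices, not part of the essay.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

chunk = "...the exact text you want summarized..."  # replace with your own text

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative choice of model
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize ONLY the text provided by the user. Do not add "
                "facts, references, or interpretations that are not in it."
            ),
        },
        {"role": "user", "content": chunk},
    ],
)
print(response.choices[0].message.content)
```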

Always read a paper’s abstract and the strategic summary you develop with the help of AI instead of the entire paper, unless the paper itself is a key paper. Our time is limited. We cannot read all the papers we would have wanted to read, but we can read well-developed summaries if we can be sure they contain few or no mistakes. Sometimes our minds get blocked, and we could use GPT to get version 0 of a certain email, for example; but again, the quality of the first output will depend on the quality of our first prompt. So, this is the second principle: use AI to save time, and try to avoid wasting time with the AI. Read strategically. Don’t waste time reading what is not important. Life is short, and your eyesight will eventually decline.

The same happens with coding. GPT may provide some initial ideas, but it will not create a new version of your favorite video game from scratch with a single prompt. However, it can be useful to have a conversation with it. This is the idea behind bots that specialize in certain topics. And acknowledge what comes from the AI and what is yours. Be creative yourself. Think. Don’t delegate that ultimate task to any AI.