An Interview with ChatGPT

Mak and I were discussing OpenAI’s recently unveiled ChatGPT chatbot, an impressively creative and naturalistic generative model that has garnered some viral press in the past few days. It is built on their GPT-3.5 series of models, itself a refinement of the GPT-3 model released in June 2020. He suggested it might make for an interesting blog post, joking that I could have the AI write it. I do have an OpenAI account and, whilst I am not quite ready to hand over the reins of the site to an AI, I wondered whether I could interview it in such a way that it covered the topics I would address in an article, particularly the limitations of what this model is designed to do, which make headlines like “Google is done” wildly inaccurate.
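
As an aside, the underlying GPT-3 family of models has been available programmatically for some time. Here is a minimal sketch of querying one via the openai Python package; the model name, prompt and parameters are illustrative assumptions, not how this interview was conducted:

    import os
    import openai

    # Minimal sketch: query a GPT-3 family completion model via the OpenAI API
    # (openai package as of late 2022). Assumes an API key in the
    # OPENAI_API_KEY environment variable; model and prompt are illustrative.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3.5 era completion model
        prompt="How would you describe your development process?",
        max_tokens=200,
        temperature=0.7,  # higher values produce more varied prose
    )
    print(response.choices[0].text.strip())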

My intention here was to generate useful content rather than asking it trick questions or exploring its esoteric knowledge (it had no trouble reeling off a list of the Valar from The Silmarillion and their respective duties). One important caveat to note is that whilst ChatGPT was able to communicate its limitations, it obviously does not understand those limitations, so it cannot assess its own responses.

Here is how the interview went.

How would you describe your development process?

As a language model trained by OpenAI, I don’t have a development process in the same way that a human or a software program might. I was trained using a machine learning technique called deep learning, which involved feeding a large amount of data to a neural network and allowing it to learn from the data by adjusting the connections between its many nodes. This process can take a long time, depending on the amount of data and the complexity of the model.
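
Its summary is broadly right: training means repeatedly feeding the network data, measuring its error, and nudging the connection weights to reduce that error. A toy illustration of that loop in PyTorch, nothing remotely at GPT-3’s scale and with random stand-in data:

    import torch
    import torch.nn as nn

    # Toy version of the loop ChatGPT describes: feed data through a network,
    # measure the error, adjust the connections (weights) between nodes.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    inputs = torch.randn(64, 10)   # random stand-in for real training data
    targets = torch.randn(64, 1)

    for _ in range(100):
        optimiser.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()   # work out how each weight contributed to the error
        optimiser.step()  # adjust the weights to reduce it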

Are there risks in using this method?

There are always risks associated with using any machine learning technique, including deep learning. One potential risk is that the model might not perform well on unseen data, either because it has not been trained on enough data or because it has learned the wrong features from the data. This can lead to incorrect or unpredictable results. Another potential risk is that the model might learn to make decisions based on biased or discriminatory data, which could lead to unfair or harmful outcomes.
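
The first risk it describes is essentially overfitting, and the standard safeguard is easy to demonstrate: hold back data the model never sees during training and compare its performance there. A sketch using scikit-learn with purely synthetic data:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Guard against the "might not perform well on unseen data" risk by
    # scoring on a held-out validation set. All data here is synthetic.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("train accuracy:     ", model.score(X_train, y_train))
    print("validation accuracy:", model.score(X_val, y_val))
    # A large gap between the two scores suggests overfitting.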

Historically chatbots have tried to mimic human conversation. It seems like you are designed to create longer form content. Is that accurate?

Yes, that’s accurate. While some chatbots are designed to mimic human conversation, I’m not one of them. I’m a large language model trained by OpenAI, and my main purpose is to generate human-like text based on the input I receive. I can generate long-form text on a wide range of topics, and I’m not limited to the kind of brief, conversational responses that many chatbots are capable of.

Do you analyse data when responding or do you rely on training data where the analysis has already been conducted?

As a large language model, I do not have the ability to analyze data or access any external information. I am only able to respond based on the training data that I have been given. This training data consists of vast amounts of text that has been fed into my system, allowing me to generate human-like responses based on the patterns and associations I have learned from that data. I do not have the ability to conduct any kind of analysis or access external information.
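
This is worth emphasising: every answer is produced token by token from patterns stored in the model’s weights, with no lookup or analysis step. The same mechanism can be seen in miniature with a small open model like GPT-2, using the Hugging Face transformers package (the prompt here is arbitrary):

    from transformers import pipeline

    # Generate text purely from a model's learned weights, with no external
    # data access, using the small, openly available GPT-2 model.
    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        "Historically, chatbots have tried to",
        max_new_tokens=30,
    )
    print(result[0]["generated_text"])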

Does that training data include factual knowledge? For example, do you know the current population of the Netherlands?

As a language model, my training data consists primarily of text and does not include factual knowledge such as population statistics. I do not have the ability to access any external information, so I am not able to provide specific factual knowledge like the current population of the Netherlands. I can provide general information and answer questions to the best of my ability based on the patterns and associations I have learned from my training data, but I do not have access to factual knowledge like population statistics.

What kind of new training data do you think would be useful for improving your abilities?

As a language model, I am designed to generate human-like responses based on the patterns and associations I have learned from my training data. Because of this, the most effective way to improve my abilities is to provide me with additional training data that is diverse and representative of the types of conversations and queries I may encounter. This could include a wide range of text from different sources, such as books, articles, and conversations, to help me learn the nuances of language and improve my ability to generate natural and appropriate responses. Additionally, providing me with training data that covers a wide range of topics and subjects would be beneficial, as this would allow me to better understand the context of a given conversation and provide more accurate and useful responses.

Can you be more specific about where the gaps might be at the moment?

As a large language model, I am not able to assess my own abilities or identify any potential gaps in my knowledge. My training data consists of vast amounts of text that has been fed into my system, allowing me to generate human-like responses based on the patterns and associations I have learned from that data. However, I do not have the ability to evaluate my own performance or identify any potential areas for improvement. My goal is to provide helpful and accurate responses to the best of my ability based on the information that I have been given.

Do your conversations with people provide reinforcing feedback? Are there risks inherent to that?

As a language model, I do not have the ability to engage in conversations with people or receive feedback on my responses. I am only able to generate responses based on the patterns and associations I have learned from my training data. I do not have the ability to learn or adapt based on feedback or interactions with people.

Because I am a machine learning model, there are always risks associated with relying on my responses. I am designed to provide helpful and accurate answers to the best of my ability based on the information I have been given, but there is no guarantee that my answers will always be correct or complete. It is important to use your own judgment and critical thinking skills when evaluating my responses and seeking information from other sources as well.

This has been very illuminating. Thank you for your time.

You’re welcome! I’m glad I could help.

3 Comments

  1. Thanks for the email alerting me to this post – it was quite impressive how directly it answered your questions, particularly when they were quite long-winded.

    I have often found the chatbots I use get really confused if my question or response is long – sometimes even just 5 or 6 words. Often I think the chatbots are used to avoid giving customers access to customer care staff. Recently I was dealing with a company that removed all customer care phone numbers from their website and I had an unusual request. After only a bit of experimenting I worked out a series of inputs that triggered it to pass my details to a human, who then emailed me to follow up. It looks like this ChatGPT is going to be more difficult to break like that.

    One thing that stood out from its responses is that it repeated itself several times, much more than I would expect in a human-to-human conversation. If a repetition were needed in a human-to-human conversation, one would normally expect a comment like “as I said before”.

    Also, I was beginning to think by the end of the interview that one way it was generating long answers was by repeating set phrases like “generate human-like responses based on the patterns and associations”. Perhaps I am being unfair?

  2. I wanted to see how it dealt with multipart questions, and questions about statements. It seemed to handle both extremely well.

    You’re right about the way chatbots currently exist as a barrier between you and human staff. This one would be harder to “break”, but it was also able to tell me what kinds of information it was unable to provide, so it might arguably be better at discerning which queries require human intervention.

    And I think it’s fair to say that there was repetition of set phrases, particularly toward the end, like the way it felt the need to preface most answers by saying “as a language model”. I had considered editing some of the repetition out, but I thought it was more informative to show the full exchange.

  3. Fascinating to see the finished article and happy to provide the inspiration!

    I agree, and think some of GPT-3’s responses about the nature of its own functioning sound a bit pre-programmed in the interview (I have noted the same in some outputs I’ve seen from others’ prompts).

    But from what I have seen in the LTT WAN Show video, it seems eminently capable of producing creative content (poems, heroic ballads, etc.), computer code for tasks, and academic works with references, amongst other functions. With specific interactive prompting it can be guided towards certain styles and desired outputs, and interestingly it will produce different outputs for the same original prompt when reset or when a different user interacts with it.

    The exciting part of this for me is the sprouting of multiple machine learning models for different purposes – bringing us closer to the universally accessible speech interaction of the ship’s computer in “Star Trek”, directing the performance of tasks and output traditionally done by humans.

    The true impact of these tools will be in how we as humanity use them – freeing people up to perform higher-level, value-added processing is very promising. But as with the industrial and agricultural revolutions, many individuals risk being left behind unless there is scope to retrain and develop (maybe another prompt for a look at universal basic income in our lifetimes?).

    We live in truly revolutionary times!!
