GPT stands for Generative Pretrained Transformer.
High-sounding, even somewhat unsettling terms that "extend a hand" in introduction, softened by the "Chat" prefix.
There is a lot of talk about this "conversational" artificial intelligence, capable of chatting and answering questions in depth.
The official website lists among ChatGPT’s features the ability to admit mistakes, challenge incorrect premises, and reject inappropriate requests.
All of this is done through machine learning: an algorithm trained on "phenomenological data," that is, data collected from interactions with language in a given environment.
This approach goes by another acronym: NLP, short for Natural Language Processing.
Natural language is "human" language: text that does not follow rigid, predefined patterns but evolves flexibly.
Artificial Intelligence learns from us.
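To make the idea concrete, here is a minimal sketch of the principle that a model's behavior comes from the statistics of the language it is exposed to. This is nothing like GPT's actual architecture; the corpus and function names are invented for illustration, and the "model" is just a table of next-word counts:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "data collected from interaction with language".
corpus = [
    "the model learns from human language",
    "the model learns from feedback",
    "humans give the model feedback",
]

# "Training": count which word tends to follow which in the example text.
next_words = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_words[current][following] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`, or None."""
    if word not in next_words:
        return None
    return next_words[word].most_common(1)[0][0]

print(predict_next("model"))  # "learns" follows "model" most often here
```

Change the corpus and the predictions change with it: in that narrow sense, the "AI" really does learn from us.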
I don’t know about you, but I have an immediate point to make here.
OpenAI, the creator of this system, writes:
We launched ChatGPT as a research preview so we could learn more about the strengths and weaknesses of the system and gather user feedback to help us improve its limitations. Since then, millions of people have provided us with feedback, we have made several major upgrades, and we have seen users find value in a wide range of professional use cases, including writing and editing content, brainstorming ideas, helping with programming, and learning new topics.
Let’s try to dwell on the listed features:
– Writing and editing content: the system can indeed write text, surely better than I can, since I never do well by the infamous SEO analysis 🙂
– Brainstorming ideas: in terms of creativity, I think of the possibility of creating images from just a few words.
In that sense, the "storm" can indeed happen in the results, as the creators themselves explain in this video.
– Learning new topics: it also winks at education, presenting these opportunities as interactive and accessible to students.
On February 1, however, a "pilot subscription plan" was released with this premise:
We love our free users and will continue to offer free access to ChatGPT. By offering this subscription price, we will be able to help support the availability of free access to as many people as possible.
But aren’t users the ones doing the teaching?
I was also struck by another clarification on the official page, ChatGPT: Optimizing Language Models for Dialogue, where a link leads to "aligning language models" and specifies the following:
We have trained language models that are much better at following user intentions than GPT-3, making them also more truthful and less toxic, using techniques developed through our alignment research. These InstructGPT models, which are trained with humans in the loop, are now deployed as predefined language models on our API.
Less toxic … I suppose "toxicity" refers to how previous projects picked up elements that were, let's say, not politically correct.
The difference between man and machine is just that: imperfection.
Am I wrong?
Do you think we will get to the point where we will be the ones learning from AI and not vice versa?