
The Buzz about ChatGPT – To infinity and beyond!

No need to introduce ChatGPT, as this online service has become the new technological star of the media and social networks in a flash.

Since its launch, we have been told of a new world where our slightest desires will be fulfilled within seconds.

Like a genie out of his lamp, ChatGPT invites us: “Ask and you shall receive!”

Need to write a text on any subject, be it history, geography, politics, or culture? ChatGPT will write it for you.

A legal opinion? Here it is.

A complex piece of software? Here you go.

What else? Oh yes, you need a picture, say in the style of H.R. Giger, for the cover of your next novel; GPT will produce it in an instant. And if you ask nicely, it will even write the novel for you in the process; you need only tell it a few things about the characters and the setting. Ditto for your next summer hit: ask GPT to write you a song, say in the style of Ed Sheeran, and you are on your way to a Grammy.

Of course, every coin has its flip side: goodbye journalists, lawyers, writers, programmers, musicians, graphic designers! Your hard-won skills are now obsolete. You are obsolete, because your slowness has become an embarrassment!

The great replacement is coming tomorrow, possibly tonight, who knows.

Even some of the usually cautious media are telling us that this is the beginning of the end and that we are fast approaching the advent of the technological singularity. While you sleep peacefully, GPT is constantly assimilating new knowledge, progressively acquiring a general intelligence just like ours, and will soon be capable of an empathy and a creativity far superior to our own, both faster and more relevant.

Then we, poor humans, will be left with tears in our eyes, watching our apprentice take off and inexorably overtake us.

Back to earth

Ah, the masochistic pleasure humans take in playing with their fears.

All this fuss about ChatGPT is symptomatic of a paradox: new technologies flood our lives and our conversations, yet we act as if there were no point in understanding how they work when it comes to assessing their potential, their limits, and the dangers they carry.

Somehow, this way of approaching artificial intelligence is in line with today’s mindset, where feelings prevail over reason and where no argument, however rational, can challenge those feelings, in this case fascination and fear.

However, before we start building castles in the air and making predictions about an upcoming new world, or worse, the end of humanity, it might be worth asking how ChatGPT really works.

Contrary to what one might think from the media coverage of this technology, understanding how it works, and therefore its possibilities and limits, is within the reach of almost everyone. And as we will see, the genius (if there is any) at the heart of ChatGPT is not where you might expect it.

How does it work?

ChatGPT takes the form of an artificial conversational agent (a chatbot), meaning you interact with it as you would with a human through any chat application, such as Apple or Google Messages, WhatsApp, Snapchat, etc.

What stands out immediately when you chat with ChatGPT is the fluidity of its conversation and its ability to take into account the context of the discussion. For example, you can ask it to clarify an answer or to broaden its argument.

The problem is that this fluidity and sensitivity to context can easily give us the illusion that ChatGPT knows what it is talking about. However, this is not the case, and here is why.

In a nutshell, ChatGPT can be seen as a version on steroids of the algorithm that suggests the next word of the sentence you are typing in one of the chat applications mentioned above (WhatsApp, Snapchat, etc.).

That is, when it comes to generating a response, ChatGPT will produce sentences with the most likely sequence of words, based on a large number of texts it has previously ingested and analyzed, and of course, based on the context of your discussion with it so far.
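To make this concrete, here is a minimal sketch, in Python, of a purely count-based next-word suggester. It is of course nothing like the actual system behind ChatGPT, which is a neural network (a transformer) with billions of parameters trained on a gigantic corpus, but it illustrates the same basic principle: suggest the continuation that was most frequent in the training data, and never one that is absent from it.

    from collections import Counter, defaultdict

    # A toy training corpus (a real model ingests hundreds of billions of words).
    corpus = ("the earth is round . the earth is round . "
              "the earth is a sphere . like father like son .").split()

    # Count how often each word follows each other word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def most_likely_next(word):
        """Suggest the continuation seen most often after 'word' in training."""
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(most_likely_next("earth"))  # -> "is"
    print(most_likely_next("is"))     # -> "round" (seen twice, "a" only once)
    print(following["is"]["flat"])    # -> 0: never seen, so never suggested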

Where ChatGPT differs from the algorithm that suggests the next word in your message is essentially in:

  • the size of the context taken into account for its response,
  • the volume of text analyzed for its initial training,
  • the number of parameters controlling the process generating its responses.

In fact, when GPT-3 interacts with you, it keeps about 3,000 words in its memory, allowing it to take your previous exchanges into account in its responses. The number of parameters of the language model on which it is based is staggering: 175 billion, which requires 800 GB of storage.

Finally, ChatGPT’s language model was trained on a text corpus containing about 375 billion words, drawn from various sources including Common Crawl, which archives the web, and Wikipedia, the famous online collaborative encyclopedia. The training was performed on a version of this corpus frozen in September 2021; any changes to these sources after that date are therefore not part of the information available to GPT-3. This is why ChatGPT does not know that Elizabeth II has passed away. With GPT-4, the numbers mentioned above are even more impressive, but the basic principle remains the same.

From Elizabeth Holmes to ChatGPT

In a way, ChatGPT behaves like an interlocutor who constantly tries to finish your sentences or to keep the conversation going by regurgitating, more or less accurately, what it has read on the topic at hand, without understanding it. An extreme example of this probabilistic model of interaction can be found in the game of idioms. For instance, if I say “All things come…”, you will answer “…in threes”. If I say “Like father…”, you will answer “…like son”.

However, it is crucial to understand that this is a purely linguistic model: there is no deduction or logical reasoning behind ChatGPT’s responses. For example, it can assert false and even contradictory claims in the same conversation, as long as its answers conform to the underlying probabilistic model. To put it another way, ChatGPT has no notion of what is true or false, only what is probable in relation to the body of texts it has been trained on and the context of your conversation with it.

If ChatGPT were human, we could call it a fraud, just like Jean-Claude Romand, who for eighteen years passed himself off to his relatives as a doctor and researcher at the WHO. To appreciate how effective a plausible discourse can be in making us believe in the competence of our interlocutor, consider that during a dinner at the home of a doctor friend, a cardiologist who had talked with Romand all evening said of him afterwards: “Next to people like him, we feel very small.”

The fraud for which Elizabeth Holmes, founder of Theranos, was sentenced to 11 years in prison provides another example of the confusion that can exist between plausibility and truth. Here again, the scientific polish Holmes acquired during her two years of study at Stanford University allowed her to build a plausible narrative around the possibility of running a whole battery of blood tests on a single drop of blood taken from a fingertip.

Based on this narrative, Holmes was able to convince an impressive number of prominent figures to join her, either as investors (Rupert Murdoch, Larry Ellison) or as members of Theranos’ board of directors (Henry Kissinger, George Shultz).

However, from the beginning, some people were not fooled.

For example, many medical laboratory experts were skeptical of Holmes’ promises, because some large molecules, such as proteins and lipids, are not present in uniform concentrations throughout the body; blood taken from a fingertip is therefore not equivalent to blood drawn directly from a vein. In May 2016, Warren Buffett himself criticized Theranos for appointing mostly political and military leaders, rather than blood-testing experts, to its board of directors.

And when Theranos finally went out of business in September 2018, everyone had to face the fact that plausibility and truth cover very different realities.

Form and substance

According to a common belief, ChatGPT is constantly learning and thus refining its “knowledge”.

Here again, this is plain wrong, and for a very good reason.

Indeed, if you let a conversational agent feed on what it finds on the Internet, especially on its conversations with Internet users, it will soon become racist and insulting, and will peddle all sorts of conspiracy theories. This is precisely what happened in 2016 when Microsoft plugged its conversational agent Tay into Twitter.

Yet Microsoft had great expectations: the more Tay interacted with Internet users via Twitter, the more intelligent and relevant it was supposed to become. The reality was quite different: in less than 24 hours, Tay turned into a hateful, racist, and misogynistic troll, stating for example “I fucking hate feminists!” or “Hitler was right, I hate Jews!”

The experience was so disastrous that Microsoft pulled the plug on Tay just two days after its launch and apologized for the offending tweets. Again, nothing surprising here: an artificial conversational agent simply regurgitates the most likely statements in the set of texts it is trained on, in this case, messages posted on Twitter. And Twitter in particular, but more generally the Internet, is known to carry floods of hateful and offensive messages.

But then, how does ChatGPT resist this kind of drift and produce answers that are often reasonable, even correct?

The answer is confoundingly trivial: humans, many humans in fact, paid less than $2/hour, manually cleaned the data on which ChatGPT was trained.

In a way, ChatGPT is like a very young child repeating what he hears at home, without understanding what he says. If his parents are racist and conspiratorial, his statements will be racist and conspiratorial. If his parents are humanistic and reasonable, his statements will be too.

This is where the ingenuity, bordering on cunning, of ChatGPT’s designers lies: by training their conversational agent on data cleansed from the most outrageous and dubious content, the latter usually produces only reasonable, even correct, statements.

The other side of the coin is that ChatGPT cannot learn without being strictly supervised by humans who guarantee the orthodoxy of its sources, at the risk of the same drift as Tay, its abusive colleague from Microsoft. This is also why the “knowledge” of GPT-3 does not extend beyond September 2021.

Ultimately, if ChatGPT doesn’t spout too much nonsense, even though it regularly lies through its teeth, it is not because it is able to distinguish true from false but simply because it has been trained only on scrupulously controlled data. For example, ChatGPT does not know that the earth is round: it has merely been trained on a corpus of texts containing no sentence of the type “the earth is flat”. There is therefore zero probability that it will end the sentence “the earth is…” with the adjective “flat”.

The spark that… broke the camel’s back

To understand the difference between probabilistic natural language generation and semantic understanding, let’s go back to the example of idioms, and suppose that you have trained ChatGPT on all the idioms of the English language.

After this training, if you say “this is the straw that…”, it will answer “…broke the camel’s back”, whereas if you say “this is the spark that…”, it will answer “…ignited the fire”. Yet ChatGPT has no idea what a spark or a straw is, let alone what these metaphors mean.
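To caricature the point, here is a hypothetical sketch of what the idiom game looks like from the model’s side. The real system assigns probabilities to word fragments rather than to whole phrases, and the counts below are invented for illustration:

    # Continuations observed in a hypothetical corpus of idioms, with
    # invented counts of how many times each one was seen.
    observed = {
        "this is the straw that": {"broke the camel's back": 412},
        "this is the spark that": {"ignited the fire": 287},
    }

    def p_continuation(prefix, continuation):
        """Probability of a continuation, estimated by relative frequency."""
        counts = observed.get(prefix, {})
        total = sum(counts.values())
        return counts.get(continuation, 0) / total if total else 0.0

    print(p_continuation("this is the straw that", "broke the camel's back"))  # 1.0
    # The crossed idiom was never seen in training, so its probability is zero,
    # however intelligible (or funny) a human might find it:
    print(p_continuation("this is the spark that", "broke the camel's back"))  # 0.0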

Conversely, any human can understand the meaning (and the humor) of the two literally improbable idioms “this is the straw that ignited the fire” and “this is the spark that broke the camel’s back”, even if they have never heard them before.

Several semantic levels are intertwined in this human understanding. First, we know that a spark can start a fire and that one more straw can break the back of an already overloaded camel.

Second, we understand that these two idioms metaphorically express the fact that a seemingly insignificant element (a spark, a straw) is sometimes enough to trigger a far more significant outcome (a fire, a broken back). There is, of course, a nuance between the two, since the one about the straw also carries the idea of an accumulation preceding the catastrophe.

Finally, when the two idioms are crossed, we understand that the metaphorical meaning has not fundamentally changed, but that a touch of absurd humor has been added.

Nothing of the sort happens in ChatGPT, unless, of course, its next version has been trained on a corpus containing the article you are reading right now. And even then, if it becomes able to produce the sentence “this is the spark that broke the camel’s back”, it will only be because this sequence of words has become probable in a specific context.

What does all this say about us?

Besides training ChatGPT on data whose orthodoxy was strictly controlled by (methodically exploited) humans, the other innovation of its designers was to make it available to the general public. Similar systems already existed in a number of artificial intelligence research laboratories around the world, but they were accessible only to experts in the field.

Given this public access, a question arises: what does the astonishment that accompanied the launch of ChatGPT and the resulting confusion surrounding the notions of intelligence, creativity, and even empathy say about us?

In an attempt to answer this question, let’s first recall that computer science is defined as the automation of information processing by computational automata and that artificial intelligence is no exception to this definition. Moreover, let’s remember Jean Piaget’s definition of human intelligence: “Intelligence is not what we know but what we do when we don’t know.”

But then why do we confuse notions as distant as human intelligence and the automatic production of texts on a statistical and algorithmic basis?

One possible answer lies in the fact that, in many circumstances, we behave like automata, simply reproducing, with minor variations, standardized and therefore predictable discourse and behavior.

The question is thus not so much whether automata are (or will be one day) capable of behaving like humans, but to what extent humans behave like automata in many circumstances.

After all, we probably mobilize our intelligence, empathy, and creativity much less often than we think.

In all fields, including those where creativity is supposed to reign supreme, such as scientific research or art, conformity has taken hold, as it significantly increases the probability of producing results that are perceived as valuable by the majority.

For example, when a film or series that breaks with the codes of its genre is a great success, a flurry of similar films and series is produced in the following years, and what was initially original gradually becomes mainstream.

Similarly, research has never produced as many scientific papers as in the past twenty years, especially in the academic world, but most researchers know that if one wants to publish quickly, the best strategy is to write the n-th article on a hot topic rather than to risk launching a radically unconventional idea. This is especially true for researchers at the start of their careers.

So much so that some researchers are already looking forward (more or less openly) to being able to delegate to ChatGPT the automatic generation of entire parts of their next scientific papers.

Where will this lead us?

If artificial intelligence is indeed a risk for humanity, it is not that of a great replacement of humans by machines. It is safe to assume that in the short and medium term, humans will not be replaced by intelligent machines, but by other humans, far fewer in number, who know how to use tools like ChatGPT in a meaningful way. The real danger, then, probably lies in the loss, or at least the significant weakening, in a large number of humans, of the capacity to construct their own vision of the world, to express it in an articulate way, and of course to use their creativity to make it evolve.

Let’s take a few examples.

If, tomorrow, most graphic designers change jobs and the overwhelming majority of images published on the Internet are generated by artificial intelligence from images previously produced by humans, there will inevitably come a time when artificial intelligence re-ingests its own productions and generates, ad nauseam, bland variations of what already exists.

Now, just like the texts produced by ChatGPT, the images generated by artificial intelligence are ultimately only the result of a statistical average computed over various parameters, weighted toward what the majority of users find most desirable. Put differently, artificial intelligence can certainly produce a very large number of variations from a set of images, but when most of that set is itself made up of AI-generated images, the system ends up going in circles, chasing its own tail.

As we have seen, this tendency to rehash also exists when we produce artistic or scientific content ourselves, because of our need for approval from the public or from our peers. Fortunately, from time to time, genuinely original productions appear, allowing the various artistic and scientific disciplines to progress.

With the advent of tools like ChatGPT, however, this rehashing may change radically in scale, moving from craftsmanship to the industrial production of redundant content.

And Shannon’s information theory teaches us that when all is redundancy, there is no more information: if we only rehash what we already know from having expressed it over and over again, we learn nothing.
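Shannon’s measure makes this precise: the information content of a source is its entropy, H = -Σ p(x) · log2 p(x), which falls to zero as soon as the source always emits the same message. A minimal illustration in Python:

    from math import log2

    def entropy(probabilities):
        """Shannon entropy in bits: the sum of -p * log2(p) over all outcomes."""
        return sum(-p * log2(p) for p in probabilities if p > 0)

    # A source that can say eight different things, all equally likely:
    print(entropy([1 / 8] * 8))  # -> 3.0 bits per message
    # A perfectly redundant source that always repeats the same thing:
    print(entropy([1.0]))        # -> 0.0 bits: no information at all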

Thus, if most literary, graphic, cinematographic, or even scientific productions were merely statistical reconfigurations of existing content, human creativity would eventually suffer a serious setback.

One can of course object that a solution to this problem could be to regularly inject some randomness into the process of automatic content generation, but again, this is a gross oversimplification of the way our human intelligence advances art and science.

In music, can we seriously believe that Romanticism is a random variation of Classicism? And one can ask the same question of Impressionist painting in relation to the so-called realist painting that historically preceded it.

Artistic and scientific revolutions are not born from a random injection of ideas or concepts, but rather from a deep understanding of the world by the artists and scientists at the origin of these revolutions.

Moreover, if creativity is indeed the process of having original ideas that have value, as defined by Ken Robinson, then the central question is who determines what has value and on what basis. Here again, only a deep understanding of the world, and of the disciplines in which our creativity develops, can resolve this question.

A sense of déjà-vu?

While the hopes and fears generated by ChatGPT may seem unprecedented, in reality, with each significant advance in computing, we revisit the same myths and fantasies.

When the first electronic calculators became available to the general public, there was concern about whether young children should be allowed to use them. Was it still necessary to teach the basic rules of arithmetic, such as the commutativity and associativity of certain operations?

The answers were nuanced, but in general it was decided that calculators were fabulous tools for those who already knew how to calculate and had assimilated the basics of arithmetic. The principles of arithmetic therefore continued to be taught, although the use of calculators was allowed from a certain grade level onward.

This approach was based on common sense: how could our educational system produce people capable of advancing our knowledge of mathematics, or any other science that uses mathematics as a language, if we replaced the understanding of its basic principles with the mere ability to press buttons?

More recently, by contrast, the COVID period showed how much the supposed mastery of digital tools by so-called “digital natives” was a myth. Unlike with calculators, it had been assumed for years that digital natives needed only to be exposed to computing tools from a very young age to acquire a solid and thorough understanding of the fundamental principles of computing.

However, it turned out that digital natives had in fact developed only contingent and superficial knowledge, essentially tied to the use of certain specific tools. For example, many young people today have no idea how data is stored, to the point where the notions of file, disk, or storage server are often unknown to them.

A good servant but a bad master

Like all tools, artificial intelligence in general, and ChatGPT in particular, makes a fabulous servant for those who know how to use it, but a very bad master for those who are totally dependent on it.

Again, this situation is not new: automatic translators, spellcheckers, integrated development environments, and the Internet itself are formidably effective tools in the hands of informed users, but they can be extremely dangerous when handled carelessly by naive ones.

In the end, it is up to us to decide whether artificial intelligence will follow in the footsteps of the computer which, in Steve Jobs’ metaphor, is a kind of bicycle for the mind: it does not replace our muscular strength but multiplies it, allowing us to go further.

With artificial intelligence, we are somehow moving to the electric bike: the multiplication factor is undoubtedly greater, but the principle remains the same, since the cyclist provides the basic movement without which the electric bike remains still.

The alternative to this scenario would be for artificial intelligence to become a kind of autonomous SUV, crushing everything in its path: the infinite nuances of our human intelligence, our intuition, our creativity, our sensitivity, and our empathy with the world.

If we want to avoid this second scenario, it is thus up to us to determine not only when and how to use artificial intelligence, but also and above all when and how not to do so.

Optimist and pessimist, two sides of the same coin!

According to a quote often mistakenly attributed to Winston Churchill, an optimist sees an opportunity in every difficulty, while a pessimist sees a difficulty in every opportunity.

This statement crystallizes the common tendency to oppose optimism and pessimism. On this view, the two mindsets are exact opposites.

On the one hand, there would be people with unshakable confidence in humanity, the world, and the future, positive that everything will work out for the best in the end. On the other hand, there would be people who are deeply skeptical, convinced that the worst is yet to come, whether in the form of betrayal, catastrophe, or even the end of time.

This opposition is reinforced by the fact that when an optimist meets a pessimist, their respective views usually collide head-on.

The optimist usually sees the pessimist as a hopeless and depressing person. As a result, he tries to prove to his counterpart that there are reasons for hope and for looking forward to the future.

Conversely, the pessimist sees the optimist as naive, even a little foolish, someone who has not yet realized the dramatic state of the world around him. It is also interesting to note that the pessimist often describes himself as a realist because, unlike the optimist blinded by his candor, only he is capable of perceiving reality objectively. This is why the pessimist usually tries to open the optimist’s eyes to this supposedly objective reality.

A superficial antagonism

This antagonism seems so obvious that it has become part of popular wisdom and is rarely questioned. However, on closer inspection, these two existential postures that seem to contradict each other are in fact two extreme variants of the same fundamental posture: fatalism.

Indeed, the optimist invites us to look at the future with absolute confidence and to consider that everything will be fine whatever we do. Of course, our actions can sometimes accelerate the movement, or even make the outcome more positive than we had hoped, but in the end there is no doubt that every difficulty will find its happy ending.

In the same way, the pessimist urges us to abandon all hope and warns us that no matter what we do, tragedy and catastrophe will ensue. Here again, our actions can at best alleviate some of the pain, but the tragic outcome is not in doubt.

But then, if optimism and pessimism are really just two sides of the same coin, namely fatalism, what mindset is radically opposed to it? What deeper dichotomy can we propose that is not ultimately a continuum whose extremes meet?

Fatalism vs. voluntarism

Since fatalism postulates that our will has fundamentally no control over reality, nor over our human existence, voluntarism is logically its exact opposite. Indeed, if the fatalist defers to fate, the voluntarist is convinced that his determination is the essential driving force of his existence.

Being a voluntarist essentially means thinking that the best way to predict the future is to invent it, according to the famous quote attributed, rightly this time, to Alan Kay. Kay knows what he is talking about: he worked for many years at Xerox PARC, the renowned computer research laboratory in Palo Alto, where he contributed to many of the innovations that are cornerstones of computing today.

Reasonable man versus unreasonable man

More generally, we can also relate the opposition between voluntarism and fatalism to the antagonism expressed by George Bernard Shaw between the reasonable man, who adapts to the world, and the unreasonable man who persists in wanting to adapt the world to himself. From this antagonism, Shaw concludes that all progress comes ultimately from unreasonable men.

Of course, voluntarism has its own limits and not acknowledging them can lead to problematic, even catastrophic, excesses. Indeed, the refusal to accept any limit to our will, in particular the limits imposed by nature, is the source of many of today’s challenges, especially in the environmental field.

One can nevertheless try to reconcile Shaw’s voluntarism with these challenges by noting that in his vision of the world, typical of the 19th century and the first half of the 20th, man is ontologically dissociated from the rest of the world. That conception is radically challenged today: we humans are not separate from the world, especially not from nature, which has so far provided the conditions for our existence.

If we consider ourselves to be an integral part of the world, adapting the world can also imply profoundly transforming ourselves, through our modes of production and consumption, and more broadly, our ways of life.

Yet it would be a mistake to see this as a new form of fatalism: passively accepting a situation imposed by nature and deliberately transforming our behavior to cope with it are radically different attitudes. Indeed, the will to deeply transform our vision of the world and our way of life requires a determination that has nothing in common with fatalism. On the contrary, the ambition to achieve such a transformation requires not only a solid will, but also a certain degree of folly.

So ultimately, who can say whether this ambition is that of a reasonable man or that of an unreasonable man?
