Recently, a peculiar phenomenon has taken the internet by storm. Users of the popular language model ChatGPT have noticed a consistent pattern when asking it to roll a die.

Surprisingly, some claim that the resulting number is almost always 4. This revelation has sparked curiosity and led to a series of experiments to determine whether ChatGPT struggles with probability.

Does ChatGPT AI struggle with probability?

The first indications of this dice rolling anomaly appeared on social media platforms such as Reddit and Twitter (1,2,3,4).

[Image: ChatGPT AI struggling with probability (source)]

Users shared their experiences of consistently receiving the number 4 whenever they asked ChatGPT to roll a six-sided die.

Naturally, this peculiar behavior has led to questions regarding ChatGPT’s understanding of probability.

Rolling a standard six-sided die should yield a uniform distribution, with each face having a 1/6 probability of appearing. However, ChatGPT’s consistent output of 4 seems to defy these expectations.
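For reference, here is a quick Python sketch (unrelated to how ChatGPT actually generates its answers) of the baseline a genuinely fair six-sided die should follow, with each face turning up roughly one time in six:

```python
# Minimal sketch of a fair six-sided die using Python's random module.
# This only illustrates the uniform baseline a real dice roll should follow;
# it says nothing about ChatGPT's internals.
import random
from collections import Counter

rolls = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(rolls)

for face in range(1, 7):
    share = counts[face] / len(rolls)
    print(f"Face {face}: {share:.3f}")  # each value should hover around 1/6 ≈ 0.167
```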

[Image: Who needs real dice when ChatGPT can just “roll” a 4 for you? (source)]

One plausible explanation for ChatGPT’s consistent dice rolls of 4 could be attributed to the model’s tendency to default to a specific value when presented with an ambiguous or undefined task.

This behavior might be considered a form of laziness, as ChatGPT chooses the most straightforward option without considering the principles of probability.

To explore this hypothesis further, users experimented by asking ChatGPT to roll the die repeatedly, noting how many rolls it took before a number matched the previous result.

Surprisingly, the number of attempts needed to reproduce the previous number was strikingly low:

[Image: ChatGPT dice rolls repeating the same number (source)]
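For comparison, a fair die would not repeat itself that quickly on average: each new roll has only a 1/6 chance of matching the one before it, so roughly six attempts would be expected before a repeat. The sketch below is a plain simulation of that baseline, not of ChatGPT itself:

```python
# Baseline for the repeat experiment: with a fair die, each roll has a 1/6
# chance of matching the roll immediately before it, so the average number
# of attempts until a repeat works out to about 6.
import random

def attempts_until_repeat() -> int:
    previous = random.randint(1, 6)
    attempts = 0
    while True:
        attempts += 1
        current = random.randint(1, 6)
        if current == previous:
            return attempts
        previous = current

trials = 100_000
average = sum(attempts_until_repeat() for _ in range(trials)) / trials
print(f"Average attempts until a repeat: {average:.2f}")  # roughly 6 for a fair die
```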

It’s also possible that ChatGPT is simply rounding the mathematical average of 1-6, which is 3.5, up to 4:

i believe this is due to it “believing” that the mathematical average of 1-6 is 3.5 (which it is) since you cant roll a 3.5 any extra would be a 4
Source
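That is only a reader’s speculation, but the arithmetic behind it is easy to check. The snippet below verifies the claim itself (the average of 1-6 and how it rounds), not anything ChatGPT actually does internally:

```python
# The arithmetic behind the rounding theory, nothing more:
# the mean of a fair six-sided die is 3.5, and rounding 3.5 to an integer gives 4.
faces = range(1, 7)
mean = sum(faces) / len(faces)  # (1 + 2 + 3 + 4 + 5 + 6) / 6 = 21 / 6
print(mean)         # 3.5
print(round(mean))  # 4 (Python rounds halves to the nearest even integer, here 4)
```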

However, it is essential to note that these experiments and theories provide anecdotal evidence rather than definitive proof.

This viral phenomenon surrounding ChatGPT has captivated internet users and sparked intriguing discussions about the model’s understanding of probability and where it appears to fall short.

Either way, it remains unclear whether this behavior is a result of technical limitations or the quirks of its training data.

As the field of AI progresses, it is crucial to unravel such mysteries to ensure the reliability and accuracy of these powerful tools.

It will be interesting to hear what ChatGPT devs have to say about this revelation. We’ll be here to let you know if and when they respond.
