A redditor named JohnnyCupcakes did a simple experiment: He scored an own goal in 80 matches. He then compared his win ratio during those matches to his win ratio in previous matches. Based on the fact that his win ratio grew by 30 %, he concluded that scripting / momentum exists. Seems legit? Don’t worry, it isn’t.
In a small series of articles, we scrutinize home-brewed experiments aimed at testing or proving scripting, handicapping and momentum.
In this article, we take a look at JohnnyCupcakes’ post, which was published on Reddit in January 2018. JohnnyCupcakes didn’t exactly write a full thesis on his method. His full post, which is pretty much all the information he has made available, is quoted below:
“80 games deep into this experiment and I can actually say the momentum turns your way later on in the game. I ended up winning about 30% more games when I did this. And that’s even with starting down a goal. The games an absolute joke. I even noticed sometimes If I score 2 own goals about 40 mins into the game I can probably score 3 within 5 minutes at some point. I don’t think FIFA is about being better then your opponent but about knowing the scripting more then your opponent. And it actually shows.”
(–Post on Reddit)
Attempts to get further insight into the basis of the conclusion unfortunately haven’t produced any results.
The experiment is an example of how easy it is to convince people who already agree with you that you have evidence proving their point, when in fact you have absolutely nothing.
So, what generated that 30 % increase?
The first explanation that springs to mind is bias. JohnnyCupcakes studies his own match results, meaning that there is a risk that his results are influenced by his own predefined beliefs.
Even though JohnnyCupcakes perhaps doesn’t acknowledge that his predefined beliefs influenced his results, numerous studies have confirmed the existence of an observer-expectancy bias in humans. Observer-expectancy bias is a psychological mechanism which causes researchers to unconsciously influence their observations in accordance with their own beliefs. This problem is of course particularly prevalent when you are studying your own match results because you are in very direct control over the observations.
When I raised the issue of observer-expectancy bias with JohnnyCupcakes, he claimed that he came “from a very non bias stand point hoping [momentum] wouldn’t be true”. However, his later comments would seem to suggest the opposite:
“FIFA has come to the point that scripting isn’t even up for discussion whether it exists or not. I personally believe anyone who thinks it doesn’t exist is a fucking idiot.”
(– JohnnyCupcakes comments to my post)
It is definitely possible that JohnnyCupcakes won more matches after scoring an own goal because he expected to do so. And the failure to address and remove that risk is basically all that is needed to reject this study completely.
But this is not the only problem with his experiment.
The alleged 30 % increase in win rate
JohnnyCupcakes might have won 30 % more matches during his 80-match trial. But that information alone does not allow us to conclude that he in general wins more matches when he has scored on himself. And by “in general”, I mean outside the narrow scope of a sample.
In statistics, we talk about significance: An increase from 3 wins in 4 matches to 4 in 4 constitutes a roughly 30 % increase, but it’s not a statistically significant increase, because the sample size makes it impossible to rule out that the change was a random fluctuation, i.e. sampling error.
Sampling error is a concern when you are dealing with an 80-match sample. That concern is accentuated by the fact that we don’t know JohnnyCupcakes’ normal win rate.
If JohnnyCupcakes’ “normal” win ratio was 20 %, we would expect him to win 16 in 80 matches under “normal” circumstances. A 30 % growth would correspond to him winning roughly 5 more matches than expected (16 × 1.3 ≈ 21). Even to the naked, untrained eye it is very clear that a swing of 5 matches in a sample of 80 isn’t a statistically significant increase.
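To make that concrete, here is a minimal sketch using an exact binomial test. The 20 % baseline win rate is the hypothetical figure from above, not JohnnyCupcakes’ actual data: we ask how likely 21 or more wins in 80 matches would be by pure chance if nothing changed.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k wins
    in n matches if the true win rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical baseline: a 20 % win rate, i.e. 16 expected wins in 80 matches.
# 21 wins is roughly the claimed 30 % increase (16 * 1.3 ≈ 21).
p_value = binom_tail(21, 80, 0.20)

# p_value comes out well above the conventional 0.05 threshold,
# so the extra wins are entirely consistent with pure chance.
print(p_value > 0.05)
```

In other words: even if the reported increase is taken at face value, a sample of 80 matches simply cannot distinguish it from random noise.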
On top of that, the fact that we don’t know the number of matches played prior to the 80-match trial is an issue in its own right.
Was there a 30 % increase?
JohnnyCupcakes might not realize this, but his study involves two samples: (1) a sample covering his matches up until the point where he started scoring on himself and (2) a sample covering the 80 matches where he scored on himself.
Essentially, JohnnyCupcakes doesn’t know his exact win ratio (the probability of winning a random match) either before or after he started scoring on himself, because both samples involve statistical uncertainty.
Both samples are uncertain, so what we would need to do here is to calculate the confidence intervals for both samples and ensure that they don’t overlap.
“But how likely is it that a sample showing a 30 % increase in fact covers a decrease?”, you might ask.
The correct answer is that we can’t rule that out with any reasonable degree of certainty. But it’s definitely possible:
In a sample of 80 matches, we may find the win rate to be 38 %. This means that the general win ratio is between 27 and 49 % with 95 % certainty. The confidence interval is this wide because of the small sample.
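That interval can be reproduced with the standard normal approximation for a proportion. A quick sketch (the 38 % win rate is the illustrative figure above, not JohnnyCupcakes’ actual data):

```python
from math import sqrt

def wald_ci(p, n, z=1.96):
    """95 % normal-approximation (Wald) confidence interval for a proportion
    p observed in a sample of n matches."""
    se = sqrt(p * (1 - p) / n)  # standard error shrinks as n grows
    return p - z * se, p + z * se

low, high = wald_ci(0.38, 80)
print(round(low, 2), round(high, 2))  # 0.27 0.49
```

Because the standard error scales with 1/√n, the only way to narrow that 22-percentage-point interval is more matches: at 800 matches instead of 80, the same 38 % win rate would pin the true ratio down to within about ±3.4 percentage points.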
But since we are dealing with two samples, we need to calculate the confidence intervals for both samples and check that they don’t overlap. If they do overlap, the study is inconclusive.
We don’t know how many matches JohnnyCupcakes played prior to his 80-match trial, and we don’t know the absolute increase in win ratio. Without that information, it’s impossible for us and for him to conclude that the 30 % increase couldn’t be a pure coincidence, because we can’t calculate confidence intervals and check whether they overlap.
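To illustrate why those missing numbers matter, here is a sketch with invented figures, since JohnnyCupcakes never published his: 20 wins in 100 “normal” matches versus 21 wins in the 80-match trial, i.e. roughly the claimed 30 % relative increase in win rate.

```python
from math import sqrt

def wald_ci(wins, n, z=1.96):
    """95 % normal-approximation confidence interval for a win ratio,
    given the number of wins in a sample of n matches."""
    p = wins / n
    se = sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Invented numbers -- JohnnyCupcakes never published his:
before = wald_ci(20, 100)  # 20 % win rate -> roughly (0.12, 0.28)
after = wald_ci(21, 80)    # ~26 % win rate -> roughly (0.17, 0.36)

# The intervals overlap, so the apparent 30 % increase could be
# nothing more than sampling error.
print(after[0] < before[1])  # True
```

With samples this small, even a seemingly impressive relative increase sits comfortably inside the uncertainty of both measurements, which is exactly why the study is inconclusive.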
Wrapping it up
Doing a study on your own matches is – bluntly put – a waste of time because it will be inconclusive almost no matter what you are trying to research due to confirmation bias.
We previously examined the exact same claim that JohnnyCupcakes is studying here: namely, that it’s an advantage to be trailing because matches are being leveled by big, bad EA. Unlike JohnnyCupcakes, we (a) didn’t base the experiment on our own matches, and (b) used a sufficiently large sample.
We ended up concluding that there was no basis for the claim that matches are being made even. So, now you have an undocumented, methodologically flawed study saying that momentum exists, and a fully documented, methodologically sound study that says it doesn’t. Which sounds more credible?