
Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science

There is a logical flaw in the statistical methods used across experimental science. This fault is not a minor academic quibble: it underlies a reproducibility crisis now threatening entire disciplines. In an increasingly statistics-reliant society, this same deeply rooted error shapes decisions in medicine, law, and public policy with profound consequences. The foundation of the problem is a misunderstanding of probability and its role in making inferences from observations.

Aubrey Clayton traces the history of how statistics went astray, beginning with the groundbreaking work of the seventeenth-century mathematician Jacob Bernoulli and winding through gambling, astronomy, and genetics. Clayton recounts the feuds among rival schools of statistics, exploring the surprisingly human problems that gave rise to the discipline and the all-too-human shortcomings that derailed it. He highlights how influential nineteenth- and twentieth-century figures developed a statistical methodology they claimed was purely objective in order to silence critics of their political agendas, including eugenics.

Clayton provides a clear account of the mathematics and logic of probability, conveying complex concepts accessibly for readers interested in the statistical methods that frame our understanding of the world. He contends that we need to take a Bayesian approach―that is, to incorporate prior knowledge when reasoning with incomplete information―in order to resolve the crisis. Ranging across math, philosophy, and culture, Bernoulli’s Fallacy explains why something has gone wrong with how we use data―and how to fix it.

368 pages, Hardcover

Published August 3, 2021


About the author

Aubrey Clayton

2 books, 14 followers

Ratings & Reviews



Community Reviews

5 stars: 157 (46%)
4 stars: 129 (38%)
3 stars: 38 (11%)
2 stars: 13 (3%)
1 star: 2 (<1%)
BlackOxford
1,095 reviews, 68.9k followers
January 6, 2022
Spotting Scientific Method on the Hoof

Will I make (or lose) any money betting heads on a coin flip 100 times? Probably not.

On the other hand, if I get a positive result from molecular genetic testing for FGD1 gene mutations, does this mean I probably have Aarskog syndrome? Almost certainly not.

The difference in these two situations is critical. In the first I already know the probabilities involved. Half the coin flips will be heads, the other half tails. In the second, I want to know the probability that I have the disease given that I tested positive for it.

My likely first reaction to the test results is ‘But how accurate is the test?’ Wrong question. Even if the test has a very high accuracy rate, the occurrence of Aarskog syndrome in the population - estimated at less than 1 in 25,000 - is so rare that, given no other information, it is far more likely that the test is wrong than that I actually have the syndrome.

The overall incidence of the disease in this example is called a prior probability. Prior probability is like the knowledge that there are only two equally likely outcomes - heads or tails - in the first example. It forms the background to the practical situation. At some point in the past some researcher estimated this prior probability based on some previous guess about ever ‘more prior’ probabilities, about the incidence of the disease, perhaps through the number of published papers about it (there have been about 60 worldwide).
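A quick back-of-the-envelope calculation makes the point concrete. Only the roughly 1-in-25,000 prevalence comes from the example above; the test's sensitivity and false-positive rate below are numbers I am assuming purely for illustration.

```python
# Bayes' theorem applied to the screening example above.
# The prevalence is from the example; the test characteristics are assumed.
prevalence = 1 / 25_000          # prior probability of Aarskog syndrome
sensitivity = 0.99               # assumed P(positive | disease)
false_positive_rate = 0.01       # assumed P(positive | no disease)

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) ≈ {p_disease_given_positive:.2%}")
# ≈ 0.4%: even a 99%-accurate test is far more likely to be wrong
# than I am to have the syndrome, because the prior is so small.
```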

In other words, what we already know is extremely important in interpreting new information. The more unexpected, strange, or novel an event, the more evidence we need to take it seriously. This is a common-sensical idea but it has profound implications, one of which, according to Aubrey Clayton, is that “It is impossible to ‘measure’ a probability by experimentation.” And this is another way of saying that “There is no such thing as ‘objective’ probability.” And therefore that “‘Rejecting’ or ‘accepting’ a hypothesis is not the proper function of statistics and is, in fact, dangerously misleading and destructive.”

And yet ignoring what we already know is exactly what most researchers do, especially (but not only) in the social sciences. This mistake is not trivial. According to Clayton:
“These methods are not wrong in a minor way, in the sense that Newtonian physics is technically just a low-velocity, constant-gravitational approximation to the truth but still allows us successfully to build bridges and trains. They are simply and irredeemably wrong. They are logically bankrupt… [Since] the growth of statistical methods represents perhaps the greatest transformation in the practice of science since the Enlightenment. The suggestion that the fundamental logic underlying these methods is broken should be terrifying.”


Of course these claims will be controversial. I take Clayton as authoritative when he says that “No two authors, it seems, have ever completely agreed on the foundations of probability and statistics, often not even with themselves.” If so, this book is yet another example of the inevitable (and necessary) instability of what is casually referred to as ‘scientific method.’ If such a thing exists at all, it is demonstrated in this kind of critique of established procedures, a sort of intellectual self-immolation.
J Earl
2,151 reviews, 95 followers
May 20, 2021
Bernoulli's Fallacy by Aubrey Clayton is a well-argued case against what has passed for probability over the past century plus. While his explanations are straightforward and the math is presented in a clear manner, it is still a read that will, and should, take more effort than many other books. The reward, however, is well worth the effort.

While some may mistakenly think this is just some feud within academia and so doesn't really matter beyond those walls, that is wrong and Clayton makes that clear with many of the social science as well as science examples he cites. When people's lives can be harmed if not ended at least partly because of improper use of data expressed as probability, then this is anything but an ivory tower debate. It takes place largely within those walls because that is where these theories are taught and because the "experts" who pronounce the so-called probabilities on policy issues are still pulled from academia's ranks.

While my first undergraduate degree was a mathematics-heavy one (EE), that was a long time ago, and my subsequent degrees were all in the humanities and social sciences. So I am not going to try to explain what Clayton goes through. To put it as basically as possible, what passes for probability is often just frequency, with little or no predictive or explanatory value. Yet it is used to predict and to explain, which then becomes part of future policy, which more often than not fails.

A good example of a Bayesian approach is an article Clayton wrote for the Boston Globe in June of last year about the statistical paradox of police killings. Without taking prior information into account when assessing limited or skewed information, a faulty and quite deadly conclusion can be made which seems, on the surface, to be based on sound scientific information. That article, quite short, is well worth looking up to offer a real world glimpse of what Clayton is arguing against.

While the book is dense, it is accessible to most readers who either have some math background (especially if you still use it frequently) or are willing to read a bit slower and wrestle with the concepts. Clayton's explanations and examples, as well as the history lesson, can be read largely without too much concern for understanding the nuance of every formula he shows. If you understand that a figure in a particular place in a formula can have an outsized effect on the result, then understanding the nuance is less important, since Clayton explains what we need to understand for the big-picture argument. In other words, if you're interested in or concerned about the reproducibility crisis in science as well as the social sciences, this book will be well worth any effort you might have to put into it. But it is, bottom line, accessible to most who want to understand.

Using my experience as an example, I had to progress rather slowly and make an effort to understand each bit of information, each aspect of the history as well as of the mathematics. I feel like I managed to do so at a reasonable level for a first read. What I haven't yet done but anticipate doing with subsequent readings is connecting these still, in my mind, separate pieces into a better understanding of the whole. Clayton's explanations allowed me to understand the big picture without every detail being in perfect focus. Now I can connect the dots (my small pieces of understanding) to make the big picture come into sharper focus. Okay, maybe I didn't help with this paragraph, but maybe someone will understand what I am trying to say.

Quick aside, ignore the "sky is falling" people who imply that all statistics and all that we do with them is pointless, that is throwing the baby out with the bath water, and probably makes the screamer feel smart. This is a wide ranging problem and touches almost every aspect of policy making as well as research, but it is not a case of "everything that has been done before is now meaningless." Keep the data and use it better, don't panic and throw everything out and hyperventilate.

Also, to clear up some misunderstandings, the review copy I had, both the Kindle version and the one I read on Adobe Digital Editions, had substantial notes (many of which were bibliographic in nature) as well as several pages of a bibliography. So anyone interested in checking Clayton's sources can do so. Not sure why the mix up, but rest assured, this is both well-researched and well-documented.

Reviewed from a copy made available by the publisher via NetGalley.
s.
51 reviews
July 16, 2023
There's a readable pop-science history running through the book that was somewhat interesting, but the overall concern here is a strong polemic against frequentist methods. All feels a bit cheap, superficial, even tasteless - specifically the strong implications throughout that eugenics was somehow a result of frequentism, as if the methodological tweak of including prior probabilities would have redeemed the horrific course of history. There's a very interesting epistemological difference between the two sides that remains philosophically and historically under-explored here but I don't buy at all that it's a case of one being correct and the other racist.
Amirmansour
84 reviews, 5 followers
March 12, 2022
A fantastic read, talking about an old problem.
Frequentism vs. Bayesianism.
Alberto
302 reviews, 12 followers
April 9, 2023
I wanted to give it a higher rating for making me think about statistics deeply, but ultimately the book is too flawed for that.

1) Acts as if Bayes's Theorem exists solely in Bayesian analysis and is not in the frequentist toolbox. In particular, he implies that the famous Monty Hall Problem cannot be solved using frequentist methods.
2) Ignores the problems with priors.
3) Outright mocks the frequentist approach to calculating the probability of getting a bridge hand of 13 clubs and replaces it with nothing but hand-waving arguments about understanding the nature of a card shuffle (i.e., the chaotic physics involved) to determine a prior without giving us any way to actually estimate such a prior.
4) More generally in many of the situations he describes, even attempting to determine a prior is risible compared to the frequentist methods he derides. His appeal to "intuition" is actually nothing but a hidden appeal to frequentist methods.
5) Is incorrect in claiming Bayesian analysis is not in widespread use.
6) Most damning of all, he blames frequentist statistical methods for the replication crises in social science, which has well-documented causes far beyond the statistics used when writing papers.

Ultimately it's worth noting that the worst thing about this book is that it's trying to reach a wider audience when it is actually accessible only to those with some training in probability and statistics. A reader without prior training is likely to walk away from this book thinking he has learned something profound when in fact nothing of the sort has happened.
Dominik
10 reviews
February 13, 2022
Bernoulli's fallacy consists in thinking that one can answer questions about the probability of a hypothesis by thinking only about the probability of an observation. This is not news to anyone familiar with Bayesian statistics, but Clayton lays the foundations for understanding why it is a fallacy so carefully that I would recommend this to anyone with an interest in scientific thinking. The different interpretations of probability are laid out very clearly, but instead of simply distinguishing between the Bayesian and frequentist views, Clayton takes a Jaynesian position and treats probability theory as logic under uncertainty. This allows us to enjoy some of the classic paradoxes of probability like the boy-girl paradox from a fresh perspective. The history of why frequentism became the dominant view starting in the 19th century sometimes leans a bit too heavily on Stigler's work but then becomes interesting again at the turn of the 20th century where Stigler's "The History of Statistics" left off. Bringing a political dimension to the dispute about different interpretations of probability, Clayton convincingly argues that Galton, Pearson, and Fisher were such staunch advocates of frequentism because it made their eugenicist agenda seem like objective science. The rest of the book shows how the Bayesian view solves many problems of the so-called reproducibility crisis. Again, this is nothing new but very succinct and accessible for a wide audience. The weakest part of Clayton's argument is that he presupposes that what everyone actually wants to know is the probability of a hypothesis. This view is certainly intuitively true for most people, but it rests on the possibility of inductive inference, which is not without opposition. Clayton touches on this when he mentions Hume, but he makes it seem like Bayes' theorem proves Hume wrong and the only reason why frequentist statistics became popular is the misguided view of people like Galton, Pearson, and Fisher. This simply ignores much of what has happened in philosophy of science in the 20th century (Popper, Lakatos, etc.) and which seems equally as important for explaining the dominance of frequentism. Showing how these philosophical ideas are entangled with the history of frequentism as presented by Clayton would make the book even stronger, but maybe that is a bit much to ask.
Alfie
10 reviews
January 23, 2022
Frequentism is still taught in the majority of undergraduate stats courses. There needs to be a top-down approach from associations to change the curriculum.
Philemon -
339 reviews, 15 followers
April 13, 2022
What is probability? We know the expected distributions of dice throws, card shuffles and deals. But how about real-world future events, each of which is bound to have a unique set of circumstances? How well does a simplistic counting methodology map against inevitable real-world complexities and ambiguities that resist easy sampling? Aubrey Clayton's answer: not very.

This book offers amazingly complete documentation of the century-long war between the "frequentists" and the Bayesians. Frequentists are the ones who think mere counting is adequate for any kind of probability computation. Bayesians use the formula invented by Thomas Bayes, an 18th-century minister (!), which incorporates into probability calculations the notion of "priors," estimates of our beliefs based on how confident we should be according to the current state of our knowledge. Bayesian calculations feed chainlike into each other, adjusting the confidence level with each new batch of data. Frequentists rail against the idea of priors, which they revile as letting subjectivity infiltrate the calculation. Bayesians defend priors as tools that allow us to leverage the knowledge we've already gained, noting that any risk of subjectivity progressively diminishes as each new wave of data nudges the probability in progress into closer accordance with empirical research.

One takeaway from any serious study of probability is that doing the calculations involves dangerous traps and subtleties. In the hands of less-than-expert statisticians, the great mass of us who think we know how it works but don't, calculated odds are likely to be seriously off if not flat-out wrong. As Clayton shows, the failures of frequentism show up in the incidence of peer groups failing to reproduce scientific results produced by counting alone. The use of confidence levels of 95% or lower also feeds into this reproducibility problem.

This book probably contains too much dense information for the non-expert (and I confess to being an amateur at best). But I give it five stars because of its amazing scholarship and completeness in presenting the history, theory, practice, and scientific criticality of this long-running methodological war; a war that has affected not only science, but the very epistemological framework of what we think we know.
Jeff
1,398 reviews, 127 followers
April 5, 2021
Lies, Damn Lies, and Statistics. On the one hand, if this text is true, the words often attributed to Mark Twain have likely never been more true. If this text is true, you can effectively toss out any and all probabilistic claims you've ever heard. Which means virtually everything about any social science (psychology, sociology, etc). The vast bulk of climate science. Indeed, most anything that cannot be repeatedly and accurately measured in verifiable ways is pretty much *gone*. On the other, the claims herein could be seen as constituting yet another battle in yet another Ivory Tower world with few real-world implications at all. Indeed, one section in particular - where the author imagines a supercomputer trained in the ways of the opposing camp and an unknowing statistics student - could be argued as being little more than a straight-up straw man attack. And it is these very points - regarding the possibility of this being little more than an Ivory Tower battle and the seeming straw man - that form part of the reasoning for the star deduction.

The other two points are these: 1) Lack of bibliography. As the text repeatedly and painfully makes the point that astounding claims require astounding proof, the fact that the bibliography is only about 10% of this (advance reader copy, so potentially fixable before publication) copy is quite remarkable. Particularly when considering that other science books this reader has read within the last few weeks have made far less astounding claims and yet had much lengthier bibliographies.

2) There isn't a way around this one: This is one *dense* book. I fully cop to not being able to follow *all* of the math, but the explanations seem reasonable themselves. This is simply an extremely dense book that someone who hasn't had at least Statistics 1 in college likely won't be able to follow at all, even as it not only proposes new systems of statistics but also follows the historical development of statistics and statistical thinking. And it is based, largely, on a paper that came out roughly when this reader was indeed *in* said Statistics 1 class in college - 2003.

As to the actual mathematical arguments presented here and their validity, this reader will simply note that he has but a Bachelor of Science in Computer Science - and thus at least *some* knowledge of the field - but isn't anywhere near being able to confirm or refute someone possessing a PhD in some Statistics-adjacent field. But as someone who reads many books across many genres and disciplines, the overall points made in this one... well, go back to the beginning of the review. If true, they are indeed earth-quaking if not shattering. But one could just as easily see them as yet another academic war. In the end, this is a book that is indeed recommended, though one may wish to assess their own mathematical and statistical knowledge before attempting to read this polemic.
134 reviews, 12 followers
October 16, 2022
I'm not sure how to take this book or who the real audience is. The book assumes a lot more familiarity with probability theory than I have, so I can't speak to the underlying theme (Bernoulli is wrong and Bayes is right). The issue is that it reads like a rant, with Clayton offering E.T. Jaynes' interpretation of Bayes as the one truth, while making blanket statements like "all challenges to the fact of systemic racism in the US justice system are wrong". It's hard to accept the message when the messenger comes across so hardnosed and doesn't follow his own advice when giving examples. I was expecting a little more historical info on the development of the field, and a balanced treatment of what's right and what's not (and why). That isn't what we have here.

Recommended for students of probability theory who want exposure to other viewpoints.
Richard Marney
579 reviews, 31 followers
November 22, 2021
The book can be read on two levels. The first, more high-level reading is to grasp the indictment of academics’, financiers’, and politicians’ use (and misuse) of statistics and probability. The second addresses the underlying “why” of this question; it requires a combination of topic knowledge and perseverance to recreate the author’s thought pattern and the proofs underlying the results. Worth the read.
Galen Weitkamp
144 reviews, 5 followers
March 15, 2022
Bernoulli’s Fallacy, by Aubrey Clayton.
Review by Galen Weitkamp.

Since early geometers and philosophers first started inventing and sharing arguments, we have understood that deductions are drawn from postulates and premises. Inference is impossible without hypothesis. Yet, ever since Francis Galton, Karl Pearson and Ronald Fisher developed frequentist statistics, their followers have taught that statistical inferences can be drawn from the data alone, without background assumptions, premises or prior judgments. The data, they insist, speaks for itself.

One practitioner of this school argued in a court of law that the odds against a mother having a male baby who died of SIDS and then a year later another who also died of SIDS are so tiny that she must have, with near certainty, killed them herself. The mother served three years before the case was overturned. The practitioner’s argument mirrors logical proof by contradiction. However, there is no valid statistical counterpart to proof by contradiction. The conditional probability that two male siblings died of SIDS in different years, given that the mother is innocent of their murder, has no effect on the conditional probability that the mother is guilty given that her two male babies were found dead in their cribs in two separate years. The two are independent of each other in the sense that they can be any size whatsoever relative to each other. One cannot use one to determine the other without invoking prior assumptions or prejudices.
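The structure of that argument can be made concrete with Bayes' rule in odds form. The numbers below are hypothetical placeholders chosen only to show why the sampling probability alone settles nothing; they are not figures from the actual case.

```python
# Posterior odds = prior odds * likelihood ratio.
# All three inputs are hypothetical, for illustration only.
p_evidence_given_innocent = 1 / 100_000    # two SIDS deaths, mother innocent (assumed)
p_evidence_given_guilty = 1 / 1_000        # two deaths, mother guilty (assumed)
prior_odds_guilty = 1 / 1_000_000          # prior odds of double infanticide (assumed)

likelihood_ratio = p_evidence_given_guilty / p_evidence_given_innocent
posterior_odds = prior_odds_guilty * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"P(guilty | evidence) ≈ {posterior_prob:.4%}")
# With these made-up inputs the posterior probability of guilt is about 0.01%,
# even though the evidence itself is "one in a hundred thousand" rare under innocence.
```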

According to Bernoulli’s Fallacy, this willful error (and others to which frequentists frequently fall prey) exists to lend a patina of objectivity to the various studies supported by this sort of statistical analysis. It is argued that the school of frequentist statistical analysis was originally developed to lend an aura of objectivity to the field of eugenics, to which Galton, Pearson and Fisher were all three devoted when developing their methods.

In his book, Bernoulli’s Fallacy, Aubrey Clayton explains the primary fallacies that underlie what he calls orthodox frequentist statistics, traces its history, describes the harms that it has done in various fields of science, medicine and law, and discusses the replication crisis that has plagued statistical studies over the last few decades. Throughout the book, but especially in the last chapter, he discusses how an old eighteenth-century theorem of Thomas Bayes points to the way out of these statistical traps and troubles.
Philbro
6 reviews, 6 followers
January 12, 2022
For anyone who's ever been bothered by the arbitrariness of statistical significance testing (p < 0.05, etc.), this book is for us. More broadly, this book makes the case for the Bayesian understanding of probability (as opposed to the usual frequentist view, which includes significance testing). In other words, let's ask how probable a hypothesis is given some data, not how probable some data are given a hypothesis.

This book has solidly convinced me of the Bayesian view. That is, although frequency is objective and measurable, this is not to be conflated with probability, which is only ever subjective. Admitting as much does not rob probability of an objective interpretation; it only makes plain the hidden subjectivities which have always plagued the frequentist interpretation.

If that is a "downside" to such an admission (it isn't; making your assumptions plain never is), the benefits are enormous. Adopting the Bayesian interpretation clarifies cases such as the Monty Hall problem. Divesting scientific studies of significance testing in favor of Bayesian analysis with appropriate, explicit priors would also improve public perception of medical and psychology studies. More importantly, the author claims, convincingly, such a switch in statistical posture could eliminate the reproducibility crisis facing those disciplines as well.

In addition to all this, Bayesianism has very interesting implications for philosophy of science and epistemology as well. In particular, the idea of "probability as logic" creates a bridge from deduction to induction not ordinarily present in such discussions. In sum, this book is a must-read for all STEM and philosophy practitioners alike.
Gijs Limonard
569 reviews, 12 followers
July 17, 2023
Probability is not frequency. But what is probability? The author brilliantly and succinctly deconstructs the problem, including the guiding influence the frequentist approach has had on the reproducibility crisis in science. Proposed instead is the widespread adoption of the Bayesian framework, which allows new information to be taken into account and prior beliefs to be updated as a function of obtained results. These updated beliefs are then fed into new probability estimates as the process of knowledge acquisition/formation rolls on. As such this is one more admonition to stay away from the 'ludic fallacy' (or 'the map is not the territory' phenomenon) as coined by Taleb: the mistaken belief that real-life situations can be modelled as if they were games. In real life, decisions are made in situations of incomplete information, and acquiring knowledge entails continually updating beliefs as new information arises; the definition of probability then becomes 'deductive reasoning with uncertainty'. As a practicing physician myself, the utility of the Bayesian approach is all too familiar, as are the trappings and pitfalls of the frequentist approach. Highly recommended reading.
Robert Muller
Author, 12 books, 27 followers
January 3, 2022
A clear and cogent history and argument for Bayesian thinking in science. More examples and fewer repetitive statements of the arguments would help tighten up the book, but it's really the best overall explanation of the argument and argument for explanation I've seen.
David Hirsh
6 reviews
December 29, 2021
A note to physicians

Quite possibly the most important text any physician may read outside of their actual specialty. To not incorporate the lessons taught here is to proceed as if in a dimly lit corridor of the hospital.
Lloyd Earickson
177 reviews, 6 followers
March 12, 2022
If you took a statistics class, you probably have some of the same memories I do of sitting in that class and thinking that what was being said did not quite make sense and align with my understanding of the world, but not being able to develop an entirely cogent argument for why what was being taught didn’t sum.  Whether it was a statistics class or a standardized test, statistics and probability were always frustrating to me.  Now, I fully admit that I am not a mathematical savant, but I have taken a lot of math classes in my time, studied a lot of different mathematical topics, and use and think about math more than most people would think is healthy.  Statistics and probability, though, inevitably trip me up when I start trying to study them.

There are so many reasons why you should read Bernoulli’s Fallacy, many of which we will be addressing in this review, and finally understanding why statistics and probability didn’t make sense back in school is just one of them.  Within the first chapter, Clayton had laid out in rigorous terminology and mathematical logic the problems with probability that had been struggling to come to light from the edges of my consciousness for years.  Specifically, the logic of probability as frequency, which is what is taught as orthodox statistics and has been employed for almost all purposes of probability for the past century, is fundamentally flawed, with a flaw that goes all the way back to elementary logic as described in Aristotle’s Art of Rhetoric.  Probability as a derived frequency attempts to treat the probabilistic argument as a syllogism, when in reality probabilistic arguments are enthymemes.

The impetus for the book is a problem in the institutions of science to which you may have heard reference: the replication crisis.  In the past few years, projects to redo classic experiments in psychology, economics, and even in medicine and biology have failed to produce results that align with the original conclusions.  If you’ve ever been skeptical of the results and conclusions drawn by researchers in the soft sciences that seemed contrary to common sense or lived experience, you might have been right to be, because a lot of those results have now been found to be, in a word, wrong.  For someone like me who always suspected that, but had never taken the effort, resources, or mathematical tools necessary to make a coherent argument as to why the results from a seemingly valid experimental setup should be dismissed, having the mathematical, logical, and experimental backing to affirm that lingering suspicion is quite gratifying.

For a book with only eight chapters, there is a lot to unpack within it.  It covers everything from the history of probability and statistics, to its uses through the centuries, to examples, exercises, implementations, and mathematical derivations.  Plus, it is dense: do not go into this expecting to whip through it.  I found myself lingering over passages, rereading sections, and spending hours after a reading session pondering and ruminating over what I had just read.  It’s a deeply thought-provoking, and directly applicable book, because probability is everywhere.  Probability has always been everywhere, but it is even more so today, with the rise of concepts like Big Data and the Information Age.

At its heart, Bernoulli’s Fallacy is an argument for a certain understanding of probability.  It introduces the reader to two dominant schools of probabilistic thought: the frequentist school, which we have already referenced as the school of orthodox statistics and the source of many problems more serious than student struggles on the ACT, and the Bayesian school, which we could also call inferential probability.  Clayton takes us through the history of both statistical methods, and shows the arguments for each, but this is not the sort of book that attempts to present an unbiased argument: Clayton is out to convince you that Bayesian probability is the cure to the fundamental flaws on frequentist statistics, and he does not hide that fact, nor pull his punches.  In places I found this a little over-the-top, but it did not detract from the book’s credibility, and he did attempt to explain frequentist methods in the best possible light before exhibiting their deep-seated flaws.

If there is a place where Clayton’s arguments grow a little excessive, it is in his discussions of the intertwined history of frequentist statistics and eugenics.  Nothing he said was false, or misleading, but it at times diverted the book more into a condemnation of eugenics than a condemnation of frequentist statistical methods, and an effort to incite in readers an instinctive revulsion for frequentist methods based solely on its emotional and historical ties to eugenics.

Here is the heart of the argument.  Frequentist methods attempt to provide probabilities as objective truths based on observed frequencies and notional concepts of infinite trials.  Bayesian methods treat probabilities as subjective entities that provide quantification of the contributing factors and outside information through the integration of prior probabilities.  That is to say, whereas a frequentist method will look at a dataset and tell us what the chances were of getting that data, a Bayesian method will inform us of the probability of an effect existing based on the data collected and any prior information we might have about the likelihood of the effect.  If all of that seems a little confusing, I suggest you read the book, because it will explain it much better than I can in a post that is supposed to be a book review and not a detailed essay on probabilistic methods.
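A small sketch may help here; the likelihood numbers and the candidate priors below are my own assumptions, not anything from the book, but they show how the same summary of the data can support very different conclusions depending on what we already believed about the effect.

```python
# Posterior probability that "the effect is real" for one fixed dataset,
# evaluated under different priors. Likelihoods and priors are assumed.
def posterior_effect(prior_effect, like_if_effect, like_if_null):
    numerator = prior_effect * like_if_effect
    return numerator / (numerator + (1 - prior_effect) * like_if_null)

like_if_effect, like_if_null = 0.05, 0.01   # data 5x more likely if the effect exists

for prior in (0.5, 0.1, 0.001):             # plausible drug, long shot, ESP-style claim
    p = posterior_effect(prior, like_if_effect, like_if_null)
    print(f"prior = {prior:<6} ->  P(effect | data) = {p:.3f}")
# prior = 0.5    ->  P(effect | data) = 0.833
# prior = 0.1    ->  P(effect | data) = 0.357
# prior = 0.001  ->  P(effect | data) = 0.005
```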

I’m torn over how much more to discuss in this review. On the one hand, there was a lot of important content in the book that would be worth discussing, but on the other hand, you might be better off reading the book itself rather than listening to me regurgitate and ruminate over its contents here. At a high level, what Bayesian methods accomplish that frequentist methods don’t is to better align probability with reality, instead of with an imaginary mathematical contrivance of infinite trials and notional experiments. As humans, we work with Bayesian-style probabilities all the time, even if we’re not doing rigorous and ugly mathematics to do it. When we make risk-benefit decisions, we’re doing Bayesian probabilities. When we make assumptions about how the world is going to work from day to day, we’re doing Bayesian probabilities.

Look at the problems that surround us and all of the circumstances to which statistics are applied (or attempted applications, at least).  Statistics can be made to say anything you might want them to, which is why they can’t be trusted.  Bayesian probabilities help, in some small way, to address these problems by forcing everyone to acknowledge that there is no such thing as an objective conclusion from a dataset, and that data never “speaks for itself.”  Data is data, and it is up to us as thinking, rational, moral humans to interpret it and infer conclusions.  They also remind us that morality is independent from science, from statistics, and from data.  Information can help suggest causes, correlations, maybe even solutions, but it is no substitute for morality.

There are very few books that I think everyone needs to read, but Bernoulli’s Fallacy might be one of them.  Whatever kinds of books you normally read, whatever your background, whatever your usual interactions with data and statistics, this book matters.  In fact, I calculate that there is a 100% probability that you will find this book valuable.  If nothing else, you’ll know why a 100% probability makes no sense at all.  Go read Bernoulli’s Fallacy.

Annie
3,886 reviews, 71 followers
October 4, 2021
Originally posted on my blog: Nonstop Reader.

Bernoulli's Fallacy is an expository academic comparison of the statistical methods and accepted methodologies used by modern empirical scientists, analyzed and presented by Dr. Aubrey Clayton. Released 3rd Aug 2021 by Columbia University Press, it's 368 pages and is available in hardcover, audio, and ebook formats.

This is an esoteric book with urgent, potentially catastrophic, foundational implications for science (and society). The way we interpret, group, and present data has fundamental connections to what we see as "objective truth" and "facts". This is especially frightening when considered in the light of recent crises such as systemic racism, alleged election/voting fraud, and pandemic/public health methodology and data.

This is a deep dive into the subject material and will require a solid background in mathematics and statistical methodology at the very least. I have a couple degrees in engineering sciences (and a real love for bioinformatics), and it was significantly above my pay grade. I could understand much, but by no means all, of the author's exposition and there were tantalizing glimpses of deeper information which I simply couldn't grasp. Readers should expect to expend some effort here to even make an informed decision on the veracity of the author's claims.

It's an academic book, the author is an academic, and it reads very much like an academic treatise. The language isn't *quite* as impenetrable as many academic volumes. The text is well annotated throughout, and the annotations will make for many hours of background reading enjoyment. I get the distinct impression that the author has made a herculean effort to use accessible language to make it more easily understood, but there is a baseline level of understanding required, without which it will be inaccessible to many readers. That being said, the author writes with style and humor and tries to make the read minimally pedantic. I can well imagine that he's a talented and popular lecturer.

At the end of the day, Disraeli wasn't wrong when he decried "lies, damned lies, and statistics". I am not strong enough in this particular field of study to say where on the above spectrum Dr. Clayton's exposition falls.

Five stars (readers should keep in mind that the subject will require significant effort). I would enthusiastically recommend that people in education and policy expend the necessary effort. It would be a good selection for public/university library acquisition, as well as for more academic settings in philosophy of mathematics and science and allied fields of study.

Disclosure: I received an ARC at no cost from the author/publisher for review purposes.
Aaron Schumacher
174 reviews, 7 followers
December 11, 2021
Bernoulli's Fallacy is that the likelihood of data given a hypothesis is enough to make inferences about that hypothesis (or others). Clayton covers historical and modern aspects of frequentist statistics, and lays crises of replication at the feet of significance testing. I find it largely compelling, though perhaps it neglects problematic contributions of non-statistical pressures in modern academic life. I'm generally on board re: Bayesian methods.
7 reviews, 1 follower
April 20, 2022
I'm going to leave the letter I sent the author here as my review. It discusses what I loved about this book, along with some interesting points in the book I would contest.

Hello Aubrey Clayton!


I have just finished reading your incredible book, and have been holding back until just this moment when I closed the book to write to you. I would like to say first, thank you for writing this book. This book really needed to exist. In fact, I have read or bought every book on statistics I can find in search of one that makes the Bayesian argument and acknowledges the real history of stats in the way you did. When I found your book, I was very excited, to say the least, and a little scared my high hopes would let me down. They did not- the book that sounded perfect exceeded my expectations, and as I read I imagined my future self reading it over and over again.


I have just graduated from university with a bachelor's in mathematics and a concentration in stats. My experience in the subject so far has made it such that your book seemed to have been written for someone just like me. I began my degree somewhat naively, obsessed with mathematical certainty and rigour and believing very much that they were the key to understanding or proving anything. In my second year I became very inspired by the CLT and normal distribution, and the apparent ability it had to give mathematical support for any argument that involved a generalization/taking of the average. At this point, I was 18 and thought it was a godsend, and even set out to try and solve philosophical problems with it. I decided to write a fun kind of illustrated book using the CLT to 'prove' if anything mattered or not. Shortly after this, I realized that it was not true that every problem could be tied up neatly in mathematical proof. I set out on a new creative project to write a book about the role subjectivity plays in mathematics and science, which I am still working on now. This is when I discovered Bayesianism, and I resonated strongly with the subject. Up until now, I have not found any book on a similar subject as well written and researched as yours! Thank you, thank you, again!


As I was reading your book, I stopped many times to think through the examples and to look into the sources for further learning. The examples you included were endlessly interesting. This led to a very dynamic and deep reading experience. In doing this, I found some arguments you might find interesting to discuss. I have different points of view on two problems mentioned in the book that I would love to hear your perspective on.


Firstly, in your example of the Boy-Girl paradox, I am not sure the paradoxical element is reconciled through the Bayesian argument you offered. Where I have a different point of view is right at the start, when we choose our priors. You listed four situations, four ways there could be different gendered pairs, where order mattered, and so had four cases. The way I see it, we could just as easily have started off identifying three cases: 2 boys, or a boy and a girl, or 2 girls, where order didn't matter. I think what is going on in this problem, what makes it a paradox, is that it is unclear whether order should matter or not. (I think this is what Gardner meant when he was talking about randomizing principles). It took me a long time, and much talking through the problem, to figure out if it is necessary for some reason to decide one way or the other if order should matter, and further if the decision that it does is necessary to the Bayesian argument.


I have landed on it not necessarily mattering, and in this way, there are two possible answers and the problem is a paradox, not simply solved with a Bayesian argument. I imagined a sort of analogous situation with coin flips. If Mr. Smith tells us he has two coins and at least one of them is heads, we would be in a similar state of knowledge. Then we can start to set up our problem and imagine it in at least two ways. We can imagine the situation as being the result of two consecutive coin tosses, so we have the four cases HH HT TH and TT, as in so many problems we are used to seeing in school! However, there is another way of looking at this situation. Maybe we should not necessarily think of the problem as entailing two consecutive coin flips. Rather, we can imagine Mr.Smith is holding two coins behind his back, their orientation already fixed, and it is up to us to determine the odds that both are heads given the information that at least one of them is. In this way of seeing things, we have no reason to account for the possible ways he could have flipped the coins in order to get the fixed coins behind his back. Order does not necessarily matter. It is true that the kids must have been born in some order and that the coins may have had their orientation fixed in consecutive coin tosses. But it is not necessarily relevant information to the problem which asks nothing about order, and with the given state of knowledge/ignorance, it is unclear whether it is necessary to account for it or not.


The consequences/interpretation of further information as you gave in your book would be different, but sensible if we look at it the other way, meaning it would depend on how you looked at it. Given the additional piece of information that Mr.Smith's older child is a son seems to narrow down the possibilities and give the same answer we would get had we started with the three possible cases. In your interpretation, it seems like information about a particular has allowed us to narrow down a possibility and update our probability assignment. From the other point of view, particularity based on order was never assumed to have mattered, and so this provides no new relevant information to affect our probability assignments. Given the additional piece of information that at least one child is a boy who was born on a Tuesday shifts the probability from 50 to 48 percent, and in your interpretation this suggests a general pattern that getting closer to a particular will narrow things down to more specific/individualized probability assignments, as if the odds go from 1/3 to 48% after learning this detail. After reading the rest of the book and the other arguments you make along these lines, which I agree with and found very interesting, this seemed to me to be the most convincing argument for your interpretation. I am not sure if the argument made elsewhere is the same as the one here. I am still not convinced however, because a different but still sound interpretation is possible from the other side. In the other way of looking at things, we start with a probability assignment of 1/2 right away, and so adding the information that one boy is born on a Tuesday adds a kind of restriction we need to take into consideration, that they can't both be born on a different day, and so we have to knock (6/7)^2 off of our 50% probability assignment to exclude that case.

At this point, I believe the problem is ambiguous and is a paradox in the way that your probability assignment depends on how you look at it.
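To make the two readings concrete, I ran a small simulation of the two-coins version; the two conditioning procedures below are my own guesses at how the information "at least one is heads" could have been produced, and that choice is exactly what the answer turns on.

```python
import random

def pair():
    return random.choice("HT"), random.choice("HT")

N = 200_000
trials = [pair() for _ in range(N)]

# Reading 1: we learn only that at least one of the two coins is heads.
at_least_one = [t for t in trials if "H" in t]
p1 = sum(t == ("H", "H") for t in at_least_one) / len(at_least_one)

# Reading 2: we learn that one particular coin (say the first) is heads.
first_is_head = [t for t in trials if t[0] == "H"]
p2 = sum(t == ("H", "H") for t in first_is_head) / len(first_is_head)

print(f"Condition on 'at least one head':   P(both heads) ≈ {p1:.2f}  (about 1/3)")
print(f"Condition on 'this coin is heads':  P(both heads) ≈ {p2:.2f}  (about 1/2)")
```

Which of the two procedures actually matches Mr. Smith's statement is, I think, the ambiguity I am describing.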


I find this infinitely interesting to think about though, and would love to hear any thoughts or rebuttals from you!


The second problem I would like to bring up is, to me, even more interesting! At some point in the text, you mention that Jaynes had finally solved one of the Bertrand paradoxes. Surprised to hear this, I had to check out the source you listed. Reading through his argument was exciting. I had to disagree with him, and this point of contention is so interesting to get into.


His argument depends, at least as I understand it, on assuming that there is one correct probability assignment- on assuming there is no paradox and that there is one right answer, and so invariance in probability assignments. This premise reminded me of an idea I had had when I’d paused to think while reading about a different problem in your book- the one where we are given that a square has side lengths between 0 and 10 units, and our application of uniform ignorance to the side lengths or the area would give different answers. I had stopped and wondered about what would happen if we assumed that the information we were given, just that the length was between 0 and 10, entitled us to an invariant probability assignment. I thought about how if we assume there is a correct answer we can discern/are entitled to in our current state of knowledge then the cases where we applied uniform ignorance to the length and to the area would have to agree, ie. we could realize that the side length must be 1 unit, where L^2=L. This doesn't represent our state of knowledge, though, as if my friend is holding a square with side length of 5 behind their back and give me the information that the length is between 0 and 10, then assuming I can come up with an invariant way to set up priors will lead me to falsely deduce that the side length is 1. Jaynes' argument is similar- he assumes that there is just one right way to get an answer and that this right way is entailed by the state of knowledge we are in...I feel like this kind of argument and the addition of that assumption corrupts/misrepresents the state of knowledge, and in a somewhat ironic way pushes for objectivity where it does not exist.
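Here is the quick numerical check I did for the square example (my own sketch of the calculation, nothing from your book): "uniform ignorance" over the side length and "uniform ignorance" over the area answer the same question differently.

```python
import random

N = 500_000

# Uniform prior on the side length L in (0, 10): what is P(L < 5)?
p_from_length_prior = sum(random.uniform(0, 10) < 5 for _ in range(N)) / N    # ≈ 0.50

# Uniform prior on the area A in (0, 100): P(side < 5) becomes P(A < 25).
p_from_area_prior = sum(random.uniform(0, 100) < 25 for _ in range(N)) / N    # ≈ 0.25

print(p_from_length_prior, p_from_area_prior)
# Same stated information, two "uniform" priors, two different answers;
# this is why the choice of prior (or invariance principle) matters so much.
```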


Sometimes incomplete information necessarily entails ambiguity; I think this is one thing I learned from both of these problems.


Again, I really appreciated your book, which stimulated so much thought on the topics discussed, and cultivated and refined my passion for philosophy and statistics, and most of all Bayesianism. I especially appreciated your accurate portrayal of history and the way you explained the connection between eugenic ideas and the development of statistics, a retelling I have found to be missing from all the books I have read on the subject, which ignore the connection or the eugenic components entirely! Thank you for reading, and I hope to hear if you have any thoughts on the problems above.

-Victoria Hynes (fellow Bayesian and aspiring author!)
Jacob
131 reviews, 16 followers
March 28, 2023
One of the rare books I give 5 stars to yet would not recommend to most of my friends, given the dense subject matter! In any event, Clayton came with a clear point and made it quite well. The thesis is simple: the frequentist approach to statistics, whose methods are commonly found in schools and in research papers, is inherently flawed, and we would be better suited using Bayesian methods that factor in prior probabilities.

Bayesian approaches more accurately reflect how we actually reason about the world and let us draw the kinds of conclusions we actually hoped to draw. See below for an example that helped this really click for me:


—————————

A = coin is fair
B = get > 58 heads from 100 tosses

Let’s say the thing we want to know is P(A|B). The problem with the frequentist approach is that it conflates P(A|B) with P(B|A). The frequency-based approach to this question is to simply use P(B|A). It’s unlikely you would get more than 58 heads given that the coin is fair (let’s say p = 0.01); therefore, it’s unlikely the coin is fair, given that you got more than 58 heads.

These are not the same thing! P(A|B) does depend on P(B|A), but also depends on your prior, P(A), since P(A|B) = (P(A)*P(B|A))/P(B). Is the coin tosser from a family of scammers? Is it hard to construct a fair-looking but loaded coin? To calculate P(A|B), you need to consider all of these factors, and just swapping P(A|B) for P(B|A) is not sufficient.

Priors like these are hard to estimate correctly. No doubt about that. But does that mean leaving them out entirely is the better approach? Probably not. The exact same problem comes up in A/B testing for website changes and in countless other examples.

—————————
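To make the formula concrete, here's a quick calculation with numbers I'm inventing for illustration: a prior of 95% that the coin is fair, and a single biased alternative that lands heads 65% of the time.

```python
from math import comb

def p_at_least(k, n, p):
    """P(at least k heads in n tosses of a coin with heads-probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 100
p_B_given_fair = p_at_least(59, n, 0.5)      # B = more than 58 heads, A = coin is fair
p_B_given_biased = p_at_least(59, n, 0.65)   # assumed alternative: a 65%-heads coin
prior_fair = 0.95                            # assumed P(A)

p_B = prior_fair * p_B_given_fair + (1 - prior_fair) * p_B_given_biased
p_fair_given_B = prior_fair * p_B_given_fair / p_B   # P(A|B) = P(A) * P(B|A) / P(B)

print(f"P(B | fair)  ≈ {p_B_given_fair:.3f}")   # the small sampling probability
print(f"P(fair | B)  ≈ {p_fair_given_B:.3f}")   # the thing we actually wanted to know
```

With these invented inputs, P(B | fair) comes out around 0.04, yet P(fair | B) is still roughly even odds; the gap between those two numbers is the whole point.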

Clayton goes through a bunch of examples where Bayesian approaches make more sense, with the most famous being the likelihood that you have a rare disease, given that you tested positive for it. One thing I found fascinating was how famous frequentist statisticians like RA Fisher bragged that their approaches are less subjective, when in reality, they lead to many counterintuitive and non-replicable results.

It wasn’t the focus of the book, but I am curious to learn more about how we should calculate priors and the different tests we should use, e.g., Bayes Factor. Whether I learn more Bayesian methods will depend on my career path and their acceptance, but regardless, this book was quite thought-provoking and an accessible introduction.
It’s sure to shed light on why it’s so hard to remember a clear definition for p-values, confidence intervals, etc. I’m also eager to read critiques by some frequentists, which are bound to be spicy!

All in all, a great book.
76 reviews
October 27, 2021
This book is a strident and quite convincing polemic against frequentist statistics (and in favour of Bayesian statistics). The core point is Bernoulli's fallacy: we cannot conclude, based on the probability of obtaining our sample data (or data more extreme) given the null hypothesis (this is expressed by the p-value), whether that null hypothesis, or any other hypothesis, should be rejected (or worse, accepted). This does not make sense - we need to know what alternative hypotheses there are and we need to incorporate any prior information that we have about the relative plausibilities of these hypotheses. In other words: we must use a Bayesian framework and abandon frequentist statistics (and null hypothesis significance testing with p-values) altogether. This point bears repeating in different ways, although perhaps not quite as often as the author does. Also, I was not quite sure of the relevance of the eugenicist views of the founders of frequentist statistics - the fallacy of the method can be established on logical grounds alone, and this is what is primarily relevant. Nevertheless, a book written with considerable clarity which had the unfortunate side-effect of providing me with a moral incentive for completely restructuring the statistics course I teach (again).
German Chaparro
338 reviews, 31 followers
October 13, 2022
I loved the arguments the author makes, loved the review on eugenics and statistics (Galton, Pearson, Fisher) and loved the math through and through. However, the language could have been a lot clearer. Also, this is unfortunately not a book for a general audience! Which is a shame because the issues raised here have a significant effect on everyone.
Amanda Comi
31 reviews
September 21, 2021
Clayton does a great job of conversationally explaining abstract concepts, continuing in the tradition of Asimov and Stephen Jay Gould. This narrative assumes that you recognize the math you learned in high school - i.e., you’ve seen “y=mx + b” or did a regression analysis once in a spreadsheet - but DOES NOT require you to actively do calculations or precisely remember all the procedural details.

In terms of impact: wow! I was already a Bayesian because I’m a lazy analyst and it seemed easier to remember only one formula for every problem, but I’m now convinced that sharing the philosophical view of probability as information is an important mission that may improve all types of communication, but especially academic science. I’ve gone from not really caring, to wondering how I can help, to knowing which organizations are working on this issue.
Marco
16 reviews
May 2, 2023
A book with an important and disturbing message, but I found it to be quite repetitive, and some of the arguments did not seem convincing.

Although the historical remarks on Galton, Pearson, and Fisher seemed out of place initially, I was glad that the author included them in the end, as they helped explain the appeal of frequentism and why it has become so dominant in science.

The final chapter on the crisis of reproducibility was particularly alarming. In essence, roughly half of published research, even from top journals, is likely to be incorrect. Needless to say, this should be a great cause for concern for anyone.
329 reviews, 8 followers
April 12, 2021
Note: I received an ARC of this book from NetGalley.

I thought this book was really good. I love that Clayton didn't shy away from including equations and calculations in the book (even though I couldn't see them because of the atrocious pre-publication formatting). Clayton writes from a very specific point of view, but it's one I found persuasive. I thought this was a good explanation of a lot of the philosophical and scientific issues surrounding statistics and probability.
Chris DuPre
16 reviews
July 30, 2023
This is a fantastic book exposing frequentist thought and frequentist methods as being inconsistent and very problematic for solving problems of practical interest. In particular, the titular Bernoulli's fallacy essentially boils down to the claim that P(A|B) being large for every B implies that P(B|A) should be large for every A. This is absolutely not the case, despite being incredibly tempting when phrased in the correct way. This is also referred to by Diaconis and Freedman as the "transposed conditional fallacy", for exactly this structure of confusing P(A|B) for P(B|A).

The author does a great job of exposing this as incorrect, as well as pointing out the consequences of this mistake for frequentist, and therefore orthodox, statistics. The author also does a great job of addressing the history of frequentist statistics, in particular the drive of early eugenicists to remove priors from statistics to bolster their terrible ideas in the face of totally reasonable criticism. This was unfortunately then picked up by psychologists, sociologists, economists, and other fields that also felt the need to establish legitimacy in the face of noisy and complicated systems. The author also does a great job of connecting these issues to the replication crisis, showing neatly how these problems generally do not arise under a Bayesian treatment.

The biggest fault of the book is actually its aggressive stance. If you decide to swing at most of modern statistics, you had better not miss. Unfortunately, the author has several blunders. For one thing, while they occasionally mention it, they never treat seriously the problem of prior choice. They seem to claim that prior choice should be based on "information" as opposed to "feelings", as if certainty were not a feeling itself, and they blatantly ask the reader to just get over it. This is akin to the "shut up and calculate" attitude of some physicists towards quantum mechanics, and it is inappropriate: without some form of agreement in the asymptotics, the probabilities of each individual mean nothing. If I can show Jerry and Tom infinite data and they will still disagree, how should I treat a probability of 30% from Jerry or Tom?

The author also completely fumbles the section on contingency tables. For one thing, they confuse the notions of "independent" and "equiprobable"; in fact, they carry out the computation of the probability given the frequencies as if the cells were independent. Ironically, they then immediately commit something like a Bernoulli's fallacy themselves by arguing that P(equiprobable | independent) = 0 implies that P(independent) = 0.

They also vastly underestimate the need for probabilists and statisticians. The implication seems to be that if we teach Bayes' theorem, which is nothing but a definition under the Bayesian framework, or at most a small mathematical theorem, then we have taught all of statistics. This is ridiculous. All of the methods of inference and computational difficulties that the author brings up are exactly the questions that probabilists and statisticians can and should work on.
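For readers unfamiliar with the distinction the reviewer draws, here is a minimal sketch (my own, with made-up margins; it illustrates only the terminology and does not adjudicate the book's contingency-table argument): a 2x2 table can be exactly independent, in the sense that every cell factors into the product of its row and column margins, without the four cells being anywhere near equiprobable.

```python
import itertools

# Assumed margins for a hypothetical 2x2 table.
p_row = {"A": 0.7, "B": 0.3}
p_col = {"X": 0.6, "Y": 0.4}

# Build the joint distribution so that it is independent by construction:
# every cell is the product of its row and column margins.
joint = {(r, c): p_row[r] * p_col[c] for r, c in itertools.product(p_row, p_col)}

independent = all(abs(joint[(r, c)] - p_row[r] * p_col[c]) < 1e-12 for (r, c) in joint)
equiprobable = len({round(v, 12) for v in joint.values()}) == 1

print(joint)                          # cells 0.42, 0.28, 0.18, 0.12
print("independent:", independent)    # True
print("equiprobable:", equiprobable)  # False: that would require four cells of 0.25
```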

Overall, I really do recommend the book. If you don't know the history of frequentist thought, or if the claim that a large P(A|B) does not imply a large P(B|A) is confusing to you, you will gain quite a bit from it. Be aware, however, that the author is really attacking frequentism rather than building up the Bayesian perspective; there are several questions the book will not address, and there is plenty more work to do. Ideally the author will retract or heavily revise the section on contingency tables, but most of the other examples in the book appear reasonable.
Profile Image for Ramnath.
15 reviews · 1 follower
December 31, 2022
I have been reading E. T. Jaynes’ “Probability Theory: The Logic of Science”, which presents a fantastic explanation and formal derivation of probability as a system of logic (built on plausibility rather than certainty, unlike predicate logic). What I hadn’t known was the historical context around the Bayesian vs frequentist approaches to probability that made Jaynes’ work such an important masterpiece.

Bernoulli’s Fallacy provides this context, starting with Bernoulli’s contributions to the field, working all the way through the development and use (or rather, perversion) of statistics in service of the eugenics agenda, and finally the present-day “crisis of replication” that is plaguing research across a variety of fields due to their reliance on statistical significance and p-values as the measure for evaluating hypotheses.

The book presents its core set of ideas in its initial chapters. These are not novel ideas, but they are nevertheless poorly understood by the community today, and the book does a great job of explaining them in depth. I would summarize them as follows:

- Probability represents a subjective belief in a hypothesis based on the information and knowledge you possess; it is not an objective fact. Any statement that the probability of an event IS some number is incomplete: you must always state your assumptions (the knowledge you possess), because all probability is conditional on them. (Jaynes does a good job of making this explicit via notation.)

- You cannot draw inferences from data alone. What you CAN do is convert prior probabilities (existing degrees of belief) into posterior probabilities through the act of observation (incorporating new data). Data never tells you the whole story; it can only alter the plausibility of the story you already have.

- Unlikely events happen. You cannot infer the truth or falsity of a hypothesis from the likelihood of an observation alone. Rather, you can only use an observation to alter your subjective belief in the plausibility of a hypothesis, and even then only relative to OTHER hypotheses that could account for the same observation. Again, unlikely events do occur (e.g., someone always wins the lottery), so it is really the relative plausibility of different hypotheses that you adjust as you learn more (by making more observations). Of particular importance here is the idea that it is up to YOU (not the data) to exhaustively formulate the relevant hypotheses and assign suitable priors. As Pierre-Simon Laplace supposedly put it (paraphrasing), “extraordinary claims merit extraordinary evidence”: new data should shift your belief toward or away from a hypothesis based on the RELATIVE priors associated with all potential hypotheses. The more you believe in a hypothesis relative to the others, the harder it should be to displace. (A small numerical sketch of this kind of relative updating follows the list.)
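Here is that sketch (my own example, not the book's; the three candidate biases, the prior, and the ten flips are all assumed for illustration): several hypotheses, a prior over them, and a posterior that shifts as observations accumulate.

```python
def normalize(weights):
    """Rescale non-negative weights so they sum to one."""
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Assumed hypotheses about a coin's heads probability, and an assumed prior
# that strongly favours "fair".
hypotheses = {"fair (0.5)": 0.5, "biased (0.7)": 0.7, "very biased (0.9)": 0.9}
belief = {"fair (0.5)": 0.90, "biased (0.7)": 0.09, "very biased (0.9)": 0.01}

# Assumed data: eight heads and two tails, processed one flip at a time.
for flip in ["H", "H", "T", "H", "H", "H", "H", "T", "H", "H"]:
    likelihood = {h: (p if flip == "H" else 1 - p) for h, p in hypotheses.items()}
    belief = normalize({h: belief[h] * likelihood[h] for h in hypotheses})

for h, p in belief.items():
    print(f"P({h} | data) = {p:.3f}")  # fair ~0.63, biased ~0.34, very biased ~0.03
```

With eight heads in ten flips the data favour the biased hypotheses, yet "fair" remains the single most plausible option because it started with most of the prior weight; more flips would eventually overturn it.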

One idea this book clarifies is that Bayesian and frequentist are not two “equally valid” schools of thought, but that the Bayesian method underpins the whole idea of probability, whereas the frequentist approach is simply a special case (a sort of unhappy accident of history).

Overall, a well-argued, interesting, and balanced book, despite the seemingly extraordinary conclusion. The evidence is extraordinary and well-presented, though occasionally repetitive and dense.
Profile Image for Jake.
32 reviews · 3 followers
November 9, 2023
This is an awesome polemic against frequentism and its dominance in the statistical sciences. As someone with their head in phil of sci and stats all day, I enjoyed each part of this book, and the ample references in the chapters have further enlarged my to-read list. Given that the goal of this book is more critique and deconstruction, I'd like to see a follow-up with a constructive project, building upward from Cox's theorem or Dutch books and so on: a Paradiso for this Inferno, though there are plenty of existing resources as well.

That's not to say the book is perfect; there are a few overstated points. For example, tying frequentism to empirical frequencies is perhaps historically and thematically motivated, but at least superficially it shoots wide of Aris Spanos' model-based frequentism. (I certainly wouldn't have expected Clayton to address pet theories, but he did snipe at the "error theorists" a few times, and Spanos is one of the two prime members of that club.) Some of the points seem a bit repetitive as well, though some of this is perhaps justified as showing various guises of the same problems. The sinister side of the history of statistics was quite interesting, but the attempt to tie modern statistical concepts and techniques to moral repugnancy, so as to justify certain conceptual or nominal changes, is weak. If there's a case to be made, it wasn't made here.

Clayton is clear at several points that even if there are bandaids and workarounds for some of the warts of frequentism, the underlying problem remains: Bernoulli's fallacy. And while small-scale frequentist statistics was able to get away with approximate Bayesian correctness, the 21st century and big statistics cannot, as demonstrated by replication crises across the academy. While some entrenched frequentists are *still* claiming "ur jus not doin it rite", it's time to move on to probabilistic consistency as the logic of statistical inference and away from Bernoulli's foundational fallacy.
27 reviews
June 29, 2022
This is an all-out attack on the frequentist paradigm in statistics. The attack comes from several fronts, including (1) the early pioneers of frequentism being eugenicists, (2) various conceptual problems with frequentism, and (3) the replication crisis in modern science. As the way forward, the author suggests that all of us embrace the Bayesian paradigm in statistics, remove frequentist language such as "p-value" and "significant", change the stats curriculum, and so on.

While I agree on the problems of frequentism, I am not sure the approach of the book is the way forward. I found it too divisive and "militant". As to point (1) above, were the early pioneers of Bayesian stats sinless saints? As to (3), do you think that if Bayesianism were the mainstream paradigm in modern science we wouldn't see similar problems? Would working scientists not bend or misapply the rules to get by? Is it not the incentives in science, rather than the statistical paradigm, that ultimately create most of those replication problems? The author champions Ioannidis as an early iconoclast tearing down the frequentist paradigm. But what about Ioannidis's recent blunder during the Covid-19 pandemic? Why did Bayesian thinking not prevent the most likely incorrect claim that C19 is just like the seasonal flu?

I think the way forward is not raging, abrasive attacks on particular paradigms, be it frequentism or Bayesianism, but leading by practice. Do better science using Bayesian tools. Show that the Bayesian paradigm is superior and that it prevents falling into the same traps that frequentism could not avoid. Reach better, more robust results using Bayesian tools. Then people will follow and update their statistical arsenal. Otherwise, this will unfortunately be just another polemic, and that is not what we need in modern science, I believe, for we have enough of them.