Science Fictions

A major exposé that reveals the absurd and shocking problems that pervade and undermine contemporary science.

So much relies on science. But what if science itself can’t be relied on?

Medicine, education, psychology, health, parenting – wherever it really matters, we look to science for advice. Science Fictions reveals the disturbing flaws that undermine our understanding of all of these fields and more.

While the scientific method will always be our best and only way of knowing about the world, in reality the current system of funding and publishing science not only fails to safeguard against scientists’ inescapable biases and foibles, it actively encourages them. From widely accepted theories about ‘priming’ and ‘growth mindset’ to claims about genetics, sleep, microbiotics, as well as a host of drugs, allergies and therapies, we can trace the effects of unreliable, overhyped and even fraudulent papers in austerity economics, the anti-vaccination movement and dozens of bestselling books – and occasionally count the cost in human lives.

Stuart Ritchie was among the first people to help expose these problems. In this vital investigation, he gathers together the evidence of their full and shocking extent – and how a new reform movement within science is fighting back. Often witty yet deadly serious, Science Fictions is at the vanguard of the insurgency, proposing a host of remedies to save and protect this most valuable of human endeavours from itself.

368 pages, Paperback

First published July 21, 2020


About the author

Stuart Ritchie

3 books · 62 followers
Stuart James Ritchie is a Scottish psychologist and science communicator known for his research in human intelligence. He has served as a lecturer at the Institute of Psychiatry, Psychology and Neuroscience at King’s College London since the summer of 2018.


Community Reviews

5 stars: 1,005 (50%)
4 stars: 778 (39%)
3 stars: 183 (9%)
2 stars: 21 (1%)
1 star: 7 (<1%)
Displaying 1 - 30 of 281 reviews
BlackOxford
1,095 reviews · 69k followers
December 15, 2021
Scientific Meta-Hype

Here’s a rough summary of Science Fictions:

1. There is no officially established procedure called ‘scientific method,’ by which to judge the quality of research results.
2. The process by which the results of scientific research are validated for consideration by the scientific community cannot ensure the reliability of these results either.
3. Consequently what circulates at any given time as scientific fact is mostly wrong or misleading. It takes time to discover errors.
4. Steps can be taken, mostly by non-scientists and a new kind of science, to reduce if not eliminate the amount of junk science currently being produced.

In other words science works, when it does, not because of how experimentation, theorisation, and analysis are carried out, or how the findings of individual scientists are publicised or criticised by colleagues, or because these findings are proven wrong, but because most of what is publicised will eventually be ignored as irrelevant. This Ritchie finds disturbing.

A key word in the above is ‘eventually.’ For science to be science, everything that is known is tentative. And centuries of scientific experience shows that everything known at any time will be ignored at some future time except as a kind of intellectual fossil. This is as close to an accurate existential definition of Science as one is likely to get.

I don’t think Stuart Ritchie would disagree with this assessment. Science, like politics, is extremely messy. That is to say, Science is inherently inefficient (I use caps to designate the modern institution). It does not progress according to any definable logic since it is constantly reviewing the logic it has previously adopted. Therefore, looking back from any point in time, the resources engaged in scientific efforts - money, talent, time, administration - have largely been spent in a demonstrably fruitless way.

This waste is essentially what Ritchie is writing about. A large part of his book is devoted to the errors, frauds, and bloopers in scientific research ranging from his own field of psychology to cancer research and molecular physics. Eventually these mistakes mostly are not refuted but buried by further research. In the meantime the scientific community has wasted effort. And, he says, this waste has serious impact because of delay in acquiring important knowledge for health, social policy, and the general well-being of society. The waste can be reduced, he says, and he has suggestions about how to do that.

Ritchie calls our current situation a “crisis.” He believes the existing institutions of Science are “corrupt.” He cites compelling evidence that “any given published scientific article is more likely to be false than true.” There have been, he says, “over 18,000 retractions in the scientific literature since the 1970’s,” largely due to forgery, conflict of interest, self-promotion or even criminal intentions. In cancer research Ritchie cites a study in 2017 which:
“… scoured the literature for studies using known misidentified cell lines found an astonishing 32,755 papers that used so-called impostor cells, and over 500,000 papers that cited those contaminated studies”

So serious business. Perhaps the UK government, which was purportedly “following the science assiduously” at the outset of the COVID pandemic in 2020, should have read Ritchie’s book immediately it was published. That might have saved some lives, relieved marital strife during lockdown, and avoided the immensely costly track-and-trace boondoggle. So what is it that the world should do to lessen the incidence of junk science, avoidably stupid science, not to mention criminal science? Surely this is an issue deserving of further investigation by the proper authorities.

Well a part of Ritchie’s solution is something somewhat more trivial than the problem he describes. Essentially his first recommendation is that SCIENTISTS MUST BECOME MORE VIGILANT. In more detail, this means brushing up a bit on their statistics, taking their job as peer reviewers of professional articles more seriously, mitigating the hype surrounding unusual research findings, being more watchful for professional fads, and being a little more suspicious of whatever they read in print. Hardly revolutionary, and somewhat condescending.

“Become more vigilant” is about all he can say to fellow scientists if he wants to maintain credibility. Anything else, like government supervision or professional regulations about how to conduct proper science, would destroy science itself. So he directs his next directives to non-scientists - universities, research institutes, journal editors and foundations. He would like them to stop providing incentives to scientists and academics that promote a lack of vigilance - number of published articles, citation intensity, implicit funding demands to overstate expected research results, organisational promotion etc.

But it is at this point that Ritchie’s ship of a new science runs aground and founders. He admits that scientists themselves are complicit in the web of incentives he abhors. In fact they want them:
“What’s particularly disconcerting is that the people entangled in this thicket of worthless numbers are scientists: they’re supposed to be the very people who are most au fait with statistics, and most critical of their misuse. And yet somehow, they find themselves working in a system where these hollow and misleading metrics are prized above all else. ”


Of course they are. So Ritchie’s killer app is an extraordinary proposal for the establishment of an essentially new profession of the “meta-scientist,” that is a group of scientists who study the work of other scientists. Part of this proposal are suggestions for new journals devoted to this meta-science, including the reporting of results of research flops, so called null result studies, which didn’t lead anywhere. He also wants public “pre-registration” of research intentions and expectations, as well as “Open Source” free access to registered research and its results. He thereby cleverly keeps scientific regulation in the family, as it were, away from politicians, government bureaucrats, and the un-lettered masses.

Ah yes, Dr. Ritchie, may one ask who controls the controllers? Will the world need meta-meta-science in a few years’ time? And isn’t your idea of pre-registration just a teensy bit bureaucratic and of unproven scientific worth? It’s an idea that may be suitable for big government-funded drug studies simply because of the fortunes to be gained. But for evaluating the reaction of mice to increased testosterone, for example, such regulation seems highly inappropriate. Then there’s the issue of the scientific police who would enforce the registration and supervision of research. Would their approval be necessary for changing a study’s direction mid-stream? And would the penalties for non-compliance be civil or criminal, do you think?

Is it too much to assert that the condition in which science finds itself today is no different from the one it found itself in when the Royal Society was founded in 1660, or, for that matter, in the ancient groves of Grecian academe? In fact I’m willing to bet that there are proportionately fewer scientific hacks in the world today than there have ever been, thanks to modern procedures of accreditation and the spread of information through modern technology.

So what is the point of Ritchie’s proposals? Every example of error or malfeasance that Ritchie cites is an instance of the current community of scientists exposing and discounting flakey results. More will certainly be uncovered. Isn’t that the important fact - they will be uncovered? Not as fast as Ritchie would like apparently. But then can he demonstrate scientifically how much quicker good results will be available? And at what cost? And given that eventually all scientific conclusions will be subject to correction, is it possible that he’s just blowing smoke?
Gavin
1,114 reviews · 415 followers
July 25, 2020
Wonderful introduction to meta-science. I've been obsessively tracking bad science since I was a teen, and I still learned loads of new examples. (Remember that time NASA falsely declared the discovery of an unprecedented lifeform? Remember that time the best university in Sweden completely cleared their murderously fraudulent surgeon?)

Science has gotten a bit fucked up. But at least we know about it, and at least it's the one institution that has a means and a track record of unfucking itself.

Ritchie is a master at handling controversy, at producing satisfying syntheses - he has the unusual ability to take the valid points from opposing factions. So he'll happily concede that "science is a social construct" - in the solid, trivial sense that we all should concede it is. He'll hear out someone's proposal to intentionally bring political bias into science, and simply note that, while it's well-intentioned, we have less counterproductive options.

Don't get the audiobook: Ritchie is describing a complex system of interlocking failures. I need diagrams for that sort of thing.

Ritchie is fair, funny, and actually understands the technical details. Supersedes my previous fave pop-meta-scientist, Ben Goldacre.
Julia
98 reviews
August 13, 2020
This is one of the most important books I’ve read in the past few years. Ritchie skillfully examines the problems plaguing modern science, looks at the motivations that cause them, and posits solutions. Science Fictions drives home the importance of skepticism in all things, even science.
Andy
1,615 reviews · 526 followers
April 17, 2021
This is an important topic, and the author does an excellent job explaining problems like p-hacking. But these issues are nothing new to scientists, so the main value of this book lies in whether it engages and clearly explains things for the general public. And there, I’m afraid the author may end up just increasing confusion by trying to turn everyone into a scientist. In terms of solutions to bad science, I wonder if we don’t need to start by addressing the underlying culture of corruption and incompetence, of which bad science is just one symptom. (See Detroit: An American Autopsy.)

Nerd addendum:
With nutritional research, for example, he makes a good point that the news media do a bad job of hyping all these small or shoddy or irrelevant studies. His immediate solution is to teach us all how to read a scientific paper, and then whenever you hear about an interesting study in the news, you should go and somehow (even illegally) get a copy of the study and analyze it for validity. That seems nuts and unfair. According to the book, doctors and scientists and editors of scientific journals are widely incapable of this, so how is every citizen going to master this skill? And why should you?

I think if people (scientists, doctors or otherwise) are really interested in nutritional epidemiology, they should go deep and read Gary Taubes, e.g. That gives you an understanding of the research literature going back decades, explaining what is wrong with the original studies that are often cited, and giving the implications in plain language. Then if you want, you can look up a few of the studies that he has detailed and you’ll be able to know what to look for and to verify whether they say what he says they say. You have to know stuff to learn stuff.

What matters is not the latest news item, but the overall weight of the best available evidence.

Another problem with his commentary on nutritional epidemiology is that he goes on from there to warn in general about all observational epidemiology, without pointing to when observational epidemiology does supply robust actionable evidence (trans fats, lung cancer, SIDS, etc., etc.).

Other books to consider:
- Good Calories, Bad Calories: Challenging the Conventional Wisdom on Diet, Weight Control, and Disease by Gary Taubes
- Food Politics: How the Food Industry Influences Nutrition and Health by Marion Nestle
- Crisis of Conscience: Whistleblowing in an Age of Fraud by Tom Mueller
- Bad Blood: Secrets and Lies in a Silicon Valley Startup by John Carreyrou
- Bad Science by Ben Goldacre
- Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients by Ben Goldacre
- Clinical Epidemiology: The Essentials by Robert H. Fletcher
- How to Lie with Statistics by Darrell Huff
- Why We Get Fat: And What to Do About It by Gary Taubes
- Histoire d'un mensonge: Enquête sur l'expérience de Stanford by Thibault Le Texier
- Investigating Disease Patterns: The Science of Epidemiology by Paul D. Stolley
Alvaro de Menard
93 reviews · 112 followers
July 28, 2020
In 1945, Robert Merton wrote:

There is only this to be said: the sociology of knowledge is fast outgrowing a prior tendency to confuse provisional hypothesis with unimpeachable dogma; the plenitude of speculative insights which marked its early stages are now being subjected to increasingly rigorous test.


Then, 16 years later:

After enjoying more than two generations of scholarly interest, the sociology of knowledge remains largely a subject for meditation rather than a field of sustained and methodical investigation. [...] these authors tell us that they have been forced to resort to loose generalities rather than being in a position to report firmly grounded generalizations.


In 2020, the sociology of science is stuck more or less in the same place. I am being unfair to Ritchie (who is a Merton fanboy), because he has not set out to write a systematic account of scientific production—he has set out to present a series of captivating anecdotes, and in those terms he has succeeded admirably. And yet, in the age of progress studies surely one is allowed to hope for more.

If you've never heard of Daryl Bem, Brian Wansink, Andrew Wakefield, John Ioannidis, or Elisabeth Bik, then this book is an excellent introduction to the scientific misconduct that is plaguing our universities. The stories will blow your mind. For example you'll learn about Paolo Macchiarini, who left a trail of dead patients, published fake research saying he healed them, and was then protected by his university and the journal Nature for years. However, if you have been following the replication crisis, you will find nothing new here. The incidents are well-known, and the analysis Ritchie adds on top of them is limited in ambition.

The book begins with a quick summary of how science funding and research work, and a short chapter on the replication crisis. After that we get to the juicy bits as Ritchie describes exactly how all this bad research is produced. He starts with outright fraud, and then moves onto the gray areas of bias, negligence, and hype: it's an engaging and often funny catalogue of misdeeds and misaligned incentives. The final two chapters address the causes behind these problems, and how to fix them.

The biggest weakness is that the vast majority of the incidents presented (with the notable exception of the Stanford prison experiment) occurred in the last 20 years or so. And Ritchie's analysis of the causes behind these failures also depends on recent developments: his main argument is that intense competition and pressure to publish large quantities of papers is harming their quality.

Not only has there been a huge increase in the rate of publication, there’s evidence that the selection for productivity among scientists is getting stronger. A French study found that young evolutionary biologists hired in 2013 had nearly twice as many publications as those hired in 2005, implying that the hiring criteria had crept upwards year-on-year. [...] as the number of PhDs awarded has increased (another consequence, we should note, of universities looking to their bottom line, since PhD and other students also bring in vast amounts of money), the increase in university jobs for those newly minted PhD scientists to fill hasn’t kept pace.


By only focusing on recent examples, Ritchie gives the impression that the problem is new. But that's not really the case. One can go back to the 60s and 70s and find people railing against low standards, underpowered studies, lack of theory, publication bias, and so on. Imre Lakatos, in an amusing series of lectures at the London School of Economics in 1973, said that "the social sciences are on a par with astrology, it is no use beating about the bush."

Let's play a little game. Go to the Journal of Personality and Social Psychology (one of the top social psych journals) and look up a few random papers from the 60s. Are you going to find rigorous, replicable science from a mythical era when valiant scientists followed Mertonian norms and were not incentivized to spew out dozens of mediocre papers every year? No, you're going to find exactly the same p<.05, tiny N, interaction effect, atheoretical bullshit. The only difference being the questionable virtue of low productivity.

If the problem isn't new, then we can't look for the causes in recent developments. If Ritchie had moved beyond "loose generalities" to a more systematic analysis of scientific production I think he would have presented a very different picture. The proposals at the end mostly consist of solutions that are supposed to originate from within the academy. But they've had more than half a century to do that—it feels a bit naive to think that this time it's different.

Finally, is there light at the end of the tunnel?

...after the Bem and Stapel affairs (among many others), psychologists have begun to engage in some intense soul-searching. More than perhaps any other field, we’ve begun to recognise our deep-seated flaws and to develop systematic ways to address them – ways that are beginning to be adopted across many different disciplines of science.


Again, the book is missing hard data and analysis. I used to share his view (surely after all the publicity of the replication crisis, all the open science initiatives, all the "intense soul searching", surely things must change!) but I have now seen some data which makes me lean in the opposite direction.

Ritchie's view of science is almost romantic: he goes on about the "nobility" of research and the virtues of Mertonian norms. But the question of how conditions, incentives, competition, and even the Mertonian norms themselves actually affect scientific production is an empirical matter that can and should be investigated systematically. It is time to move beyond "speculative insights" and onto "rigorous testing", exactly in the way that Merton failed to do.
Scott Lupo
426 reviews · 6 followers
August 8, 2022
AVOID. Here are my reasons:
-From the beginning, this author lost my trust. In the preface, he mentions how he and some colleagues wrote a null paper on the psychic experiments by Daryl Bem and how it was "unceremoniously rejected from the journal that published the original." That leaves the reader thinking he never got the study published, as he then moves on to the next subject. WRONG. Read the notes and you will find it did get published, just not to his liking.

-Read the notes. It's another whole book back there, with some of the notes running to very long paragraphs. Many of them either refute what he originally said or alter the original meaning just enough to make you realize he's trying to pull something. It would be interesting to know how many people actually read citations or notes at the end of books (I couldn't find anything on Google). I would venture to say not many, which I think he purposefully relied on for his narrative.

-Do you enjoy abusive relationships? Me neither. However, that is what this book is like. "I come to praise science, not to bury it; this book is anything but an attack on science itself, or its methods." The next paragraph then explains the only "fragile scrap of hope and reassurance that emerges from the Pandora's box of fraud, bias, negligence and hype" is that scientists have uncovered these things themselves. Throughout the book it's 'science is great but science also sucks really, really bad because...'.

-He has it in for Daniel Kahneman. He really doesn't like him.

-He conflates the social sciences with ALL SCIENCE. Yes, the social sciences are muddy and gray because they deal with human beings, who are muddy and gray in just about everything they do. Creating experiments is very difficult, and interpreting results even more so. But to lump all of science into this category is foolish and leans towards trickery. Throughout the book he switches, sometimes within the same paragraph, from a social science to other sciences.

-Fraudsters, charlatans, flimflammers, and hustlers all use certain phrases in their toolkit of shams. Some of those are things like "you know what I mean", "let's face it", "it should be noted", "that being said". These are the phrases of all those psychics we used to see on tv (John Edward, Miss Cleo, Sylvia Brown). These phrases purposefully leave the door open for interpretation and let the listener/reader fill in the blanks themselves. Yeah, that is fraudster 101 class right there.

-He constantly brings up the oldest cases of science fraud and then tries to compare them to today's frauds. Every case he brings to light ends in one way: they were caught! Because that is what science does. Incredible claims require incredible evidence. He acts like it is the worst thing in the world that science actually caught these things. Apparently, it is never fast enough for the author or he thinks science should be absolutely free of any errors, full stop. I am unsure whether he truly understands the scientific process.

-His conclusions on how to fix these things are paltry at best. In fact, many of his suggestions are already in use today! Others he admits would be impossible to implement.

My only conclusion to this book is that it is a thinly veiled hit piece on science. Every fraudster knows that if you include nuggets of truth in your parable, then it will seem like everything is truthful. That is exactly this book. He even talks about this in the book: that scientists have gotten so good at faking their results that the results don't look 'perfect', so people will buy them. The irony!! The author grandiosely overstates his hypothesis that there is an epidemic of fraud, bias, negligence, and hype in science. Many times I thought I was reading an Onion article made into a book, because he uses all those things in this book.

Okay, I have laid out my reasons, but I also want to give credit where it is due. Science is not perfect; the process is not always efficient, and it does not always incentivise the right things. Welcome to the problem of scaling up and money. Sure, it would be great to have science run without any thought to money or resources. Science for the sake of science. Cool with me. Let's shoot for that ideal and do what we can to get as close to it as possible. But this is not the message of this book. I truly believe this author has problems with two things: the social sciences and the philosophy of science (epistemology). He should consider writing on those subjects instead of attacking the whole of science, especially in a dishonest way like this book. It gets a star for actually writing a book and a star for at least shedding light on some of the issues with scientific research. That's two stars. The same I gave to Michelle Malkin. Enough said.
muthuvel
257 reviews · 151 followers
January 8, 2021
Nobel Laureate economist Daniel Kahneman, in his popular work Thinking, Fast and Slow (2011), wrote with certainty about priming effects, citing various psychological studies and claiming that certain responses can be produced without conscious guidance or intention, and in predictable patterns. It was one of the most widely read bestsellers in the genre, but that certainty came into question a few years later, when the studies he cited failed to replicate or turned out to have been published with inadequate data. He even acknowledged that he had been wrong to be so certain. What happened here?

"The books we’ve just discussed were by professors at Stanford, Yale and Berkeley, respectively. If even top-ranking scientists don’t mind exaggerating the evidence, why should anyone else?"


Following Kahneman, we have similar claims by NASA, pop-science books like Why We Sleep, studies of austerity and Mediterranean diets, publication biases, and the issues of p-hacking, cherry-picking, salami slicing, self-citation, self-plagiarism, ghost citations and reviews from ghost peers, and coercive citations demanded by accepting journals.

Most people already in the field would know most of the replication crises discussed in the book, but I guess their supervisors would mostly have calmed them down, saying it's okay not to be able to replicate a scientific study, due to human error among other factors. It's a conditioning that runs contrary to the objectives set by the founding figures of the scientific publishing community, like Boyle.


After all, the practitioners of science are ultimately susceptible to becoming something more like an organized cult, working for incentives of various kinds, from academic survival to personal fame and status to meeting bureaucratic standards, while forgetting the basic tenets of what scientific research is all about.


The last book I read was the work of a Wittgenstein student showcasing how social science was massacred by social scientists (sociologists, social psychologists, economists, and political scientists, to name a few), whereas this one does the same for the natural sciences.


But rather than going philosophical, this book limits itself to how science is practiced today rather than what science actually is. Maybe there are no better methods for understanding the world, but as Winch said, it's better to stay vigilant and question 'the extra-scientific pretensions' of scientific communities, which create their own norms and beliefs in their culture of practicing science.


Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science (2020) ~ Stuart Ritchie
Yuri Krupenin
114 reviews · 341 followers
April 24, 2024
A good complement to Goldacre and Talantov, walking through the weak points of the modern scientific process. A LOT of footnotes and citations; together they form a separate book of supplementary material.

For all its solid factual overview (with some debatable points, which I won't dwell on here since they are covered in neighbouring reviews), my main complaint is the... mood of the book, so to speak: after reading it, it is hard to shake the impression that the academic community is a pack of idiots and charlatans who need constant supervision, that the whole field is completely dysfunctional, and that only the author knows how to solve these problems.

Goldacre somehow managed to balance this more gracefully, showing serious progress alongside the serious problems.
J.J.
16 reviews
January 24, 2021
First of all the title slaps, this is the kind of word play you want in a popular science book title.

Ritchie grabs your attention with some spicy cases of scientific fraud, but follows up with other pernicious problems that lead science astray. He goes on to suggest changes to the way research is conducted, funded, reviewed and published to right some of these wrongs. A worthwhile read (or listen) for researchers or mere muggles like myself.
Sophia
227 reviews · 89 followers
November 29, 2020
I highly recommend this book for anyone planning (or considering) to do science, whether a bachelor's, a master's, or beyond. It's a great overview of how science is actually practiced, and how it can so easily go wrong. I also recommend it to current scientists, because it's a humbling reminder of what we're doing wrong, and a quick update on things we might have been taught as facts that have actually been disproven in the meantime.
The book is exceptionally well structured, with very clear writing; it's very engaging, switching between as much information as is needed to understand a given concept, compelling examples, and discussion as to why it matters, what people might object to, etc. Really really good.
However, the author fails to give proper due to the main strength of science: its ability to self-correct. This book is described as an "exposé", but in reality all of what he mentions has been known for decades, and in fact every single example he gives of fraud, negligence, bias, or unwarranted hype was uncovered not by external journalists but by other scientists. It was peers who read papers that looked suspicious and did more digging, or whole careers built around developing software and tools for automatically detecting plagiarism, statistical errors, etc. It was psychology itself that "wrote the book" on bias, which was fundamental to exposing the biases of scientists themselves. And more often than not, it was just a future study, one that tried something better that should have worked but didn't, that disproved a flimsy hypothesis. Sure, fraud, hype, bias, and negligence are dragging science down, but science isn't "broken", it's just inefficient. Wasting a lot of money on bad experiments and scientists should be avoided, but in the end, a better truth tends to bubble up regardless. Anyone who has had to defend science against religious diehards will be particularly aware of this.

Also missing is proper consideration as to why these seemingly blindingly obvious problems have been going on for so long. As an insider, here are some of my answers:
- All this p-hacking (trying different analyses until something is significant). Scientists are not neatly divided into those who immediately find their results because of how fantastically well they planned their study, and those who desperately try to make their rotten data significant. Every. Single. Study. has to fine-tune its analysis once the data are in, not before. Unless you are in fact replicating something, you have absolutely no idea what the data will look like, or what the most meaningful way to look at them is. This means you can't just tell scientists "stop p-hacking!"; you need an approach that acknowledges this critical step. Fortunately, an honest one exists that can be borrowed from machine learning: splitting your data into a "training" and "testing" dataset, where you fine-tune your analysis pipeline on a small subset, then rely on the results of applying it to the larger one, using only and exactly the pipeline you previously developed, without further tweaking.
- The file drawer problem (null results not getting published). I think especially in psychology, statistics courses are to blame for this: we don't reeeally understand how the stats work, so we rely on Important Things To Remember that we're taught by statisticians, and one of these is that "you can't prove the null hypothesis". In practice this gets interpreted as "null results are not real results, because nothing was proven". We are actively discouraged from interpreting absence of evidence as evidence of absence, but sometimes that is exactly what we should be doing; certainly not with the same confidence and in the same way with which we interpret statistically significant positive results, but at some point, a study that should have found something and didn't is a meaningful indication that the thing might not in fact be there. A useful tool for breaking through this narrow-minded focus on positive results is equivalence testing, where you test not whether two groups are different but whether they are statistically significantly "the same" (to within some margin). This is a huge shift in mindset for many psychologists, who suddenly learn that you can in fact have a legitimate result that there was no difference to be found. Knowing this will, I suspect, make people less wary of null results in general.
- Proper randomization (and generally the practicalities of data collection). The author at one point calls it a mistake that a trial of the Mediterranean Diet assigned the same diet to everyone in a family unit, thus breaking the randomization. For the love of God, does he not know how families work? You cannot honestly ask members of the same family to eat differently! Sure, the authors should have applied proper statistical corrections for this, but sometimes you have to design experiments for reality, not for a spherical world.
- Reviewers nudging authors to cite them. This may seem like blatant self-promotion, but it's worth remembering that peer reviewers are specifically selected from the EXACT SAME FIELD, so odds are good that they have in fact published relevant work, and odds are even better that they are familiar enough with it to recommend it. That is not to say none of it is about racking up citations, but we shouldn't assume bad faith by default when legitimate alternative explanations exist.
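The train/test remedy described in the first point above can be sketched in a few lines of Python. This is a minimal, illustrative sketch only: the function name, the 20/80 split fraction, and the use of a simple t-test stand in for whatever analysis pipeline a real study would tune.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated study data: one outcome measure for two groups (no real effect here).
group_a = rng.normal(loc=0.0, scale=1.0, size=300)
group_b = rng.normal(loc=0.0, scale=1.0, size=300)

def split_exploration_confirmation(data, explore_frac=0.2, rng=rng):
    """Shuffle one group's data and split it into a small exploration set
    (used freely to choose the analysis pipeline) and a larger confirmation
    set (analysed exactly once, with the frozen pipeline)."""
    shuffled = rng.permutation(data)
    cut = int(len(shuffled) * explore_frac)
    return shuffled[:cut], shuffled[cut:]

a_explore, a_confirm = split_exploration_confirmation(group_a)
b_explore, b_confirm = split_exploration_confirmation(group_b)

# Phase 1: tune the analysis freely on the exploration subset
# (e.g. decide on outlier trimming, transforms, covariates).
t_explore, p_explore = stats.ttest_ind(a_explore, b_explore)

# Phase 2: run the *frozen* pipeline exactly once on the held-out data.
# Only this p-value is reported.
t_confirm, p_confirm = stats.ttest_ind(a_confirm, b_confirm)

print(f"exploration p = {p_explore:.3f}, confirmation p = {p_confirm:.3f}")
```

The key discipline is that any amount of tweaking is allowed on the exploration subset, but the confirmation subset is touched only once, with the pipeline already fixed.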
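The "statistically the same" test mentioned in the second point is commonly implemented as two one-sided tests (TOST). Below is a minimal sketch under Welch (unequal-variance) assumptions; the equivalence margin of 0.5 is an illustrative choice, since in practice the margin must be justified on substantive grounds.

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, margin):
    """Two one-sided tests (TOST): conclude the group means are
    'statistically equivalent' only if the mean difference is significantly
    above -margin AND significantly below +margin."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # Welch standard error and degrees of freedom
    vx, vy = np.var(x, ddof=1) / nx, np.var(y, ddof=1) / ny
    se = np.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx**2 / (nx - 1) + vy**2 / (ny - 1))
    # Test 1 -- H0: diff <= -margin  vs  H1: diff > -margin
    p_lower = stats.t.sf((diff + margin) / se, df)
    # Test 2 -- H0: diff >= +margin  vs  H1: diff < +margin
    p_upper = stats.t.cdf((diff - margin) / se, df)
    # Equivalence is claimed only if BOTH one-sided tests reject,
    # so the overall p-value is the larger of the two.
    return max(p_lower, p_upper)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)   # two groups drawn from the same distribution
y = rng.normal(0.0, 1.0, 200)

p = tost_equivalence(x, y, margin=0.5)  # margin: largest difference we'd call negligible
print(f"TOST p = {p:.4f} -> {'equivalent' if p < 0.05 else 'inconclusive'}")
```

Note the asymmetry with an ordinary t-test: here a small p-value supports "no meaningful difference", which is exactly the kind of publishable null result the file drawer problem suppresses.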

Another little detail not mentioned by the author is that good science is f*cking hard. For my current experiment, I need a passing understanding of electrical engineering to run the recording equipment, a basic understanding of signal processing and matrix mathematics to clean and analyze the data, a good understanding of psychology for experimental design, a deep understanding of neuroscience for the actual field I'm experimenting in, a solid grasp of statistics, sufficient English writing skills, separate coding skills for the experimental tasks and the data analysis in two different languages, and suddenly a passing understanding of hospital-grade hygiene practices to deal with COVID! There's just SO MUCH that can go wrong, and a failure at any point is going to ruin everything else. It's exhausting to juggle all that, and honestly, it's amazing that we have any valid results coming out at all. The only real solution is larger teams and less focus on individual achievement. The more eyes on the scripts, the fewer bugs; the more assistants available to collect data, the fewer mishaps; the more people reading the paper beforehand, the fewer mistakes slip through. We need publications from labs, not author lists; the exact contribution of each person can be specified somewhere, but science needs to move away from this model of venerating the individual, because this is not the 19th century anymore: the best science comes from groups. On CVs, we shouldn't write lists of publications, we should write project descriptions (and cite the paper as "further reading", not as an end in itself).
~~~
Scientists need the wake-up call this book provides. Journalists and interested laymen will also greatly benefit from understanding why a healthy dose of scepticism is needed towards any single scientific result, and how scientists are human too. But the take-home message that can come through from this book, and that is not actually true, is that scientists are incompetent, dishonest, or both. The author repeatedly bashes the poor science and science communication that have eroded public trust in science, yet ironically this book essentially highlights all of that in neon letters, making sure trust in science is eroded further. To some extent that is warranted, but the author could have done more to defend the institution where it deserves defending, and, as an insider, could have said more about the realities an individual scientist faces when making these poor decisions. It's worth remembering that science has not gotten worse: we're still making discoveries, still disproving our colleagues, and still improving quality of life. We could just be doing it more efficiently.
Profile Image for A.
432 reviews · 43 followers
April 4, 2022
8.5/10.

Quite the revelatory look at "The Science", this book is. Ritchie shows how the ideal of objectivity in scientific publishing is not matched by actual practice. The hallmark of science is its replicability: if a finding is not replicable, it must be presumed due to randomness or error, neither of which is a reliable basis for understanding the world, treating illnesses, understanding the human mind, or diagnosing economic ills. The problem is that scientific studies are not replicating. Roughly 60% of psychology studies do not replicate. If I make a bald proposition about the mind and flip a coin, I am more likely to be right than if I read a psychological study. In general, scientific studies have only a 50% replication rate. That is not good. Why is this the case?

Firstly, no one can even attempt to replicate most studies: 99% of all pre-clinical medical trial papers do not provide enough information for replication. When replication was attempted, only 11% of the trials could be replicated. But often no one even tries: only 0.1% of economics studies and 1% of psychology studies have subsequent replication attempts. So we essentially have a head-over-heels rush to publish, with no one verifying anyone else's results; and when verification is attempted, there is a 50% chance it fails.

This wastes at least $50 billion of research money per year: useless studies, useless results. The drive for these results continues, however, because of academic incentives. Academics get promoted based on how many studies they publish and how much they get cited (or, in China, get paid directly based on publication count). This leads to the statistical finagling of results so as to publish as much as possible, as well as to citation rings, where academics make deals to cite one another: "I'll publish your paper if you cite three of mine." One can also suggest one's own peer reviewers, allowing you to pick your friends and simply get published.

Statistical finagling works thuswise. Let's say you do a study with a large manifold of variables and get a null result for your initial hypothesis. This is bad, as scientific journals don't like null results. But you have lots of variables to work with! So you look at every possible combination of variable and effect, and finally find one that is statistically significant (p < .05, meaning that if there were no real effect, a result at least this extreme would turn up less than 5% of the time). The problem is that the more configurations of the data you search for statistical significance, the more likely you are to find one below the cutoff through pure randomness. I tell the journals, "look, I found someone who shares my birthday, how rare!", but if I talked to a million people, of course I found someone with my birthday. And if even that fails, I can verbally spin my null result to seem as if I found something novel, impactful, or interesting.
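The birthday logic above can be simulated directly. A minimal sketch (group sizes and the number of variables are illustrative choices) showing how searching 20 independent, purely random comparisons inflates the chance of at least one "significant" finding to roughly 1 - 0.95**20 ≈ 0.64:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def any_false_positive(n_variables=20, n_per_group=30, alpha=0.05):
    """Run one fake study: compare two identical populations on n_variables
    unrelated outcomes; return True if ANY test comes out 'significant'."""
    for _ in range(n_variables):
        a = rng.normal(size=n_per_group)  # no real effect anywhere
        b = rng.normal(size=n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True
    return False

n_studies = 1000
hits = sum(any_false_positive() for _ in range(n_studies))
print(f"Studies with at least one 'significant' result: {hits / n_studies:.0%}")
# Roughly 1 - 0.95**20, about two thirds, even though every effect is pure noise.
```

In other words, a researcher who keeps slicing the data is almost guaranteed a publishable p-value; that is the mechanism, not misfortune.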

There are also many blatant cases of fraud, many committed by researchers at institutions of the highest prestige. Paolo Macchiarini claimed to have found an amazing way to transplant tracheas. Reading his journal articles, you would think he was having great success: he had operated on five different people, and it was to revolutionize medicine. But then it turned out that four of them had died . . . one of them seven weeks before he published his paper. Macchiarini just didn't let people know. From publishing in The Lancet and being lionized by his countrymen, Macchiarini fell from grace. And this story is not an outlier: 15% of scientists admit to knowing someone who has faked data, and 10% of cell line images are copied from old trials and studies. Medical research (a third of which is funded by pharmaceutical companies) can be sliced up salami-style to make it seem like many papers support a given treatment. And 45% of currently used medical treatments, when properly reviewed, lack sufficient evidence to warrant belief in their efficacy.

Many ills abound in current-day science. As a practical measure, you should not trust popular science books and headlines about science. They will exaggerate and hype studies out of all proportion, applying them to disparate sectors where their data do not apply. Act on old wisdom and passed-down maxims as opposed to the immediate diktats of "The Science". The Greco-Roman classics will get you far. Use Stoicism to master your passions and become stolid in your mind. Understand the Spartan and Platonic teaching that the mind's health reflects the body's health. Be at one with your nation and heritage, for your destiny is theirs. Keep natural, in all aspects. Man has always consumed meat. Eat fresh and chemical-free meat, as much as that is possible. Revive old advice from long-lost books. Those before us may have lived for a shorter time span, but they certainly were stronger and healthier (in both mind and body). Get outside. Bask in the sun. Talk with people without a mask. Be a human and do human things. Get away from modern "innovations" and revive tradition. Trust in the sages of old and ignore the charlatans of today. Most definitely, be wary of science and the fury of delusional passion about its supposed findings. It is much more corrupt than you think.
146 reviews
April 15, 2021
the audience of this book is clearly lay people with no idea what's going on in science, and you know what, fine, this is probably decent for that audience, in the same way that middle school teaches you a lot of things that aren't really correct, but whatever, you have to start somewhere. this is a sloppy presentation of the mainline narrative that emerged from the replication crisis, uncritical and myopic. the presentation of statistics is particularly painful. it is hard to take seriously an account that presents psychology's problems as if they are universal or new; that sees science as producing results that are either correct or incorrect, rather than subject to uncertainty; that centers p-hacking and replicability as the fundamental problems; that offers no analysis that hasn't already been rehashed umpteen times; that fails to cover the vast, exciting and recent meta-scientific literature (from within psychology itself!); that... the list goes on. this book irritated the shit out of me
Profile Image for Liquidlasagna.
2,350 reviews · 77 followers
August 9, 2022
I'm not happy with his interpretation of the Stanford Prison Experiment by Zimbardo

thinking it fell apart under closer scrutiny

---

Ritchie is a critic of the conformity priesthood
who i think doesn't realize that he is still a part of the conformity priesthood

And i'm really really unimpressed with his explanations where science goes wrong:

a. fraud
b. bias
c. negligence

Stuff every seven year old would realize

and

d. hype

Doesn't every mother tell every five year old
"Don't believe everything you hear?"

I think the basic theme of the book makes it a two star book
but on further thought, given how dismissive he is on whatever particular topic he agrees or disagrees with, i have to give this a one-star review.

All I can really say is that Ritchie is a man of thousands of opinions, and you'll shrug at half of them. Aside from some very basic basic themes, there's really not much interesting in his writings, or interviews, no matter how hard i look.

Now, Philip Zimbardo is interesting, and tackles deep subjects like authority and shyness, and sure he's got a minority of critics too...

i guess the Stanford Prison Experiment goes into his cookie cutter approach of bias and fraud

page 29
"Despite the enormous attention it’s received over the years, the ‘results’ of the experiment are scientifically meaningless."

Yet i've watched the documentary of the Stanford Prison Experiment and i'm still not terribly convinced of the critics. All the new criticisms don't seem to be all that revealing if you actually watch Zimbardo talk to his students during the running of the experiment.

And all the commentaries afterwards

---

The more interviews with Ritchie i read or watch, the more disappointed i get.
And i just think "increasingly diminishing returns" sums it up well.

Even worse is the blurb
"will take a Freakonomics-style look at the implications..."

Considering i think Freakonomics was an entire set of books of the authors' confirmation bias and cherry-picking, it's even more depressing... since i didn't think they were very good books either.

---

amazon review

Disappointing

On the plus side, this book includes some well-written summaries of major scientific fraud events over the last couple of decades. But even taken collectively along with an assumed much greater amount of undetected fraud, it's hard to argue that these "undermine the search for truth."

The author does not provide any support for the proposition that fraud has more than a negligible impact on the overall progress of science.

The bulk of the book discusses other issues such as replication failures, errors, statistical malpractice, gaming and bad incentives. My biggest gripe is that the author does not even mention, let alone reply to, people who disagree with him. You'd think a guy who spends 352 pages lecturing everyone else on honest reporting of results in all their messy reality would take his own medicine.

There is a huge literature on the questions the author discusses, much of it opposing his analysis, but no dissenting voices are mentioned in the text or cited in the notes.

Another issue is the discussion applies to a small sliver of science, but is presented as having broad application. In most fields, Ernest Rutherford's attributed dictum applies, “If your experiment needs statistics, you ought to have done a better experiment.”

Published results either do not depend on statistics at all, or have significance values so extreme there is nothing to worry about. In the fields that can only produce results of marginal statistical significance, investigators generally only test hypotheses for which there is strong theory and prior evidence.

The author's discussion applies mainly to fields like nutrition where there is little prior reason to think factors like "eating more potatoes" are either good or bad for you, and there are so many confounding factors and so much measurement uncertainty that the sample sizes required for meaningful answers are impractical.

I would argue that testing essentially random hypotheses on inadequate data isn't really experimental science in the first place.

I would call it an exploratory initial effort to understand things well enough that useful experiments can be run.

Just as hard cases make bad law, fields in which meaningful experiments are hard to run in the first place are not the right examples to think about problems and solutions for all of science.

One example of this unmentioned assumption is the author's insistence, without discussion, that null results should be published with the same attention as positive results.

So if Kepler had published his Epitome Astronomiae in a journal, using Tycho Brahe's exceptionally accurate measurements to establish heliocentric astronomy, the journal should have also published articles by hundreds of traditional astronomers asserting that their (inferior) data could not reject the Ptolemaic system.

The author's implicit assumption is that the only difference between a positive and null result is random statistical noise. But good scientists get positive results by carefully considering which hypotheses to test and using rigorous methodology to minimize noise. Many null results come from less careful workers.

Of course, a negative result should be published. That is, if one investigator finds strong evidence for a hypothesis, another investigation finding strong evidence, or even moderate evidence, against the hypothesis should merit publication.

But a null finding, an experiment that is consistent both with the hypothesis and its alternative, is generally of little value.

A confirming result is of some value, of course, but mainly if it adds something to the original. For example, if the original had marginal statistical significance, more data supporting the same result is useful. Or if the new result changes some of the original paper's conditions (perhaps it is run on different kinds of subjects), it adds to the credibility.

But an exact replication of a study that was already well done, and that either did not rely on statistics or had extreme significance, is only useful as a check against fraud.

My final objection is the author lets his political opinions creep into the book, which is inappropriate when "bias" is one of his declared enemies.

He cheerfully and without comment inverts his principles for climate change and police shootings, and he dismisses without discussion positions considered anti-scientific like objections to GMOs or combination single-strain vaccines.

I'm not taking the opposite side of these issues from the author, but I do think he should apply the same standards to all questions, or explain the differences.

My guess is that the author thinks there are some topics that should be off-limits to lay public discussion because the damage from people taking the wrong position outweighs his general preference for openness and rational discussion. But this position is not stated nor defended.

Overall, I can't recommend this book to anyone. If you're well-read in this field, you won't find enough new to be worth the effort.

If you're not well-read in this field, the one-sided and narrow account is a poor introduction.

I will say that there's nothing false in the book, and it's reasonably pleasant to read. You will learn some things and, if you keep in mind that there's another side to the case, you might find the book useful.

Aaron C. Brown
Profile Image for Chris Boutté.
Author of 8 books · 215 followers
August 1, 2022
2nd read:
I read this book when it first came out in 2020, and Stuart was actually the first guest on my podcast. I learned about issues with what’s presented to us as science from Last Week Tonight, and I wanted to learn more, so Stuart’s book enlightened me to all of the issues happening within the scientific community. Two years later, this book holds up, and now that I know more about the topic, a lot of what he discusses in the book makes much more sense. I really think this should be read by the general public because we blindly trust most of the research that comes our way, and we should be much more skeptical.

Conspiracies have become much worse since this book was first published, and at the end of the book, Ritchie discusses how people questioned whether or not he should be writing this book. Their concern was that this would give people like conspiracy theorists a way to further discredit science, but I think Stuart does a fantastic job explaining why that should be the least of our concerns. So, go get this book if you haven’t yet, and start teaching others how to be better at spotting bogus studies.


1st read:
Incredible book that I binged in a day. As an influencer who often references psychological studies but also knows how much bad science is out there, I’m always trying to learn more about this subject.

This author did a great job not just giving examples of bad science, but he explains WHY it’s happening and offers solutions. Absolutely loved this book and hope some journalists read it as well before they keep reporting on hyped up science.
Profile Image for Agne.
506 reviews · 17 followers
December 6, 2020
Extremely informative and well argued. I would suggest it to anyone who has any contact with science in their daily life (so... everyone). I loved the examples and statistics and that it's at the same time really approachable. For the layperson, it's pretty shocking to hear how null results and replication studies have been treated by even reputable journals.

There are a bunch of solutions offered at the end.

The only downside is that an aspiring scientist who reads this book might throw in the towel before they even start. It sort of implies that it is basically impossible to do anything worthwhile with the sorts of participant samples that junior scientists have access to (alas, the dreaded p-value). Not being able to pursue your own ideas before you get that million-dollar grant, after years of being a small cog, may discourage some. It's not the romantic ideal. But I guess it's for the best.

***
“Science, the discipline in which we should find the harshest scepticism, the most pin-sharp rationality and the hardest-headed empiricism, has become home to a dizzying array of incompetence, delusion, lies and self-deception.”
Profile Image for Hanka Jirovská.
99 reviews · 4 followers
October 10, 2021
highly recommend to anyone interested in the behind-the-scenes of science.

the scientific process is riddled with human flaws: there's fraud, bias, and negligence. the means of publishing papers to share the findings with the general public has in fact become the sole objective of scientific endeavours. we're hyping up the results instead of being humble about our knowledge. at some point in the book, Ritchie refers to "the natural selection of bad science" which is a pretty fitting description for the on-going feedback loop.

now, this all sounds very pessimistic. but next to skilfully describing the context surrounding all the flaws in the scientific system, Ritchie also shows a very strong notion of what science should stand for and how we can start getting closer to it. "Society takes science remarkably seriously. Scientists need to reciprocate by holding themselves to far higher standards."

(also, a bunch of fun facts included)
Profile Image for A.E. Bross.
Author of 7 books · 43 followers
January 14, 2022
This book isn't for the faint of heart. And I don't mean that anything in it is too intense or anything along those lines. Ritchie, who is also the narrator/reader of this audiobook, does an excellent job of explaining his perspective, the research behind it, and the various ways the problem he outlines can be alleviated (though there's no magic-bullet method that can correct all the ills of fictions in scientific research/publishing). No, what makes it not for the faint of heart is that, once you've read it, you can't un-know the fact that many, many scientists are taking more than their share of shortcuts in the world of academia, in order to placate the great beast that is "publish or perish". It's a really excellent book on the topic; the reader can't help but have their eyes opened to what's going on, and will read all the more closely when they see the newest, hottest scientific paper making the rounds in the media.
Profile Image for Pete.
980 reviews · 64 followers
January 11, 2021
Science Fictions : The Epidemic of Fraud, Bias, Negligence and Hype in Science (2020) by Stuart Ritchie is an excellent book that looks at the many problems in science and what can be done to improve the situation. Ritchie is a Psychologist at King’s College London.

Science Fictions goes through how science currently works, then details the replication crisis, in which attempted replications of studies, particularly in psychology but also in other fields, demonstrated serious problems with science as it stands.

The problems of outright fraud, bias, negligence and hype are then examined. P-hacking, dropping studies with null results, self-citing and chronic hype are all well described. The book has many examples of these problems.

Ritchie also describes the Mertonian norms that science should seek to uphold: universalism, disinterestedness, organised skepticism, and communalism, the sharing of results.

The book gets into why scientists engage in dubious activities: they want to succeed, and often they believe their hypothesis is true, it just needs a bit of help. This is Noble Cause Corruption, though Ritchie doesn't use the term. The push for scientists to publish and raise their h-index, and for journals to boost their impact factor, is also outlined.

Ritchie also describes how science can be improved: automated checking for statistical errors, pre-registration, open data, publication in open-access journals, and credit given for replications and for well-obtained null results can all help. He also points out that science has had plenty of recent successes, and even with its current problems it still achieves incredible things.

Science Fictions is a fine book that is well thought through, well written and fun to read. It would be very hard not to learn something from reading it.
Profile Image for Steve.
1,048 reviews · 60 followers
August 15, 2020
Really enjoyed this description of some of the big problems in science today. This is not in any way an anti-science book; Ritchie makes clear that he wants to improve science, not dispense with it. Along with describing problems, he also describes much of the process of science, which I enjoyed.

He spends a lot of time on the reproducibility crisis, p-hacking and other statistical cheating, and many other issues one hears about when science problems make the news.

This book has been well reviewed in general publications, but I was curious how science journals would review it. The only review I found in a professional science periodical (in the ultra-prestigious Nature) was basically positive with a few criticisms.
Profile Image for Analia.
108 reviews · 8 followers
July 2, 2022
As a scientist in training, this book simultaneously made me feel incredibly validated and threw me into a deep crisis that left me googling grants, bookmarking open source statistical software, and looking up alternative careers. I think that's a sign of how important this book is though. I highly recommend this to anyone that ever interacts with research findings and their interpretation (meaning literally everyone, especially with COVID), because it will give you a realistic perspective of how science works. After reading this, you will have a solid idea of how to evaluate research presented to you from any outlet, which I feel is something we all struggle with when we're flooded with as much information as we are today.
Profile Image for Dwayne Roberts.
413 reviews · 47 followers
August 27, 2023
An important meta-science book about how science is broken and how to fix it.
Profile Image for Amirmansour .
85 reviews · 5 followers
March 23, 2022
Just thinking about how many BS books I have read as science when, in reality, they're fiction, not science.
A great and enjoyable read.
Profile Image for Annas Jiwa Pratama.
108 reviews · 6 followers
Read
December 11, 2021
Back in around 2017, I think, was when I was properly introduced to the crisis in psychology and the open science movement. I was doing my masters in health psychology, which in small part was influenced by my then-fascination with nudging and Brian Wansink's research in particular. I felt a little betrayed when I found out that he was a fraud, but was mostly embarrassed. Since then, I've grown more and more jaded, knowing less and less about what in my field was true and what wasn't, and I've tried to stay updated on the reform movement and other crises in psych.

This book feels like a summary of what I've observed (mostly via twitter and papers) over the years since. Ritchie offers a broad overview of the symptoms of bad science: fraud, bias, negligence, and the overselling of findings, then presents possible causes and what's currently being tested and rolled out to try to prevent scientific malfeasance.

It's a great introduction, especially if you've heard about the 'replication crisis' or 'open science' and wanted a glimpse of the kitchen. This is probably *the* intro book for meta-science, one that focuses mostly on why it is needed and why it might work out. If you are a practitioner or a scientist, as I (would like to) think I am, you probably won't find anything too surprising. Nonetheless, it is still a good refresher, and there are more practical sections too (the 'How to Read a Scientific Paper' appendix feels like really good material for teaching students and laypersons how to apply skepticism when reading about science). I'd like to add one thing to the book, however: organize. I discovered a chapter of scientists here in Indonesia who are laying the groundwork for better science. Policy advocacy, networking with international communities, creating open science platforms, helping each other get on top of new methods, all-around amazing stuff. Seeing people actually doing the work really makes you believe that things can get better.


However, it is very important to note as well that we need to take the reform movement with the same skepticism that the book proposes we have for scientific findings in general. After all, meta-science is also a science, and the people running it have their own stakes and biases. The most popular review of the book on this site suggests that the approach Ritchie proposes is itself a kind of new 'meta-hype', just as unproven, and that 'flakey results' will eventually be subject to correction anyway. I somewhat agree, in that it is important to look critically at meta-science and reforms as well. In psychology, for example, the reform movement has been criticized for being overly focused on methodological reforms, ignoring that the field's actual needs are more foundational, and for the reforms themselves lacking a formal approach (as well as for reformists not always being open and welcoming themselves). However, I feel that the 'corrections happen anyway' argument is misguided. False findings aren't found 'automatically'. As the book has shown, it takes a lot of work and sometimes invites a lot of pushback from powerful individuals and institutions. Corrections don't magically happen. To dismiss this book on that basis is, to put it bluntly, kind of whack.

Tangents
• I learned that peer review is actually a rather recent invention, which kind of makes sense when you think about the logistics. Kind of funny reading how Einstein fumed at the thought of having his paper reviewed pre-publication.

• Also, this book is actually rather nuanced, if I do say so myself. I did not expect the discussion of the Trump administration’s attacks on climate science using the language of meta-science to be included, but there it was.

• Feels kind of bad to sub-tweet a Goodreads review lmao

148 reviews, 1 follower
December 15, 2021
A brilliant and timely look at the problems that afflict contemporary science, and thoughtful, inventive solutions to those problems. Stuart Ritchie delves into a central problem in contemporary science, that many major, often-lauded studies fail to replicate at scale (or at all), and examines key causes of these failures: hype, bias, negligence, and fraud. Ritchie goes into detail about how perverse incentives in our contemporary system feed each one of these problems. The examples he uses to illustrate his case are insightful, informative and unsettling.

Ritchie doesn't merely analyze the problem; he also proposes innovative solutions to the problems he studies, and highlights the promising work of many scientists and reformers. If nothing else, this book is a useful guide for how to spot dubious reporting on scientific issues and sceptically analyze outsized claims. For those entering the field, this book should serve as a primer for how aspiring scientists can be constructive players in an evolving system. I agree with Ritchie that while declining public faith in some scientific fields is troubling, and we can improve how we communicate science, in the long term this is a "physician, heal thyself" moment. I hope practicing and aspiring scientists read this essential book.
Amelie
17 reviews, 11 followers
August 28, 2023
Great introduction to a lot of the problems plaguing scientific research, especially in psychology. If you already have a background in open science/research (or, apparently, have attended the same master’s program as the author) it might not tell you anything new, but it’s still very coherent and well argued.
Ben
38 reviews
February 5, 2022
My first ever audiobook. Well written and convincing. A little repetitive, which is sort of the nature of the beast: when he introduces yet another study with incredible results, you obviously know it's about to be debunked.
Cam
143 reviews, 32 followers
December 12, 2021
Essential reading for graduate science researchers, although much of the material will hopefully already be familiar to them.

Ritchie writes clearly. He's likeable and scientifically and statistically literate, but doesn't take himself too seriously. He's a great science populariser even when he is denigrating science!

Ritchie helped kick off the well-publicised replication crisis in social science in 2012 when he attempted and failed to replicate a parapsychology paper. The original paper, by Daryl Bem, purported to show that we can study for a test after we have taken it and thereby improve our results.

Obvious nonsense, right? No surprises it failed to replicate.

The major problem, as the replicators noted, is that the methods weren't all that different from those used in many of the papers being published in social science.

Essentially, social science can't be trusted. Whether a study replicates doesn't even correlate with how many citations it has. Truly remarkable.

Ritchie does a nice job explaining concepts like p-values and statistical significance to a lay audience, along with common dodgy statistical practices such as p-hacking and HARKing (hypothesizing after the results are known). He also outlines how these issues are exacerbated by perverse incentives in academia, such as publish-or-perish and the need for results to be both statistically significant and sexy.

Ritchie also recounts some good narrative non-fiction around some of the most high-profile cases of fraud, such as Diederik Stapel (the Bernie Madoff of science) and Paolo Macchiarini (who claimed he was healing people with risky procedures, as opposed to killing them!).

Twitter user Alvaro De Menard is less optimistic than Ritchie. Drawing on a systematic deep dive into the replicability of social science papers, De Menard points out that this isn't a recent phenomenon, and argues that any proposed fix coming from within academia is unlikely to succeed.
