Truth and lies

Are all scientific articles fraud?

In his new book, Fraud in the Lab, journalist and former lab researcher Nicolas Chevassus-au-Louis explores why cases of scientific misconduct around the world are rising. In this extract, he highlights a systematic dishonesty at the heart of establishment science.

 

Is every scientific article a fraud? This question may seem puzzling to those outside the scientific community. After all, anyone who took a philosophy course in college is likely to think of laboratory work as eminently rational. The assumption is that a researcher faced with an enigma posed by nature formulates a hypothesis, then conceives an experiment to test its validity. The archetypal presentation of articles in the life sciences follows this fine intellectual form: after explaining why a particular question could be asked (introduction) and describing how he or she intends to proceed to answer it (materials and methods), the researcher describes the content of the experiments (results), then interprets them (discussion). 

 

A Good Story 

This is more or less the outline followed by millions of scientific articles published every year throughout the world. It has the virtue of being clear and solid in its logic. It appears transparent and free of any presuppositions. However, as every researcher knows, it is pure falsehood. In reality, nothing takes place the way it is described in a scientific article. The experiments were carried out in a far more disordered manner, in stages far less logical than those related in the article. Looked at that way, a scientific article is a kind of trick. In a radio talk broadcast by the BBC in 1963, the British scientist Peter Medawar, co-winner of the Nobel Prize in Physiology or Medicine in 1960, asked, “Is the scientific paper a fraud?” As was announced from the outset of the program, his answer was unhesitatingly positive. “The scientific paper in its orthodox form does embody a totally mistaken conception, even a travesty, of the nature of scientific thought.” 

To demonstrate, Medawar begins by giving a caustically lucid description of scientific articles in the 1960s, one that happens to remain accurate to this day: “First, there is a section called ‘introduction’ in which you merely describe the general field in which your scientific talents are going to be exercised, followed by a section called ‘previous work’ in which you concede, more or less graciously, that others have dimly groped towards the fundamental truths that you are now about to expound.” 

According to Medawar, the “methods” section is not problematic. However, he unleashes his delightfully witty eloquence on the “results” section: “[It] consists of a stream of factual information in which it is considered extremely bad form to discuss the significance of the results you are getting. You have to pretend firmly that your mind is, so to speak, a virgin receptacle, an empty vessel, for information which floods into it from the external world for no reason which you yourself have revealed.” 

Was Medawar a curmudgeon? An excessively suspicious mind, overly partial to epistemology? Let’s hear what another Nobel laureate in physiology or medicine (1965), the Frenchman François Jacob, has to say. The voice he adopts in his autobiography is more literary than Medawar’s, but no less evocative: 

Science is above all a world of ideas in motion. To write an account of research is to immobilize these ideas; to freeze them; it’s like describing a horse race from a snapshot. It also transforms the very nature of research; formalizes it. Writing substitutes a well-ordered train of concepts and experiments for a jumble of untidy efforts, of attempts born of a passion to understand. But also born of visions, dreams, unexpected connections, often childlike simplifications, and soundings directed at random, with no real idea of what they will turn up—in short, the disorder and agitation that animates a laboratory.

Following through with his assessment, Jacob comes to wonder whether the sacrosanct objectivity to which scientists claim to adhere might not be masking a permanent and seriously harmful reconstruction of the researcher’s work: 

Still, as the work progresses, it is tempting to try to sort out which parts are due to luck and which to inspiration. But for a piece of work to be accepted, for a new way of thinking to be adopted, you have to purge the research of any emotional or irrational dross. Remove from it any whiff of the personal, any human odor. Embark on the high road that leads from stuttering youth to blooming maturity. Replace the real order of events and discoveries by what would have been the logical order, the order that should have been followed had the conclusion been known from the start. There is something of a ritual in the presentation of scientific results. A little like writing the history of war based only on official staff reports.

 

The Dangers of Intuition 

Any scientific article must be considered a reconstruction, an account, a clear and precise narrative, a good story. But the story is often too good, too logical, too coherent. Of the four categories of scientific fraud identified by Charles Babbage, the most interesting is data cooking, because it is the most ambiguous. In a way, all researchers are cooks, given that they cannot write a scientific article without arranging their data to present it in the most convincing, appealing way. The history of science is full of examples of researchers embellishing their experimental results to make them conform to simple, logical, coherent theory. 

What could be simpler, for instance, than Gregor Mendel’s three laws on the inheritance of traits, which are among the rare laws found in biology? The life story of Mendel, the botanist monk of Brno, has often been told. High school students learn that Mendel crossbred smooth-seeded peas and wrinkle-seeded peas. In the first generation, all the peas were smooth-seeded. The wrinkled trait seemed to have disappeared. Yet it reappeared in the second generation, in exactly one-quarter of the peas, through the crossbreeding of first-generation plants. After reflecting on these experiments, Mendel formalized the three rules in Experiments in Plant Hybridization (1865). These were later qualified as laws and now bear his name. Largely ignored in his lifetime, Mendel’s work was rediscovered at the beginning of the twentieth century and is now considered the root of modern genetics. But this rediscovery was accompanied by a close rereading of his results. The British biologist and mathematician Ronald Fisher, after whom a famous statistical test is named, was one of Mendel’s most astute readers. In 1936 he calculated that Mendel had only seven chances in one hundred thousand of producing exactly one-quarter of wrinkle-seeded peas by crossbreeding generations. The 25–75 percent proportion is accurate, but given its probabilistic nature, it can only be observed in very large numbers of crossbreeds, far more than those described in Mendel’s dissertation, which only reports the use of ten plants, though these produced 5,475 smooth-seeded peas and 1,575 wrinkle-seeded peas. The obvious conclusion is that Mendel or one of his collaborators more or less consciously arranged the counts to conform to the rule that Mendel had probably intuited. We can only speculate, given that Mendel’s archives were not preserved. 

"The history of science is full of examples of researchers embellishing their experimental results to make them conform to simple, logical, coherent theory." 

Unfortunately, one’s intuition is not always correct. In the second half of the nineteenth century, the German biologist Ernst Haeckel was convinced that, according to his famous maxim, “ontogeny recapitulates phylogeny”—in other words, that over the course of its embryonic development, an animal passes through stages comparable to those of the previous species in its evolutionary lineage. In Anthropogenie oder Entwicklungsgeschichte des Menschen (1874), Haeckel published a plate of his drawings showing the three successive stages of the embryonic development of the fish, salamander, turtle, chicken, rabbit, pig, and human being. A single glance at the drawings reveals that the embryos are very similar at an early stage of development. As soon as the book was published, these illustrations met with serious criticism from some of Haeckel’s colleagues and rival embryologists. Yet it would take a full century and the comparison of Haeckel’s drawings with photographs of embryos of the same species for it to become clear that the former were far closer to works of art than to scientific observation. Today we know that ontogeny does not recapitulate phylogeny and that the highly talented artist Ernst Haeckel drew these plates of embryos to illustrate perfectly a theory to which he was deeply attached.

 

The Dangers of Conformism 

Another famous example of this propensity to fudge experimental results to make them more attractive and convincing comes from the American physicist Robert A. Millikan, celebrated for being the first to measure the elementary electric charge carried by the electron. Millikan’s experimental setup consisted of spraying tiny drops of ionized oil between the two plates of a charged capacitor, then measuring their velocity. Millikan observed that the value of the droplets’ charge was always a multiple of 1.592 × 10⁻¹⁹ coulomb, which was therefore the elementary electric charge. His work was recognized with the Nobel Prize in Physics in 1923. This story is enlightening for two reasons. 
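Millikan’s inference, that every droplet carries a whole-number multiple of one elementary charge, can be sketched numerically. The droplet charges below are invented for illustration (in units of 10⁻¹⁹ coulomb), constructed as near-multiples of 1.592; the grid search simply asks which spacing makes every charge close to an integer multiple:

```python
# hypothetical droplet charges in units of 1e-19 coulomb (illustrative values
# built as approximate multiples of 1.592; not Millikan's measurements)
charges = [3.18, 4.78, 6.37, 7.96, 9.55]

def misfit(e, charges):
    # total distance of each charge from its nearest integer multiple of e
    return sum(abs(q / e - round(q / e)) for q in charges)

# grid search over spacings in [1.000, 1.999] for the best fit to the data
best_e = min((k / 1000 for k in range(1000, 2000)), key=lambda e: misfit(e, charges))
print(best_e)  # 1.592
```

Note that any exact divisor of the true spacing would also fit, which is why the search is restricted to a plausible range; Millikan’s real analysis likewise depended on the droplets carrying only small multiples of the charge.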

The first is that Millikan appears to have excluded a certain number of his experimental results that were too divergent to allow him to state that he had measured the elementary electric charge within a margin of error of 0.5 percent. His publication is based on the analysis of the movement of 58 drops of oil, while his lab notebooks reveal that he studied 175. Could the 58 drops be a random sample of the results of an experiment carried out over five months? Hardly, given that nearly all of the 58 measurements reported in the publication were taken during experiments conducted over only two months. The real level of uncertainty, as indicated by the complete experiments, was four times greater. Millikan was not shy about filling his notebooks with highly subjective assessments of each experiment’s results (“Magnificent, definitely publish, splendid!” or, on the contrary, “Very low. Something is wrong.”). This suggests that he was not exclusively relying on the experiment’s verdict to determine the electric charge of the electron. 
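How much discarding “divergent” drops can flatter the apparent precision is easy to demonstrate with a toy simulation. Every number below is invented: this is not a reconstruction of Millikan’s notebooks, only an illustration of the statistical effect of keeping the 58 measurements closest to a preferred value out of 175.

```python
import random
import statistics

random.seed(42)
TRUE_E = 1.602       # "true" value, in units of 1e-19 coulomb
EXPECTED = 1.592     # value the experimenter believes in (Millikan's figure)

# 175 simulated measurements with purely illustrative Gaussian noise
measurements = [random.gauss(TRUE_E, 0.02) for _ in range(175)]

# keep only the 58 measurements closest to the expected value
kept = sorted(measurements, key=lambda m: abs(m - EXPECTED))[:58]

print("spread of all 175 drops:", statistics.stdev(measurements))
print("spread of the 58 kept  :", statistics.stdev(kept))
print("mean of the 58 kept    :", statistics.mean(kept))
```

The retained subset shows both a much smaller scatter, and hence a flattering margin of error, and a mean pulled toward the value the experimenter expected rather than the true one.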

The second is that we now know that the value Millikan obtained was rendered inaccurate by an erroneous value he used in his calculations to account for the viscosity of air slowing the drops’ movement. The exact value is 1.602 × 10⁻¹⁹ coulomb. But the most interesting part is how researchers arrived at this now well-established result. The physicist Richard Feynman has explained it in layman’s terms: 

If you plot [the measurements of the charge of the electron] as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bigger than that, and the next one’s a little bigger than that, until finally they settle down to a number which is higher. Why didn’t they discover that the new number was higher from the beginning? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off.

"How does one draw the line between what is tacitly accepted and what isn’t, between beautification and fraud?"

 

Technological Cooking 

Everyday fudging of experimental data in laboratories cannot exclusively be explained by researchers’ desire to present an intuited result in better-than-perfect form, as was the case with Mendel, or to distinguish themselves through the accuracy of their measurements, as with Millikan. It can also stem from the more or less unconscious need to confirm a result seen as established, especially if the person who initially discovered it is the recipient of a prestigious prize. Paradoxically, another factor leading to fraud is conformism, as we have seen in the case of Jan Hendrik Schön. All these (bad) reasons for fudging data are as old as science itself and still exist today. The difference is that technological progress has made it increasingly simple, and therefore increasingly tempting, to obtain results that are easy to embellish. 

In a fascinating investigation, the anthropologist Giulia Anichini reported on the way experimental data was turned into an article by a French neuroscience laboratory using magnetic resonance imaging (MRI). Her essay brings to light the extent to which “bricolage,” to borrow her term, is used by researchers to make their data more coherent than it actually is. Anichini makes clear that this bricolage, or patching up, does not amount to fraud, in that it affects not the nature of the data but only the way in which it is presented. But she also emphasizes that the dividing line between the two is not clear, since the bricolage that goes into adapting data “positions itself on the line between what is accepted and what is forbidden.” 

Naturally, the lab’s articles never mention bricolage. According to Anichini, “Any doubt about the right way to proceed, the inconsistent results, and the many tests applied to the images disappear, replaced by a linear report that only describes certain stages of the processes used. The facts are organized so that they provide a coherent picture; even if the data is not [coherent], and this has been observed, which implies a significant adaptation of the method to the results obtained.” 

Cell biology provides another excellent example of the new possibilities that technological progress offers for cooking data. In this field, images often serve as proof. Researchers present beautiful microscopic snapshots, unveiling the secrets of cellular architecture. Since digital photography replaced analog photography in laboratories in the 2000s, it has become extremely easy to tinker with images. To beautify them. Or falsify them. In the same way that the celebrities pictured in magazines never have wrinkles, biologists’ photos never seem to have the slightest flaw. 

When he was appointed editor of the Journal of Cell Biology, one of the most respected publications in the field, in 2002, the American Mike Rossner decided to use a specially designed software program to screen all the manuscripts he received for photo retouching. Rossner has since stated that over eleven years he observed that one-quarter of manuscripts submitted to the journal contained images that were in some way fudged, beautified, or manipulated. These acts did not constitute actual fraud: according to Rossner, only 1 percent of the articles were rejected because the manipulations might mislead the reader. However, Rossner did ask the authors concerned to submit authentic images rather than the beautiful shots that grace the covers of cell biology journals. 

Since 2011, the highly prestigious European Molecular Biology Organization has entrusted a layperson with screening the four journals it publishes: Jana Christopher, a former makeup artist at the English National Opera, casts an eye expert at detecting subterfuge over the images of every manuscript accepted by the journals’ scientific reviewers. One out of five proves to be beautified, one out of one hundred to such an extent that the study’s publication has to be canceled, despite the fact that it has been validated by the peer reviewers. Nature Cell Biology described the problem in an editorial detailing the measures taken to combat data beautification: “[The] most prominent problem is that scientists do not take the time to understand complex data-acquisition tools and occasionally seem to be duped by the ease of use of image-processing programs to manipulate data in a manner that amounts to misrepresentation. The intention is usually not to deceive but to make the story more striking by presenting clear-cut, selected or simplified data.” As the editorial’s title, “Beautification and Fraud,” clearly indicates, I am not alone in thinking that it is impossible to distinguish one from the other. In the continuum of data cooking, how does one draw the line between what is tacitly accepted and what isn’t, between beautification and fraud? 

Other specialized journals have since followed the examples set by the Journal of Cell Biology and the European Molecular Biology Organization. Most of them now rely on software programs designed to detect image retouching. Shortly after acquiring detection software, the editor in chief of the organic chemistry journal Organic Letters realized to his horror that in many of the manuscripts he received, images from spectral analysis (a method commonly used in organic chemistry) had been cleaned up to remove evidence of impurities. It appears that cell biology is not the only discipline affected by the data beautification made so easy by new digital technology. 

It is also clear, unfortunately, that software designed to detect data beautification is not an unbreachable defense against the temptation to tinker with digital images to make them more eloquent. In January 2012 the mysterious Juuichi Jigen uploaded a genuine user’s guide to the falsification of cell biology data through image retouching. Dealing with twenty-four publications in the best journals, all originating from the institute headed by Shigeaki Kato at the University of Tokyo, this video shows how easy it is to manipulate images allegedly representing experimental results. It highlights numerous manipulations carried out for a single article in Cell. The video had the laudable effect of purging the scientific literature of some falsified data. A few months after it was uploaded, Kato resigned from the University of Tokyo. Twenty-eight of his articles have since been retracted, several of which had been published in Nature Cell Biology, the very journal that six years earlier had proclaimed how carefully it screened submissions for image manipulation. 

 

Excerpt adapted from Fraud in the Lab: The High Stakes of Scientific Research by Nicolas Chevassus‐au‐Louis, translated by Nicholas Elliott, published by Harvard University Press.
Copyright © 2019 by the President and Fellows of Harvard College. Used by permission. All rights reserved.
