Physicist Sabine Hossenfelder and philosopher Philip Goff recently argued over whether electrons exist. At the core of the argument was whether scientific theories are true, or whether they are something like useful fictions, as well as the meta-question of whether this argument is worth having in the first place. Philosopher of science Cat Gillen offers a way out and through.
As if times weren’t uncertain enough, the reality or unreality of electrons was recently thrown into question during an X (aka Twitter) debate between philosopher Philip Goff and physicist Sabine Hossenfelder. Although this was the question that left Twitter spinning (best to leave the concept of spin for now…), the real crux of the discussion was whether science tells us anything about how the world really is, or whether it is just a tool for making predictions. Moreover, are such philosophical concerns of any use to scientists? To answer these questions, we must delve into the intriguing scientific realism debate.
The march of progress in science belies an awkward feature of scientific development. When combing through the history of science, we find that it is scattered with previously highly successful, yet ultimately disproven, scientific theories. For example, Bohr’s atomic model, with the electron physically circling the nucleus in fixed orbits, was able to explain and even predict the emission spectrum of hydrogen to astonishing accuracy. Such was the success of Bohr’s model that Einstein himself proclaimed: “This is a tremendous result. The theory of Bohr must then be right.” Fast forward a hundred years to our fuzzy quantum atomic model and… sorry Einstein, you were wrong!
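For the numerically curious, the predictive success in question can be sketched in a few lines. Bohr’s model reproduces the Rydberg formula for hydrogen’s emission lines; the constant below is the standard Rydberg value for an infinitely heavy nucleus (the reduced-mass correction Bohr also derived shifts the lines by only about 0.03%).

```python
# Bohr-model prediction of hydrogen emission wavelengths via the
# Rydberg formula: 1/lambda = R * (1/n_lower^2 - 1/n_upper^2).
RYDBERG = 1.0973731568e7  # Rydberg constant (infinite nuclear mass), in m^-1

def emission_wavelength_nm(n_upper: int, n_lower: int) -> float:
    """Wavelength (nm) of the photon emitted when an electron drops
    from orbit n_upper to orbit n_lower in Bohr's model."""
    inv_wavelength = RYDBERG * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_wavelength

# The Balmer series (transitions down to n=2) gives the visible lines
# Bohr's model reproduced so accurately; n=3 -> 2 is the red H-alpha line.
for n in range(3, 7):
    print(f"n={n} -> 2: {emission_wavelength_nm(n, 2):.1f} nm")
```

Running this recovers the familiar Balmer wavelengths (roughly 656, 486, 434, and 410 nm) to within a fraction of a percent of measurement, which is exactly the kind of success that made the model so convincing at the time.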
Bohr’s atomic model is not our only successful yet false scientific relic. Philosopher of science Larry Laudan produced his notorious list of past, bafflingly successful scientific theories, including Fresnel’s luminiferous ether theory of light (where light propagates in a medium called the ether), the caloric theory of heat (where heat is a fluid that flows from hot bodies to cold bodies), and the phlogiston theory of combustion (where combustible bodies contain a fire-like element).
So it seems like science can make fantastically correct predictions from utterly incorrect theories. And herein lies the kicker: if our best and most successful scientific theories of the past have ultimately turned out to be false, how can we have any confidence that our current, most successful scientific theories will not face the same fate? What justification do we have that the posits of general relativity or of quantum mechanics are true representations of the way the world is? The success of these theories cannot ensure their correctness, if the history of science has anything to say about it.
This line of reasoning, known as the pessimistic induction, is the central argument in favour of scientific antirealism: the belief that we should not ascribe truth to the posits of scientific theories merely because they garner predictive successes. And there is good reason to be moved by this argument, lest you commit the fatal error of holding contradictory beliefs in positing, say, that electrons both physically orbit the nucleus (à la Bohr) and inhabit a probability cloud (in line with the modern quantum atomic model). The antirealist rejects the idea that our theories map onto the real world in any meaningful way, asserting instead that science is a tool for making predictions; it was never meant to provide an accurate picture of the world.
So the antirealist position seems justified in the face of past theories and maintains a sense of modesty about our current achievements in science. But it feels as if we are missing something important. If all our theories, past and present, are simply incorrect, what is the driving force of their successes? Why is it that we can use quantum mechanics to predict what is possible in a cloud chamber, or relativity to predict the discrepancy between clocks on Earth and clocks aboard GPS satellites? Hence we bump up against the realist’s rebuttal: if we cannot attribute the success of our scientific theories to their having latched onto the true workings of the universe, then we must attribute their success to miracles. We just continually luck out in landing on highly successful nonsense. This argument, advanced by Hilary Putnam and known as the no miracles argument, has huge intuitive appeal.
A middle ground can be found in adopting a more nuanced form of realism that is able to explain the success of scientific theories with at least some element of truth, whilst simultaneously allowing some parts of the theory to be false. This selective realism separates the working parts of a theory (those parts actually responsible for deriving its correct predictions) from the idle parts (superfluous content that plays no role in any predictions). These working parts should be expected to carry through upon theory change, and it is to these working parts alone that the realist can ascribe truth.
Similarly, structural realism is a flavour of realism that identifies the working parts of a theory with its structure, often taken to be its mathematical content. The continuity between the mathematics of Fresnel’s ether theory of light and Maxwell’s equations makes a solid case for a structural realist account of the success of both otherwise contrasting theories. Hence the structural realist believes that it is the underlying structure of a theory that drives its predictive success, and thus it is the structure alone that we can claim to have correctly identified. The semantic stories we overlay on the structure (e.g. light is oscillations in an ether, or electrons literally circle around a nucleus) are just that, stories.
The debate over which flavour of realism, if any, most accurately and satisfactorily explains the simultaneous success and falsity in the history of science remains unsettled. But is such speculation mere philosophical folly? Should scientists concern themselves with questioning whether the theories they work with day-in, day-out, are actually true, or whether they just work? And more generally: does the realism vs antirealism debate have anything practical to offer scientists?
The answers to these questions depend somewhat on which side (or part) of the fence you fall on within the realism debate. A strict antirealist who believes that science is just a tool would likely say that the debate offers little pragmatic use. Science is about developing ever more successful theories, and asking about their truth is to ask the wrong question.
However, such a view may be missing a real opportunity for the advancement of science. If one of the middle-ground positions such as selective or structural realism is adopted, then realism may actually be able to make predictions about future scientific theories currently beyond our reach. If we are convinced that, for example, certain structural components of past theories are the parts responsible for their successes, and that it is these structural parts that have latched on to the true composition of the world, then we should expect these same structural features (and nothing more) to be maintained through any future theory change.
To run through an example of such a prediction, let us take the electron. The standard model of particle physics currently tells us that there is an object called the electron that has properties such as mass, charge, and spin. Indeed, these features are considered fundamental, and it is thus tempting to consider them true posits. However, looking back at the history of successful yet false scientific models, we find that a realist commitment to spin just might be undermined by the success of Sommerfeld’s atomic model.
Sommerfeld’s atomic model of 1916 was able to predict the fine structure of hydrogen (the bands of light emitted from a hydrogen atom are actually made up of multiple, finely spaced bands) to such accuracy that, in a letter to Sommerfeld, Einstein wrote: “Your investigation of the spectra belongs among my most beautiful experiences in physics. Only through it do Bohr’s ideas become completely convincing.” Sommerfeld was able to derive the fine structure by adding elliptical orbits to Bohr’s model and making a relativistic correction on these (leading to orbital precession). The result was perfect and is still used today. The awkward thing, however, is that today we know the fine structure is actually caused by electron spin – a property not even discovered until a decade after Sommerfeld’s model! This bizarre result has been called “a cosmic joke at the expense of serious minded physicists” by theoretical physicist Lawrence Biedenharn, and “perhaps the most remarkable numerical coincidence in the history of physics” by Ralph Kronig, co-discoverer of spin.
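To put a rough number on the coincidence: to first order in the fine-structure constant, Sommerfeld’s energy-level formula reads E(n, k) = E_n[1 + (α²/n²)(n/k − 3/4)], where k labels the elliptical orbit; the Dirac equation, built on spin, later reproduced the same formula with k playing the role of j + 1/2. The sketch below uses standard CODATA constant values and computes the predicted splitting of hydrogen’s n = 2 level.

```python
# Sommerfeld's 1916 fine-structure formula (first order in alpha):
#   E(n, k) = E_n * [1 + (alpha^2 / n^2) * (n/k - 3/4)],  k = 1..n
# Dirac's spin-based result matches it exactly with k <-> j + 1/2 --
# the "cosmic joke": same numbers from an entirely different picture.
ALPHA = 7.2973525693e-3   # fine-structure constant (CODATA)
E1_EV = -13.605693        # hydrogen ground-state energy, eV

def sommerfeld_energy_ev(n: int, k: int) -> float:
    """Energy (eV) of the orbit labelled (n, k) in Sommerfeld's model."""
    e_n = E1_EV / n**2
    return e_n * (1.0 + (ALPHA**2 / n**2) * (n / k - 0.75))

# Splitting of the n=2 level between the elliptical (k=1) and
# circular (k=2) orbits -- the fine structure of the H-alpha line:
split_ev = sommerfeld_energy_ev(2, 2) - sommerfeld_energy_ev(2, 1)
print(f"n=2 fine-structure splitting: {split_ev * 1e6:.1f} micro-eV")
```

The splitting comes out at roughly 45 micro-electronvolts, in line with the measured fine structure of hydrogen’s n = 2 level, despite the model attributing it to precessing orbits rather than spin.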
The structural realist explanation of this cosmic joke is that both Sommerfeld’s model and current physics landed on the same correct structure, but dressed that structure up differently. Sommerfeld had his orbital precession; today we have spin. The two play the same function and have latched on to the same mathematical structure, but the semantic details have manifested very differently. As such, the structural realist might predict that spin, in its present form, may not survive future theory change, though some feature that functions as spin functions should.
The debate over whether our best scientific theories actually map onto the real world rages on. And if you believe that they do not, and that science is just a tool, then that might just be the end of the discussion for you. But if you subscribe to some kind of realism, then there is real utility in combing through the history of science to identify the thread of truth that has been maintained throughout theory change and which just might be maintained across our inevitable next step forward in science.
You can listen to Cat discuss this topic on the Mind Chat podcast here: https://www.youtube.com/watch?v=LVX7IMW2npw