Problems of Complexity—The Space Shuttle Challenger

Abstract

This is the sixth of a set of twelve course notes written in 1993 and revised across 1994 and 1995 for Technology and Human Existence, a half-semester first-year option on the philosophy of technology.

Steering Complex Technological Systems—The Challenger Disaster

In the fifth set of notes I provided an example of how increasing technological complexity brings with it a distinct kind of system failure. As we saw, this kind of failure arises because the whole point of greater technological complexity is to create systems with a greater ability to control events and processes. For the kinds of processes and events involved in nuclear power generation or on the modern, electronic battlefield are too rapid and interconnected for ordinary human skills of management, so if the job is to be done at all, it must be done by some kind of computer system. Such systems, however, can at least at present do this job only by controlling the processes and events involved.1

I want now to provide another concrete example, one which highlights from a different angle the problem of anticipating eventualities which will affect the performance of large-scale technological systems. This example exposes the difficulty of such anticipation by showing how hard it is to determine in advance the way certain empirical facts will combine to produce certain results.

The example I want to consider is the launch of the American space shuttle Challenger on January 28th, 1986. I will be basing my account on a paper by the sociologist Trevor J. Pinch entitled “How Do We Treat Technical Uncertainty in Systems Failure? The Case of the Space Shuttle Challenger”.2 The Challenger exploded shortly after launch, killing all seven astronauts on board.3 This was a massive blow to the Americans, setting back their space programme by several years and sending shock waves throughout the country.4 The accident itself was a classic case of the kind of system failure to which complex, tightly interwoven systems are subject. For in the Challenger accident, several quite contingent and otherwise unrelated events were able to combine to initiate a chain of cause and effect which culminated in the tragedy. In this respect the Challenger accident was like the accident on Three Mile Island. For as Pinch points out, this latter accident was also a case where “a combination of seemingly unrelated events (such as faulty valves and indicators) conspired together to produce a major system accident …” (p.143, Pinch), namely, exposure of the reactor core and the explosion of a hydrogen bubble.

The details of the Challenger accident are as follows: the launch of the Challenger had been repeatedly delayed by hold-ups in the launch of another shuttle, Columbia, which finally took off on January 12th, after three launch postponements and four aborted launch attempts. The flight of the Columbia could not be cancelled in favour of the Challenger mission because the Columbia was to carry U.S. Congressman Bill Nelson into space, and Nelson was Chairperson of the committee which approves N.A.S.A.’s funding. Moreover, Nelson had brought in 37 busloads of supporters to watch him go up in the Columbia. Last but not least, the Challenger needed spare parts from the Columbia. So the Challenger mission had to be postponed until after the Columbia had returned to Earth. Yet the launch of the Challenger could not be delayed too long: in order to demonstrate how space travel was becoming an everyday affair, a schoolteacher by the name of Christa McAuliffe had been selected from thousands of applicants to fly into space aboard the Challenger, where she would broadcast a message in time for President Reagan’s State of the Union speech. N.A.S.A. tried to bring the Columbia home early, but bad weather intervened to lengthen the Columbia’s flight still further. This led to the Challenger’s launch being postponed twice more.

Finally, the date of the Challenger’s launch was set for Sunday, January 26th, a date which would permit George Bush, Snr., then Vice-President, to watch the launch on his way to the inauguration of the new President of Honduras. But Bush needed to know as soon as possible whether the launch would take place or not. So on the Saturday afternoon N.A.S.A. consulted the weather forecast for Sunday; it was bad, so the launch was postponed for a day and Bush’s visit was cancelled. Unfortunately, the forecast proved to be wrong; Sunday turned out to be perfect for a launch. But because Bush had needed an answer, N.A.S.A. had already gone ahead and postponed the flight instead of waiting until Sunday morning.

The next day, Monday, January 27th, the Challenger was tanked up and the astronauts strapped in. But unfortunately a bolt on the main hatch was found to have a burred thread. The bolt had to be replaced. Technicians called for a drill and were given a battery-powered Black and Decker. But the battery proved to be flat, so more batteries were sent for. These, too, turned out to be flat, except for one, which was still too weak to drill out the faulty bolt. Finally, safety regulations were contravened and an ordinary power drill was used. But by this time it was too late in the day. The launch was postponed until Tuesday, January 28th.

Now the weather forecast for Tuesday was for low temperatures, so the sprinkler systems were turned on overnight to help prevent ice forming at the bottom of the launch pad. But the weather turned out to be much colder than anticipated; all the water from the sprinklers in fact froze, making the problem of ice worse, not better. The presence of ice led to record low temperatures in the lower part of the right solid fuel rocket booster, which was shielded from the sun. These low temperatures apparently impaired the resilience of certain sealing devices called O-rings. Thus, when the Challenger was launched, these O-rings gave way, allowing hot gases from the solid fuel booster to escape. But while the Challenger was still on the launch pad, the formation of a temporary seal prevented the further escape of gases. In addition to this problem, ice from the launch gantry fell on the vehicle, causing unknown damage. Nonetheless, the Challenger took off. But thirty-seven seconds into her flight, the Challenger was hit by the most violent gust of wind ever experienced at a space shuttle launch. This caused the whole vehicle to vibrate, which in turn broke the temporary seal; fifty-eight seconds after launch, flaming gases began spraying like a flamethrower onto the metal strut connecting the booster to the external fuel tank. At sixty-four seconds the hydrogen tank within it was somehow ruptured. At seventy-two seconds the strut, which may have been weakened by falling ice, broke, allowing the booster to detach itself and, in so doing, rupture the oxygen tank. At this point, the rest is, as Pinch says, history.

This case illustrates very well the kind of complex, almost devious causal interactions which lead to unpredictable failures in complex technological systems. You have here causal links which are more or less direct and straightforward, such as the link between the O-rings giving way and the eventual detachment of the rocket booster. And you have causal links which are much more indirect and subtle, such as the link between George Bush’s desire to witness the launch and the occurrence of the launch on Tuesday, January 28th, rather than Sunday, January 26th. Finally, you have an example of how, through quite chance events, a safety system (the sprinkler system) acted in an unexpected way to increase rather than decrease the risk of system failure.

Now clearly accounts like the above are essential if further accidents of this kind are to be prevented. For by indicating what happened in the Challenger accident, they indicate what N.A.S.A. must do in order to prevent similar accidents in the future. But how does one know whether the version of events given in this or any other account is correct? The story I have just told of what happened to the Challenger implies that one causal factor in the accident was the pressure put on N.A.S.A. to get the Challenger up into space in time for Ronald Reagan’s State of the Union speech. But that there was any such pressure has been challenged, not least by the White House itself. Clearly, such social and political facts are amongst the hardest to identify as having played a causal role in accidents. The Rogers commission, which was appointed by the President to investigate the tragedy, did not delve much into such things, just as it did not look closely at the extent to which procedures for awarding contracts and the vested interests of large companies contributed to the accident. Other investigators, however, see such things as making an integral contribution to the causal chain which led to the tragedy.

You might think, though, that if such social and political facts are hard to pin down as causally relevant, at least the technical ones are not. Is it not relatively easy to determine whether, and to what extent, the various facts recorded by technical devices such as cameras, sensors, chart recorders, voice recorders and computer systems played a causal role in producing the accident? Unfortunately, it is not. Not even technical facts speak for themselves; they do not of themselves indicate how important or unimportant they were as causal factors. You cannot just read this off their faces. The significance of even the most technical of facts can be questioned and disputed by experts. The case of the Challenger in fact provides an example of this: one expert, Ali Abutaha, has given a quite different account of what caused the accident. He maintains that the accident was caused by a breach in the wall of the rocket booster, which may itself have resulted from damage to the booster caused by structural deflections during take-off. Like the other experts involved, Abutaha is competent and his evidence credible. So who do we side with and on what basis?5

So it is not at all easy to construct an indisputable account of what caused a given accident. Any account of what caused an accident to occur will be full of facts, theories and weightings of facts, and all of these can be called into question. Let’s assume that an accident X occurs in a complex system. Then it is quite possible that one expert investigator might argue that X was caused by events A, B and C, a second might argue that X was caused by A, B and D, while finally a third might argue that it was caused by E, F and G. And they might all argue their different claims on the basis of apparently good, scientific reasons. So who is right? Clearly, we need to work out who is right if we are to take action to prevent accidents like X occurring again.

The problem is that in any accident, or indeed any kind of causal process, the causes of events do not stand up and shout at you, “Hey, I helped to make all this happen.” Before you can even begin to identify facts like A, B and C as involved in causing X, you have to make a lot of background assumptions about how in general facts like these causally interrelate with one another. So if two experts make different assumptions in this regard, they will come up with different accounts of what happened. How different these general background assumptions can be is well illustrated by the accident on Three Mile Island: in the course of the whole disaster a certain valve was closed by an operator. Afterwards, some experts thought that this was an important operator error. But others thought that it did not make much difference at all. Indeed, at least one expert maintained not only that it had made no difference, but that it had in fact been a good thing to do.6 Clearly, what enabled these experts to come to such differing opinions was that they made different theoretical assumptions about what in general would happen if a valve of this type were closed in any situation sufficiently similar to the one which actually existed in the reactor at Three Mile Island. In other words, they made all sorts of very substantial assumptions about how such valves in general behave, about how other components of the system behave, about what the overall situation in the reactor was, and so on. The point here is a general one: in giving an account of what caused any accident, it will be necessary to make comparably strong assumptions about causal relations between facts.7

Pinch calls this problem of working out a definitive account of the causes of accidents the problem of interpretative flexibility. In general, scientific and technological facts do not speak for themselves; they do not of themselves point to their causal significance or relevance. The causal relevance of facts, and indeed just what the facts are, are things the scientist or technologist only sees if, in advance of empirical observation, he or she makes quite substantial theoretical assumptions about how facts causally interact with one another.8 The problem of interpretative flexibility is not a trivial one because, as we have seen, if we are to prevent an accident from recurring, we must first identify its causes. For pretty obviously, what action one recommends to prevent recurrence depends very much on just what one identifies as the causes of the accident. As Pinch says, if you see the Challenger accident as having occurred at least in part because N.A.S.A. had a policy of handing out juicy contracts to a limited number of tenderers, then you will recommend safeguarding fair and open competition for shuttle contracts as a prime safety measure. If, on the other hand, you really do believe, as apparently some people in America did,9 that the Challenger was the victim of a secret Russian gravity wave weapon, you will recommend building elaborate shielding and other such countermeasures.

Now as a matter of fact, the Challenger accident illustrates a particularly important variant of this problem of interpretative flexibility. If working out after the event what happened involves all these powerful assumptions, then so, too, will working out before the event the likelihood of such accidents. In particular, such powerful assumptions will be involved in working out before the event the likelihood of failure of those subsystems or components which after the event are determined to be the ones which failed and thus caused the accident. In other words, just as after the event the experts can differ quite radically and yet still rationally about what caused the accident, so, too, can experts before the event differ radically and yet quite rationally in their assessment of the safety and reliability of components, and thus in their assessment of the likelihood of an accident involving these components. The interesting thing about the Challenger accident is that ultimately both N.A.S.A. and its critics agreed that the main cause of the accident was the failure of the sealing devices known as O-rings. But N.A.S.A. maintained to the end that prior to the accident there was no good scientific reason for doubting the safety of these O-rings and that to this extent it had not behaved negligently in using them. N.A.S.A.’s critics, in particular the members of the Rogers commission which investigated the accident, did not accept this. They claimed to be able to show retrospectively both that these particular components were unreliable and unsafe, and that prior to the launch engineers from the O-rings’ manufacturer, the corporation Morton Thiokol, had given N.A.S.A. good evidence that these components were unsafe. So throughout the investigation one thing that was at issue was whose assessment of the O-rings was justified. Now it is very easy to be wise after the event, when it became clear that the warnings given by the engineers from Morton Thiokol were right. But if we abstract from the benefit of hindsight, if we return momentarily to a time before the accident, then things look rather different. Seen without the benefit of hindsight, both assessments turn out to be by no means irrational or crazy. Depending on what kind of background assumptions you make, you can come up with a plausible case both for the assessment given by N.A.S.A. and for the assessment given by the engineers from Morton Thiokol. From one point of view, the point of view adopted by N.A.S.A. officials, the O-rings “… are fail-safe seals with redundant safety features.” (p.149, Pinch). And from another point of view, the point of view of the Thiokol engineers, “they are dangerously flawed.” (p.149, Pinch) If this is right, then the whole notion of coming up with a definitive assessment of the reliability of systems, and thus of the likelihood of accidents, is subject to the same kind of interpretative flexibility as is the retrospective ascertainment of an accident’s causes.

Now there were thirteen members on the commission, and all but Rogers himself had technical expertise related to space travel in one form or another. It has been said that this expertise enabled the commission to expose the weakness of the arguments presented by N.A.S.A. in defence of its safety procedures and in particular in defence of its assessment of the O-rings. Thus, when Lawrence Mulloy, the N.A.S.A. project manager for the booster rocket,

… justified his decision to authorize the launch, despite doubts about the O-rings, by claiming that “tests and analyses suggested that the seals could suffer three times the observed erosion and still block gases at far higher pressure than exists in the rocket” … the Commission was able to discount Mulloy’s explanation, based on a technical evaluation of the relative quality of Mulloy’s testimony about O-rings compared with that of the Thiokol engineers.10

So the claim of the commission was that thanks to the expertise of its members, it had seen through the poor arguments given by N.A.S.A. for its assessment of the O-rings as reliable. But just how bad were the arguments presented by Mulloy, and just how good were the arguments presented by the Thiokol engineers?

The Thiokol engineers warned N.A.S.A. about the O-rings in a tele-conference held, apparently, on the night before the launch. These engineers had come to believe that the sealing effect of the O-rings would be impaired at low temperatures. As I understand it, the engineers’ employer, Morton Thiokol, wanted to keep N.A.S.A. in the dark about the engineers’ suspicions. But apparently one of these engineers, Roger Boisjoly, blew the whistle on his company, and this led to the above-mentioned tele-conference.11 During the conference, the Thiokol engineers tried to convince officials from N.A.S.A.’s Marshall Space Flight Center of their claims by demonstrating a correlation between temperature and O-ring performance. But they were forced to admit that they could demonstrate no straightforward dependence. Indeed, one of the worst cases of an O-ring failing to seal in gases from a rocket booster had occurred at high temperatures. Apparently, Mulloy and his boss at Marshall, William Lucas, were sticklers for accurate, quantitative data, and this the Thiokol engineers could not provide. In fact, the Thiokol engineers have reportedly said that their engineering “feelings” that the O-rings were a problem could not be quantified (p.153, Pinch).

According to Pinch, if one follows the intricacies of the debate between the Thiokol engineers and the N.A.S.A. officials, it is not all that easy to say that Mulloy and his colleagues were in any obvious sense wrong. Mulloy stated to the commission that, in his opinion at the time, the engineering data provided by Thiokol indicated that the O-rings would provide an effective seal. Furthermore, N.A.S.A. had produced its own mathematical model to fit the data it had on O-ring performance, and this model suggested a safety factor of three. Admittedly, one of the members of the Rogers commission, Richard Feynman, strongly attacked this model by demonstrating how uncertain the parameters used in it were. But Pinch at least implies (p.151, Pinch) that the model used by N.A.S.A., for all its undoubtedly uncertain parameters and assumptions, was no better or worse than the mathematical models used very frequently in science. Feynman said that we should not trust such mathematical models. But the fact of the matter is that science relies on such models all the time.
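
To see in miniature what Feynman was worried about, consider the following sketch. It is purely illustrative: the formula, the parameter values and the tolerances are all invented for the occasion and bear no relation to N.A.S.A.’s actual model. The sketch computes a “safety factor” as the ratio of the erosion an O-ring is assumed to tolerate to the erosion a simple fitted model predicts, and then shows how much that factor moves when the fitted parameters are varied within plausible error bounds.

```python
# A purely illustrative toy calculation, NOT N.A.S.A.'s actual model.
# All numbers below are invented for the sake of the example.

def predicted_erosion(pressure, a, b):
    """A hypothetical empirical fit: O-ring erosion (inches) as a power-law
    function of combustion pressure (psi), with parameters a and b assumed
    to have been fitted to a handful of past flights."""
    return a * (pressure ** b)

MAX_TOLERABLE_EROSION = 0.090   # hypothetical erosion depth at which the seal fails
PRESSURE = 600.0                # hypothetical operating pressure in psi

# Three parameter pairs: a "best fit" and two variants lying within
# plausible error bounds of that fit.
candidate_fits = [
    (9.5e-5, 0.90),   # best fit: predicts a comfortable margin
    (1.2e-4, 0.95),   # nudged within the error bars: margin shrinks
    (1.5e-4, 1.00),   # still defensible: margin disappears entirely
]

for a, b in candidate_fits:
    erosion = predicted_erosion(PRESSURE, a, b)
    safety_factor = MAX_TOLERABLE_EROSION / erosion
    print(f"a={a:.1e}, b={b:.2f}: predicted erosion {erosion:.3f} in, "
          f"safety factor {safety_factor:.1f}")
```

Run as written, the three fits give safety factors of roughly 3, 1.7 and 1.0. The point, in the spirit of Feynman’s criticism, is that a headline figure like “a safety factor of three” is only as solid as the fitted parameters behind it; scientifically defensible variations in those parameters can erase the margin altogether.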

Feynman, incidentally, made something of a name for himself during the commission’s hearings. He was a physicist who delighted in debunking crazy ideas. At some point during the hearings it occurred to him to conduct a little experiment with the material used to make the O-rings. Mulloy was on the stand, testifying black and blue that N.A.S.A. had assessed the O-rings adequately and showing all sorts of slides and charts to support his case. Feynman put a piece of the O-ring material into a clamp, dropped the lot into his glass of iced water and then, at a judiciously chosen moment, announced that he wanted to speak and exhibited the material. Apparently, the iced water had caused it to change in a manner consistent with what the engineers from Thiokol had been saying. Naturally, this was terribly embarrassing for Mulloy. As Pinch puts it, “Feynman became the instant darling of the media” while “N.A.S.A. was made to look as if they had foolishly overlooked an obvious piece of evidence.” (p.152, Pinch) Yet Pinch claims that it is doubtful whether Feynman’s experiment could really be taken seriously as a contribution to the scientific and technical debate. Basically, Feynman’s experiment was a stunt with great rhetorical impact but of little scientific worth.

So once you carve away rhetorical gestures such as Feynman’s experiment, it is not quite so obvious that N.A.S.A. had been unduly cavalier about the safety of the components used in the space shuttle. It no longer appears quite so obviously the case that on the one side you have the Thiokol engineers with very good arguments to show that the O-rings were defective and on the other a complacent N.A.S.A. which, because it was too eager to get its space shuttles into space, had overlooked or played down these very good arguments. If you look behind the public image of the accident, the one created in part by the newsworthy rhetorical gestures of people like Feynman and no doubt in part by the need to find a scapegoat, then things begin to appear rather more evenly balanced. And this is no isolated discovery. In recent years, more and more social scientists and philosophers have become intrigued by the rather disturbing discovery that in general a clear-cut distinction between adequate and inadequate assessments of risk is not all that easy to make. Some writers have responded to this by looking around for some deeper principle which would allow one to give greater weight to one of the sides in a dispute over safety and the behaviour one can expect from system components. One such principle is what one might call the epistemological primacy of the coal face: the testimony of those experts who are actually at the coal face, that is, who are actually involved in experimental work with the system components whose safety and anticipated future behaviour are at issue, is to be preferred. This principle of the primacy of the coal face has been strongly urged by the sociologist Harry Collins:

Certainty about natural phenomena tends to vary inversely with proximity to scientific work … proximity to experimental work … makes visible the skillful, inexplicable and therefore potentially fallible aspects of experimentation.12

And some writers have actually applied this principle to the Challenger incident itself. This has of course led them to side with the engineers from Morton Thiokol. Donald MacKenzie and his colleagues have taken this line; according to them, the very fact that the Thiokol engineers were the ones who had worked experimentally with the O-rings gives their apparently unquantifiable “feelings” of unease about these components a special significance and weight. Being closer to the experimental work than people like Mulloy and Lucas, they had a better idea of the uncertainties which the O-rings presented, whereas Mulloy and Lucas, because they had no “hands-on” experience with the O-rings, “… placed more confidence in their reliability than the Thiokol engineers.” (p.153, Pinch) MacKenzie and his colleagues conclude by saying that what is worrying about the Challenger accident is that the people who decided to launch the Challenger

… had a deep commitment to the technological institution involved [i.e., to N.A.S.A. and its space shuttle programme—C.B.C.], but were insulated from the uncertainties of those with direct responsibility for producing knowledge about the safety of the solid booster rocket.13

In short, because Mulloy and the other decision-makers at N.A.S.A. had never actually been at the coal face studying the behaviour of O-rings, they had never had to face the uncertainties which these components presented. This distance from the coal face led them to place not just a greater but an unjustifiable degree of confidence in the O-rings.

But other writers have proceeded differently. They have pointed out that designing and operating technological systems is almost never a matter of following formally prescribed rules. They claim that whenever one investigates system failures, or indeed technological systems operating normally, one finds the same situation:

Beneath a public image of rule-following behaviour, and the associated belief that accidents are due to deviation from these clear rules, experts are operating with far greater levels of ambiguity, needing to make uncertain judgments in less than clearly structured situations. … Practices do not follow rules; rather, rules follow evolving practices.14

In short, technologies operate in a much more ad hoc way than we care to realise. Systems and components rarely perform exactly as they were designed to perform. In implementing a new technology, one will thus have to put up with certain inadequacies, for otherwise one will find oneself glued to the drawing board, ceaselessly designing and redesigning systems and components. A technology will only ever get past the drawing-board stage if designers and decision-makers are prepared to accept an inevitable degree of imperfection and uncertainty, if they are prepared to use their good judgement to determine whether the unexpected defects and behaviour of components and systems are tolerable. All in all, designers and decision-makers in technological systems will not be able to proceed according to strict and inflexible rules. Instead, they find themselves forced continually to renegotiate their rules, standards and design specifications in the light of ongoing experience.

We might want to call this insight into the nature of the technological beast the principle of negotiability, that is, of using one’s good judgement to negotiate case by case between the formal design criteria and the actually observed behaviour of components subject to these criteria. Now the decision made by Mulloy and his colleagues at N.A.S.A. to launch the Challenger even though the Thiokol engineers were unhappy about the O-rings can be seen as a decision taken according to this principle of negotiability. People at N.A.S.A. already knew that during previous shuttle flights the O-rings had shown signs of the kind of failure which apparently led to the Challenger accident. On all these occasions, and indeed in tests undertaken on the ground, these O-ring abnormalities had not led to accident or failure. Thus, when they decided to send the Challenger up, their decision was guided by the consideration that although the O-rings had not performed as well as expected, and indeed demanded, in the original design specifications, this had had no serious consequences. They in effect assumed, on the basis of past experience with the operation of the O-rings, that the performance of the O-rings, while not meeting the original standards set for them, was still adequate, and that the original standards were presumably stricter than actually necessary. Their only alternative to this decision about the O-rings was to say that because during previous flights and tests the O-rings had shown signs of failure and had not performed in accordance with the original design specifications, the launch of the Challenger and all other shuttles should be halted until the sealing system of which the O-rings were a part had been redesigned. But the O-rings were in fact just one of several subsystems and components which were not performing at the standard set by the original design specifications (see p.154, Pinch). There were many other similarly ambiguous problems of decision and judgement. So to take this latter, more cautious approach would in effect have brought the space shuttle programme to a halt. The general point here is that absolute certainty and absolute conformity to design standards cannot be guaranteed. Thus, to insist that formal procedures, rules and specifications are not negotiable, to insist upon strict adherence to them, must bring the development, implementation and improvement of new technologies to a standstill.

Clearly, if you view the dispute between N.A.S.A. and the Thiokol engineers from this point of view, then Mulloy and his colleagues from N.A.S.A. do not look so bad. From this point of view, their assessment of the O-rings does not appear to be unjustifiable or illegitimate. Of course, their assessment was and remains a false one, but we must distinguish making a rational and legitimate assessment of risk on the basis of the evidence available at the time from making an assessment which is true.

So even when we resort to deeper principles like the “coal-face” principle mentioned earlier, or the principle of negotiability, our problem is still not resolved. For both of these principles seem to be perfectly correct, yet as the Challenger accident shows, they can lead us to contradictory evaluations of assessments of risk and of the behaviour one can expect from components and systems.

It looks, then, as though the business of anticipating risks and failures is chronically conditional and hypothetical. That is, it looks as if there is no such thing as an assessment of risk and of future behaviour which does not rest upon very rich theoretical assumptions about how facts interact causally with one another; and upon very rich methodological assumptions about the appropriate principles for evaluating scientific and technical claims. It is as if attached to each assessment of risk, each anticipation of future behaviour, there were a giant proviso: if A, B and C are done, then there is a 90% chance that X, Y and Z will happen - provided that such and such an account of how things like A, B and C behave is right, and provided that the principles according to which the evidence of colleagues Smith and Jones counts as the soundest are themselves correct. This is a pretty rich and substantial proviso; in fact, our discussion of the clash between N.A.S.A. and the Thiokol engineers about the O-rings shows it to be so rich that experts can easily dispute a given proviso, and thus a given risk assessment, in ways that are quite rational and reasonable. This is a disturbing result because it means that debates about risks and the future behaviour of systems and components can in principle go on forever without reaching a point where no more questions can be rationally raised. It appears that if such debates about safety and future consequences are ever to have an end, they must be broken off without reaching any absolutely definitive conclusion which settles all possible doubts.

Clearly, the result we have reached is very important, not just for people building and operating space shuttles, but also for people caught up in a world where new technologies are being shoved down their throats at increasing speed. I suspect that most people, and possibly even most experts, believe that at least some scientific and technical arguments can be so watertight that to call them into question would be to display scientific incompetence, lack of understanding or even downright irrationality.15 But the way scientists and technicians actually do argue and debate, and the kinds of argument they actually formulate, suggest that this conviction is wrong. And if this conviction is wrong, then so, too, is the conviction that we can conclusively and unconditionally anticipate how systems and components will behave in the future. This of course means that any technology which is built around the idea of total control is built around a fundamentally mistaken idea. For such technology presupposes that we can absolutely anticipate, at least in principle, or at least given sufficiently powerful computers, how systems and components of systems will behave. That it is not in fact possible to anticipate the future behaviour of systems in any absolute sense is shown by the fact that the more powerful and the more complex technology becomes, the more the experts tend to disagree about how such technology will behave and about what has actually happened when it goes wrong. Where the technological system in question is relatively simple and slow-moving, for example road transportation, experts can very often, if not always, agree about such things as how such and such a make of car will behave in certain kinds of accident, how alcohol affects driver performance, and so on. Moreover, they can often agree about what happened in a given accident. But this is not because at least here, in such simple cases, their claims, evidence and conclusions are somehow indisputable and unconditional. It is rather because in such simple cases they all tend to have the same relatively simple and straightforward theories about the behaviour of cars, drivers and the like. But in more complex systems, background assumptions both about how things interact causally and about what counts as a good scientific reason can themselves become bones of contention, and thus there is not as much unspoken agreement about basics. So in such systems the basically conditional, proviso-ridden character of the experts’ assessments of what will happen and of what did happen becomes particularly manifest.

What, then, is the general lesson to be learnt here? It is surely not a sceptical one, say, that there is no sense at all in which we can anticipate the behaviour of technological systems and components. In very many cases we clearly do manage to anticipate enough to be able to use these systems and components. And in very many cases we can and do reach consensus about the causes of accidents and breakdowns. Yet these anticipations and these agreements are relatively limited affairs which, although they can usefully guide our dealings with the technology in question, are quite prone to turn out wrong. As long as our technology is relatively small-scale and loosely interconnected, the occasional failure of our ability to anticipate, or our failure to agree, is not so tragic. But once our technology starts to become more complex and tightly interconnected, we can no longer afford breakdowns to the same extent. Thus, at this point, the limits to technical anticipation and scientific agreement start to become apparent. So the lesson to be learnt here is really not such a dramatic one: it is simply that we should regard simplicity and loose interconnectedness as central criteria governing our design and introduction of new technology. I do not think that today’s technologists regard these two things as central virtues at all, much less as criteria which should enter decisively into the design and introduction of new technology. Perhaps this is in part because they still believe in total control and anticipatability. Perhaps it is also because they fear that if these two things do become central criteria, then they will not get a chance to build space shuttles and the like. But this need not be so. Humanity could consistently adopt simplicity and loose interconnectedness as central technological virtues while retaining its space shuttles. It could still decide that space exploration is worth the development of the complex, tightly interconnected technology which this particular activity requires. The point is simply that the technology we use to live here on Earth should as far as possible avoid the underlying presuppositions of space shuttle technology. We should not attempt to control our entire human existence as we must attempt to control our space shuttle flights. The principles underlying space shuttle technology should remain as exotic as this technology itself.

Notes

  1. I personally think that computers can only be programmed to control, i.e., to anticipate in advance. If this is the case, then the replacement of humans by computers will always be the replacement of skills of management by procedures of control. To this extent, the kinds of problem raised by the incident on the Vincennes, in particular, the problem of anticipating all eventualities which will affect the performance of a system operating in a given context, will remain chronic problems of large-scale technological systems where it is no longer possible for there to be one or more persons who integrate data received from the system into a revisable overall picture of the system’s performance and environment. Incidentally, it should be noted that this problem is a well-known one; in artificial intelligence it is known as the frame problem.

  2. In Social Responses to Large Technical Systems: Control or Anticipation, edited by Todd R. La Porte, Kluwer Academic Publishers, 1991, pp.143-158.

  3. Hence the black humour which arose shortly after the tragedy: Question: What does N.A.S.A. stand for? Answer: “Need Another Seven Astronauts”.

  4. Pinch reports that the impact of the tragedy “… has been compared to the assassination of John F. Kennedy.” (p.144, Pinch)

  5. Indeed, according to Pinch, even those accounts which seem to have little credibility can give us problems (see p.148, Pinch). Apparently, one group of people in America blamed the Challenger accident, as well as other mishaps which struck the American space programme in 1986, on a new Russian gravity wave weapon. Are we to rule such accounts out of court just because they seem incredible? On what basis do we say they are incredible? The fact that according to most experts the Russians could not possibly have such a weapon? But what entitles us to accept the judgement of the majority of experts - especially when prior to the accident the majority of experts would have regarded the accident itself as highly unlikely? Recall how prior to Three Mile Island the experts would have had us believe that the widespread use of nuclear power was very safe, safer in fact than coal.

  6. This accident also illustrates another feature of complex technological systems: it becomes hard to identify even the facts themselves, much less their causal significance. In the course of the accident there was an explosion in the reactor. Apparently, this was caused by the reactor core’s becoming exposed, which allowed a reaction between zirconium and water to occur; this generated a hydrogen bubble, which eventually exploded. But depending on whose testimony you believe, it took the experts either hours or even days to conceive of this possibility. Clearly, this indicates that in systems as complex as this, humans will not be able to identify serious failures as they occur and rectify them before they have serious consequences.

  7. If in any accident there is agreement right from the start about what the causal story is, this will only be because the investigators basically agree on the relevant theory. Thus, in a motor accident where the car was seen by numerous witnesses to veer crazily down the road, where the driver had been seen in the pub an hour before, and where the driver had a blood alcohol level of 2.0, there presumably will be no disagreement - the driver was drunk at the wheel, which caused him to lose control of the car. But there is still a massive theoretical assumption involved here about the effects of alcohol on behaviour, about what kind of behaviour is required for safe driving, and so on. Things only look so clear-cut and obvious because we are all agreed on this background theory. The standard account could of course always be wrong. It may turn out that even though the driver was drunk and thus driving dangerously, the accident was in fact caused by, say, the driver’s suffering an epileptic fit at the wheel, an occurrence which presumably has nothing to do with his being drunk.

  8. Actually, what Pinch is getting at with the notion of interpretative flexibility is just a version of something which has been known for a long time in the philosophy of science, namely, that scientific and technological facts are not just out there waiting to be seen and assessed in their true, unambiguous causal significance. This old myth about science is perhaps what motivates the idea that the scientist, unlike the woolly-headed student of literature, historian, theologian, or philosopher, takes a truly objective stance because he or she lets the facts speak for themselves, i.e., lets the facts and the facts alone determine judgement.

  9. See p.148, Pinch.

  10. Rowland, R., “The Relationship between the Public and the Technical Spheres of Argument: A Case Study of the Challenger VII Disaster”, Central States Speech Journal 37 (1986), pp.136-146, p.143; quoted in Pinch, p.151.

  11. I am not completely sure of these details. Their accuracy or inaccuracy does not affect the general argument, however.

  12. Quoted in Pinch, p.153. Pinch himself quotes Collins from MacKenzie, D., Rudig, W., and Spinardi, G., “Social Research on Technology and the Policy Agenda: An Example from the Strategic Arms Race”, in Technology and Social Process, edited by B. Elliot, Edinburgh: Edinburgh University Press, 1988, pp.152-180.

  13. MacKenzie, D., Rudig, W., and Spinardi, G., “Social Research on Technology and the Policy Agenda: An Example from the Strategic Arms Race”, in Technology and Social Process, edited by B. Elliot, Edinburgh: Edinburgh University Press, 1988, pp.152-180, p.162.

  14. Wynne, Brian, “Unruly Technology: Practical Rules, Impractical Discourse and Public Understanding”, Social Studies of Science, Vol. 18 (1988), pp.147-168, p.153; quoted in Pinch, p.153.

  15. Perhaps one of the reasons why people are so casual in their attitude to new technologies is that they, like many experts themselves, believe that it is possible to reach absolutely definitive conclusions with regard to the likely risks and future behaviour of new systems and components. This attitude no doubt derives from the more general belief that, unlike the knowledge claims of philosophy, theology, politics and everyday life, the knowledge claims of science and technology are capable of being resolved without any room being left for rational doubt.