Problems of Complexity—The U.S.S. Vincennes

Abstract

This is the fifth of a set of twelve course notes written in 1993 and revised across 1994 and 1995 for Technology and Human Existence, a half-semester first-year option on the philosophy of technology.

Steering Complex Technological Systems—The U.S.S. Vincennes and Iran Air Flight 655

In previous notes I have emphasised the following crucial features of modern technology:

  1. Modern technology comes in the form of systems or techniques; they are complex processes involving humans, machines, administrative procedures and methods, all organised around attaining some goal;

  2. A modern technological system can only exist and function as part of a web of other technological systems which support and maintain this system;1

  3. The more complex and interconnected individual technological systems and the whole web of systems become, the harder it becomes for humans to operate individual systems and to see where the whole web of systems is headed.

Now in these notes I want to provide concrete empirical evidence for this latter claim. In particular, I want to consider an example which shows how, under conditions of great technological complexity and interconnectedness, it becomes hard for humans to get an overview of how the system to which they belong is functioning at any one moment. In doing so, the example I have in mind shows how hard it is for humans to identify when things go wrong in highly complex technological systems and to intervene in time to correct errors and prevent their spread and amplification throughout the whole system. The example thus explains why the more complex technological systems become, the more elaborate their fail-safe mechanisms must be, and the smaller the role left for human beings as system components whose job it is to use data provided by the system to construct and continually re-assess a total picture of how the whole system is operating and on this basis to decide what actions currently need to be undertaken.

The example I want to discuss is the shooting down of an Iran Air passenger aircraft by the U.S.S. Vincennes in the Persian Gulf on July 3rd, 1988. I am going to base what I say here on the excellent analysis of this incident given by Gene Rochlin in his paper “Iran Air Flight 655 and the U.S.S. Vincennes—Complex, Large-Scale Military Systems and the Failure of Control”.2 This paper not only illustrates the three claims made above; it also brings out something related to the third claim, the problem of control: how in large-scale systems people’s perceptions and experience can actually become distorted, leading them to make what in this case turned out to be a fatal error of judgement.

The U.S.S. Vincennes was the latest thing in U.S. naval technology. She was a fast, lightly armoured cruiser designed to provide air defence to an aircraft carrier battle group. To this end, she had on board something called the Aegis fire-control system, a very complex system of computer hardware which, according to Rochlin, was “capable of projecting a visible image of an air battle of many hundred square miles, tracking and distinguishing friendly and potentially hostile aircraft at ranges of tens of miles while engaging a variety of potential targets ranging from high-flying reconnaissance aircraft to high-speed cruise missiles.” (p.99, Rochlin) At the time of this incident, the war between Iraq and Iran was raging, and, as I remember, the Iranians had threatened to attack oil tankers sailing up and down the Persian Gulf. The U.S., in order to protect Western oil supplies, sent in warships to guard the tankers. The Vincennes was one of these ships on duty in the Persian Gulf.

At the time of the incident, the Vincennes was making a sweep of the Strait of Hormuz, the narrow passage which leads out of the Gulf into the Indian Ocean. This was a mission which required more traditional naval skills such as ship-handling and gunnery; the hi-tech on the Vincennes was really not appropriate. In fact, the confined waters of the Persian Gulf made “this billion dollar bundle of sophisticated and advanced technology” rather vulnerable; Rochlin says she was not much more capable of defending herself against Iranian mines and speedboats than an ordinary destroyer (see p.99, Rochlin). Furthermore, the previous year another American ship, the U.S.S. Stark, had been hit by a missile fired by an Iraqi military plane, so all warning and detection systems on the ship were on full alert. Two other American ships were in the area, the frigates U.S.S. Elmer Montgomery and the U.S.S. Sides. Finally, there are some important background facts about the Iranians: Apparently, their civil and military aircraft followed the familiar practice of having so-called aircraft identification transponders on board. These are devices which emit a certain pre-arranged signal when the aircraft is probed by radar; different signals are arranged for different nationalities and for different aircraft types, i.e., civil vs. military. If the probing radar system is equipped to receive this signal, it can determine the nationality and type of the aircraft it has probed. Iranian civil aircraft had been set to emit a Mode III signal, Iranian military aircraft, a Mode II signal.
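To make the transponder arrangement concrete, here is a minimal sketch, in Python, of the kind of classification rule just described. The names and the exact decision rule are my own assumptions for the purposes of illustration, not the actual logic of the Aegis system or of any real interrogator; the point is simply that the classification of a contact hangs entirely on which mode signal the probing system believes it has received.

```python
from dataclasses import dataclass
from typing import Optional

# Mode conventions as described above: Iranian civil aircraft had been set to
# emit Mode III, Iranian military aircraft Mode II.
CIVIL_MODE = "III"
MILITARY_MODE = "II"

@dataclass
class TransponderReply:
    mode: Optional[str]   # "III", "II", or None if no reply was picked up

def classify_contact(reply: TransponderReply) -> str:
    """Classify a radar contact purely on the basis of its transponder reply."""
    if reply.mode == CIVIL_MODE:
        return "civil aircraft"
    if reply.mode == MILITARY_MODE:
        return "military aircraft (potentially hostile)"
    return "unidentified"

print(classify_contact(TransponderReply(mode="III")))   # civil aircraft
print(classify_contact(TransponderReply(mode="II")))    # military aircraft (potentially hostile)
print(classify_contact(TransponderReply(mode=None)))    # unidentified
```

As the rest of the story will show, everything turns on whether the mode the probing system believes it has received matches the mode the aircraft is actually emitting.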

On the morning of July 3rd, the Elmer Montgomery was attacked by Iranian speed boats armed with machine guns and rockets, so the Vincennes went to help her. At 10.13 am, the Vincennes got involved in a surface gun battle with the speed boats. At 10.17 am, Iran Air Flight 655 first appeared on the Vincennes’ radar. The Vincennes began issuing warnings to the approaching plane on the military air distress frequency at 10.19 am, and then on the civilian air distress frequency at 10.20 am. The Vincennes got no response from the plane, nor did the plane alter its course. At just this time, the forward gun of the Vincennes got blocked by a shell which misfired, so the Vincennes made a rapid turn to bring its rear gun to bear on the Iranian boats. This sudden turn—30 degrees at 30 knots—sent loose equipment flying throughout the ship, including in the so-called Combat Information Centre or CIC, which houses all the fancy equipment. This apparently sent the CIC into a state of confusion. From this point on, i.e., approximately 10.20 am, the Vincennes, which was still fighting the Iranian speedboats, mistakenly identified the Iranian passenger plane as possibly an Iranian F-14 fighter, mistakenly reported getting Mode II rather than Mode III signals from the aircraft and mistakenly reported the aircraft as descending as it neared the ship when in fact the aircraft was still climbing, having just taken off. The Vincennes then reported back to base that a potentially hostile aircraft was approaching, and requested permission to attack. Permission was granted; the Captain waited another minute or so, during which time the still unidentified aircraft had closed to fifteen miles and was still being reported as descending toward the ship. At about 10.24 am the Vincennes fired two missiles which, a few seconds later, hit the Iranian passenger plane while it was still climbing and still well within its assigned air corridor. Everyone on board was killed.3 Iran Air Flight 655 had been in the air for a total of seven minutes. The crew of the Vincennes only learnt later that day that they had not shot down an Iranian F-14 but a passenger aircraft.

This whole incident just seems like an unfortunate mistake of the kind which inevitably happens in a tense combat situation. But, says Rochlin, this is to see things too superficially. He wants to argue that “what happened over the Persian Gulf that day was illustrative of a whole new class of failures to which military large technical systems are increasingly susceptible … .” (p.102) As you see, Rochlin wants to talk only about large scale technological systems in the military. But the question is whether his conclusions will not also apply to large scale technological systems in general. In this regard, we should bear in mind the kind of large scale system needed to regulate the running of a nuclear power plant. This is surely a system of comparable complexity. And as you may know from the reports on the accident at Three Mile Island, things got out of hand for similar reasons: all sorts of contingent things happened, almost silly, stupid things, which in their cumulative effect disoriented and misled the people operating the system.

So let us look at the scene aboard the Vincennes more closely. We know quite a lot about what happened that day because the chain of events on the Vincennes was in fact recorded: the ship carried equipment which logged all her incoming and outgoing signals. The interesting thing is that the recorded data tell a relatively simple and unambiguous story, namely, that Iran Air Flight 655 took off at 10.17 am, emitting identification signal Mode III and climbing steadily the whole time. This story from the Vincennes’ own data tapes is confirmed by the data and testimony taken from the U.S.S. Sides, which had also been monitoring the flight but which, seconds before the Vincennes fired its missiles, had identified the aircraft as non-hostile and turned back to deal with the speed boats.

But the story told by the people who received and worked with this data, the people inside the Combat Information Centre, is quite different. Right from the first contact, various personnel began to report a Mode II identification signal of the kind associated with Iranian F-14 planes. Although the data recorders on the Vincennes all consistently reported a Mode III signal, the people on the Vincennes continued to misreport it as Mode II. As the aircraft drew nearer, they began to broadcast more and more urgent messages to it, at first on both civil and military frequencies, but then, as the idea became fixed in their minds that the plane was an Iranian F-14, exclusively on the military frequency. And they began to address the approaching aircraft as “unidentified Iranian F-14”. Rochlin says, “A quick thumb-through a listing of commercial flights missed the clear listing for Flight 655, although it was on course and nearly on time.” (p.106). And in the heat of the moment—apparently it was as the Vincennes made her sharp turn—a warning about possible civil aircraft was indeed acknowledged by the commanding officer, Captain Rogers. Unfortunately, because he had his hands full with the surface battle, he left dealing with this warning to the Anti-Air Warfare Commander. This latter, however, was new in the job and perceived to be a weak leader; actual leadership in the CIC fell upon the junior Tactical Information Coordinator, who by now was almost shouting about the immediacy and seriousness of the air threat. The warning was thus not dealt with at all. Even so, Captain Rogers waited a while and challenged the aircraft several more times. But eventually, on the basis of the information he was receiving from the Combat Information Centre, he decided that the threat to the ship was too serious to ignore. Under pressure to avoid the fate of the U.S.S. Stark, he authorised the firing.

We have, then, in this whole episode, three different versions of events: firstly, the story as recollected and told by the people in the CIC; secondly, the story as recollected and told by the people on board the other ship, the U.S.S. Sides; and thirdly, the story told by the Vincennes’ own instruments. The story told by the crew of the Sides coincided with the story told by the Vincennes’ instruments: they had picked up the aircraft on their radar at about the same time as the Vincennes, tracked it for roughly the same period, identified it as harmless and then, prior to the Vincennes’ attack, turned their attention away from it. The Sides was a relatively low-tech ship, compared to the Vincennes. Rochlin says, “In precisely the same circumstances, the high-technology command and control system ‘failed’ to provide the means for correct identification, while the low-technology one did not.” (p.107, Rochlin)

Now the naval hearing which investigated the incident exonerated Captain Rogers, the overall commander of the Vincennes. In their opinion, he had acted perfectly correctly on the basis of information he had received, and in fact Captain Rogers was later awarded a medal for his performance on the day. The problem was that the information given to him was faulty because by the time the missiles were fired, the entire personnel in the CIC were convinced that they were under attack, even though all the data they had from their instruments clearly showed that this conviction was wrong. The hearing attributed this collective misreading of data to two relatively junior officers, the Tactical Information Co-ordinator and the Identification Supervisor. These two officers had become convinced that Iran Air Flight 655 was an F-14 after the latter, the IDS, had reported receiving a momentary Mode II identification signal. The report of the Naval hearing has this to say about what happened then:

After this report of the Mode II, TIC (that is, the Tactical Information Co-ordinator—B.C.) appears to have distorted data flow in an unconscious attempt to make available evidence fit a pre-conceived scenario (“scenario fulfillment”). (quoted in Rochlin, p.109)

In other words, the Mode II report from the IDS convinced the TIC that the ship was facing a certain situation, namely, potential attack by a possibly missile-bearing aircraft, which belonged to one of the many battle scenarios for which the whole crew in the CIC had been trained. Once the TIC became convinced of this, the idea became fixed in his mind, even though the whole range of electronic instruments was clearly and evidently saying the contrary. It was just unfortunate that the TIC’s superior, the Anti-Air Warfare Commander, was perceived by all in the CIC to be weak, because this meant that effective command passed to the TIC, now totally convinced that the ship was under attack. The TIC’s reports were heard by all, since he kept reporting the approach of the hostile F-14; he was practically shouting when the missiles were launched, and by this time the whole CIC was just as convinced as he that they were being attacked. Yet all the time the instruments were clearly and evidently contradicting this now collective conviction. Perhaps we can understand a little better now why the Navy gave Captain Rogers a medal; his only source of information, namely, the CIC, was constantly and repeatedly telling him that a hostile plane was approaching. Under the circumstances, it is remarkable that he delayed firing as long as he did.

So we have here an extraordinary failure of the decision making process in the CIC. Starting with the TIC, literally everyone down there in the CIC began interpreting the evidence of their own instruments incorrectly. It would be easy to conclude that the whole incident was just an unfortunate combination of human error and contingent circumstances. This appears to be what the naval hearing concluded; their final verdict was that the tragedy was caused by “stress, task-fixation, and unconscious distortion of data”.4 (p.108, Rochlin) The final opinion of the Navy was that serious mistakes had been made that day on the Vincennes but there was no culpable conduct. The event was just a regrettable accident.

That the Navy should ultimately come to this conclusion was in fact inevitable. For right from the start the Navy had assumed that the cause of the incident could only be either human error or a physical malfunction in the system. Since the data tapes proved that the system itself had at no point malfunctioned, the only question remaining was whether the human errors were culpable, i.e., ones deserving of censure or punishment. And eventually the Navy came to the opinion that these errors were not culpable.5 Thus, the Navy’s final conclusion inevitably was that the whole incident was indeed just a regrettable accident.

But Rochlin says, rightly I think, that it is wrong to regard the whole incident as just an unfortunate combination of human error and contingent circumstances. To do this is to ignore the significance of the fact that the very sophistication and complexity of the equipment on the Vincennes itself contributed to the tragedy. This significance is brought out all the more by the fact that also caught up in the whole affair, alongside the Vincennes, was a relatively low-tech ship, the U.S.S. Sides, which was able to identify the aircraft as non-hostile under exactly the same conditions of stress—conditions, incidentally, which were in no obvious way worse than what one would expect in a battle. On the U.S.S. Sides there was presumably no task-fixation and certainly no distortion of data. These facts suggest that the role played by the equipment on the Vincennes should not be ignored; they point to the limitations of such complex technological systems in battle situations, where humans must respond quickly and adroitly to very rapid, almost kaleidoscopic changes in the course of events.

So what was the role played by the complex system on the Vincennes? How and why did it lead to a task-fixation and the unconscious attempt to fit incoming data into a pre-conceived scenario which was actually false? In the days of sailing ships, command of a naval ship was relatively simple. Things moved slowly, so threats usually emerged at a reasonable pace, within the field of human vision, and were confined to the surface of the water. But from World War I onwards, when submarines and aeroplanes were introduced, this threat environment became increasingly large, complex and multidimensional. To quote Rochlin, “A modern fighting ship fights in a three-dimensional environment that extends to hundreds of miles.” It faces “threats that may develop with great rapidity, and with a demand to interpret and integrate many disparate sources of data and information.” (pp.115-116, Rochlin) Because the threats which a modern ship faces are much more extensive and wide-ranging, modern ships themselves must be much more complex technologically. They must have all sorts of different kinds of instruments for detecting the unfolding of possible attacks at long ranges, high speeds and in diverse mediums, e.g., on the surface, under water and in the air. All the different kinds of information which these different instruments deliver must be somehow quickly and accurately put together to yield a picture of just what is happening and where. Now because this information is so diverse and so complex, and because it must be dealt with so quickly, the actual task of putting it together into an overall picture which can be used to determine appropriate responses can no longer be carried out by the commanding officer. This latter is concerned with the overall well-being and operation of the ship; moreover, he must coordinate his ship with others and maintain communication with higher command. So on modern fighting ships, at least in the U.S. Navy, this job of collating, synthesising and interpreting diverse streams of information is delegated to another officer, the Tactical Action Officer. As I understand it, the TAO is not only responsible for creating such an overall picture of what is going on within the ship’s environment. He is also responsible for determining appropriate defensive action. That is, he decides what needs to be done in response to what he sees in the welter of information he is dealing with; the only role played by the TAO’s superior officers is to say “yea” or “nay” to the responses planned by the TAO.

As you can imagine, the TAO’s job is very difficult. They must develop a proper command and integration of the information flowing to them from charts, radar displays, console operators and other sources; out of this diverse flow they must fashion a single, mental spatio-temporal picture or map which organises and orders the information in such a way that decisions can be made. A crucial aspect of the mental picture or map that the TAO fashions is its essential revisability; at any time it is, so to speak, a hypothesis which the latest information might show to be in need of correction. The task which confronts the TAO is thus one of developing, continually checking and where necessary revising such an overall picture in the light of incoming information. Rochlin says that past and present TAOs have described their sense of having this kind of interpretive, self-correcting command of their information flows as “having the bubble”:

When you’ve got the bubble, all of the charts, the radar displays, the information from console operators, and the inputs from others and from senior staff fall into place as parts of a large, coherent picture. Given the large amount of information, and the critical nature of the task, keeping the bubble is a considerable strain. On many ships, TAO shifts are held to no more than two hours. And because getting the bubble cannot be done quickly or simply, shifts often overlap by up to an hour to ensure that the “bubble” is transferred without a dangerous break to the incoming TAO. When for one reason or other the TAO loses the sense of coherence, or cannot integrate the data, he announces loudly to all that he has “lost the bubble” and needs either replacement or time to rebuild it. Losing the bubble is a serious, and ever-present, threat, and has become incorporated in the general conversation of TAOs as representing a state of incomprehension or misunderstanding even in an ambience of good information.6 (p.117, Rochlin)
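To fix ideas, the following Python sketch models “having the bubble” as an explicitly revisable hypothesis: an overall picture that is tested against each new piece of data and revised whenever the data contradict it. Everything here, from the class names to the crude consistency test, is my own illustrative assumption; the idea it tries to capture, the picture as a continually tested hypothesis, is the one just described.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Picture:
    """The TAO's current overall hypothesis about a single air contact."""
    identity: str        # e.g. "civil airliner" or "hostile F-14"
    climbing: bool       # the vertical behaviour that hypothesis leads one to expect

@dataclass
class Observation:
    mode: str            # transponder mode actually recorded: "II" or "III"
    climbing: bool       # altitude trend actually recorded

def consistent(picture: Picture, obs: Observation) -> bool:
    """A crude coherence check: does the latest data fit the current picture?"""
    if picture.identity == "civil airliner":
        return obs.mode == "III" and obs.climbing
    if picture.identity == "hostile F-14":
        return obs.mode == "II" and not obs.climbing
    return False

def hold_the_bubble(picture: Picture, stream: List[Observation]) -> Picture:
    """Treat the picture as a revisable hypothesis, not a settled fact."""
    for obs in stream:
        if not consistent(picture, obs):
            # Revise the picture rather than cling to it.
            picture = Picture(
                identity="civil airliner" if obs.mode == "III" else "hostile F-14",
                climbing=obs.climbing,
            )
            print(f"picture revised -> {picture.identity}")
    return picture

# A watchstander who started from the wrong picture but kept testing it against
# the recorded data (Mode III, climbing) would have been forced to revise it.
initial = Picture(identity="hostile F-14", climbing=False)
data = [Observation(mode="III", climbing=True)] * 3
print(hold_the_bubble(initial, data).identity)   # civil airliner
```

The two-hour limit on TAO shifts and the hour-long overlaps Rochlin describes are, on this picture, organisational ways of making sure that someone is always running this revision loop.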

Now in an organisationally and technically complex system which must operate quickly in situations of high risk and in highly uncertain, unpredictable environments, reliable and high-quality performance can only be obtained if somewhere in the whole system there is someone who has “got the bubble.” This is true whether the system be a modern day aircraft carrier, a jet fighter or even the air traffic control system at a modern airport. “Bubbles” are, as Rochlin says (p.117), primarily human decision systems, and if there is no one in a highly complex technical system who has “got the bubble,” then things do not go well; the system stumbles about blindly even in a sea of reliable information.

Just this latter description fits the Vincennes perfectly. In the Vincennes, there was no one who had a total overview of the situation. Moreover, the results of the naval hearing which investigated the incident suggest that the way the Vincennes was designed to operate positively worked against anyone’s developing and maintaining a “bubble.” Not merely did no one on board the Vincennes that day actually have a “bubble”; the whole system appears to have made it hard for anyone involved in its operation to form one, that is, to engage in the ongoing activity of forming and revising an overall mental picture or scenario on the basis of the information delivered by its various technical sub-systems. On other ships, the role of multiple technical systems is to feed information to the TAO so that he can build and continually rebuild an overall scenario. But on board the Vincennes things seem to work in reverse: not the TAO or anyone else, but rather the computer-directed anti-aircraft missile system is the primary information integrating system. On board the Vincennes, the TAO, instead of integrating information, just inputs it into the computer. Indeed, it appears that the very concept around which the Vincennes was built was that of reducing as far as possible any pressure on the crew to interpret data and adjust systems to meet changing realities. According to Rochlin, the crew of the Vincennes was instead “trained to activate an elaborate system of control in a situation where the only uncertainties would be those deliberately created by potential enemies trying to confuse their information and data collection system.” (p.118, Rochlin) That is, the crew were not trained to interpret incoming data so as to obtain a coherent overall picture. Rather, they were trained to recognise certain kinds of incoming data as indications that certain pre-conceived courses of actions or routines should be initiated.

In the light of this analysis, it becomes explainable why the whole crew in the Combat Information Centre could so easily slip into a pre-conceived battle scenario and stay there, even though their instruments were telling a quite different story. They had been trained to use their machines as factory workers operate plant equipment or secretaries operate word processing devices. That is, they had been trained to perform specific tasks in specific kinds of environment. And this was how they had to be trained because this was how their equipment was designed to be operated. For on the Vincennes, the computer-guided anti-aircraft missile system was what did all the integrating and synthesising of data. Thus, when Iran Air Flight 655 appeared on their screen, when the TIC made and persisted in his fateful error in response to the IDS’ false report of a Mode II signal, and when finally the interpersonal dynamics of the crew in the CIC took effect (a weak Anti-Air Warfare Commander, effective leadership in the hands of the TIC), the entire crew locked into their “attack by missile-bearing aircraft” routine and acted it out.

You might want to reply to this explanation that surely their training could not have turned individual crew members into such automatons that they could not see what their instruments were saying. But as I understand Rochlin’s analysis, this would be to misunderstand the nature of the systems on the Vincennes. They were so complex, so sophisticated, that individual crew members could not have seen what their instruments were saying. Systems such as those on the Vincennes provide information so quickly and so densely that without some kind of overall spatiotemporal mental picture or map you cannot interpret the data. For precisely this reason there is on other ships, but not on the Vincennes, a special person whose job it is to construct and continually check out such an overall picture on the basis of all the incoming data. But on the Vincennes there was no such person. The TIC, in an effort to make sense of what was going on, latched on to a certain pre-conceived overall picture or scenario which, of all the ones for which the crew had been trained, best fitted the situation. But neither he nor anyone else was in a position to develop a “bubble”; indeed, since the whole system was not designed around anyone’s playing this role, no one in the CIC had ever been trained in forming a “bubble.” Thus it was that the TIC’s initial picture, instead of being checked and discarded on the basis of what the instruments were saying, became entrenched in everyone’s mind. On ships where there is someone with a “bubble”, errors in pre-conceived scenarios have a chance of being detected, and thus pre-conceived scenarios themselves have a chance of being revised or rejected. But on a ship like the Vincennes, where the system itself works against there being anyone with a “bubble”, pre-conceived scenarios can become entrenched as the only means of making sense of things and thus as the only means of remaining capable of action.
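Set against the revision loop sketched earlier, the failure mode which the hearing called “scenario fulfillment” can be caricatured as a system in which a single cue locks in a pre-trained scenario, after which contradicting data is no longer allowed to dislodge it. The following Python sketch is again purely illustrative and deliberately simplified, not a description of the real equipment or procedures on the Vincennes; the contrast with the self-correcting loop above is the whole point.

```python
from dataclasses import dataclass
from typing import Optional

# The pre-trained routines the crew could recognise and act out.
SCENARIOS = {
    "II": "attack by missile-bearing aircraft",
    "III": "routine civil traffic",
}

@dataclass
class ScenarioDrivenCrew:
    """Once a scenario is locked in, later data is read to fit it, not to test it."""
    locked_scenario: Optional[str] = None

    def ingest(self, mode_report: str) -> str:
        if self.locked_scenario is None:
            # The first cue (here, a momentary Mode II report) selects the scenario;
            # no one is charged with re-testing that selection afterwards.
            self.locked_scenario = SCENARIOS[mode_report]
        # Every subsequent report is interpreted within the already locked scenario.
        return self.locked_scenario

crew = ScenarioDrivenCrew()
print(crew.ingest("II"))    # a momentary Mode II report locks in the attack scenario
print(crew.ingest("III"))   # Mode III data keeps arriving ...
print(crew.ingest("III"))   # ... but the locked scenario does not budge
```

The design difference that matters lies in where the testing happens: in the earlier sketch the picture has to survive every new observation, whereas here the first observation permanently selects the picture.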

Clearly, if Rochlin’s analysis of why things happened as they did on board the Vincennes that day is correct, then it is quite inappropriate to say, as did the U.S. Navy, that the incident was just a result of human error. For if Rochlin is right, the kind of human error involved here was one which arose directly out of the structure and context of the large technical system in which these humans were embedded. As such, this kind of human error cannot be regarded as a chance or fortuitous occurrence which says nothing about the reliability or unreliability of the system within which it occurs. Rather, this kind of human error must be regarded as an inherent weakness, an inherent tendency to failure, of the total system itself. Wherever the task of humans is to operate large-scale, complex systems of the kind to be found on the Vincennes, there will be an ever-present tendency on the part of humans to get entrenched in rigid and inflexible interpretations of how the system is running and under what conditions. For these humans need some overall conception of what is going on simply in order to keep the system functioning—and if the system is so organised that it prevents them correcting and adjusting this overall conception, then the danger is present that these humans will stick with this conception even when it is, or has become, no longer appropriate.

Now as Rochlin points out, underlying the kind of large-scale, complex technical system to be found on the Vincennes is a certain conception of what it is to operate any kind of complex organisation, be it a naval vessel or a corporation. This is the conception of command as control of one’s operating environment. On this conception, good commanding consists in anticipating all relevant eventualities in one’s environment and developing in advance ways of dealing with them. Clearly, such a conception of command presupposes that uncertainty can be eliminated. That is, it presupposes that one can be certain in advance about all contexts and circumstances in which the organisation commanded might ever find itself and on this basis determine in advance the best response to them. The Vincennes was constructed under this presupposition: the system was built to deal with a certain range of possible battle scenarios and training to operate the system consisted in learning certain responses to events in these scenarios.

There is, however, another conception of command, namely, command as management of one’s operating environment. This conception allows that in decision-making there will be an ineliminable element of uncertainty as to the precise conditions and circumstances in which the decision will be carried out. It thus does not attempt to develop in advance ways of dealing with all contingencies, but rather proceeds in a more “trial-and-error” way, correcting its models and pictures of its operating environment as it goes. Rochlin gives some examples of this kind of management: “a soccer goaltender watching the oncoming play, a batsman guarding a wicket, and a battlefield commander adjusting his troops according to the tide of battle are all “managing” their particular critical environment, adjusting on the fly to try and get a favorable outcome.” (p.103, Rochlin)

Now until recently, military command on the battlefield was necessarily a matter of battle management as opposed to battle control; limited technological capabilities left no other option. This is not to say that the attempt has never been made to command by rigorous control. In fact, quite a few commanders have attempted this. But their efforts have almost always been dismally unsuccessful.7 At least since World War Two, however, technology has advanced so rapidly that the idea of a truly controlled, mechanically organised battle has come to seem possible to many military people. Advances in communication technologies and data processing have made it seem increasingly possible for central command structures to control ever larger units of men and machines. It has come to seem possible to steer an entire battle or indeed an entire war from one central, computerised control bunker.8

The question is, though, whether the idea of total control is not ultimately an illusion. The case of the Vincennes suggests that it is. For the Vincennes, with all its hi-tech computer systems, was built around the idea of battlefield control by complex technical systems; humans were there simply to provide the appropriate inputs and activate the appropriate mechanisms. And this complex system failed in a way which is directly attributable to the character of its design: the Vincennes was designed in the conviction that all significant eventualities could be anticipated and taken into account in advance. Thus, it was designed to carry out fixed, pre-programmed routines in response to certain kinds of incoming signal. But battle environments are hardly ever predictable and anticipatable; it is always possible that the environment should contain certain facts or events which crucially affect the running of the system but which have not been anticipated in any of the scenarios or routines with which the system has been supplied. The shooting down of Iran Air Flight 655 by the Vincennes clearly illustrates this: the designers of the computer-guided missile system on board the Vincennes could never have anticipated that the ship would one day find itself in an external environment where it had to guard Western oil supplies in shallow and confined waters, waters regularly flown over by civil aircraft and shared by two warring Moslem nations, both hostile to the U.S.A., one of which had already caught an American ship off guard and badly damaged it, thereby greatly embarrassing the American Navy and increasing the pressure on all American ships in the Gulf to avoid a repetition. Nor could the designers have possibly predicted or anticipated the dynamics of the interpersonal relations in the CIC on that day, namely, the perception of a superior officer as weak and inexperienced, which allowed effective command to pass to a relatively junior officer. Yet these facts were causally relevant to the events which occurred on July 3rd; they were instrumental in causing the system to fail in the way it did. Interestingly, they were only instrumental in causing the tragedy because the system on the Vincennes allowed them to be. For the ship’s designers had not attempted to ensure that there was someone on board whose job it was to use the data provided by the system both to form a picture of what was going on outside and to continually re-assess and where necessary revise this picture in the light of fresh data. If there had been such a person, then the pre-conceived scenario of attack from the air might never have become entrenched in the minds of the people operating the CIC.

So why did they design the Vincennes in this way? Were they, like so many military people today, just victims of the crazy idea of total control? Well, no doubt they were. But in fact there is a real sense in which the kind of system which the Vincennes represents is actually becoming necessary. Today’s smart weapons systems are so clever, so powerful and so rapid that it is no longer possible to rely on human skills of management and assessment. These human skills are only effective in situations where events develop slowly enough and are sufficiently loosely interconnected to allow time for human decision-making, which is essentially a trial-and-error procedure of making a first stab, seeing what happens and then making adjustments and revisions where necessary. But on the modern battlefield human trial-and-error techniques are simply too slow;9 modern weaponry renders them inapplicable because the speed and power with which it strikes means that everything is decided on a time scale smaller than what these human methods require. Thus, it is not just crazy technomania which has increasingly led the military to use new technology not to supplement past practices of command, but rather to replace them. Increasingly, military people are using new and powerful technologies to create throughout their organisations “a series of direct links for information and control, and placing at the center of the resulting web powerful, centralized command centers that are intended to exercise direct control from the top down to even the smallest of battlefield units.” (p.105, Rochlin) And they are doing this not just because they want to, but because they have to. For they recognise that it is becoming harder and harder for humans to play the kind of role played by TAOs on ships other than the Vincennes.

With this we see that our military find themselves in something of a dilemma. Improvements in the technology of war force a shift in military organisations away from the idea of management to the idea of total control. But the technological systems required for effective control are themselves so complex that they make a new kind of system failure possible, namely, the inability of the humans operating the system to readily identify that they are operating the system on false assumptions about the current state of the system and its environment.10 When this happens, the system is in effect running blind and only good luck can prevent tragedy.

Now I am not particularly concerned about the problems of military planners. But there is an important general point here. For surely this dilemma is not restricted to hi-tech military systems. Surely, in any kind of technological system, the more complex it becomes, the greater the tendency will be to replace command by management with command by control. Nuclear power generation is a case in point. Here, the whole thrust of power plant design has been to eliminate the human element. Just like the system on the Vincennes, nuclear power plants embody a quite new conception of the relation between human beings and the technology they operate. When technology was simpler and slower, it was both possible and necessary for the operator of technology to be an ever watchful and relatively knowledgeable manager who could easily identify and allow for the weaknesses and failures of the system being operated. It was both possible and necessary for him or her to have a general skill in manipulating relatively simple and unreliable equipment in all sorts of circumstances which could not be predicted in advance. He or she could even be expected to step in and undertake repairs on the fly in order to keep faulty equipment running. But such an operator is no longer possible, either in modern, fast-moving battles or in modern, fast-moving nuclear reactors. The processes involved in nuclear power generation are so complex, dangerous and tightly interwoven that failures can neither be readily identified and understood when they occur, nor unfold in a relatively leisurely and contained way. So the old conception of what it is to operate technology no longer applies. The manager of limited and failure-prone systems must give way to the controller of complex, technologically reliable processes whose failures are not readily identifiable and correctable on the run. Thus, operators of nuclear power plants and hi-tech cruisers are much more like workers operating machines in a factory, or secretaries operating word-processing systems, than dashing pilots of Sopwith Camels. They are people trained to perform specific tasks in specific environments who have little skill and even less experience in fixing things on the fly when something goes wrong.

In general, the more complex technological systems become, the less their operators can grasp how the system as a whole is behaving in a given context, and thus the less able they will be to adjust their systems to new circumstances as they arise. The only way to compensate for this increasing inability is to pre-programme systems with more and more scenarios and routines in the hope of anticipating more and more eventualities and contingencies. Indeed, these days, there are systems around which can “learn” from their mistakes and rebuild or revise their pre-programmed routines and scenarios. But even this degree of sophistication cannot neutralise the fact that the environments of systems remain fundamentally unpredictable. There will thus always be the possibility of some fact or event occurring within the system’s environment which affects its performance but was not anticipated in the system’s routines, even when these routines have themselves been modified and improved by the system on previous occasions. Only humans, with their abilities to manage rather than control, seem capable of dealing with such facts and events.

Notes

  1. I have also mentioned the claim made by Jacques Ellul that such a web of technological systems eventually becomes so complex that technological innovation and improvement become themselves structural or inherent properties of the web. In my fourth set of notes I extracted from Ellul the following argument in support of this claim: As the total web of systems or techniques becomes more complex, that is, as it comes to contain more and more systems and more and more interconnections between systems, the environment of any one system becomes more and more unpredictable. And the more unpredictable a system’s environment becomes, the more pressure there will be upon an individual system to develop a capacity to redefine its connections with its environment quickly. Because technological innovation provides a means of reducing dependency on existing relations and of redefining them, increased pressure to be flexible in its relations to the environment will encourage a system to look for new ways of doing what it does better. Technological improvement and updating will increasingly become an essential part of its own activities, and thus an inherent property of the total web.

    Incidentally, Ellul’s arguments for this thesis of the self-augmentation of technique are rather jumbled; he tends to mix the kind of argument just presented with a number of other arguments to the effect that certain psychological tendencies of individuals and groups exacerbate and accelerate the tendency of technological society to incessant improvement and extension of its technical means, e.g., individuals’ fascination with technology and the inclination to maintain complex technological organisations beyond the point where they have fulfilled their initial task. (Winner adduces the decision to retain N.A.S.A. after the completion of the Apollo moon missions as an example of the tendency of technological society to resist the break-up of its large technological organisations once their mission has been attained—see p.245 of his book Autonomous Technology. I must confess that I do not find at least this example particularly convincing. For in the case of N.A.S.A. it was not just a matter of finding something for N.A.S.A. to do, no matter what, no matter how irrational. If you are firmly convinced that humans ought to be reaching out further and further into space, then it is a perfectly logical thing for you to argue that this existing and highly successful organisation should be retained and given the new job of getting us, say, to Mars. I can myself think of no case where a technical organisation was kept alive simply for the sake of keeping it alive, where the grounds used to justify keeping it alive were in fact spurious and senseless pretexts advanced by people who simply wanted to retain the organisation at all costs.)

  2. In Social Responses to Large Technical Systems—Control or Anticipation, edited by Todd R. La Porte, Kluwer Academic Publishers, 1991, pp.99-125.

  3. Most of this information comes from the Fogarty report, the report of the Naval Hearing into the whole incident, which was convened by Rear Admiral W. M. Fogarty on July 6th, 1988.

  4. Interestingly, they did also recommend that a study be undertaken of the stress factors on personnel in ships equipped with hi-tech systems and that “(t)his study … also address the possibility of establishing a psychological profile for personnel who must function in this environment.” (p.109, Rochlin) As Rochlin points out, this is a tacit admission that the very sophistication and complexity of the system on the Vincennes may have contributed to the incident. But this line of thought was not followed any further.

  5. As a matter of fact, the Hearing had originally recommended a mild censure for the Anti-Air Warfare Commander, but this was later revoked by Frank Carlucci, the U.S. Secretary of Defense.

  6. That is, they describe it as a state of incomprehension or misunderstanding even in a situation where the incoming information is all perfectly good.

  7. Perhaps the most tragic example of an attempt to command by control was provided in the First World War by General Douglas Haig’s effort to conduct one of the several battles of the Somme along almost mechanical lines; predictably, it turned out to be one of the most disastrous for the Allies. Apparently, Haig ordered British soldiers to go over the top not in a mass charge, but in columns, and to advance at walking pace towards the German trenches. Haig’s idea was apparently to revive some of the tactics used in early nineteenth century battles prior to the invention of the breech-loading repeating rifle and in particular the machine gun. The casualties were enormous. In the various battles of the Somme, the Allies lost around one million men. This example is also mentioned by Rochlin—see p.104.

  8. N.A.T.O., for example, spent much of its time during the Cold War developing highly complex plans for what it called the AirLand Battle in Europe; if the Soviets had ever invaded Western Europe, N.A.T.O. would have responded with a single, massive, co-ordinated counter-attack from the land, sea and air, all controlled from central command bunkers.

  9. As Rochlin says, “In an era of supersonic aircraft armed with high-speed missiles, quick-reacting radar-directed gun and missile batteries, and tank battles that may be won or lost on the first shot, there is simply not the time for centralized command systems to exercise real-time control over battlefield events.” (p.105, Rochlin)

  10. As Rochlin puts it, “(t)he very complexity of the equipment will make it more difficult to ascertain what is going awry, why, and, at times, whether something is going badly or incorrectly at all.” (p.114, Rochlin)