Some years ago, at a time sufficiently distant that I was still a member of the IEEE (Institute of Electrical and Electronics Engineers), I attended one of that organization’s Simulation Conferences. There I heard a tale from two engineers who had prepared a queuing model of a particular warehouse operation. The model had the unfortunate tendency to fail with some regularity: the simulation would progress to a particular point and then cease to function, producing a series of error messages.
The two engineers decided to visit the warehouse they were simulating and observe a day at the loading dock, which supplied the input to the queuing model. They watched load after load of boxes being moved into the storage area that fed the automated stacking system; when the last box of a load would not quite fit, the dock attendant simply braced himself against it and forced it in.
The engineers were nonplussed. “Why did you just do that?” they asked the dock attendant. The attendant explained that there had been a blueprint error in the construction of the warehouse: the storage area that fed the automated stacking system was about nine inches too short, so it was necessary to force the last box into the warehouse. The boxes in the storage area had just enough give for that to be accomplished, provided some substantial force was applied to the last box.
In a sense, then, the simulation had been a success: the automated warehouse system would indeed fail if operated as constructed. However, human intervention had compensated for what was basically a defective system design.
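To make the failure mode concrete, here is a minimal sketch in Python of the kind of capacity check involved. It is purely illustrative and rests on invented assumptions: the box and storage-area dimensions, the load_dock function, and the allow_forcing flag are hypothetical, not part of the engineers’ actual queuing model.

```python
# A toy capacity check (illustrative only, not the engineers' model).
BOX_LENGTH_IN = 48.0                  # assumed box length, in inches
STORAGE_LENGTH_IN = 20 * 48.0 - 9.0   # storage area built nine inches too short

def load_dock(num_boxes, allow_forcing=False):
    """Place boxes into the storage area, optionally allowing the human work-around."""
    used = 0.0
    for box in range(1, num_boxes + 1):
        if used + BOX_LENGTH_IN <= STORAGE_LENGTH_IN:
            used += BOX_LENGTH_IN
        elif allow_forcing and used + BOX_LENGTH_IN - STORAGE_LENGTH_IN <= 9.0:
            # The dock attendant's solution: the stacked boxes have just
            # enough give to absorb the last one if enough force is applied.
            used = STORAGE_LENGTH_IN
        else:
            # The algorithm has no such recourse; the run halts with an error,
            # much as the engineers' simulation did.
            raise RuntimeError(f"box {box} does not fit; storage area is full")
    return used

load_dock(20, allow_forcing=True)     # succeeds, as the real loading dock does
try:
    load_dock(20)                     # fails at the last box, as the model did
except RuntimeError as err:
    print("simulation error:", err)
```

Run without the work-around flag, the loop stops at the last box; with it, the run completes. The flag is simply a stand-in for the adaptability the human attendant supplied and the algorithm lacked.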
In recent years, a substantial portion of the productivity gains in the U.S. economy has been ascribed to Information Technology (IT). There is probably substantial truth to this conjecture, especially if IT is expanded to include telecommunications of all sorts: email, cellular telephones, broadband Internet connections, and so on. However, heavy IT involvement in business processes carries a number of implications that should be borne in mind. I will speak primarily to two of them. The first is that a high degree of IT involvement tends to make a business “system-centric” as opposed to “process-centric.” There are several further implications of that observation, and I’ll get to those in a bit.
The second thing to bear in mind is that IT systems are generally simulations of something rather than res ipsa, “the thing itself.” This is easy to see when we speak of a computer simulation model, but that hardly scratches the surface. A telephone, for example, does not deliver the voice of the person at the other end; rather, electrical information is sent from one place to another, where it is used to reproduce a simulated human voice at the receiving end. Similarly, financial and accounting systems are simulations, and economists speak of “the financial economy” as distinct from “the real economy.” When decisions made for financial accounting reasons are derided as “coming from the bean counters,” that derision carries with it the idea that the financial simulation is missing some important part of the real situation.
Nevertheless, accountants and financial officers are charged with protecting the financial systems of an organization, and with good reason: if the financial system fails, it likely mirrors a failure in the “real” organization, and even if it does not, financial system failure makes some part of the real organization unobservable, uncontrollable, or both. As this suggests, financial systems are both simulation systems and control systems.
The same may be said for IT systems. IT systems are generally either communications or control systems (or both), and failure of an IT system can have adverse or even catastrophic consequences, whether it be losing an inventory database, sensitive customer information, or a set of emails. Without adequate information, management decisions become mere guesses, and without adequate communications, those decisions cannot be implemented.
Moreover, as the tale of the automated warehouse suggests, IT systems are not as robust as real processes, because human beings are inherently more adaptable than algorithms and hardware. If the box doesn’t fit, the dock foreman may try kicking it; if that doesn’t work, perhaps a box cutter will do; if that fails, then perhaps he stores the box temporarily until the holding area clears. As long as a human being has a goal in mind, he or she will often find a way around obstacles. There is even a common name for this: “work-arounds.”
So it is understandable, perhaps even inevitable, that organizations with heavy IT involvement will tend to become “system-centric,” but this carries numerous drawbacks. The most obvious is the tendency to concentrate on system development at the expense of business process development, which has several follow-on failure modes.
First, concentrating on systems rather than processes tends to starve the processes themselves of needed resources. This is particularly problematic in a cost-cutting environment, where IT systems are themselves supposed to reduce the need for personnel. Too often, however, the personnel savings are assumed rather than demonstrated, with an outcome that is deleterious to the underlying process. Moreover, IT systems usually carry their own overhead burdens, so it is not uncommon for the net effect to be merely trading one set of employees for another, the latter often more expensive than the former. This, incidentally, is a problem that is not at all confined to IT. Attempts to cut costs through personnel reduction often result in an increase in managerial overhead (“off-shoring” projects are notorious in this regard) with no great benefit to the organization as a whole. Hospital management is another area where this phenomenon has been observed, with reductions in nurse-to-patient ratios often occurring simultaneously with substantial increases in administrative (and sometimes legal) overhead.
Another implication of “system-centricity” is that it allows, even encourages, “gaming the system.” A substantial portion of the enhanced productivity created by IT systems in large organizations comes from the elimination of several layers of middle management; the managerial review process is then replaced with a variety of system-derived “metrics.” However, employees often find it easier to “work the metric” than to perform the work that the metric is supposed to reflect. This is the workplace equivalent of standardized testing in the public schools. Given the need for a certain percentage of students to achieve particular scores, teachers find themselves under pressure to “teach to the test,” emphasizing material that is known to be on the tests and de-emphasizing material that either does not appear on the tests or is inherently difficult to test. In some cases, school administrators and teachers have been found restricting the student population that takes the tests, sending those who will do poorly home on test days. In other cases, there has been instructor-assisted cheating.
There are no easy answers or quick fixes to these problems. Certainly the older “thick middle management” model contained its own pitfalls and cannot be taken as a model of informational clarity and unfailingly sound judgment. If higher-level managers wish to be kept ignorant of problems within the organization, they will accomplish this task, whether by killing the messengers who bear bad news, promoting those who tell flattering lies, or receiving “dashboards” that do not adequately convey a picture of the organization. However, those wishing to be dutiful officers and representatives of the organization as a whole should listen to the whispered memento mori (“remember that thou art mortal”) and take heed.