Wednesday, February 28, 2007

Next Year’s Model: Psychohistory

In writing about The Foundation Trilogy, Asimov once explained that his notion of how psychohistory would work was an analogy to chemistry, where individual, hard-to-predict atoms are nevertheless predictable when there are great numbers of them, such that statistical mechanics can explain their mass behavior. Analogies are tricky. Next to metaphor, they are the most poetic of models, and as such are prone to all the problems of poetry, sounding good but just possibly being a crock.

So I alternate between thinking that Asimov’s explanation was loopy and bizarre, and thinking that it contained a simplified kernel of truth.

In any case, while I might bristle at comparing complex humans to simple atoms, we do have the example of economics, where homo economicus rules the theoretical landscape, though “cognitive economics” has become a buzzword lately in attempting to replace HE with something a bit more realistic. At any rate, economics is the most numerical of the social sciences, the one most suitable to engineering-style analysis, and the one that I found most attractive back when I was girding up for my intellectual assault on the world.

(Hmm, did I ever put it in such Byronic terms? Oh, hell, probably. Now at least there’s some irony in it, though there probably was back then, too.)

It wasn’t just economics, though. We systems guys were making a sweet toolkit for the engineering of “urban systems.” There was an entire field for the study of transportation networks and queues that was opening up because of cheap computing power. Also of note was the ability to bring computer analysis to maps and mapping; it’s no accident that the company MapInfo was started as an RPI incubator project.

And there were some cool data gathering approaches becoming available from satellite imagery. I interviewed at the Earth Satellite Corporation in Berkeley when I first moved there (they weren’t hiring, or at least they weren’t hiring me). I also tried for a job in the San Francisco Urban Development agency as an “Urban Analyst,” or was it “Urban Planner?”

All perfectly respectable stuff. And all those fancy tools, plus further developments, are still being used to this day.

So why the feeling of letdown? Aren’t these the first steps toward the dream of modeling large scale urban and social systems? Doesn’t psychohistory still look like a probable future?

Well, no, not really. Not to me anyway. In some regards, this is a little like the Artificial Intelligence disappointment, only more so. At least I still think that we might someday build machines that possess sentient intelligence (just not during my lifetime). But predicting human society? Nope, no longer a believer.

First off, non-linear models (and, man, are social models non-linear) tend to slip over into chaotic behavior awfully easily. Chaotic behavior might as well be random: when some future state winds up depending upon the 37th decimal place of an initial condition, no measurement you could realistically make will pin it down. So you’re going to have some intrinsic randomness in your model, as well as some random extrinsic inputs, like, say, weather. So well, okay, yes, that’s a problem.
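To make that sensitivity concrete, here’s a toy illustration (a one-line non-linear system with the textbook chaotic parameter value, not a social model): two runs of the logistic map that start twelve decimal places apart and, within a few dozen steps, bear no resemblance to each other.

```python
# Toy illustration of sensitive dependence: the logistic map x -> r*x*(1-x)
# at r = 4, the classic fully chaotic setting.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-12)  # perturb the 12th decimal place

# Early on the two runs agree; after a few dozen steps they have
# completely decorrelated.
print(abs(a[5] - b[5]))    # still tiny
print(abs(a[-1] - b[-1]))  # order one: the runs no longer resemble each other
```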

Another problem, as I noted a little while ago, is that large scale models need a lot of initial data. In order to predict the future, you have to know the present, at the very least. How are you going to get enough information to even set the initial state of your social model? And even if you could conceivably mount a data gathering project large enough to get that much data, how is that project going to affect the system?

(Some naïve commentators say that this is the human equivalent of the quantum uncertainty principle. It isn’t. It is, however, part of a known phenomenon that is also found in the Hawthorne Effect. The Wikipedia has a pretty good, long article on the Hawthorne Effect, but be sure to read the talk section for a discussion of various disputes on the matter).

The final problem is one of expectations, self-interest, and the tendency of people to “game the system.” Simply put, if a big and useful social model did exist, its results would affect policy in such a way that some groups would immediately desire to subvert or control said model, in order to use it to further their own interests. I mean, I’ve seen this happen often enough with smog models, and the amount of money involved in environmental policy is a small fraction of what a predictive societal model would involve.

Think that’s cynical? I haven’t even started.

If you recall, I noted a few essays back that one way to make a non-linear model more tractable is to linearize it. In control theory, this is called “linearization along a trajectory.” It’s the method first used to control the Saturn Booster rockets.

But notice that I said “control theory.” This only works if what you want to do is to keep a system near a particular path. It’s not for prediction or analysis; it’s for control.
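For the curious, here is a minimal sketch of what “linearization along a trajectory” means in practice. The pendulum-style dynamics and the numerical Jacobian are my own toy example, nothing from the actual Saturn guidance work: you evaluate the Jacobian of the non-linear dynamics at points along a nominal path, and the resulting time-varying linear system describes how small deviations from that path evolve, which is exactly what a controller needs.

```python
import math

# Linearization along a trajectory, in miniature: given non-linear dynamics
# x' = f(x), evaluate A(t) = df/dx along a nominal path. Small deviations d
# from the nominal path then evolve (approximately) as d' = A(t) d, a linear
# system that control theory knows how to stabilize.

def f(x):
    # Toy pendulum-like dynamics: state is (angle, angular velocity).
    theta, omega = x
    return [omega, -math.sin(theta)]

def jacobian(f, x, eps=1e-6):
    # Numerical (finite-difference) Jacobian of f at state x.
    n = len(x)
    fx = f(x)
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        xp = list(x)
        xp[i] += eps
        fp = f(xp)
        for r in range(n):
            J[r][i] = (fp[r] - fx[r]) / eps
    return J

# Evaluate at one (hypothetical) point on the nominal trajectory:
x_nominal = [0.5, 0.0]
A = jacobian(f, x_nominal)
print(A)  # the local linear model, valid only near the nominal path
```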

So Asimov got that part right as well. The Seldon Plan failed, as such, but the Second Foundation became a hidden group, trying to get humanity back onto a path for a Galactic Empire. And the Empire would be a good thing.

That’s where Isaac and I would disagree.

This Year’s Model IV

To recap a few things:

I attended Rensselaer Polytechnic Institute from 1968 to 1974, graduating with a Master's degree in Engineering Science. Engineering Science was at that time, and remains to this day, a funny program, a "roll your own" sort of thing. An Engineering Science degree at RPI requires that you convince the curriculum chairman that the course of study you have designed is an appropriate one for an engineer. Then, of course, you actually have to complete the program, which is not as easy as you might think (and certainly not as easy as you thought when you first thought up the idea).

One buddy of mine studied urban planning, transportation, and architecture, so now he designs airports around the country and the world. Another took courses in electrical and biomedical engineering, and he's now a hospital management consultant. All in all, it worked out well, I think, at least sufficiently well that the Engineering Science program at RPI continues to this day.

As for me, my course of study centered on the simulation modeling of urban and environmental systems. These days, it's hard for me to even write that sentence without marveling at youthful hubris. What were we thinking? Well, grand notions were in the air. The Cybernetics movement of the 40s and 50s had flowed into what was called General Systems Theory: the idea that science and engineering had developed a set of analytical tools so powerful that they might even be able to handle the social and biological sciences.

I put together a course of study that included linear systems and control theory, voice and image processing, and urban analysis, with a solid chunk of statistics and operations research for trying to get the data that most people agreed would be necessary to validate and calibrate these huge models that we were going to build. I'm not sure how coherent the course of study was, but I will say that for a while it seemed that every course eventually wound up with us trying to invert some damn matrix or another.

Finally, I did my graduate work on rewriting a simple simulation model to compare to the very large model that the Lake George Ecosystem project at RPI had prepared. Then I graduated, moved to California, and began to look for a job. Eventually a dream job fell into my lap (literally, as one of the guys I was living with at the time tossed the phone number into my lap, saying, "We decided I wasn't the right guy for this, but it sounds right up your alley"), and I became a smog scientist. Why this was a perfect next step will now require some simple math.

Conceptually, the most general model of a dynamic (changes with time) system is the state variable formulation. It pretty much goes, “Here is a series of variables that describe some phenomenon. Each variable is linked to the other variables such that the state of that variable at an instant in time is a function of the other variables at some previous time.” Often, the “previous time” is the instant immediately preceding (for a differential equation), or a discrete time step back (for a difference equation). Sometimes, however, the functional relationship looks back some period of time, although this can be turned into an instant/single time step formulation just by creating more state variables which then contain a time lag link to previous variables, i. e. “memory variables.”
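Here is a sketch of the difference-equation version of that idea, using a hypothetical two-variable system and one "memory variable" to turn a two-step time lag into an ordinary one-step update. The coefficients are arbitrary, chosen only for illustration.

```python
# State-variable formulation as a difference equation: the state at step
# n+1 is a function of the state at step n. A "memory variable" (prev_a)
# carries an old value forward, converting a time-lagged dependence into
# an ordinary one-step update.

def step(state):
    a, b, prev_a = state
    new_a = a + 0.1 * b           # a depends on b one step back
    new_b = b - 0.05 * prev_a     # b depends on a TWO steps back, via memory
    return (new_a, new_b, a)      # the old a becomes the new memory variable

state = (1.0, 0.0, 1.0)
for _ in range(10):
    state = step(state)
print(state)
```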

Having said all that, Keep It Simple, Stupid is a good rule to live by, and it’s a pretty good rule in science and engineering. So let me write a simple equation:

dC/dt = kAB

In case anyone here has math nausea, let me emphasize how simple this is. It just says that the rate that C is changing with time depends on the product of A and B with k as a rate parameter (just multiply A and B and k). A, B, and C represent state variables, while k is a parameter, and t is time.
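For the truly math-nauseous, here is the same equation stepped forward numerically with the crudest possible scheme (forward Euler), holding A and B constant for the moment:

```python
# Forward Euler integration of dC/dt = k*A*B. With A and B held constant,
# C just grows at the steady rate k*A*B; the interesting behavior comes
# later, when C feeds back into A and B.
k, A, B = 0.5, 2.0, 3.0
C, dt = 0.0, 0.01
for _ in range(100):        # integrate from t = 0 to t = 1
    C += k * A * B * dt
print(C)                    # analytically, C(1) = k*A*B*1 = 3.0
```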

C can be anything, but it is most interesting when C has an effect on A and/or B. For example, suppose

C = -A - B + D + E

This is like a chemical reaction, where A and B react to form D and E. Another way of writing it is

A + B => D + E

That’s your basic chemical shorthand.

Or suppose you are dealing with a predator/prey relationship:

Wolf + Deer => (1+∆)Wolf (i.e., a well-fed wolf)

Or an aquatic ecosystem:

Phytoplankton + Zooplankton => (1+∆)Zooplankton

Phytoplankton + light + phosphate => (1+∆)Phytoplankton - phosphate

Now why is this so interesting?

The most interesting thing about such equations is how generally applicable they are. As I’ve just shown, you can use them for chemical compounds or ecosystems with equal abandon. Why? Because the setup is similar; the rate of change of the state variables depends upon how often the individuals (molecules, plankton, wolves) come in contact with each other. That is a very common situation.
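As a sketch of how general the setup is, here is the wolf/deer reaction stepped forward numerically, with rates of change driven by products of state variables (the encounter rate). The rate parameters are purely illustrative, not calibrated to any real ecosystem:

```python
# A minimal predator/prey sketch in the same "contact rate" style as the
# reactions above: deer reproduce, wolves eat deer they encounter, wolves
# die off. All parameter values are illustrative only.
def simulate(deer=10.0, wolves=2.0, dt=0.001, steps=20000):
    for _ in range(steps):
        encounters = deer * wolves
        d_deer = 1.0 * deer - 0.2 * encounters      # births minus predation
        d_wolves = 0.1 * encounters - 0.5 * wolves  # well-fed wolves minus deaths
        deer += d_deer * dt
        wolves += d_wolves * dt
    return deer, wolves

print(simulate())  # the two populations cycle rather than settling down
```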

The equations are non-linear but they can come close to being linear when either A or B is much larger than the other. Sometimes, this is called "well-behaved" which means "I think I sometimes understand how it works." Nevertheless, large, "well-behaved" systems often are "counter-intuitive," which means, "Okay, so I was wrong at first, but this time I'm sure I'm right, maybe."

This is just the chemistry part, of course, and the rest of photochemical modeling also has a lot of physics and mechanics in it. Whitten and I used to joke about how academic lectures on smog modeling would usually begin with somebody writing the diffusion equation on the board, a really daunting looking three-dimensional partial differential equation describing fluid flow, followed by a couple of single letters that represented “emissions” and “chemistry.” The joke was that there were well-established ways of solving the diffusion equation numerically, and the process itself (fluid mechanics) has been pretty well understood for generations. In other words, all the really hard work, preparing the emissions inventory and developing the chemistry, was compressed into two humble little letters.

A while back, in “The Right Formula” (May 7), I wrote a bit on “the stiffness problem” that often occurs when you’re trying to solve dynamic state equations that have widely varying time scales. In that essay, I briefly noted the “Gear-Hindmarsh” routines that we used in simulating smog chamber chemistry, before we hard-coded the chemical kinetic mechanisms into urban smog models. When doing the latter, we used various tricks that can only be used if you already know the chemistry you’re dealing with. Obviously this isn’t very good when you’re developing the chemistry, but that’s okay, because we had the Gear routines.

The Gear-Hindmarsh codes were developed at Lawrence Livermore Labs, in order to deal with equations like this:

Li6 + n => He4 + H3

H3 + H2 => He4 + n + 17.6 MeV

H2 + H2 => He3 + n

H2 + H2 => H3 + H1

In case you didn't notice, the 17.6 MeV means that if you do this to a substantial amount of Li6 and H2 (deuterium), you get a lot of energy. If the pressure and density of the material are right, you get a very large bang, i.e. a thermonuclear detonation. Hence, Livermore's interest.

Gear-Hindmarsh and similar schemes solve the stiffness problem, but at a cost: every time the system sees an input with discontinuities (step discontinuities even in the 4th or 5th derivative), the solver drops to a lower-order predictor-corrector and takes very small steps. This is fine for smog chamber experiments, not so good for urban simulations, where inputs and boundary conditions keep changing by the hour.
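Here is a toy version of the stiffness problem, using backward Euler, the first-order member of the backward-difference family that the Gear codes generalize. The test equation is my own illustration, not anything from the Livermore work:

```python
import math

# Stiffness in miniature: y' = -1000*(y - cos(t)) has a fast transient
# (time scale 1/1000) riding on a slow forcing (time scale ~1).
# Forward Euler at dt = 0.01 is wildly unstable; backward Euler, the
# first-order backward-difference method, is happy at the same step size.

def forward_euler(y=0.0, dt=0.01, steps=100):
    for n in range(steps):
        t = n * dt
        y = y + dt * (-1000.0 * (y - math.cos(t)))
    return y

def backward_euler(y=0.0, dt=0.01, steps=100):
    for n in range(steps):
        t_next = (n + 1) * dt
        # Solve y_next = y + dt*(-1000*(y_next - cos(t_next))) for y_next.
        y = (y + dt * 1000.0 * math.cos(t_next)) / (1.0 + dt * 1000.0)
    return y

print(abs(forward_euler()))  # enormous: the explicit method has exploded
print(backward_euler())      # close to cos(1.0), the slow solution
```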

As the final bit of something a little like irony, I’ll note that I once revisited my Master’s Degree work and used a Gear-Hindmarsh solver on it instead of DYNAMO, which was a simulation language developed at MIT and used by Jay Forrester for his Urban Dynamics and World Dynamics models. The Gear-Hindmarsh results were substantially different from the DYNAMO results, indicating that the (pretty crude) numerical solver in DYNAMO was still sensitive to step size in my simulations. Oops. I have no idea if Forrester’s results suffered from the same problem, though I don’t actually think it matters that much. Forrester’s work had substantially worse problems than a bad number cruncher.

This Year’s Model III

The massive (and historically cheap) increase in computing power that began in the 1950s and 60s changed the way that many previously difficult problems could be solved. In some cases, the analytical techniques had been around for a long time; cheap computing simply meant that they could be extended to larger problems, or applied to things that hadn’t been worth such analysis in times past. But there were also a number of methods of analysis that were invented during (or just immediately before) that fortuitous period: Shannon’s information theory, Dantzig’s linear programming, von Neumann and Morgenstern’s game theory, and Kalman filtering, just off the top of my head.

One big chunk of real estate in this New World was labeled Operations Research. It came out of the post-war Department of Defense and more or less took the corporate world by storm, since it offered the hope (and often delivered) of untangling complicated production and logistics operations into an at least somewhat optimized operation. I learned my OR material from the RPI School of Management, along with a good stiff dose of advanced statistics.

But Systems Engineering, and its not-quite-a-cult sibling, General Systems Theory was where the really big game lived. Here was a tool kit of almost mind bogglingly powerful techniques, things like linear systems analysis, discrete time systems, transform theory, including frequency domain analysis, Laplace transforms, and on and on.

There was already a lot of history in linear systems theory, because linear systems could often be solved analytically. Moreover, any RLC circuit is a linear system, so circuit theory and electrical engineering had all these cool techniques already in the can. Mechanical engineering also had linear systems as a backbone, with every mass-spring system (and in ME, almost everything is a mass-spring system) getting the same treatment. So the science of simulations already had plenty of analog models being used, with tuned circuits often used to simulate mechanical devices, or chemical processes, or, well, other electrical devices.

All well and good, but I had my eye on non-linear systems.

Non-linear systems are where “Everything You Know is Wrong.” The phrase most often used is “counterintuitive.” What are now called chaotic systems are non-linear. In fluid mechanics, this was called “the turbulence problem,” and it had been known to be a bitch for a long, long time. What the computer revolution did was to give people a handle on just how bitchy it was.

Now linear systems can be very complicated, too complicated for a human mind to understand, even. And linear systems can surprise you when they get very complicated as well. But in non-linear systems, even simple systems can be surprising. And it gets even worse when you connect them together.

The result is that there aren’t very many ways of analyzing non-linear systems. Most of them, in fact, consisted of making the system somehow appear to be linear, then using the linear techniques. If that didn’t work, you were stuck with the only other general method, which was modeling. Prior to electronic computers, the model was usually a real, physical model, like an aerodynamic model in a wind tunnel, or a small propeller in a tub of water.

With computers, however, you could make large scale numerical models, computer models as they were called, then just models, as the computer oracle took over the world.

I’ll talk a little about some of the math in my next essay, but I’ll end this one on a note about the sociology of it all. The thing is, computers became a magic wand. The Next Big Thing is often a magic wand in the popular imagination. That’s why Clarke got so much mileage out of his “Any sufficiently advanced technology…” aphorism. When people see something doing cool stuff, they expect it to do other cool stuff, without understanding any of it. So “computer model” became a powerful buzz phrase. It also fed into the tendency of people to project their own fantasies onto technology. The computer model became an oracle, its pronouncements inherently partaking of the mojo of science.

For myself, arrogant individual(ist) that I am, I never bought into the idea that “the model says…” “Models don’t say anything,” I would assert, “Modelers do.”

I was right, of course. As if that has ever done me any good.

This Year’s Model II

I don’t believe that any SF writer or futurist in the 50s or 60s gets any credit for writing about how computers would be the Next Big Thing. For that matter, no one who had seen the first transistor radios replace the tubed giants could have missed that computers were also going to be the Next Little Thing, either. The increase in computing power and miniaturization of electronics were part of the “log-log paper and a straight edge” view of the future, and the only errors in that area were in the slope of the line that got drawn.

Nevertheless, foresight as to what those computers would be for was not all that good. The (possibly apocryphal) story of the fellow who estimated that the world would need about six computers apparently comes from John Mauchly, co-developer of ENIAC, which did numerical calculations for the Defense Department. Given that Mauchly eventually designed the UNIVAC I, which ultimately sold about 40 machines, his estimate wasn’t that far off, if it was meant as a near-term statement about economics and markets. Certainly no one was going to use a UNIVAC for word processing.

The split between number crunching and number sorting was in computing from early on, expressed as the divide between “business” computers and “scientific” computers. By the time I came onto the scene, that split was exemplified by those working on IBM machines, and those working on CDC machines, with the latter sneering at the former at every opportunity. IBM was the domain of Cobol, while science used Fortran, or sometimes PL/I. Algol was Burroughs territory, and that land was too weird to rank.

Vast oversimplification, of course, since there were Fortran compilers for IBM machines, as well, and I’m sliding over Honeywell, and the rest of the “seven dwarves” of the computing world, but then, I’m not trying to write a history of computing here. Besides, mini-computers and then micro-computers soon made the previous status hierarchy obsolete.

From the science and engineering end of it, computers changed the equation, so to speak. The rapid fall in the cost of computing meant that all sorts of calculations that had been prohibitively expensive became cheap and practical as the price/performance curve accelerated. It was the opening of a New World, not geographically, but conceptually, with unfathomable riches just strewn around on the ground. The question was, what part of the new landscape did you want to explore?

Before I get into some of the details of things that I personally know about, though, let me mention the real heartbreaker: Artificial Intelligence.

In a 1961 article in Analog, entitled “How to Think a Science Fiction Story,” G. Harry Stine did a bunch of extrapolations and made some predictions about the future. It’s worth noting that Stine was pushing something more extreme than the exponential extrapolation; in fact, what he was predicting was a lot like the Singularity that is all the rage. As a result, among other things, he predicted FTL travel for sometime in the 1980s, and human immortality for anyone born after 2000. Didn’t help Stine much; he died in 1997.

Stine’s extrapolation for computers was also a tad optimistic, predicting about 4 billion “circuits” in a computer by 1972. I’m not sure what he meant by “circuit” but we’ve only recently reached that number of transistors on a single CPU chip. Stine also thought that the human brain had that many “neural circuits.” Again, I’m not sure of what those were supposed to be, but the human brain has on the order of 100 billion neurons.

Of course a neuron is also a lot more complex in function than a transistor, so we’re still nowhere near a brain simulating device.

Nevertheless, Stine’s optimism was pretty well matched by the actual AI community, which thought that neurons and brains were really, really inefficient, and that it would be possible to create intelligent devices that used many fewer circuit elements than the human brain required. Oops.

The result was an entire generation of computer science lost to AI. I’m overstating that, of course, partly because I knew more than one person who had his heart broken by the failure of the AI dream. I mention it as a cautionary tale. Not all the fruit in the New World is tasty. Not all of it even exists.

(For the record, based on some guesses about the complexity of neural function and assuming that Moore’s Law holds forever, I made an estimate in the mid-1980s that real machine intelligence would take from 80 to 100 years to come to fruition. Based on recent announcements by IBM of the “Blue Brain” project to simulate neural behavior in the neo-cortex, I’m currently estimating it will take 60-80 years. Unless Stine was right about the human longevity thing, I’ll not live to see that prediction proved right, although I could see it proved wrong if someone gets there first.)

Still and all, whatever happened in computer science stayed in computer science, for the most part. The only real hangover of the AI binge that I’ve ever had to deal with is the fact that the macro language of AutoCAD is a version of Lisp. Besides, I was never much interested in Hal, Mike, or Robby the Robot. I wanted to be Hari Seldon, or someone like him.

I’ll write about that later.

This Year’s Model I

I’ve been asked to write a few things about models and modeling, as that is (in theory at least) my core area of expertise. Before I get to the cool stuff, the computer simulation models that I’ve spent at least 25 of the last 35 years working with, I’m going to “go meta” for a bit.

The word “model” in the most general sense in science is very, very broad, and can be applied to almost anything. In fact, any abstraction is a “model” of something else, as is any analogy, simile, theory, or description. This almost, but not quite, makes the general sense of model so general as to be useless.

At this high level of abstraction, even the idea of “fact” becomes a theory. Indeed, it’s pretty hard to find an instrument or even a sensory experience, that doesn’t depend upon some theoretical construct for its meaning and interpretation. So “meaning” becomes a model, as does “language,” and “thought.” That’s what Plato was getting at when he was babbling about shadows on the cave wall. Reality itself is a theoretical construct.

In the recent past, Steve Gillette and I have had a few back and forths that touch on this area when we were discussing “brute facts” vs “institutional facts.” That division is similar to the “analytic” vs “synthetic” distinction suggested by the Positivists (who rewrote a page out of Kant). It’s also possible to map “analytic” to “deductive” and “synthetic” to “inductive.” The basic idea is that there are certain sorts of propositions (or facts) that are derived from a priori definitions, using logic to derive propositions of greater and greater complexity. These are analytic/deductive/institutional propositions/facts. Then there are other sorts of propositions/facts that must take the sensory world into account in their formulation. These are synthetic/inductive/brute facts.

One of the big arguments that this sort of talk produces is that it suggests that mathematics is an invention, similar to, for example, the law, or musical theater. Some mathematicians claim instead that mathematics has objective reality. I’ll only note that the best mathematician I know personally is of the strong opinion that mathematics is a human invention.

Grind this grist fine enough and you get back to basic epistemology and questions about the nature of reality. Fine. I consider my position on the matter to be entirely defensible, but recognize that it seems to be hard to get across to some people. I once had a fairly lengthy online exchange with someone where I was trying to get across to him the difference between an abstract concept and the concrete reality that the concept might cloak. Consider a three-sided load bearing structure that we call a truss. One might very well locate a fossil, or a geological formation, that can be said to be a three-sided load bearing structure. Was it a truss a million years ago? He insisted that it was, completely oblivious to my suggestion that “truss” is a human invention, and that without humans, no truss exists.

I know, I know, a pointless argument, except maybe not, because it is a trap to believe that concepts exist apart from the conceiver. The sentence “If a tree falls in the forest, does it make a sound?” is a Zen koan, actually, and, contrary to common belief, koans often have answers. In this case, the answer is, “The one I’m thinking of did.”

Thursday, February 22, 2007

20 Reasons Why We Should Not Have Invaded Afghanistan

1. It’s Afghanistan.

2. The Taliban had no involvement with 9/11. Toppling that government because of 9/11 was one of the first examples of the blurring of distinctions among 9/11 conspirators, jihadists generally, jihadist sympathizers, terrorists generally, Islamists, Muslims, Arabs, and anyone the Bush Administration happens not to like.

3. It’s Afghanistan.

4. It was an act of war. War is generally a bad idea. It is a negative sum game, and there is an absolute guarantee that innocent people will be killed. When the primary grievance was the loss of innocent lives on 9/11, a response that costs more innocent lives calls into question the moral basis of that response.

5. The Guantánamo Bay detainment camp.

6. The Taliban Government of Afghanistan, however repellent it might have seemed to various observers, was the legitimate government of Afghanistan, and was recognized as such by the U.S. The United States therefore undertook to topple a sovereign government whose transgression was harboring a fugitive, a fugitive who, it should be noted, had never set foot in the U.S., nor ever been indicted for the 9/11 attacks (although there are prior U.S. indictments dating back to 1998). In any event, it is not clear that the Taliban could have delivered bin Laden even if they so desired, and sans extradition treaties, it’s not entirely clear what the legal basis would have been for doing so.

7. The “no land wars in Asia” thing.

8. There is every reason to believe that the invasion of Afghanistan was a result that was not only expected by bin Laden and company, but one that was actively desired. The occupation of Afghanistan played a major role in breaking the Soviet Union, and the possibility of doing the same to the U.S. cannot have been discounted by the planners of 9/11. The idea that the invasion caught al-Qaeda by surprise is preposterous, and any comment along the lines of “We could have caught him in Tora Bora,” should be followed with something like “if only the Fuhrer had listened to me,” or “if it hadn’t been for those meddling kids and their stupid dog.”

9. There is a children’s game in Afghanistan where a group of young boys tie up one of the group, push him down in the dirt, and then kick more dirt on him until he manages to spit on one of the other boys, who then gets tied up etc. You have trouble spitting if you are either angry or afraid.

10. The United States supplied a substantial amount of ordnance to the Afghani mujahideen during the Soviet Occupation. An unknown, but probably substantial, amount of this ordnance still exists, ready for use in IEDs.

11. It’s Afghanistan!

12. War is an action between sovereign states. A military response to terrorism itself validates terrorism, by conferring the status of sovereignty on terrorists, magnifying their perceived importance in their own eyes and those of their followers. Bin Laden is now perceived by some more as a head-of-state-in-exile than as a criminal.

13. Helicopters crash more frequently in mountainous territory.

14. The creation of a large number of refugees, who then fled to Pakistan, running the risk of destabilizing a country with a large Islamist population and nuclear weapons.

15. A military response to terrorism makes it much more difficult, even impossible, to later invoke the criminal justice system. This further undercuts the rule of law, which is another terrorist objective.

16. The story goes that some Afghani mountain tribesmen pass the winter months with a bar of steel and a file. By the time spring comes, they have made a rifle.

17. Opium production in Afghanistan was nearly eliminated in 2001; in 2006 it was estimated at nearly 6000 metric tons. However one feels about drug laws, the illegal drug trade is a source of violence and corruption wherever it exists.

18. The Taliban was the product of a generation traumatized by the warfare of the 1970s and 1980s. Upon taking power, their first actions were extreme, but there are arguments that they had begun to moderate by the late 1990s, owing to the inevitable practical compromises that come with actually having to rule. The latest foreign occupation has traumatized a new generation, and we can expect the same aftermath and blowback if, as seems likely, they eventually return to power.

19. Whether or not the invasion of Afghanistan was primarily based on racism and militarism, it certainly gave aid and comfort to racists and militarists in the United States and elsewhere.


Wednesday, February 21, 2007

A Brief Primer on Global Warming

Global warming skeptics hide behind the notion that the effects of greenhouse gases are “very complicated,” which is true, but that shouldn’t stop anyone from knowing the easy parts.

The first easy part has to do with what’s called radiative equilibrium, which may be expressed with a Firesign Theatre reference to Teslacles’ Deviant to Fudd’s Law: “It goes in; it must come out.” In other words, the energy input to the Earth from the Sun must be re-radiated; a planet that absorbed more than it emitted would simply keep heating up until its thermal emission grew to match the input.

In fact, the Earth, along with most other planets, radiates slightly more energy than it gets from the Sun, because the Earth has a natural, internal heat source, caused by radioactive decay. But that’s a small correction for the Earth (Jupiter’s excess heat is much greater, but Jupiter has more than just radioactive sources for its internal heat source).

Now the first thing that happens to light hitting a planet is that some of it gets reflected immediately. That’s called albedo, and it’s not a small effect. The albedo of the Earth is about 37%, which is pretty high compared to the other inner planets. Only Venus, at 65%, is higher, whereas Mercury’s reflectivity is only 11%, the Moon’s is 12%, and Mars’ is 15%. These are averages, of course, although Venus is pretty uniform, because of the constant clouds.

The Gas Giants (Jupiter, Saturn, etc.) hover around 50%; some icy moons (I’m looking at you, Enceladus!) have very high albedos, despite not having atmospheres.

But it’s atmospheres that boost albedo, no real surprise there, because molecular scattering of light by gases (Rayleigh scattering) is pretty efficient. Then there are those spiffy sulfuric acid droplets in the atmosphere of Venus.
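The radiative-equilibrium bookkeeping, with albedo included, fits on the back of an envelope. Here is a minimal Python sketch (the solar constant of ~1361 W/m² and the Stefan–Boltzmann constant are standard values; the function name is mine):

```python
STEFAN_BOLTZMANN = 5.67e-8  # W/(m^2 K^4)
SOLAR_CONSTANT = 1361.0     # W/m^2 arriving at Earth's distance from the Sun

def effective_temperature(albedo):
    """Radiative-equilibrium temperature of a planet, in kelvins.

    Sunlight is intercepted by a disk but re-radiated by the whole
    sphere (hence the factor of 4), and the reflected fraction --
    the albedo -- never has to be re-radiated at all: what goes in
    must come out.
    """
    absorbed = SOLAR_CONSTANT * (1.0 - albedo) / 4.0
    return (absorbed / STEFAN_BOLTZMANN) ** 0.25

print(round(effective_temperature(0.30)))  # -> 255
print(round(effective_temperature(0.37)))  # -> 248
```

With a ~30% albedo the Earth radiates to space like a ~255 K body; the gap between that and the actual ~288 K mean surface temperature is what the greenhouse effect has to explain.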

Venus is particularly instructive, because its atmosphere is both thick and reflective. The result is that, by one measure, the place is surprisingly cool. That measure is the temperature at 55 km above the surface, where you have about one-half Earth’s atmospheric pressure and a temperature of a balmy 80 F. Up the O2 in your air supply a bit, and you could live there quite comfortably (in my story “Aphrodite’s Children,” I have it a bit cooler, owing to a reduction in the planetary greenhouse, but that just means that people can live further down). If Venus had a lower albedo, you wouldn’t be able to do that, because even at that height it would have to be much hotter to get rid of all the absorbed sunlight.

But the surface of Venus is hundreds of degrees hotter, because increasing pressure heats things up. This is called the adiabatic lapse rate, and it’s a simple consequence of compressing a gas without letting heat escape: put some more pressure on a quantity of gas and its volume gets smaller and its temperature increases.

So what does this have to do with the Earth’s greenhouse effect?

Well, first, Earth’s atmosphere is mostly transparent. Oxygen photodissociates in the hard ultraviolet (below about 240 nm), and ozone is produced in the upper atmosphere. Ozone, in turn, absorbs solar ultraviolet strongly (along with certain infrared bands), so the stratosphere warms, and because there’s a lot of ozone in the stratosphere, it’s warm enough to stratify, i.e. form a thermal inversion.

Other gases absorb some IR, most notably water vapor. Water vapor is mostly confined to the lower atmosphere, because it gets colder as you go up (the adiabatic lapse rate again), and the water rains and freezes out.
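The cooling-with-height of a rising dry parcel can be written down in one line: it is the gravitational acceleration divided by air's heat capacity at constant pressure. A sketch with standard textbook values (function names are mine):

```python
G = 9.81             # m/s^2, gravitational acceleration
CP_DRY_AIR = 1004.0  # J/(kg K), specific heat of dry air at constant pressure

def dry_adiabatic_lapse_rate_k_per_km():
    """How fast a rising parcel of dry air cools, in kelvins per kilometer."""
    return G / CP_DRY_AIR * 1000.0

def parcel_temperature(surface_temp_k, altitude_km):
    """Temperature of a dry parcel lifted adiabatically from the surface."""
    return surface_temp_k - dry_adiabatic_lapse_rate_k_per_km() * altitude_km

print(round(dry_adiabatic_lapse_rate_k_per_km(), 1))  # -> 9.8
print(round(parcel_temperature(288.0, 5.0)))          # -> 239
```

When condensation kicks in, the released latent heat cuts the cooling rate to roughly 5–7 K/km (the "moist adiabat"), which is why moist rising air gets extra buoyancy.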

The Sun puts out a fair approximation of black body radiation, and at the temperature at the surface of the Sun, most of the energy is in the visible and near IR region. So the atmosphere is mostly transparent to sunlight. Some of it does get scattered (“Why is the sky blue, Daddy?”) and that has a lot to do with the Earth’s relatively high albedo. And some gets reflected away by clouds, more on that in a bit.

Most sunlight reaches the surface of the Earth, and most of it is absorbed. So the surface warms. Then what?

The surface of the Earth only gets up to, at most, a bit over three hundred kelvins; a hundred degrees F is mighty hot for an Earth surface temperature, and most of the surface is way below that. At those temperatures, the radiation emitted is in the long-wave (thermal) infrared region. A lot of gases have absorption bands in that region; these are the so-called “greenhouse gases”: water vapor, CO2, methane, ozone, etc.
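Wien's displacement law puts numbers on that separation between sunlight and earthlight; a quick sketch (the Wien constant, about 2898 μm·K, is standard):

```python
WIEN_CONSTANT = 2898.0  # micrometer-kelvins

def peak_wavelength_um(temperature_k):
    """Wavelength (micrometers) where a black body radiates most strongly."""
    return WIEN_CONSTANT / temperature_k

print(round(peak_wavelength_um(5778.0), 2))  # Sun: -> 0.5 (visible light)
print(round(peak_wavelength_um(288.0), 1))   # Earth: -> 10.1 (thermal IR)
```

The ~10 μm band is squarely where water vapor, CO2, methane, and the rest have absorption bands, which is what makes them greenhouse gases.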

So when the Earth emits radiation, a good bit of it is absorbed by the air above it.

Also, and this is very important, air is warmed by contact with the Earth. So again, air near the ground gets warmed by the Earth.

Warm air rises, and as it rises, it cools. Eventually, it reaches the same temperature as the air whose level it has risen to. But there are some things that interfere with the idea of a “dry adiabat,” as it’s called. For one thing, the air probably contains some water vapor. As the air cools, it loses the capacity to hold water, and so you get water or ice formation (and precipitation). The condensation of the water vapor emits heat, so the air gets some extra “oomph” as it rises.

Then there is the matter of the air continuing to emit thermal radiation. The atmosphere is transparent to some of the bands in the radiation that the air is emitting, so that energy escapes rapidly into space. That cools the air more quickly than you would expect. Eventually, the cooling air begins to descend; usually it has moved toward higher latitudes when it does this, because of the large scale circulation patterns in the atmosphere.

If you add more greenhouse gases to the atmosphere, that radiative cooling phenomenon that I just mentioned happens at a higher altitude than if the GHGs weren’t there. The “optical thickness” of the atmosphere is greater to IR, so the rate of radiative loss is less at any given height. So the air stays warmer longer—and goes higher.

But Boyle’s Law still applies, and when the air begins to sink again, by the time it reaches the ground, it’s a bit warmer. That’s the “Greenhouse Effect” in a nutshell.
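That nutshell can be quantified with a toy calculation. In the simplified emission-height picture, the level that radiates to space stays at roughly the same temperature (fixed by the energy balance), so if greenhouse gases push that level up, the surface below warms by the lapse rate times the rise. The numbers here are purely illustrative, not a climate model:

```python
MOIST_LAPSE_RATE = 6.5  # K per km, a typical tropospheric average

def surface_warming_k(emission_height_rise_km):
    """Toy estimate: surface warming when the effective emission level rises.

    The emission level keeps its temperature; the whole temperature
    profile, surface included, slides upward underneath it.
    """
    return MOIST_LAPSE_RATE * emission_height_rise_km

# Purely illustrative: a ~150 m rise in the emission height would mean
# about 1 K of surface warming.
print(surface_warming_k(0.15))
```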

Now if that were all there was to it, estimates of global warming would be about half of what they currently are. So current models of GW have a positive feedback term in them. That positive feedback term is water vapor. Warm air holds more water vapor.

Back in the 1980s, before the GW signal was really clear in the statistics, I thought that the water vapor effect was a loophole. In fact, I thought that increased water vapor would mean increased clouds, and that would result in a negative feedback term, so GW estimates might be high by as much as a factor of four.

Then came Pinatubo.

The eruption of Pinatubo put huge amounts of sulfuric acid droplets into the stratosphere, increasing Earth’s albedo by an easily measured amount. And the standard Global Climate Models just nailed the effect. The water vapor/clouds effect isn’t a long time constant effect. If it were a loophole, it would have shown up after Pinatubo. It didn’t; water vapor is a positive feedback effect, end of story.

Actual scientists (as opposed to political operatives, who are thick on the ground with corporate money from the carbon lobby) have been searching for negative feedback effects to offset GW for many years now. One was the notion that more water vapor meant increased precipitation at higher latitudes, more snow and ice, and that would have an effect on albedo. Nope, glaciers have been in retreat. Others think that added CO2 will cause greater plant growth, and this will somehow do something, but CO2 just keeps going up.

In fact, only about half of emitted CO2 stays in the atmosphere; most think the rest goes into the ocean. Heaven help us if that negative feedback loop goes positive.

And there have been some negative loops that have flipped. Sometime in the 1980s, the Arctic tundra went from being a net sink to a net source of greenhouse gases.

As things continue to heat up, the GW denialists are looking more and more like a faith-based initiative. The science just keeps looking more and more certain, while the denials get more and more strident. It’s already ugly. It’s going to get worse.

Tuesday, February 20, 2007

Why Barry Goldwater Lost Tennessee

Barry Goldwater’s loss in the Presidential election of 1964 was one of the most overwhelming defeats in U.S. history. He won just six states: Alabama, Georgia, Louisiana, Mississippi, South Carolina, and his home state of Arizona, with less than 40% of the popular vote.

Except for Arizona, every state he won was part of the Deep South, hardcore segregationist states, in other words. Goldwater himself was not a segregationist, but he did believe in “States’ Rights” as a matter of constitutional conviction and federalism. The use of this doctrine to support segregation supposedly caused him genuine discomfort, at least according to “Shakespeare,” Goldwater’s nickname for his speechwriter Karl Hess (who later became a radical left libertarian, but that’s another story).

Let’s look at the state by state election returns for Goldwater’s top 19 states:

Mississippi: 87.14%
Alabama: 69.45%
South Carolina: 58.89%
Louisiana: 56.81%
Georgia: 54.12%
Arizona: 50.45%
Idaho: 49.08%
Florida: 48.85%
Nebraska: 47.39%
Virginia: 46.18%
Utah: 45.14%
Kansas: 45.06%
Tennessee: 44.49%
South Dakota: 44.39%
Oklahoma: 44.25%
North Carolina: 43.85%
Indiana: 43.56%
Wyoming: 43.44%
Arkansas: 43.41%

Mississippi is the real standout, of course, and the good people of Mississippi were pretty forthright about the reasons for their votes; it was in furtherance of a segregationist objective (it goes without saying that there were practically no eligible black voters in Mississippi at the time).

I’m not going to speak to states other than Tennessee (and, for reasons that will become apparent, North Carolina) on this list, since I have no personal knowledge of their situation at that time. (I did live for a while in North Carolina, some years before 1964, and it shares some characteristics with Tennessee.)

At first blush, one might think that Tennessee might have been fertile ground for Goldwater. For one thing, unlike many other southern states, it actually had a functioning Republican Party. This owes to the peculiar geopolitical configuration of Tennessee. Tennessee, like Gaul, is divided into three parts. I grew up in Middle Tennessee, sometimes called the “Nashville Basin,” which is bounded by the “Highland Rim.” Middle Tennessee also sometimes considers its boundaries to be the Tennessee River, though strictly speaking, that extends Middle Tennessee well into the Appalachians to the east, but Nashvillians are nothing if not expansive.

But East Tennessee is Appalachian, and has been a Republican redoubt since the Civil War. Indeed, the separate politics of the Appalachians gave rise to the State of West Virginia in 1863, for similar reasons: the mountain people wanted no part of slavery and, in fact, resented the competition of cheap slave labor.

West Tennessee, by contrast, is Deep South. It was cotton country, and its “cultural capital” is Memphis, which serves the same function for Mississippi. When a Mississippian speaks of going to “the city,” he is more likely to be speaking of Memphis than of any city in Mississippi itself.

So West Tennessee actually voted in significant numbers for Goldwater, giving him 9 out of 21 counties. Likewise, East Tennessee stayed significantly Republican, with only Chattanooga and 5 counties including Knoxville edging into Johnson territory. It may be noted that Tennessee cities at the time generally followed the machine political model of northern cities like Chicago. Blacks were not disenfranchised in Tennessee urban areas; they formed an important part of the Democratic political base. This was also true of Memphis, which voted for Johnson.

So prior analysis would have suggested a squeaker, maybe like Florida, or even closer. Yet Goldwater lost Tennessee and Kentucky big time. Why?

Three letters: TVA.

Barry Goldwater had gone on record as being against the Tennessee Valley Authority, which he considered socialist. Actually, if socialist means “government ownership of the means of production,” it surely was socialist, although that didn’t stop Goldwater from supporting the Central Arizona Project, another big federal water project in his own state. In any case, Goldwater had said that he’d sell TVA “for a dollar” if he had to. Editorial cartoonists in Tennessee had a field day.

Goldwater didn’t even bother to campaign in Tennessee; his running mate, William Miller, came and did a lackluster job of it. Johnson came to Nashville, though, and stated that he would never sell TVA roughly 1,264 times by my count, though I admit I may have dozed off somewhere near number 500.

North Carolina also had plenty of cheap power coming from federal projects, and that may be part of why they went against Goldwater. But North Carolina and Virginia are both Tobacco States, and tobacco farmers knew from whom all blessings flowed: the Federal Government controlled the tobacco industry to the benefit of the farmers in those days, much less so now.

So let’s emphasize what happened to Goldwater in Tennessee. The Conservative candidate was defeated in a southern state because he was against socialism and they were for it. They had their goodies and they wanted to keep them. That’s “conservative” sure enough, but it’s not “Conservative.” It’s only very recently that “privatization” of TVA has become a real topic for discussion, under the rationale of “it will reduce costs.” That it will, I tell my political friends in Tennessee; it just won’t reduce prices. Layoffs will happen; costs will be cut; then prices will be raised. And the gap between the two will go to the shareholders, because that is how it works. That’s how it’s _supposed_ to work. People understood that in 1964; it’s only 40 years of propaganda and forgetting that gets them to think otherwise.

The 1964 novelty of voting something other than a straight Democratic ticket had its effects, as did the passage of the Civil Rights Act and the backlash from it, especially the Public Accommodations portion, which in Tennessee at least, resulted in a silly period when waiters at some restaurants would pretend that you had a reservation, if you were white, and regrettably inform you that only those with reservations would be served, if you weren’t.

In any case, the 1966 election for Senate produced the first Republican Senator since Reconstruction: Howard Baker Jr. The 1970 election produced the next, Bill Brock, who managed to take out Al Gore Sr. in a campaign that emphasized Gore’s connections to the Kennedy family. Brock had been a congressman, and, to the best I’ve been able to ascertain, voted against both the Civil Rights Act of 1964 and the Voting Rights Act of 1965. He was a real conservative, but lost in 1976 to the Carter coattails or the Ford backlash, take your pick.

The Republicanization of the South was retarded first and foremost by the stranglehold that the Democratic Party had on the statehouses, giving incumbent Democrats the power of gerrymandering districts and outright voter fraud. The Voting Rights Act of 1965 also extended the period of Democratic hegemony by replacing the rapidly defecting whites with newly enfranchised blacks. In 1968, the “Goldwater South” went for Wallace, who hoped to have some leverage as a result, but Nixon won outright, so no deals were cut, as such. Nixon did take Tennessee and North Carolina, however.

The question of race in Presidential politics is muddled during the 1970s, owing to the lopsided victory of Nixon in 1972, and the southern candidate Carter in 1976. Reagan, however, kicked off his Presidential campaign in Philadelphia, Mississippi, a move that drew raised eyebrows in many quarters. Reagan barely won Tennessee in 1980, with a less than 0.3% margin.

But the march of Republicans through the South continued during the entire period from 1964 to the 1990s. Every election brought another “first Republican Senator/congressman since Reconstruction” story, as well as ongoing “flips” where a former Southern Democrat decided that the Republican Party was more to his liking. Was it because Republicans were more “conservative?” Sure, provided you remember that “conservative” can include “socialist” if it is perceived as an advantage to the voter.

Was there racism involved? Well, I have your reservation right here, Mr. White. We’ll be happy to seat you now.

Thursday, February 15, 2007

Nancy Reagan's Astrologer

If you do a little research, you'll find that the most dangerous profession in the United States is "timber cutter," aka "lumberjack." (Short pause for Monty Python nostalgia.) Look at the stats a little more closely, and you see that, while timber cutters, at 118 deaths per 100,000 per year, die more often than fishermen generally (71 d/100K/y), Alaskan crab fishers are at greater risk: a shocking 400/100K/y.

Even so, Alaskan fishing is not the most dangerous job in the U.S. Being President is much more dangerous.

We've had 1 assassinated President since Eisenhower (the first President I remember), and 4 since the office was created. Both of those stats work out to be 1 death every 54 years, about 1850/100K/y, more than 4 times as dangerous as fishing for crab in Alaska. And it doesn't stop there. Of the 10 presidents in my lifetime, 8 had people sent to jail (or worse) for plotting/trying to kill them (the lucky 2 were Eisenhower and Johnson). One (Reagan) was seriously injured and could easily have died. One (Ford) had two separate incidents where members of the Manson Family (oh good grief) tried to kill him; in one case the shots went wild, and in the other, the dingbat apparently didn't know how to take the safety off. When I say "dingbat," incidentally, I am not implying that I wish she'd been more competent. I liked Ford, and I'm glad he lived a nice long life.

Still, looking at the stats over the past half century, if you become President you've got a near certainty that people are plotting to kill you, a 30% chance you'll be fired upon, a 20% chance of being hit, and a 10% chance of being killed. During the Lewinsky nonsense, a lot of people got very incensed about Clinton's "recklessness." Man, if there was ever a job that selected for reckless, it's being President, and hanky panky with an intern doesn't even make the charts, in my view.
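The occupational-risk arithmetic behind those figures is simple division; a sketch using the numbers quoted in this post:

```python
def deaths_per_100k_per_year(deaths, person_years):
    return deaths / person_years * 100000.0

# Exactly one person holds the presidency at a time, so person-years
# of exposure equal calendar years on the job.
presidency_all_time = deaths_per_100k_per_year(4, 218)  # 4 assassinations, 1789-2007
presidency_recent = deaths_per_100k_per_year(1, 54)     # JFK, over 1953-2007

ALASKAN_CRAB_FISHERS = 400.0  # deaths per 100K per year, as quoted above

print(round(presidency_all_time))  # -> 1835
print(round(presidency_recent))    # -> 1852
print(presidency_recent > 4 * ALASKAN_CRAB_FISHERS)  # -> True
```

Either way you slice the exposure period, the presidency works out to more than four times the death rate of the most dangerous civilian job going.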

Of course, Presidents get plenty to offset the risks, power, fame, money, all the alpha male perks. But what about their wives? Did those two attempts on her husband's life help put Betty Ford into the rehab business? She's the obvious one, but I daresay that other First Ladies have had to battle the demons without the opiate of power to help balance the scales.

Now maybe my memory is faulty, but I don't remember much reporting about Nancy Reagan's astrology kink before the incident that almost killed Ronnie. I do remember hearing the shaking heads complaining about how she was jerking his schedule around, based on her astrologer's advice; I'm pretty sure that was after Hinckley. Ah, silly superstitious bitch, how dare she inconvenience people, make the President's schedule unpredictable. How dare she, in fact, do the very thing that security experts say is one of the best things to do if there are death threats against "the target." Which is to say that, whoever was complaining about Nancy's interference, I'll bet it wasn't the Secret Service.

Now realize, I've never liked Nancy Reagan and I still don't. There's nothing special about my opinion in that respect; a lot of people don't like her, and I'm one of them. And I don't like astrology, either. Pseudoscientific crap is the short version. But, given the circumstances, I'd have to say that NR's response to the situation was more productive than alcohol or prayer. She might even have known that at the time.

What Moral Relativism Means To Me

The straw man version of moral relativism is “just do whatever you feel like doing.” But if straw men were the real deal, then we wouldn’t be bothering with such obvious losers as Darwin and Einstein. What I’m about to describe is one version of moral relativism (mine, which is as it should be, given the nature of the subject), and some notions about implications.

I’ll start with the concept of “personal good.” I would hope that this is not controversial (dream on), since it is easily verified and exists in the language in such phrases as cui bono (“who benefits”), and “whose ox is being gored.” Simply put, one’s notion of what is good or bad is often purely individual. If I am hungry, a can of peanuts is good, because I can eat them and satisfy my hunger. However, for someone with a peanut allergy, peanuts are bad, because if they eat peanuts they could die.

As an aside, let me note that poisons are often called “vile” or “evil,” (similar words with possibly differing roots). This tends toward personification, ascribing human attributes to inanimate matter. The personification may be showing something important. While Hurricane Katrina killed more people than the Oklahoma City bombing, one rarely hears the hurricane itself called evil, not like (for example) illegal drugs are called evil. I’ll suggest that there is a tendency to ascribe “evilness” to things that deceive. A poison may lurk in a tasty food, so it is evil. A drug may convey pleasure, but corrode the spirit, also evil. And so forth. In any case, I’ll argue that the concept of “evil” carries with it an assumed human content, which may be important, given that we are talking about the difference between subjective and objective.

Good and bad are, therefore, at least at one level of discourse, subjective phenomena. Their attribution depends upon a point of view. However, human beings tend to try to objectify their subjective experiences, and the impulse is reasonable, assuming that there is an objective reality, and I’m not here to dispute that. The question here is, “Can one objectify the concepts of good and bad to the point where they become absolute?” This constitutes no problem for religion; morality then becomes a matter of divine judgment as determined by revelation and adherence is a matter of faith. However, this is an assertion rather than an argument and I’ll not deal with it here.

The strongest argument against the idea of purely objective, logically derived, absolute morality is that a number of Very Smart People have tried to devise such a system and have failed. Kant’s Categorical Imperative, for example, leads to the result that one must always be truthful, and so must tell Nazis where the Jews are hidden if they ask. John Rawls’ “A Theory of Justice” contains an interesting thought experiment (What society would you design if you knew you would be a member of it, but could not choose which role you would have?), but it is vulnerable to the inclusion/exclusion problem. Are we only talking about human beings? How about domestic animals? Wild animals? Trees? And if it is only human roles, what do you do about someone who denies the humanity of slaves, the vegetative, the French, fetuses, or terrorists? In any case, any attempt to design a society ab initio is suspect.

(The argument against the Categorical Imperative used above is common in discussions of moral philosophy. It’s often mistaken for “reductio ad absurdum” but the result is more horrific (turning people over to be murdered) than absurd. I’d call it “reductio ad nauseam” but “ad nauseam” is usually taken to mean “massive repetition” and that does not apply, except insofar as this is yet another use of Nazis in an argument, which is seldom a good sign).

The Nazi example is often used as an argument against moral relativism generally, or cultural relativism specifically, but it’s not a fair cop. National Socialism failed on its own terms: it destroyed the German Volk, got its leaders killed, and never achieved anything like its goal of a purified “master race.” That the last goal was impossible should be an indication that “relativism” has real limits, as opposed to the straw man version.

The fundamental principle of morality is that actions have consequences, and some of those consequences are better than others. Moreover, a single action will have multiple consequences, again, with varying value. Even from a solitary point of view that admits no others (the alone-on-a-desert-island example), consequences will vary over time, such that one would need to organize one’s behavior to maximize the good results – over time – and minimize the bad. It does not take much analysis to recognize that, to our hypothetical desert islander, using the morphine in the emergency kit to get high is more likely to have bad consequences than good. Having someone curse someone else’s lack of proper behavior is a good indicator that something has gone wrong in the morals/ethics department, even when the cursee and the curser are the same person with an intervening gap of time.

A philosophy of moral relativism, with its emphasis on point of view, is hardly a prescription for moral laxity or license. In actual fact, it recognizes the difficulties of human action more completely than do absolutist doctrines. Moral absolutism carries within it powerful temptations such as sanctimony and grievance. If some injury is done to me by someone, I consider their actions to be “bad,” which they certainly are, from my point of view. If morals are absolute, then the actions that led to that injury must also be bad, and the people who did them are also bad. If morals are absolute, if they are objective, then it is not necessary for me to consider the point of view of someone who has harmed me. That, after all, is merely their “relative” judgment, while I am backed by an absolute judgment. Add only the doctrine that it is okay to do bad things to bad people (I’m trying to think of some philosophy that does not allow crimes to be punished and I’m not coming up with much), and you have a prescription for war.

In short, absent a deity that speaks in a clear and unambiguous voice, a doctrine of absolute morality appears to be an invitation to projection: making one’s own judgments (and prejudices) the center of the universe, the law which all must obey. Moral relativism, at the very least, carries within it the idea that the moral universe has more than one center. I consider that to be an improvement.

Why I Like Paris Hilton

Paris Hilton pays Social Security taxes.

I find that remarkable, as indicated by the fact that I am remarking upon it. Although she has become the poster child for Spoiled Rich Bitch, Paris Hilton is, in fact, actually one of the few working, productive, members of her social class. Never having met her, I can't say whether or not her bimbo air-head slut act is real or just a public persona, but I don't really care all that much. What I do know is that she is paid for actual labor, and must pay Social Security and income taxes on her wages. My mother currently collects Social Security benefits, so in a very real way, Paris Hilton is paying money to my mother. I like that. Sending money to my mother is a good thing.

Yet, when attacking the moratorium on estate taxes (and the drive to make it permanent, so as to secure the creation of an hereditary moneyed aristocracy), various pundits refer to it as the "Paris Hilton Benefit Tax Cut." Yet there she is, the hardest working model/actress/reality TV star since Tyra Banks. Why not go after the more egregious offenders, the ones who really do nothing but party and cash trust fund checks? Or worse, the ones who use their inherited wealth to fund think tanks devoted to explaining why inherited wealth is a well-deserved reward for being so innately superior to the common man?

But the egregious examples are the "Top, Out-of-Sight Class," to use Paul Fussell's phrase for it. Most members of that class recognize that, if people ever really got wind of who they are and how they live, the good times might cease, either through private enterprise (i.e. individual criminal behavior, like kidnapping), or public policy (oh, I don't know, maybe higher estate taxes?).

God forbid we should have a "death tax," though no one has ever explained to me why it's better to tax a living worker than a dead rich guy. It's not like the dead guy needs the money any more. Maybe it's just that labor is so intrinsically inferior to accident of birth.

Yes, of course, the taxes are actually on the heirs. (Then why call it a "death tax"? For propaganda purposes, that's why.) So why not simply put a tax on heirs? Give it a sizeable deduction: you can inherit an untaxed $20 million during your lifetime, but after that, it's 75% to taxes. You have a billion dollars? Either find 50 people to inherit it, or pay a lot of taxes. You want to inherit a billion dollars? Then you'd better inherit from someone with $4 billion. And that's not $20 million per inheritance, that's $20 million total, per person. You can't collect $20 million from each of 50 people to assemble your billion; the high rate kicks in once the heir's lifetime total hits $20 million, no matter how many sources.
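To make the proposal concrete, here's a sketch of that hypothetical heir tax; the $20 million lifetime exemption and the 75% marginal rate are this post's invented numbers, not any actual tax law:

```python
EXEMPTION = 20_000_000.0  # lifetime untaxed inheritance per heir (hypothetical)
RATE = 0.75               # marginal rate above the exemption (hypothetical)

def tax_due(lifetime_inherited):
    """Tax on an heir's cumulative lifetime inheritances, from all sources."""
    return max(0.0, lifetime_inherited - EXEMPTION) * RATE

def after_tax(lifetime_inherited):
    return lifetime_inherited - tax_due(lifetime_inherited)

def gross_needed(net_target):
    """How much must be left to a single heir for them to net a given amount."""
    if net_target <= EXEMPTION:
        return net_target
    # Invert: net = EXEMPTION + (1 - RATE) * (gross - EXEMPTION)
    return EXEMPTION + (net_target - EXEMPTION) / (1.0 - RATE)

print(after_tax(1_000_000_000))    # -> 265000000.0: a $1B bequest nets $265M
print(gross_needed(1_000_000_000)) # -> 3940000000.0: netting $1B takes ~$3.94B
```

That last figure is where "inherit from someone with $4 billion" comes from.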

Now $20 million is probably a bit less than Paris Hilton stands to inherit, but I'm not worried. She looks like she's having fun as it is. And if she somehow screws it all up and winds up penniless in her old age, she'll always have Social Security. That is, unless someone manages to kill it under the guise of "reform."

Body Count Mentality

When I was a freshman at RPI (back when a slide rule meant something, you whippersnappers!), I joined up with the Rensselaer Engineer, the school's engineering magazine. I later became its editor, but that's another story.

What I did that first year was write articles, a lot of them, sometimes several per issue, under pseudonyms so it would look like we had more people working for us than we did. One of those articles was titled "Something Called CBW," after a chapter title in a James Bond book, as I recall. "CBW" is "Chemical and Biological Weapons."

Now they are called "Weapons of Mass Destruction," or "WMD," which is just wrong, on so many levels. There are several places on the Web that have fairly good discussions of why this is the case. I'll point you to this one, by retired Army Master Gunner, Red Thomas.

I will note that Thomas certainly isn't a chemist, and when he says "amyl nitride" (there is no such thing) he means amyl nitrate or amyl nitrite, both of which have very similar actions on the body. Both are vasodilators, as are most organic nitrates/nitrites, and smooth muscle relaxants generally. (Remind me to say something about the paper "Sudden Death in Explosives Workers" sometime. It does not refer to the obvious dangers of explosives; it refers to what happens when someone with hidden heart disease goes off vasodilators on weekends, when they’re not working around nitrated explosives.)

Now let me add a few things to what Master Gunner Thomas has to say.

People get pretty wiggy about biological agents, when actually these are the least of our worries: least likely to be used, least likely to be effective, and most likely to backfire against terrorists. Why the last? Because the real “scare scenario,” that of triggering a pandemic, is the last thing that any terrorist organization wants. It removes their prime defense, which is non-locality. They are dispersed and diffuse, so they are difficult to find and kill. But a global pandemic doesn’t care about that at all; it will just kill x number in a population randomly. Actually, if you are living in a cave, it’s worse than random, because poor food, sanitation, and medical supplies make you more vulnerable. (Incidentally, that also means that, if a country were actually worried about biological warfare, then a top priority should be a well-run and well-funded public health service.)

Notice that most “weaponized” biological agents are actually designed to reduce the likelihood of a spreading epidemic. Why is anthrax so well-suited for biowarfare (realize that anthrax is the only bioagent that has been used so far as a terror weapon)? Because anthrax is hard to transmit, but you can prepare a lot of spores to infect people. Botulinus toxin is even more obvious: it’s just a chemical agent with a biological source, like ricin.

Once upon a time, an air pollution regulator accused me (fairly politely, but still…) of having a “body count mentality.” What he meant was that I trusted mortality statistics more than projections or estimates of morbidity: real deaths, as opposed to harder-to-quantify sickness or “projected” deaths (based on toxicological models, animal models, etc.). Probably a fair cop, but let’s see what the old body count mentality gives us for CBW:

Bioterror deaths:

U.S. mail anthrax cases of late 2001: 5
USSR accidental anthrax release 1979: 66 (possibly more, but no more than a factor of two)

That’s pretty much it. And for the USSR case definitely and the US case probably, both were the result of state-run biological warfare programs, not terrorism per se.

How about chemical weapons? There the body count is a bit higher for terrorists, mainly because of the Japanese subway attack by the Aum Shinrikyo group using the nerve agent sarin. That killed 12 people and injured thousands.

Then there is the charge that Saddam Hussein used gas warfare against the Kurds, and/or against Iran in the Iran/Iraq war. I’ve followed these reports and it is my firm conclusion that I have no idea if any of it is true. There was also a report that an Improvised Explosive Device (IED) used in a roadside bomb in Iraq in 2004 also contained sarin, but I haven’t even been able to find the name of the lab that supposedly did the confirmation testing. So what we have is a press release, at a time when there have been news stories of deliberate disinformation campaigns sponsored by the military. Feh. What remains unquestionable, however, is that there was plenty of opportunity for Iraq to use the stuff against U.S. soldiers in two wars, and Iraq never did so. If they had it, why didn’t they use it?

The obvious answer: gas warfare is not very effective.

On the other hand, look at this table:

I’ve lumped a number of terrorist attacks together with airplane disasters, because an aircraft is probably one of the easier targets, even if you’re not trying to fly it into a building. The “Iraq SCUD” incident was the biggest single casualty hit during Desert Storm. The 1983 Beirut bombing was the greatest US casualty hit prior to 9/11 (and actually the greatest number of US soldiers killed in a single day since Vietnam). But notice that our own homegrown terror warrior Tim McVeigh managed quite a respectable kill ratio with just an ANFO truck bomb. Put another way, if each of the 19 9/11 hijackers had had the same success as McVeigh managed in Oklahoma City, they’d have killed more people than they actually did.
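That last comparison is easy to check. A quick sanity sketch; the casualty figures here are my own additions, using the commonly cited tallies rather than anything from the table above:

```python
# Commonly cited death tolls (my figures, supplied for the arithmetic check):
okc_deaths = 168       # Oklahoma City bombing, April 1995
sept11_deaths = 2977   # 9/11 victims, excluding the 19 hijackers
hijackers = 19

hypothetical = hijackers * okc_deaths
print(hypothetical)                     # 3192
print(hypothetical > sept11_deaths)     # True
```

So 19 McVeigh-scale truck bombings would indeed have out-killed the actual hijackings, by a couple of hundred.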

In short, going on the record rather than “scenarios” and other fictions, ammonium nitrate and fuel oil is a better WMD than chemical or biological weapons.

Wednesday, February 14, 2007

Comic Book Mutants

I've been thinking quite a bit about evolution recently, as part of a complicated process of intellectualization, to try to keep at arm's length the fairly strong emotions I tend to have about evangelical and fundamentalist Christians. I might mention that the city where I was born and reared is a center of religious publishing, among other things.

My musings (loosely defined) are, in part, a response to a friend's response to the Intelligent Design froufrou. Basically, he and I have been talking about how science fiction has treated evolution. As a result, I've been “reviewing the literature,” as science guys say. The biggest part of reviewing what people have written (and others have read) about this subject is what it says about the subterranean views of those readers/writers. Narrative as an X-ray into the psyche, as it were.

Overall, that's a vastly more complex subject than I have the time or patience for at the moment, to say nothing of my attention span. However, there's a bit of a tangent that I hope isn't too long for this space or my attention span: comic book mutants.

First, an odd little point: mutants in science fiction are usually part of a gaggle having similar characteristics, or they are singular, the only mutant in town, so to speak. But comic books tend to have a lot of mutants, and each one has a different power. Hmm. Does this mean anything?

Well, let's start with one factor: comic books lean heavily toward "superheroes" who serve as wish-fulfillment identifiers for adolescent boys (and those of us in touch with our inner teenager). But to notice that superheroes tend to have unique abilities only begs the question. Why are they unique?

I'll argue that this is a reflection of the subjective world of adolescence. I remember a Garrison Keillor riff I heard once that went something like: Every adolescent is convinced that he or she is unique, that the world has never seen one so splendid, that the sun has never before shone on such a person. And each and every one of them is right!

In short, one way we are all alike is that we are all unique. (Ah, I love the smell of paradox in the morning; it smells like simile).

Add to that the idea that the outsider is cast in that role because of his/her abilities and virtues (rather than shortcomings or vices), and you have the perfect narrative for the adolescent sensibility. There, there, child, they're mean to you because they're jealous. So you get a message that emanates from material as diverse as the Uncanny X-Men and Atlas Shrugged.

Let's also note that the urge to socialize (and be socialized) is also apparent in all of these narratives, from Galt's Gulch to Professor Xavier's School for Gifted Youngsters. It's also there in the old SF stories about "The Next Step in Evolution," of course, where all the mutants are similar, members of a new race. But (and boy, don't I find myself taking weird comfort from some of these lines of thought), nowadays, we get a more diverse cast of characters, with accordingly diverse abilities. True, some of it is "characterization by funny hat" (or superpower), but one mustn't expect more than the narrative is set up to supply. Besides, and God bless, people often get a little skittish when something looks a little too much like racism, and skittish readers are easily lost. Of course, that may just be me being optimistic.

On Torture

I've got a novel that I've been shopping around, a bit of "space noir" about a protagonist with a Dark Past. You surmise fairly early on that, at some point and among other things, he tortured people. (Let me note that I wrote the story before the Abu Ghraib stuff happened, so I wasn't being topical or anything).

For that matter, what I'm going to say here isn't really topical; it's just some background rumination.

People who have looked at the problem carefully know that torture is not a very good way to obtain information, because basically someone who is being tortured is trying to get the torture to stop, period, so they'll say what they think you want to hear. Moreover, (again, according to people who are expert in interrogation) there are better ways of getting information from those who might have it and might be disposed to tell it, under the right circumstances.

Nevertheless, torture does "work," given the right notion of what "work" means. It will obtain confessions -- if you don't care whether the confession is true or not. It will deter. It was noted that during Desert Storm, Iraqi soldiers preferred to surrender to U.S. troops rather than to other members of the coalition; I doubt that this is true anymore (even if there were still a coalition), so that's one aspect of "deterrent." Torture does punish; it just happens to be part of that "cruel and unusual" thing that we sometimes hear about.

But, as we are reminded fairly often, a good many of the people who have come to be "subjected to unusual methods of interrogation" by agents of the U. S. have not been convicted of any crime, and there is reason to believe that many of them were simply in the wrong place at the wrong time, rather than being part of some nefarious scheme to produce 9/11, the Sequel.

Some have suggested that this is the result of racism or something like it, that simply because the prisoners are Arabs (though many of them are actually non-Arab Muslims), they are being scapegoated. Or that it's actually religious bigotry against Muslims. That's consistent with the facts on the ground, anyway, and I'll stipulate that this is probably among the reasons why some people support the practice.

But let me suggest another possibility here. Torture isn't just about punishing those who are being tortured. It isn't even about punishing Muslims generally.

In my view, Guantanamo and Abu Ghraib are part of the ongoing culture war in the U.S. Partly it is about the culture of toughness, one group of people showing how much they are not "bleeding heart liberals." Partly, it's to intimidate enemies. But that doesn't go far enough either. The mere fact that the U.S. sponsors torture causes pain to one side of the culture war (you know who you are). And The Other Side likes that it causes you pain. It's meant to hurt you, plain and simple. They hate you that much.

Monday, February 5, 2007

Magic Market Fairy Dust

As nearly as I can tell, the basic position of the Conservative Movement about racism in America is derived from two propositions: 1) there isn’t any racism in America anymore (except the liberal racism that is exemplified by affirmative action), and 2) what racial discrimination exists today is a rational response to market forces.

The Conservative Movement’s assault on affirmative action is pretty obvious, and is predicated on proposition 1, that there is no racist discrimination against African Americans anymore. That allows the framing of the debate about affirmative action in the same terms as the debate about “reparations” for slavery. Do past injustices require present remedies? That is a classic example of begging the question, in this case, the question of current racial discrimination. Given the fact of current racist discrimination against minorities, affirmative action would be more sensibly viewed as a not very good attempt to compensate for that discrimination in some way.

But the market argument is more pernicious, since it relies on what I’ve come to call “magic market fairy dust.” This is the generally unexamined assumption that whatever happens when “the market decides” is good and true and beautiful. Conversely, any attempt to “interfere” with the market must produce results that are bad, false, and ugly.

One notable dispenser of magic market fairy dust is Dinesh D’Souza, a fellow at the Hoover Institution whose books include Illiberal Education, about how intolerant American universities are of conservatives, and The End of Racism, about how there isn’t any racism in America anymore, except among blacks and liberals.

D’Souza’s work on racism is almost entirely a matter of seeking mechanisms for individual racist attitudes, then claiming that, because there is a mechanism involved, it doesn’t represent racism. For example, he notes that in the rebellion against slavery, the “bad nigger” became a symbol of defiance and a hero to his people (see, for example, the legend of Stagolee). This rebellious attitude then became part of the norm, to the detriment of black integration into (now without racism, remember) white society.

Of course, rebellion against authority is a general attribute of American culture, and suburban white kids are big fans of Ice-T and 50 Cent. But the suburban white kid can simply put on a suit and tie and speak normally at the job interview. No matter what the black kid does, he’s still going to be black.

D’Souza’s argument becomes even more stark when he writes about cab drivers failing to pick up blacks, a “rational” judgment, he suggests, given the higher crime rates among blacks.

Here is the magic market fairy dust in its starkest form. Let’s conduct a thought experiment. Suppose someone were to take a dislike to some author/speaker, Dinesh D’Souza, for example. Suppose that someone were a handy sort of fellow, maybe like Theodore Kaczynski, the Unabomber, and this fellow set about planting bombs in places like bookstores that carried D’Souza’s books, auditoriums where D’Souza would speak, maybe even the Hoover Institution.

Would D’Souza still get $10,000 per speaking engagement, and would he get as many as he gets now? Let’s assume that he would not, that bookstores and speakers’ bureaus would make the “rational” judgment that it was not worth the risk, so some bookstores wouldn’t stock the book, and some speaking engagements would simply go away.

Would this be okay with D’Souza? Would he consider it fine because the market was the one making those decisions? Or would he, just possibly, blame the guy who was setting the bombs?

The market takes all available information and converts it into prices. If some group of people undervalues black employees, then that will be reflected in the market price for blacks. “Rational” or not, it is unjust, and to pretend otherwise is also unjust.

I believe that economics is a science, and that “the market” is a phenomenon that is described by economics. In that sense it is like gravity. One doesn’t look at a crashed aircraft and say, “That’s gravity in action.” One doesn’t step off a balcony and say, “Let gravity decide.” So why should one look at poverty and say, “The market in action”? Why should one point to our horribly inefficient and baroque health care system and say, “Let the market decide”? Why would one look at someone lumping people into a group by skin color and say, “That’s rational”?

At its core, the Conservative Movement is very clear about what it wants: more money for corporations and rich people, criminal penalties for certain drugs and sexual behavior, control of information and the public discourse. Movement conservatives seem to have no problem gimmicking markets whenever it is necessary to advance those goals. But whenever there is some attempt to do something that is inconsistent with the Movement agenda, the market becomes inviolate, and the “problem,” like racism, poverty, or health care, is made to vanish by sprinkling it with magic market fairy dust, which is pretty potent stuff, provided you want to trudge along the same old path and call it flying.

Ayn Randed, nearly Branded

“I’ve been Ayn Randed, nearly branded, Communist, ’cause I’m left handed, but that’s the hand I use, well, never mind.” – Paul Simon, “A Simple Desultory Philippic.”


When I was 14, I read The Fountainhead. Soon after that, I read Atlas Shrugged. Then We, the Living. Then Anthem. I even read The Night of January 16th. I read For the New Intellectual. I subscribed to The Objectivist. I read The Virtue of Selfishness. I read Capitalism: the Unknown Ideal. Ayn Rand touted Hayek, so I read Hayek. And von Mises, so I read von Mises. Milton Friedman, Henry Hazlitt, The Feminine Mystique, all Randian recommendations. I bought a recording of a Tchaikovsky piano concerto that she liked. I eventually even found a copy of Calumet K.

Sounds like the recipe for a Randroid, doesn’t it?

Okay, how about this: Ayn Rand didn’t like Kant, so I read Critique of Pure Reason. She hated Keynes; I read the General Theory. She called Nietzsche a “whim worshiper,” so I read Nietzsche. She despised John Kenneth Galbraith, Thorstein Veblen, Karl Marx, so I read them all. She found existentialism abhorrent, so I sought out Sartre, Kierkegaard, Husserl, Heidegger, Buber. She disdained the logical positivists, so I read about positivism, and eventually even formed an opinion about what Wittgenstein was going on about. I also realized fairly early that what Rand was calling “Objectivist epistemology” was basically a variant of positivism, which was fairly revealing.

Maybe not so Randroidal as all that, eh?

I had a few friends in Nashville who were interested in some of the things I was interested in, even a couple who were interested in my philosophical ramblings. But I didn’t encounter many followers of Ayn Rand in Nashville, nor even at RPI. Those I did run into were, well, embarrassing, and I was curious as to why.

It was obvious pretty early on that there were some pretty gaping flaws in the Objectivist party line. For one thing, while this business of never “initiating” force, i.e. only fighting in self-defense sounds good, it pretty much ignores the problem of what we now call “collateral damage.” If collateral damage is forbidden, then you’re basically a pacifist. If it is ignored, then you have a loophole you can drive an atrocity through. Either way, you’re dealing with something that has been conveniently swept under the rug.

Then there was the jargon. Here I’m going to express a bit of pride in my teenaged self. It was apparent to me early on that if an idea was any good, you should be able to express it in ways that did not depend on a jargonized vocabulary.

Sure, there are specialized terms that, when unpacked, become very long explanations, but that wasn’t really what was going on with the Randites. “Looter” and “Witch Doctor” aren’t specialized technical terms; they are pejoratives. Using them immediately labels you as a card-carrying Randite, and also impedes, rather than advances, discourse. It does, however, give you a ticket into an in-group. Which, of course, every Randite would claim is the furthest thing from their mind.

In 1973, I suggested to the student head of the RPI Union Programs and Activities that it would be very interesting to have Karl Hess come to talk. As a bit of a reward, I got to pick him up at the airport, and we had a very nice talk on the way back to RPI, and his speech was also pretty fascinating. But his main purpose there was making converts, and he wasn’t particularly interested in me in that regard. He was interested in the Randites. Hess was Barry Goldwater’s speechwriter in 1964, so the analogy that comes to mind is that Hess was “hunting where the ducks are.” He was on a mission to convert Randites to his own form of left libertarianism. He did a pretty good job of it, too.

I’ve made a stab or two, over the years, at “deprogramming” Randites, and it’s a hard row to hoe. Nevertheless, I think that there’s something important going on in that part of the pumpkin patch, and too many people are simply dismissive of the whole thing. Hess’s stated opinion was that adolescence is a lonely time, and Rand held out the hope of being “heroically lonely.” Rand also elevated the mother’s argument that “They’re mean to you because they’re jealous,” to an art form, and that’s another peg on the board. In any case, Atlas Shrugged has been in print for just short of fifty years.

The only way that it isn’t science fiction is that it didn’t first appear as a serial in one of the pulps. It has a more-or-less perpetual motion machine and a new version of 10 point steel as MacGuffins, not to mention a sonic death ray. The characters talk to each other in long monologues that attempt to explain the nature of their world, and the climax is one of those speeches that is over a hundred pages long.

And, again, it’s been continually in print for fifty years and has sold over five million copies. It’s easily one of the best selling SF novels of all time. Moreover, polls show that an ungodly number of people rate it as the Book That Has Most Influenced Their Lives. Figure out what makes it tick and you’ve found one of the main ki channels of American life. And, no, I haven’t figured out what makes it tick.

I see that there is news about plans to make a movie about it, though. My advice to the director would be to cut way back on the speeches and concentrate on the kinky sex. I may not be 14 any more, but I know what I like.

Friday, February 2, 2007

Running Out of Room at the Bottom

Let's write about something SFish.

We're running out of room at the bottom.

That's a heavy-handed allusion to Richard Feynman's paper, "There's Plenty of Room at the Bottom," which has assumed near-Biblical stature in the field (and cult) of nanotechnology. You can find it on the Web (unlike Hugh Everett's paper on the many worlds interpretation of quantum mechanics, which is another much-cited paper that few people have actually read). You might want to go read the Feynman paper. I'll be here when you get back...

Er, hrm. Anyway....

A present day reader of Feynman's paper might notice a few things, not least being that we got to one ultimate goal quite a while back: the Scanning Tunneling Microscope can be used to observe -- and manipulate -- single atoms. This is seriously cool, even if the first thing the IBM guys who invented the technique did with it was to spell out "IBM."

Short aside: The original "IBM" was spelled out in xenon atoms, which are very unreactive, so they behaved themselves. Some while later, someone tried putting carbon atoms next to oxygen atoms using an STM. They were reasonably sure that the appropriate chemical reaction then took place, but they could never find the resultant carbon monoxide because the reaction produced so much energy that the molecule jumped somewhere beyond the vision of the STM.

But manipulation of things at the "nanoscale" has taken on a lot of baggage. I referred to it as both a field and a cult, and the cult is older than the field. The nanotech cult is the brainchild of Eric Drexler, and was the source of one of the great "magic wands" in science fiction. SF magic wands are notions that can be used to generate practically any result (provided you don't pay too careful attention to the science, and really, how many SF writers do that?).

Another short aside (worthy of an essay all its own): SF magic wands let SF writers do "hard SF" that is chemically indistinguishable from fantasy. Past and present examples include esp/psi, alternate worlds, virtual reality, and the Singularity. Discuss.

Anyway, the tension between the two kinds of nanotechnology was enshrined in a debate (actually an exchange of letters) between Drexler and Richard Smalley that was sponsored by the American Chemical Society, and published in the Dec. 1, 2003 Chemical & Engineering News (C&EN) cover story. The debate was triggered by a critical article Smalley had published in a 2001 issue of Scientific American. I'm a member of the ACS, and I'm not coy about where my sympathies lie, but I'm going to talk about something different from what Drexler and Smalley debated (the debate can be found on the Web if you look hard enough; reviews can be found more easily).

What's the dividing line between nanotechnology and microtechnology? Let me note how interesting the micron (a millionth of a meter) is. A (very) few bacteria are as small as 0.2 microns, but most are about a micron in size. Most eukaryotic cells are larger still (organelles take up room), with the "typical" human cell being maybe 10 microns in size.

An air guy aside: Micron-sized particles are well-suited for scattering light via "Mie scattering," which is different from Rayleigh scattering, also called molecular scattering for the usual reasons. Mie scattering particles are comparable to or larger than a wavelength of visible light, larger than 400-800 nm in other words, so 1 micron is near the Mie limit. Mie scattering is how smoke scatters light, because smoke particles tend to be from 1-10 microns in size. Particles that are much smaller than a micron in fact tend to "coagulate," i.e. cling together when they bump into one another, so sub-micron particles don't last long in the air. Particles larger than 10 microns tend to fall out of the air pretty quickly.
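The Rayleigh/Mie dividing line is usually expressed through the dimensionless size parameter, x = πd/λ: x much less than 1 puts you in the Rayleigh regime, x near or above 1 in the Mie regime. A quick sketch (the particle diameters here are just my illustrative values, at a green-light wavelength of 550 nm):

```python
import math

def size_parameter(diameter_nm: float, wavelength_nm: float = 550.0) -> float:
    """Dimensionless scattering size parameter x = pi * d / lambda."""
    return math.pi * diameter_nm / wavelength_nm

# Rayleigh regime (x << 1): molecules and the smallest aerosols
print(f"1 nm molecule-scale particle: x = {size_parameter(1):.4f}")    # 0.0057
# Mie regime (x ~ 1 or larger): smoke-sized particles
print(f"1 micron smoke particle:      x = {size_parameter(1000):.2f}") # 5.71
print(f"10 micron dust particle:      x = {size_parameter(10000):.1f}")
```

A 1-micron smoke particle sits comfortably in the Mie regime at visible wavelengths, which is the point of the aside above.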

Since the real "magic wand" part of Drexler's sort of nanotechnology is all about remaking biology, let's take 1 micron as the logical barrier to a "nanomachine." Any bigger, and maybe you have to call it a micromachine, right? In any case, it certainly seems difficult to do things in the interiors of cells with something that is bigger than the cell. Maybe you can do it for a few cells, but you're not going to be able to fit a lot of the larger-than-micron machines alongside your cells, there's just not that much room.

I published a story a few years back, "Flower in the Void," that basically had a nanomachine as the protagonist. I'll call it that, because it was the closest thing to a character in the story; there weren't any people in it; that was sort of the point. I was also making a subtle argument (and when I get subtle, you can almost bet that I'm mostly talking to myself). Let me make that argument here, a lot more explicitly.

How big is an atom? That varies, but not by as much as you might think. Here's a table:

[Table: substance, density, molecular weight, and resulting atomic size]

I put in the last two to show that putting atoms in molecules doesn't let you pack them in much tighter. Basically, we're talking about a quarter of a nanometer per atom. If we treat the atoms like stacked bricks (i.e. as if they were cubic) and put them into a cubic micron, we could get 4000*4000*4000 atoms into the cubic micron, or 6.4*10^10 atoms. If we wanted to fill only a sphere with a 1 micron diameter, we'd get only about half that, but we could get another 50% back in if we packed spherical atoms into the space in a tight (face-centered cubic) structure. I don't care, really, since this is an order-of-magnitude thing.
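Here's a back-of-the-envelope version of that estimate: the per-atom cube edge follows from density and molar mass via N_A. The substances and handbook values below are my own illustrations (the original table's rows didn't survive), but any solid or liquid gives roughly the same quarter-nanometer answer:

```python
N_A = 6.022e23  # Avogadro's number, particles per mole

def spacing_nm(molar_mass_g: float, density_g_cm3: float) -> float:
    """Edge length of the cube each atom (or molecule) occupies, in nm."""
    vol_cm3 = molar_mass_g / (density_g_cm3 * N_A)  # volume per particle
    return (vol_cm3 ** (1.0 / 3.0)) * 1e7           # cm -> nm

# Handbook values (my illustrative picks, not the post's missing table):
for name, M, rho in [("iron", 55.85, 7.87), ("aluminum", 26.98, 2.70),
                     ("water", 18.02, 1.00)]:
    print(f"{name:9s} ~{spacing_nm(M, rho):.2f} nm per atom/molecule")

# At ~0.25 nm per atom, a 1-micron cube holds (1000 / 0.25)^3 atoms:
atoms_per_cubic_micron = (1000 / 0.25) ** 3
print(f"atoms per cubic micron: {atoms_per_cubic_micron:.1e}")  # 6.4e+10
```

Iron comes out near 0.23 nm, water near 0.31 nm; a quarter nanometer is a fair round number, and the cubic-micron count lands at 6.4*10^10 either way.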

Now 64 billion is a pretty big number, but let's compare it to some things we deal with every day. Take the computer on your desktop. Does it have 1 gig of RAM and a 100 gig drive? If not, the next one you buy probably will. And those are gigabytes, not bits, 8 bits per byte. So that 100 gigabyte hard drive, the one that sells for somewhere around $100, has pretty close to a thousand billion bits on it. Think we'll ever be able to encode 1 bit per atom? I doubt it; how much of the disk drive is actual recording medium and how much is control, power handling, protective shell -- overhead in other words?

But even with zero overhead, those 800 billion atoms take up about 12.5 cubic microns, a cube more than 2 microns on a side, larger than most bacteria.
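The arithmetic, sketched out, assuming one atom per bit at the quarter-nanometer spacing from above:

```python
# One atom per bit for a 100 GB drive: how big is the resulting solid?
bits = 100e9 * 8                      # 100 GB = 8e11 bits
atoms_per_cubic_micron = 4000 ** 3    # (1000 nm / 0.25 nm)^3 = 6.4e10
volume_um3 = bits / atoms_per_cubic_micron
side_um = volume_um3 ** (1.0 / 3.0)
print(f"volume: {volume_um3:.1f} cubic microns")  # 12.5
print(f"cube side: {side_um:.2f} microns")        # 2.32
```

That 2.3-micron cube is bigger than essentially any bacterium, and that's the zero-overhead, bit-per-atom fantasy case.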

Add in sensors; you can't do anything without I/O. How about a power supply? The biggest, heaviest thing in a PC is the power supply. Cooling system? Just how much computing can you do before the thing gets hot?

All in all, the standard desktop workstation is now probably well beyond the theoretical computing capabilities of a micron-sized molecular machine. That's what I mean by running out of room at the bottom.

You want to network the nanites? Go right ahead. Just remember how few computing problems lend themselves to distributed computing, not to mention the overhead of the network itself.

So, you want a nanomachine to go into your bloodstream and fix some part of your anatomy? How about first teaching your desktop to fix your car? And if you haven't managed that yet, tell me how you expect things to get easier when they are much, much smaller. Making things smaller usually makes them more difficult, and you don't make problems easier by making them harder.