Insanity = Expecting Different Results


“Bureaucracy Does Its Thing: Institutional Constraints on U.S.-GVN Performance in Vietnam.” by R.W. Komer. Rand Corporation, August 1972. https://www.rand.org/content/dam/rand/pubs/reports/2005/R967.pdf

Readers who have served in Iraq or Afghanistan, but particularly Iraq, would be wise to lay in a stock of their favorite adult beverage before diving into Robert Komer’s first-hand study of institutional inertia in the Vietnam War. In his final chapter, Komer makes the accurate and vital point that the past does not provide a foolproof template for the future. That said, the past often provides a far better sense of what will not work than what will. Iraq veterans are likely to respond to Komer’s study with a fair degree of rage.

Known as “Blowtorch Bob” for his direct, unfiltered manner, Robert Komer served in Vietnam from 1967 to 1968 as head of the Civil Operations and Revolutionary Development Support (CORDS) program. Having served on the National Security Council before Vietnam, including a stint as interim National Security Adviser to President Johnson, he was appointed Ambassador to Turkey in 1968. Though his in-country tenure was brief, it covered a critical period in the Vietnam War and followed years of direct involvement in Vietnam policy in Washington. Although CORDS was responsible for the “hearts and minds” campaign, Komer was no soft-hearted do-gooder. CORDS had both military and civilian staff, fell under Military Assistance Command – Vietnam, and was responsible for over 20,000 deaths through the Phoenix program of targeted assassinations.

In 1971, the Advanced Research Projects Agency, through the Rand Corporation, commissioned Komer to write a study of the ways institutional constraints and characteristics affected U.S. and Vietnamese prosecution of the Vietnam War. Komer’s overwhelmingly caustic and pessimistic assessment is even more remarkable for having been written three years before the final collapse of the South Vietnamese government. Nobody reading this study in 1972 should have been the least bit surprised when North Vietnamese tanks rolled into Saigon.

Komer made four observations that should give pause to anyone advocating for direct U.S. involvement in a civil war or insurgency in a distant country with an alien culture.

  1. “[T]o a greater extent than is often realized, we recognized the nature of the operational problems we confronted in Vietnam, and…our policy was designed to overcome them.” (v)
  2. Official U.S. policy directed a counterinsurgency response that never fully materialized on the ground; both U.S. and Vietnamese military leaders employed conventional military tactics regardless of policy guidance.
  3. “[The U.S.] did not use vigorously the leverage over the Vietnamese leaders that our contributions gave us. We became their prisoners rather than they ours; the [Government of Vietnam] used its weakness far more effectively as leverage on us than we used our strength to lever it.” (vi)
  4. The various agencies operating in Vietnam, regardless of the circumstances and guidance, performed their “institutional repertoires” with disastrous results.

Along the way, Komer provides important insights into questions of organization, civil vs. military authority, and other tactical and procedural issues that played a role in the final outcome.

In the years since the end of the Vietnam War, a common picture has emerged of falsified or rose-colored reporting, inappropriate metrics (e.g., body counts), and relentless, unjustified optimism on the part of military leaders in Saigon. Komer strongly challenges that picture. While there are any number of examples of false or misleading reporting, and while the public benchmarks for success were unquestionably inappropriate, Komer makes a compelling forest/trees argument predicated largely on the record in The Pentagon Papers. Sure, battalion and brigade commanders may have inflated their body counts and pencil-whipped reports on “pacification,” but Komer argues that policy makers in Washington were well aware of the deteriorating situation in Vietnam, and that they recognized the causes of that situation. No matter how many fake trees may have been reported, the White House knew the forest was on fire.

Aside from accentuating the positive, official reporting tends to suffer from an obsession with quantification. Consequently, the government privileges indicators that can be quantified without regard to their relevance. In campaigns like Vietnam and Iraq, where there is no clear enemy order of battle, information that can be counted is often meaningless, and the most important information is often subjective, intuitive, and constantly shifting. When public support is the key to success, effective polling is the obvious measure of choice. Unfortunately, effective polling relies on a whole suite of conditions that do not exist in a war zone–accurate census data and security for pollsters first and foremost.

The situation is further aggravated when the metrics change constantly. Following the Vietnam War, the U.S. military reached the obvious conclusion that individual replacement every twelve months shattered unit cohesion, deprived the force of institutional knowledge, and created vulnerabilities. Unfortunately, DoD was unwilling, or politically unable, to come to grips with the implications of those observations. While the military eliminated individual rotations, it simply replaced them with unit rotations, thereby improving the unit cohesion problem at the expense of making the institutional knowledge problem even worse. At least under individual replacement, only about 1/12 of the force rotated in a given month. Under unit replacement, nearly all institutional knowledge would depart a given operational area within a week.

The rotation of units, combined with the careerist pressures of the military, caused each incoming unit in Iraq to select a largely new set of metrics to assess progress. Incoming units established new metrics, determined that their predecessors had left a mess, then spent 12 (or later 15) months working to improve the new metrics, at which point they rotated out and the cycle repeated. Ironically, this cycle of mismeasurement and misinformation did not require unethical or inaccurate reporting by anyone. Incoming units really did find a mess because the situation was generally deteriorating. They not unreasonably determined that their predecessors were both failing to improve the situation and measuring the wrong indicators. In setting new indicators, incoming units were careful to select factors that they could both affect and measure, knowing that failure to accomplish their goals would be career-ending for their commanders. Units were therefore able to improve their metrics while the situation around them worsened.

Komer’s articulation of similar dynamics in Vietnam over three decades earlier raises the obvious question, and accusation: Why did we not learn? Why did we repeat the errors of our fathers?

This brings us to the central premise of Komer’s study and his most valuable insight. Large, bureaucratic organizations do what they are organized, resourced, and trained to do. They are generally very good at dealing with clearly defined, recurring problems. They are not good at adapting to new, poorly defined problems. Even when the characteristics of a new problem become clear and the solution is visible, existing organizations will not adapt unless shaken by a disaster and threatened with destruction–sometimes not even then.

To understand the military’s failure to adapt in Vietnam it is essential to bear in mind both its antecedents and its contemporaries. The generals who led in Vietnam mostly began their careers in World War II, a conflict of firepower, linear tactics, and large-unit engagements. They followed World War II with Korea, where effective conventional tactics eventually worked to achieve a stalemate that favored the U.S. and its South Korean allies. The highly superficial commonalities between Korea and Vietnam–wars against Asian Communists from northern rump states–caused U.S. military leaders to make a category error. Meanwhile, the war in Vietnam was never the main show for the U.S. military. Throughout the conflict, the U.S. military’s primary role remained deterrence of the Soviet Union, necessitating highly conventional force design, training, equipping, and doctrine.

This is the point that Komer drives home again and again. Bureaucracies “perform their institutional repertoires” and the institutional repertoire of the U.S. military, particularly the U.S. Army, was big unit wars against conventional enemies.

Komer’s study focuses primarily on the U.S. bureaucracy, and on the Vietnamese bureaucracy through the lens of its interaction with the American one, and he clearly believes at a very basic level that better performance would have resulted in better outcomes. Nevertheless, he acknowledges the possibility that the problem was unsolvable and articulates the fundamental dilemma of great power counterinsurgency over and over.

The GVN’s performance was even more constrained by its built-in limitations than that of the U.S. In the last analysis, perhaps the most important single reason why the U.S. achieved so little for so long in Vietnam was that it could not sufficiently revamp, or adequately substitute for, a South Vietnamese leadership, administration, and armed forces inadequate to the task. The sheer incapacity of the regimes we backed, which largely frittered away the enormous resources we gave them, may well have been the greatest single constraint on our ability to achieve the aims we set ourselves at acceptable cost. (vi)

Quite simply, if the local government were not a complete mess, it would not need great power intervention in the first place. There is a disparity of motivation between the great power and the insurgent, but there is an equally large disparity between the great power and the supported government. A “victory” that requires the ruling class to surrender its power, wealth, or ideology is not a victory in their eyes, but civil wars and insurgencies rarely gain traction in societies with equitable distribution and representation.

Veterans of Iraq and Afghanistan will see clear shades of Baghdad and Kabul in Komer’s assessment of the GVN. Indeed, the 15th of T.E. Lawrence’s “Twenty-Seven Articles” points to his understanding of both the inherent problem and the difficulty, for effective soldiers, in overcoming it:

Do not try to do too much with your own hands. Better the Arabs do it tolerably than that you do it perfectly. It is their war, and you are to help them, not to win it for them. Actually, also, under the very odd conditions of Arabia, your practical work will not be as good as, perhaps, you think it is. (http://www.pbs.org/lawrenceofarabia/revolt/warfare4.html)

Lawrence was uncharacteristically humble in circumscribing his advice to the culture he knew well, but the same basic dynamic was at work in Indochina. The frustration of watching poor or corrupt performance is simply too much for the average western military professional, and yet poor and corrupt performance will be the standard in any nation requiring outside assistance. It is a paradox understood by experienced counterinsurgents (and parents of teenagers) that providing less assistance can engender better performance. Komer points out that President Johnson’s limitations on troops and bombing following the “Tet shock” forced “the GVN and ARVN at long last to take such measures as manpower mobilization and purging of poor commanders and officials. After Tet 1968, GVN performance improved significantly” (142). Komer’s observation of bureaucratic behavior and limitations combined with the paradoxical realities of counterinsurgency pose a cautionary tale for anyone contemplating intervention.

The U.S. military of 1965 (or 2001) was an enormous organization, run through almost comical levels of bureaucracy by necessity. There is simply no other known method of organizing and operating a worldwide organization requiring millions of people and billions of dollars. Contrary to the mythology of Hollywood and Washington, DC, bureaucracies are not made up of or run by mindless drones, imagining new forms to require so they can be left in peace to enjoy their donuts and cigarette breaks. Bureaucratic leaders tend to be thorough, energetic, optimistic, and ambitious, and they are generally highly reliant on rules and order. They advance by closely following established procedures and avoiding embarrassment. When intervening in a civil war or insurgency, we place such leaders in a chaotic situation in which they have little control over actions or outcomes and are surrounded by people whom they see as incompetent, corrupt, and deceitful. Because their careers depend upon providing measurable results and avoiding embarrassing failures, they have strong incentives to gain control of the immediate tactical situation and to emphasize areas in which they can control the variables. In other words, they are exactly the wrong people to operate in the roundabout, oblique manner that Lawrence recommended.

The problem is aggravated by the mismatch between rhetoric and reality. During the Vietnam War, the U.S. government claimed that maintaining a friendly South Vietnam was a vital U.S. national security interest, but it never acted in a way that supported that claim. In both the Civil War and World War II, the U.S. Army enacted tectonic shifts in its manner of operations, admittedly facing resistance at every turn, because the unprecedented threats provided the impetus to overcome bureaucratic inertia. The Army’s approach to Vietnam (and Iraq and Afghanistan) demonstrated that its leaders did not believe their own dire assessments of the war’s importance. The Army failed to alter assignment policies or organization in ways that would have materially improved its efforts, even when those changes were identified. Moreover, the Army’s inertia may have been a rational response to the situation. The U.S. faced the Soviet Union in a peer competition that threatened the destruction of human civilization, while the 1975 collapse of the South Vietnamese regime with almost no significant consequences for the United States proved that the threat there had been badly overblown. Organizing to fight a conventional, mechanized war against regular units was precisely what the United States should have been doing in the 1960s.

Recent work by Walter C. Ladwig III (“Influencing Clients in Counterinsurgency: U.S. Involvement in El Salvador’s Civil War, 1979–92,” International Security, Volume 41, Number 1, Summer 2016, pp. 99-146) raises the possibility of alternative approaches that do not rely so heavily on the adaptability of rigid military bureaucracies and do not commit U.S. prestige so decisively. Inherent in the more indirect approach, however, is the willingness to fail. While “failure is not an option” is rarely actually spoken in the military, the phrase “we don’t plan for failure” is common. Planning for or accepting the possibility of failure is anathema to the successful military officer in the same way that standing by and watching local partners accept bribes or perform incompetently is. However, the willingness to fail is a basic necessity for U.S. intervention in wars of choice if we do not wish to keep repeating the mistakes of Indochina, Afghanistan, and Iraq.

Once we accept the premise that the outcomes in places like Vietnam and Iraq can be desirable without being essential, we can correct the category error that has led us to excessive, but ultimately failed, full-scale military intervention. A civil war in a strategically useful but not essential country on the far side of the world may be a war for the people who live there without being a war for the United States. That is not to say there will not be violence or U.S. casualties–both occur often without being deemed a war–but they will not require mobilization of the nation or national commitments of prestige to the point that the U.S. cannot accept failure. Rather than view them as wars, U.S. decision makers can view them as investments, and just as with an investment they can define what risk they are willing to take and build an alternative plan for failure.

Komer describes the failures of the various command structures throughout the Vietnam conflict. Despite the recognition that the conflict was primarily political, the military always ended up in charge because it provided the bulk of the resources. Ambassador Henry Cabot Lodge was allergic to managing, and Ambassador Maxwell Taylor, who seemed to be the perfect “civilian” to merge the various efforts and subordinate the military, instead deferred to the generals and the military command structure from which he had sprung. In his conclusion, Komer argues that ad hoc organizations may be superior to repurposing existing organizations because they will not be so wedded to “institutional repertoires.” He maintains that such ad hoc organizations were generally effective in Vietnam, but it is worth noting that the Coalition Provisional Authority that nominally ran the early campaign in Iraq was a conspicuous failure. Theory and common sense would tell us that a strong ambassador is the right person to head such an effort, but history gives us reason for caution. This dynamic might be more functional in an environment with less military presence–the overwhelming resource disparity lends power to the military chain of command that civilian agencies find hard to overcome. A more indirect approach might reduce that inequality by lowering the requirement for and value of military contributions.

Regardless of the command structure, it is essential to identify measures of both performance and effectiveness, determine the indicators, resource their collection, and then follow the evidence to legitimate conclusions, no matter how unwelcome. Komer devotes extensive space to assessing the assessments and reaches the conclusion that an external review is a necessity. In Vietnam the Office of the Secretary of Defense (OSD) conducted, in Komer’s view, excellent analysis, but the Joint Chiefs of Staff formally objected, twice, to OSD analyzing military performance in the field. In Vietnam, MACV consistently analyzed the wrong data because its theory of the war was wrong. In line with fighting a conventional war, it developed conventional order of battle metrics that failed to capture meaningful information. This is one lesson the U.S. military learned between 1975 and 2003 but did not follow to an effective solution. In Iraq, U.S. forces attempted to measure factors they associated with counterinsurgency, but DoD and the theater commanders never agreed on a standardized set of metrics, and they never resourced collection of the most relevant, and difficult to collect, data regarding public opinion. Consequently, incoming units frequently designed new metrics and started from new baselines, providing a hodgepodge of data covering 12-, 15-, and 9-month increments but useless for long-term comparisons.

While the ideal command arrangement is open to debate, the need for external, objective analysis is clear. In the future, an honest broker, unbeholden to the chain of command, must collect and analyze relevant data across the entire duration of U.S. involvement in any conflict. Such an organization must be resourced to collect the necessary data regardless of cost or difficulty. Just as a venture capitalist would not commit funds to an enterprise without identifying indicators of success or without access to vital information, the U.S. cannot blindly commit itself without the ability to judge its own performance. Such an independent analytical organization, paired with an effective “red team” to challenge assumptions about the opposition, might provide leaders with the necessary information to make hard decisions.

It is certainly true that history does not repeat itself–one of Komer’s key points is that each national situation is unique–but history does highlight institutional weaknesses that can operate similarly across multiple situations if not corrected. Bob Komer’s study of institutions in Vietnam is likely to strike a chord with anyone who has spent more than a week working in a bureaucracy, and it is likely to resonate painfully with those who watched the U.S. military flounder in Iraq and Afghanistan. It often leaves the impression that U.S. counterinsurgency theorists skimmed his chapter on possible viable alternatives without bothering to place it in the context of the entire report. Komer’s observations provide a devastating view of the inherent obstacles to great power intervention, and the history of such adventures since 1972 offers little reason to believe we can overcome them.

Review – The Bedford Boys: One American Town’s Ultimate D-day Sacrifice by Alex Kershaw

The Bedford Boys: One American Town’s Ultimate D-day Sacrifice. by Alex Kershaw. MJF Books, 2003. ISBN: 978-1-60671-135-4. 274 pages.

In The Bedford Boys, Alex Kershaw tells the story of one Virginia National Guard infantry company that was virtually wiped out on D-Day. Because of the extraordinary sacrifice of the small town of Bedford, Virginia, it was later chosen as the site of the National D-Day Memorial. The Bedford Boys is second- or third-rate history, a chapter or long magazine article stretched out to book length without much added.

Kershaw tells the story of the soldiers and their families struggling through the Depression and joining the National Guard for the steady employment and the camaraderie. In focusing on the human stories–so similar to millions of other stories from thousands of small blue-collar communities all over the United States, he misses the opportunity to do more valuable work. Why, in 1941, did the United States rely so heavily on geographically recruited National Guard units to fill out its ranks? As David Johnson pointed out in Fast Tanks and Heavy Bombers, the problems of mass mobilization dominated American military thinking between the world wars. The United States had successfully recruited a mass army between 1861 and 1865 but had then botched it badly in 1898.

Early in World War I, the British Army had recruited “pals battalions” of young men from small villages, schools, or even single factories as a way to encourage enlistment prior to the imposition of conscription. The results were devastating when those same battalions were cut down in waves by German machine guns on the western front. Because casualties in war are so disproportionately distributed, recruiting infantry units from small communities can devastate individual communities. The United States had experienced the same phenomenon in the Civil War. Nevertheless, military planners viewed rapid recruitment and induction as their primary challenge without much thought for the social effects. Combined with America’s militia history and sensitivities over states’ rights, the National Guard provided a convenient solution.

The results were devastating for small towns with high National Guard participation, and none more so than Bedford. Company A of the 116th Infantry Regiment was assigned to the first “suicide” wave at Omaha Beach. Not one soldier from the company commander’s landing team returned home. In fact, it is likely that all died within the first ten minutes of the landing. Thirty-seven young men from Bedford, Virginia were serving in Company A on June 6, 1944. Twenty-two died in the Normandy campaign. Only six of those who actually made it to Omaha Beach also made it home. None of the survivors served as a rifleman throughout the campaign–their casualty rate was 100%.

Review – Churchill & Orwell: The Fight for Freedom by Tom Ricks

Churchill and Orwell: The Fight for Freedom. by Thomas E. Ricks. Penguin Press, 2017. ISBN: 9781594206139. 339 pages.

Tom Ricks begins his latest book, Churchill and Orwell: The Fight for Freedom, with anecdotes about moments in the 1930s when each of his protagonists came very close to a premature death–much closer in Orwell’s case than in Churchill’s. The future prime minister looked the wrong way stepping into a New York street, on which the cars drove on the wrong (right) side, and was nearly mowed down by a taxi. The future bard of totalitarianism stuck his head up from a parapet in Spain while backlit by the rising sun and took a 7mm bullet through the neck. Ricks begins his book this way so that he can spend the next several hundred pages demonstrating that the survival of both men mattered. Ricks is arguing here for the great man theory of history–long out of vogue–and he makes a compelling case. Oddly, his case for Orwell may be even more compelling than his case for Churchill.

In Ricks’s telling, Churchill and Orwell were distinguished and linked by their determination to see the world as it is (Orwell more than Churchill) rather than fit the “facts” to their predetermined theories or conform to the conventional wisdom of their “sets.” When the British ruling class was, at best, committed to appeasement and in many cases enamored of Fascism, Churchill doggedly insisted that Fascism was both evil and a looming threat. Orwell, though a committed man of the left, came to see Stalinism as just one more form of totalitarianism, no better than Nazism. Churchill too sounded, or rather re-sounded, the alarm about the threat of Soviet Communism after the Second World War–the main reason for his canonization by the American right.

To make the case for Churchill as the essential man, Ricks relies on two sequential but separate periods of his life. In the first, Churchill is nearly alone in his persistent condemnation of Nazism and his advocacy for greater preparedness. To understand the courage of Churchill’s position, it is necessary to understand his environment, and it is not a pretty picture. Much of the British ruling class, particularly the titled aristocracy, was at best defeatist and at worst sympathetic to Hitler and the Nazis. Anti-Semitism was rampant, and faith in democracy was at a low ebb. Lord Londonderry and the Mitford sisters probably enjoyed more support than Churchill, who was obnoxiously strident about a problem that most people just wished would go away and be forgotten.

Appeasement has taken on a universally negative connotation in the decades since Munich, but it was quite popular at the time. Considered in context rather than with the advantage of hindsight, it is not hard to see why. Britain was barely a generation removed from the Somme, and the veterans, war widows, and orphans could hardly be expected to enthusiastically go to war on behalf of Czechoslovakia. The more inexcusable tragedy happened after Munich. Neville Chamberlain’s government was orderly and efficient, but it did not really do anything. Even after the Germans took the rest of Czechoslovakia and overran Poland, Chamberlain seemed supine. Belying the defense that Munich bought time for the British to prepare, unemployment actually rose from 1.2M in September 1939 to 1.5M in February 1940, when Britain should have been running factories 24 hours per day.

It was here, in the critical first months of the Second World War, that Winston Churchill again earned his place in history as the essential man. The collapse of France had not convinced the British appeasers and fellow travelers that resistance was necessary–quite the opposite. Churchill almost single-handedly energized the war effort, strengthened the spines of the British people through grim but stirring oratory, and imagined the strategy that would eventually lead to victory. The Prime Minister drove his generals and many of his fellow politicians mad with his meddling in various war matters, but Churchill understood better than anyone else what would be needed to win the war (as opposed to battles) and how to get it. Furthermore, Churchill’s direct intervention in many facets of war preparation was not amateurish meddling but based on his many years of careful study of the issues–as if John McCain suddenly found himself a wartime president. Ricks points out early in the book that Churchill was shockingly uneducated in numerous areas, but in history he was broadly and deeply read. It gave him an instinctive feel for grand strategy.

Churchill the strategist understood that strategic victories are not necessarily built from tactical victories. In 1940 the British were quite reasonably afraid and needed to regain their belief that they could win, so that “military moves that did not make sense on a tactical basis sometimes were nonetheless advantageous for strategic or political reasons.” It was better to do something and seize the initiative than to sit placidly accepting attack and allowing the British people to perceive weakness and lethargy. Furthermore, Churchill understood the importance of organization and leadership and helped translate it into strategic advantage. “Military historians have long recognized that technological innovation is close to useless without carefully constructed organizational support.” Britain won the Battle of Britain because home field advantage preserved its pilots, fuel, and air time and because the British organized a very successful layered detection and defense that leveraged their strengths. It helped that the Luftwaffe, led by Goering, was virtually incompetent. While its planes and pilots were very good, it had no underlying theory of victory. The Germans dropped as many bombs as they could, largely at random, in hopes the British would simply give up under the pounding. Churchill’s maddening obstinacy was contagious, and Goering’s largely indiscriminate bombing caused massive damage and casualties but probably stiffened British resolve.

In the end, Churchill was right to be confident and not just because the U.S. and the Soviet Union eventually entered the war against Germany. Ricks quotes Bungay’s history of the Battle of Britain: “the margin of victory was not narrow…the Luftwaffe never came close.” Churchill could see beyond the early losses and the seeming invincibility of the German war machine to understand that invading and conquering England would be very hard indeed and probably beyond Germany’s capability–even before they invaded the Soviet Union. More importantly, he could transmit that confidence to his people so they could brace themselves for the trials to come and resolve to do what they must.

Where Winston Churchill was a larger-than-life figure before World War II–difficult to ignore even in the political wilderness–George Orwell was a decidedly more modest presence. He had published a number of rather bad novels and one very good book about his time in Spain. That book, Homage to Catalonia, plus his membership in one of the Trotskyite factions in Spain, had marked him among his fellow leftists. Ricks, ever the journalist, admires both Churchill and Orwell because they did not try to twist what they saw before them into their pre-conceived notions of how they wanted the world to be. Orwell was a standard British socialist when he went to Spain, but once there he saw Soviet agents manipulating the situation for the benefit of the Soviet Union, and infighting between the factions, with a willingness on the part of the Communists to ruthlessly suppress the others rather than cede any control, and he described it as he saw it. Consequently, he spent the war in a rather awkward position. His Spanish wound and his general ill health completely unfitted him for active service or even overseas reporting. He was no longer trusted by the socialist establishment, yet he was genuinely eager to aid the war effort because he saw Nazism as the threat that it was. So he went to work for the BBC as a half-hearted and ineffective commentator.

As the United States hit its full wartime stride in 1943, George Orwell began to rise above his middling life just as Winston Churchill descended into irrelevance. In Ricks’s telling, the Tehran conference marked a watershed for both. Churchill was relegated to second-class status as the U.S. and the Soviet Union made plans for the post-war world with little concern for the wishes of their junior partners. Orwell saw the two emerging superpowers dividing the world between them and began thinking in the terms that would inform both Animal Farm and 1984. Churchill had been the essential man in 1940, when Stalin was still allied with Hitler and Roosevelt was unable to budge the United States off its isolationism. By Tehran, Stalin, the ultimate realist, knew that British wishes were of small concern to him, and Roosevelt was determined to end colonialism, even at the expense of his wartime allies.

The most important aspects of Orwell’s life were packed into six short years. His first wife died in the midst of the war, leaving him with a young son. He wrote Animal Farm and finally alienated the entire British left, which saw it as a direct assault on Stalin and the Soviet Union at a time when the left still supported them. His longtime publisher, Victor Gollancz, refused to publish it. Tellingly, Gollancz had also declined to publish Homage to Catalonia, which marked Orwell’s initial break from the socialist herd. In their sheeplike devotion to Soviet Communism, the British socialists were as craven and complicit as the British aristocrats had been in the 1930s regarding Nazism. It is probably some measure of how limited the true popularity of hardcore socialism in Britain was that Animal Farm sold out almost immediately, achieving a level of popularity that Orwell’s earlier works had never approached.

By the end of the war, Orwell was a very sick man, staving off tuberculosis. As his health declined, he worked frantically on two projects–finding a new wife and stepmother for his son and completing his dystopian novel of a future under totalitarian rule. He grew somewhat paranoid, believing Communist agents might be trying to kill him. Soviet archives opened decades later revealed that he had, in fact, been on a kill list in Spain, but it is unlikely the Soviets would have killed a British subject in Britain at that time. Though visibly dying, Orwell managed to complete both his masterpiece, 1984, and his essay “Politics and the English Language,” cementing his reputation as a writer and his influence on the post-war world.

It would have shocked people of the time to hear it, but Orwell is probably a more significant figure than Churchill in terms of his influence on the modern world. Churchill’s dogged resolve, inspirational rhetoric, and ultimately successful organization of Britain’s defense in the early days of the war made it possible for there to be a post-war world, but once the U.S. and the Soviet Union entered the war, Britain was primarily important as an unsinkable staging base off the coast of Festung Europa. Ricks notes that Churchill’s oratory became less stirring and more confusing as the war carried on. The Combined Chiefs paid less and less attention to his suggestions and desires. Age, fatigue, and drink all took their toll. When he finally faced a general election as prime minister in 1945, he lost and suffered the indignity of turning over Downing Street to Clement Attlee while the war was still going on. Churchill enjoyed a brief renaissance as an early prophet of conflict with the Soviets and crafted another of the many felicitous phrases that have embedded him in the English language–the Iron Curtain–but his second turn as prime minister was embarrassing. He was built for the moment of crisis and fulfilled his role beautifully. The world he made possible did not really need his gifts or have room for them.

Furthermore, it is somewhat misleading to laud Churchill as the champion of “freedom” without a very large asterisk. As Williamson Murray points out in his own review in War on the Rocks, millions of Asians and Africans (not to mention Irish) in former British colonies might find the idea very odd indeed. Churchill was the champion of British independence and the freedom of the English, Scottish, and Northern Irish to live under a particular form of self-government. He did not share Orwell’s much greater commitment to universal freedom of conscience and self-governance.

Orwell, by creating a fictional portrait of totalitarianism that was both vivid and accessible, became the poet laureate of the Cold War. Just as Churchill understood early that the Soviet Union was both monstrous and a threat to the West, Orwell too saw the erstwhile allies for what they were. While Animal Farm attacked the Soviet Union as it was and had been, 1984 presented the picture of what all societies could become if they gave in to the forces of totalitarianism. The most terrifying aspect of 1984 is not the surveillance or the suspense or the torture but rather the final sentence. Orwell understood that the deepest threat of totalitarianism was not its coercion but the tendency of ordinary people to embrace it.

Orwell’s vision of a dystopian future embedded itself into western culture. Even in the Soviet Union, smuggled copies proved a powerful driver of resistance. His essay on the misuse of language in politics became a seminal tool for teaching how English can be abused and misused to mislead and obfuscate. On this subject Ricks makes one of his rare misstatements, claiming that, “Less noted about the essay is that it isn’t simply against bad writing, it is suspicious of what motivates such prose.” That aspect has not gone unnoticed at all; it is what distinguishes Orwell from Strunk and White or any other guide to good writing. Indeed, there are serious problems with the essay as a guide to style, but as an exposé of the techniques of political misdirection it is unmatched.

Tom Ricks has made a powerful argument for the very real impact of two individuals on the course of history. While large, impersonal forces ultimately made possible the Allied victory in World War II, its outcome would have been very different if the British had reached a separate peace in 1940. Had Churchill been run down by that New York taxi, it is difficult to see who would have fulfilled his role in the 1930s–certainly nobody of his talents or determination comes to mind. It is equally difficult to see how the British would have withstood the pressure to surrender without Churchill’s inspiration. Likewise, Orwell was unique among committed leftists in combining a clear-sighted understanding of the evils of Stalinism with the literary acumen to help others see it. Orwell was not a necessary factor in the West’s Cold War victory, but he was a useful implement for galvanizing the determination of western peoples to persevere and to resist Soviet propaganda.

Review – Fast Tanks and Heavy Bombers: Innovation in the U.S. Army 1917-1945 by David E. Johnson

Fast Tanks and Heavy Bombers: Innovation in the U.S. Army 1917-1945 by David E. Johnson. Cornell University Press 1998. ISBN: 0-8014-8847-8. 288 pgs.

Fast Tanks and Heavy Bombers is a stolid, workmanlike review of the winding road of weapons development from World War I to World War II, with an examination of the forces within the War Department that shaped each program. I read the book for a specific project at work, so I skipped the bomber development sections that were irrelevant to the project.

Institutional considerations were paramount in tank development. The infantry branch insisted on developing tanks as adjuncts to infantry and therefore on taking total control of tank development. The senior officers who came out of WWI viewed their experience in that war as dispositive and utterly failed to see that the U.S. Army’s late arrival and the primitive nature of early tanks made their experience almost wholly irrelevant.

Senior officers consistently made decisions based on an unexamined, indeed an unspoken, assumption that tank technology would remain where it was at the given moment. Rather than undertaking a serious study of what was likely to be possible by the time the U.S. Army would have to fight, they generally acted as if they would have to “fight tonight” even though there was little prospect of imminent ground combat. Consequently they made doctrinal and organizational decisions based on the weaknesses of early developmental tanks and then reinforced their prejudices by failing to upgrade their technology.

The July-August 1939 edition of the Cavalry Journal included two articles on the Polish horse cavalry claiming that it was well prepared to deal with a mechanized enemy. After the German panzers rolled over the Poles in September, the U.S. Chief of Cavalry, MG John K. Herr, felt a need to redouble his efforts to defend the primacy of horse cavalry while conceding to efforts at greater mechanization. Not surprisingly, MG Herr proved to be the last U.S. Chief of Cavalry.

It is important when developing new military technology to consider three questions:

  • How mature is this technology?
  • When do I expect to need it?
  • How mature do I expect it to be when I need it?

These questions become ever more important as the cost of new technology skyrockets. The U.S. paid a price for its belated preparations for World War II, but it also derived unexpected benefits. The Germans, French, British, and Japanese all conducted extensive field testing under the most realistic conditions from 1939 to 1941. While German and Japanese aircraft were superior in the early days of the war, they were a sunk cost. U.S., British, and Soviet designs that benefitted from the lessons of those earliest days came online later and in greater numbers, while the Germans were able to design superior aircraft but not to field them.

In tanks, however, the U.S. never caught up. General Eisenhower was outraged to learn near the end of the war that American tank crews believed their equipment to be inferior to the Germans’. He had either not paid attention or had been fed happy nonsense up to that point, when it was too late to fix the problem. But even this failure presents an important lesson about modernization. U.S. tanks were never able to match German tanks one for one, or even by swarming in greater numbers. U.S. forces nevertheless defeated German forces on the ground. While the tanks were inferior, U.S. artillery and close air support were far superior. With greater numbers, superior logistical support, better organization, and superior ground attack aircraft, the U.S. combined arms team was more than a match for the German panzers that had pioneered mechanized combined arms warfare.

Lock ’em Up?

A few years ago, when my son was 12 years old, he painted on a wall at a public school. He got caught immediately, confessed his guilt, and after serving 52 hours of community service, paying restitution, taking a victims’ rights class, and staying out of trouble for six months, his suspended sentence was expunged. To this day he can barely talk about it because the mortification is overwhelming. Someday he will probably be able to acknowledge that the whole experience made him a better person–he became far more responsible, and to this day he is the most empathetic teenager I have ever known.

I have been thinking about that experience a lot lately as I read about the “campus sexual assault crisis” and the various dramas at universities trying to deal with the intersection of young adults, alcohol, sexual freedom, predatory behavior, bro culture, and on and on and on. I will not wade into the question of whether there really is a sexual assault crisis on America’s university campuses or whether we have redefined sexual assault down to incorporate normal behavior–I don’t know nearly enough about the data or the methodologies to even have a valid opinion.

What has struck me instead is the disconnect between my son’s experience and the experience of young men on college campuses accused of sexual assault. The question that nags at me is, why are colleges dealing with the aftermath of sexual assault allegations at all? Certainly universities are responsible for the environments they create. They are responsible for policies about on-campus behavior that foster a safe learning and living environment. The ways that they deal with sponsored activities, Greek culture, athletic teams, and alcohol in dorms are all matters for open and probing discussion–prevention is their business. Investigation and prosecution are not. Or they should not be.

In today’s New York Times, Jennifer Weiner leans heavily on the idea that justice is applied to young men disproportionately based on race, and that is no doubt true, but my son’s case indicates it is not the only variable. He was an upper middle-class white kid standing before a judge facing a possible one year jail sentence when he was twelve–for painting on a wall. Why are universities adjudicating allegations of serious crimes in any way except to refer them to legal authorities?

I have a hypothesis, but I’m not sure how you would test it. Public K-12 schools are mandated government institutions. They are battling budget pressures, ever growing and shifting mandates, demanding parents, neglectful parents. They have a legal obligation to take just about every kid and provide a vast array of services. If they are too lenient with trouble-makers they face public backlash from parents whose kids suffer. If they are too harsh, they face public backlash from privileged parents, activist parents, the press, etc. I challenge you to find a public school teacher who does not have a story about some parent whose child could do no wrong and who made it his or her life’s mission to coach the teacher in the proper care and handling of the little darling. Trying to mete out discipline in such an environment is an invitation to angry parents, public accusations in the press, and even litigation. There’s not much upside except for the proper development of our children. I suspect most educators would prefer to handle discipline issues rationally but not at the risk of their jobs, their pensions, and their reputations.

Universities are operating in a different environment. In The Death of Expertise Tom Nichols delivers an impassioned but convincing argument against the idea of the student as consumer. In a nation of vastly expanded higher education options, universities compete ruthlessly for the best students. As budgets shrink, public universities are under ever-increasing pressure for tuition, grants, donations, and incidental income. The results have not been good–excessive spending on administration and amenities, inappropriate deference to students’ feelings and prejudices, an erosion of the authority relationship between students and faculty, just for starters. In that environment, schools are walking a tightrope on sexual assault allegations. Do too little and you risk an explosive scandal and substantial liability. That danger would seem to motivate the sort of maximalist approach we see in the public K-12 schools, but there is an additional variable. Public prosecution will put your university in the news, and not in a good way. How many of those helicopter parents who drove the 10th grade English teacher crazy will choose not to send their little darling off to Big State U as a result of a highly publicized sexual assault trial? We know that human beings in general are not very good at assessing risk. We know they are even worse at it when the news media distorts their perceptions by fixating on certain telegenic (but not very common) crimes. I can attest personally that parents are inclined to vastly overestimate risk to their children–it’s part of why we have survived as a species. Parents are not well equipped to evaluate or even gather the evidence regarding variable sexual assault risk between universities. One spectacular trial could constitute a public relations and recruiting disaster.

I suspect the dynamic above accounts for at least some of the inequity we see in treatment of discipline issues between various levels of education. Universities have a perverse incentive to quiet accusations while taking sufficient action to inoculate themselves from litigation and scandal. The risk of scandal from doing too little is greater than the risk of scandal from denying due process to the accused. Universities’ tradition (now largely defunct) of acting in loco parentis provides a framework and a level of comfort with the in-house disciplinary approach. That is all understandable, but it is not the proper solution. If we want to protect students from sexual assault (and other serious crimes) and ensure the due process rights of students, who after all are almost all legal adults, then we should subject them to the same criminal justice system that would apply if they did not pay $25,000 tuition and ace the SAT. Treat alleged sexual assault by a 20-year-old with at least the same seriousness we apply to minor vandalism by a 12-year-old.

Review – The Death of Expertise by Tom Nichols & The Ideas Industry by Daniel Drezner

The Death of Expertise: The Campaign Against Established Knowledge and Why it Matters by Tom Nichols, Kindle, 272 pages. Published February 1, 2017 by Oxford University Press. ISBN: 0190469412, ASIN: B01MYCDVHH.

The Ideas Industry: How Pessimists, Partisans, and Plutocrats are Transforming the Marketplace of Ideas by Daniel Drezner, Kindle, 360 pages. Published March 1, 2017 by Oxford University Press. ASIN: B06X9CL2NL.

Sometimes intellectuals get lucky. A long-simmering idea turns into a project and comes to fruition just as external events come together to elevate the project’s relevance beyond its author’s wildest dreams. Anyone who was finishing up an obscure article or book on terrorism or the Middle East in the summer of 2001 knows what I mean. Nichols and Drezner, both #NeverTrump Republicans, were nearing completion of these books when Trump surprised the world by winning the U.S. presidency. Nichols was a pessimist through much of the election and saw Trump’s victory as a possibility, while Drezner believed, along with nearly every other political scientist in America, that Hillary Clinton’s advantages and Trump’s alarming missteps would keep him from the White House. So while Trump and his acolytes are a presence in each of these books, neither was written specifically as a screed against Trumpism. Rather, Nichols and Drezner both identified political and intellectual dynamics that alarmed them and chose to address them. Of the two, The Ideas Industry is the better, deeper, and more serious book, while The Death of Expertise is more fun thanks to its author’s high degree of snark.

July 4th is a particularly appropriate day to reflect on Nichols’s work since it bears directly on the fundamental American question: what is a good citizen? In a nation based on divine favor, a good citizen is devout, but the state does not depend on his devotion. In an ethnic nation, the good citizen is pure of blood and culture, but if he is not, then he provides a useful target for the government and the masses. In a republic as conceived by the American founders, however, the legitimacy and effectiveness of the state rest on the extent to which citizens are both able and willing to make wise, informed decisions about their own governance.

Tom Nichols examines the evolution of discourse into a shouting match between partisans who all think their opinions are equally valid regardless of qualifications, experience, or intelligence. He provides a trenchant critique of the area he knows best–academia–and concludes that we are doing our children no favors by treating a university education as a commoditized service. Finally he covers the damage done by vast but unmediated sources of online information and the corrosion of journalistic norms.

Nichols comes as close as any serious author could to saying that many Americans are just too stupid, uninformed, or willfully ignorant to be good citizens. Perhaps it’s his blue-collar background that makes him comfortable saying in print what would make an east coast upper-class liberal squirm in self-consciousness, but Nichols is not shy about his disdain for those who choose to remain ignorant but believe themselves entitled to respect on any number of complex issues. As he has frequently pointed out on Twitter, he doesn’t care if people cannot find Ukraine on a map; he just does not think they should voice an opinion on whether to go to war there. Or more accurately, he does not believe that those with actual decision-making power should pay any heed to the uninformed opinions shouted by the ignorati. Unfortunately, in a democratic republic, it is impossible to ignore the ignorant, prejudiced, or just plain stupid rantings of the masses, particularly when powerful actors are willing to weaponize that ignorance for political and commercial gain.

Nichols acknowledges but never really addresses the great dilemma: the how of correcting the current disaster. Winding himself up in a breathless conclusion, he writes, “the most daunting barrier, however, is the public’s own laziness. None of these efforts to track and grade experts will matter very much if ordinary citizens do not care enough to develop even a basic interest in such matters.” If a decisive number of American citizens are stupid or ignorant and unwilling to change, how do those who are not stupid or ignorant change them? It is unlikely that those who have consumed the firewater of populist demagoguery will choose to stop believing that they are absolutely correct and that their opponents are evil and stupid, or to replace that firewater with the kale smoothie of reasoned discourse, respect for experts, and ambiguity. The innate strength of experts is that they will be right more often than they are wrong. The innate weakness is that honest experts will always have to admit what they don’t know, own their mistakes, and resist the temptation to make rosy predictions. They will always be vulnerable, then, to charlatans and opportunists who are willing to make certain and rosy predictions. After British voters elected to leave the European Union, Leave advocates were forced to acknowledge that they had simply invented some of the economic figures they had employed. Those who led the movement suddenly dropped out after the vote, unwilling to lead a government that would have to deal with the disaster they had wrought. A clearer case of cynicism would be hard to find, but will it cause the British voting public to turn back to the experts and reverse course? Not likely. As Nichols himself points out almost daily on Twitter, people don’t like to admit they’ve been had.

The fundamental problem he identifies is the resentment of the uneducated against those who know more than they do. Being wrong does not foster contrition and humility–it causes greater resentment and doubling down. The man who has been conned has a strong incentive to deny the con because his self-respect is more valuable than whatever he lost. Con men rely on shame and embarrassment. The true experts will be hard pressed to fight back without sinking to the level of the charlatans. Nichols laments the reality, but he does not pretend that some 3-point program will fix it. He candidly admits that, “we can only hope that before this [collapse of the American system] happens, citizens, experts, and policymakers will engage in a hard (and so far unwelcome) debate about the role of experts and educated elites in American democracy.” It may be vain to hope that the same American public that cannot resist junk food and soda will choose the hard road of intellectual engagement and self-education when there are networks and websites and politicians ready to spoon feed them garbage for free.

Here we segue neatly to Daniel Drezner’s book, The Ideas Industry. Drezner, a full professor of political science at Tufts, a prolific blogger, and a frequent presence on cable news shows, examines the ways that journalism, the academy, and think tanks have evolved over the past century into the current cacophony of mutually exclusive partisanship, internet trolling, high-level plagiarism, and intellectual superstardom that feeds the disconnect Nichols finds so disheartening. Where Nichols’s book is largely an informed but impassioned polemic, Drezner’s is a more academic examination of the way the modern public receives its information and the incentives that drive the Ideas Industry.

Drezner devotes an early chapter to the difference between “thought leaders” and “public intellectuals.” Viewing himself (proudly) as the latter, he gives the thought leaders their due and acknowledges their value in driving ideas. The power of the thought leader is simplicity–in Isaiah Berlin’s dichotomy he is the hedgehog who knows one thing. Consequently, the thought leader is attractive both to policy makers and to uninformed or semi-informed citizens. Thought leaders push clear policy options with definite outcomes and none of the on-the-one-hand-on-the-other-hand waffling of public intellectuals. The problem, of course, is that thought leaders are quite often wrong. At best they gloss over the complexity and ambiguity that bedevil human affairs. At worst they push nonsense (see anti-vaccination or any health advice peddled by Gwyneth Paltrow) to the uneducated and credulous masses lamented by Nichols. Public intellectuals will point to uncertainty and contingency. They will explain that presidential actions are only one tiny force on the economy. They will point out that military conflicts are inherently unpredictable and influenced by myriad external forces. They are Berlin’s fox, knowledgeable of many things and therefore unwilling to provide the certainty that people outside the academy crave.

One of Drezner’s more interesting examinations relates to the disparate fortunes of economics and political science, both as academic disciplines and as policy influencers. As he puts it, “policymakers view economists as experts, but political scientists as charlatans.” While political science has been attacked for studying ever more esoteric subjects with ever more elaborate mathematical models, economics has thrived as it has been taken over by quants and elaborate math. Economics has made itself less accessible to the general public and elevated itself, in the public consciousness, to a science, while political science has been more accessible (or perhaps less inaccessible) and become in the public mind an effete, irrelevant argument about unimportant minutiae. Why? Drezner argues that economists present as thought leaders while political scientists present as public intellectuals. Because economists share a consensus on many of the basic tenets of their field–even as they debate viciously over issues of tactics and implementation–they can present a positive front without fear of contradiction. Drezner cites Pareto optimization as a fundamental principle on which all economists can agree. In contrast, political scientists do not agree at all on the value of basing foreign policy on improving human rights. To laymen, a debate between political scientists seems like a battle between a Christian and an atheist, while economists can confidently present recommendations without fear that their fundamental values will be questioned by their peers. As Drezner points out, economists therefore gain the ear of the public and policymakers despite a dismal predictive track record.

Drezner’s most important observations, however, relate to the incentives and constraints on both think tanks and celebrity intellectuals. Looking closely at Fareed Zakaria, Niall Ferguson, Tom Friedman, and others, Drezner demonstrates how the rewards for what he calls “superstars” have ballooned and distorted the market for intellectual commentary. Where intellectuals could once hope for a teaching or journalism position that would keep them in the middle class, the most elite now draw five-figure speaking fees and seven-figure book deals. Perhaps even more enticing–they can reach the ears of presidents, cabinet secretaries, and legislators and see their ideas enacted as policy. Drezner is admirably open-minded toward people like Friedman, who often draw the ire of academics. He evinces enormous respect for Zakaria while demonstrating how the incentives of superstardom led him into serial plagiarism. The modern ideas industry has created a wide field for intellectuals who can market themselves through television and social media, but it has also created traps that steer those superstars toward ideas congenial to the plutocrats who fund conferences and the politicians who can implement their ideas.

Like the individual intellectuals, think tanks have undergone a transformation as their numbers have boomed and their influence has grown. In a more crowded intellectual marketplace, think tanks have found it necessary, or at least expedient, to market themselves more aggressively. The shift in fortunes from industrial barons to information-age entrepreneurs has tilted the field toward think tanks that can generate immediate policy impact rather than those that seek to influence ideas over time. The shift is most obvious at Heritage, and Drezner delivers a brutal history of its fall from a dogmatic but intellectually respected institution to shameless flackery for a particular brand of politics and policy–more devastating because he published before the ouster of Jim DeMint. In the Heritage story, though, there is a ray of hope. Over the course of DeMint’s tenure Heritage shifted from conservative but rigorous scholarship to slanted propaganda. Drezner highlights instances in which Heritage leadership quashed reports with inconvenient conclusions or researchers achieved the desired results through the use of preposterous assumptions and distortions. Intellectual quality gave way to partisan advocacy. Drezner quotes a Heritage staffer on the leaders of Heritage Action: “they felt absolutely no intellectual modesty. They felt totally on par with people who had spent thirty years in the field and had Ph.D.s.”

Then a funny thing happened. Republican politicians got frustrated with Heritage’s blatant attempts to strong-arm them and began cutting off its access. Congressional aides reported that they relied less on Heritage products. Respected academics described those products as “useless.” By 2016 Drezner’s own survey found that 74% of self-identified conservative opinion leaders expressed little confidence in Heritage reports. Drezner’s examples of mixed effects on influence are distressing but contain a kernel of hope. Heritage still executes effective advocacy campaigns and strong-arms top-line politicians into showing up at its events, but decreasing respect from the congressional and executive staffers who actually write policy may mean decreasing influence on that policy. Heritage’s fall from grace has taken a long time and might still not be apparent to a casual observer, but it indicates that quality matters. There will always be a market for partisan quackery, but it need not dominate the entire ideas industry. Extending beyond think tanks, universities can cater to the whims of undergraduates and their helicopter parents, but at some point employers will devalue education if the educated are no longer useful employees. If we believe in our own principles, that critical reasoning and deep knowledge provide real value, then institutions that provide them will find a way to thrive, even if they seem temporarily overshadowed by those who provide intellectual candy wrapped in perks. Recent reporting indicates that red-state Republicans have begun turning away from tax cuts even as Arthur Laffer continues to push his snake oil in Kansas. The heavily Republican Kansas legislature recently rolled back tax cuts in a victory for reality over ideology.

Nichols and Drezner paint a dire picture of an America that could easily consume itself through intellectual sloth. As Nichols points out, citizens have a right to be wrong, and technocracy will not save us from an ill-informed, selfish, and lazy electorate. The ultimate consumers of the ideas industry are those same ill-informed and lazy voters, and the industry has plenty of incentives to produce the intellectual equivalent of cigarettes and soda. That said, the ideas industry is not governed purely by profit motives. Fareed Zakaria may have fallen prey to the incentives to overextend himself and take shortcuts; Tom Friedman may worry that publishers have no desire to constrain his work, but neither Zakaria nor Friedman is in it purely for the money. Both of them care deeply about their ideas and their policy influence. Paul Krugman regularly publishes a column detailing all of the mistakes he has made in the previous year. Most university presidents would not be happy with a reputation for a large endowment, a winning football team, and worthless academics. Certainly few professors wish for such a reputation regardless of their salaries. Drezner is far more accepting of the current state of affairs–arguing that some of it is an improvement and some of it is not, but it is irreversible. Nichols is less sanguine but also offers few practical ideas for improvement.

The unfortunate truth is that major cultural trends do not change easily. We will not soon convince Trump voters to embrace climate science or academic ideas on international relations. We can, however, alter the focus of the academy. Schools can move away from both political purity and undergraduate coddling to focus on academic rigor. They can resist attacks on free speech and teach their students to defend themselves rather than seek intellectual shelter. Conservatives like Nichols and Drezner can actively call out and provide alternatives to the culturally loaded language of the political right, and liberal politicians can shift their emphasis away from identity politics to concrete problem-solving. Shouting matches in which we call each other “fascists” and “libtards” will further warp the incentives for polarization, segregation, and red meat propaganda disguised as policy discussion. If expertise is to regain the respect of the public, and the ideas industry is to focus on educating and informing that public rather than herding them, then leadership will have to come from the intellectuals and politicians who currently reside in the public opinion basement. It will not be easy or fast, and it will require people of good will to prioritize core values of democracy over transient policy differences, but it is the only hope.

*This review refers to the Kindle edition of each book. Each of these books is loanable on Kindle.

 

Military Transition

I have now completed my first full month as a defense contractor, words I never expected to write. Anyone who thinks he can spend two and a half decades in an institution and walk away without disorientation is fooling himself, but I must admit that I’ve been caught off guard by the extent of it. Ten months without a job and four months of true unemployment did not help matters at all. Anyway, here are some observations for anyone getting ready to make the jump.

  1. The Army is not what you do, it is who you are. If that is not true for you, then I do not know how you could give it 20+ years. There are simply too many times when it is not worth it on a strict cost-benefit basis. The About Me section of this blog lists the four things that anyone knows about me within five minutes of meeting me–I put being a soldier on a par with my marriage and my children. Chances are that when you leave the military–particularly to go into a contracting gig–you will do your job rather than be your job. If you want deep meaning from your work, then plan well in advance and set yourself up to go into public service, teaching, non-profit work, or something else that really fires your passion because those jobs will not come along naturally, and they will entail significant modifications to your lifestyle (more on that later).
  2. None of the above implies that defense contracting is bad or evil or even particularly venal. You can do work that is both challenging and valuable to our national defense. A single good day at my current job could save soldiers’ lives, make our Army more effective, or save enough money to pay for my entire program many times over. It is valuable work. The difference between what I’m doing now and what I did before, or what I might be doing, is that I don’t have any authority to make decisions, and I am inherently operating from a profit motive. My company exists to make money by doing good work. In theory, government employees do good work for the sake of the work and get paid just enough to keep them doing it. Everyone at my company cares about the quality of our work and the value we provide to the Army, but we would stop working immediately if they stopped paying us. That’s not a bug, it’s a feature. It’s how capitalism works. Just be prepared for a little soul searching after decades of pride in your personal sacrifice.
  3. And about that sacrifice…. Sure you risked your life and you spent years away from home and you dragged your children to eight different schools in 10 years. Those were all very real sacrifices. Beyond that I buried people I loved when they were far too young. I know people who sacrificed their health, their limbs, even their sanity. Those sacrifices are real. But if you’re a senior officer, chances are you did not sacrifice as much financially as you think you did. Odds are you do not pay state income taxes because at some point you were stationed in Texas or Washington or Florida or Tennessee or Alaska or California or Kentucky or some other state that either has no income tax or doesn’t charge military out-of-state income tax. You also receive a housing allowance every month that Uncle Sam not only did not tax but factored into your sales tax calculation–instead of taxing you on that very large amount of money, he actually gave you a tax deduction for it. I don’t know about you, but that was worth about $50,000 per year to my family. My contracting salary is 25% larger than my military salary, but my take-home pay is 1/3 less. With my retirement pay I am still money ahead, but I am not nearly as far ahead as the raw numbers would make you think. Be grateful for what you get as a career soldier, and be prepared for a reality check when you join the rest of the tax-paying country.
  4. The path to defense contracting is a rut, and it’s deep. There are reasons so many retirees find themselves right back in the Building or at Camp Swampy, doing largely what they did in uniform. First, the military dependence on contractors has grown substantially over the past 30-40 years. Republicans hate government employees but like military spending and love private-sector profits. Democrats like jobs in their districts and don’t much care if they are contractors or government employees. At least contractors pay taxes (see point 3 above). The result is a growing number of contracting jobs and a static or shrinking number of government jobs. Of course military retirees do not have to sell their services to DoD in any form, but then we run into the second factor: military experience often does not translate that well into the true private sector. Senior military officers are mostly generalists with lots of leadership and management experience but few hard skills. To the extent we do have hard skills, they are in areas like delivering fires, military logistics, organizing defense of a forward operating base, etc. There are companies that want those skills–defense contractors. Civilian companies want supply chain managers, personnel managers, IT specialists, sales managers, marketing managers–and they are not impressed with the military analogues. Apple and GE will not learn much about marketing from the Army. My current job requires both a deep and broad understanding of how the Army runs. You could hire a more junior contractor, but he would not be able to do the job. Government employees with years of experience in the Army staff would be just as good and cheaper, but the Army is cutting its staff, not growing it.
  5. The other option for retirees is the meaning route. Become a teacher, work for a charity, go into the clergy, work in politics. These are valid and viable options that offer a similar sense of purpose to your military career. They pose two surmountable but real obstacles–they do not pay much, and they’re just as hard to enter as the non-military corporate world. Some of these roles–teaching and clergy–require credentials. After you retire is not the best time to get them if you have a mortgage or tuition payments to cover. Others, like politics and non-profits, tend to hire young idealists and then promote from within. They rely on a high degree of inside knowledge and experience. Moreover, they may not want you even if you’re prepared to start at the bottom. Those doing the hiring are understandably leery of older, experienced leaders accustomed to high salaries. They may worry that you think you are willing to start at the bottom but will quickly become disillusioned with the low pay or will expect a much larger voice in decisions than your entry-level position merits. They may worry that three decades of military life have left you ill-suited for the non-profit culture. If you want to go this route, start volunteering well before your actual retirement and be prepared to wait for an opportunity. Arrange your life on a very austere budget and commit to substituting meaning for remuneration.
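The tax arithmetic in point 3 is easy to get wrong, so here is a back-of-the-envelope sketch. All of the numbers are invented for illustration–they are not my actual pay figures, and real tax math is messier–but the structure shows why a nominally higher contracting salary can still mean a smaller paycheck:

```python
# Hypothetical comparison of military vs. contractor take-home pay.
# All figures and rates below are made up for illustration only.

def military_take_home(base_pay, housing_allowance, fed_rate):
    # The housing allowance is untaxed, and many career soldiers pay no
    # state income tax thanks to residency in a no-income-tax state.
    return base_pay * (1 - fed_rate) + housing_allowance

def contractor_take_home(salary, fed_rate, state_rate):
    # The entire salary is taxed at both the federal and state level.
    return salary * (1 - fed_rate - state_rate)

mil = military_take_home(base_pay=100_000, housing_allowance=30_000, fed_rate=0.20)
con = contractor_take_home(salary=125_000, fed_rate=0.24, state_rate=0.05)

# A salary 25% larger than base pay still nets out noticeably lower.
print(f"Military take-home:   ${mil:,.0f}")
print(f"Contractor take-home: ${con:,.0f}")
```

Plug in your own base pay, allowance, and marginal rates; the point is simply that an untaxed allowance plus a state-tax exemption can be worth tens of thousands of dollars a year, and you only notice when it disappears.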

I am incredibly grateful that a former colleague sought me out when he had a job opening. I am incredibly grateful to the company that hired me. I am incredibly grateful that the military’s generous retirement pay and benefits gave me the freedom to wait months for an offer rather than lowering my sights in desperation. The world is full of opportunities for those leaving a military career, but better planning would have opened those options up to me more fully. Whenever I counseled soldiers who planned to leave the military, I told them to be sure they were running to something and not just from the Army. That was good advice, and I probably should have followed it a little more closely. I got lucky and everything worked out, but I do not recommend my course of action.

Afghanistan–Again

As the Trump administration moves forward on, or perhaps we should just say towards, its strategy for Afghanistan, the various tribes of the foreign policy and political establishment seem no closer to consensus than they have been since at least 2008. In our sixteenth year of war the lack of consensus indicates a lack of understanding and should stand as an enormous caution to all of us. On Thursday Sameer Lalwani published Four Ways Forward in Afghanistan, and this morning, on Memorial Day, the New York Times Editorial Board weighed in with The Groundhog Day War in Afghanistan. Michael G. Waltz published No Retreat: The American Legacy in Afghanistan Does Not Have to Be Defeat in War on the Rocks two weeks ago. Obviously the Times goes into less detail than Lalwani or Waltz, but nevertheless makes a clearer case. Perhaps inadvertently, the Times articulates in its penultimate paragraph the key, and insurmountable, difficulty while Lalwani chooses to ignore it when it is too inconvenient.

Lalwani’s four ways forward will be familiar to anyone who has responded to a staff college essay prompt–Statebuilding, Reconciliation, Containment, and Basing. He argues that the strategies are distinct, and indeed mutually exclusive. While each may contain elements of the others, he is correct that the United States has vacillated between the four and consequently failed at all. Strategy requires, as Lalwani states, “an honest appraisal of costs, risks, and priorities.” In fact I would reorder his list. The first concern is priorities–what end is non-negotiable or at least paramount. For which end would you sacrifice the others? In one key observation he notes that Statebuilding is incompatible with Basing, a factor with which military planners have been unwilling to grapple throughout our forever wars. Hamid Karzai’s maddening anti-Americanism was a political necessity for any Afghan politician aspiring to popular legitimacy.

Priorities, of course, can change once we determine the costs and risks. It may be that our first priority is unachievable at an acceptable cost, and this is particularly likely when engaging in a civil war in a remote and culturally alien country on the far side of the world. Afghanistan is strategically valuable for two reasons–it provides a base of operations in a volatile region where we have little presence, and it has the demonstrated potential to harbor and even nurture anti-western terrorists. Both of these rationales are real, but neither is unique or fundamentally necessary. The first is necessary only if we feel a compelling need to directly influence events in the region through military force. Accepting the limits of U.S. power in a remote area is also a viable option, though fraught with its own costs and risks. Salafist terrorists have found plenty of nurturing safe havens elsewhere since 2001, and so preventing them from using Afghanistan is of dubious value.

Statebuilding is the most ambitious of Lalwani’s suggestions, the most costly, and the one with the highest potential payoff. In the minds of military planners, it achieves both of the above strategic objectives, but that is because they look at it in military rather than political and cultural terms. Lalwani himself falls into this trap, and it is worth quoting his implementation prescription to see the error:

The state-building strategy would deploy U.S. troops down to the brigade or battalion level to guide and mentor Afghan units and to signal an enduring commitment to the Afghan state. Retired officials have also argued that keeping troops in the fight will better ensure political support for aid to Afghanistan.

Taking things out of context is not always wrong or unwise. Sometimes, reading a long piece for its overall meaning, we miss the small, specific errors that undermine the whole. Here it is obvious–Lalwani’s “statebuilding” is really security force building, and we’ve been doing that for at least seven years. It hasn’t worked because security forces are an organic part of a state and culture–they cannot be built separately, or at least good ones cannot be built separately. To build a state through the security forces means that you will end up with a militarized state, if you can do it at all. In 2014 I sat in a briefing in which the International Joint Command proudly announced that a particular kandak (battalion) had been trained to fire its howitzers. My battalion partnered with that same kandak in 2011, and we also trained them to fire their howitzers. In between, they had come apart due to poor leadership, recruiting and retention failures, a disastrous supply system, and a total lack of training management. The problem was not teaching a discrete group of Afghans to use a particular weapon, but rather trying to build a 20th-century industrial army in a state that did not incorporate any of the cultural, educational, or political prerequisites. Deploying advisers down to brigade level will improve the planning and operating capability of those brigades as long as the advisers remain. It will do nothing to root out corruption, patronage promotions, rampant illiteracy, or any of the other fundamental problems.

Michael G. Waltz’s May 12 essay in War on the Rocks, No Retreat: The American Legacy in Afghanistan Does Not Have to Be Defeat, assumed the statebuilding strategy, and therefore was able to do a better job articulating the costs and risks in the space allotted. However, Waltz also assumed away crucial considerations in his otherwise clear-eyed argument for a full commitment. Waltz predicts that Secretary Mattis and Lieutenant General McMaster will successfully articulate to the president that “the key ingredient to that approach is time — most likely decades” without addressing the domestic political strategy that must accompany such a commitment. He goes on to say, “it took the Colombian government over 50 years to get to this point in its struggle with the FARC and it was arguably more advanced in its capability than the Afghan government,” without acknowledging such a precedent is likely to make the strategy politically infeasible. Waltz might argue that the president should sell the policy, but in reality he is banking on the disconnect between the military and the public. Put simply, Waltz and other advocates of statebuilding assume that a president can pursue a decades-long military and political commitment in Afghanistan, at a cost of hundreds of billions of dollars and an indeterminate number of U.S. lives, and the American people will not care enough about the Afghan war or the continued drain on the U.S. military to impose meaningful political costs. That is an open question, but counting on voter apathy leaves the president and the strategy vulnerable to a spectacular event that suddenly focuses attention.

Waltz is also more detailed than Lalwani in his examination of Pakistan’s role and the problems it creates, but he once again glosses over the fundamental problem. Waltz notes that no modern insurgency with external sanctuary and support has ever been defeated, and he argues that the U.S. government must be more coercive with Pakistan in order to gain compliance. Pakistan has calculated that the destabilization of Afghanistan and a friendly Pashtun insurgency dependent upon Pakistani largesse are in its interests. It is difficult to see how the U.S. can change that strategic calculation in the near term without destabilizing Pakistan, and Waltz’s ideas for applying pressure all run that risk. Waltz acknowledges the risk while continuing to view the situation through an Afghanistan lens, but that is the problem. We must be very clear here–no outcome in South Asia is worse than the collapse of the Pakistan government. Pakistan is a state of over 200 million people, roiling with sectarian, economic, and cultural tension, and in possession of nuclear weapons. Its government and military are corrupt, and they have fostered religious extremists as a means of maintaining internal power and destabilizing their neighbors. While tiptoeing around the Pakistan government and security forces may seem like rewarding bad behavior, it is the least bad of a set of very bad options. There is zero chance that a destabilized Pakistan government would be replaced by something better, and a high probability that it would be replaced by anarchy, civil war, an Islamist dictatorship, or some combination of the above.

Waltz does deserve credit for addressing one fatal flaw in U.S. policy to date. In arguing for U.S. advisers down to the tactical level and greater U.S. “enabler” support, he acknowledges that such a move will entail greater risk, particularly of “green-on-blue” attacks. My own experience in Afghanistan bears this out. In fact our risk-avoidance in this area is one of the clearer indicators of our lack of seriousness and the hollowness of our rhetoric. Because the U.S. military has been unwilling to accept casualties from insider attacks, we have imposed extreme measures to protect the advisers who integrate with Afghan units. Those measures raise the cost (in total personnel and mutual trust) and therefore reduce the total capability. Looked at tactically, the decisions make sense. Green-on-blue attacks gain media attention and pose the greatest near-term risk to domestic support for the Afghan mission. Viewed strategically they make no sense at all. Just over 150 coalition troops have been killed in green-on-blue attacks over more than 15 years of war–a smaller share of deaths than accidents and suicides. To be blunt, a military operation that is not worth 10 deaths per year is probably an operation the United States should forego. If the mission in Afghanistan is truly necessary for American defense, then we should be willing to accept a doubling or trebling of the green-on-blue casualties without a thought. The perception that we are not willing to accept it is precisely the reason we should question the political will to embark on a decades-long statebuilding enterprise.

On reconciliation, Lalwani hits the key point–a reconciled Afghanistan is unlikely to be friendly or helpful to the United States. Looking at our two strategic advantages, an Afghan government that incorporates Taliban leaders would almost certainly devolve a great deal of local control. Pashtun leaders in Kandahar, Helmand, and elsewhere would be just as likely to provide safe haven to Salafist terrorists as the Taliban government was in the 1990s. Moreover, those havens would continue to produce more of them. Such a government would almost certainly provide safe havens for the Pakistani Taliban, thereby ramping up regional tensions. No Taliban-inclusive government can be expected to permit continued U.S. presence. It is difficult to see what the U.S. gains from “deep reconciliation” or how we can achieve “shallow reconciliation” as Lalwani describes it. The Taliban has shown little willingness to surrender regardless of losses, and it is currently advancing in its key territories. A few thousand American advisers will not fundamentally alter that calculus, at least not for very long.

Basing presents a tempting target for U.S. military planners who, to their credit, view Afghanistan in the broader regional context. We cannot reiterate enough–Afghanistan itself is of no value to the U.S. Unfortunately, military planners tend to see the world through the lens of military plans and either wish away the political and cultural factors or leave them to others to “set the conditions.” Bases do not exist in a vacuum. They must be both supplied and defended. Afghanistan provides the ability to put U.S. assets in close proximity to the “‘Stans,” Pakistan, and eastern Iran–a tempting capability. The problem is keeping those bases secure and supplied. A reconciled Afghan government is unlikely to permit them. An Afghan government that permits them cannot gain the legitimacy it needs to be an independent state–you cannot build your independence on obvious dependence. As much as the planners at CENTCOM may want Afghan bases, they tend to ignore the problems of keeping Afghan bases, and more importantly the problems associated with losing the Afghan bases when/if the wheels come off.

That leaves us with only containment. The dangers of Afghanistan are not so great that we cannot contemplate withdrawal. Indeed, the greatest risks are political–a U.S. president must be the one to “lose Afghanistan.” This is where we run headlong into the great tragedy of Trump and Trumpism. Had the president been more knowledgeable and better advised, he could have railed in the campaign against the “stupidity” of the previous administration’s surge and continuing commitment. He could have attacked President Obama not for pulling out but for failing to pull out fast enough. Then as president he could have continued the withdrawal while holding out delays as an incentive for desired behavior by the Afghan government. Afghanistan presented Trump with an opportunity for a bold foreign policy move that would have fit within his campaign message and divided the foreign policy establishment. A precipitous Trump withdrawal would have outraged the neocon wing of the Republican Party, whom he outraged anyway, and placed Democrats in the awkward position of either arguing for continued military engagement in Afghanistan or supporting Trump.

The greatest tragedy of all, however, is the inevitable rehashing of this argument down the road. We do not have to “lose” in Afghanistan unless we choose to. The United States has ample power to maintain a presence and stave off total defeat in and around Kabul without massive U.S. casualties or a budget-busting investment. It is doubtful that we have the capability to “win” in any meaningful way (a subject for another post), and so staving off defeat just prolongs the inevitable and leaves the most painful choice for a future president. Barack Obama, a model of maturity and responsibility, nevertheless kicked the can. Donald Trump, a model of immaturity and irresponsibility, squandered a unique opportunity to turn Afghan withdrawal into a political win and is unlikely to take upon himself the costs of withdrawing absent some obvious and immediate payoff. Let us hope, then, that we have this conversation in 2020 as part of electing the next president, rather than in 2021 as that new president weighs the long-term and unquantifiable benefits of an Afghan containment strategy against the immediate and painful costs.

Review – Why We Lost: A General’s Inside Account of the Iraq and Afghanistan Wars

Why We Lost: A General’s Inside Account of the Iraq and Afghanistan Wars by Daniel Bolger. Kindle Edition, 565 pages. Published November 11, 2014 by Mariner Books. ASIN: B00KEWAP04

I first published this review on Goodreads back in 2014. Someone recently liked it, and the notification caused me to go back and reread it. I’m a little proud of it, so I’m republishing it here.
 

LTG Bolger’s review of the wars in Iraq and Afghanistan is disappointing. The title is a bait and switch–promising an examination of the strategic failures of these two wars but offering largely anecdotes of ground-level combat. The stories of the battles are told in greater depth and with more personal observation by those who actually fought them. Bolger commanded large organizations in both Iraq and Afghanistan, and his reputation within the Army combined with this book’s pre-publication media blitz led me to hope for a serious insider discussion of the strategic choices that left us where we are today.

Instead, Bolger offers, to the extent he has a thesis at all, a recapitulation of the Weinberger-Powell doctrine of quick, decisive force with a clear exit strategy. The prescription is appealing to those who experienced the euphoria of the quick, Gulf War “victory,” but it fails to address our continued tendency to land in wars that do not fit neatly into our preconceptions. Bolger even acknowledges that the Gulf War “victory” was a strategic illusion. If so, then his preferred method of warfare failed to achieve its political ends. Bolger, like so many U.S. security pundits, does a great job of identifying the failures in our post-Cold War strategy without offering any real insight into how to do better.

The U.S. today must deal with a frustrating paradox: we are the wealthiest country in the world and expend more resources on defense than the next 14-15 countries combined. All things being equal, we would expect to be superior in whatever military arena we choose to emphasize. This is an extremely effective strategy for deterring conventional threats from rival nation states. However, we cannot expect any adversary to challenge us in our arena of greatest strength. Developing overwhelming capability in conventional military operations will not eliminate opponents; it will drive our opponents to employ asymmetric techniques like flying IEDs and people’s war. The more effective the U.S. military is at conventional warfare, the less likely we are to engage in conventional warfare. Asymmetric means are very costly to our opponents, but a few will still be willing to employ them, particularly when their survival depends upon it.

Despite our preparation and predilection for maneuver warfare, we have ended up confronting asymmetric threats in Vietnam, El Salvador, Lebanon, Iraq, Afghanistan, the Philippines, Bosnia, Kosovo, Somalia, Pakistan, and Yemen. We have been successful where opponents have failed to maximize their strengths against our weaknesses. We have largely failed where our opponents have proven resilient and persistent and have exploited safe havens. It is all well and good to say we should not engage in nation-building that might lead to counterinsurgency, but our repeated failure to heed that advice requires us to either embed it in our political process or rethink our strategy, doctrine, manning, and equipping.

Modern conventional warfare requires few people and lots of expensive technology–largely due to choices the United States has made about equipping and training our forces. Our conventional forces are roughly equal in capability to, or even greater than, the “special forces” fielded by other nations or by our own military in the past. Even the United States cannot maintain both a very large military relative to our population and equip/train it to the levels we have come to expect. Superbly competent infantry have proven invaluable in tactical counterinsurgency just as superbly competent combined arms formations facilitated the rapid overthrow of Iraq. However, the size of our military, particularly the ground forces, has continually limited our reach and therefore our capability to control populations–the essential function of counterinsurgency. Manning a larger ground force at the current levels of training and equipping is prohibitively expensive in the absence of more significant threats than we currently face.

We have two reasonable courses of action going forward, but each involves significant tradeoffs.

We can continue on the current road of high-tech mastery. It will cost us a lot of money and leave us without an effective means to control foreign populations over long periods of time. If we build a national strategy around defending U.S. territory and vital national interests and leaving the rest of the world to muddle through their internal issues, this could work. It will frustrate those who do not differentiate between amounts of military power and types of military power. Recent history indicates we will continue using our superb hammer to drive screws with predictable results.

We can taper off our addiction to high technology. Paradoxically, this may position us better for some future great power war. We could start day one with lots of room for growth and lots of R&D but few sunk costs. Currently, we have enormous sunk costs and had better hope that we bought the right stuff. We have not demonstrated much capability for adapting quickly after the shooting starts. With less money spent on hardware, and I would argue additional tapering on per-person personnel expenses, we could afford a larger military that would be perfectly adequate for defending U.S. territory and vital interests, provide adequate manpower to occupy other countries if necessary, and remain more connected to the civilian population. It would NOT deliver lightning fast and bloodless victories in future Desert Storms. We would pay in casualties to some extent because we would be fielding a military of adequately-equipped citizen soldiers rather than superbly equipped/trained operators. Ask the guys in the Hürtgen Forest how that can turn out.

LTG Bolger never really addresses the strategic paradox that put us in this position in the first place. If he is willing to stand up in the public square and loudly oppose future military operations to shape the world to our liking in the absence of existential or at least deadly serious threats, I will stand right there beside him. If not, then perhaps the problem lies not only in our decisions to fight such wars but in the institutional military’s refusal to prepare for them.

Here is one area of praise for the book. Bolger lays squarely on the general officer corps the responsibility for not arguing the case against the Iraq invasion and the Afghan nation-building. He essentially calls his fellow generals moral cowards as a group. Unfortunately, he undermines his own point by praising individually nearly every general he names. Only David Petraeus comes in for anything approaching personal attack. The other generals are all smarter than the press gave them credit for. They all see clearly (even though they see differently). When they fail it’s because they were too trusting or too honest. He conspicuously avoids naming those generals who deserve particular and unmitigated condemnation. Likewise, Bolger finds admiration for some of the less savory characters of the wars. He praises COL Mike Steele and implies that he was treated unfairly. Bolger clearly loathes the rules of engagement imposed in both Iraq and Afghanistan, but he completely fails to address the historical precedents–we tried unfettered strikes and high collateral damage in Vietnam. How did that work out? He acknowledges war crimes by various lower-ranking U.S. service members but either pooh-poohs them as mostly harmless shenanigans (Abu Ghraib) or emphasizes the tremendous pressures that must have led good-hearted American boys and girls to such lengths (Mahmudiyah, Haditha).

Why We Lost essentially argues that we lost because we played. Bolger praises the courageous efforts of American service members and junior to mid-grade leaders. He condemns the general officers as a body (though not individually). In the end, however, he offers no path forward. To the extent he hints at a prescription, it is one that has been tried and found wanting. The U.S. has become the indispensable nation. Too many political blocs within the U.S. are unwilling to simply accept a world that does not conform to our desires. Unless that culture changes in the near future, the military will have to build itself for the world in which it lives–not the world in which it would like to live. Incompetent and purely evil opponents will not array themselves in countries bordered by U.S. allies and offer to fight us mano a mano. The last guy who did that ended up on the end of a rope.

Review – Postwar by Tony Judt

Postwar: A History of Europe Since 1945 by Tony Judt.
Paperback, 960 pages. Published September 5, 2006 by Penguin Books. ISBN: 978-0143037750

Tony Judt’s Postwar is the sort of book that makes me question my life–how can I know and understand so little when there are people on this earth who can weave obscure Polish philosophers, Soviet dissidents, French intellectuals, British trade unionists and a thousand other bits of data into a coherent, engaging narrative that stretches from 1945 to 2005, from Moscow to Dublin and Norway to Greece? And yet Judt managed just that, albeit delving far more deeply into some countries (France and Czechoslovakia for instance) than others (Ireland and Spain).

Judt sees the forces that have shaped Europe since 1945 as an almost endless series of binary confrontations even as he makes the point that these binaries are false. The most obvious is free-market capitalism versus Soviet Communism, but Judt makes it clear how much of the “free market” was enmeshed with government ownership and economic manipulation, and just how often the various eastern bloc countries permitted free market inroads to stave off total collapse. Judt can acknowledge the cruelties of unfettered capitalism–and his treatment of Margaret Thatcher is notable for its heat–while condemning Soviet Communism and its many satellites for their boundless repression. Indeed, he reserves some of his harshest criticism for the various western European Communists who found it impossible to acknowledge even Stalin’s crimes, let alone the generalized repression and economic failure of the Soviet Union, until long after they were apparent to any reasonable observer.

Postwar makes it clear that history does not unfold as a sporting event, with one winner, one loser, and a clean result. In the introduction he makes the initially jarring claim that, “since 1989 it has become clearer than it was before just how much the stability of post-war Europe rested upon the accomplishments of Josef Stalin and Adolf Hitler.” (9) “Accomplishments” was surely intended to provoke when applied to history’s greatest tyrants, but Judt makes it clear that he means exactly that. Hitler and Stalin both, to varying degrees, sought to sort the humanity under their control into homogeneous packets, and both largely succeeded in doing so. While the postwar world condemned what later came to be known as “ethnic cleansing,” it did not reverse it. If anything, the victorious allies extended it through the expulsion of Germans from western Poland and Czechoslovakia and the failure to fully restore the surviving Jews to their homes and property. The resulting countries could pursue “national” goals in ways that would have been difficult or impossible in the multicultural states that preceded the war.

Moreover, the European Union rests on foundations laid by Napoleon in his Continental System and vastly expanded by the Nazis to realize their vision of a continental Reich. Vichy administrator Pierre Pucheu’s vision of a free market with a shared currency stemmed from frustrating experiences of economic policy planning between the world wars and was shared by Albert Speer and others. By conquering most of the continent and assimilating it into the German Reich, Hitler effectively achieved a common market without borders, sharing the Reichsmark as its currency. Consequently, postwar bureaucrats had experience with something similar to the eventual European Economic Community and European Union–it was not unimaginable because the Nazis had made it happen.

Needless to say, the European Union’s core values are the antithesis of and reaction against Nazi ideology, and consequently few if any Europeans want to ascribe their liberal, multicultural superstate to its Nazi forebears. The willful forgetting of the EU’s historical antecedents is but one of the many ways in which parties, nations, and the entire continent of Europe have intentionally obscured and distorted history to form foundation myths and avoid the clashes that come with acknowledging past wrongs. The French mythmaking surrounding wartime collaboration and resistance is well-documented, but Judt dives deeply into the similar process in the occupied western European states, the neutral states, and the occupied states that fell behind the Iron Curtain. He documents the fundamental differences between postwar Britain, which never suffered occupation and can therefore take a purist attitude toward collaboration, and France, Belgium, the Netherlands, et al., which collaborated to greater or lesser degrees out of necessity. In the east he contrasts the experience of Yugoslavia, which alone could claim a history of fierce resistance to the Nazis (while obscuring and suppressing the internal murder and conflict that this entailed) with nations under the Soviet thumb that ascribed wartime crimes and collaboration to capitalist governments and thereby disavowed all responsibility.

No review would be complete without noting the extent to which Judt foreshadowed Brexit and the centrifugal forces that are roiling Europe today. Reading this book in light of recent developments, one must acknowledge in every section on Britain and its relations with the continental powers that British membership in the larger European experiment was always awkward and contingent. Britain stood apart from the continent before the war, and its greater integration since has often been more window dressing than reality. Britain’s socialism was different from continental socialism–older and more union-focused–and therefore less resilient in the face of economic integration. Ironically, Thatcherite privatization made Britain more vulnerable still. Its social safety net was thinner, its institutions more brittle, and its workers had already taken a significant hit as inefficient industries collapsed or departed. No reader of Postwar should have been surprised at the Brexit vote. France is a different story. Judt, writing before 1999, sees the National Front as an insignificant fringe. He cites Jean-Marie Le Pen repeatedly as an example of the ineffective far right. No doubt he would be surprised to see Le Pen’s daughter contending seriously for the presidency of the republic, even though she had to jettison her father and his baggage to make it possible.

At more than 800 pages of text, Postwar is a serious undertaking, but it is not dry or boring, and the narrative remains as engaging on Foucault and Derrida’s influence on the French left as on the blindness and stupidity of Serbian nationalists. Judt’s premature death in 2010 deprived us of a premier public intellectual.