By the US Bureau of Engraving and Printing, designed by Robert T. McCall. The United States’ government was once known for its extraordinary competence, with achievements ranging from the D-Day invasion and the Berlin Airlift to the Manhattan Project and the moon landing. Public domain, via Wikimedia Commons


We are no longer competent enough to pull off an election—or manage our nuclear stockpile. Why has the US become an incompetent nation? The reasons go far beyond partisanship.

The American columnist David French has pointed out something I’ve been meaning to note for quite some time. There has been a broad breakdown in competence in the United States. No one quite understands why. But as he writes, American history, roughly since the turn of the century, has been a history of staggering incompetence, as an exercise in counterfactual imagination suggests:

What are the ripple effects if Palm Beach County election officials designed a less-confusing ballot for the 2000 election? How does America change if our intelligence agencies were more accurate in their assessment of Saddam Hussein’s chemical and nuclear weapons programs? Or, if we still failed on that front, how is our nation different if military and civilian leaders had not made profound mistakes at the start of the Iraq occupation?

We can do this all day. Let’s suppose for a moment that industry experts were better able to gauge the risks of an expanding number of subprime mortgage loans. Would we be more trusting of government if it could properly launch a health care website, the most public-facing aspect of the most significant social reform in a generation? How can we accurately judge foreign threats if ISIS is dubbed a “jayvee team” the very year that it explodes upon the world stage and creates the largest jihadist state in modern history?

The United States was once known for extraordinary competence. Consider the D-Day invasion, the Manhattan Project, the Berlin Airlift, the moon landing: In example after example, the United States government—not the private sector, note—mobilized vast talent to overcome historically unprecedented military, economic, technological, and governance challenges. So widely known was our government for competence that to this day, we’re the object of conspiracy theories worldwide. Whatever we do, however dumb and cack-handed, is presumed to be deliberate, because so mighty a superpower as the United States could not possibly be capable of screwing up in such stupid ways. Just yesterday I was assured that the CIA had unleashed the Wuhan coronavirus—cui bono, after all? How could I be so naive as to think it a mere coincidence that the virus just spontaneously emerged near a virus research facility?

This kind of thinking owes much to the belief that the United States’ government is greatly more competent than it is. That belief, in turn, is a function of our competence of yore. Nothing we’ve done in this century would warrant it.

The loss of competence is bipartisan. The GOP is gloating over the Iowa meltdown. They would, but they shouldn’t. The worst American mistakes of this century were made under the GOP’s watch. I don’t think this is significant, though. They could just as easily have been made with Democrats in power. As usual, partisanship is preventing us from thinking about problems that are bipartisan, national, and systemic.

What exactly has gone wrong?


The historian Philip Zelikow wrote one of the best analyses of this problem I’ve read—the best, in fact—in a little-remarked essay for the Texas National Security Review. The “hardware” of policymaking, he writes—“the tools and structures of government that frame the possibilities for useful work”—is obviously important:

Less obvious is that policy performance in practice often rests more on the “software” of public problem-solving: the way people size up problems, design actions, and implement policy. In other words, the quality of the policymaking.

“Software,” he argues, includes organizational cultures for obtaining and evaluating information, doing analysis, and recording what has been done. It includes commonly understood habits that routinely highlight gaps in information or analysis.

These are the qualities, he argues, that made for competent policy in the mid-twentieth century—and they neither came from the academy nor returned to it. Rather, they came from the strong, decentralized problem-solving culture of American business, and from the military—in turn influenced by British staffing systems, which Americans envied and imitated.

The wartime and immediate postwar experience profoundly influenced organizational culture for another generation or so. A great many Americans had been drawn into the work of higher-level policy design on numerous topics. “One analyst referred to [the war] as the largest program in postdoctoral education for faculty in the nation’s history.”

The military and business cultures of the United States in this period, he notes, “were intensely oriented toward practical problem-solving.”

They emphasized meticulous written staff work: unending flows of information and estimates, habitual preparation of meeting records or minutes, constant and focused debates about priorities and tradeoffs, and guidance directives drafted with concise precision that a lawyer would envy.

The result, especially by 1943 and afterward, was marked in dozens of projects from the atom bomb to the Marshall Plan to the Berlin Airlift. Any close study of such efforts reveals superior construction of large-scale, complex multi-instrument policy packages, including frequent adjustments.

The point about constant adjustment and iteration is notable. Even in military technology, most of the key Allied innovations turned out to be second-generation innovations. In other words, they were not the airplanes or ships that were available or in production at the start of the war. Instead, they were new or improved models of every kind, several of which had not even been imagined before the war. They were developed with agility and on a massive scale by a number of agencies and scores of companies in response to ongoing lessons learned, lessons that were constantly, consciously being extracted and studied.


It is difficult for those who have not pored through the archives to appreciate the scale and scope of this work, ranging from economic statecraft to amphibious operations to science policy. The extraordinary sets of official history volumes from World War II, familiar to historians of the period, give a sense for the work. They are also a striking illustration of the organizational culture that would produce such meticulous and admirable historical analyses.

The organizational culture that accomplished so much during the war was passed along mainly through imitation and apprenticeship. But the best practices did not migrate into standardized training or academic degree programs.

Naturally, as that generation aged and died, these skills atrophied. That generation knew a great deal about making effective policy. They could not figure out how to teach it to the next generation. They failed to put into place an appropriate educational system for training an equally competent policy-making class.

This is a powerful explanation. It fits the facts. It makes intuitive sense.

It explains, too, something else that has always puzzled me. Whenever Americans point to European healthcare systems as something to emulate, I hesitate—not because they are wrong to say that health care is provided more rationally and less expensively in every other developed country. This is true. The French healthcare system is a marvel. The French bureaucracy, in general, works exceedingly well.

But as half of America will quickly point out—and they, too, are right to point it out—when our government gets involved in these things, we get the VA. The American right concludes from this that government itself is the problem: Only the private sector has the appropriate motivation to be competent. The American left points to Europe, or what they understand of it, and concludes this isn’t so: Obviously, in countries where the government takes a larger role in providing for health care, better outcomes result.

Both are missing a crucial point. It’s not government, per se, but our government that screws everything up. Zelikow’s hypothesis—that there’s something wrong with the way we educate our public servants—is an important idea. It may well be that we need to fix this before we can hope to fix anything else.

If it can be fixed.


American education for public service, he notes, has always been radically different from the rest of the world’s. The notion of a professional career in public service didn’t even emerge until the late 19th century.

In the postwar United States, Zelikow notes, the study of public administration lost prestige against the rising idea of the social sciences.

Partly in response, a new trend in public policy education took shape. The social sciences were developing new techniques for the systematic analysis of public policies using analytic models, many derived from economic theory, along with quantitative methods. …

Momentum produced a wholesale transformation in professional higher education. “At the heart of this shift [during the 1960s and early 1970s] was a growing faith in the power and prestige of economics as a field, a method, and even a science,” he writes.

In the new curricula, the definition of “policy analysis” was narrowed to economics, statistics, and quantitative analysis; students focused on cost-benefit analysis, behavioral economics, game theory, and operations research. “But most policy making challenges,” he notes, “and the related staff work, call upon different sets of skills.”

Traditional graduate studies related to policy work tend to bifurcate into two very distinct tracks—a professional master’s degree program and an academic PhD program. Both of these programs serve important purposes, but they leave a major gap. The PhD students develop rigorous research skills to investigate theories in their fields, but are largely insulated from consideration of real-world policymaking. Professional master’s students are exposed to some complexities and challenges of practice. The strength of these programs can be training in quantitative analytic methods, public administration, and advocacy. For various reasons, they do not provide rigorous training in the kind of strategic and design thinking needed for problem-solving, nor do they impart enough relevant substantive knowledge.

Meanwhile, law schools began to provide larger and larger numbers of public employees.

Lawyer-officials have ready gifts. They know how to make an argument. They are usually experienced writers. On a good day, they are relatively rigorous in attending to factual and legal detail. The tradeoff for these “generalist” skills, however, is a lack of much subject-matter or foreign expertise. Experienced as advocates who can pick evidence to defend a position, lawyers are not necessarily trained to weigh and sift positions on both sides. Experienced in being asked to decide what “can” be done, lawyers are not trained to analyze what “should” be done, even in policies having to do with policing or the administration of justice. They have no necessary experience in policy design, analysis, or implementation.

Read his whole article for more on the ways our education system fails to create competent public administrators. It is very insightful, and it explains a great deal.


He points as well to significant transformations in the organizational culture of our government—particularly, to the decline of careful record-keeping:

Through the war and postwar years, careful records were usually kept for all high-level meetings among American officials. This is hard to do. It was not done mainly out of regard for historians, although that was a factor. It was done because such records were considered essential for good government. It forced reflection on what had been said or not said. It helped others stay current if they had a need to know what was going on. …

It is now rare to find any good records kept of what is said at meetings among American officials. The quality of the records of meetings with foreigners has also deteriorated. The usual excuse given is the horror of leaks. But that horror was perfectly familiar to officials of the wartime and postwar generation as well. Though constantly irritated by leaks, those past officials thought that the net value of routines of good governance took precedence. The real reasons for the change are likely more banal. There was no conscious policy choice across the administrations to quit preparing good records. It is just hard to do it. Without a routinized discipline, it vanishes from the day-to-day culture. …

Developing these habits, the Americans during the 1940s were strongly influenced, through common work in various Allied organizations, by long-established and relatively high-quality British processes for collective policy analysis and staff work. Eisenhower was both a product and exemplar of such Allied experience. …

Although the origins are almost forgotten, the 1947 creation of America’s National Security Council system was greatly influenced by the model of British systems, including the British War Cabinet system. Many of the Americans had come to know, imitate, and grudgingly admire those staffing methods. They consciously adapted analogous habits of systematic paper preparation, record-keeping, historical evaluation, peer commentary, lucid guidance, and collective decision-making. Eisenhower well understood this background about why the National Security Council was created and how it was originally expected to function. He was the last American president who did.

As the quality of written staff work declines, he notes, fewer decisions may be made from written records. Instead, “high-level meetings proliferate. They become a surrogate for good written analysis and advocacy.”

This makes the delegation of analysis and action more difficult. Overworked principals make policy based on poorly documented meetings. Their subordinates, in turn, become less responsible, and because they know their work is less meaningful, they reinforce the degenerative cycle.


He concludes with an interesting observation.

As the immensely powerful Qing empire in China began to decay in the early 1800s, a leading scholar began calling for reform of the Confucian system that selected and trained the country’s administrative elite. He looked around and saw “everything was falling apart … the administration was contaminated and vile.” The scholar, Bao Shichen, “found himself drawn toward more practical kinds of scholarship that were not tested on the civil service exams.”

Bao “would in time become one of the leading figures in a field known broadly as ‘statecraft’ scholarship, an informal movement of Confucians who were deeply concerned with real-world issues of administration and policy.” Tragically, for Bao and many of his reformist allies, though their efforts made some headway, it was not enough. They could not reverse the decline of their empire.

But he believes we might have better luck:

The United States government has plenty of problems too. Fortunately, it is not yet at the point the Qing dynasty reached. Americans can reflect on a proud heritage, not far in the past, when Americans were notorious across the world for their practical, can-do skills in everything from fixing cars to tackling apparently insurmountable problems, public as well as private. These seemingly bygone skills were not in their genes or in the air. They need not be consigned to wistful nostalgia. The skills were specific. They were cultural. And they are teachable.

This may be so. But there are very few examples of civilizations reversing, or even slowing, the process of decline. None, in fact, that I can think of.

That said, despair is a crime and we have to try.

A final point: Everything that went wrong in Iowa is exactly what I’m worried will go wrong with the world’s nuclear weapons. The system was excessively complex, untested, and managed by people who clearly lacked the imagination to envision what would happen when it was put under stress by real people.

It’s an astonishing account of incompetence, particularly in the failure of the designers to appreciate that the great majority of the intended users would be elderly, and thus baffled by procedures for bypassing a phone’s security settings, two-factor authentication, and PIN codes. (Don’t these people have parents?)

… the app had to be downloaded by bypassing a phone’s security settings, a complicated process for anyone unfamiliar with the intricacies of mobile operating systems, and especially hard for many of the older, less tech-savvy caucus chairs in Iowa.

The app also had to be installed using two-factor authentication and PIN passcodes. The information was included on worksheets given to volunteers at the Iowa precincts tallying the votes, but it added another layer of complication that appeared to hinder people.

In the end, only one-quarter of the 1,700 precinct chairs successfully downloaded and installed the app, according to a Democratic consultant who spoke on the condition of anonymity to avoid losing work. Many who resorted to calling in the results found that there were too few operators to handle the calls.

Some also took pictures of the worksheets they had been given — the PINs fully visible — and tweeted them out in frustration. Had the app worked, the information might have given trolls or hackers a chance to download the program and tamper with it.

We are no longer competent enough to manage an election. This is extremely painful, but we have to acknowledge it, because it is so dangerous. The deeper reasons for this go far beyond partisanship.

Claire Berlinski is the co-founder and editor of the Cosmopolitan Globalist.


  1. Government before 1940 was tiny. The people who led the US in its rise to world power came from outside government, from a culture of invention and management controlled by market mechanisms. The clueless elites in charge today come from an amazingly intellectually corrupt academia, unconnected to any corrective mechanism, and animated by its belief that it must control everything.

  2. “Would we be more trusting of government if it could properly launch a health care website, the most public-facing aspect of the most significant social reform in a generation?”

    Two counterfactuals within the counterfactual. Quite an achievement.

    “Consider the D-Day invasion, the Manhattan Project, the Berlin Airlift, the moon landing: In example after example, the United States government—not the private sector, note—mobilized vast talent to overcome historically unprecedented military, economic, technological, and governance challenges.”

    Known by the results. These were further examples of sausage in production and a comedy of errors in progress. The sausage of D-Day is well enough known. The moon landing? What could possibly go wrong with a pure oxygen atmosphere surrounding electrical connections? Another counterfactual: what about the health problems accruing from existing for days on end in a pure oxygen atmosphere?

    Aside from which, it was the private sector that did all of those things, and private money (including private money paid as taxes) that paid for all of that and private citizens dying on the beaches alongside other private citizens fighting to success on that day. Government just brokered those deals–every single one of them–no mean feat by government, but only that.

    Today’s government, though, insists on dictating what those deals must be, what they must look like, and how they must be effected. The private sector works for Government, after all.

    So will we be known by our results from today. Onliest difference between today and yesterday is that technology today lets us see the production in real-time instead of afterward. And a press that takes advantage of that to be sure and tout the interim mistakes while ignoring the interim successes because–as that same press has long bragged–bad news is what sells.

    “Both are missing a crucial point. It’s not government, per se, but our government that screws everything up.”

    All three are missing the point. European culture and American culture are not at all the same, for all that ours is an evolution of British (Daniel Hannan suggests more broadly of Anglo-Saxon) culture. We’re 200+ years past our British roots, with the early decades of those years marked by mutual contempt and enmity. Zelikow notwithstanding (he’s right as cited, although too narrow–we don’t do a decent job of educating anyone in our K-12 public schools, much less in our “higher” education facilities), our government is the one we have, not someone else’s, it’s the one we’ve always had, we’ve never had a European form or style of government, and we’ve always had a basic, generalized contempt for government.

    That’s the culture within which we operate.

    “Experienced in being asked to decide what “can” be done, lawyers are not trained to analyze what “should” be done, even in policies having to do with policing or the administration of justice.”

    The mindset in this illustrates a part of the problem. What “should” be done is a political decision, and so it’s best left to the political arm of American culture: not Government, but government’s [that spelling, by the way, is deliberate] employers, We the People. But the growing dominance of Government is cracking the foundation of our American culture, reducing our individual self-reliance, our personal responsibility, our sense of individual duty and individual liberty, and transferring those to reliance on Government as no longer broker but as The Solution, can we only get the right technocrats–physical and social–into the right positions.

    “It is now rare to find any good records kept of what is said at meetings among American officials. The quality of the records of meetings with foreigners has also deteriorated. The usual excuse given is the horror of leaks. But that horror was perfectly familiar to officials of the wartime and postwar generation as well.”

    This is too true. My thought when news came out of Comey’s having actually made a record of a meeting with the President, and the hue and cry associated with it (how dare he!? or why would he do that?), was Of course he wrote an MFR. That was routine for us in the USAF, even for classified meetings; the aggregate of them allowed a reasonably accurate record to be constructed, and they informed efforts to correct any minutes that were officially generated for some of those. Comey’s error was releasing that government-owned record to the public without the owner’s permission. And it was government property, not because it was generated on a government laptop, but because it was generated by a current government employee about a government meeting. ‘Way too few MFRs get generated today.

    Fear of leaks? Critical differences between the “wartime and postwar generation” and now include the lessened respect for necessary secrets today and the speed and ease with which leaks can be generated and disseminated.

    “We need to be sure those nukes don’t go off by accident.”

    That’s exceedingly unlikely. The complex part of that system is the human chain leading to the launch of a missile from its silo or the release of a nuke from its aircraft, not the physical system. And we train that all the time. Even from the President’s end of the chain. The most likely failure is that they won’t go off at all; they’ll only fizzle. Nuclear warheads decay with the inevitability of physics. That’s why we spend so much on “upgrading” our warheads–much of those upgrades are to replace decayed fissiles.

    Eric Hines
