Myth-Busting Silicon Valley

Every regime has a founding myth—a self-validating account of its origin, propagated by its elites to confer legitimacy upon them. In Silicon Valley, the fabulously wealthy entrepreneurs who regard themselves as the Sun Kings of American prosperity have a story they tell about “founders.”

The Valley, in this story, is the cradle of progress because obsessive geniuses congregate in garages to invent new technologies and launch start-ups, taking enormous risk and reaping only a small part of the reward they bestow upon us all. Its pantheon of billionaires—Gates, Jobs, Zuckerberg, et cetera—presents the scrappy entrepreneur as the archetype of success.


The “founder” myth begets a culture of superiority among the techno-elite that demands deference from the political system. Beyond mere paeans to entrepreneurship, Silicon Valley expects policymakers to accept without reservation its preferred blend of self-regulating markets and limited government. Theirs is the land of “permissionless innovation” and “failing fast,” and any public servant who dares intervene risks killing the golden goose.

Some in the Valley go even further. Venture capitalist Katherine Boyle advocates for tech bros to “tak[e] on important missions of government” because it’s “easier to solve critical national problems through startups.” Her case in point: the Middle East. “The best comparison is that in the last two decades,” she explains, “roughly $1 trillion has gone to venture-backed companies in the U.S., while $6 trillion went to the War on Terror and failed nation-building in Afghanistan. The outcomes are different due to incentives, not investment.” What’s app-building, after all, but nation-building with a better stock-options plan?

Libertarians and fellow believers in market fundamentalism rely on the myth to advance their own, even bigger myth: that innovation, progress, and growth are the product of government’s absence. As Adam Thierer of the libertarian Mercatus Center has argued, “The best role that public policy can play at this time is to clear the way for…innovation by removing barriers to entry and trade.” As in every domain, the recommended innovation policy is no policy—and here, at least, the libertarians claim some empirical evidence on their side.

While all founding myths contain exaggerations and even fictions—that’s what makes them myths—good ones reinforce that which is best about a regime and provide a foundation on which a community can build. But Silicon Valley’s founder myth has things backward, misunderstanding the source of the regime’s power and flattering its worst instincts.

Silicon Valley was the product of aggressive public policy. The key technologies of our digital age were not the happy accidents of “permissionless innovation” in the “self-regulating” market, but of deliberate and prolonged government action. Public officials “picked winners” and decided the trajectory of technological development—from the materials used in chips to the protocols for networked computers. When the Valley was humming, it did so as a symbiosis of capitalist pursuit of profit and government pursuit of the public good. Left to its own devices, it has devolved into a “unicorn” hunting party while subsequent waves of innovation happen elsewhere.


Let There Be Capital

A better myth would go something like this: In the beginning, Silicon Valley was a tract of orchards.

At the end of the Second World War, the region had only a nascent technology sector, centered around Stanford University. The venture capital industry did not exist, and the region’s few entrepreneurs were former academic researchers like David Packard and William Hewlett, who expanded from their garage by filling military contracts. Packard and Hewlett received the prestigious Army-Navy “E” Award for excellence in wartime production four years before formally incorporating as the Hewlett-Packard Company.

The Cold War transformed the region as the Pentagon’s imperatives became the Valley’s. Tasked with advancing the technologies of espionage and warfare, public officials mobilized America’s ever-growing computing power, then centered on the East Coast. Public funding sponsored roughly 25 percent of Bell Laboratories’ transistor research and more than half of IBM’s R&D through the 1950s; by the end of the decade, three-quarters of the nation’s computers were committed to public purposes.

Uncle Sam employed a set of complementary policy levers to channel the Valley’s technological development in the national interest.

First, the federal government created an entirely new agency to fund and coordinate cutting-edge research and development. In the wake of the panic sparked by the USSR’s Sputnik launch, President Eisenhower created the Advanced Research Projects Agency (ARPA) in 1958. The independent agency, which would become the engine of the Space Race and Cold War innovation, was designed to spur “blue-sky thinking” beyond the traditional federal R&D and military procurement system.

It became a remarkable success of industrial planning. Rather than build its own laboratories, the agency adopted a decentralized structure staffed by practicing engineers and computer scientists. These expert-practitioners-turned-program-officers were granted the autonomy to advise and direct research efforts across firms, universities, and government labs.

Contra Hayek, such work did not require perfect knowledge or information, only a coherent view of the future. With the market’s price signals offering little useful information to the researchers and entrepreneurs advancing the technological frontier, bureaucrats were indispensable. Only the federal government had the resources, perspective, and incentives to fund and coordinate academic and private-sector researchers, as well as to facilitate the initial application and commercialization of their discoveries. As Julius Krein has observed:

Price competition often (though not always) works well for rationing, or deciding who gets what in the present, which need not require a view of the future. The same is not true of investing capital, which requires taking (more or less) risk on a (more or less contrarian) view of the future. Long-term investment capital (and thus productive capacity) simply cannot be prudently allocated on the basis of price signals alone. Hayek’s spontaneous order, in more than one sense, has no future.

For a single agency, ARPA’s reach was remarkable. In the early 1960s, the agency alone funded 70 percent of all computer research in the U.S. Even seemingly commercial technologies were direct products of government investment and planning. The personal computer, including monitors, keyboards, and “electronic pointer controllers called ‘mice,’” was first envisioned not by Steve Jobs but by ARPA officials in 1968. The agency would go on to fund research at Stanford University and MIT that developed the first mouse-and-windows graphical user interface, single-user computer workstations, and Internet protocols.

In addition to shaping innovation across the industry, ARPA’s own operations spawned extraordinary breakthroughs. Its computer network, known as ARPANET, functioned as a proto-Internet. A vehicle for experimentation rather than a fully developed service, the network produced the first communication protocols and spawned network applications like file transfer and e-mail. ARPANET both pioneered and publicized such breakthroughs, building on the time-sharing research of the government-backed Project MAC at MIT and paving the way for the first Internet protocols and successor networks like NSFNET.

As a second policy lever, public and military officials acted as a “collaborative first customer” for new technologies. In new fields where no commercial market existed and the capital requirements for scaling up manufacturing processes were prohibitive, federal procurement contracts were vital.

The market for transistors and, later, integrated circuits depended on these government contracts. Fairchild Semiconductor’s first contract was with defense contractor IBM, for high-voltage silicon transistors used in B-70 bombers’ on-board computers. Later, two procurement decisions by NASA and the Air Force for missile and space-guidance projects pushed chips into large-scale production. NASA alone constituted 60 percent of the integrated-circuit market in the 1960s.

The customer is always right—especially when the customer is the Pentagon. Beyond providing revenue to tech companies, government contracts shaped the trajectory of technological development, tailoring key features of early computing technologies to the particular needs of military and defense applications. Santa Clara County earned the name Silicon Valley, not Germanium Valley, only because of the Air Force’s specific preference for new silicon transistors over standard germanium ones.

In theory, chipmakers and other early technology companies could have ignored the government market and attempted to foster a private one. In reality, they didn’t. There was no one else to sell to: no customers had yet built businesses reliant on products that did not exist. There was nowhere else to attract investment, because no private institution had adequate resources or appetite for risk. Only government had the scale and incentives to fund and purchase breakthrough technologies.

Third, federal policymakers addressed the shortfall of private investment in technology. As the nascent venture capital industry began to emerge, it was augmented by government funding that matched private investments. Beginning in 1958, the Small Business Administration operated a venture capital fund-matching program, investing two dollars for every venture capital dollar.

Venture capital would remain an important—albeit small—industry boosted by federal subsidies until the late 1970s, when regulatory reforms opened up a massive pool of institutional capital for venture capitalists. But by this point, the government had already funded the core R&D, sponsored advanced manufacturing, and facilitated the commercialization that made tech start-ups attractive to private capital.

The Valley’s booming technology sector fared well despite the lack of venture capital. Government funders directed capital to cutting-edge labs and to firms commercializing new breakthroughs. Rather than crowd out private investment or distort the incentives of entrepreneurs, government proved an essential partner in the Valley’s success. Even after venture capital boomed, government remained an essential investor in technology. Federal programs like the Small Business Innovation Research program filled critical funding gaps for early-stage tech firms, investing where even the boldest venture capitalists would not.

The Valley Lies Fallow

Market fundamentalists make a practice of rejecting all evidence of effective government support, insisting that things might have gone even better had policymakers stayed out of the way. But with Silicon Valley, the counterfactual is also on disastrous display.

By the 1980s, federal policy had narrowed its focus to basic and applied research, leaving commercialization and scaling to the market. The first consequence: without support and demand for domestic production, new technologies migrated offshore.

The breakthrough LCD technology underlying flat-panel displays was first discovered by military-funded researchers at Westinghouse in the 1970s. But after Westinghouse shut down its LCD research, no major computing company was willing to fund the work without immediate, commercially viable output. Apple, Xerox, IBM, Compaq, and others all denied requests to support the project, citing the lack of manufacturing capability needed to produce flat-panel displays at a competitive price.

ARPA (renamed “Defense Advanced Research Projects Agency” or DARPA in 1972) eventually picked up where private industry failed, funding research that would form the basis of portable electronic displays. But by then the damage had already been done. Japan had taken up the flat-panel technology, and Seiko was selling color pocket televisions in the United States—a direct infringement on Westinghouse’s patents. American industry refused to fight, neither challenging Japanese IP theft nor setting up production of its own.

Without adequate government support and left to their own devices, major American computer companies had no interest in cultivating a domestic supplier base for flat-panel displays, and instead continued to rely on cheap imports from East Asia. To this day, all of the world’s flat-panel display factories are located in Korea, Taiwan, Japan, and China; none are in the U.S.

Other critical technologies suffered a similar fate—even in sectors where the U.S. was an undisputed leader. Semiconductors eventually fell prey to the “market-based” policy program and its blithe disregard for the sector’s importance to the innovation ecosystem and the information economy. “Potato chips, computer chips, what’s the difference?” as Michael Boskin, chairman of George H.W. Bush’s Council of Economic Advisers, quipped. As U.S. support for semiconductors waned and rivals emerged in East Asia, the industry turned to offshoring fabrication, and eventually major American chipmakers lost the ability to design and produce the most advanced chips.

Second, a free-market wave of new and better technological advances has failed to materialize. The frothy boom of V.C.-backed, asset-light firms in the 1990s yielded little and was built upon the hardware innovations of prior decades. In theory, that decade’s boom should have enabled the buildout of digital infrastructure led by American hardware companies. But America’s would-be champions opted for a different course.

IBM, for example, had replenished and even grown its capital stock for decades. It was a leading innovator in mainframe computers, magnetic stripe cards, and P.C.s, but had experienced no real financial boom since 1980. That changed after the turn of the millennium, once it sold its P.C. business to Lenovo and acquired PwC Consulting. It has essentially transformed into a consulting firm and has not sustained its capital stock in nearly two decades.

The networking company Cisco adopted a similar strategy. It was poised to build the Internet’s hardware infrastructure after introducing modems and routers during the Nineties Internet heyday.

It was the most valuable company in the world at the height of the dot-com bubble. But it chose to migrate from hardware into software and began to erode its capital stock in 2003; it has yet to recover. While China has boosted Huawei as a national champion, the would-be American telecom leader has executed $101 billion in stock buybacks in the last 15 years but made just $15 billion in capital expenditures over the same period.

Today, Silicon Valley concentrates on capital-light, fast-scaling, derivative app development and “software-as-a-service”—a predictable course in the absence of public investment, demand, vision, and coordination. It is “innovation in bits, but not in atoms,” as Peter Thiel complains.

During the heyday of public-private partnership in the twentieth century, America’s corporate labs earned three Nobel Prizes for their advances in microelectronics. Tech giants such as Google, Apple, and Facebook, deemed the “most important companies on the planet” by AEI’s James Pethokoukis, have never earned one—not that they ever aspired to.

The United States once marshaled its best minds to discover scientific breakthroughs and master engineering challenges. Today, under the banner of “market-driven innovation,” it has liberated them to more efficiently target ads for subscription underwear services to Instagram users.

Let a Thousand Subsidies Bloom

The passage of the bipartisan CHIPS and Science Act this past summer is a sure sign that policymakers are beginning to realize that the “design here, make there” business model doesn’t work for America—especially when the business in question is in a critical sector. Though a necessary measure, its deficiencies underscore just how much more policymakers must do if the United States is to retain technological dominance, retake the lead in key sectors, and spur a new generation of innovation. As a first step, they should suspend whatever deference they have to the purported geniuses of Silicon Valley.

The case of Elon Musk is illustrative. After making his fortune founding PayPal, Musk went on to build his reputation on literal moonshot “hard tech” projects such as Tesla and SpaceX. Both are ambitious, but neither was viable on its own. The market’s ruthless demand for profit rather than innovation has been trumped only by “f*** you” money—and a little help from Uncle Sam.

The latter, as it turns out, is critical, especially for long-term viability. Market fundamentalists warn that government involvement will distort incentives and crowd out private investment. But Musk shows that that is exactly what we need.

Tesla is a success because the federal government decided to subsidize electric vehicles. SpaceX depends on NASA contracts—much as Fairchild once did. Musk is a subsidy farmer, but that phrase does not deserve its negative connotation. The government plants subsidies precisely in the hope that entrepreneurs like Musk will harvest them, thereby aligning the pursuit of profit with the public interest. All of us enjoy the bounty.

This essay is adapted from a case study originally published at American Compass.
