Sunday, December 21, 2025

Our Athenian American democracy

America may still be a constitutional republic in name, but recently it has operated more as an unchecked Athenian-style democracy. Americans may appreciate the rich culture and the brilliant minds of classical Athens. But rarely do they learn that much of the money fueling the fifth-century-B.C. Athenian renaissance derived from tribute coerced from imperial subject states, and that Athenian democracy was inherently unstable and often quite self-destructive.

Democracy—the word itself means “people (demos) power (kratos)”—originated in late-sixth-century-B.C. Athens through the reforms of Cleisthenes. The Athenian popular leader transferred political power from the traditional tribal clans to the general Assembly of citizens. What followed, however, was a historic but insidious growth in the power of the new citizen Assembly, which made, interpreted, and enforced laws—usually without many executive or judicial checks and balances.

America’s art of war

The Continental Army is often stereotyped as a ragtag group of irregulars. Thanks to the French, we are told, the army was sufficiently supplied, funded, and trained to defeat British professionals and win the war. Yet throughout the American Revolution, Patriots exhibited a singular ability to create their own capable military forces ex nihilo. And they ingeniously supplied and equipped local militias in addition to their eventual establishment of a Europeanized and professional army. That national legacy of military adaptation and innovation has characterized almost all subsequent American wars. Indeed, the spirited “army” that began the war at Lexington, Concord, and Bunker Hill in 1775 bore little resemblance to the multifaceted and lethal military that defeated the British at Cowpens and at Yorktown in 1781 to end the war.

Such radical evolution has also been characteristic of the later American military, which has often been faulted for entering a war unprepared, naive, overconfident, and poorly led—but never remained so for long. Despite democracy’s innate unease with military culture and standing armies, within a year of America’s entry into its wars, the nation’s assets of individualism, fluidity between classes, openness to experimentation and innovation, meritocracy, affinity for technological change, economic dynamism, and love of liberty have consistently resulted in rapid adaptation and eventual success over much more experienced enemies.

The American Revolution established the tradition of a flexible military that quickly evolves from a disorganized and confused peacetime militia to deadly wartime soldiers. For example, at the First Battle of Bull Run (First Manassas, July 21, 1861), in the initial engagement of the Civil War, an imposing but disorganized and poorly led Union force of 35,000 raw recruits was routed and humiliated by a smaller but more spirited Confederate army. Colonel William Tecumseh Sherman, one of the few heroes that day on the losing Union side, and soon to be a general, left the flotsam and jetsam of Bull Run utterly depressed but still eager to bring about changes. He feared that the North’s armies were so disorganized, poorly equipped, and badly trained that they might never be able to invade and defeat the Confederacy, then about three-quarters the size of Europe.

Yet just over three years later in the autumn of 1864, General Sherman’s Army of the West had accomplished just that, after cutting a destructive swath through the heart of the Deep South from Georgia through the Carolinas before approaching Lee’s army in northern Virginia to help General Grant end the war.

The Macon Telegraph memorably described the terrifying approach of Sherman and his army through Georgia: “unsated still in his demoniac vengeance, he sweeps over the country like a simoon of destruction.” Later, on the grandstand overlooking the huge Union victory parade through Washington, D.C., in May 1865, the German ambassador remarked of Sherman’s army, as its war-torn veterans marched by: “An army like that could whip all Europe.”

When the United States declared war on Germany in April 1917, the American military was little more than a frontier constabulary of 120,000 irregulars. Yet by the armistice of November 1918, little more than a year and a half later, the new American Expeditionary Force had landed two million doughboys in France with only two hundred or so enlistees lost to enemy naval action. The U.S. economy went from essentially zero pre-war production of artillery shells to manufacturing over 50 million shells a year—the greatest per annum output of any belligerent in the entire war.

When war broke out again in Europe on September 1, 1939, the Depression-era U.S. Army was only some 170,000 soldiers strong—seventeenth in size among the world’s militaries, behind even tiny Portugal’s. By September 1945, the military had swelled to 12.2 million soldiers.

The U.S. wartime economy, so recently battered by the Great Depression, was by late 1945 producing a greater gross domestic product than all of the other major belligerents put together. At war’s end, the American fleet, in both total tonnage and number of ships, was larger than the rest of the world’s major navies combined. Americans created the two most expensive, sophisticated, and deadly weapons systems of the war—the B-29 long-range heavy bomber and the Manhattan Project’s atomic bombs. And GIs, with little prior military training, in horrific battles on D-Day, at the Battle of the Bulge, and on Okinawa proved themselves among the most skilled and daring on either side.

The precedents of this extraordinary, near-frenzied American military resilience and recovery—which persisted, for most of the country’s history, without an institutionalized military culture—were established during the Revolutionary War. The first Americans quickly embraced military innovations, welcomed adaptation, and were humble enough to learn from more experienced European commanders.

They rarely saw unconventional forces as antithetical to traditional armies or as connoting lesser social status. Instead, they envisioned irregulars as complementary and ideally suited to the geography and local populations of Colonial America. The key was to draw on all manpower available without prior conventional restrictions. Washington, who believed victory would be achieved only when his orthodox infantry matched the caliber of the British army in set battles, nevertheless encouraged irregular and ad hoc militias, raiders, and guerrillas to work in tandem with his own Europeanized army.

General Nathanael Greene, perhaps second only to Washington in tactical and strategic insight, gained Washington’s confidence to form an irregular hit-and-run force of about a thousand infantry and eight hundred cavalry, to be supplemented by local militia. Greene himself avoided pitched battle with the better-equipped and larger British contingents in South Carolina. He wrote Washington, “I must make the most of a kind of partizan war.”

Washington was impressed, and quickly responded, “[I] approve of your plan for forming a flying army.” Greene’s “flying army,” soon working in tandem with Brigadier General Francis Marion (the “Swamp Fox”), saw a greater number of battles and engagements in South Carolina than took place in any other theater of the war. Indeed, one in five American war deaths occurred in South Carolina, a state that the much-larger British forces could never completely subdue.

In the Northern theater, Ethan Allen’s Green Mountain Boys of Vermont similarly wore down British forces. They proved central in capturing Fort Ticonderoga at the critical junction of Lake Champlain and Lake George, during a now-famous late-night attack on May 10, 1775. The capture of the fort allowed General Washington to send Henry Knox to the stronghold to transfer its critical arsenal of cannon to Boston, where the captured artillery proved decisive in forcing the British to evacuate the city in 1776.

Traditional American generals such as Benedict Arnold, Horatio Gates, and Charles Lee all argued for the value of the militias and “special forces.” They sought to integrate into their own regular armies these autonomous bands of infantry and cavalry, which brought their own food, owned their horses, and served without regular pay. The revolutionary impetus to use these assets was partly ingrained, given the successful prior use of militias and rangers in the earlier French and Indian War (1754–63), and partly tactical, drawing on colonials with specialized knowledge of terrain, local populations, and enemy intentions.

Later, John Adams famously noted that the keystones of revolutionary America that had provided the new nation its stability and security were small towns, religious congregations, schools—and militias.

This early tradition of encouraging raiders to work in concert with regular forces also became a hallmark of later American militaries. In the Civil War—one of the rare major conflicts fought on American soil—both sides relied on guerrilla forces. Perhaps the most famous were, on the Union side, sometimes brutal units such as Richard R. Blazer’s “Blazer’s Scouts” and Samuel C. Means’s “Loudoun Rangers,” and, on the Confederate side, Elijah V. White’s “White’s Comanches” and “McNeill’s Rangers,” commanded by John Hanson McNeill. In Virginia, Colonel John Singleton Mosby (the “Gray Ghost”) and Turner Ashby for years tied down thousands of Union troops.

The mounted “Partisan Brigades” of “the devil” Nathan Bedford Forrest and John Hunt Morgan (sometimes called “the South’s Francis Marion”) operated throughout Tennessee and Kentucky in conjunction with the Army of Tennessee in 1862 and 1863. Sherman felt that Forrest had caused more havoc to Union infantry operations than any other Confederate force.

In response, the Union’s devastating 1864 raids by Philip Sheridan’s mounted troops into the Shenandoah Valley of Virginia, and Sherman’s self-supporting infantry with its notorious “bummers” in Georgia and the Carolinas, detached from all logistical bases, were themselves unorthodox. In the spirit of the Revolutionary War’s flying columns, they were mixtures of raiding and traditional marching, designed to make war on Confederate infrastructure and property, to free slaves, and to force civilians to withhold support from Southern troops.

The use of insurgents, guerrillas, and special forces continued from World War II to Iraq and Afghanistan. The idea of an “irregular war” involving all imaginable sorts of forces, both unorthodox and conventional, sometimes behind enemy lines, targeting infrastructure and manufacturing as well as armed forces, was a direct legacy of the revolution. From the long-range penetration efforts against the Japanese in Burma by Merrill’s Marauders to the efforts of Green Berets to organize Hmong resistance during the Vietnam War, such special forces had their American origins in the Revolutionary War.

Just as importantly, the outmanned and outgunned American forces in the revolution were especially open to ad hoc technological innovation, keen on scientific breakthroughs, and reliant on novel military protocols—in ways unexpected by the wealthier and better-established British military. While the extent of the colonials’ use of the somewhat clumsy and difficult-to-load Pennsylvania long rifle has been exaggerated, the novel rifled weapons nonetheless tripled the traditional hundred-yard range of the British “Brown Bess” smooth-bore musket. Soon American snipers learned to pick off British officers at the then-unheard-of distance of over three hundred yards. Such targeting of British commanders played a key role in the monumental American victory in 1777 at Saratoga, New York, a turning point that many historians cite as the catalyst that finally convinced the French to intervene on behalf of what they now saw might be the winning side.

Perhaps the most effective use of early rifles was by “Morgan’s Riflemen,” a militia led by Daniel Morgan. At the small but critical Battle of Cowpens (1781) in South Carolina, Morgan brilliantly combined a diverse force of regular troops, militia, and his rifled sharpshooters to all but destroy the infamous “British Legion,” led by the feared and often reckless Banastre Tarleton. The crack British troops were lured into a veritable American ambush of feigned retreats, multilayered defenses, sharpshooting snipers, and double envelopment.

Lacking the industrial base or scientific infrastructure of Britain, the Americans proved in many ways more innovative. General Washington was alarmed that his army suffered inordinate losses from smallpox epidemics over the first two years of the conflict. Nearly two decades before Edward Jenner’s use of safe and effective mass cowpox inoculations to prevent smallpox, Washington gambled by ordering thousands of soldiers of the Continental Army to be inoculated during the winter of 1777.

The Americans employed an early but dangerous method known as “variolation,” one that collected the pus from the lesions of infected smallpox patients and inserted the live effluvia into the skin of the healthy. The risky practice worked. Not only might the resulting immunity have saved the American army, but it also provided advantages over mostly uninoculated British adversaries. After Washington’s inoculation program, fewer than 2 percent of American soldiers were lost to subsequent smallpox epidemics.

Some entrepreneurial efforts to find new weapons and protocols were encouraged even when they proved far ahead of their time and impractical. Tradition has it that the world’s first military submarine, “The Turtle,” was designed and built in 1775 by the Yale graduate David Bushnell. In conjunction with the Continental command, Bushnell had been experimenting with ways to attach mines to harbored British warships, and the effort soon led him to explore submarine delivery systems. The result was the Turtle—a one-man, egglike capsule, built of oak with reinforced iron bands. The idea was to submerge the hand-powered ovoid shell, steer it near to docked British ships, attach a fused mine, and then paddle away before the bomb went off. Allegedly the first use of a submersible craft in actual combat, the Turtle failed in its three attempts to place an explosive device near HMS Eagle, docked in New York Harbor.

Finally, the Continental Army proved eager to incorporate military knowledge, tactics, and experience from more experienced European militaries. Seeking to hone a European-type army, Washington welcomed in 1777 the help of the expatriate veteran Prussian officer Friedrich Wilhelm von Steuben. He had served as the aide-de-camp to Frederick the Great and was intimate with the modern Prussian army, then the most innovative and disciplined force in Europe. Von Steuben not only brought Prussian order and drill to the American army, but he also schooled it in the use of sequential fire by rank, the famous Prussian “oblique order” of rapidly marching diagonally across the battlefield to concentrate the mass of the army on a fixed spot on the flank of the enemy, and the tradition of positioning the supreme commander near the front so he could lead by example.

Von Steuben’s training manual Regulations for the Order and Discipline of the Troops of the United States (1779) was widely read by Washington’s officers, who adopted its disciplined training regimen and fire control—and it even served as a blueprint for U.S. Army handbooks up to the modern era.

In July of 1780, Jean-Baptiste Donatien de Vimeur, comte de Rochambeau, arrived in America with four thousand French troops—soon to be joined by the French fleet. But unlike some prior French commanders with far greater experience than Washington, the fifty-five-year-old Rochambeau, a veteran of the War of the Austrian Succession and the Seven Years’ War, collaborated with him as an equal. He correctly advised that the ultimate American objective was not recapturing New York but destroying the British army at Yorktown.

The better-known Marquis de Lafayette, Marie-Joseph Paul Yves Roch Gilbert du Motier, joined Washington’s staff, fought in a number of early battles, was wounded, spent the hard winter with the troops at Valley Forge, and returned to France to help raise money, arms, and French soldiers for Washington. He returned to fight in the last battle of the war, the victory at Yorktown. Lafayette, like Rochambeau, got along well with Washington and the Americans, and like von Steuben he labored to make the American military comparable in numbers and skill to the British. Indeed, the later American institutionalization of separate infantry, cavalry, and artillery corps, the idea of a formal general staff, and the practice of breaking military divisions into regiments and battalions can be traced to French and Prussian advisors.

From the beginning of the U.S. military, foreigners found it easy to work with affable Americans, who were mostly devoid of class pretensions, titles, and hierarchies—a tradition that endures today. It is no surprise that the Americans in both World Wars I and II were seen as the most congenial of their allies, eager to learn from their more experienced partners.

Indeed, this initial willingness to adopt, modify, and improve foreign military practices under allied consultation and instruction was enshrined in all later American armies. In the American Civil War, both sides had been schooled in Napoleonic practice and doctrines, which, albeit outdated, still formed the basis for the early training of the Union Army.

When the American Expeditionary Force arrived in France in 1917, after initial discord with the British and French over the preservation of American autonomy, it soon learned from its allies’ prior bitter experience that the machine gun and artillery, not well-trained American marksmen, were the key to victory in the trenches. The British also wisely informed the late-arriving Americans in 1942 that unescorted daylight bombing missions over Europe were recipes for disaster. The application of the British Merlin engine to the sluggish American P-51 fighter, and of the seventeen-pounder gun to the under-armed Sherman tank, made both weapons among the best in the war.

At the end of the Revolutionary War in 1783, the formal American army was trained to European standards. But it was also energized by indigenous minutemen, snipers, and special forces—all of which have parallels today. Contemporary American military mastery of machines and gadgetry is also a Revolutionary War inheritance. The French and European advisors certainly aided the Americans as part of their own self-interested efforts to weaken Britain. But they also did so in admiration of American idealism, openness, familiarity—and willingness to learn, consult, and collaborate.

Can the Dark Ages return?

Western civilization arose in 8th-century-B.C. Greece. Some 1,500 city-states emerged from a murky, illiterate 400-year Dark Age. That chaos followed the utter collapse of the palatial culture of Mycenaean Greece.

But what reemerged were constitutional government, rationalism, liberty, freedom of expression, self-critique, and free markets — what we now know as the foundation of a unique Western civilization.

The Roman Republic inherited and enhanced the Greek model.

For a millennium, the Republic and subsequent Empire spread Western culture, eventually to be inseparable from Christianity.

From the Atlantic to the Persian Gulf and from the Rhine and Danube to the Sahara, there were a million square miles of safety, prosperity, progress, and science — until the collapse of the Western Roman Empire in the 5th century AD.

What followed was a second European Dark Age, roughly from 500 to 1000 AD.

Populations declined. Cities eroded. Roman roads, aqueducts, and laws crumbled.

In place of the old Roman provinces arose tribal chieftains and fiefdoms.

Whereas once Roman law had protected even rural people in remote areas, during the Dark Ages, walls and stone were the only means of keeping safe.

Finally, at the end of the 11th century, the old values and know-how of the complex world of Graeco-Roman civilization gradually reemerged.

The slow rebirth was later energized by the humanists and scientists of the Renaissance, Reformation, and eventually the 200-year European Enlightenment of the 17th and 18th centuries.

Contemporary Americans do not believe that civilization in the West could self-destruct a third time, followed by an impoverished and brutal Dark Age.

But what caused these prior returns to tribalism and loss of science, technology, and the rule of law?

Historians cite several causes of societal collapse — and today they are hauntingly familiar.

Like people, societies age. Complacency sets in.

The hard work and sacrifice that built the West also created wealth and leisure. Such affluence is taken for granted by later generations. What created success is eventually ignored — or even mocked.

Expenditures and consumption outpace income, production, and investment.

Child-rearing, traditional values, strong defense, love of country, religiosity, meritocracy, and empirical education fade away.

The middle class of autonomous citizens disappears. Society bifurcates between a few lords and many peasants.

Tribalism — the pre-civilizational bonds based on race, religion, or shared appearance — re-emerges.

National government fragments into regional and ethnic enclaves.

Borders disappear. Mass migrations are unchecked. The age-old bane of antisemitism reappears.

The currency inflates, losing both its value and the public’s confidence. General crassness in behavior, speech, dress, and ethics replaces prior norms.

Transportation, communications, and infrastructure all decline.

The end is near when the necessary medicine is seen as worse than the disease.

Such was life around 450 AD in Western Europe.

The contemporary West might raise similar red flags.

Fertility has dived well below 2.0 in almost every Western country.

Public debt is nearing unsustainable levels. The dollar and euro have lost much of their purchasing power.

It is more common in universities to damn than honor the gifts of the Western intellectual past.

Meanwhile, the reading and analytical skills of average Westerners, and Americans in particular, steadily decline.

Can the general population even operate or comprehend the ever-more sophisticated machines and infrastructure that an elite group of engineers and scientists create?

The citizen loses confidence in an often corrupt elite, which will neither protect the nation’s borders nor spend sufficient money on collective defense.

The cures are scorned.

Do we dare address spiraling deficits, unsustainable debt, and corrupt bureaucracies and entitlements?

Even mention of reform is smeared as “greedy,” “racist,” “cruel,” or even “fascist” and “Nazi.”

In our times, relativism replaces absolute values in an eerie replay of the later Roman Empire.

Critical legal theory claims crimes are not really crimes.

Critical race theory postulates that all of society is guilty of insidious bias, demanding reparations in cash and preferences in admission and hiring.

Salad-bowl tribalism replaces the assimilation, acculturation, and integration of the old melting pot.

Despite contemporary America’s far greater wealth, leisure, and science, was it safer to walk in New York or to take the subway in 1960 than it is now?

Are high school students better at math now or 70 years ago?

Were movies and television more entertaining and ennobling in 1940, or are they now?

Are nuclear, two-parent families the norm currently or in 1955?

We are blessed to live longer and healthier lives than ever — even as the larger society around us seems to teeter.

Yet the West historically has been uniquely introspective and self-critical.

Reform and Renaissance historically are more common than descents back into the Dark Ages.

But the medicine for decline requires unity, honesty, courage, and action — virtues now in short supply on social media, amid popular culture, and among the political class.

Blocking Canada’s Silent Suicide From Creeping Into America

America’s neighbor to the north legalized euthanasia in 2016, and since then more than 75,000 Canadians have died under Canada’s Medical Assistance in Dying program. Canada also has no restrictions on abortion and considers a baby a human only after it passes through the birth canal.

In 2021, Canada took its euthanasia laws a step further, allowing not only those facing a foreseeable death but anyone with a serious medical condition to end their lives. And Canada is considering going even further in 2027 by allowing those with mental illness to die by euthanasia.

Canada has become a “totalitarian wild west,” according to Liana Graham, who grew up in Canada and now works as a research assistant in domestic policy at The Heritage Foundation. 

Areas that Canada should regulate, such as abortion and physician-assisted suicide, it does not; instead, it has created stringent regulations around freedom of speech and religion, according to Graham, who says America should heed a warning from Canada.  

Eleven U.S. states and Washington, D.C., allow physician-assisted suicide, and Illinois might soon become the 12th. Canada has proven, Graham argues, that once a society begins to deny the value of all life, policy can quickly devolve into something that looks and sounds like it is straight out of a George Orwell novel.  

On this week’s episode of “Problematic Women,” Graham joins the show to discuss the ways America can keep itself from becoming Canada 2.0 and protect the value of life that was intrinsic to America’s founding. 

Also on today’s show, we wrap up the year by discussing President Donald Trump’s Wednesday night address to the nation. 

Wednesday, December 17, 2025

Silicon Valley’s moral bankruptcy

The state of California is unrecognizable now from how it appeared in the era of Governors Ronald Reagan and Pete Wilson, when it was a model of conservative governance, had an expanding population, and was the site of visionary infrastructure developments. Much of the state’s decline has been driven by the metamorphosis of its Democratic Party into a political organization focused on favoring the richest citizens, massive global corporations, and elite, prestigious universities. This strange left-wing concoction of politics, influence, and wealth is centered in the corridor of Silicon Valley, Stanford University, and San Francisco. And it is now driving the national Democratic Party, and with it the nation, steadily leftward. How, then, does it operate?

Four of America’s ten largest corporations are headquartered in Silicon Valley. Three of these are in the top five, including the world’s wealthiest company, Apple (number one, with $2.5 trillion in market capitalization), along with Alphabet (number three, with $2 trillion) and Meta (number five, with $1 trillion). By any classical definition, these companies operate as near monopolies. Alphabet, for example, controls about 84 percent of all global daily internet searches. Apple sells about half of America’s cell phones, and accounts for almost a quarter of the world’s purchases of mobile devices. Facebook has captured 65 percent of the world’s social-media market. Nvidia (number eight, with $550 billion) currently dominates 80 percent of the graphics-processing-unit market. Never has so much money been concentrated in such a small place so quickly.

Tech companies forged these dominant market positions in the usual way: by squeezing out and bankrupting smaller companies or, more commonly, by buying them out. Over a decade ago, the former Google CEO Eric Schmidt, in slightly exaggerated fashion, joked that Google, now known as Alphabet, bought one company a day. The company still owns well over two hundred major internet concerns, the vast majority of which were once start-ups or rivals. Apple has absorbed over forty large corporations. Meta (formerly Facebook) has swallowed over ninety companies.

The modern world of technology certainly has the means of effecting monopolies well beyond those of even the nineteenth century’s deregulated industrial cartels. The tech giants have created their own online stores that are the sole vendors for the applications that are often essential for consumers to access social-media sites and email servers. Consumers also need their platforms to buy services and products. It takes only a few of these gatekeeper corporations to strangle any online company they choose.

Big Tech’s monopolistic ethos is insidious. Entrepreneurs found new tech companies for the sole purpose of selling out for billions of dollars to Apple, Meta, Alphabet, or Microsoft. Often they assume that their product will be eliminated simply to remove competition.

The result is that in a matter of hours a few tech companies can demolish any targeted business or ideological competitor. An upstart like Parler, a conservative free-speech alternative to Twitter, was on a trajectory in early January 2021 to sign up twenty million new users. Yet it was virtually strangled in a single weekend by the coordination of Amazon, Apple, and Alphabet. They colluded, operating with the shared idea that the tiny Twitter rival did not appropriately censor user content in the post–January 6 climate. The posts did not suit the tastes of the left-wing Big Tech industry.

Accordingly, Alphabet kicked Parler off its Google Play Store. That exclusion made it impossible for millions of mobile-phone users to download the application. Apple banned Parler from its Apple App Store, even though it was then ranking as the number-one free application. Amazon cut Parler off entirely from its Amazon Web Services. As a result, Parler went in hours from a rising conservative answer to Big Tech’s social-media monopoly to a nonentity.

Silicon Valley has so far been able to avoid antitrust legislation by its now-well-known political two-step. When conservatives and Republicans are in power and considering antitrust enforcement, Big Tech boasts of its all-American, free-market success. It brags that, if given free rein, it will easily outpace our foreign competitors. It poses as the best modern capitalist representative of the tradition of can-do Yankee entrepreneurialism.

Yet when Democrats and leftists achieve control of the White House or Congress, tech lords use a different tactic for achieving exemption. With a wink and a nod, Silicon Valley reminds the kindred Left that its monopoly-driven riches are inordinately put in service of Democratic candidates and their left-wing causes. And indeed they are.

For example, of the seventeen U.S. tech companies valued at $100 billion or more, 98 percent of their aggregate donations are directed to Democrats. Dustin Moskovitz, a cofounder of Facebook who is worth a reported $11 billion, gave Hillary Clinton $20 million in 2016 and Joe Biden $24 million in 2020. Karla Jurvetson, the former spouse of the tech mogul Steve Jurvetson (SpaceX, Tesla), sent some $27 million to Elizabeth Warren, Barack Obama, Hillary Clinton, and Joe Biden. Reid Hoffman (LinkedIn) pledged nearly $5 million to stop Donald Trump in 2020. Some fifteen Silicon Valley rich people sent more than $120 million to left-wing candidates between 2018 and 2020 alone.

Yet Silicon Valley, along with Big Tech in general, does not just monopolize the market and pour hundreds of millions of dollars into the Democratic Party. Its monopolistic control over social media also allows it to use shadow banning and account cancellation to prune away the online expression of millions of conservatives deemed unhelpful to the progressive project.

Once Facebook and Twitter kicked Donald Trump and many of his supporters off their social-media platforms (but not the Taliban or the Iranian theocracy), there was a stampede of other social-media entities such as Snapchat and TikTok to do likewise. Recently, Microsoft’s advertising subsidiary Xandr, under pressure, finally promised that it would cease blocking sites deemed conservative from earning their advertising revenues. Xandr had used a third-party leftist “disinformation” service, the “Global Disinformation Index”—a United Kingdom–based organization affiliated with two other U.S. nonprofit groups—to bankrupt conservative websites, in an effort to censor their supposedly improper expression.

Often social media’s partisan suppression of First Amendment rights serves at the beck and call of government agencies that are otherwise barred from violating the Bill of Rights. Recently, among the so-called Twitter Files, a trove of internal corporate communications released by Twitter’s new owner, Elon Musk, the FBI popped up as having hired Twitter for a sum of over $3 million to suppress unwanted sites and individuals supposedly spreading “disinformation.”

The FBI has served as a sort of social-media doorkeeper for additional censorious government agencies such as the Department of Defense, the Centers for Disease Control, and the CIA. They all wished to pass through to Twitter their respective enemies lists—those deemed worthy of censorship or banning. Even Congressman Adam Schiff (D-CA) sought to use Twitter to silence his media critics. In the case of the CIA, the agency disguised its improper role by adopting the euphemism OGA (“Other Government Agency”) in its dealings with Twitter. It is illegal for the agency to surveil domestically, much less suppress the expression of U.S. citizens, but of course those rules don’t apply when political enemies are the target.

The operative progressive vocabulary was not “censorship” and “suppression” but rather fighting “disinformation.” Translated, that meant accusing political opponents of either engaging with “the Russians” or simply peddling “conspiracy theories” in their ignorance. Take, for example, the recently revealed project known as Hamilton 68, whose board of advisors includes Stanford professors and fellows as well as Bill Kristol, John Podesta, and an array of former government officials. Hamilton 68 sought to smother 644 accounts purportedly tied “to Russian influence activities online.” Yet even that ruse proved too much for Twitter’s notorious auditors. In the words of Twitter’s former chief censor, Yoel Roth, the Hamilton 68 list was “bullshit.” In this incestuous relationship, the Washington bipartisan establishment, at best, goes easy on Silicon Valley, and, at worst, hires it out as a third-party contractor to suppress individual citizens’ First Amendment rights.

The state actors also develop expectations that lucrative sinecures await them after government service. Indeed, Silicon Valley has devolved into a revolving-door landing pad for former government officials seeking a profitable retirement or breathing space between government appointments. Some twelve high-ranking FBI officials who worked closely with Twitter employees to supply the names and accounts of perceived political enemies of the Biden administration eventually found high-paying jobs with the social-media concern. The former FBI general counsel James Baker is now facing a storm of criticism and congressional inquiries about his agency’s role in seeding the mythical Steele dossier to the media and within government. In 2018, he simply retired and rotated into a role as head legal counsel at Twitter. Once there, he garnered a reported $8 million a year in total compensation. That was likely fifty times what he earned per annum at the FBI.

The array of liberal luminaries ending up in Silicon Valley is quite astonishing. Big Tech invests in them either on the surety that they will again be useful when they return to the administrative state, or as compensation for their past ideological service, or simply for their celebrity left-wing bona fides. Indeed, Silicon Valley does for Democratic grandees what General Dynamics, Lockheed Martin, Northrop Grumman, Raytheon, and other defense contractors do for retiring three- and four-star generals and admirals who join their lobbying teams or become board members, on the premise that their contacts in the Pentagon will help win lucrative arms contracts.

Ryan Metcalf, a former senior analyst in the Obama White House, on leaving government gravitated immediately to PayPal. Nick Shapiro, the deputy chief of staff at the CIA, landed at Airbnb in “crisis communications.” The top Obama advisor David Plouffe went to Uber. Lisa Jackson, the Environmental Protection Agency head, found her way to Apple. Attorney General Eric Holder also ended up working for Airbnb. The Facebook executive Sheryl Sandberg was the former chief of staff to Treasury Secretary Larry Summers. Michelle and Barack Obama in their last year in the White House negotiated a deal reportedly worth $100 million to work as “content creators” for the Los Gatos–based Netflix.

Yet the left-wing–tech fusion involves more than just freedom from regulation, huge political donations to Democratic candidates, and cushy sinecures. More disturbing are the gargantuan cash infusions from Silicon Valley into “nonprofits” and legal firms that seek to alter the way Americans vote. In a now-notorious 2021 essay in Time, Molly Ball boasted of the way in which Silicon Valley had sought to brand its efforts to alter the 2020 balloting process as being in the service of “election integrity.” In truth, Silicon Valley’s money and expertise ensured that nearly 70 percent of the balloting in many states did not occur on Election Day. Yet the ensuing flood of early and mail-in ballots somehow resulted in a reduction of the usual rejection rate for improper, incomplete, or illegal ballots.

Ball praised the effort as a salutary “cabal” and “conspiracy.” Indeed, she bragged that her essay was “the inside story of the conspiracy to save the 2020 election.” This noble complot was “unfolding behind the scenes” and “touched every aspect of the election.” It “got states to change voting systems and laws and helped secure hundreds of millions in public and private funding.”

Ball turned giddy when detailing how Democratic operatives “successfully pressured social media companies to take a harder line against disinformation and used data-driven strategies to fight viral smears.” Again, note that “disinformation” and “smears” are the usual euphemisms for incorrect expression deemed worthy of censorship.

Ball noted that one Laura Quinn, a progressive founder of something called Catalist and former deputy chief of staff for Al Gore, organized a “secret project,” which “she has never before publicly discussed.” It

tracked disinformation online and tried to figure out how to combat it. One component was tracking dangerous lies that might otherwise spread unnoticed. Researchers then provided information to campaigners or the media to track down the sources and expose them.

Ball gleefully details how leftist-funded groups smeared social-media expression as “disinformation” so as to curtail narratives they found unhelpful to their electoral agendas. For that end, Ball notes that activists like Quinn agreed that the “solution was to pressure platforms to enforce their rules, both by removing content or accounts that spread disinformation and by more aggressively policing it in the first place.”

And it worked. Silicon Valley indeed took a “harder line,” although one asymmetrically focused on conservatives. After a meeting of activists with Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey, one participant said it

took pushing, urging, conversations, brainstorming, all of that to get to a place where we ended up with more rigorous rules and enforcement. . . . It was a struggle, but we got to the point where they understood the problem. Was it enough? Probably not. Was it later than we wanted? Yes. But it was really important, given the level of official disinformation, that they had those rules in place and were tagging things and taking them down.

The most famous result of their “struggle” was to censor the truth about Hunter Biden’s laptop, on grounds that it was “Russian disinformation.” One poll taken after the election suggested such suppression may have swayed 2020 presidential voters. The New Jersey–based Technometrica Institute of Policy and Politics reported that nearly 80 percent of Americans who knew of the Hunter Biden laptop scandal prior to the 2020 presidential election believe that “truthful” coverage would have changed the outcome of the election.

As part of this effort to alter pre-election expression and the traditional way Americans vote, Zuckerberg injected over $400 million into key voting precincts in 2020. Ostensibly, his nonprofit recipients “just wanted to help” overworked registrars. In fact, they often coordinated their own work with left-wing poll workers paid for by groups that had received his enormous donations.

Silicon Valley has been successful in using its clout and money to mask its interference in American elections, its fusion with governmental agencies, and its suppression of First Amendment rights by adopting its own Orwellian vocabulary of “protecting election integrity,” combating “voter suppression,” and fighting “hate speech,” “misinformation,” and “disinformation.” These efforts seem more redolent of 1920s Chicago politics than the hipster image of techie Silicon Valley.

Yet nearby Stanford University helps to offer Big Tech a patina of academic credibility for its election meddling. It often does so with joint projects bearing utopian titles such as the “Stanford Internet Observatory” (directed by the former Facebook security head Alex Stamos), which is joined to the “Election Integrity Partnership,” supposedly to bring together “misinformation researchers.” And thus we come to the second leg of the powerful troika of Silicon Valley, Stanford, and San Francisco Democratic Party politics.

Stanford University often reminds the nation that it birthed Silicon Valley’s high-tech miracle. Such boasts are certainly credible. The legacy of combining cutting-edge university research with risk-taking entrepreneurship was the idea of the Stanford engineering professor, dean, and provost Frederick Terman (1900–82). His visions fueled the landmark careers of two notable students, William Hewlett and David Packard, and birthed university-sponsored and spin-off tech enterprises such as the Stanford Research Institute, the Stanford Office of Technology Licensing, and the Stanford Industrial Park.

That partnership continues today. Stanford graduates have established more start-up companies, raised more Silicon Valley capital investment, and led more tech companies than any other university’s graduates in the United States over the past fourteen years. A recent study of one hundred and fifty of Silicon Valley’s largest publicly traded corporations found that a fifth of all the chairmen of their boards of directors were Stanford alumni.

Until recently, Stanford contented itself with offering a steady stream of engineering and computer-science graduates and future financiers and corporate managers to the adjoining high-tech companies. The public face of this campus–corporate merger is not just the legions of current Stanford professors and administrators who enjoy a number of lucrative Silicon Valley board memberships. Grandees from neighboring tech companies, venture-capital firms, and investment companies also serve on Stanford’s own array of university boards and advisory committees.

Yet lately Stanford’s role has expanded from jump-starting and fueling the growth of Silicon Valley to adding a prestigious intellectual gilding to the raw money, power, and excess of a tech world that has grown to a market capitalization worth some $9 trillion. Within just a fifteen-mile radius of the campus hub are the spokes of major tech headquarters in Palo Alto, Menlo Park, Mountain View, Cupertino, and Sunnyvale.

The university’s complex and layered associations with Silicon Valley also goad the companies into promoting university-prompted commitments to woke social priorities. The left-wing campus culture that then permeates the tech companies masks the greatest concentration of wealth in the history of civilization. The osmosis, now characteristic of the Democratic Party in general, can appear bizarre. Tech billionaires in faded jeans, tie-dye T-shirts, and flip-flops stroll on University Avenue. They are surrounded by the Audis, BMWs, Mercedeses, Lexuses, and Teslas—a few with BLM stickers—that crowd Stanford’s student parking lots.

Yet in the last decade, the Stanford pipeline to Silicon Valley has been turning malodorous. In part, the unease is due to the precipitous decline of the university’s reputation after a series of scandals. Stanford’s long-serving president Marc Tessier-Lavigne—an accomplished neuroscientist, multimillionaire, and former executive vice president of the San Francisco–based bioengineering firm Genentech and cofounder of Denali Therapeutics, also of San Francisco—is currently reported to be under investigation for prior scholarly misconduct.

The accusations involve a group of papers that Tessier-Lavigne coauthored that were accompanied by supposedly enhanced and misleading illustrations. Or, as the left-wing Stanford Daily put it:

The European Molecular Biology Organization (EMBO) Journal was reviewing a paper co-authored by Tessier-Lavigne for alleged scientific misconduct. Science misconduct investigator Elisabeth Bik and other experts also identified “serious problems” in three other papers, including two where the president was the lead author.

Like the former Stanford president John L. Hennessy, who now serves as chairman of the board of Alphabet, Tessier-Lavigne is active in corporate governance, currently serving on the boards of Denali Therapeutics, Regeneron Pharmaceuticals, and Agios Pharmaceuticals. Before his presidential tenure, he was on the Pfizer board.

Stanford’s undergraduate admissions also continue to be a source of controversy after the 2019 scandal in which it was revealed that several wealthy families had paid millions of dollars to have their children admitted as recruited “athletes” through the university’s sailing program. Stanford’s current website boasts that, with its reparatory admissions policy, its incoming freshman class of 2026 is only 22 percent American white (whites account for some 67–70 percent of the U.S. population). The makeup is 29 percent Asian American, 17 percent Hispanic, 7 percent black, 10 percent “multiracial” (likely Asian-white), and 13 percent “International.” The black admission rate of 7 percent is identical to that of Harvard, Yale, and Princeton, suggesting a consensus among these schools. Stanford’s Asian and Hispanic admission rates are roughly twice those of the schools in the Ivy League, probably reflecting the demographic differences between the two coasts.

Meanwhile, Stanford’s graduate engineering program, perhaps the best in the country, has the following demographic mix: 28.5 percent American white, 17.2 percent Asian American, 1.8 percent black, 7.8 percent Hispanic, 6.5 percent multiracial, and a whopping 38.2 percent international. The engineering graduate school supplies Silicon Valley with its talent on demand. And, so far, Stanford has not applied much of its woke diversity, equity, and inclusion quota formulas to its graduate programs in Silicon Valley–related fields. Tech moguls still rely on the university to supply them with qualified Ph.D.s, and, in turn, they want to send their kids to Stanford’s undergraduate school.

So Stanford, along with other schools, recently made the SAT and ACT optional for undergraduate admissions, giving the admissions office complete freedom to use its own subjective criteria—a freedom that it had not enjoyed since the 1950s. The university will not disclose how many of those who were accepted chose not to take the now-optional standardized tests as part of the application process. Yet Stanford strangely is less reticent in boasting that it now rejects 96 percent of all applicants. In the past it has publicized that it rejected 60 to 70 percent of the 1 percent of SAT takers who received a perfect score and applied to Stanford.

A cynic might conclude that the university has good reason not to disclose how many admitted applicants who were willing to take and submit the SAT scored poorly. And it now attempts to square the circle of lax admittance standards and high rejection rates by bragging that even a perfect test score usually won’t get one into Stanford—at least for some students.

The basic truth is that Stanford uses the prestige of its research graduate schools to “monetize” its undergraduate program into a way to reward its powerful friends in corporate America, especially Silicon Valley, and in politics, especially Democratic Party politics. Stanford’s engineering graduate school sends its product to Silicon Valley, Silicon Valley sends money to Democratic politicians who protect Stanford and Silicon Valley, and Stanford admits the children of tech moguls and politicians to its undergraduate school. And as a result, members of the white working classes, who may be superbly qualified but lack such connections or money, are de facto barred from admission.

Silicon Valley’s trillions of dollars in capitalization may help explain Stanford’s huge endowment of $37 billion, the third-largest in the country (after Harvard and Yale). Yet such staggering sums in such a small radius have also contributed to an epidemic of financial and moral corruption that has brought disrepute to the university and its liaison with Silicon Valley.

Elizabeth Holmes—the charismatic and savvy founder and CEO of the now-bankrupt and disbanded Theranos—was recently sentenced to eleven years in prison for defrauding investors of hundreds of millions of dollars. Holmes applied for her first patent as a Stanford sophomore. And while she soon dropped out to found Theranos, Holmes remained a frequent visitor to campus.

For a time, that association helped fuel media puff pieces about the then-youngest female billionaire in the world. The twentysomething Holmes usually appeared on campus in a Steve Jobs–esque all-black getup, courting Stanford-affiliated luminaries. In the end, she successfully charmed the campus–corporate nexus into steering billions of dollars of investment into her mobile, miniaturized blood-testing device, the “Edison.” Holmes kept insisting, without any proof, that the Edison would revolutionize blood testing by using automated and miniaturized kits requiring only microscopic volumes of blood.

But Holmes proved no Wizard of Palo Alto. Instead, she and her co-conspirator and paramour, the Theranos COO Sunny Balwani, realized early on that their Edison was a bust and a fraud. Yet for years the two still engineered a complex con by altering data, suppressing incriminating internal reviews, and misleading investors. Before its utter collapse, Theranos had reached nearly $9 billion in capitalization and lured in some of the great investing families in the nation—the Waltons, the DeVoses, and the Murdochs—who ended up losing hundreds of millions of dollars. Balwani, like Holmes, is appealing an impending long prison sentence.

How did the medically ignorant Holmes and Balwani pull off such a brazen scheme among the supposedly savviest investors on the planet? It surely helped that Holmes charmed her Stanford community, especially the renowned George Shultz of the Hoover Institution. The lauded statesman and economist brought to her board an all-star cast of Stanford and Hoover luminaries. None of them, however, had much experience in medicine or technology, and few had any in corporate governance. But it was likely this façade of Stanford legitimacy that explains why so much money was poured into such a patently suspect idea.

Sam Bankman-Fried easily outdid Holmes. He is the architect of likely the greatest financial scandal in U.S. history. His net worth peaked at $26 billion, before the collapse of his house-of-cards FTX cryptocurrency scam. Bankman-Fried grew up on the Stanford campus, the son of two noted Stanford Law professors, Joseph Bankman and Barbara Fried. Caroline Ellison—Bankman-Fried’s erstwhile partner, occasional girlfriend, and the CEO of FTX’s sister investment company Alameda—was a Stanford math major. She was also the scion of two professor parents, both at MIT, where Bankman-Fried did his undergraduate work. Ellison has now pleaded guilty to various counts of fraud. And in exchange for a lighter sentence, she is working with federal prosecutors as they build their case against Bankman-Fried. He is currently confined to his parents’ home on the Stanford campus, while on bail awaiting trial.

Bankman-Fried purportedly stayed out of jail by posting a $250 million bond. But, in fact, he and his family and friends put up very little money, although they were helped by two Stanfordites, the former law school dean Larry Kramer and Andreas Paepcke, a Stanford senior research scientist who signed a $500,000 guarantee. Somehow that was enough to satisfy the government’s quarter-billion-dollar bond.

Stanford Law School itself has recently suffered a series of public-relations embarrassments. In early 2023, an ethics complaint was filed against Professor Michele Dauber for posting a series of ad hominem attacks on Camille Vasquez, Johnny Depp’s recent attorney. The Stanford law professor tweeted that Vasquez was a “Pick Me Girl” for defending the actor against sexual-assault allegations. The ethics complaint further alleged that Dauber posted death threats against Depp, fantasizing about the actor’s murder and hoping that his corpse would be devoured by rats.

Another Stanford law professor, Pamela Karlan, in testimony before the House Judiciary Committee’s hearing on the first impeachment of President Trump, gratuitously attacked the president’s thirteen-year-old son, Barron Trump: “The Constitution says there can be no titles of nobility, so while the president can name his son Barron, he can’t make him a baron.” In 2021, a graduating Stanford law student sent the entire law school student body a fake invitation, appearing as if it were sent from the school’s small conservative Federalist Society. It read in part:

The Stanford Federalist Society presents: The Originalist Case for Inciting Insurrection. . . . Riot information will be emailed the morning of the event. . . . Violent insurrection, also known as doing a coup, is a classical system of installing a government. . . . Although widely believed to conflict in every way with the rule of law, violent insurrection can be an effective approach to upholding the principle of limited government.

In mid-March, the Federalist Society invited the Fifth Circuit Court of Appeals judge Kyle Duncan to speak at the law school (as discussed in “Notes & Comments” in The New Criterion of April 2023). The judge was not allowed to finish his lecture. Law-school students drowned him out. They flashed obscene placards in his face. Some of the self-important protestors gave their pseudo-radical game away by mocking Duncan as lacking the qualifications to be admitted to their own Stanford Law School. And then, mission accomplished, they smugly stomped out.

When an exasperated Duncan called out for a university administrator to restore calm, the judge’s podium was instead arrogated by the associate dean for diversity, equity, and inclusion, Tirien Steinbach. She then delivered her own pre-planned and scripted lecture, expressing sympathy with the disrupters. Steinbach asked the startled judge whether it was even worth supporting his free-speech rights, given that he and his views were deemed abhorrent by the new absolutist Stanford community. Stranger still, one Stanford administrator urged any hurt Federalist Society member to contact Dean Steinbach for consolation. And when Dean Jenny S. Martinez finally offered Judge Duncan an apology, her class was disrupted by her own furious law students. Martinez wrote several apologies for the debacle, calling it inconsistent with the law school’s commitment to freedom of speech. But never once did she suggest that any of the rowdy hecklers would face consequences.

Despite these contretemps, the Stanford brand continues to work for those privileged enough to bear it. It did for Bankman-Fried and Ellison what it had done for the similarly young Holmes: it helped to lure investors into handing over millions of dollars to a near adolescent who had allegedly mastered the esoteric world of cryptocurrency but otherwise simply shuffled money among investor accounts, always desperate to draw in more new capital from the naive than was being withdrawn by the increasingly skeptical.

Bankman-Fried’s togs differed from those of Holmes. His look was more affected Stanford slob—cut-off jeans, flip-flop sandals, baggy T-shirt, and wild hair—and was intended to lend an air of Einsteinian mad genius to his otherwise age-old shell game. sbf, as he is sometimes known, currently faces a possible one hundred years in prison for multiple counts of campaign-finance violations, money laundering, and wire and commodities fraud.

How did Bankman-Fried, like Holmes, elude scrutiny for so long? He too hit all the right Stanford, tech, and political buttons. Holmes hosted a Hillary Clinton fundraiser, while sbf lavishly donated millions of dollars to Democratic candidates—and, more importantly, promised hundreds of millions more to come, all while touting his left-wing credentials and Stanford-parents pedigree.

His mother, Barbara Fried, is a dark-money bundler of Silicon Valley millions and the founder of “Mind the Gap,” an effort that promised the Silicon Valley rich both anonymity and an approved list of needy progressive candidates and causes. Her husband, Joseph Bankman, is a well-known progressive tax-law expert and a frequent consultant to Democratic lawmakers like Elizabeth Warren. Both Stanford professors now face federal investigation concerning purchases of property in the Bahamas, specifically their joint ownership of a $16.4 million home apparently transferred to their names by their son, his company, or both.

Trillions of dollars in market capitalization and hundreds of tech companies have long drawn Chinese-government-affiliated firms to Silicon Valley, and thus inevitably also to Stanford. In July 2020, a visiting Stanford neurology researcher named Chen Song was arrested for failing to disclose that she had been an agent of China’s People’s Liberation Army. Song had managed to become a temporary faculty member at Stanford while tasked with military espionage. As far back as 2014, Stanford had come under pressure to shut down its Confucius Institute, which was funded indirectly by the Chinese Communist Party and allegedly monitored Chinese student activities on campus.

Nonetheless, such laxity in campus security made little impression on the university faculty: in September 2021 some 177 professors and researchers signed a petition to the Department of Justice demanding that it stop investigating potential Chinese spies at U.S. universities. A year earlier, Stanford had been investigated by the Department of Education over some $64 million in alleged Chinese-affiliated donations received over a decade, all from previously undisclosed and anonymous Chinese donors, most of them believed to be government-associated. Song eventually had all charges dropped, was reissued a passport, and quietly returned to China.

Sometimes, however, the campus’s left-wing politics go too far even for Stanford, causing as much scandal as those infusions of Chinese money. Recently the university quietly took down from a school-affiliated website the embarrassing “Elimination of Harmful Language Initiative,” a list of taboo vocabulary “compiled by a group of administrators working over some eighteen months.” In Orwellian fashion, they boasted of an effort to excise “harmful” words from campus usage:

The goal of the Elimination of Harmful Language Initiative is to eliminate many forms of harmful language, including racist, violent, and biased (e.g., disability bias, ethnic bias, ethnic slurs, gender bias, implicit bias, sexual bias) language in Stanford websites and code.

The list included words such as “immigrant,” “citizen,” and “American.” Yet Stanford acted to disassociate itself from the list only after The Wall Street Journal mocked the campus thought police. The embarrassing wsj exposé ended with a Parthian shot, implying that such nonsense was the logical result of a bloated staff of idle woke administrators:

We can’t imagine what’s next, except that it will surely involve more make-work for more administrators, whose proliferation has driven much of the rise in college tuition and student debt. For 16,937 students, Stanford lists 2,288 faculty and 15,750 administrative staff.

As a footnote to the vocabulary embarrassment, the Elimination of Harmful Language Initiative also offered Stanford snitches “financial rewards for finding/reporting” anyone who violated the provisions of the list. The language policing was known in Stanfordspeak as “The Protected Identity Harm (pih) Reporting” system.

The most egregious faculty and administrative embarrassment, however, involved Stanford’s response to the high-profile commentaries of three of its most renowned immunologists, epidemiologists, and public-health experts, Drs. Scott Atlas, Jay Bhattacharya, and John Ioannidis. Very early in the covid epidemic—whose onset brought government lockdowns, mandates, and quarantines—all three questioned the wisdom of the policies of local officials and of President Trump. They kept up that criticism when the administration changed but America’s covid policies stayed much the same.

The three experts cited scientific data suggesting that the quarantines would not stop the pandemic. They argued that natural immunity was as effective as, or superior to, protection acquired via vaccination; that those under eighteen were at little risk of serious infection, while young men under forty might be inordinately at risk of side effects from the mrna vaccinations; and that medical therapies and isolation protocols should therefore be focused primarily on the most vulnerable, those over sixty years old. They wrote that the vaccinations would not prove an absolute defense against either infection or infectiousness. Most importantly and controversially, the three were not shy in insisting that the government-shutdown reaction to covid would do far more damage, and might eventually kill far more people, than the covid-19 virus itself—through increased suicides, spousal and familial violence, drug and alcohol abuse, economic recession, and the deprivation of two critical years of schooling for the nation’s youth.

Their voices often resonated in the conservative and libertarian communities. President Trump eventually brought in Dr. Atlas to offer independent assessments of the lockdowns. Atlas was often at odds with the White House task force’s most prominent members, Drs. Anthony Fauci and Deborah Birx, who insisted on closing the schools and shutting down much of the economy, all while pushing mandatory vaccinations for the entire population.

Rather than being honored that the nation’s three most prominent critics of the quarantines were Stanford-based, the university attacked them. A faculty-senate resolution and a petition from ninety-eight members of the medical-school faculty impugned Atlas’s integrity, most likely because he had become a presidential health advisor to Trump. Indeed, Stanford’s faculty senate went on record condemning Atlas for various tweets blasting state and federal lockdowns and the damage those restrictions inflicted without measurably stopping the spread or lethality of the virus.

University President Tessier-Lavigne, along with the provost and the dean of the medical school, criticized Atlas’s skepticism of then-federal policies. Indeed, the medical-faculty petition alleged that Atlas had promoted “falsehoods and misrepresentations of science,” and the signatories accused him of seeking to “undermine public health authorities and the credible science that guides effective public health policy.”

Only when Atlas threatened to sue the university faculty for defamation did the petitions and faculty-senate resolutions cease. In retrospect, the consensus of the now “credible science” has found that the government’s response to the pandemic did indeed do more damage—just as Atlas had warned—than the virus itself. As the three Stanford doctors had repeatedly stated, natural immunity proved as effective as, or more effective than, the vaccinations in warding off serious covid illness and general infectiousness. And they were presciently aware that the national shutdown of schools and businesses would do historic damage that will resonate for decades, well beyond the toll of covid itself.

The final leg of California’s model triangle of power is political. More specifically, Bay Area progressive politicians both protect and benefit from the money, influence, and prestige of the Silicon Valley tech industry and its veneer of university high culture. Their emergence over the last three decades ended southern California’s former hold on the state’s politics, exercised in the eras of the conservative governors Ronald Reagan, George Deukmejian, and Pete Wilson, all from the Los Angeles area.

More astonishingly, by the end of 2022 no single American city had produced more of the nation’s recent powerful and influential leaders than San Francisco. All were fueled by the rise of the Bay Area–Silicon Valley tech and financial corridor, and in turn they drove the hard-left trajectory of the national Democratic Party.

San Francisco left-wing paragons include the recent Speaker of the House Nancy Pelosi (D-CA) and Vice President Kamala Harris. The San Francisco–based Dianne Feinstein is the third-longest-serving senator and a former chairwoman of the Senate intelligence and judiciary committees. Gavin Newsom, the former San Francisco mayor and current California governor, is a likely 2024 Democratic presidential candidate. The former Bay Area resident Barbara Boxer was a U.S. senator and congressional representative for thirty-four years. Jerry Brown, a four-term California governor, was also a four-time Democratic presidential candidate. The San Franciscan Willie Brown, the mentor of Vice President Harris, was a thirty-year veteran and former speaker of the California assembly and a two-term mayor of San Francisco.

In other words, the Bay Area has birthed the second- and third-most-powerful officials in the U.S. government in Harris and Pelosi, and both of California’s senators during the last thirty years. If Newsom runs for president, he will be the third recent Bay Area politician to do so. These nationally known politicos have collectively served six gubernatorial terms, and Willie Brown, Dianne Feinstein, Gavin Newsom, and Jerry Brown spent an aggregate thirty-three years as recent mayors of San Francisco or Oakland. Many have Stanford ties. Feinstein is a Stanford graduate. Harris’s father was a longtime Stanford professor. Newsom’s father graduated from Stanford Law School, and Jerry Brown’s sister Kathleen was a Stanford undergraduate.

Yet their prominence offers a paradox of sorts when one considers that some three hundred thousand residents flee their California annually. Meanwhile, San Francisco has become synonymous with the ongoing national epidemic of blue-city crime, homelessness, vagrancy, public-health concerns, vacant downtown office space, high taxes, and declining population.

Indeed, in the last half century, Bay Area politicians brought to the national scene their hard-left California politics on an array of issues: climate change, immigration laws, identity politics, environmental regulation, restrictive zoning, the prosecution of violent crimes, bail laws, gun control, border security, gay marriage, abortion, and transgenderism. As such, they marked the dividing line between a former California of centrist and conservative governors and senators and a now-permanently left-wing state that holds itself up as the Democratic model for a new American nation.

The marriage of San Francisco politics, corporate culture, and wokeness—the “Green New Deal,” dei (diversity, equity, and inclusion), and esg (environmental, social, and governance)—is perhaps best typified by the recent collapse of Silicon Valley Bank, the second-largest bank failure in American history. At its height, the bank invested billions of dollars in Silicon Valley technology and green-energy start-ups, all eager to capture easy contracts from the budget-busting “infrastructure” bills of the Obama and Biden administrations. Bank executives boasted of their diversity profiles, their green investments, and their contributions to left-wing groups and Democratic candidates and pacs—all meant to mask the reality that they were issuing subprime business loans to risky start-ups while their locked-in, long-term government bonds paid little interest in a period of 6–8 percent annual inflation. That time bomb went off when jittery depositors demanded higher returns and clients, increasingly buffeted by stagflationary layoffs, grew fewer. A March run on the bank drained the institution of cash, while a forced sell-off of its depreciated low-yield government bonds realized huge losses and further fueled the panic.

The old network of liberal San Francisco politicians and Silicon Valley moguls immediately lobbied the Biden administration to cover their huge losses, given that 96 percent of all svb depositor accounts vastly exceeded the Federal Deposit Insurance Corporation’s $250,000 insurance limit. Governor Newsom badgered the Biden administration the hardest, insisting that a massive multibillion-dollar bailout of the depositors was necessary. He omitted the fact that the bank had given $100,000 to his wife Jennifer Siebel Newsom’s California Partners Project. Nor did he disclose that the Newsoms held numerous personal accounts with the bank, and that his three wineries—cade, Odette, and PlumpJack—were clients of svb.

Like Newsom, most of these progressive politicians are multimillionaires, having raised enormous amounts of money from Silicon Valley sources, having spousal or personal ties to lucrative California businesses, and having profited from Chinese investments.

Indeed, California leads all states in Chinese investments, which have totaled well over $5 billion in the last twenty years. So it is no wonder that the former senator Boxer had been a registered agent-lobbyist for a number of Chinese-government-controlled companies, or that Senator Feinstein was clueless that her chauffeur of some twenty years was a spy for the Chinese communist government, all while she served as chairman of the Senate intelligence committee. The Bay Area congressman Eric Swalwell was romantically involved with a Chinese spy known as “Fang Fang” (Christine Fang), either before or during his tenure on the House Intelligence Committee.

Nancy Pelosi’s son has sizeable investments in Chinese-government-related companies. Gavin Newsom greenlit a $1.4 billion contract with a Chinese consortium to provide the state with protective KN95 masks during the covid lockdowns; that consortium included byd, a company that has lavished money on Sacramento politicians, including $40,000 to Newsom’s own gubernatorial campaign. And in 2019 the former governor Jerry Brown launched a joint Chinese-American climate-change think tank at the University of California, Berkeley, in partnership with China’s top climate official, Xie Zhenhua.

All of these politicians have treated China as a friendly partner to California and kept mum about the Chinese Communist Party’s vast espionage operations in Silicon Valley. Nancy Pelosi may now grandly declare that “the era of self-regulation is over,” but she proved the chief impediment to bipartisan legislation that, inter alia, would have banned tech companies from giving preference to their own products in search results and from monopolizing markets.

The signature policies of the California Democratic establishment have helped to welcome in millions of impoverished people from south of the border, to ensure trillions of dollars in Big Tech market capitalization and thousands of new tech employees on H-1B visas in Silicon Valley, and, over the last forty years, to drive out more than ten million largely middle-class and conservative Californians who could no longer afford the high taxes and onerous regulations that paradoxically seemed to guarantee dismal schools, ossified infrastructure, high crime, and soaring home prices. That demographic trifecta—welcoming in the foreign poor, courting the rich, and ousting the middle class—explains in large part current-day California and the direction of the national Democratic Party itself.

The Democratic Party has adopted a hard-left progressive agenda, one far more radical than at any previous time in its history. It is now a party dominated by bicoastal globalized wealth. It is deeply embedded within and compromised by Chinese investment. It has become both the protector and beneficiary of monopolistic Big Tech. And much of its woke politics was inherited from its pet elite universities.

It owes a great deal of this metamorphosis to the current politics of California, birthed in San Francisco and Silicon Valley and burnished with the sheen of Stanford University. The Democrats now seek to make California politics the operating principles and ethos of the United States.