Reflections after 60 Years

A front page story in the Dallas Morning News (11/21/2023) reports that the population of the Dallas-Fort Worth area in Texas has reached 8,000,000, most of it in the suburbs. That is around five times the figure for 1963. The city of Dallas has doubled its population during that period, from around 679,000 to 1,303,000. A lot else has changed during that time, in Dallas, the State, and the Nation. It was a different world sixty years ago.

How much difference flowed from one horrible event that occurred here has been and probably will be an ongoing topic of research and speculation, though most rational persons know the case is closed.

Sixty years ago this day, November 22, 1963, President John F. Kennedy was killed on a downtown street. It is difficult to believe that so many years have passed since that horrible event. President McKinley’s assassination in 1901 was as distant for those in 1963 as Kennedy’s is for us today.

The Kennedy assassination events spawned widespread condemnation of Dallas and its citizens collectively. Such condemnation was practically worldwide. It ascribed collective guilt, or guilt by association, for the evil acts of an individual to a whole community of other individuals who had a connection to the place.

Dallas was further condemned because the assassin, Lee Harvey Oswald, was killed while in police custody shortly after committing his foul deed. To the extent there was culpable negligence on the part of the Dallas Police for this second killing, it must be viewed in context. City officials at the time believed it was important to show the world, via the news media, that there had been timely identification and capture of Oswald, then a prime suspect, and to allay fears that there might be a conspiracy threatening the government, similar to the cabal that killed Abraham Lincoln and attempted to kill several of his cabinet members. Many would regard Oswald as receiving swift justice at the hands of Jack Ruby, albeit without due process. Ruby stated that he killed Oswald because he wanted to spare Jackie Kennedy having to be a trial witness and relive the horror of seeing her husband’s brain blown out while sitting next to him. One can sympathize with Ruby, though not excuse him, for that.

It has been asserted that the downstream effect of Oswald’s killing spawned numerous conspiracy theories. Perhaps, but such theories would doubtless have come forth anyway. It is difficult for many persons to accept that a lone pathetic individual, a serial loser, could by himself kill a President and change the course of the country’s — and perhaps the world’s — history.

Assigning collective guilt or collective responsibility — for evil deeds the concept amounts to the same thing — is endemic to the human condition, even in a country and culture that values and is supposed to assign responsibility to the individual rather than to associations or collectives. The same goes for the guilt of which numerous pundits accused Dallas and its citizens.

Part of the reason for such accusations was that Dallas in 1963 was seen as a hotbed of conservative, even far-right, politics that provided a climate for political violence. In 1963, as in the resurgence of recent years, most of our national intelligentsia and media leaned to the left and often opined that those right of center politically were an “existential threat” to democracy. Kennedy, on the other hand, was a celebrity politician regarded as a liberal who leaned leftward. It was reported that Mrs. Kennedy said she was disappointed that her husband was deprived of martyr status for his stand on civil rights for black citizens because his killer was a “silly little communist” rather than the right-wing hater the assassin was at first assumed to be. The reality was that the President was essentially a moderate, particularly in economic matters (he cut taxes, drastically). He turned out to be an effective President in foreign affairs. No conservative would fault his staring down the Soviets during the Cuban missile crisis or his stance on West Berlin.

There is little question that there are violent crazies on each of the far ends of the political spectrum. Possibly more on the left, as recent events have borne out.

As for Presidential assassinations in this country, the 20th Century saw two successful ones and no fewer than half a dozen attempts. The successful assassins, Leon Czolgosz (President William McKinley’s killer) and Oswald, were both leftist fanatics. Other Presidents who were attacked but not killed include Theodore Roosevelt in 1912 (out of office but campaigning for another term), whose assailant was adjudged insane by a court; Franklin Roosevelt, who was shot at two weeks before his first inauguration by a leftist who said he hated “capitalists”; Harry Truman, whom Puerto Rican nationalists tried to kill in 1950; Gerald Ford, who was shot at once and threatened close-up on two occasions within a month of each other in 1975 by leftist women (one of whom was a follower of Charles Manson); and Ronald Reagan, who was shot and wounded in 1981 by a man who was tried and adjudged legally insane.

There have been other alleged plots, security incidents, and expressions of intent to kill or harm other Presidents, including Carter, both Bushes, Obama, Trump, and Biden. These would-be assassins have come from both extremes of the political spectrum, or suffered from various stages of mental derangement.

What about the speculative theory that Oswald was a hireling or dupe of some national or international conspiracy? The bipartisan Warren Commission concluded that Oswald acted alone. Later, after exhaustive research and analysis detailed in his book Case Closed (1993), Gerald Posner concluded that Oswald and Ruby each acted alone and independently of each other. Former Los Angeles district attorney Vincent Bugliosi (who prosecuted Charles Manson and his acolytes) concluded the same in his 1,632-page tome Reclaiming History: The Assassination of President John F. Kennedy (2007). There are those who will continue to believe in conspiracy, whether by the CIA, Fidel Castro, or venal businessmen. Such persons are like the medieval alchemists who insanely sought the Philosophers’ Stone.

See also Investigation of a Homicide: The Murder of John F. Kennedy by Judy Whitson Bonner (1969); Death of a President by William Manchester (1967).


No Country

“Dope. They sell that shit to schoolkids.”
“It’s worse than that.”
“How’s that?”
“Schoolkids buy it.”
— Cormac McCarthy, No Country for Old Men

There has been a lot of attention around this region and elsewhere on deaths and near deaths from overdoses of the super-potent synthetic opioid fentanyl, especially among teenagers and young adults. A lethal dose of this drug can fit on the tip of a pencil. Drug trafficking cartels disguise fentanyl as pills of legitimate medicines and as other, less potent, street drugs used to control pain.

No matter what law enforcement can do to interdict the supply side, it is up to the parents primarily (and secondarily to the educators) to stanch the demand side. So long as there are huge profits to be made, illegal manufacturing and vending will occur. Even if it were possible to locate and destroy every drug cartel’s headquarters and labs, and to arrest and incarcerate (or execute) every kingpin and their lieutenants, new ones would take their place, sooner or later.

“Here’s the elephant in the room. All those who have died — who were poisoned or who overdosed — had purchased or accepted something that was not legally prescribed for them. There is an element of personal responsibility.” — Eduardo Chavez, DEA Special Agent-in-Charge, Dallas.

We should be compassionate toward the parents of minors who are drug abuse casualties, but neither they nor their children are absolved from all blame. Some persons — schoolkids — who are casualties are at least partially excused by parents or loved ones because “they thought they were only taking [some less potent, but illegal, pill].” But why use a medicine that is not prescribed? No one should buy or use any prescription medication unless it is prescribed by a licensed practitioner and sold by a licensed pharmacy.


Oppenheimer and the Dead Cities

The latest ballyhooed movie Oppenheimer has generated quite a bit of angst among certain traditional media and social media denizens, ranging from whether it was necessary to use nuclear bombs on two Japanese cities to whether doing so was a war crime. These questions also relate to the use of conventional weapons in “carpet bombing” enemy cities and other tactical methods that put noncombatant civilians at risk.

The protagonist of the film is the nuclear scientist J. Robert Oppenheimer, who provided the technical expertise in designing the bombs. He is reported to have suffered similar angst. (Full disclosure: this writer has not seen the film yet, but has read several well-regarded accounts of the Manhattan Project that developed the weapons.)

On August 6, 1945, and again on August 9, the United States Army Air Forces dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. Six days after the second bombing, Japan announced its surrender, thus ending World War II.

In the 78 years since, not one nuclear weapon has been used in anger, despite many escalations of tension over this time and the proliferation of those weapons, even to those that may legitimately be called “rogue states.” More than four decades of Cold War saw a number of wars between client states of the Soviet Union and the United States. Despite the ferocity with which they were fought at times, none resulted in an exchange of atomic weapons. Of course, that could change at any time. There has been some nuke-rattling by Russian officials with respect to Vladimir Putin’s military adventure in Ukraine.

Perhaps one reason hostile nations have so far foregone the use of nuclear weapons is that Hiroshima and Nagasaki demonstrated the sheer horror of the use of those weapons.

During this annus horribilis of 2023, we have the usual suspects calling for a “conversation” (which really means recrimination), questioning whether the work of Robert Oppenheimer and the others who developed those weapons, President Harry Truman’s order to use them, and the actions of the military officers and men who deployed them were necessary, or were perhaps profoundly immoral. Some have even accused the President and those involved in dropping the bombs of being war criminals.

That is nonsense. British writer A. C. Grayling wrote Among the Dead Cities: The History and Moral Legacy of the WWII Bombing of Civilians in Germany and Japan in 2006. I wrote a review of the book for a local blog. It bears updating on this occasion of the release of the Oppenheimer film, proximate to the anniversary of the Hiroshima and Nagasaki bombings. It follows:

A friend once opined that if the United States and its British ally had pulled their punches in World War II as they have in every war since, including the present one, we’d all be speaking German and/or Japanese. Rhetorical hyperbole this might be, and it would in no sense justify a no-holds-barred approach to current conflicts. It should be undeniable nevertheless that the total war Britain and America fought was necessary to beat the Axis. After all, Nazi Germany and Japan embraced total war with a vengeance and fought ferociously until the bitter end. Air Marshal Sir Arthur “Bomber” Harris, head of the Royal Air Force Bomber Command, is reported to have observed while watching the fires around St. Paul’s during the London Blitz that “they’ve sown the wind and will reap the whirlwind.” Harris, of course, became the chief wind maker, the architect, if one can use that appellation in such circumstances, of the utter devastation of German cities in the air war that ensued. His bombardiers sowed the seeds of the tornadic firestorms that engulfed Hamburg, Dresden, and other cities, incinerating tens of thousands of civilians and reducing houses, shops, museums, and public buildings to hideous skeletons. The U.S. Army Air Forces in the Pacific, once islands within range of Japan had been captured, carried out similar raids on Japanese cities, creating even greater destruction. The final two raids witnessed the only wartime use ever of nuclear weapons.

Among the Dead Cities is one of a number of histories of the strategic bombing in World War II. Its dramatic title (possibly an allusion to I Samuel 31:7) alone sets it apart from the prosaic works by more methodical historians. Grayling styles himself a philosopher rather than a historian and focuses on the morality of the area bombing – sometimes called “saturation” or “carpet” bombing – of German and Japanese cities. That such bombing was indiscriminate and served to terrorize the targeted populations, kill civilians in great numbers, and destroy their cities makes the whole concept morally repugnant to Grayling. He and others claim that, while the stated purpose was to break the enemy’s morale and spirit and disrupt the daily lives and economy of the German and Japanese people, it served only to increase the resolve of the Germans – much as the 1940–41 Blitz steeled the British to resist. All the bombing accomplished, on this view, was useless mass destruction of centuries-old cultural treasures and the wanton slaughter of civilians, with little effect on the outcome of the war.

That thesis is nothing new. A postwar assessment of the effect of strategic bombing indicated that German industrial production continued to increase almost up to the end, in spite of nearly continuous attacks during the last year of the war. Grayling’s conclusion, however, is that the area bombing was unjustified by military necessity, and thus amounted to a moral outrage and a war crime. Harris, Churchill, and the other commanders who carried out their orders (Grayling, perhaps protesting too much, includes a disclaimer that he does not intend to impugn the bravery or morality of the RAF and American pilots and crews) perhaps escaped prosecution because no international protocols like the Geneva Convention proscribed aerial bombing of civilian targets, and, most important, because the Allies won the war.

Similarly, Grayling believes the area bombing raids on Japanese cities were American war crimes, and, by implication, that Roosevelt, LeMay, and Nimitz were war criminals. The firebombing of Tokyo and other cities, even more destructive, is of course overshadowed by the nuclear devastation of Hiroshima and Nagasaki, criticism of which from time to time is the subject of unctuous breast-beating by certain elements – but that is another story.

Area bombing in Europe was destructive and deadly. Did it win or hasten the end of the war? Did it have any salutary effect at all? Was the loss of civilian life and the ruin of historic structures and artifacts worth the cost? Was there any justification to continue the bombing after late 1944 in Germany, or after April or May of 1945 in Japan, when the war was all but won? While a majority of Germans never voted for Hitler (when it was still possible to vote for leaders, prior to 1933), few protested Nazi policies, most acquiesced in the anti-Jewish laws, and probably a huge majority were thrilled by Hitler’s diplomatic and early military victories. So-called terror bombing was first used by the Nazi-controlled Luftwaffe against Holland and Britain. When tit was given for tat, the bombing in Germany was not carried out wantonly against a defenseless people. The German military fought back ferociously. More than 50,000 British airmen (and a considerable number of Americans) were casualties of the campaign, and thousands of aircraft were shot down. Until the United States geared up sufficiently to help in Europe (remember, the American military had its hands full with Japan in the first two years of the Pacific war, while the Russians were reeling from a withering German offensive), Britain was essentially alone. It had itself been subjected to a Nazi terror-bombing campaign from September 1940 through May 1941 that was halted only when Hitler turned his attention to the Soviet Union. The British fought the only way they could. Given the technology of the time – a far cry from the kind that allows U.S. forces to pinpoint and kill terrorists with drones and computer-guided missiles with minuscule collateral damage – and the European weather conditions, nighttime area bombing was the only method that could be remotely effective.

The main accomplishment the British sought was to create sufficient disruption to discourage the Nazi bombers from returning to their homeland. The diversion of German resources to air defense, particularly after the Cologne raid of 1942 and Operation Gomorrah over Hamburg in 1943, surely kept the German air force from attacking Britain again, at least with manned aircraft. The bombing likewise surely hindered the German effort on the Russian front. The Soviets begged the western Allies to open a western front for more than two years before the invasion of France on D-Day. Aerial bombing was the best that Britain and the U.S. could do until sufficient resources were marshaled for the Normandy invasion.

Grayling and others argue that after the establishment of a western front and the liberation of most of France by September 1944, combined with contemporaneous Russian drives into Poland, every indication was that Germany was defeated, and all was over but the shouting. Continuing the relentless bombing of Germany was thus unnecessary.

This is hindsight; it was not all that apparent at the time. To illustrate this point, in September 1944, the Anglo-American forces were dealt a severe setback in Operation Market Garden in the Netherlands, and in December of that year, the German army launched a fierce offensive in the Battle of the Bulge. All the while, Great Britain again was subjected to air raids; this time by the unmanned V-1 and V-2 missiles, the latter being supersonic and striking without warning of any kind. The only defense against the V-2s was to prevent their being available to be launched in the first place. Area bombing, haphazard as it was, was the only possible way. Even after the Bulge, the Anglo-American-Canadian forces were fiercely opposed every step of the way. The Soviet Army in the East was even more ferociously opposed. The Russians suffered nearly a half million casualties in the final drive to Berlin and had to fight for the city block by block. It is incontrovertible that Britain and the U.S. had to use everything at their disposal to end — and win — the European War.

As for Japan, the resistance of the enemy was even stronger. Japan began its war with the United States with a sneak attack. Japanese forces contested every battle, fighting almost literally to the last man. American and British prisoners of war were treated abominably. And when the war was clearly going against Japan, the Kamikaze suicide campaign began. After the liberation of the Philippines, the U.S. was faced with the necessity of invading the home islands of Japan to end the war. Given this situation, could any rational American commander in chief not conclude that serious softening up of the Japanese homeland was a necessary prelude to an invasion? When the atomic bombs became available, why should it have been preferable to spare Hiroshima and Nagasaki so that the United States and its allies could suffer a million casualties (the estimate at the time, never seriously refuted) invading Japan?

The measureless human suffering caused by the bombing is evident. The loss of lives, particularly of innocent children who had no part in making the world they were born into, is an unfortunate reality of war. The resultant loss, particularly in Germany, of cultural treasures is one of the saddest legacies of the area bombing. Photographs of the pre-war German cities – Berlin, Hamburg, Dresden, and others – reveal charm and beauty that had been utterly destroyed. Berlin suffered the most, not only from the bombing, but from the devastation of the last battle and division between two hostile powers for the next two generations. During my first visit to Berlin, twenty-one years after the end of the war, the scars were still there, and where rebuilding had taken place, it was mostly soulless modern. At the time of my next visit, a few years before the Wall fell, it had not changed much. Even in 1995, large tracts were still rubble-strewn vacant areas. But Berlin has come back; much of the city has been restored to its pre-war appearance, and the newer architecture has its own beauty. Dresden was more remarkable for the restoration of the old city area, including plans, much delayed by the former East German communist regime, for the restoration of the totally destroyed Frauenkirche (which is now complete and was rededicated in 2005, Britain’s late Queen – during the war, Second Lieutenant Elizabeth Windsor – sending her best wishes). Nuremberg’s old city center, especially targeted because it was a Nazi hotbed, has been almost completely restored to its pre-war appearance. This demonstrates that artifacts can be rebuilt. Civilizations, however, might well not be. World War II was a struggle for civilization, Western Civilization as it had advanced in its highest and finest order. One of its finest exemplars had been hijacked by evil forces that harkened back to barbarism, superstition, and savagery.
The reasons this happened are the subject of a surfeit of writings with many more doubtless to follow, so this phenomenon will not be examined here. But happen it did, and was an evil that had to be defeated, at whatever cost, for our civilization to survive.

Nazism was a profound evil, Japanese militarism was no better, and both committed worse atrocities than could ever be laid at the feet of the British and Americans. Grayling maintains, however, that two wrongs do not make a right, and that there is no justification for sinking to the same moral level as the Nazis. True enough, but beside the point. Allied strategic bombing was not calculated genocide or wanton cruelty toward conquered people and prisoners of war. It had the legitimate goal of defending against and defeating the forces that practiced such atrocious conduct.

A final point ignored by many commentators is what would happen when the fighting was over. Winning the war was one thing; maintaining a peace afterward is quite another. The World War I armistice, which occurred while the German army was intact and still on French and Belgian soil, and before any part of Germany had been invaded, gave credibility to the Nazi explanation that the victorious German army was “stabbed in the back” by reformers, bankers, pacifists, and, especially, Jews. At the end of World War II, Germany and Japan knew they had been beaten – badly. While the comparison is apt, the victory did not quite impose a Carthaginian Peace as the Romans did after being troubled three times by the same foe in the Punic Wars. The defeated German and Japanese adversaries were devastated to the point that they had to be rebuilt from the ground up. But they were. And, with the help of their former foes, they were reconstructed in the image of capitalist republican democracies. For nearly eighty years after World War II ended, the world, beset by conflict and bloodshed as it has been, was not to be troubled by military aggression emanating from Germany or Japan. Perhaps, then, at least in two corners of the earth, Arthur Harris’ and Robert Oppenheimer’s whirlwinds managed to uproot the grapes of wrath.


Technology: Some Past; Some Present

Today we look back one hundred years. On August 2, 1923, the 30th President of the United States, Calvin Coolidge, was sworn into office at 2:43 AM. He was sworn in by his father, a justice of the peace in Plymouth, Vermont. Coolidge succeeded Warren G. Harding, who had died while on a trip in California. The ceremony took place in Coolidge’s father’s farmhouse, where the room was illuminated only by kerosene lamps. In those days, long before the Internet and satellites, telephone and electricity service did not yet reach much of rural America. News of Harding’s demise had to be transmitted by telegraph to the village of Plymouth Notch and delivered by messenger to the house. It is a reminder of how far technology has brought this country, and the world, in the past century. Automobiles were not ubiquitous. Functional aircraft were barely 20 years old, and scheduled airlines available to the general public, or even to government officials, did not exist. Running water and sewer service were installed only in urban areas, and not in all of them. Newspapers and other documents, duplicated by technology whose basis was over 500 years old, were the only mass media, although broadcast radio stations had been established in a few cities by 1923. Coolidge was elected in his own right in 1924 and served until March 4, 1929. He famously stated in a note that he did not “choose to run for President in 1928.” He was known as “Silent Cal,” and his presidency was probably the most low-key tenure seen in the 20th century. Lots of changes in 100 years.

On another note regarding technological achievements, and those that are proposed, it is reported that Italy has tentatively approved a suspension bridge across the Strait of Messina to connect Calabria (the toe of the boot) with Sicily. That bridge would have a two-mile-long main span, the longest in the world. It would, of course, be subjected to high winds and seas around its 1,400-foot towers planted in the channel.

That Strait has been known to be treacherous because of the current and the rocks on the Calabrian shore. Ancient myths related that Scylla and Charybdis guarded the Strait and were a peril to sailors. Scylla had been a beautiful sea nymph who was changed into a monster by Circe, the jealous suitor of a certain demigod who was in love with the nymph. She became a monster with six dog-heads at the ends of snakelike appendages, and would snatch and devour sailors from ships that passed too close. Charybdis was a whirlpool that would sink any ship drawn into its vortex. Homer’s Odyssey relates how Odysseus elected to avoid Charybdis, saving his ship at the cost of losing six sailors to Scylla’s maw. Aeneas avoided the peril by sailing around Sicily on his way to Rome. Scylla and Charybdis have become a metaphor for difficult choices, along with “Hobson’s choice” and “between a rock and a hard place.”

Perhaps the Italians should think twice about their bridge. The rock that Scylla became can be easily avoided these days, and the modern watercraft used to ferry people and vehicles can resist the currents. The engineering and logistical problems might result in a bridge too far. Nevertheless, in the land that gave us Da Vinci, Marconi, and Fermi, great feats of technology and engineering are possible.


The Key to the Bastille

Today, July 14, is observed in France, and elsewhere, as a holiday. Here in the United States we call it “Bastille Day”; I understand that in France it is simply referred to as 14 July. Like so many other events in France, especially after the late 18th Century, the day has a checkered history there.

That day is popularly regarded as the onset of the French Revolution, in which the monarchy of the old regime was overthrown and a republic established. Much like the Russian Revolution of 1917, however, this one quickly got out of hand. It began as a movement to establish a limited monarchy in which the people, here meaning the Third Estate, which was primarily the bourgeoisie — middle-class merchants, tradesmen, and artisans — would have the primary say in the governance of the nation. Because of the power vacuum created, the movement deteriorated into the mob rule known as the Reign of Terror. King Louis XVI and his wife were executed, along with numerous aristocrats and some of the senior clergy. After the Terror burned itself out, the subsequent governments changed a number of times. Over the next 150 years, France had a restored monarchy, two Empires, and five republics — the French are now in the Fifth Republic — as forms of government. Accordingly, any observance of the day the Bastille fell brings forth different sentiments among the French.

Not so in America. Though we have had our share of tribulations and internal controversies, the Republic established here the same year the Bastille fell has survived in its essential form. To the extent that it symbolizes liberty and the end of despotism, Bastille Day is one of our holidays too.

One aristocrat who survived, and who took an important role in the Revolution and in later French governments, was none other than the Marquis de Lafayette, who earlier had supported and aided the American war of independence from Great Britain. Shortly after the Bastille fell, Lafayette obtained a one-pound, three-ounce wrought-iron key to the demolished fortress. He entrusted the key to Thomas Paine, who played a part in both the American and French Revolutions. Paine brought the key to America, and in the late summer of 1790 it was presented to the new President of the United States, George Washington. Upon leaving the Presidency in 1797, Washington brought the key to his home at Mount Vernon, where the key to the Bastille remains displayed to this day.


The Arc Bends

“In order to get beyond racism, we must first take account of race. There is no other way. And in order to treat some persons equally, we must treat them differently.”
    — Justice Harry Blackmun, in Regents of the University of California v. Bakke (1978) concurring and dissenting in part.

“There is no caste here. Our constitution is color-blind, and neither knows nor tolerates classes among citizens. In respect of civil rights, all citizens are equal before the law.”
    — Justice John Marshall Harlan, in Plessy v. Ferguson (1896), dissenting

“The way to stop discrimination on the basis of race is to stop discriminating on the basis of race.”
    — Chief Justice John Roberts, in Parents Involved in Community Schools v. Seattle School District No. 1 (2007) 

“The arc of the moral universe is long, but it bends toward justice.”
    — Attributed to Martin Luther King, Jr., and others

This Independence Day will be celebrated as it has been for the past 247 years, but it is especially significant this year, thanks to the United States Supreme Court’s re-affirming several of the core principles of the Declaration of Independence as ensconced in law by our Constitution. This past week the Court re-affirmed our freedom of speech and expression, or more particularly, the freedom not to speak or express a viewpoint; it ruled that the President of the United States is not a dictator who can give away tax money, that is, private property lawfully collected for public purposes; and, most momentously, it ended a patently racist policy that had continued to exist despite the end of legal segregation and the enactment of laws against racial discrimination.

There are many talking heads who say that we have “systemic” or “institutional” racism in this country. If one believes that, then one must agree that a truly “systemic” example of racism is the so-called affirmative action programs that many institutions of higher learning (and indeed government agencies and many private businesses) use in an attempt to remedy past discrimination on the basis of race. Programs that currently penalize some individuals on the basis of their race or skin color do no favors to those they are supposed to help. Affirmative action programs have existed both for virtue-signaling and for political purposes. As the famous Watergate-era informant said: “Follow the money.” There is a lot of money to be made in race-baiting.

In the ruling and opinion announced in Students for Fair Admissions, Inc. v. President and Fellows of Harvard College and a companion case Students for Fair Admissions, Inc. v. University of North Carolina (June 29, 2023), the Supreme Court abolished the use of race as a basis for choosing who will be admitted to colleges and universities. This result affirmed the ideals of our Declaration of Independence and requirements of our Constitution.

In his opinion, Chief Justice John Roberts affirmed his earlier view quoted above, vindicated Justice Harlan, and repudiated Justice Blackmun. The majority opinion and the concurrences further repudiated the concept of collectivism based on an individual’s immutable physical properties.

Among the observations Chief Justice Roberts made in his majority opinion is that “Harvard’s admissions process rests on the pernicious stereotype that ‘a black student can usually bring something that a white student cannot offer.’ [citations omitted] UNC is much the same. It argues that race in itself ‘says something about who you are.’” Roberts goes on to approvingly quote a 1995 opinion: “One of the principal reasons race is treated as a forbidden classification is that it demeans the dignity and worth of a person to be judged by ancestry instead of by his or her own merit and essential qualities.” But when a university admits students “on the basis of race, it engages in the offensive and demeaning assumption that [students] of a particular race, because of their race, think alike.”

It should be obvious to anyone who keeps up with current events and reads/sees/hears media — especially outside their bubble — that such a stereotype is untrue. Justice Clarence Thomas is the prime example here. (His concurrence, albeit lengthy, is worth a read.) But economists like Thomas Sowell and the late Walter Williams, educators Ward Connerly and Marva Collins, and commentators Jason Riley and Shelby Steele, when contrasted with Ta-Nehisi Coates and Ibram X. Kendi, certainly belie any notion that skin color is determinative of an individual’s ideas and attitudes.

Racism, properly defined, is anathema to a free society. Here it is worth quoting part of a 1963 essay by Ayn Rand, written in the middle of the civil rights movement of that era.

Racism is the lowest, most crudely primitive form of collectivism. It is the notion of ascribing moral, social or political significance to a man’s genetic lineage — the notion that a man’s intellectual and characterological traits are produced and transmitted by his internal body chemistry. Which means, in practice, that a man is to be judged, not by his own character and actions, but by the characters and actions of a collective of ancestors.

The Court’s opinion was specific to college admissions and arguably not precedent for other contexts such as hiring in private industry. Nevertheless, the color-blind principle appears to be applicable there, particularly where a private firm receives government or state largesse, but that is a controversy and a case for another time. In this case, though, the arc has indeed bent toward justice. It is a happy Independence Day.

Note: the July 2, 2023 issue of The Wall Street Journal included the commentary “Is Your Company’s DEI Program Lawful?” Austin, Texas, lawyer Michael Toth concludes that, in view of these Students for Fair Admissions cases’ application of Title VI, it probably is not. DEI will DIE — ignominiously.


D-Day at 79

On this June 6, it is again appropriate to repeat an essay I wrote and published almost a decade ago (with an update). Here goes:

The pleasant town of Bayeux in northern France is famous for its eponymous tapestry depicting the events leading to the Norman Conquest of England in 1066. Across from the railway station there is a café that serves cold beer and the apple cider the region is also famous for. That establishment bears a sign in English “Welcome to our liberators.”

The sign might appear to be incongruous to some of us, except that ten kilometers to the northwest is a bluff overlooking a sandy expanse along the English Channel that for the past seventy-nine years has been known to the world as Omaha Beach.

Many words will be written and spoken on this 79th anniversary of D-Day, the beginning of what General Eisenhower called the “Great Crusade” to end the Nazi occupation of Europe and ultimately win World War II. Today, the word “crusade” is politically incorrect in some circles, being offensive to some who, incidentally, have vowed to kill us and actually have achieved some success in doing so. We have become accustomed to euphemisms, direct and to-the-point speech being too harsh for our sensitive ears. That is just as well. The loudest and most eloquent statements to be made come from the nearly 10,000 American graves at the top of the cliff and the sound of the waves below.

When visiting the beach even this long after the fact, it is not difficult to picture the horror and chaos experienced by the soldiers and sailors who stormed ashore that day. The Germans had fortified nearly the entire coastline of France, as well as the coasts of other occupied countries, into what was called the Atlantic Wall. Various barriers and obstacles had been placed in the water offshore to prevent landing craft from reaching dry land, and to channel invaders into killing zones covered by machine-gun bunkers dug into the 100-foot-high cliffs above. This required the assault to be made at low tide, leaving a 300-yard open expanse of sand to traverse before the slightest natural cover could be reached. Above the high-tide line is another 50-yard stretch of loose sand. Walking unencumbered on loose sand can be difficult; running with 60 pounds of weaponry and equipment, all the while facing withering small-arms and artillery fire, is a nearly superhuman feat. Many of the invaders did not make it. That so many did is a credit to the quality of the military training and preparation, as well as the fortitude and power of the survival instinct of the troops. The actual film footage in the Normandy episode of the Victory at Sea documentary demonstrates some of the difficulty, but the bloodiest parts must have been edited out to make it suitable for a 1950s home audience.

The fictional first 24 minutes of the film Saving Private Ryan might more accurately portray the horror and difficulty of the assault, but still may be an understatement. Eisenhower said in his address to the American, British, and Canadian service members who were about to land on the beaches: “Your task will not be an easy one. Your enemy is well trained, well equipped, and battle-hardened. He will fight savagely.” They were about to discover that he got that right.

It could have been worse. A major part of the plan was to deceive the defenders as to where and when the attack would be made. The entire coastline was fortified, and the defending German army was indeed battle-hardened and exceptionally well led by Field Marshals Gerd von Rundstedt and Erwin Rommel. Their main problem was manpower and munitions. After five years of war and the continuing demands of the Russian front in the east, knowledge of the place and time of the landings was critical to the defenders. The deception, with some cooperation from the weather, worked. The German defenders were caught off guard at Normandy and were unable to bring the full weight of their forces to bear until a beachhead was established. In spite of the withering fire and the obstructions, even Omaha Beach was taken by day’s end. The Americans didn’t get much farther that day, though, and the casualties were huge. This beachhead, established by those soldiers, whose ranks are now thinning day by day, made it possible to end the war in Europe. Nazi Germany unconditionally surrendered eleven months and two days after D-Day. Those few that are left, and those who passed before them, merit the gratitude of us all.

For every victor there is a vanquished. So it must be added that within five years of the victory, the United States, and to some degree Great Britain and France, had become allies, if not friends, with Germany, and remained so through forty years of Cold War and beyond. There was no doubt then, or today, that the German Army was fighting on behalf of evil masters and a bad cause. Soldiers, most of whom in World War II were not fighting because they wanted to, can nevertheless fight honorably for an ignoble cause (or dishonorably for a good cause, for that matter). Soldiers know this, and once the fighting is over, they are often more inclined than the civilians far from the horrors to let bygones be bygones.

A poignant story in a British history magazine relates the ordeal of two soldiers: an American, and the German defender who shot him at Omaha Beach. Both survived the war. Heinrich Severloh manned a machine gun in a bunker in the cliff. He estimated that he fired over 12,000 rounds before he ran out of ammunition for it, and then picked up his carbine to continue shooting at the attacking Americans. Three of Severloh’s rounds hit David Silva as he and other GIs were scrambling for cover on the beach. The German was later captured and held in a POW camp until sometime after the end of the war. He was repatriated in 1946 and took up farming. After reading Cornelius Ryan’s book The Longest Day, published in 1959, Severloh learned that he was the one who shot Silva. In 1963, the two former adversaries met in Germany. Silva by that time had taken Holy Orders as a Catholic priest. The two formed a friendship, as former soldiers who fought honorably for opposing sides are often known to do, and corresponded for many years. They both suffered from the circumstances that attend the fog and maelstrom of war.

But the story of Severloh and Silva’s later relationship is only an aside. The honor today goes to Silva and his fellow servicemen who stormed the beaches on that fateful day. Them we salute.


Dallas in Super Bowl?

It’s been over a quarter of a century since the Dallas Cowboys were in the Super Bowl — or even in an NFC Championship game. Why? Well, I’ll leave that to real sports fans and pundits, neither of which I am.

Dallas, however, has a presence in this year’s Super Bowl LVII (57). The Kansas City Chiefs are in and may well win. Many of us will remember the Chiefs’ all-too-short sojourn here in 1960 – 1962 as the Dallas Texans in the fledgling American Football League (now the American Football Conference of the National Football League). Lamar Hunt, son of the legendary oilman H. L. Hunt, frustrated in efforts to land an NFL franchise for his city, helped start a rival league and formed his own team, the Texans. The NFL, at that time consisting of only 12 teams (most of which were in the Northeast), awakened and saw the expansion potential. Hunt’s rival oil baron Clint Murchison Jr. obtained an expansion franchise for his Dallas Cowboys.

Although the Texans won the AFL championship in 1962, Hunt saw the writing on the wall. At the time, the Dallas fanbase could not support two professional football teams. One had to go. The Cowboys, though they had not won a single game in their first season, were in the older, established league and had outstripped the Texans in attendance. Hunt moved his team to Kansas City, where he faced no rival for fans. Since the moniker “Texans” would have been absurd there, the team became the Chiefs.

Lamar Hunt died in 2006. But his family, most of whom live in Dallas, still owns the Chiefs, and his son Clark is the team’s Chief Executive Officer.

In a way, Dallas is in the Super Bowl this year.

For more, see: How The Dallas Texans Became The Kansas City Chiefs | Texas Standard


So what?

Though it has not been commemorated — or even noticed by most Americans — there is a steadily diminishing number among us who might regard January 27 fifty years ago as a day of infamy. On that day in 1973, the United States formally ended its involvement in the Vietnam War with an accord signed in Paris. Many regarded that end, and some still do, as an American military defeat — we turned tail and ran for the first time in our history. While it took two years for Communist North Vietnam to consolidate its victory and unify the country under its rule, that outcome was inevitable once the U.S. ended its involvement.

Was it a defeat for America? Like so many things, it depends on the definition. Defeat in war implies surrender, occupation by the enemy, reparations, regime change, and other humiliations. France under Napoleon was defeated; Germany and Japan certainly were defeated in World War II. America suffered none of these catastrophes. In this country the domestic fury and civil strife over its military intervention to support the South Vietnamese regime quickly abated. America focused on its domestic issues for the rest of the decade. The only serious foreign scrape was the Desert One debacle, when President Carter attempted to rescue the hostages from the U. S. Embassy in Iran — totally unrelated to the Vietnam situation.

Assessment of whether ending the U. S. involvement was a defeat depends on the context. The Vietnam War occurred in the middle of the Cold War. The raison d’être for American involvement in Vietnam was to halt the spread of Soviet Communism, which was seen to be a threat to world peace and freedom in the developing countries — the Third World, as it was termed. After World War II the USSR and its Cominform actively sought world domination — Soviet Premier Nikita Khrushchev famously declared “we will bury you” to the United States and its allies. Khrushchev appeared to be serious. Cuba — 90 miles from Florida — became a Soviet client state under Fidel Castro in 1959 and soon threatened the U. S. with intermediate-range missiles. Soviet-sponsored aggression in the Third World, thinly disguised as “liberation” movements, was on the march. Eastern European nations, China, and North Korea were client states of the Soviets. A decade before American full-scale involvement in Vietnam, the United States, sanctioned by the United Nations, took the lead in the effort to stop Communist aggression in Korea. Though it became a stalemate, that intervention was successful to the extent it prevented the North Korean takeover of the South. Successive administrations in Washington believed a similar result in Vietnam was possible.

Nineteenth-century Prussian soldier and military theorist Carl von Clausewitz wrote the treatise On War (Vom Kriege in German), in which he theorized that war is “a true political instrument, a continuation of political intercourse, carried on with other means.” In other words, war is one means to achieve a political end. The North Vietnamese military commander Vo Nguyen Giap and his political leader Ho Chi Minh’s aim was unification of Vietnam under a Communist regime as a client of the Soviet Union. The United States and its allies’ policy was to curtail the global spread of Soviet influence.

After the American withdrawal, which was precipitated mostly by domestic politics, the South Vietnamese were not able to fend for themselves for long. Thus, Giap and Ho achieved their policy aims.

After the normalization of the United States’ diplomatic relations with Vietnam in the 1990s, the late Senator (and one-time Presidential candidate) John McCain met with General Giap. McCain, who had been a prisoner of war in the North, reportedly told Giap that the North and its Viet Cong guerilla allies never defeated the U. S. Military in battle. Giap agreed that was true, but irrelevant. In succinct words: “So what?”

Clausewitz’s theory works both ways. The overarching containment policy of the U.S. and the West in the 1980s was continued “with other means.” These means were the economic, cultural, and moral forces led by Ronald Reagan, Margaret Thatcher, Pope John Paul II, and the courage of Polish patriot Lech Walesa and many others. November 1989 saw the fall of the Berlin Wall. Two years later the hammer and sickle was hauled down from the Kremlin, signaling that the USSR was no more. Vo Nguyen Giap had won his battle, but Soviet Communism ultimately lost the global war. To the extent America might have lost in Vietnam, we may also respond, “So what?”


— Clausewitz’s expanded view has been translated from the German as “that war is nothing more than a continuation of the political process by applying other means. By applying other means we simultaneously assert that the political process does not end with the conclusion of the war or is being transformed into something entirely different, but that it continues to exist and proceed in its essence, regardless of the means it might make use of.” See On War, translated and edited by M. Howard and P. Paret, Princeton University Press (1984 ed.). Clausewitz’s views have not gone unchallenged.
— After the Soviet Union collapsed, Dallas’s eccentric restaurateur Harvey Gough obtained a statue of Vladimir Lenin during a visit to Russia and placed it in front of his hamburger restaurant on Lovers Lane, with the words “America Won” inscribed on its base.
— Full disclosure. This writer served in the United States Army during the height of the Vietnam war. The vagaries of military personnel assignment sent me to Korea and a stateside post, not Vietnam. In that sense, I am not a “Vietnam veteran” though I could have been.



If Elvis were still with us, he would be 88 years old today. I remember well the day his death was announced on the radio. He was only 42 when he died in his Memphis, Tennessee mansion known as Graceland. Though I have been through and to Memphis several times, I have never visited Graceland, which became and remains a tourist spot.

This past November, however, on our way to the Smoky Mountains in western North Carolina, we spent the night in Tupelo, Mississippi, the place where Elvis was born. The next day, on our way out, we visited his birth home. That house, where his mother actually gave birth, is as modest as Graceland is reported to be opulent. Excluding the front porch, its exterior measures about 16 by 21 feet, the size of an average middle-class living room today. Modest though the house was, Elvis’ father was unable to repay the loan on it, and it was foreclosed only a few years after Elvis was born. The family lived in various places in Tupelo until they moved to Memphis when Elvis was 13. The rest, of course, is history.

Elvis’ house is in a park located at 306 Elvis Presley Drive. The park has a museum dedicated to Elvis’s memory, as well as the church building where the family attended services and where he sang in the choir as a child.

Except as a pilgrimage for the dedicated Elvis fan (which I am not, particularly), the site is probably not worth a special trip. But when passing through or otherwise in Tupelo, it is an interesting stop, showing that humble beginnings in our country do not foreclose the possibility of fame and fortune. Also, during his career, especially in its beginning stages, his “cultural appropriation” of what was called Rhythm & Blues, or “race music,” probably was instrumental in the civil rights movement that began in the 1950s.

Elvis Birthplace


Note: During my high school years, I, and at least one of my readers, worked a part-time job at the Circle Theater, a neighborhood cinema in Dallas (the building still exists, but hasn’t been a film venue for several decades). One of our tasks was to change the marquee when a new movie was to be shown. That required putting up a ladder and spelling the name of the show and its star with individual 10″ letters on a lighted background above the theater entrance. When one of Elvis’ films was to be shown, I complained to the manager that there were not enough of the right letters in stock to spell the name of the film and “Elvis Presley” on both sides of the marquee. The manager said to just use “ELVIS” — there is no one around who will not know who he is. Of course, the manager was right: celebrity rules, then and now.