Monday, December 18, 2017

Things That Will Not Survive the Crisis

Regular fans will know that I like a generational theory of history that predicts we will soon experience a climactic Crisis which will involve the destruction and rebuilding of key institutions.

I've started a mental list of things that won't survive the Crisis, and when you start carrying around a mental list, it's time to write a blog post. I've tried not to focus too much on my personal pet peeves, but as with anyone, I can only give my own perspective. The general theme is institutional structures that protect the older "haves" at the expense of the younger "have-nots".

1. College in its present form/role
      a. $65,000/year, government-subsidized tuition
      b. College degree needed to manage a small office or shop
      c. Private colleges with tax-exempt, eleven-digit endowments
2. The health care system
      a. MDs treating minor illnesses and injuries and writing routine prescriptions
      b. Anti-competitive practices of the American Medical Association
      c. Medical insurance will probably be obsoleted by some form of universal, socialized medicine
3. Internet, social media, web search
      a. If private companies are allowed to continue to have a role, they will be regulated utilities like the old AT&T
4. Software
      a. End-user agreements that are plainly illegal but so long that nobody reads them
      b. Strong intellectual property protections, i.e. software patents
5. Zoning and land-use restrictions
6. Licensing laws for occupations like barbers and hair braiders

Then there's the big one: government debts and other obligations. The federal government will lose the ability to perpetually roll over a huge debt, and governments at all levels, as well as some big corporations, will be unable to pay promised pensions to retirees. This will probably be the big trigger that sets off all the other changes, but exactly how it'll start and pan out is beyond my ken.

Saturday, December 16, 2017

Crashes: A Failure of Prediction

When overall economic production decreases (the technical rule of thumb is two quarters in a row), we call it a recession, a crash, or a crisis. This is accompanied by a sharp increase in unemployment, and sometimes a fall in prices.

The world's economists do not understand recessions well enough to prevent them. So we're talking about a complicated phenomenon that can be approached from many different perspectives. Here I explore one perspective that might provide a simpler understanding of the causes of recessions.

Money looks like a fixed quantity of green pieces of paper that get passed from person to person. In a recession, suddenly everyone has less money, which means money "disappeared". But how can that happen? It's not like people are burning it in bonfires.

Of course, what I just described is currency, and most money counted by the economy is not currency; it's numbers in a computer. Here is where intuitive concepts of money start to fail. What do those numbers mean? Can I buy a hamburger with them? It seems that I can. I can pay for the burger with a credit card, and then make an electronic payment to pay the card balance at the end of the month. There is no currency involved; it's just numbers in computers changing.

If the law required your bank to have a dollar bill in its vault for every dollar on your bank balance, and all the electronic payment systems were perfect, then the electronic numbers would be just like currency. But that's not how banks work. If everyone went to the bank tomorrow and tried to withdraw their balance in dollar bills, the bank would have to shut down. They don't have that many dollars. They predict that people will only ask for some small fraction of their balances on any given day, and that's how many dollar bills they keep on hand.
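The fractional-reserve arithmetic can be sketched in a few lines of Python. The balances and the reserve fraction here are made-up numbers, just to show why the bank survives a normal day but not a run:

```python
# Toy model of a fractional-reserve bank: deposits on the books
# far exceed the currency actually held in the vault.
deposits = [1200.0, 350.0, 8000.0, 95.0, 2600.0]  # hypothetical customer balances
reserve_fraction = 0.10  # bank predicts ~10% of balances get withdrawn on a given day

total_deposits = sum(deposits)
vault_cash = reserve_fraction * total_deposits

normal_day_withdrawals = 0.05 * total_deposits   # typical day: well under the reserve
bank_run_withdrawals = total_deposits            # everyone shows up at once

print(vault_cash >= normal_day_withdrawals)  # True: business as usual
print(vault_cash >= bank_run_withdrawals)    # False: the bank has to shut its doors
```

The bank's whole business rests on that prediction of daily withdrawals being roughly right.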

So the numbers on your bank balance aren't quite the same as dollar bills, but they're close. Most stuff you own isn't close to the same as dollars. Think about how you might go about calculating your net worth. You have some actual dollar bills, plus a bunch of stuff like numbers on your bank statement, a house, some stocks, and so on, that you could give someone in exchange for dollar bills. You also might have some things that give others the right to take dollars from you, like a mortgage.

Normally, you would look at something like your house, and try to predict (there's that word again) how many dollars you could get for it. That number of dollars would be the house's contribution to your net worth. But this is a surprisingly difficult thing to do. If there were a house absolutely identical to yours and it sold today for $200,000, then $200,000 is maybe not a bad prediction for the value of your house. But no two houses are exactly alike. And unless you live in a densely populated area, probably no houses that are even vaguely similar to yours sold recently. So it'll be hard to estimate your house's selling price. Also, there are expenses associated with the sale such as realtor fees, and you'll have to move, which will also cost you dollars.

Maybe a house is too tough a prediction. Let's look at the stocks you have in your IRA. Now, unlike houses, a share of IBM common stock is exactly the same as any other share of IBM common stock. You can look at the closing price from yesterday - is that a good prediction of how many dollars you could get for your share? Not at all, because you can't get dollars for the stocks in your IRA until you turn 59 1/2 (at least not without paying a penalty). Now you have to predict what a share of IBM will trade for years from now, when you turn 59 1/2. That's hard to do with any kind of accuracy, yet you have to do it because your net worth is an important number. If your net worth is high, it means you can afford a vacation this year. If your net worth is low, maybe you have to take a second job.

There is much more. Even if you could precisely estimate the dollars you could get for your stuff, you would then have to estimate how much stuff you could get for your dollars (i.e. the inflation rate).
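The net-worth exercise above can be written out as a minimal sketch: each asset's value is a prediction of the dollars you could get for it, discounted by selling costs and uncertainty. Every number and haircut below is hypothetical, for illustration only:

```python
# Net worth as a pile of predictions: each asset's "value" is a guess
# at the dollars you could actually get for it, net of costs of sale.
assets = {
    "currency":     {"predicted_price": 500.0,    "haircut": 0.00},  # dollar bills: no prediction needed
    "bank_balance": {"predicted_price": 4000.0,   "haircut": 0.00},  # close to currency
    "house":        {"predicted_price": 200000.0, "haircut": 0.08},  # realtor fees, moving costs
    "ira_stocks":   {"predicted_price": 60000.0,  "haircut": 0.15},  # guess at a price decades out
}
liabilities = {"mortgage": 120000.0}  # others' right to take dollars from you

net_worth = sum(a["predicted_price"] * (1 - a["haircut"]) for a in assets.values()) \
          - sum(liabilities.values())
print(round(net_worth, 2))
```

Nudge the haircuts a few points in either direction and the bottom line swings by thousands of dollars, which is the whole point: the number is only as good as the predictions behind it.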

Now picture thousands of companies trying to figure out their market value, and the market value of their competitors or clients, and running into even more of the prediction difficulties you had in trying to figure your net worth. They have to predict the value of things like inventories of unsold goods, intellectual "property" and so on. Being human beings, the accountants in these companies can easily fool themselves into thinking they could get more dollars for their assets than they really could. And they can all overestimate their position at the same time, because they all read the same websites and magazines. Then when the time comes for them to actually convert some of those assets into dollars, they get a bucket of cold water in the face and they suddenly don't want to spend any money on anything, not new equipment or new workers or anything.

There is your recession: solely a failure of prediction. Better predictions should lead to fewer recessions. Maybe with big data and all that, we are on our way to smoothing out the business cycle. But I kind of doubt it.

Sunday, November 19, 2017

Stack Cake Recipe

It's that time of year when a house isn't a home without some kind of freshly baked pie or cake in the air. Bonus if it involves spice. Here is a very traditional Appalachian cake that I've blogged about before but never given a full recipe. Without further ado: the Appalachian apple stack cake.

It looks like a stack of pancakes, but it's way better than pancakes.

The filling: 
Peeled, cored, quartered apples to yield 3 cups of thin slices
2/3 cup white sugar
2/3 tsp ground ginger
2/3 tsp ground nutmeg

Mix together in a big bowl and set aside while you make the cakes. Traditionally, this filling is made with dried apples, but nobody makes dried apples at home any more and if you buy them, they're ridiculously expensive. The bigger the apples, the less work in this step. I discourage using applesauce for the filling. It's just too runny. My grandmother used it decades ago, but applesauce was less watery back then.

The cake layers:
3 1/3 cups all-purpose flour
1/3 cup white sugar
1 1/3 tsp baking powder
1 1/3 tsp ground ginger
2/3 tsp ground cinnamon
2/3 tsp salt
2/3 cup melted shortening
2/3 cup sweetener (details later)
2 medium eggs
1 1/3 tsp vanilla

Preheat your oven to 350 F.

Whisk together the dry ingredients in a large bowl. Beat the eggs, then add them and the sweetener to the bowl, stirring to combine thoroughly. For the sweetener, sorghum syrup is traditional, but hard to find. This summer I found some at an Amish supply store in Mesopotamia, Ohio and have been holding on to it until now. If you can't find sorghum syrup, molasses will also work but is more strongly flavored. Once I got crazy and used apple syrup that I made from frozen apple juice concentrate - it was too appley.

Finish by stirring in the melted shortening and vanilla. You will end up with a dough, not a batter, and you'll have to finish it by kneading with your hands.

Divide the dough into four equal parts. Grease and flour a 9-inch cake layer pan. Press the dough down into the pan and bake at 350 F until the top surface is dry. It won't take long - less than 10 minutes. When it's done, run a knife around the edge to loosen it, then flip it onto a cooling rack. If you have more than one pan, you can do it in batches. The cakes will cool quickly because they're so thin. Be careful handling the layers - they'll be a little crumbly.

Now put a layer on your cake plate and spread one third of the filling evenly on it. Put another cake layer down, being careful to align it with the first one, and spread another third of the filling on top. Repeat, then top it with the fourth cake layer. You want cake on top, not filling.

Now you need to set aside the assembled cake and let things sort of meld together, maybe overnight. The filling will put off a decent amount of liquid which will soak into the cake layers and keep them from being too dry. In fact, that liquid may end up making things too moist, in which case you can pop the whole cake into the oven again for a little while to tighten things up. To make it extra fun and traditional, you can pour a quarter cup of applejack or whiskey on the top layer.

The icing:
5/6 cup powdered sugar
1/4 cup water
1/4 cup butter
1 tsp ground cinnamon
1 tsp vanilla extract

Mix the sugar and water in a saucepan, bring to a boil, and heat to 235 F, the soft-ball stage. Remove from heat, stir in the butter, cinnamon and vanilla, and set aside to cool. But don't let it cool all the way down, or it'll be hard to get onto the cake.

Spread the icing over the top of the cake. You can get fancy and let it artfully drip down the sides as well. Now you've got a real Appalachian fall treat. Slice it thin, because you can only take so much at a time.

Saturday, November 4, 2017

John Kelly and Compromise

A few posts ago I argued that the crises that occur in America about every 80 years may be due to the arrival of a generation that scorns compromise. In other words, the crises happen less because of external events than because of how the generation in power reacts to them. I discussed how the Transcendental Generation which rose to power shortly before the Civil War was marked by moral intuitionism, and how that intuitionism was expressed as harmless, hippie-like eccentricity when the generation was young and powerless, but as warmongering when the generation was older and in charge.

I then talked about how the Baby Boomers have followed a similar path as the Transcendentals and how if the path continues, a crisis could arise because of the refusal or inability of the Boomers (and their proteges, the Millennials) to compromise on some - you pick it - issue. The Boomers and Transcendentals are/were very concerned with taking strong and public moral stands, and not at all concerned with developing rational arguments that could be the basis for compromise. At their worst, they portray rational arguments as weakness. They often make a show of putting certain topics off limits, aggressively punishing those who even raise issues for discussion with hate speech regulations and taboos around certain words. Remember the push to "Ban Bossy"? (That one was actually cooked up by a Gen-Xer, but it could only exist in a world created by the Boomers.)

Last week there was a stark demonstration of my point. Gen. John Kelly went on the radio and said that the Civil War was caused by the inability to compromise, and the respectable media went apeshit.  They seemed to take the position that the Civil War was a war of African-American liberation waged on the South by the North, and to even speak of compromise was like condoning slavery. Compromise is a dirty word to them. It is not enough for them to reject compromise themselves; they must also humiliate anyone who talks about it.

Compromise had staved off war for decades and there is no reason it couldn't have continued. This may seem like making a deal with the devil, but one forgets that the Civil War cost nearly a million lives, proportionate to ten million dead today. But it's not my point to argue for or against any particular compromise. I merely observe that asking, "How many war dead was it worth for each year slavery was shortened?" is enough to set many people off. It's like asking them how much they'd be willing to sell a child for.

Of course, most people who blew up at Kelly really have no interest in the Civil War per se. The reason for their freedom-fries-style spectacle was to demonstrate that they will never compromise with Trump, whom Kelly represents. As a bonus, they got to denigrate the very idea of compromise. Is it any wonder why Congress can't get anything done these days?

Refusal to compromise eventually snowballs into a death-or-dishonor situation. Maybe the Boomers will be willing to die for whatever cause they latch onto. Or maybe they'll decide, in extremity, to compromise, but when they grab for the tools needed to do it, they'll be far out of reach.

Saturday, October 28, 2017

Clean-Keys: An Opinion Management System

Nobody should have to put up with opinions that are offensive, hurtful or just plain annoying. These days, when someone states an opinion you don't like, you don't waste time trying to persuade others that he's wrong. That's old-fashioned and possibly won't work. Now, you just cut him off economically. This can be done at the personal level (firing, blacklisting, boycotting a store), at the national level (moving an event from a state whose government you do not wholly agree with) and even at the international level (embargoes).

But the new way is costly. If Jim's Restaurant serves great food at low prices, but Jim writes a blog post stating he is not altogether convinced that Black Lives Matter has a good point, then obviously you have to stop eating at Jim's. That means you have to switch to a restaurant that is a worse value. When you're eating at the worse restaurant, you can console yourself by imagining Jim going bankrupt and having to tell his kids that he won't be able to send them to college. But there has to be a better way.

What if it were literally impossible for people to post problematic opinions? Then you'd never have to read them. And, even if someone did have bad opinions, it wouldn't matter because he'd never be able to spread them. No bad opinions means you can eat at Jim's again. It means you don't have to fire the genius coder who writes a libertarian blog. It means you don't have to move your convention to pricy Toronto just because Nashville issued a press release that uses the word "chief".

But, you ask, how can this work? The natural place to cut off the hurtful opinions is at the keyboard itself. Yes, I have a prototype. I call it "Clean-Keys™". Clean-Keys is a device driver for any keyboard that scans the input for fraught ideas. It is based on the same technology that says, "Showing results for artisanal cheese" when you mash-type "aet8sitnl cheee" into the Google search bar.

But Clean-Keys is much better than just a spellchecker. It's an idea-checker. It fixes ideas on the fly, to ensure correctness. For instance, I fired up Clean-Keys and started typing...

YOU TYPE: There are too many illegal immigrants. They use resources but don't pay taxes.

Here's what showed up on the screen:

CLEAN-KEYS: Undocumented workers help our economy. Without them, crops would rot in the fields.

Clean-Keys is technically deep:

YOU TYPE: Globalized capital has been on a crusade to destroy the American middle class for the last forty years, and has damn near succeeded

CLEAN-KEYS: Every reputable economist accepts the doctrine of Ricardian comparative advantage.

Sometimes the output has a bank-shot quality to it:

YOU TYPE: Elton John seems like a nice guy, but when he adopted that boy, it kind of creeped me out.

CLEAN-KEYS: Michael Jackson was never convicted of any crime.

Clean-Keys gets a little flustered if you provoke it:

YOU TYPE: Trump rules!!! Suck it, demonrats!

CLEAN-KEYS: Tru$%^$$%microaggression  not who we are huddled masses na+ion of immmmigrants cultural appro<Ctrl-C received on console>

YOU TYPE: Republicans would prostitute their own grandmothers in exchange for tax cuts.

CLEAN-KEYS: Fair share capppppital gainsfdfe #&^TRG death tax.

I'm still working the bugs out.

I don't want to give away the secret, but roughly speaking, Clean-Keys uses a neural network trained on old church sermons, scripts from John Wayne movies, the writings of biologists other than Stephen Jay Gould, and such filth. You can add other items depending on your politics; Noam Chomsky might be a choice if your politics skew to the right. Clean-Keys reads them all, so you don't have to. When it detects similarities, it replaces the offending material using a text generator designed by the sociology department at a leading community college.

I have to admit that this is not a totally original idea. I read a great book called 1984 where a country invented a language in which it was impossible to express bad thoughts. The words and grammar just didn't exist. It was such a great idea I dropped the book and immediately set to work on Clean-Keys. I never finished the book but I'm sure the protagonist ended up living a happy life free of annoying disagreements.

Now to the business model. The problem we had to get around is that the people who would benefit from Clean-Keys (readers) are not the people who own the keyboards (writers.) It's hard to induce people to install Clean-Keys on their own keyboards. I tried an advertising model, but people found it intrusive:

YOU TYPE: Councilman Smith is playing the race card.

CLEAN-KEYS: Councilman Smith is an outstanding voice for the rights of all citizens and noncitizens THIS MESSAGE BROUGHT TO YOU BY RADICAL BEANS COFFEE HOUSE.

The strategy I settled on is to underwrite a 1% cash back campaign in cooperation with the leading online retailers. If you place an order using a Clean-Keys enhanced keyboard, you get 1% cash back. A business guru told me this was a recipe for insolvency. Well, look at this sequence:

YOU TYPE: Clean-Keys is a menace to free expression and threatens the very foundations of our culture.

CLEAN-KEYS: Clean-Keys (TM) is a great way to earn cash back on every purchase! All real Americans use Clean-Keys (TM). My cousin Tina uninstalled Clean-Keys (TM) and she started gaining weight.

Who's the guru now?

Clean-Keys contains its own marketing and will create a bootstrapping effect once it reaches a certain market penetration. People will want to rant about Clean-Keys, but the only ones who'll be able to will be the real fanatics who can afford to pass up the 1% cash back. After hearing an overwhelmingly one-sided argument for Clean-Keys for a few months, people will demand a constitutional amendment requiring every keyboard to have Clean-Keys, and then the investment begins to turn, shall we say, profitable. I have the IP locked up tight.

So how about it?  I'm currently entertaining offers from venture capitalists...but NOT PETER THIEL! (Thanks, Clean-Keys!)

Saturday, October 21, 2017

Genealogy of Richard Feynman

I get the feeling that Feynman was slightly hostile to the idea that family history could play a role in a person's development. In his fascinating but seemingly little-known American Institute of Physics interview from 1966, he shrugs off the interviewer's questions about his ancestry. He professes not to remember things like which cousins were in his household during his childhood forty years earlier. And he pretends not to recall which year his father died (it was 1946.) But he did often speak fondly of his father's efforts to introduce him to science.

There are some entries for Feynman on paywalled genealogy sites, but they may or may not be complete, I don't know. Here is what I was able to scrape from free sources:

FEYNMAN, Richard Phillips. 5/11/1918 (New York, NY) - 2/15/1988 (Los Angeles, CA)

FEYNMAN, Henry Phillips 1/24/1924 - 2/25/1924 (Queens, New York, NY)

FEYNMAN, Melville A. 3/15/1890 (Minsk, Russia) - 10/8/1946 (Queens, New York, NY)
Birthdate from his WWI draft card, dated 6/5/17. The card also states he lived at 302 Convent Avenue, which is in Manhattan near Columbia University, and was responsible for a shirt manufacturing business, "M. Feynman", with 75 employees at 19-27 W. 21st near the Flatiron Building. Arrived in US 1893. His WW2 draft card is inconsistent as to the birthdate.
[Edit --- The 1900 census, which looks very careful, states "Mella" was born 2/1890 and came to the US in 1893, the same year as his mother and sister, while his father had been here since 1890. The birth years of the children: 1888 in Russia, 1890 in Russia, then a gap to 1894, 1896, 1898 in New York - are consistent with the parents having been separated between 1890 and 1893.]

PHILLIPS, Lucille 3/22/1895 (New York, NY) - 11/11/1981 (Pasadena, CA)
Married 3/26/1916 in Manhattan; they went to Bermuda on their honeymoon.

PHILLIPS, Isidore (1878), Ida, Pearl, Murray. Pearl was Richard Feynman's aunt who lived with them for a time.
FEYNMAN, Laura, Addie, Arthur, Bessie

FEYNMAN, Lewis 8/1862 (Minsk) - 10/13/1947 (Los Angeles). Arrived in US 1890.
---, Anna 9/1862 (Russia) - 10/19/1938 (Brooklyn, New York). Arrived in US 1893. Anna and Lewis were divorced at the time of her death.

I couldn't find a definite arrival record for the Feynmans, but did find a Yankel and Basche Feinmann, husband and wife, 26 and 22 years old, arrived at Ellis Island from Minsk on the Suevia, 3/28/1892. These were likely relatives.

[Edit: I found a passport application in 1906 for a Lewis J. Feynman, born 9/25/1863 in "[unreadable] Minsk, Russia-Poland," naturalized at Riverhead, NY in 1900, living in Patchogue, NY. This is undoubtedly RPF's grandfather. It states he arrived in the US on 5/15/1890 aboard the Bohemia, sailing from Stettin, the present-day port of Szczecin, in far western Poland.

Then there's another very interesting passport application from 1921 from Lewis Jacob Feynman, naturalized at Riverhead, NY in 1900. Here, he says he was born 5/15/1865 in "Minsk, Russia", emigrated from the port of Hamburg in 1890, and wants a passport to visit Poland to see his mother, and Palestine to "study". He says he had a passport issued in 1906 but never used it.

Assuming his mother was back in Minsk, why would he go to Poland to visit her? Minsk was in Russia in 1921; it's in Belarus today. It turns out there's another, smaller Minsk that was in Poland in 1921 and indeed still is today. Was RPF's father born in the Polish Minsk, not the more famous Belarusian one as has always been assumed?

In Perfectly Reasonable Deviations from the Beaten Path, RPF says his grandparents divorced and that his grandfather, who he refers to as Jacob instead of Lewis, ended up in Long Beach, California, where he remarried. But it looks like Grandpa spent some time overseas in between. A marriage record for his daughter states that his second wife was Eva Soltanowsky of Russia, and that their daughter was born in 1925 in Palestine.

Lewis Feynman died 10/13/1947 in Los Angeles.

RPF also stated that Lewis's last name was originally not Feynman, but possibly Pollock, and that he changed his surname to Anna's surname, Feynman, on arrival in the US. The death record of Anna seems to bear this out as it lists her father as Jacob Feynman. I ran across at least one other example of this happening: in the fascinating book Al Jaffee's Mad Life, it states that a son-in-law took the father-in-law's name because he was intended to inherit the estate. ]

PHILLIPS, Henry 4/1840 (Germany) Arrived in US 1855.
LEVENSKI, Johanna 6/1844 (Austria or Germany) Arrived in US 1850. Her parents were born in Poland.

FEYNMAN, Jacob (Russia)
WENDROFF, Sarah (Russia)
LEVENSKI, Mary 10/1826 (Austria) - Arrived in US 1845.

A comment: The census entries for the Henry Phillips household are strangely inconsistent in terms of birthdates, birthplaces and children's names. If they were any less consistent I would question whether I was looking at the same family. Henry and Johanna are on the old side to be Lucille's parents - I wonder if they are possibly an aunt and uncle. Henry was in New York by 1880.

Wednesday, October 18, 2017

Zombie Institutions

I've talked about the generational theory of history in some recent posts. The generational theorists divide history into rough 20-year phases or "turnings" that correspond to a human generation, and many will tell you we're about ten years into a "fourth turning," a period of crisis and upheaval during which failed institutions are destroyed and rebuilt. The last fourth turning was the Great Depression, Second World War and its aftermath, about 1929-1949, which featured the birth of the "liberal international order" exemplified by a mixed, trade-oriented economy in the US, the UN, the World Bank and International Monetary Fund, the national security state, and so on.

A big tell that the generational theory is correct is what happened when the Soviet Bloc collapsed. That was a fourth turning or crisis event for Eastern Europe, but it came during a third turning in the West, which is not a time for rethinking institutions. All those institutions set up to fight the Soviets: the CIA, the DOD, the Peace Corps, VOA, etc. hardly even slowed down. They just kept on rolling down the highway, doing what they were set up to do, even though the guy they were racing was now broken down on the shoulder and going no further.

But now we're getting close to a housecleaning. Halfway into this fourth turning, it should be increasingly obvious that all these institutions that were set up to solve the problems of 80 years ago are exhausted and failing, and will never be able to solve the problems of today. I call them zombie institutions. Their life blood is gone; they're dead but they don't know it yet.

I argue that we do have many zombie institutions. A major sign of a zombie institution is a sort of desperate casting about for purpose that results in trying to do a bunch of different, new things that have little to do with its original purpose. An example is local newspapers. They used to exist to disseminate local advertising, but the internet pretty much killed that 20 years ago. Some local papers died a dignified death, but a lot of others insist on carrying on, like the last drunk at 2 am on New Year's. They curtailed their printed editions and tried to reinvent themselves as "local media" but it isn't helping them regain lost ad revenue. They can't afford to do much real reporting, so now you go to their websites and see listicles of photographs from their historical files, instead of news.

Given more time, they could have regrouped and become real local media hubs with a different business model, covering local politics and sports, but most of them didn't. Why not isn't the topic of this post; my point is that they didn't, but they refuse to go away. Zombie institutions.

Another zombie institution is NASA. Here, I'm talking about the "operational" part of NASA, the part that sent men to the moon. There's a research part of NASA that is just a bunch of people trying to advance the state of the art in their little corner of science. That part was mature before the moon program was ever conceived and it'll go on long after the rest of NASA is gone. But the operational part of NASA was set up for a very specific Cold War purpose: to demonstrate space technologies with potential military spinoffs, like the ability to sit in low earth orbit for weeks or months and watch the Russkies, fiddle with their satellites or drop bombs on them "like rocks from a highway overpass," in the immortal words of House Speaker John McCormack. The moon program was the climax and culmination of that purpose.

Yet the operational part of NASA soldiers on, because "that is how we do things in space." NASA is a really discouraging example of this zombie-institution mission drift. At one point NASA had a one-sentence mission statement: Land a man on the moon and return him safely to the earth, by 12/31/69. There were other things going on in NASA in the 60s, but they were all pretty closely related to the Apollo program, and when a priority call had to be made, Apollo always won.

Now, NASA has many competing priorities. They're trying to build a new spacecraft and launch vehicle, but there is no consensus on what to do with it. They have an aeronautics program that is really important to the nation, but it gets just enough funding to be a distraction. They are supposed to support industry and small businesses. They're supposed to do outreach to groups around the world. They're supposed to promote STEM education. In the 60s, NASA did all those things too, but they were a side effect of the main purpose. Now the side programs are the whole show.

I do not argue that any of those things are unworthy of effort, only that NASA isn't set up to do them. And they get in the way of NASA's original charter. Pulling off spectacular feats in space requires total institutional focus, and NASA doesn't have it any more.

I also do not argue that this situation is the fault of any person or group. If you accept the generational hypothesis, these turnings are beyond the power of anyone to resist. In the case of NASA, there have been at least three major studies in the last 20 years that have concluded NASA lacks a clear vision, but there still isn't a clear vision. When institutions like NASA disintegrate, the new institutions will be built from mostly the same people who were in the old ones. But the people will be reshuffled. The NASA folks who are big on STEM education will be incorporated into some new institution that rethinks public education with STEM as an integral part. The NASA space folks will go to the new space agency.

Another sign of a zombie institution is that it tries to do everything but actually does nothing. You might think of Silicon Valley startups as the exact opposite of a zombie institution, but they're all a product of a certain financing setup that is pretty much out of ideas. So they have these grandiose mission statements that are some form of "we're changing the world." This was specifically mocked on the Mike Judge series Silicon Valley. They seem to think that an extremely ambitious mission statement will somehow substitute for real vision. But it doesn't work that way. When Intel started, it was about one thing: making integrated circuits. There was little of this changing the world talk. They made the integrated circuits and along the way, maybe they really did change the world.

Tuesday, October 10, 2017

Erdos-Bacon Number

The Erdos number measures how closely associated you are with the late number theorist Paul Erdos, who collaborated with hundreds of other people and thereby sort of sits at the center of the mathematical universe. If you wrote a paper with Erdos, your Erdos number is 1; if you wrote a paper with someone with an Erdos number of 1, your number is 2, and so on.

The show business equivalent of Erdos is Kevin Bacon. Supposedly every actor can connect to Kevin Bacon in a small number of steps. For example, Paul Newman has a Bacon number of 2 because he was in Fort Apache, The Bronx with Clifford David, who was in Pyrates with Kevin Bacon.

A person's Erdos-Bacon number is the sum of his Erdos number and Bacon number. Not many people have an Erdos-Bacon number. You have to have done something in both math and show business.

I have a shaky claim to an Erdos-Bacon number of just 7. That is not bad; it puts me within one of the legendary Carl Sagan, whose number is 6. How did I get a 7?

My Erdos number is an indisputable 4, because I wrote a paper with Greg Forest of UNC who has the link Forest>Richard Montgomery>Persi Diaconis>Erdos.
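In graph terms, an Erdos number is just a shortest-path distance, which you can compute with breadth-first search. Here's a minimal sketch using that chain as a toy coauthorship graph (real Erdos-number lookups would of course need the full collaboration data):

```python
from collections import deque

def collab_number(graph, start, target):
    """Fewest collaboration links from start to target (breadth-first search)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == target:
            return dist
        for coauthor in graph.get(person, []):
            if coauthor not in seen:
                seen.add(coauthor)
                queue.append((coauthor, dist + 1))
    return None  # no connection

# The chain above, as an undirected coauthorship graph
links = [("Me", "Forest"), ("Forest", "Montgomery"),
         ("Montgomery", "Diaconis"), ("Diaconis", "Erdos")]
graph = {}
for a, b in links:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)

print(collab_number(graph, "Me", "Erdos"))  # 4
```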

But I can conceivably claim a Bacon number of 3. I have to stretch it a little here. When I was in the 7th grade, I was in a school production of Macbeth with Tammy Pescatelli. (She was Lady Macbeth; I was Banquo.) It does too count as a movie, because the AV club taped it. Tammy has a Bacon number of 2 (Pescatelli>Dan Cortese>Bacon.) So that gives me 3, and my EB number is 3 + 4 = 7. That ties me with radical motormouth Noam Chomsky!

Now, if by chance I were to appear in a real production with Tammy and not just a school play, I could really cement my claim to an epically low E-B number. I just need a little more luck. God knows I was lucky to get an Erdos number at all, let alone a low one. I don't want to game it just to get the number. As my son said, "Anybody who games their Erdos-Bacon number, there's something wrong with."

I can't remember if I ever collaborated on a math or science project with Tammy, but I probably did at some point, because we were in school together for 12 years. Tammy, if you can dig one up, I will back your claim to an EB number of 8. [Correction: 7]

Thursday, September 14, 2017

Letdowns After Streaks

I've been telling people for a few days now that the Indians need to lose a game and break their streak, so that they have time to get through the letdown and get back to normal before the playoffs.

But is there really a letdown after a streak? It feels like there is, but maybe that's just an effect of elevated expectations.

I checked the won-lost records of teams in the 11 games following each of the 10 longest streaks since 1900. Ignoring the game that ended the streak, which by definition has to be a loss, what was the overall record? (Two of the streaks ended very late in the season and there weren't 11 games left.)

It turns out to be .471 (41-46), which is not great baseball, especially for a team strong enough to pull off a long streak. But it's not quite as dire as I imagined.

For the record, the streaks I looked at were

1916 Giants (both the famous 26-game record streak and an earlier 17-game streak that same season)
1935 Cubs
2002 A's
1906 White Sox
1947 Yankees
1904 Giants
1953 Yankees
1907 Giants
1912 Senators

This is a small sample, so take it for what it is.

As I write this the Indians are down 2-1 to KC. We can only hope...

Streaks Part 2

In my last post I estimated the odds of winning streaks of various lengths by simulating a large number of seasons. I came up with a 0.75% chance per season of a win streak of 19 games or longer, but the actual history is 8 such streaks in 137 seasons (6%). That is significantly more than my estimate.

One obvious correction would be to tweak my uniformly distributed team strengths to fatten up the tails. An "outlier" good team would be more likely to have a long win streak. But my distribution was already uniform. A .450 team is as likely as a .500 team, which is to say that my distribution has very fat tails. (I verified that this reasoning was true with a simulation, because I never trust my statistical intuition.) If I fattened up the tails any more, you'd have teams winning 120 and 130 games a season, which never happens.

So I did two things, both based on the fact that a season is not made up of 162 random matchups as my model originally assumed, but of about 50 3- and 4-game series with each series being either all home games or all away games against the same team. That seems like it would increase the likelihood of a streak, because you could line up a bunch of home series against weak teams.

First, I changed the season from 162 individual matchups to 54 3-game series between the same teams. That had basically no effect on the likelihood of a 19-game win streak. Then, I gave the home team a slight edge by increasing its strength 5% and decreasing the visiting team's strength by 5%. This is based on the average home record being about .550 compared to .450 on the road, which I got here. That barely moved the needle. The likelihood of a 19-game win streak was still a little less than 1%, compared to historical experience of 6%.
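For a single team, the modified setup looks roughly like this (a simplified sketch, not the exact code I ran; the league strengths here are illustrative): 54 three-game series against random opponents, each series all home or all away, with a 5% strength bump for the home side and a 5% cut for the visitor, using the win rule from my earlier posts.

```python
import random

def longest_win_streak(results):
    """Length of the longest run of wins (True values)."""
    best = cur = 0
    for won in results:
        cur = cur + 1 if won else 0
        best = max(best, cur)
    return best

def one_team_season(strength, opponent_strengths, rng):
    """54 three-game series, each all-home or all-away against one opponent."""
    results = []
    for _ in range(54):
        opp = rng.choice(opponent_strengths)
        at_home = rng.random() < 0.5
        s = strength * (1.05 if at_home else 0.95)
        o = opp * (0.95 if at_home else 1.05)
        for _ in range(3):
            # win rule: a uniform draw below s / (2 * o) means we win
            results.append(rng.random() < s / (2 * o))
    return results

# Estimate the chance of a 19-game win streak for one average team in a
# league of uniformly spread strengths (illustrative parameters)
rng = random.Random(0)
others = [rng.uniform(53, 109) for _ in range(29)]
seasons = 2000
hits = sum(longest_win_streak(one_team_season(81, others, rng)) >= 19
           for _ in range(seasons))
```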

I didn't include the effects of home stands or road trips; that is, the fact that teams usually play three or four series in a row at home or away instead of cycling between the two. But I don't see that being a big player.

A couple of other possible explanations are that streaks either psychologically build on themselves (probably impossible to verify with any rigor, due to the luck factor) or that team strength waxes and wanes during the season instead of being a fixed value throughout.  This second effect seems promising because many injuries take a few weeks to heal. The worst teams at any given time probably include good teams that have a lot of injured players. When those players get better, all of a sudden the team is good again.  Then there's the streakiness of individual players. I speculate that many slumps are due to players being injured but functional and not telling anyone.

At some point I'll build up my team strengths from player stats, instead of assigning them randomly. Then we'll be cookin' with gas, as they say.

One correction: I said that the 1916 26-game winning streak of the Giants was interrupted by a tie. That is not exactly correct. The "tie" was actually a suspended game that, by the rules of the day, had to be replayed from the start instead of picked up from where it left off as it would be today. They did replay the game (I didn't know this when I made my original remarks) and the Giants won. That's not a tie in my book. So the record really is 26 wins in a row and there should be no asterisk by it.

Tuesday, September 12, 2017

Streak Odds

The simulation I developed to find the effect of luck in baseball can be used to estimate the odds of various streaks. As it happens, the Cleveland Indians are currently sitting on a 19-game winning streak, which is the sixth-longest winning streak since 1880.

In an earlier post I said the all-time longest winning streak was 26 by the Giants in 1916, but it turns out that streak was 27 wins interrupted between wins 15 and 16 by a tie with Pittsburgh. (A tie? According to Retrosheet they finished the top of the 9th tied at 1-1, but the Giants didn't bat in the bottom of the 9th, for unrecorded reasons. I'm guessing it started raining, and then they never completed the game because neither team contended that year.)

The 1916 Giants also had a 17-game winning streak earlier in the season, but they only came in fourth!

What are the odds of any team getting a 19-game win streak or better in a given season? I set up my team strengths as shown in the scatter plot on the left, and then ran 1000 simulated 162-game seasons. The histogram of longest streaks is shown on the right. There were 15 win or loss streaks of 19 or more, so that would be 15/2/1000 = 0.75% chance per season.

The actual number of streaks of 19 or more since 1880 (137 seasons but most were fewer than 162 games) is 8 (6%). So there's a fat tail effect, or something, going on that I'm not accounting for.
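For reference, the core of the estimate is just: simulate seasons, check each team for a long run of identical results, count the long runs, and divide by two since by symmetry about half of them are losing streaks. A stripped-down sketch (coin-flip games here for brevity, which understates streaks by strong teams):

```python
import random

def longest_run(flags):
    """Longest run of consecutive True values."""
    best = cur = 0
    for f in flags:
        cur = cur + 1 if f else 0
        best = max(best, cur)
    return best

def streak_chance(n_seasons=1000, n_teams=30, n_games=162, threshold=19, seed=1):
    """Estimated per-season chance of a win streak of `threshold`+ games."""
    rng = random.Random(seed)
    long_runs = 0
    for _ in range(n_seasons):
        for _ in range(n_teams):
            wins = [rng.random() < 0.5 for _ in range(n_games)]
            losses = [not w for w in wins]
            if longest_run(wins) >= threshold or longest_run(losses) >= threshold:
                long_runs += 1
    return long_runs / 2 / n_seasons   # about half the long runs are losing streaks
```

With coin flips, `streak_chance()` comes out well under 1%, which is the same ballpark as my simulation even before team strengths enter the picture.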

A window company in Cleveland offered free window jobs to anyone who bought windows in July, if the Indians had a 15-game winning streak. You got the deal if you bought by July 31, at which time the Tribe had 58 games left to play. What are the odds of a 15-or-better winning streak by one of the 30 teams in 58 games?

I calculated it by looking at the longest streak for "Team 1" of my ensemble over 10,000 58-game "seasons". (It took 10,000 simulated seasons to get a stable value.) That streak was 15 or greater just 14 times, so the chance of a 15-game winning streak was 14/2/10,000 = 0.07%. The figure below shows on the left the strength and actual wins over the 58 games for the 30 teams, and on the right the histogram of longest streaks by Team 1 in each season. The caption should read 10,000 seasons, not 1000 seasons.

I didn't use all the information available. I could have looked only at teams that happened to have 57 wins in their first 104 games (as the Indians did), which would have taken a lot more simulations but probably wouldn't have changed the results much, because 57 wins out of 104 is not much better than average.

As is typical of these kinds of promotions, the window company itself didn't take on the risk of having to pay out. They paid a promotion company, which took the risk. What would have been a fair price to pay the promotion company? They sold about $2 million worth of windows, so the expected payout would be 0.07% x $2 million or $1400. Even if they paid $10,000, that promotion company had to eat a very spicy meatball when the Tribe won their 15th game.
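For the record, the fair-price arithmetic is simple expected value (using the $2 million sales figure and 0.07% odds from above; a real fair price would also include the promotion company's margin, which I'm ignoring):

```python
prob_payout = 14 / 2 / 10_000     # simulated chance of a 15-game streak: 0.07%
sales = 2_000_000                 # approximate dollars of windows sold
expected_payout = prob_payout * sales   # ≈ $1,400
```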

Now, a philosophical excursion. It only makes sense to talk about probability and odds when there is some degree of ignorance. On July 31, everyone was ignorant of how the Indians would actually play, but there were varying degrees of knowledge about their record so far, their injuries, which teams they were scheduled to play, how many home versus away games and other information that a sophisticated model could use to estimate the odds. Given what we know today, what are the odds the Indians would have won their 19th straight last night?


Monday, September 4, 2017


I trained a neural network on the lyrics of a certain popular songwriter and then had it generate a short song:

Got in a little favor for him.
I wanna find one place, I wanna find one face that ain't looking through me.
Down in the U.S.A.
Born in the shadow of the refinery.

I'm a cool rocking Daddy in the face of these....
Whoa whoa whoa badlands!. Whoa whoa whoa badlands!. Whoa whoa whoa whoa whoa badlands!. Whoa whoa whoa.
For the ones who had a woman he loved in Saigon.

I was born in the shadow of the penitentiary.
I was born in the night, with a fear so real, you spend your life just covering up.
Learned real good right now, you better listen to me, baby.
I'm a long gone Daddy in the shadow of the penitentiary.

If you can't figure out whose lyrics I trained the network on, you must not be between the ages of 30 and 80. I used this guy's code.

Are You Ready For Some Football?

tl;dr: The small number of games in the NFL season strongly exaggerates the differences between teams. The NFL rule of scheduling six of a team's games against division rivals would have no effect on the actual results of a season decided by coin flips. But division scheduling very slightly exaggerates differences when real differences already exist.

Major League Baseball teams play 162 games a season, which are clearly enough to separate the truly good teams from the merely lucky ones. In the NFL it's only 16 games. Is that enough to separate the great from the lucky?

First, I ran the same simulation I used in my last two posts but set the number of teams to 32 and the number of games per season to 16. With each game decided by the flip of a fair coin (therefore, no ties), here is one example season (sorry about the formatting, Blogger has a fixed column width):

North                 South                 East                  West
Cincinnati    12-4    Indianapolis  9-7     NY Jets       14-2    Denver        8-8
Cleveland     12-4    Tennessee     9-7     New England   9-7     LA Chargers   8-8
Pittsburgh    9-7     Jacksonville  8-8     Miami         5-11    Oakland       7-9
Baltimore     7-9     Houston       6-10    Buffalo       7-9     Kansas City   3-13
North                 South                 East                  West
Minnesota     11-5    Atlanta       11-5    Philadelphia  8-8     Arizona       10-6
Chicago       10-6    Tampa Bay     8-8     Washington    7-9     Seattle       9-7
Detroit       7-9     Carolina      7-9     Dallas        7-9     LA Rams       6-10
Green Bay     6-10    New Orleans   4-12    NY Giants     6-10    San Francisco 6-10

Two things you notice right away are that there seem to be too many teams within a game of .500 (7-9, 8-8, or 9-7), and that there isn't enough separation between the teams in most of the divisions. There are 17 teams within one game of .500, but in 2016 there were actually only 11 such teams. And in two divisions, no team is more than two games from .500. That's unusual.

Obviously, if I assigned unequal strengths to the teams, this would tend to create some separation. But there is another thing that might work. In my simulation, the schedule ignores divisions. That is,  each of the 16 games a team plays is a random matchup with one of the other 31 teams. The Browns are as likely to play the Saints as they are the Steelers. But in the real NFL, 

1. A team plays its division rivals twice
2. A team plays all four teams in another division in its conference once
3. A team plays all four teams in another division in the other conference once
4. A team plays its remaining two games against teams from the two remaining divisions in its conference.

Rule 1 seems like it might be important in creating separation within a division. In effect, 3/8 of the season is played between just four teams, and each of those games separates two teams in a division by one game. There is a 100% chance of creating a one-game separation. In contrast, when two teams play opponents outside the division, there's a 50% chance of a one-game separation (one team wins, one loses) and a 50% chance of no separation (both win or both lose).

I almost bought that argument. But when the games are decided by coin flips, the expectation value of separation per game is still zero regardless of the number of teams. If that doesn't convince you, consider that in a simulation of 1000 seasons, the coefficient of variation of wins per team was 0.4065 for a 6-game, 4-team season and 0.4093 for a 6-game, 32-team season - not a statistically significant difference.

Anyway, I re-ran the simulation continuing to decide games by coin flips but taking into account Rule 1. Here's how it came out:

North                 South                 East                  West
Cincinnati    12-4    Jacksonville  11-5    Miami         12-4    Oakland       10-6
Baltimore     9-7     Houston       10-6    New England   11-5    Kansas City   8-8
Pittsburgh    6-10    Indianapolis  8-8     NY Jets       7-9     Denver        5-11
Cleveland     5-11    Tennessee     6-10    Buffalo       8-8     LA Chargers   4-12
North                 South                 East                  West
Detroit       9-7     Tampa Bay     11-5    Philadelphia  10-6    San Francisco 9-7
Green Bay     7-9     Carolina      10-6    NY Giants     7-9     Seattle       8-8
Chicago       7-9     Atlanta       8-8     Washington    7-9     LA Rams       7-9
Minnesota     6-10    New Orleans   7-9     Dallas        6-10    Arizona       5-11

It made very little difference. There are now only 15 teams within one game of .500, but there are still two tightly bunched divisions. 

What happens if we assign random team strengths instead of just flipping a coin? I'll just base it on the CV. For uniformly distributed team strengths between 4 wins/season and 12 wins/season, the CV of wins per team for a league without divisions (no Rule 1) was 0.46 in 1000 simulated seasons. With Rule 1, it was 0.49.  So Rule 1 does exaggerate the differences between teams when a real difference already exists. But it's a weak effect. 
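Here's a sketch of the scheduling experiment (this uses the symmetric win rule sa/(sa+sb) rather than the exact rule from my baseball posts, so treat the CVs it produces as illustrative): with Rule 1, six of each team's 16 games are a double round-robin inside its four-team division, and the rest are random one-game-per-team rounds.

```python
import random, statistics

def season_schedule(rule1, rng, n_teams=32):
    """16 games per team. With rule1, each four-team division plays a
    double round-robin (6 games); the rest are random rounds."""
    games, rounds = [], 16
    if rule1:
        for d in range(0, n_teams, 4):
            for i in range(d, d + 4):
                for j in range(i + 1, d + 4):
                    games += [(i, j), (i, j)]
        rounds = 10
    for _ in range(rounds):
        order = list(range(n_teams))
        rng.shuffle(order)
        games += list(zip(order[::2], order[1::2]))
    return games

def wins_cv(n_seasons, rule1, strengths=None, seed=0):
    """Coefficient of variation of wins per team over many seasons."""
    rng = random.Random(seed)
    s = strengths or [1.0] * 32          # equal strengths = coin-flip games
    all_wins = []
    for _ in range(n_seasons):
        wins = [0] * 32
        for a, b in season_schedule(rule1, rng):
            if rng.random() < s[a] / (s[a] + s[b]):
                wins[a] += 1
            else:
                wins[b] += 1
        all_wins += wins
    return statistics.pstdev(all_wins) / statistics.mean(all_wins)
```

Comparing `wins_cv(1000, True)` against `wins_cv(1000, False)`, with and without a spread-out `strengths` list, reproduces the shape of the experiment above, though the exact numbers depend on the win rule.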

The range of 4-12 wins/season for team true strengths seems about right. So now you want to see the cloud plot of wins for the NFL. Here it is for 100 simulated seasons:

The scatter in wins per season is huge. An average team wins anywhere from 3 to 13 games a season. And the plot of "luck ratio" on the right is cleaner than it was for baseball and clearly shows that there's more random variation in wins for weaker teams than for stronger ones. 

Sunday, September 3, 2017

More Baseball Simulations

One question raised by my post yesterday is what shape the distribution of true strengths has. Is it a bell curve, a uniform distribution, or something in between?

We can't answer that question directly, because we can never observe the true strengths, only the actual win-loss records. But the shape of the true strength distribution might have an effect on the shape of the actual distributions of wins per season, which we can observe.

If I assume the following bell curve for true strength

then I get the following distribution of wins per season (this was over 137 seasons for reasons I'll explain later):

But if I assume the following flat distribution of strengths:

then I get this distribution of wins per season:

This example looks "blockier" than the one from the bell curve, but in fact its coefficient of variation is 0.12, compared to 0.13 for the bell curve result. So it's not really possible to tell from the actual outcomes whether the distribution of true strengths is bell-shaped or flat - and if you can't tell, then it doesn't matter, at least for the purpose of predicting the distribution of wins.
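Here's a sketch of that comparison: draw strengths two ways with the same standard deviation, run seasons with the win rule from the earlier post, and compare the CVs of wins. (The parameters below are illustrative, not the exact ones behind the plots above.)

```python
import random, statistics

def simulate_cv(draw_strength, n_seasons=50, n_teams=30, n_games=162, seed=0):
    """CV of single-season win totals when true strengths come from draw_strength.
    Win rule: Team A beats Team B if a uniform draw < strength_A / (2 * strength_B)."""
    rng = random.Random(seed)
    all_wins = []
    for _ in range(n_seasons):
        s = [draw_strength(rng) for _ in range(n_teams)]
        wins = [0] * n_teams
        for _ in range(n_games):               # one game per team per "day"
            order = list(range(n_teams))
            rng.shuffle(order)
            for a, b in zip(order[::2], order[1::2]):
                if rng.random() < s[a] / (2 * s[b]):
                    wins[a] += 1
                else:
                    wins[b] += 1
        all_wins += wins
    return statistics.pstdev(all_wins) / statistics.mean(all_wins)

# Same standard deviation (16.2 wins), two shapes
half_width = 16.2 * 3 ** 0.5        # a uniform's sd is its width / sqrt(12)
cv_bell = simulate_cv(lambda r: r.gauss(81, 16.2))
cv_flat = simulate_cv(lambda r: r.uniform(81 - half_width, 81 + half_width))
```

The two CVs come out close together, which is the point: matching the spread matters, the shape barely does.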

The histogram of actual wins for the last six MLB seasons is

and its CV is 0.135. You could probably do some more sophisticated tests, but in my experience doing this kind of modeling, if a result isn't apparent to the eye, no fancy test is going to be convincing. One thing that's interesting about the actual MLB histogram is the "dip" in the middle. This could just be random chance and might go away if more seasons were included, but it could be that as the season goes on, talent tends to drain from the weaker teams and go to the stronger teams, which could make the win histogram bimodal. Teams that have big payrolls but are out of the playoff hunt by August are often looking to unload what talent they have to the teams that are going to make a run for October, so bad teams get worse and good teams get better.

I ran 137 seasons because I wanted to get some statistics on win streaks. Here's a histogram of the longest win streak by any team during each of the 137 simulated seasons:

This distribution is definitely skewed. Its mode (the commonest value) is 12, but in no season was the longest streak less than 10. There are 22 streaks of 16 or longer, and the longest streak of all the 137 seasons is 26. This is not far from reality. In the past 137 seasons, there are 30 MLB streaks of 16 games or longer, and the all-time longest streak during that span of time was by the New York Giants of John McGraw, who won 26 in a row in a ridiculous September 101 years ago. 

Saturday, September 2, 2017

The Role of Luck in Baseball

In Major League Baseball, there is decent parity. The span between the worst and best teams in baseball right now is the 51-83 (.381) Phillies to the 92-41 (.692) Dodgers. In contrast, the worst and best teams in the NBA last year were .244 (Brooklyn) and .817 (Golden State), and the worst and best teams in the NFL were .063 (Cleveland, eeegh) and .875 (New England).

There is an element of luck in every game. When the Phillies play the Dodgers, the Dodgers will probably win, but nobody is really shocked if the Phillies pull one out. Maybe the Dodgers stayed out too late the night before, or had a rough flight to Philly.

But in the long run, the "better" team will beat the "worse" team more often than not. I put better and worse in quotes because I haven't exactly defined a team's true strength yet. Here is my definition: the true strength of a baseball team is the average number of wins it would get over an infinite number of seasons. That way, the effect of luck washes out completely. For example, an average team would get 81 wins per 162 games, if they played forever. By forever, I mean the same roster, at the same age and skill level, playing hypothetical repeated seasons forever. Obviously, they aren't getting older and older in these hypothetical seasons, as they would in real life.

Considering the effect of luck, you can see how the shortness of the NFL season (16 games) might tend to exaggerate differences between teams. The Browns clearly suck, but over a large set of seasons they might average 2 or 3 wins instead of the single win they got last year.

How does luck affect the number of wins a baseball team gets in one season, compared to its true strength? The baseball season has 10 times as many games as the NFL season, so the effect of luck should be a lot less than in the NFL. I ran some simulations to find out.

I ran 100 full seasons where 30 teams play each other in random matchups for 162 games. At the beginning of each season, I assign true strengths to the teams from a normal distribution with a coefficient of variation of 0.2. That results in true strengths running from about 40 to about 120 expected wins per season. Then I run through all 162 x 15 = 2,430 games per season. (Remember that on each game day, 30 teams play a total of 15 games.)

Each game goes like this: I draw a number from a uniform distribution between 0 and 1. If that number is less than

Team A's strength / (2 * Team B's strength)

then Team A wins. Otherwise, Team B wins. From this formula you can verify that if Team A has strength 90, and plays average teams (strength of 81) over and over, then Team A will win an average of 90 games per season in the long run. So this satisfies my definition of the team's true strength.
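In code, a single game is one line, and the calibration claim is easy to check (a sketch):

```python
import random

def team_a_wins(strength_a, strength_b, rng):
    """A uniform draw below strength_A / (2 * strength_B) means Team A wins."""
    return rng.random() < strength_a / (2 * strength_b)

# A strength-90 team facing a steady diet of average (strength-81) opponents
# should win about 90 per 162 games in the long run.
rng = random.Random(42)
n = 200_000
wins = sum(team_a_wins(90, 81, rng) for _ in range(n))
print(round(162 * wins / n, 1))    # ≈ 90
```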

But the outcome of each game has an element of chance. Drumroll, please...

From left: Distribution of team strengths, actual wins versus strength for all teams and seasons, and ratio of actual to expected wins for all teams and seasons

When the team strengths are normally distributed, an average team (average 81 wins per season over infinity seasons) won as few as 65 and as many as 95 games during the 100 simulated seasons. That's the difference between first and last place. The plot of actual divided by expected wins was a check. It should average to 1 for all strength values, which it does, except for the very weak teams (not sure what's going on there, maybe a problem with my random number generator.) But it shows that the scatter is bigger for weak teams. That is, it's more likely for a weak team to do unexpectedly well or unexpectedly poorly than for a strong team. That is good - it means luck plays the least role for the strongest teams, which are the ones that get the glory. If a really crappy team gets lucky, it probably still won't be enough to affect a championship.

I then repeated the simulation but instead of choosing normally distributed strengths, I chose them from a uniformly random distribution on an interval. I set the interval width such that the standard deviation of the uniform distribution matched that of the normal distribution used previously.

Uniformly random draw of team strengths and the resulting actual wins and "luck ratio" versus team strength

In this simulation, the scatter was a little smaller, as might be expected. A team of average true strength (81 wins expected) got between maybe 68 and 90 wins over the 100 simulated seasons. It looks like the actual/expected plot shows the same narrowing of the scatter as team strength increases, but it's hard to say. 

If all the strengths are set equal to 81 (average), the outcome of each game is essentially decided by a coin flip. If a team won more than 81 games, it would be solely due to luck. In this case I found that on average, the winningest team had 90-95 wins per season, which is a very solid year. This would suggest, for instance, that probably every season, one of the division champions is a complete fluke. It took a large number of seasons (more than 10,000) to get a stable value for this number, and I didn't have the patience to narrow it down further. The type of distribution used for team strengths didn't seem to matter.
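The all-luck baseline is easy to reproduce: make every game a fair coin flip and track the best record each season. A quick sketch (as noted above, a truly stable average takes far more seasons than this):

```python
import random

def best_record(n_teams=30, n_games=162, rng=random):
    """One all-luck season: fair-coin games, one game per team per round;
    returns the league-leading win total."""
    wins = [0] * n_teams
    for _ in range(n_games):
        order = list(range(n_teams))
        rng.shuffle(order)
        for a, b in zip(order[::2], order[1::2]):
            if rng.random() < 0.5:
                wins[a] += 1
            else:
                wins[b] += 1
    return max(wins)

rng = random.Random(7)
avg_best = sum(best_record(rng=rng) for _ in range(300)) / 300
```

`avg_best` lands in the low-to-mid 90s, consistent with the 90-95 range above.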

You could do all kinds of things with this simulation - and I'm sure serious gamblers do. For example, you could estimate the likelihood of a 10-game winning streak and then try to find someone to bet against who underestimated the true odds. With a lot of bets like that, I suspect you could make money consistently. But that's a suspicion I probably shouldn't pursue until my kids are out of college. 

Saturday, August 26, 2017

The Great American Eclipse

Yes! We went to the path of totality for the great eclipse of 2017. This trip was five years in the making, but we only seriously started planning about a year ago. Our first idea was to drive to the nearest location of totality, which would have been in maybe Kentucky or Tennessee. But the humidity of the eastern US made me worry about cloud cover, so we decided to go west. As it turned out, most places in the East had good weather on eclipse day, but I have no regrets about the extra travel to get west. We figured that if we were going to take time off work and shell out for a trip, it was worth a few dollars more to maximize the chance of a good view. 

It came down to either eastern Oregon or Wyoming. Eastern Oregon had slightly better weather odds, but it looked hard to get to without a whole lot of driving. So we went with Wyoming, specifically eastern Wyoming away from the mountains which tend to generate and trap clouds. We could get there in about three hours from the Denver airport, which is easily reachable for us on Southwest.

We ended up staying in Guernsey, Wyoming in a very nice, new hotel at reasonable rates. But that took some doing, and it’s a good thing we planned ahead. Shortly before the eclipse, rooms were going for $500-$1,000 a night with multi-day minimum stays, and rental cars at the Denver airport were $1,500 a day. That is not an exaggeration. There were thousands of tents in campgrounds and on ranchland, all along the roads. One place in Guernsey was asking $150 for an (I assume overnight) parking space.

More power to them if they made money, but this seems like a bad deal:

Tents pitched in a grassy area between a hotel parking lot and the North Platte River on the morning of the eclipse (Guernsey, Wyoming)

The National Weather Service had a cloud cover prediction that was updated twice a day or so. Guernsey was well within the path of totality, but 48 hours before the eclipse, the NWS was calling for over 30% cover versus 5% in Casper, about an hour and a half west. So we planned on getting up really early on Monday and driving up to Casper. But on Sunday night the prediction changed, to 15% around Guernsey and 25% in Casper, but with a wide band of 5% in between. We decided to go to Glendo State Park, in the 5% zone, keeping open the option of moving around if there was a reason to. It's a good thing we didn't try to go to Casper, because we'd never have made it. We would have watched the eclipse from our car on the side of I-25. By 8 a.m., I-25 west of Glendo was at a standstill, filled with people from Denver who'd left at 3 or 4 a.m. That was too late; we talked to people who'd left at 2 a.m. and they'd reached their viewing spots just as the traffic was locking up. People who didn't leave Denver until 7 a.m. never made it out of Colorado. 

Local totality time was 11:45 a.m. When we pulled out of Guernsey at 7 a.m., the roads were moving well. There was some congestion near the park, but no real line at the ticket booth (the state of Wyoming charged a very reasonable $6 a car, compared to some private viewing sites in Casper that were asking $50 or more.) 

View from Wyoming Route 319 just south of Glendo, parallel to I-25, which is where the line of traffic is sitting. This was about 7:30 am.
Glendo State Park was set up really well – I’d estimate there were 5,000 cars there and space was available for many more, although there was not enough road to accommodate entry and exit (more on that later.) Several college astronomy departments had tents up, and the University of Wyoming gave us free t-shirts. Some people had large (10-inch or bigger) fancy-looking telescopes. We parked and walked over to a pavilion where we struck up a conversation with a guy from Italy. He invited us to observe some sunspots using filtered binoculars on a tripod. We also met a space weather specialist from the Johns Hopkins Applied Physics Lab and I had a long talk with her about the ins and outs of government-funded research. But most importantly – there was not a cloud in the sky!

Eclipse watchers near Bennett Hill, Glendo State Park. The tripod holds the filtered binoculars of our new Italian friend.

Panoramic view at the foot of Bennett Hill
Near the pavilion was flat-topped, rocky Bennett Hill with a path leading up. We trekked up the hill around 10:00, and found about 200 people at the top, some in folding chairs, some standing around, and some sitting on the bare ground. The view from up there was tremendous, eclipse or none  – it must have been 50 miles from horizon to treeless horizon. The atmosphere was very slightly hazy from some wildfires several states away, but that was only noticeable along the ground. The overhead sky was clear and blue.  We had heard that if you have a wide enough view, you could see the shadow of totality advancing across the ground at something like twice the speed of sound. I venture to say that the only way to improve on this viewing spot would have been to go airborne, which some people did in a helicopter and a hot-air balloon we saw overhead. Many videos taken from the hill can be found on YouTube.

View from atop Bennett Hill
On Bennett Hill, pre-eclipse
Eclipse watchers on Bennett Hill, Glendo State Park, Wyoming
The boys bought these t-shirts at the Casper Eclipse Festival the day before. 
One lady had a big white sheet spread on the ground to capture the shadow bands that are supposed to happen just before totality. But the buildup to totality had us all a little bored. You could see the moon covering the sun using eclipse glasses, but it was so clear and sunny that it didn’t get noticeably darker until about 15 minutes before totality. We listened to “Brain Damage/Eclipse” by Pink Floyd, just as I'd planned it five years ago. Then the light became…the only word is unworldly. An antelope was spotted near the top of the hill within a couple of minutes of totality, and it drew everyone’s attention. I wanted to yell out, “Forget the antelope!”

Totality came on very suddenly, which is characteristic of being right in the middle of the path, which we were. It happened too fast to look for shadow bands, and I didn’t see the approaching shadow or any Baily’s Beads. (In fact, I'm skeptical that the approach of a distinct shadow is ever visible, because we didn't see it under these near-perfect conditions.) The temperature dropped but I didn’t notice any change in the wind. The sky darkened as if someone had quickly turned down the knob on an adjustable room light. The stars came out, and the horizon took on the pink-orange of a sunset at all azimuths, not just in the west. We had rehearsed the taking of a single picture of my boys and me with the corona in the background, and got that out of the way quickly.  Then we just looked at the corona.

The one photo we took during totality. You can just make out the dark spot in the middle of the sun. In real life, the dark spot covered the entire photosphere, leaving only the corona. But the brightness of the corona almost obscures the dark spot in the photo.

I can only partially capture it in words. There was an illusion of the sun only being a few thousand feet high. There were three very long white streamers from the corona, much longer than you see in pictures. A high airplane crossed the corona, leaving a faint contrail. The corona looked like a bright, white, round fire with a perfectly circular hole in the middle of it. The boundary between the umbra (dark circle in the center) and the corona was very slightly dynamic, not like a flame. The corona streamers were stable. It could easily be viewed without eclipse glasses; the brightness was not harsh on the eyes. I could have watched it for hours, but of course it ended after two and a half minutes. Then we were treated to a very distinct “diamond ring” before the sun’s photosphere was uncovered again, and the lights went back on. There was an artificial quality to it – like a very high-quality planetarium show, only it covered the entire goddamned sky. It is no exaggeration that a Siberian tiger could have waltzed through the crowd during totality and nobody would have noticed.

After totality, people started down from the hill. The rest of the eclipse was anticlimactic and only the real astronomy buffs continued to observe it. Everything had gone perfectly, just as planned, up to this point. Then…

It took about half an hour to walk back to our car, and I foolishly started the engine as if we were going to just drive off. But the cars were at a standstill on the only road out. So I turned off the engine and we waited another half hour. The next two hours were short periods of driving down the exit road interrupted by long periods of standstill waiting. It was only about 80 degrees, but inside the rental car it got hot quickly with the engine and A/C off. The fun was only starting.

We intended to go west on I-25, then cut south at Casper to Independence Rock and thence on to Steamboat Springs, Colorado, normally about a five-hour drive. But we didn’t even exit the park onto Wyoming Route 319 for two hours. We were moving so slowly that we were able to get out and visit the porta-johns between movements of the vehicle line. Kids were selling water and popsicles from wagons, and they were moving a lot faster than we were. Once on the road, nothing sped up. There was another line of jammed cars coming in the opposite direction, which we soon figured out were eclipse viewers leaving from Casper who had hit a huge traffic jam on I-25 south and had exited thinking the state route would be faster. There were trucks off-roading it, driving on the dirt path along the railroad that ran between I-25 and the state highway. People were hanging out windows and sunroofs, sitting on top of campers, and walking along the roadside. I got out and walked for a while myself, to stretch my back. Usually it was the car that had to catch up to me, not the other way around. It was like the traffic jam scene from Woodstock.

Traffic jam on Wyoming Route 319 north of Glendo, several hours after the eclipse

Three hours and fifteen miles later we came to US 20, which cut us over to I-25 north. At the intersection of Wyoming 319 and US 20 there was a stop sign, with nobody directing traffic. There must have been three or four thousand cars in that line of traffic, and every one of them was stopping at the stop sign. Assume each stop takes five seconds, multiply by 3,000 cars, and you get over four hours of cumulative delay for the cars at the back of the line. You quickly understand the cause of the holdup.
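The stop-sign arithmetic works out as a quick back-of-the-envelope calculation (the car count and the five-second stop duration are rough guesses, not measurements):

```python
# Back-of-the-envelope estimate of cumulative stop-sign delay.
# Assumptions (rough guesses): ~3,000 cars in line, ~5 seconds per full stop.
cars = 3000
seconds_per_stop = 5

total_delay_seconds = cars * seconds_per_stop
total_delay_hours = total_delay_seconds / 3600

print(f"Cumulative delay for the last car: {total_delay_hours:.1f} hours")
# With these numbers: 15,000 seconds, or about 4.2 hours
```

That figure lines up with the roughly three hours it actually took to cover those fifteen miles.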

When we finally got onto I-25, we could see stopped traffic on the southbound side stretching for twenty or thirty miles. It was the biggest traffic jam I’ve ever seen, and I used to live in Los Angeles. I-25 north was clear, but our plan of going to Steamboat Springs was in the trash. We made it to Casper by 6 p.m., having already spent seven hours in the car, and called it quits. There was no way I was going to drive hundreds more miles of unlit two-lane Wyoming state highway after that much car fatigue.

The only problem was that we had no room reserved in Casper, and there was no possible way to get back south toward Cheyenne through the traffic. There were only three towns of any size between Glendo and Casper, and they didn't look like they had any hotels. The distances in the western Great Plains are far longer than in other parts of the country, and there can be sixty miles between cities that have even basic services.

With the eclipse crowd not yet fully out of Casper, we ended up paying $250 for a smoking room at a low-end Days Inn. It may well have been the last room in town. We also had to forfeit a night’s room charge in Steamboat Springs because it was a nonrefundable reservation. We headed to Independence Rock and Colorado the next day, but the traffic cost us an entire day of our vacation.

Verdict: Worth every penny and every iota of hassle. People have lived their whole lives without seeing a total solar eclipse and that's almost tragic.

Lessons learned: Stay near a big city if the path permits it. They’re set up to accommodate hundreds of thousands of tourists; Wyoming isn’t. You can keep your location flexible, to avoid cloud cover, until the day of the event, but don’t expect to be mobile on eclipse day. You're going to have to just hunker down and hope the sky is clear. Thus the importance of getting to an area with good overall weather odds. (If everyone in a large city ever had to leave suddenly due to some kind of calamity, and there was no special traffic pattern set up, the scene would be very, very bad. I have new respect for the people who do this type of disaster planning.) Reserve your room and car at least a year in advance, and try not to tack on side trips for a couple days on either side of the eclipse. But most of all…do it if you possibly can.