Was Sturgis a Covid-19 Superspreader Event?: Evidence Suggests That It May Well Have Been

A.  Introduction

The Sturgis Motorcycle Rally is an annual 10-day event for motorcycle enthusiasts (in particular of Harley-Davidsons), held in Sturgis, a normally small town in far western South Dakota.  It was held again this year, from August 7 to August 16, despite the Covid-19 pandemic, and drew an estimated 460,000 participants.  Motorcyclists gather from around the country for lots of riding, lots of music, and lots of beer and partying.  And then they go home.  Cell phone data indicate that fully 61% of all the counties in the US were visited by someone who attended Sturgis this year.

Due to the pandemic, the town debated whether to host the event this year.  But after some discussion, it was decided to go ahead.  And it is not clear that town officials could have stopped it even if they had wanted to.  Riders would likely have shown up anyway.

Despite the ongoing Covid-19 pandemic, masks were rarely seen.  Indeed, many of those attending were proud of their defiance of the standard health guidelines that masks should be worn and social distancing respected, especially at such crowded events.  T-shirts were sold, for example, declaring “Screw Covid-19, I Went to Sturgis”.

Did Sturgis lead to a surge in Covid-19 cases?  Unfortunately, we do not have direct data on this, because identification of the possible sources of someone’s Covid-19 infection is incredibly poor in the US.  There is little investigation of where someone might have picked up the virus, and far from adequate contact tracing.  Indeed, even those who attended the rally and later came down with Covid-19 found that their state health officials were often not terribly interested in whether they had been at Sturgis.  The systems were simply not set up to record this.  And those attending who later became sick with the disease were not always open about where they had been, given the stigma.

One is therefore left only with anecdotal cases and indirect evidence.  Recent articles in the Washington Post and the New York Times were good reports, but could only cover a number of specific anecdotal cases, as well as describe the party environment at Sturgis.  One can, however, examine indirect evidence.  It is reasonable to assume that those motorcycle enthusiasts who had a shorter distance to travel to Sturgis from their homes would be more likely to go.  Hence nearby states would account for a higher share (adjusted for population) of those attending Sturgis and then returning home than would states farther away.  If so, and if Covid-19 was indeed spread among those attending Sturgis, one would see a greater degree of seeding of the virus that causes Covid-19 in the nearby states than in states farther away.  And those nearby states would then see more of a subsequent rise in Covid-19 cases, as the infectious disease spread from person to person, than would states farther away.

This post will examine this, starting with the chart at the top of this post.  As is clear in that chart, by early November states geographically closer to Sturgis had far higher cases of Covid-19 (as a share of their population) than those further away.  And the incidence fell steadily with geographic distance, in a relationship that is astonishingly tight.  Simply knowing the distance of the state from Sturgis would allow for a very good prediction (relative to the national average) of the number of daily new confirmed cases of Covid-19 (per 100,000 of population) in the 7-day period ending November 6.

A first question to ask is whether this pattern developed only after Sturgis.  If it had been there all along, including before the rally was held, then one cannot attribute it to the rally.  But we will see below that there was no such relationship in early August, before the rally, and that it then developed progressively in the months following.  This is what one would expect if the virus had been seeded by those returning from Sturgis, who then may have given this infectious disease to their friends and loved ones, to their co-workers, to the clerks at the supermarkets, and so on, and then each of these similarly spreading it on to others in an exponentially increasing number of cases.

To keep things simple in the charts, we will present them in a standard linear form.  But one may have noticed in the chart above that the line in black (the linear regression line), which provides the best fit (in a statistical sense) of a straight line to the scatter of points, does not work that well at the two extremes.  The points at the extremes (for very short distances and very long ones) are generally above the line, while the points in the middle range are often below it.  This is the pattern one would expect when what matters to the decision to ride to the rally is not some fixed increment of distance (an extra 100 miles, say), but rather a given percentage increase (an extra 10%, say).  In such cases, a logarithmic curve rather than a straight (linear) line will fit the data better, and we will see below that indeed it does here.  This will also be useful in the statistical regression analysis that will examine possible explanations for the pattern.

It should be kept in mind, however, that what is being examined here are correlations, and correlations alone cannot establish with certainty that the cause was necessarily the Sturgis rally.  And we obviously cannot run this experiment repeatedly in a lab, under varying conditions, to see whether the result would always follow.

Might there be some other explanation?  Certainly there could be.   Probably the most obvious alternative is that the surge in Covid-19 cases in the upper mid-west of the US between September and early November might have been due to the onset of cold weather, where the states close to Sturgis are among the first to turn cold as winter approaches in the US.  We will examine this below.  There is, indeed, a correlation, but also a number of counter-examples (with states that also turned colder, such as Maine and Vermont, that did not see such a surge in cases).  The statistical fit is also not nearly as good.

One can also examine what happened across the border in the neighboring provinces of Canada.  The weather there also turned colder in September and October, and indeed by more than in the upper mid-west of the US.  Yet the incidence of Covid-19 cases in those provinces was far less.

What would explain this?  The answer is that it is not cold weather per se that leads to the virus being spread, but rather cold weather in situations where socially responsible behavior is not being followed – most importantly mask-wearing, but also social distancing, avoidance of indoor settings conducive to the spread of the virus, and so on.  As examined in the previous post on this blog, mask-wearing is extremely powerful in limiting the spread of the virus that causes Covid-19.  But if many do not wear masks, for whatever reason, the virus will spread.  And this will be especially so as the weather turns colder and people spend more time indoors with others.

This could lead to the results seen if states that are geographically closer to Sturgis also have populations that are less likely to wear masks when they go out in public.  And we will see that this was likely indeed a factor.  For whatever reason (likely political, as the near-by states are states with high shares of Trump supporters), states geographically close to Sturgis have a generally lower share of their populations regularly wearing masks in this pandemic.  But the combination of low mask-wearing and falling temperatures (what statisticians call an interaction effect) was supplemental to, and not a replacement of, the impact of distance from Sturgis.  The distance factor remained highly significant and strong, including when controlling for October temperatures and mask-wearing, consistent with the view that Sturgis acted as a seeding event.

This post will take up each of these topics in turn.

B.  Distance to Sturgis vs. Daily New Cases of Covid-19 in the Week Ending November 6

The chart at the top of this post plots the average daily number of confirmed new cases of Covid-19 over the 7-day period ending November 6 in a state (per 100,000 of population), against the distance to Sturgis.  The data for the number of new cases each day was obtained from USAFacts, which in turn obtained the data from state health authorities.  The data on distance to Sturgis was obtained from the directions feature on Google Maps, with Sturgis being the destination and the trip origin being each of the 48 states in the mainland US (Hawaii and Alaska were excluded), plus Washington, DC.  Each state was simply entered (rather than a particular address within a state), and Google Maps then defaulted to a central location in each state.  The distance chosen was then for the route recommended by Google, in miles and on the roads recommended.  That is, these are trip miles and not miles “as the crow flies”.

When this is done, with a regular linear scale used for the mileage on the recommended routes, one obtains the chart at the top of this post.  For the week ending November 6, those states closest to Sturgis saw the highest rates of Covid-19 new cases (130 per 100,000 of population in South Dakota itself, where Sturgis is in the far western part of the state, and 200 per 100,000 in North Dakota, where one should note that Sturgis is closer to some of the main population centers of North Dakota than it is to some of the main population centers of South Dakota).  And as one goes further away geographically, the average daily number of new cases falls substantially, to only around one-tenth as much in several of the states on the Atlantic.

The model is a simple one:  The further away a state is from Sturgis, the lower its rate (per 100,000 of population) of Covid-19 new cases in the first week of November.  But it fits extremely well even though it looks at only one possible factor (distance to Sturgis).  The straight black line in the chart is the linear regression line that best fits, statistically, the scatter of points.  A statistical measure of the fit is called the R-squared, which varies between 0% and 100% and measures what share of the variation observed in the variable shown on the vertical axis of the chart (the daily new cases of Covid-19) can be predicted simply by knowing the regression line and the variable shown on the horizontal axis (the miles to Sturgis).

The R-squared for the regression line calculated for this chart was surprisingly high, at 60%.  This is astonishing.  It says that if all we knew was this regression line, then we could have predicted 60% of the variation in Covid-19 cases across states in the week ending November 6 simply by knowing how far each state is from Sturgis.  States differ in numerous ways that will affect the incidence of Covid-19 cases in their territory.  Yet here, knowing just the distance to Sturgis, we can predict 60% of how Covid-19 incidence varies across the states.  Regressions such as these are called cross-section regressions (the data here are across states), and in such regressions R-squared values are rarely higher than 20%, or at most perhaps 30%.
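
For readers curious about the mechanics, the slope and R-squared of such a simple linear regression can be computed in a few lines.  The sketch below uses NumPy with purely synthetic, made-up numbers (not the actual state data), constructed so that case rates decline with distance:

```python
import numpy as np

def linear_fit_r2(x, y):
    """Fit y = a + b*x by ordinary least squares; return (slope, R-squared)."""
    b, a = np.polyfit(x, y, 1)                # slope first, then intercept
    residuals = y - (a + b * x)
    ss_res = np.sum(residuals ** 2)           # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
    return b, 1.0 - ss_res / ss_tot

# Synthetic illustration: 49 "states" (48 mainland states plus DC)
rng = np.random.default_rng(0)
miles = np.linspace(300, 2500, 49)            # hypothetical distances to Sturgis
cases = 150 - 0.05 * miles + rng.normal(0, 15, miles.size)  # made-up case rates

slope, r2 = linear_fit_r2(miles, cases)
print(f"slope: {slope:.4f} cases per mile, R-squared: {r2:.1%}")
```

The R-squared here is simply the share of the variance in the case rates that the fitted line accounts for, which is why it ranges between 0% and 100%.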

But as was discussed above in the introduction, trip decisions involving distances often work better (fit the data better) when the scale used is logarithmic.  On a logarithmic scale, what enters into the decision to make the trip or not is not some fixed increment of distance (e.g. an extra 100 miles) but rather some proportional change (e.g. an extra 10%).  A statistical regression can then be estimated using the logarithms of the distances, and when this estimated line is re-calculated back onto the standard linear scale, one has the curve shown in blue in the chart:

The logarithmic (or log) regression line (in blue) fits the data even better than the simple linear regression line (in black), including at the two extremes (very short and very long distances).  And the R-squared rises to 71% from the already quite high 60% of the linear regression line.  The only significant outlier is North Dakota.  If one excludes North Dakota, the R-squared rises to 77%.  These are remarkably high for a cross-section analysis.
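
The same machinery illustrates why the logarithmic specification can fit better.  In the synthetic example below (again made-up numbers, not the actual data), case rates are constructed to decline with the log of distance; a straight line in log(miles) then fits better than a straight line in miles:

```python
import numpy as np

def ols_r2(x, y):
    """R-squared of an ordinary least squares fit of y = a + b*x."""
    b, a = np.polyfit(x, y, 1)
    residuals = y - (a + b * x)
    return 1.0 - residuals.var() / y.var()

# Synthetic data: cases fall with the *logarithm* of distance, plus noise
rng = np.random.default_rng(1)
miles = np.geomspace(100, 2500, 49)           # hypothetical distances
cases = 400 - 55 * np.log(miles) + rng.normal(0, 8, miles.size)

r2_linear = ols_r2(miles, cases)              # straight line in miles
r2_log = ols_r2(np.log(miles), cases)         # straight line in log(miles)
print(f"linear R-squared: {r2_linear:.1%}, log R-squared: {r2_log:.1%}")
```

Because a fixed proportional change in distance (rather than a fixed increment of miles) drives the relationship, the fit in logs captures the curvature that a straight line misses at the two extremes.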

This simple model therefore fits the data well, indeed extremely well.  But there are still several issues to consider, starting with whether there was a similar pattern across the states before the Sturgis rally.

C.  Distance to Sturgis vs. Daily New Cases of Covid-19 in the Week Ending August 6, and the Progression in Subsequent Months

The Sturgis rally began on August 7.  Was there possibly a similar pattern as that found above in Covid-19 cases before the rally?  The answer is a clear no:

In the week ending August 6, the relationship of Covid-19 cases to distance from Sturgis was about as close to random as one can ever find.  If anything, the incidences of Covid-19 cases in the 10 or so states closest to Sturgis were relatively low.  And for all 48 states of the Continental US (plus Washington, DC), the simple linear regression line is close to flat, with an R-squared of just 0.4%.  This is basically nothing, and is in sharp contrast to the R-squared for the week ending November 6 of 60% (and 71% in logarithmic terms).

One should also note the magnitudes on the vertical scale here.  They range from 0 to 40 cases (per 100,000 of population) per day in the 7-day period.  In the chart for cases in the 7-day period ending on November 6 (as at the top of this post), the scale goes from 0 to 200.  That is, the incidence of Covid-19 cases was relatively low across US states in August (relative to what it was later in parts of the US).  That then changed in the subsequent months.  Furthermore, one can see in the charts above for the week ending November 6 that the states further than around 1,400 miles from Sturgis still had Covid new case rates of 40 per day or less.  That is, the case incidence rates remained in that 0 to 40 range between August and early November for the states far from Sturgis.  The states where the rates rose above this were all closer to Sturgis.

There was also a steady progression in the case rates in the months from August to November, focused on the states closer to Sturgis, as can be seen in the following chart:

Each line is the linear regression line found by regressing the number of Covid-19 cases in each state (per 100,000 of population) for the week ending August 6, the week ending September 6, the week ending October 6, and the week ending November 6, against the geographic distance to Sturgis.  The regression lines for the week ending August 6 and the week ending November 6 are the same as discussed already in the respective charts above.  The September and October ones are new.

As noted before, the August 6 line is essentially flat.  That is, the distance to Sturgis made no difference to the number of cases, and they are also all relatively low.  But then the line starts to twist upwards, with the right end (for the states furthest from Sturgis) more or less fixed and staying low, while the left end rotated upwards.  The rotation is relatively modest for the week ending September 6, is more substantial in the month later for the week ending October 6, and then the largest in the month after that for the week ending November 6.  This is precisely the path one would expect to find with an exponential spread of an infectious disease that has been seeded but then not brought under effective control.

D.  Might Falling Temperatures Account for the Pattern?

The charts above are consistent with Sturgis acting as a seeding event that later then led to increases in Covid-19 cases that were especially high in near-by states.  But one needs to recognize that these are just correlations, and by themselves cannot prove that Sturgis was the cause.  There might be some alternative explanation.

One obvious alternative would be that the sharp increase in cases in the upper mid-west of the US in this period was due to falling temperatures, as the northern hemisphere winter approached.  These areas generally grow colder earlier than in other parts of the US.  And if one plots the state-wide average temperatures in October (as reported by NOAA) against the average number of Covid-19 cases per day in the week ending November 6 one indeed finds:

There is a clear downward trend:  States with lower average temperatures in October had more cases (per 100,000 of population) in the week ending November 6.  The relationship is not nearly as tight as that found for the one based on geographic distance from Sturgis (the R-squared is 35% here, versus 60% for the linear relationship based on distance), but 35% is still respectable for a cross-state regression such as this.

However, there are some counterexamples.  The average October temperatures in Maine and Vermont were colder than all but 7 or 10 states (for Maine and Vermont, respectively), yet their Covid-19 case rates were the two lowest in the country.

More telling, one can compare the rates in North and South Dakota (with the two highest Covid-19 rates in the country in the week ending November 6) plus Montana (adjacent and also high) with the rates seen in the Canadian provinces immediately to their north:

The rates are not even close.  The Canadian rates were all far below those in the US states to their south.  The rate in North Dakota was fully 30 times higher than the rate in Saskatchewan, the Canadian province just to its north.  There is clearly something more than just temperature involved.

E.  The Impact of Wearing Masks, and Its Interaction With Temperature

That something is the actions followed by the state or provincial populations to limit the spread of the virus.  The most important is the wearing of masks, which has proven to be highly effective in limiting the spread of this infectious disease, in particular when complemented with other socially responsible behaviors such as social distancing, avoiding large crowds (especially where many do not wear masks), washing hands, and so on.  Canadians have been far more serious in following such practices than many Americans.  The result has been far fewer cases of Covid-19 (as a share of the population) in Canada than in the US, and far fewer deaths.

Mask wearing matters, and could be an alternative explanation for why states closer to Sturgis saw higher rates of Covid-19 cases.  If a relatively low share of the populations in the states closer to Sturgis wear masks, then this may account for the higher incidence of Covid-19 cases in those near-by states.  That is, perhaps the states that are geographically closer to Sturgis just happen also to be states where a relatively low share of their populations wear masks, with this then possibly accounting for the higher incidence of cases in those states.

However, mask-wearing (or the lack of it), by itself, would be unlikely to fully account for the pattern seen here.  Two things should be noted.  First, while states that are geographically closer to Sturgis do indeed see a lower share of their population generally wearing masks when out in public, the relationship to this geography is not as strong as the other relationships we have examined:

The data in the chart on the share who wear masks by state come from the COVIDCast project at Carnegie Mellon University, which was discussed in the previous post on this blog.  The relationship found is indeed a positive one (states geographically farther from Sturgis generally have a higher share of their populations wearing masks), but there is a good deal of dispersion in the figures and the R-squared is only 27.5%.  This, by itself, is unlikely to explain the Covid-19 rates across states in early November.

Second, and more importantly:  While the states closer to Sturgis generally have a lower share of mask-wearing, this would not explain why one did not see similarly high rates of Covid-19 incidence in those states in August.  Mask-wearing was likely similar then.  The question is why Covid-19 incidence rose in those states between August (following the Sturgis rally) and November, and not simply why it was high in those states in November.

However, mask-wearing may well have been a factor.  But rather than accounting for the pattern all by itself, it may have had an indirect effect.  With the onset of colder weather, more time would be spent with others indoors, and wearing a mask when in public is particularly important in such settings.  That is, it is the combination of both a low share of the population wearing masks and the onset of colder weather which is important, not just one or the other.

These are called interaction effects, and investigating them requires more than can be depicted in simple charts.  Multiple regression analysis (regression analysis with several variables – not just one as in the charts above) can allow for this.  Since it is a bit technical, I have relegated a more detailed discussion of these results to a Technical Annex at the conclusion of this post for those who are interested.

Briefly, a regression was estimated that includes miles from Sturgis, average October temperatures, and the share who wear masks when out in public, plus an interaction effect between the share wearing masks and October temperatures, all as independent variables affecting the observed Covid-19 case rates of the week ending November 6.  And this regression works quite well.  The R-squared is 75.4%, and each of the variables (including the interaction term) is either highly significant (miles from Sturgis) or marginally so (confidence levels of between 6 and 8% for the other variables, slightly worse than the 5% confidence level commonly used, but not by much).

Note in particular that the interaction term matters, and matters even while each of the other variables (miles to Sturgis, October temperatures, and mask-wearing) is also taken into account individually.  In the interaction term, it is not simply the October temperatures or the share wearing masks that matter, but the two acting together.  That is, the impact of relatively low temperatures in October will matter more in states where mask-wearing is low than it will in states where mask-wearing is high.  If people generally wore masks when out in public (and also followed the other socially responsible behaviors that go along with it), the falling temperatures would not matter as much.  But when they don’t, the falling temperatures matter more.
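
The structure of such a multiple regression with an interaction term can be sketched as follows.  The data below are simulated with NumPy (with coefficient signs loosely echoing those reported in the Technical Annex, but otherwise made up); the point is only to show how the interaction enters as an additional column of the design matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 49                                         # 48 mainland states plus DC

# Simulated independent variables (made-up ranges)
log_miles = rng.uniform(np.log(300), np.log(2500), n)  # log distance to Sturgis
log_temp = rng.uniform(np.log(35), np.log(75), n)      # log October temperature
masks = rng.uniform(60, 95, n)                         # % regularly wearing masks

# Simulated case rates, with signs echoing the annex (negative on miles,
# temperature, and masks; positive on the temperature-mask interaction)
cases = (600 - 37 * log_miles - 500 * log_temp - 22 * masks
         + 5.4 * log_temp * masks + rng.normal(0, 10, n))

# Design matrix: intercept, the three variables, and the interaction column
X = np.column_stack([np.ones(n), log_miles, log_temp, masks, log_temp * masks])
coef, _, _, _ = np.linalg.lstsq(X, cases, rcond=None)
print("intercept, miles, temp, masks, interaction:", np.round(coef, 2))
```

The interaction column is literally the product of the two variables; its coefficient measures how the effect of October temperature changes as the mask-wearing share changes.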

From this overall regression equation, one can also use the coefficients found to estimate what the impact would be of small changes in each of the variables.  These impacts are called elasticities, and based on the estimated equation (computing the changes around the sample means for each of the variables):  a 1% reduction in the number of miles from Sturgis would lead to a 1.0% rise in the incidence of Covid-19 cases; a 1% reduction (not a 1 percentage point reduction, but rather a 1% reduction from the sample mean) in the share of the population wearing masks when out in public would lead to a 1.7% rise in the incidence of Covid-19 cases; and a 1% reduction in the average October temperature across the different states would lead to a 1.2% rise in the incidence of Covid-19 cases.  All of these elasticity estimates look quite plausible.

These results are consistent with an explanation in which the Sturgis rally acted as a significant superspreader event that led to increased seeding of the virus, especially in nearby states.  This then led to significant increases in the incidence of Covid-19 cases in the different states as the infectious disease spread to friends and family and others in the subsequent months, again especially in the states closest to Sturgis.  Those increases were highest in the states that grew colder earlier than others and where the share of the population regularly wearing masks was relatively low.  That is, the interaction of the two mattered.  But even with this effect controlled for, along with the impacts of colder temperatures and of mask-wearing individually, the impact of miles to Sturgis remained and was highly significant statistically.

F.  Conclusion

As noted above, the analysis here cannot and does not prove that the Sturgis rally acted as a superspreader event.  There was only one Sturgis rally this year, one cannot run repeated experiments of such a rally under various alternative conditions, and the evidence we have are simply correlations of various kinds.  It is possible that there may be some alternative explanation for why Covid-19 cases started to rise sharply in the weeks after the rally in the states closest to Sturgis.  It is also possible it is all just a coincidence.

But the evidence is consistent with what researchers have already found on how the virus that causes Covid-19 is spread.  Studies have found that as few as 10% of those infected may account for 80% of those subsequently infected with the virus.  And it is not just the biology of the disease and how a person reacts to it, but also whether the individual is then in situations with the right conditions to spread it on to others.  These might be as small as family gatherings, or as large as big rallies.  When large numbers of participants are involved, such events have been labeled superspreader events.

Among the most important of conditions that matter is whether most or all of those attending are wearing masks.  It also matters how close people are to each other, whether they are cheering, shouting, or singing, and whether the event is indoors or outdoors.  And the likelihood that an attendee who is infectious might be there increases exponentially with the number of attendees, so the size of the gathering very much matters.

A number of recent White House events matched these conditions, and a significant number of attendees soon after tested positive for Covid-19.  In particular, about 150 attended the celebration on September 26 announcing that Amy Coney Barrett would be nominated to the Supreme Court to take the seat of the recently deceased Ruth Bader Ginsburg.  Few wore masks, and at least 18 attendees later tested positive for the virus.  And about 200 attended an election night gathering at the White House.  At least 6 of those attending later tested positive.  While one can never say for sure where someone may have contracted the virus, such clusters among those attending such events are very unlikely unless the event was where they got the virus.  It is also likely that these figures are undercounts, as White House staff have been told not to let it become publicly known if they come down with the virus.  Finally, as of November 13 at least 30 uniformed Secret Service officers, responsible for security at the White House, have tested positive for the coronavirus in the preceding few weeks.

There is also increasing evidence that the Trump campaign rallies of recent months led to subsequent increases in Covid-19 cases in the local areas where they were held.  These ranged from studies of individual rallies (such as 23 specific cases traced to three Trump rallies in Minnesota in September), to a relatively simple analysis that looked at the correlation between where Trump campaign rallies were held and subsequent increases in Covid-19 cases in that locale, to a rigorous academic study that examined the impact of 18 Trump campaign rallies on the local spread of Covid-19.  This academic study was prepared by four members of the Department of Economics at Stanford (including the current department chair, Professor B. Douglas Bernheim).  They concluded that the 18 Trump rallies led to an estimated extra 30,000 Covid-19 cases in the US, and 700 additional deaths.

One should expect that the Sturgis rally would act as even more of a superspreader event than those campaign rallies.  An estimated 460,000 motorcyclists attended the Sturgis rally, while the campaign rallies involved at most a few thousand at each.  Those at the Sturgis rally could also attend for up to ten days; the campaign rallies lasted only a few hours.  Finally, there would be a good deal of mixing of attendees at the multiple parties and other events at Sturgis.  At a campaign rally, in contrast, people would sit or stand at one location only, and hence only be exposed to those in their immediate vicinity.

The results are also consistent with a rigorous academic study of the more immediate impact of the Sturgis rally on the spread of Covid-19, by Professor Joseph Sabia of San Diego State University and three co-authors.  Using anonymous cell phone tracking data, they found that counties across the US that received the highest inflows of returning participants from the Sturgis rally saw, in the immediate weeks following the rally (up to September 2), an increase of 7.0 to 12.5% in the number of Covid-19 cases relative to counties without such inflows.  But their study (issued as a working paper in September) looked only at the impact in the immediate few weeks following Sturgis.  They did not consider what such seeding might then have led to.  The results examined here, over a longer horizon (up to November 6), are consistent with their findings.

It is therefore fully plausible that the Sturgis rally acted as a superspreader event.  And the evidence examined in this post supports such a conclusion.  While one cannot prove this in a scientific sense, as noted above, the likelihood looks high.

Finally, as I finish writing this, the number of deaths in the US from this terrible virus has just surpassed 250,000.  The number of confirmed cases has reached 11.6 million, with this figure rising by 1 million in just the past week.  A tremendous surge is underway, far surpassing the initial wave in March and April (when the country was slow to discover how serious the spread was, due in part to the botched development in the US of testing for the virus), and far surpassing also the second, and larger, wave in June and July (when a number of states, in particular in the South and Southwest, re-opened too early and without adequate measures, such as mask mandates, to keep the disease under control).  Daily new Covid-19 cases are now close to 2 1/2 times what they were at their peak in July.

This map, published by the New York Times (and updated several times a day) shows how bad this has become.  It is also revealing that the worst parts of the country (the states with the highest number of cases per 100,000 of population) are precisely the states geographically closest to Sturgis.  There is certainly more behind this than just the Sturgis rally.  But it is highly likely the Sturgis rally was a significant contributor.  And it is extremely important if more cases are to be averted to understand and recognize the possible role of events such as the rally at Sturgis.

Average Daily Cases of Covid-19 per 100,000 Population

7-Day Average for Week Ending November 18, 2020

Source:  The New York Times, “Covid in the US:  Latest Map and Case Count”.  Image from November 19, with data as of 8:14 am.

Technical Annex:  Regression Results

As discussed in the text, a series of regressions were estimated to explore the relationship between the Sturgis rally and the incidence of Covid-19 cases (the 7-day average of confirmed new cases in the week ending November 6) across the states of the mainland US plus Washington, DC.  Five will be reported here, with regressions on the incidence of Covid-19 cases (as the dependent variable) as a function of various combinations of three independent variables: miles from Sturgis (in terms of their natural logarithms), the average state-wide temperature in October (also in terms of their natural logarithms), and the share of the population in the respective states who reported they always or most of the time wore masks when out in public.  Three of the five regressions are on each of the three independent variables individually, one on the three together, and one on the three together along with an interaction effect measured by multiplying the October temperature variable (in logs) with the share wearing masks.  The sources for each variable were discussed above in the main text.

The basic results, with each regression by column, are summarized in the following table:

Regressions on State Covid-19 Cases – November 6

     Miles to Sturgis and Temperatures are in natural logs

                                  Miles     Temp      Masks    Miles, Temp,  All with
                                  only      only      only     & Masks       Interaction
Miles to Sturgis   Slope          -54.9                        -41.9         -36.6
                   t-statistic    -10.7                        -5.2          -4.3
Avg Temperature    Slope                    -133.3             -45.5         -516.8
                   t-statistic              -5.5               -2.0          -1.9
Share Wear Masks   Slope                              -3.1     -0.8          -22.4
                   t-statistic                        -3.9     -1.3          -1.8
Interaction        Slope                                                     5.44
 (Temp x Masks)    t-statistic                                               1.8
Intercept                         425.5     572.5     309.4    582.5         2,422.5
                   t-statistic    11.9      6.0       4.5      7.1           2.3
R-squared                         71.0%     39.4%     24.2%    73.7%         75.4%

In the regressions with each independent variable taken individually, all the estimated coefficients (slopes) are highly significant.  The general rule of thumb is that a 5% significance level is adequate to call a relationship statistically “significant” (i.e., it is unlikely that an estimated coefficient this far from zero would arise just from random variation in the data).  A t-statistic of 2.0 or higher (in absolute value) signals significance at the 5% level in a large sample, and the t-statistics are well in excess of that in each of the single-variable regressions.  The R-squared is quite high, at 71.0%, for the regression on miles from Sturgis, but more modest in the other two (39.4% and 24.2% for October temperature and mask-wearing, respectively).

The estimated coefficients (slopes) are also all negative.  That is, the incidence of Covid-19 goes down with additional miles from Sturgis, with higher October temperatures, and with higher mask-wearing.  The actual coefficients themselves should not be compared to each other for their relative magnitudes.  Their size will depend on the units used for the individual measures (e.g. miles for distance, rather than feet or kilometers; or temperature measured on the Fahrenheit scale rather than Centigrade; or shares expressed as, say, 80 for 80% instead of 0.80).  The units chosen will not matter.  Rather, what is of interest is how the predicted incidence of Covid-19 changes when there is, say, a 1% change in any of the independent variables.  These are elasticities and will be discussed below.

In the fourth regression equation (the fourth column), where the three independent variables are all included, the statistical significance of the mask-wearing variable drops to a t-statistic of just 1.3.  The significance of the temperature variable also falls to 2.0, which is at the borderline for the general rule of thumb of 5% confidence level for statistical significance.  The miles from Sturgis variable remains highly significant (its t-statistic also fell, but remains extremely high).  If one stopped here, it would appear that what matters is distance from Sturgis (consistent with Sturgis acting as a seeding event), coupled with October temperatures falling (so that the thus seeded virus spread fastest where temperatures had fallen the most).

But as was discussed above in the main text, there is good reason to view the temperature variable as acting not solely by itself, but in interaction with whether masks are generally worn or not.  This is tested in the fifth regression, where the three individual variables are included along with an interaction term between temperatures and mask-wearing.  The temperature, mask-wearing, and interaction variables now all have a similar level of significance, although at just less than 5% (at 6% to 8% for each).  While not quite 5%, keep in mind that the 5% is just a rule of thumb.  Note also that the positive sign on the interaction term (the 5.44) is an indication of curvature: coupled with the negative signs on the temperature and mask-wearing variables taken alone, it indicates that the effects of temperature and mask-wearing diminish at the margin at higher values of those variables.  Finally, the miles to Sturgis variable remains highly significant.

Based on this fifth regression equation, with the interaction term allowed for, what would be the estimated response of Covid-19 cases to changes in any of the independent variables (miles to Sturgis, October temperatures, and mask-wearing)?  These are normally presented as elasticities: the predicted percentage change in Covid-19 cases when one assumes a small (1%) change in any of the independent variables.  In a mixed equation such as this, where some terms are linear and some logarithmic (plus an interaction term), the resulting percentage change can vary depending on the starting point chosen.  The conventional starting point is the sample means, and that is what is done here.

Also, I have expressed the elasticities here in terms of a 1% decrease in each of the independent variables (since our interest is in what might lead to higher rates of Covid-19 incidence):

Elasticities from Full Equation with Interaction Term

      Percent Increase in Number of Covid-19 Cases from a 1% Decrease Around Sample Means

                              Elasticity
Miles to Sturgis              1.02%
October Temperature           1.16%
Share Wearing Masks           1.69%

All these estimated elasticities are quite plausible.  If one is 1% closer in geographic distance to Sturgis (starting at the sample mean, and with the other two variables of October temperature and mask-wearing also at their respective sample means), the incidence of Covid-19 cases (per 100,000 of population) as of the week ending November 6 would increase by an estimated 1.02%.  A 1% lower October temperature (from the sample mean) would lead to an estimated 1.16% increase in Covid-19 cases.  And the impact of mask-wearing is stronger still: a 1% reduction in the share wearing masks would lead to an estimated 1.69% increase in cases, with all the other factors here taken into account and controlled for.
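Because the equation mixes linear and logarithmic terms plus an interaction, each elasticity has its own formula.  The sketch below computes them from the coefficients in the fifth regression; the sample means used are illustrative assumptions (the post does not report them), so the results only approximate the figures in the table.

```python
# With cases C = b0 + b1*ln(miles) + b2*ln(temp) + b3*masks
#                   + b4*ln(temp)*masks, the elasticities at a point are:
#   wrt miles: b1 / C
#   wrt temp:  (b2 + b4*masks) / C
#   wrt masks: (b3 + b4*ln(temp)) * masks / C
# Coefficients are from the regression table; the means are assumptions.
import math

b1, b2, b3, b4 = -36.6, -516.8, -22.4, 5.44

mean_cases = 36.0   # assumed mean cases per 100k (week ending Nov 6)
mean_temp = 55.0    # assumed mean October temperature (deg F)
mean_masks = 87.0   # assumed mean % always/mostly wearing masks

e_miles = b1 / mean_cases
e_temp = (b2 + b4 * mean_masks) / mean_cases
e_masks = (b3 + b4 * math.log(mean_temp)) * mean_masks / mean_cases

# Expressed for a 1% *decrease* in each variable, the signs flip:
print(f"1% closer to Sturgis : +{-e_miles:.2f}% cases")
print(f"1% colder October    : +{-e_temp:.2f}% cases")
print(f"1% less mask-wearing : +{-e_masks:.2f}% cases")
```

Note that the miles elasticity depends only on its own coefficient and the mean level of cases, while the temperature and mask-wearing elasticities each pull in the interaction coefficient, which is why the interaction term matters for the interpretation.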

These results are consistent with a conclusion that the Sturgis rally led to a significant seeding of cases, especially in near-by states, with the number of infections then growing over time as the disease spread.  The cases grew faster in those states where mask-wearing was relatively low, and in states with lower temperatures in October (leading people to spend more time indoors).  When the falling temperatures were coupled with a lower share (than elsewhere) of the population wearing masks, the rate of Covid-19 cases rose especially fast.

A Carbon Tax with Redistribution Would Be a Significant Help to the Poor

A.  Introduction

Economists have long recommended taxing pollution as an effective as well as efficient way to achieve societal aims to counter that pollution.  What is commonly called a “carbon tax”, but which in fact would apply to all emissions of greenhouse gases (where carbon dioxide, CO2, is the largest contributor), would do this.  “Cap and trade” schemes, where polluters are required to acquire and pay for a limited number of permits, act similarly.  The prime example in the US of such a cap and trade scheme was the program to sharply reduce the sulfur dioxide (SO2) pollution from the burning of coal in power plants.  That program was launched in 1995 and was a major success.  Not only did the benefits exceed the costs by a factor of 14 to 1 (with some estimates even higher – as much as 100 to 1), but the cost of achieving that SO2 reduction was only one-half to one-quarter of what officials expected it would have cost had they followed the traditional regulatory approach.

Cost savings of half or three-quarters are not something to sneer at.  Reducing greenhouse gas emissions, which is quite possibly the greatest challenge of our times, will be expensive.  The benefits will be far greater, so it is certainly worthwhile to incur those expenses (and it is just silly to argue that “we cannot afford it” – the benefits far exceed the costs).  One should, however, still want to minimize those costs.

But while such cost savings are hugely important, one should also not ignore the distributional consequences of any such plan.  These are a concern of many, and rightly so.  The poor should not be harmed, both because they are poor and because their modest consumption is not the primary cause of the pollution problem we are facing.  But this is where there has been a good deal of confusion and misunderstanding.  A tax on all greenhouse gas emissions, with the revenue thus generated then distributed back to all on an equal per capita basis, would be significantly beneficial to the poor in purely financial terms.  Indeed it would be beneficial to most of the population since it is a minority of the population (mostly those who are far better off financially than most) who account for a disproportionate share of emissions.

A specific carbon tax plan that would work in this way was discussed in an earlier post on this blog.  I would refer the reader to that earlier post for the details, but briefly: under this proposal all emissions of greenhouse gases (not simply from power plants, but from all sources) would pay a tax of $49 per metric ton of CO2 (or per ton of CO2 equivalent for other greenhouse gases, such as methane).  A fee of $49 per metric ton would be equivalent to about $44.50 per short ton (2,000 pounds, the ton commonly used in the US but almost nowhere else in the world).  The revenues thus generated would then be distributed back, in full, to the entire population in equal per capita terms, on a monthly or quarterly basis.  There would also be a border-tax adjustment on imported goods, which would create the incentive for other countries to join such a scheme (the US would charge the same carbon tax on goods whose source country had not, with those revenues then distributed to Americans).

The US Treasury published a study of this scheme in January 2017, and estimated that such a tax would generate $194 billion of revenues in its initial year (which was assumed to be 2019).  This would allow for a distribution of $583 to every American (man, woman, and child – not just adults).  Furthermore, the authors estimated what the impact would be by family income decile, and concluded that the bottom 7 deciles of families (the bottom 70%, as ranked by income) would enjoy a net benefit, while only the richest 30% would pay a net cost.

That distributional impact will be the focus of this blog post.  It has not received sufficient attention in the discussion on how to address climate change.  While the Treasury study did provide estimates of what the impacts by income decile would be (although not always in an easy-to-understand form), views on a carbon tax often appear to assume, incorrectly, that the poor will pay the most as a share of their income, while the rich will be able to avoid the tax.  The impact would in fact be the opposite.  Indeed, while the primary aim of the program is, and should be, the reduction of greenhouse gas emissions, its redistributive benefits are such that on that basis alone the program would have much to commend it.  It would also be just.  As noted above, the poor do not account for a disproportionate share of greenhouse gas emissions – the rich do – yet the poor suffer as much as, if not more than, the rich from the consequences.

This blog post will first review those estimated net cash benefits by family income decile, both in dollar amounts and as a share of income.  To give a sense of how important this is in magnitude, it will then examine how these net benefits compare to the most important current cash transfer program in the US – food stamp benefits.  Finally, it will briefly review the politics of such a program.  Perceptions have, unfortunately, been adverse, and many pundits believe a carbon tax program would never be approved.  Perhaps this might change if news sources paid greater attention to the distribution and economic justice benefits.

B.  Net Benefits or Costs by Family Income Decile from a Carbon Tax with Redistribution

The chart at the top of this post shows what the average net impact would be in dollars per person, by family cash income decile, if a carbon tax of $49 per metric ton were charged with the revenues then distributed on an equal per capita basis.  While prices of energy and other goods whose production or use leads to greenhouse gas emissions would rise, the revenues from the tax thus generated would go back in full to the population.  Those groups who account for a less than proportionate share of greenhouse gas emissions (the poor and much of the middle class) would come out ahead, while those with the income and lifestyle that lead to a greater than average share of greenhouse gas emissions (the rich) will end up paying in more.

The figures are derived from estimates made by the staff of the US Treasury – staff that regularly undertake assessments of the incidence across income groups of various tax proposals.  The study was published in January 2017, and the estimates are of what the impacts would have been had the tax been in place for 2019.  The results were presented in tables following a standard format for such tax incidence studies, with the dollars per person impact of the chart above derived from those tables.

To arrive at these estimates, the Treasury staff first calculated what the impact of such a $49 per metric ton carbon tax would be on the prices of goods.  Such a tax would, for example, raise the price of gasoline by $0.44 per gallon based on the CO2 emitted in its production and when it is burned.  Using standard input-output tables they could then estimate what the price changes would be on a comprehensive set of goods, and based on historic consumption patterns work out what the impacts would be on households by income decile.  The net impact would then follow from distributing back on an equal per capita basis the revenues collected by the tax.  For 2019, the Treasury staff estimated $194 billion would be collected (a bit less than 1% of GDP), which would allow for a transfer back of $583 per person.
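The arithmetic behind the figures just cited can be checked in a few lines.  The per-gallon emissions factor (about 8.9 kg of CO2 per gallon of gasoline burned) is a standard EPA estimate, used here as an assumption rather than a number taken from the Treasury study itself; the implied population is simply the one consistent with the $583-per-person dividend.

```python
# Quick checks of the carbon tax arithmetic cited in the text.
tax = 49.00                   # dollars per metric ton of CO2 equivalent

per_short_ton = tax * 0.9072  # 1 short ton (2,000 lb) = 0.9072 metric ton
print(f"per short ton: ${per_short_ton:.2f}")                 # about $44.50

kg_co2_per_gallon = 8.9       # EPA estimate for gasoline (assumption)
gas_increase = tax * kg_co2_per_gallon / 1000
print(f"gasoline price increase: ${gas_increase:.2f}/gallon")  # $0.44

revenue = 194e9               # Treasury revenue estimate for 2019
population = 332.8e6          # implied by the $583-per-person dividend
print(f"dividend: ${revenue / population:.0f} per person")     # $583
```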

Those in the poorest 10% of households would receive an estimated $535 net benefit per person from such a scheme.  The cost of the goods they consume would go up by $48 per person over the course of a year, but they would receive back $583.  They do not account for a major share of greenhouse gas emissions because they cannot afford to consume much.  They are poor, and a family earning, say, $20,000 a year consumes far less of everything than a family earning $200,000 a year.  In terms of the greenhouse gas emissions implicit in the Treasury numbers, the poorest 10% of Americans account for only a bit less than 1.0 metric ton of CO2 emissions per person per year (including the CO2 equivalent in other greenhouse gases).  The richest 10% account for close to 36 tons of CO2 equivalent per person per year.

As one goes from the lower income deciles to the higher, consumption rises and CO2 emissions from the goods consumed rises.  But it is not a linear trend by decile.  Rather, higher-income households account for a more than proportionate share of greenhouse gas emissions.  As a consequence, the break-even point is not at the 50th percentile of households (as it would be if the trend were linear), but rather something higher.  In the Treasury estimates, households up through the 70th percentile (the 7th decile) would on average still come out ahead.  Only the top three deciles (the richest 30%) would end up paying more for the carbon tax than what they would receive back.  But this is simply because they account for a disproportionately high share of greenhouse gas emissions.  It is fully warranted and just that they should pay more for the pollution they cause.

But it is also worth noting that while richer households would pay more in dollar terms than they receive back, those higher dollar amounts are modest when taken as a share of their high incomes:

In dollar terms the richest 10% would pay in a net $1,166 per person in this scheme, as per the chart at the top of this post.  But this would be just 1.0% of their per-person incomes.  The 9th decile (families in the 80 to 90th percentile) would pay in a net of 0.7% of their incomes, and the 8th decile would pay in a net of 0.3%. At the other end of the distribution, the poorest 10% (the 1st decile) would receive a net benefit equal to 8.9% of their incomes.  This is not minor.  The relatively modest (as a share of incomes) net transfers from the higher-income households permit a quite substantial rise (in percentage terms) in the incomes of poorer households.
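The per-person emissions figures cited earlier can be recovered from these dollar amounts: each group's implied tons of CO2 equivalent are simply its carbon-tax cost divided by the $49-per-ton rate, where for the richest decile that cost is its net payment plus the $583 dividend it receives back.

```python
# Backing out implied per-person emissions from the Treasury dollar figures.
tax = 49.0
poorest_cost = 48.0   # extra cost of goods per person, poorest 10%
richest_net = 1166.0  # net payment per person, richest 10%
dividend = 583.0      # per-person dividend everyone receives

print(f"poorest 10%: {poorest_cost / tax:.2f} tons CO2e per person")   # 0.98
print(f"richest 10%: {(richest_net + dividend) / tax:.1f} tons CO2e")  # 35.7
```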

C.  A Comparison to Transfers in the Food Stamps Program

The food stamps program (formally now called SNAP, for Supplemental Nutrition Assistance Program) is the largest cash income transfer program in the US designed specifically to assist the poor.  (While the cost of Medicaid is higher, those payments are made directly to health care providers for their medical services to the poor.)  How would the net transfers under a carbon tax with redistribution compare to SNAP?  Are they in the same ballpark?

I had expected they would not be close.  However, it turns out that they are not that far apart.  While food stamps would still provide a greater transfer for the very poorest households, the supplement to income that those households would receive by such a carbon tax scheme would be significant.  Furthermore, the carbon tax scheme would be of greater benefit than food stamps are, on average, for lower middle-class households (those in the 3rd decile and above).

The Congressional Budget Office (CBO) has estimated how food stamp (SNAP) benefits are distributed by household income decile.  While the forecast year is different (2016 for SNAP vs. 2019 for the carbon tax), for the purposes here the comparison is close enough.  From the CBO figures one can work out the annual net benefits per person under SNAP for households in the 1st to 4th deciles (with the 5th through the 10th deciles then aggregated by the CBO, as they were all small):

The average annual benefits from SNAP were estimated to be about $1,500 per person for households in the poorest decile and $690 per person in the 2nd decile.  These are larger than the estimated net benefits of these two groups under a carbon tax program (of $535 and $464 per person, respectively), but it was surprising, at least to me, that they are as close as they are.  The food stamp program is specifically targeted to assist the poor to purchase the food that they need.  A carbon tax with redistribution program is aimed at cutting back greenhouse gas emissions, with the funds generated then distributed back to households on an equal per capita basis.  They have very different aims, but the redistribution under each is significant.

D.  But the Current Politics of Such a Program Are Not Favorable

A carbon tax with redistribution program would therefore not only reduce greenhouse gas emissions at a lower cost than traditional approaches, but would also provide for an equitable redistribution from those who account for a disproportionate share of greenhouse gas emissions (the rich) to those who do not (the poor).  But news reporters and political pundits, including those who are personally in favor of such a program, consider it politically impossible.  And in what was supposed to be a personal email, but which was part of those obtained by Russian government hackers and then released via WikiLeaks in order to assist the Trump presidential campaign, John Podesta, the senior campaign manager for Hillary Clinton, wrote:  “We have done extensive polling on a carbon tax.  It all sucks.”

Published polls indicate that the degree of support or not for a carbon tax program depends critically on how the question is worded.  If the question is stated as something such as “Would you be in favor of taxing corporations based on their carbon emissions”, polls have found two-thirds or more of Americans in support.  But if the question is worded as something such as “Would you be in favor of paying a carbon tax on the goods you purchase”, the support is less (often still more than a majority, depending on the specific poll, but less than two-thirds).  But they really amount to the same thing.

There are various reasons for this, starting with that the issue is a complex one, is not well understood, and hence opinions can be easily influenced based on how the issue is framed.  This opens the field to well-funded vested interests (such as the fossil fuel companies) being able to influence votes by sophisticated advertising.  Opponents were able to outspend proponents by 2 to 1 in Washington State in 2018, when a referendum on a proposed carbon tax was defeated (as it had been also in 2016).  Political scientists who have studied the two Washington State referenda believe they would be similarly defeated elsewhere.

There appear to be two main concerns:  The first is that “a carbon tax will hurt the poor”.  But as examined above, the opposite would be the case.  The poor would very much benefit, as their low consumption only accounts for a small share of carbon emissions (they are poor, and do not consume much of anything), but they would receive an equal per capita share of the revenues raised.

In distinct contrast, but often not recognized, a program to reduce greenhouse gas emissions based on traditional regulation would still see an increase in costs (and indeed likely by much more, as noted above), but with no compensation for the poor.  The poor would then definitely lose.  There may then be calls to add on a layer of special subsidies to compensate the poor, but these rarely work well.

The second concern often heard is that “a carbon tax is just a nudge” and in the end will not get greenhouse gas emissions down.  There may also be the view (internally inconsistent, but still held) that the rich are so rich that they will not cut back on their consumption of high carbon-emission goods despite the tax, while at the same time the rich can switch their consumption (by buying an electric car, for example, to replace their gasoline one) while the poor cannot.

But prices do matter.  As noted at the start of this post, the experience with the cap and trade program for SO2 from the burning of coal (where a price is put on the SO2 emissions) found it to be highly effective in bringing SO2 emissions down quickly.  Or as was discussed in an earlier post on this blog, charging polluters for their emissions would be key to getting utilities to switch to clean energy sources.  The cost of both solar and wind new generation capacity has come down sharply over the past decade, to the point where, for new capacity, they are the cheapest sources available.  But this is for new generation.  When there is no charge for the greenhouse gases emitted, it is still cheaper to keep burning gas and often coal in existing plants, as the up-front capital costs have already been incurred and do not affect the decision of what to use for current generation.  But as estimated in that earlier post, if those plants were charged $40 per ton for their CO2 emissions, it would be cheaper for the power utilities to build new solar or wind plants and use these to replace existing fossil fuel plants.

There are many other substitution possibilities as well, though many may not be well known when the focus is on a particular sector.  For example, livestock account for about 30% of the methane emissions resulting from human activity – roughly the same share as methane emissions from the production and distribution of fossil fuels.  And methane is a particularly potent greenhouse gas, with 86 times the global warming potential of an equal weight of CO2 over a 20-year horizon.  Yet a simple modification of the diets of cows (whose digestive systems release methane as burps and farts) will reduce their methane emissions by 33%.  One simply adds 100 grams of lemongrass per day to their feed, and the digestive chemistry changes to produce far less methane.  Burger King will now start to purchase its beef from such sources.

This is a simple and inexpensive change, yet one that is being done only by Burger King and a few others in order to gain favorable publicity.  But a tax on such greenhouse gas emissions would induce such an adjustment to the diets of livestock more broadly (as well as research on other dietary changes, that might lead to an even greater reduction in methane emissions).  A regulatory focus on emissions from power plants alone would not see this.  One might argue that a broader regulatory system would cover emissions from such agricultural practices, and in principle it should.  But there has been little discussion of extending the regulation of greenhouse gas emissions to the agricultural sector.

More fundamentally, regulations are set and then kept fixed over time in order to permit those who are regulated to work out and then implement plans to comply.  Such systems are not good, by their nature, at handling innovations, as by definition innovations are not foreseen.  Yet innovations are precisely what one should want to encourage, and indeed the ex-post assessment of the SO2 emissions trading program found that it was innovations that led to costs being far lower than had been anticipated.  A carbon tax program would similarly encourage innovations, while regulatory schemes cannot handle them well.

There may well be other concerns, including ones left unstated.  Individuals may feel, for example, that while climate change is indeed a major issue that needs to be addressed, and while redistribution under a carbon tax program might well be equitable overall, they personally will nonetheless lose.  And some will.  Those who account for a disproportionately high share of greenhouse gas emissions through the goods they purchase will end up paying more.  But costs will also rise under the alternative of a regulatory approach (and indeed rise by a good deal more), which will affect them as well.  If they do indeed account for a disproportionately high share of greenhouse gas emissions, they should be especially in favor of an approach that would bring those emissions down at the lowest possible cost.  A scheme that puts a price on carbon emissions, such as a carbon tax, would do this at a lower cost than traditional approaches.

So while many have concerns with a carbon tax with redistribution scheme, much of this is due to a misunderstanding of what the impacts would be, as well as of what the impacts would be of alternatives.  One sees this in the range of responses to polling questions on such schemes, where the degree of support depends very much on how the questions are worded or framed.  There is a need to explain better how a carbon tax with redistribution program would work, and we have collectively (analysts, media, and politicians) failed to do this.

There are also some simple steps one can take which would likely increase the attractiveness of such a program.  For example, perceptions would likely be far better if the initial rebate checks were sent up-front, before the carbon taxes were first to go into effect, rather than later, at the end of whatever period is chosen.  Instead of households being asked to finance the higher costs over the period until they received their first rebate checks, one would have the government do this.  This would not only make sense financially (government can fund itself more cheaply than households can), but more important, politically.  Households would see up-front that they are, indeed, receiving a rebate check before the prices go up to reflect the carbon tax.

And one should not be too pessimistic.  While polling responses depend on the precise wording used, as noted above, the polling results still usually show a majority in support.  But the issue needs to be explained better.  There are problems, clearly, when issues such as the impact on the poor from such a scheme are so fundamentally misunderstood.

E.  Conclusion 

Charging for greenhouse gases emitted (a carbon tax), with the revenues collected then distributed back to the population on an equal per capita basis, would be both efficient (lower cost) and equitable.  Indeed, the transfers from those who account for an especially high share of greenhouse gas emissions (the rich) to those who account for very little of them (the poor), would provide a significant supplement to the incomes of the poor.  While the redistributive effect is not the primary aim of the program (reducing greenhouse gases is), that redistributive effect would be both beneficial and significant.  It should not be ignored.

The conventional wisdom, however, is that such a scheme could not command a majority in a referendum.  The issue is complex, and well-funded vested interests (the fossil fuel companies) have been able to use that complexity to propagate a sufficient level of concern to defeat such referenda.  The impact on the poor has in particular been misportrayed.

But climate change really does need to be addressed.  One should want to do this at the lowest possible cost while also in an equitable manner.  Hopefully, as more learn what carbon tax schemes can achieve, politicians will obtain the support they need to move forward with such a program.

Andrew Yang’s Proposed $1,000 per Month Grant: Issues Raised in the Democratic Debate

A.  Introduction

This is the second in a series of posts on this blog addressing issues that have come up during the campaign of the candidates for the Democratic nomination for president, and which specifically came up in the October 15 Democratic debate.  As flagged in the previous blog post, one can find a transcript of the debate at the Washington Post website, and a video of the debate at the CNN website.

This post will address Andrew Yang’s proposal of a $1,000 per month grant for every adult American (which I will mostly refer to here as a $12,000 grant per year).  This type of policy is called a universal basic income (or UBI), and has been explored in a few other countries as well.  It has received increased attention in recent years, in part due to the sharp growth in income inequality in the US that began around 1980.  If properly designed, such a $12,000 grant per adult per year could mark a substantial redistribution of income.  But the degree of redistribution depends directly on how the funding would be raised, and as we will discuss below, Yang’s specific proposals for that are problematic.  There are also other issues with such a program which, even if it were well designed, call into question whether it would be the best approach to addressing inequality.  All this will be discussed below.

First, however, it is useful to address two misconceptions that appear to be widespread.  One is that many appear to believe that the $12,000 per adult per year would not need to come from somewhere.  That is, everyone would receive it, but no one would have to provide the funds to pay for it.  That is not possible.  The economy produces only so much, whatever is produced accrues as income to someone, and if one is to transfer some amount ($12,000 here) to each adult then the amounts so transferred will need to come from somewhere.  That is, this is a redistribution.  There is nothing wrong with a redistribution, if well designed, but it is not a magical creation of something out of nothing.

The other misconception, and asserted by Yang as the primary rationale for such a $12,000 per year grant, is that a “Fourth Industrial Revolution” is now underway which will lead to widespread structural unemployment due to automation.  This issue was addressed in the previous post on this blog, where I noted that the forecast job losses due to automation in the coming years are not out of line with what has been the norm in the US for at least the last 150 years.  There has always been job disruption and turnover, and while assistance should certainly be provided to workers whose jobs will be affected, what is expected in the years going forward is similar to what we have had in the past.

It is also just as well that workers are not expected to rely on a $12,000 per year grant to make up for a lost job.  Median earnings of a full-time worker were an estimated $50,653 in 2018, according to the Census Bureau.  A grant of $12,000 would not go far in making up for this.

So the issue is one of redistribution, and to be fair to Yang, I should note that he posts on his campaign website a fair amount of detail on how the program would be paid for.  I make use of that information below.  But the numbers do not really add up, and for a candidate who champions math (something I admire), this is disappointing.

B.  Yang’s Proposal of a $1,000 Monthly Grant to All Americans

First of all, the overall cost.  This is easy to calculate, although not much discussed.  The $12,000 per year grant would go to every adult American, whom Yang defines as all those over the age of 18.  There were very close to 250 million Americans over the age of 18 in 2018, so at $12,000 per adult the cost would be $3.0 trillion.

This is far from a small amount.  With GDP of approximately $20 trillion in 2018 ($20.58 trillion to be more precise), such a program would come to 15% of GDP.  That is huge.  Total taxes and revenues received by the federal government (including all income taxes, all taxes for Social Security and Medicare, and everything else) only came to $3.3 trillion in FY2018.  This is only 10% more than the $3.0 trillion that would have been required for Yang’s $12,000 per adult grants.  Or put another way, taxes and other government revenues would need almost to be doubled (raised by 91%) to cover the cost of the program.  As another comparison, the cost of the tax cuts that Trump and the Republican leadership rushed through Congress in December 2017 was forecast to be an estimated $150 billion per year.  That was a big revenue loss.  But the Yang proposal would cost 20 times as much.
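For those who wish to check the arithmetic, the comparisons above can be reproduced in a few lines (a sketch using the rounded figures cited in the text):

```python
# Rough cost of the proposed $12,000 per adult grant, using the figures in the text.
adults = 250e6            # Americans over age 18 (approximate, 2018)
grant_per_adult = 12_000  # $1,000 per month
total_cost = adults * grant_per_adult

gdp_2018 = 20.58e12            # GDP in 2018
fed_revenues_fy2018 = 3.3e12   # all federal taxes and other revenues, FY2018

print(f"Total cost: ${total_cost / 1e12:.1f} trillion")                        # $3.0 trillion
print(f"Share of GDP: {100 * total_cost / gdp_2018:.0f}%")                     # 15%
print(f"Revenue increase needed: {100 * total_cost / fed_revenues_fy2018:.0f}%")  # 91%
```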

With such amounts to be raised, Yang proposes on his campaign website a number of taxes and other measures to fund the program.  One is a value-added tax (VAT), and from his very brief statements during the debates but also in interviews with the media, one gets the impression that all of the program would be funded by a value-added tax.  But that is not the case.  He in fact says on his campaign website that the VAT, at the rate and coverage he would set, would raise only about $800 billion.  This would come only to a bit over a quarter (27%) of the $3.0 trillion needed.  There is a need for much more besides, and to his credit, he presents plans for most (although not all) of this.

So what, specifically, does he propose?

a) A New Value-Added Tax:

First, and as much noted, he is proposing that the US institute a VAT at a rate of 10%.  He estimates it would raise approximately $800 billion a year, and for the parameters for the tax that he sets, that is a reasonable estimate.  A VAT is common in most of the rest of the world as it is a tax that is relatively easy to collect, with internal checks that make underreporting difficult.  It is in essence a tax on consumption, similar to a sales tax but levied only on the added value at each stage in the production chain.  Yang notes that a 10% rate would be approximately half of the rates found in Europe (which is more or less correct – the rates in Europe in fact vary by country and are between 17 and 27% in the EU countries, but the rates for most of the larger economies are in the 19 to 22% range).
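To make the mechanics concrete, a small sketch may help: in a VAT, each firm charges the tax on its sales but takes a credit for tax already paid on its inputs, so the tax remitted at each stage falls only on the value added there, and the total collected equals 10% of the final sale price.  The firms and prices here are purely illustrative:

```python
# Illustrative VAT on a three-stage production chain, at a 10% rate.
# Each stage remits tax only on its own value added (sales minus inputs).
VAT_RATE = 0.10

# (stage, cumulative sale price before tax) -- hypothetical numbers
chain = [("farmer", 40), ("miller", 70), ("baker", 100)]

total_vat = 0.0
prev_price = 0.0
for stage, price in chain:
    value_added = price - prev_price
    vat_due = VAT_RATE * value_added   # tax on this stage's value added only
    total_vat += vat_due
    print(f"{stage}: value added ${value_added:.0f}, VAT remitted ${vat_due:.2f}")
    prev_price = price

# The stage-by-stage remittances sum to 10% of the final $100 sale price:
print(f"Total VAT collected: ${total_vat:.2f}")   # $10.00
```

This self-checking structure (each firm's credit claim reveals its supplier's sales) is what makes underreporting difficult.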

A VAT is a tax on what households consume, and for that reason a regressive tax.  The poor and middle classes who have to spend all or most of their current incomes to meet their family needs will pay a higher share of their incomes under such a tax than higher-income households will.  For this reason, VAT systems as implemented will often exempt (or tax at a reduced rate) certain basic goods such as foodstuffs and other necessities, as such goods account for a particularly high share of the expenditures of the poor and middle classes.  Yang is proposing this as well.  But even with such exemptions (or lower VAT rates), a VAT is still normally regressive, just less so.

Furthermore, households will in the end be paying the tax, as prices will rise to reflect the new tax.  Yang asserts that some of the cost of the VAT will be shifted to businesses, which would not be able, he says, to pass along the full cost of the tax.  But this is not correct.  In the case where the VAT applies equally to all goods, the full 10% will be passed along as all goods are affected equally by the now higher cost, and relative prices will not change.  To the extent that certain goods (such as foodstuffs and other necessities) are exempted, there could be some shift in demand to such goods, but the degree will depend on the extent to which they are substitutable for the goods which are taxed.  If they really are necessities, such substitution is likely to be limited.

A VAT as Yang proposes thus would raise a substantial amount of revenues, and the $800 billion figure is a reasonable estimate.  This total would be on the order of half of all that is now raised by individual income taxes in the US (which was $1,684 billion in FY2018).  But one cannot escape that such a tax is paid by households, who will face higher prices on what they purchase, and the tax will almost certainly be regressive, impacting the poor and middle classes the most (with the extent dependent on how many and which goods are designated as subject to a reduced VAT rate, or no VAT at all).  But whether regressive or not, everyone will be affected and hence no one will actually see a net increase of $12,000 in purchasing power from the proposed grant.  Rather, it will be something less.

b)  A Requirement to Choose Either the $12,000 Grants, or Participation in Existing Government Social Programs

Second, Yang’s proposal would require that households who currently benefit from government social programs, such as for welfare or food stamps, would be required to give up those benefits if they choose to receive the $12,000 per adult per year.  He says this will lead to reduced government spending on such social programs of $500 to $600 billion a year.

There are two big problems with this.  The first is that those programs are not that large.  While it is not fully clear how expansive Yang’s list is of the programs which would then be denied to recipients of the $12,000 grants, even if one included all those included in what the Congressional Budget Office defines as “Income Security” (“unemployment compensation, Supplemental Security Income, the refundable portion of the earned income and child tax credits, the Supplemental Nutrition Assistance Program [food stamps], family support, child nutrition, and foster care”), the total spent in FY2018 was only $285 billion.  You cannot save $500 to $600 billion if you are only spending $285 billion.

Second, such a policy would be regressive in the extreme.  Poor and near-poor households, and only such households, would be forced to choose whether to continue to receive benefits under such existing programs, or receive the $12,000 per adult grant per year.  If they are now receiving $12,000 or more in such programs per adult household member, they would receive no benefit at all from what is being called a “universal” basic income grant.  To the extent they are now receiving less than $12,000 from such programs (per adult), they may gain some benefit, but less than $12,000 worth.  For example, if they are now receiving $10,000 in benefits (per adult) from current programs, their net gain would be just $2,000 (setting aside for the moment the higher prices they would also now need to pay due to the 10% VAT).  Furthermore, only the poor and near-poor who are being supported by such government programs will see such an effective reduction in their $12,000 grants.  The rich and others, who benefit from other government programs, will not see such a cut in the programs or tax subsidies that benefit them.
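The either/or structure described above can be summarized as a simple function (a sketch of the logic, using the hypothetical benefit levels from the text; the VAT’s effect on prices is ignored here):

```python
# Net gain per adult from opting into the $12,000 grant when doing so
# means giving up existing program benefits (sketch; VAT effects ignored).
GRANT = 12_000

def net_gain(current_benefits_per_adult: float) -> float:
    """A household opts in only if the grant exceeds its current benefits,
    so the net gain is the excess of the grant over those benefits (or zero)."""
    return max(GRANT - current_benefits_per_adult, 0)

print(net_gain(0))       # 12000: no current benefits, full gain
print(net_gain(10_000))  # 2000: the example in the text
print(net_gain(14_000))  # 0: better off keeping the existing programs
```

Note the pattern: the poorer the household (in the sense of relying more on existing programs), the smaller its gain from the “universal” grant.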

c)  Savings in Other Government Programs 

Third, Yang argues that with his universal basic income grant, there would be a reduction in government spending of $100 to $200 billion a year from lower expenditures on “health care, incarceration, homelessness services and the like”, as “people would be able to take better care of themselves”.  This is clearly more speculative.  There might be some such benefits, and hopefully would be, but without experience to draw on it is impossible to say how important this would be and whether any such savings would add up to such a figure.  Furthermore, much of those savings, were they to follow, would accrue not to the federal government but rather to state and local governments.  It is at the state and local level where most expenditures on incarceration and homelessness, and to a lesser degree on health care, take place.  They would not accrue to the federal budget.

d)  Increased Tax Revenues From a Larger Economy

Fourth, Yang states that with the $12,000 grants the economy would grow larger – by 12.5% he says (or $2.5 trillion in increased GDP).  He cites a 2017 study produced by scholars at the Roosevelt Institute, a left-leaning non-profit think tank based in New York, which examined the impact on the overall economy, under several scenarios, of precisely such a $12,000 annual grant per adult.

There are, however, several problems:

i)  First, under the specific scenario that is closest to the Yang proposal (where the grants would be funded through a combination of taxes and other actions), the impact on the overall economy forecast in the Roosevelt Institute study would be either zero (when net distribution effects are neutral), or small (up to 2.6%, if funded through a highly progressive set of taxes).

ii)  The reason for this result is that the model used by the Roosevelt Institute researchers assumes that the economy is far from full employment, and that economic output is then entirely driven by aggregate demand.  Thus with a new program such as the $12,000 grants, which is fully paid for by taxes or other measures, there is no impact on aggregate demand (and hence no impact on economic output) when net distributional effects are assumed to be neutral.  If funded in a way that is not distributionally neutral, such as through the use of highly progressive taxes, then there can be some effect, but it would be small.

In the Roosevelt Institute model, there is only a substantial expansion of the economy (of about 12.5%) in a scenario where the new $12,000 grants are not funded at all, but rather purely and entirely added to the fiscal deficit and then borrowed.  And with the current fiscal deficit now about 5% of GDP under Trump (unprecedented even at 5% in a time of full employment, other than during World War II), and the $12,000 grants coming to $3.0 trillion or 15% of GDP, this would bring the overall deficit to 20% of GDP!

Few economists would accept that such a scenario is anywhere close to plausible.  First of all, the current unemployment rate of 3.5% is at a 50-year low.  The economy is at full employment.  The Roosevelt Institute researchers are asserting that this is fictitious, and that the economy could expand by a substantial amount (12.5% in their scenario) if the government simply spent more and did not raise taxes to cover any share of the cost.  They also assume that a fiscal deficit of 20% of GDP would not have any consequences, such as on interest rates.  Note also that an implication of their approach is that the government spending could be on anything, including, for example, the military.  They are using a purely demand-led model.

iii)  Finally, even if one assumes the economy will grow to be 12.5% larger as a result of the grants, even the Roosevelt Institute researchers do not assume it will be instantaneous.  Rather, in their model the economy becomes 12.5% larger only after eight years.  Yang is implicitly assuming it will be immediate.

There are therefore several problems in the interpretation and use of the Roosevelt Institute study.  Their scenario for 12.5% growth is not the one that follows from Yang’s proposals (which is funded, at least to a degree), nor would GDP jump immediately by such an amount.  And the Roosevelt Institute model of the economy is one that few economists would accept as applicable in the current state of the economy, with its 3.5% unemployment.

But there is also a further problem.  Even assuming GDP rises instantly by 12.5%, leading to an increase in GDP of $2.5 trillion (from a current $20 trillion), Yang then asserts that this higher GDP will generate between $800 and $900 billion in increased federal tax revenue.  That would imply federal taxes of 32 to 36% on the extra output.  But that is implausible.  Total federal tax (and all other) revenues are only 17.5% of GDP.  While in a progressive tax system the marginal tax revenues received on an increase in income will be higher than at the average tax rate, the US system is no longer very progressive.  And marginal rates are nowhere near twice the average rate, which is what Yang's figures would require (32 to 36% at the margin versus 17.5% on average).  A more plausible estimate of the increased federal tax revenues from an economy that somehow became 12.5% larger would not be the $800 to $900 billion Yang calculates, but rather about half that.
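The marginal tax rate implied by Yang’s revenue figure can be checked directly (a sketch using the rounded figures from the text):

```python
# What marginal federal tax rate would Yang's revenue claim require?
extra_gdp = 2.5e12                       # assumed 12.5% jump in a $20 trillion economy
claimed_revenue = (800e9 + 900e9) / 2    # midpoint of the claimed $800 to $900 billion
implied_marginal_rate = claimed_revenue / extra_gdp
print(f"Implied marginal tax rate: {100 * implied_marginal_rate:.0f}%")   # 34%

# Versus what the average federal take (17.5% of GDP) would suggest:
average_rate = 0.175
plausible_revenue = average_rate * extra_gdp
print(f"At the average rate: ${plausible_revenue / 1e9:.0f} billion")     # ~438, about half
```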

Might such a universal basic income grant affect the size of the economy through other, more orthodox, channels?  That is certainly possible, although whether it would lead to a higher or to a lower GDP is not clear.  Yang argues that it would lead recipients to manage their health better, to stay in school longer, to less criminality, and to other such social benefits.  Evidence on this is highly limited, but it is in principle conceivable in a program that does properly redistribute income towards those with lower incomes (where, as discussed above, Yang’s specific program has problems).  Over fairly long periods of time (generations really) this could lead to a larger and stronger economy.

But one will also likely see effects working in the other direction.  There might be an increase in spouses (wives usually) who choose to stay home longer to raise their children, or an increase in those who decide to retire earlier than they would have before, or an increase in the average time between jobs by those who lose or quit from one job before they take another, and other such impacts.  Such impacts are not negative in themselves, if they reflect choices voluntarily made and now possible due to a $12,000 annual grant.  But they all would have the effect of reducing GDP, and hence the tax revenues that follow from some level of GDP.

There might therefore be both positive and negative impacts on GDP.  However, the impact of each is likely to be small, will mostly develop only over time, and will to some extent cancel each other out.  What is likely is that there will be little measurable change in GDP in either direction.

e)  Other Taxes

Fifth, Yang would institute other taxes to raise further amounts.  He does not specify precisely how much would be raised or what these would be, but provides a possible list and says they would focus on top earners and on pollution.  The list includes a financial transactions tax, ending the favorable tax treatment now given to capital gains and carried interest, removing the ceiling on wages subject to the Social Security tax, and a tax on carbon emissions (with a portion of such a tax allocated to the $12,000 grants).

What would be raised by such new or increased taxes would depend on precisely what the rates would be and what they would cover.  But the total required from them, assuming the revenues raised (or savings achieved, when existing government programs are cut) from all the measures listed above are as Yang claims, would be between $500 and $800 billion (as those revenues and savings sum to $2.2 to $2.5 trillion).  That is, one might need from these “other taxes” as much as would be raised by the proposed new VAT.
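The residual left for these “other taxes” follows from simple subtraction of Yang’s own claimed figures:

```python
# Gap that "other taxes" must cover, taking Yang's claimed figures at face value.
total_needed = 3.0e12
vat = 800e9                          # claimed VAT revenue
program_savings = (500e9, 600e9)     # claimed savings from social-program opt-outs
other_savings = (100e9, 200e9)       # claimed health/incarceration/homelessness savings
growth_revenue = (800e9, 900e9)      # claimed revenue from a larger economy

low = vat + program_savings[0] + other_savings[0] + growth_revenue[0]
high = vat + program_savings[1] + other_savings[1] + growth_revenue[1]
print(f"Identified so far: ${low / 1e12:.1f} to ${high / 1e12:.1f} trillion")         # 2.2 to 2.5
print(f"Gap remaining: ${(total_needed - high) / 1e9:.0f} to "
      f"${(total_needed - low) / 1e9:.0f} billion")                                   # 500 to 800
```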

But as noted in the discussion above, the amounts that would be raised by those measures are often likely to be well short of what Yang says will be the case.  One cannot save $500 to $600 billion in government programs for the poor and near-poor if government is spending only $285 billion on such programs, for example.  A more plausible figure for what might be raised by those proposals would be on the order of $1 trillion, mostly from the VAT, and not the $2.2 to $2.5 trillion Yang says will be the case.

C.  An Assessment

Yang provides a fair amount of detail on how he would implement a universal basic income grant of $12,000 per adult per year, and for a political campaign it is an admirable amount of detail.  But there are still, as discussed above, numerous gaps that prevent anything like a complete assessment of the program.  Even so, a number of points are evident.

To start, the figures provided are not always plausible.  The math just does not add up, and for someone who extols the need for good math (and rightly so), this is disappointing.  One cannot save $500 to $600 billion in programs for the poor and near-poor when only $285 billion is being spent now.  One cannot assume that the economy will jump immediately by 12.5% (which even the Roosevelt Institute model forecasts would only happen in eight years, and under a scenario that is the opposite of that of the Yang program, and in a model that few economists would take as credible in any case).  Even if the economy did jump by so much immediately, one would not see an increase of $800 to $900 billion in federal tax revenues from this but rather more like half that.  And there are other such issues.

But while the proposal is still not fully spelled out (in particular on which other taxes would be imposed to fill out the program), we can draw a few conclusions.  One is that the one group in society who will clearly not gain from the $12,000 grants is the poor and near-poor, who currently make use of food stamp and other such programs and decide to stay with those programs.  They would then not be eligible for the $12,000 grants.  And keep in mind that $12,000 per adult grants are not much, if you have nothing else.  One would still be below the federal poverty line if single (where the poverty line in 2019 is $12,490) or in a household with two adults and two or more children (where the poverty line, with two children, is $25,750).  On top of this, such households (like all households) will pay higher prices for at least some of what they purchase due to the new VAT.  So such households will clearly lose.

Furthermore, those poor or near-poor households who do decide to switch, thus giving up their eligibility for food stamps and other such programs, will see a net gain that is substantially less than $12,000 per adult.  The extent will depend on how much they receive now from those social programs.  Those who receive the most (up to $12,000 per adult), who are presumably also most likely to be the poorest among them, will lose the most.  This is not a structure that makes sense for a program that is purportedly designed to be of most benefit to the poorest.

For middle and higher-income households the net gain (or loss) from the program will depend on the full set of taxes that would be needed to fund the program.  One cannot say who will gain and who will lose until the structure of that full set of taxes is made clear.  This is of course not surprising, as one needs to keep in mind that this is a program of redistribution:  Funds will be raised (by taxes) that disproportionately affect certain groups, to be distributed then in the $12,000 grants.  Some will gain and some will lose, but overall the balance has to be zero.

One can also conclude that such a program, providing for a universal basic income with grants of $12,000 per adult, will necessarily be hugely expensive.  It would cost $3 trillion a year, which is 15% of GDP.  Funding it would require raising all federal tax and other revenue by 91% (excluding any offset by cuts in government social programs, which are however unlikely to amount to anything close to what Yang assumes).  Raising funds of such magnitude is completely unrealistic.  And yet despite such costs, the grants provided of $12,000 per adult would be poverty level incomes for those who do not have a job or other source of support.

One could address this by scaling back the grant, from $12,000 to something substantially less, but then it becomes less meaningful to an individual.  The fundamental problem is the design as a universal grant, to all adults.  While this might be thought to be politically attractive, any such program then ends up being hugely expensive.

The alternative is to design a program that is specifically targeted to those who need such support.  Rather than attempting to hide the distributional consequences in a program that claims to be universal (but where certain groups will gain and certain groups will lose, once one takes fully into account how it will be funded), make explicit the redistribution that is being sought.  With this clear, one can then design a focused program that addresses that redistribution aim.

Finally, one should recognize that there are other policies as well that might achieve those aims that may not require explicit government-intermediated redistribution.  For example, Senator Cory Booker in the October 15 debate noted that a $15 per hour minimum wage would provide more to those now at the minimum wage than a $12,000 annual grant.  This remark was not much noted, but what Senator Booker said was true.  The federal minimum wage is currently $7.25 per hour.  This is low – indeed, it is less (in real terms) than what it was when Harry Truman was president.  If the minimum wage were raised to $15 per hour, a worker now at the $7.25 rate would see an increase in income of $15.00 – $7.25 = $7.75 per hour, and over a year of 40 hour weeks would see an increase in income of $7.75 x 40 x 52 = $16,120.00.  This is well more than a $12,000 annual grant would provide.
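Senator Booker’s arithmetic is easy to verify:

```python
# Annual gain for a full-time minimum-wage worker if the minimum rose to $15.
current_min = 7.25    # current federal minimum wage, $/hour
proposed_min = 15.00
hours_per_week, weeks_per_year = 40, 52

hourly_gain = proposed_min - current_min              # $7.75 per hour
annual_gain = hourly_gain * hours_per_week * weeks_per_year
print(f"Annual gain: ${annual_gain:,.2f}")   # $16,120.00 -- more than a $12,000 grant
```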

Republican politicians have argued that raising the minimum wage by such a magnitude will lead to widespread unemployment.  But there is no evidence that changes in the minimum wage that we have periodically had in the past (whether federal or state level minimum wages) have had such an adverse effect.  There is of course certainly some limit to how much it can be raised, but one should recognize that the minimum wage would now be over $24 per hour if it had been allowed to grow at the same pace as labor productivity since the late 1960s.

Income inequality is a real problem in the US, and needs to be addressed.  But there are problems with Yang’s specific version of a universal basic income.  While one may be able to fix at least some of those problems and come up with something more reasonable, it would still be massively disruptive given the amounts to be raised.  And politically impossible.  A focus on more targeted programs, as well as on issues such as the minimum wage, is likely to prove far more productive.

The Increasingly Attractive Economics of Solar Power: Solar Prices Have Plunged

A.  Introduction

The cost of solar photovoltaic power has fallen dramatically over the past decade, and it is now, together with wind, a lower cost source of new power generation than either fossil-fuel (coal or gas) or nuclear power plants.  The power generated by a new natural gas-fueled power plant in 2018 would have cost a third more than from a solar or wind plant (in terms of the price they would need to sell the power for in order to break even); coal would have cost 2.4 times as much as solar or wind; and a nuclear plant would have cost 3.5 times as much.

These estimates (shown in the chart above, and discussed in more detail below) were derived from figures estimated by Lazard, the investment bank, and are based on bottom-up estimates of what such facilities would have cost to build and operate, including the fuel costs.  But one also finds a similar sharp fall in solar energy prices in the actual market prices that have been charged for the sale of power from such plants under long-term “power purchase agreements” (PPAs).  These will also be discussed below.

With the costs where they are now, it would not make economic sense to build new coal or nuclear generation capacity, nor even gas in most cases.  In practice, however, the situation is more complex due to regulatory issues and conflicting taxes and subsidies, and also because of variation across regions.  Time of day issues may also enter, depending on when (day or night) the increment in new capacity might be needed.  The figures above are also averages, particular cases vary, and what is most economic in any specific locale will depend on local conditions.  Nevertheless, and as we will examine below, there has been a major shift in new generation capacity towards solar and wind, and away from coal (with old coal plants being retired) and from nuclear (with no new plants being built, but old ones largely remaining).

But natural gas generation remains large.  Indeed, while solar and wind generation have grown quickly (from a low base), and together account for the largest increment in new power capacity in recent years, gas accounts for the largest increment in power production (in megawatt-hours) measured from the beginning of this decade.  Why?  In part this is due to the inherent constraints of solar and wind technologies:  Solar panels can only generate power when the sun shines, and wind turbines when the wind is blowing.  But more interestingly, one also needs to look at the economics behind the choice as to whether or not to build new generation capacity to replace existing capacity, and then what sources of capacity to use.  Critical is what economists call the marginal cost of such production.  A power plant lasts for many years once it is built, and the decision on whether to keep an existing plant in operation for another year depends only on the cost of operating and maintaining the plant.  The capital cost has already been spent and is no longer relevant to that decision.

Details in the Lazard report can be used to derive such marginal cost estimates by power source, and we will examine these below.  While the Lazard figures apply to newly built plants (older plants will generally have higher operational and maintenance costs, both because they are getting old and because technology was less efficient when they were built), the estimates based on new plants can still give us a sense of these costs.  But one should recognize they will be biased towards indicating the costs of the older plants are lower than they in fact are.  However, even these numbers (biased in underestimating the costs of older plants) imply that it is now more economical to build new wind and possibly solar plants, in suitable locales, than to keep open and operate existing coal-burning power plants.  This will be especially true for the older, less-efficient, coal-burning plants.  Thus we should be seeing old coal-burning plants being shut down.  And indeed we do.  Moreover, while the costs of building new wind and solar plants are not yet below the marginal costs of keeping open existing gas-fueled and nuclear power plants, they are on the cusp of being so.

These costs also do not reflect any special subsidies that solar and wind plants might benefit from.  These vary by state.  Fossil-fueled and nuclear power plants also enjoy subsidies (often through special tax advantages), but these are long-standing and are implicitly being included in the Lazard estimates of the costs of such traditional plants.

But one special subsidy enjoyed by fossil fuel burning power plants, not reflected in the Lazard cost estimates, is the implicit subsidy granted to such plants from not having to cover the cost of the damage from the pollution they generate.  Those costs are instead borne by the general public.  And while such plants pollute in many different ways (especially the coal-burning ones), I will focus here on just one of those ways – their emissions of greenhouse gases that are leading to a warming planet and consequent more frequent and more damaging extreme weather events.  Solar and wind generation of power do not cause such pollution – the burning of coal and gas do.

To account for such costs and to ensure a level playing field between power sources, a fee would need to be charged to reflect the costs being imposed on the general population from this (and indeed other) such pollution.  The revenues generated could be distributed back to the public in equal per capita terms, as discussed in an earlier post on this blog.  We will see that a fee of even just $20 per ton of CO2 emitted would suffice to make it economic to build new solar and wind power plants to substitute not just for new gas and coal burning plants, but for existing ones as well.  Gas and especially coal burning plants would not be competitive with installing new solar or wind generation if they had to pay for the damage done as a result of their greenhouse gas pollution, even on just marginal operating costs.

Two notes before starting:  First, many will note that while solar might be fine for the daytime, it will not be available at night.  Similarly, wind generation will be fine when the wind blows, but it may not always blow even in the windiest locales.  This is of course true, and should solar and wind capacity grow to dominate power generation, there will have to be ways to store that power to bridge the times from when the generation occurs to when the power is used.

But while storage might one day be an issue, it is mostly not an issue now.  In 2018, utility-scale solar only accounted for 1.6% of power generation in the US (and 2.3% if one includes small scale roof-top systems), while wind only accounted for 6.6%.  At such low shares, solar and wind power can simply substitute for other, higher cost, sources of power (such as from coal) during the periods the clean sources are available.  Note also that the cost figures for solar and wind reflected in the chart at the top of this post (and discussed in detail below) take into account that solar and wind cannot be used 100% of the time.  Rather, utilization is assumed to be similar to what their recent actual utilization has been, not only for solar and wind but also for gas, coal and nuclear.  Solar and wind are cheaper than other sources of power (over the lifetime of these investments) despite their inherent constraints on possible utilization.

But where the storage question can enter is in cases where new generation capacity is required specifically to serve evening or night-time needs.  New gas burning plants might then be needed to serve such time-of-day needs if storage of day-time solar is not an economic option.  And once such gas-burning plants are built, the decision on whether they should be run also to serve day-time needs will depend on a comparison of the marginal cost of running these gas plants also during the day, to the full cost of building new solar generation capacity, as was discussed briefly above and will be considered in more detail below.

This may explain, in part, why we see new gas-burning plants still being built nationally.  While less than new solar and wind plants combined (in terms of generation capacity), such new gas-burning plants are still being built despite their higher cost.

More broadly, California and Hawaii (both with solar now accounting for over 12% of power used in those states) are two states (and the only two states) which may be approaching the natural limits of solar generation in the absence of major storage.  During some sunny days the cost of power is being driven down to close to zero (and indeed to negative levels on a few days).  Major storage will be needed in those states (and only those states) to make it possible to extend solar generation much further than where it is now.  But this should not be seen so much as a “problem” but rather as an opportunity:  What can we do to take advantage of cheap day-time power to make it available at all hours of the day?  I hope to address that issue in a future blog post.  But in this blog post I will focus on the economics of solar generation (and to a lesser extent from wind), in the absence of significant storage.

Second, on nomenclature:  A megawatt-hour is a million watts of electric power being produced or used for one hour.  One will see it abbreviated in many different ways, including MWHr, MWhr, MWHR, MWH, MWh, and probably more.  I will try to use consistently MWHr.  A kilowatt-hour (often kWh) is a thousand watts of power for one hour, and is the typical unit used for homes.  A megawatt-hour will thus be one thousand times a kilowatt-hour, so a price of, for example, $20 per MWHr for solar-generated power (which we will see below has in fact been offered in several recent PPA contracts) will be equivalent to 2.0 cents per kWh.  This will be the wholesale price of such power.  The retail price in the US for households is typically around 10 to 12 cents per kWh.
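The unit arithmetic is simple enough to sketch directly (a minimal illustration; the function name is mine):

```python
# Convert a wholesale price in $/MWHr to cents per kWh.
# 1 MWHr = 1,000 kWh and $1 = 100 cents, so multiply by 100/1,000.
def dollars_per_mwhr_to_cents_per_kwh(price: float) -> float:
    return price * 100.0 / 1000.0

# The $20 per MWHr PPA example from the text:
print(dollars_per_mwhr_to_cents_per_kwh(20.0))   # 2.0 cents per kWh
```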

B.  The Levelized Cost of Energy 

As seen in the chart at the top of this post, the cost of generating power by way of new utility-scale solar photovoltaic panels has fallen dramatically over the past decade, with a cost now similar to that from new on-shore wind turbines, and well below the cost from building new gas, coal, or nuclear power plants.  These costs can be compared in terms of the “levelized cost of energy” (LCOE), which is an estimate of the price that would need to be charged for power from such a plant over its lifetime, sufficient to cover the initial capital cost (at the anticipated utilization rate), plus the cost of operating and maintaining the plant.
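The basic idea behind an LCOE can be sketched as follows.  This is a simplified illustration only, not Lazard's actual model (which also incorporates taxes and the structure of the financing), and the plant figures passed in at the bottom are made up for the example:

```python
# Simplified LCOE: the constant price per MWHr at which discounted
# revenues over the plant's life just cover discounted costs.
def lcoe(capital_cost, annual_om_cost, annual_mwhr, discount_rate, years=20):
    discount = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    pv_costs = capital_cost + sum(annual_om_cost * d for d in discount)
    pv_energy = sum(annual_mwhr * d for d in discount)
    return pv_costs / pv_energy

# A hypothetical plant: $40 million up front, $1 million per year to
# operate and maintain, producing 250,000 MWHr per year, discounted
# at an (assumed) 9.6% cost of capital:
print(round(lcoe(40e6, 1e6, 250e3, 0.096), 2))
```

A plant with higher capital costs per MWHr of output will need a higher break-even price, which is all the LCOE comparisons in the chart are capturing.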

Lazard, the investment bank, has published estimates of such LCOEs annually for some time now.  The most recent report, issued in November 2018, is version 12.0.  Lazard approaches the issue as an investment bank would, examining the cost of producing power by each of the alternative sources, with consistent assumptions on financing (with a debt/equity ratio of 60/40, an assumed cost of debt of 8%, and a cost of equity of 12%) and a time horizon of 20 years.  They also include the impact of taxes, and show separately the impact of special federal tax subsidies for clean energy sources.  But the figures I will refer to throughout this post (including in the chart above) are always the estimates excluding any impact from special subsidies for clean energy.  The aim is to see what the underlying actual costs are, and how they have changed over time.

The Lazard LCOE estimates are calculated and presented in nominal terms.  They show the price, in $/MWHr, that would need to be charged over a 20-year time horizon for such a project to break even.  For comparability over time, as well as to produce estimates that can be compared directly to the PPA contract prices that I will discuss below, I have converted those prices from nominal to real terms in constant 2017 dollars.  Two steps are involved.  First, the fixed nominal LCOE prices over 20 years will be falling over time in real terms due to general inflation.  They were adjusted to the prices of their respective initial year (i.e. the relevant year from 2009 to 2018) using an inflation rate of 2.25% (which is the rate used for the PPA figures discussed below, the rate the EIA assumed in its 2018 Annual Energy Outlook report, and the rate which appears also to be what Lazard assumed for general cost escalation factors).  Second, those prices for the years between 2009 and 2018 were all then converted to constant 2017 prices based on actual inflation between those years and 2017.
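One plausible reading of step 1 can be sketched as follows.  This is my reconstruction, not Lazard's published formula:  find the price in initial-year dollars that, if escalated at 2.25% per year, would have the same present value as the flat nominal price.  The 9.6% discount rate used here is simply the weighted average of Lazard's assumed costs of debt and equity (60% at 8% plus 40% at 12%), taken for illustration:

```python
# Equivalent real levelized price, in initial-year dollars, for a
# price that is flat in nominal terms over the project's life.
def flat_nominal_to_real(p_nominal, inflation=0.0225, discount=0.096, years=20):
    pv_flat = sum(p_nominal / (1 + discount) ** t for t in range(1, years + 1))
    pv_per_real_dollar = sum(((1 + inflation) / (1 + discount)) ** t
                             for t in range(1, years + 1))
    return pv_flat / pv_per_real_dollar
```

With zero inflation the two prices coincide; with 2.25% inflation the real initial-year price comes out somewhat below the flat nominal price, as one would expect.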

The result is the chart shown at the top of this post.  The LCOEs in 2018 (in 2017$) were $33 per MWHr for a newly built utility-scale solar photovoltaic system and also for an on-shore wind installation, $44 per MWHr for a new natural gas combined cycle plant, $78 for a new coal-burning plant, and $115 for a new nuclear power plant.  The natural gas plant would cost one-third more than a solar or wind plant, coal would cost 2.4 times as much, and a nuclear plant 3.5 times as much.  Note also that since the adjustments for inflation are the same for each of the power generation methods, their costs relative to each other (in ratio terms) are the same as for the LCOEs expressed in nominal cost terms.  And it is their costs relative to each other that matter most.

The solar prices have fallen especially dramatically.  The 2018 LCOE was only one-tenth of what it was in 2009.  The cost of wind generation has also fallen sharply over the period, to about one-quarter in 2018 of what it was in 2009.  The cost from gas combined cycle plants (the most efficient gas technology, now widely used) also fell, but only by about 40%, while the costs of coal and nuclear were roughly flat or rising, depending on precisely what time period is used.

There is good reason to believe the cost of solar technology will continue to decline.  It is still a relatively new technology, and labs around the world are developing solar technologies that are both more efficient and less costly to manufacture and install.

Current solar installations (based on crystalline silicon technology) will typically have conversion efficiencies of 15 to 17%.  And panels with efficiencies of up to 22% are now available in the market – a gain already on the order of 30 to 45% over the 15 to 17% efficiency of current systems.  But a chart of how solar efficiencies have improved over time (in laboratory settings) shows there is good reason to believe that the efficiencies of commercially available systems will continue to improve in the years to come.  While there are theoretical upper limits, labs have developed solar cell technologies with efficiencies as high as 46% (as of January 2019).

Particularly exciting in recent years has been the development of what are called “perovskite” solar technologies.  While their current efficiencies (of up to 28%, for a tandem cell) are just modestly better than purely crystalline silicon solar cells, they have achieved this in work spanning only half a decade.  Crystalline silicon cells only saw such an improvement in efficiencies in research that spanned more than four decades.  And perhaps more importantly, perovskite cells are much simpler to manufacture, and hence much cheaper.

Based on such technologies, one could see solar efficiencies doubling within a few years, from the current 15 to 17% to say 30 to 35%.  And with a doubling in efficiency, one will need only half as many solar panels to produce the same megawatts of power, and thus also only half as many frames to hold the panels, half as much wiring to link them together, and half as much land.  Coupled with simplified and hence cheaper manufacturing processes (such as is possible for perovskite cells), there is every reason to believe prices will continue to fall.

While there can be no certainty in precisely how this will develop, a simple extrapolation of recent cost trends can give an indication of what might come.  Assuming costs continue to change at the same annual rate that they had over the most recent five years (2013 to 2018), one would find for the years up to 2023:

If these trends hold, then the LCOE (in 2017$) of solar power will have fallen to $13 per MWHr by 2023, wind will have fallen to $18, and gas will be at $32 (or 2.5 times the LCOE of solar in that year, and 80% above the LCOE of wind).  And coal (at $70) and nuclear (at $153) will be totally uncompetitive.
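The extrapolation itself is mechanical: compute the average annual rate of change over 2013 to 2018, and apply that same rate for five more years.  A sketch (the $84 starting value for solar is back-solved from the figures in the text, not Lazard's published 2013 number):

```python
def extrapolate(cost_then, cost_now, years_elapsed=5, years_ahead=5):
    """Project cost_now forward at the average annual rate of change
    observed between cost_then and cost_now."""
    annual_rate = (cost_now / cost_then) ** (1.0 / years_elapsed)
    return cost_now * annual_rate ** years_ahead

# If solar fell from roughly $84 (2013) to $33 (2018), the same rate gives:
print(round(extrapolate(84, 33)))   # about $13 per MWHr in 2023
```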

This is an important transition.  With the dramatic declines in the past decade in the costs for solar power plants, and to a lesser extent wind, these clean sources of power are now more cost competitive than traditional, polluting, sources.  And this is all without any special subsidies for the clean energy.  But before looking at the implications of this for power generation, as a reality check it is good first to examine whether the declining costs of solar power have been reflected in actual market prices for such power.  We will see that they have.

C.  The Market Prices for Solar Generated Power

Power Purchase Agreements (PPAs) are long-term contracts where a power generator (typically an independent power producer) agrees to supply electric power at some contracted capacity and at some price to a purchaser (typically a power utility or electric grid operator).  These are competitively determined (different parties interested in building new power plants will bid for such contracts, with the lowest price winning) and are a direct market measure of the cost of energy from such a source.

The Lawrence Berkeley National Lab, under a contract with the US Department of Energy, produces an annual report that reviews and summarizes PPA contracts for recent utility-scale solar power projects, including the agreed prices for the power.  The most recent was published in September 2018, and covers 2018 (partially) and before.  While the report covers both solar photovoltaic and concentrating solar thermal projects, the figures of interest to us here (and comparable to the Lazard LCOEs discussed above) are the PPAs for the solar photovoltaic projects.

The PPA prices provided in the report were all calculated by the authors on a levelized basis and in terms of 2017 prices.  This was done to put them all on a comparable basis to each other, as the contractual terms of the specific contracts could differ (e.g. some had price escalation clauses and some did not).  Averages by year were worked out with the different projects weighted by generation capacity.

The PPA prices are presented by the year the contracts were signed.  If one then plots these PPA prices with a one-year lag and compares them to the Lazard estimated LCOE prices of that year, one finds a remarkable degree of overlap:

This high degree of overlap is extraordinary.  Only the average PPA price for 2010 (reflecting the 2009 average price lagged one year) is off, but would have been close with a one and a half year lag rather than a one year lag.  Note also that while the Lawrence Berkeley report has PPA prices going back to 2006, the figures for the first several years are based on extremely small samples (just one project in 2006, one in 2007, and three in 2008, before rising to 16 in 2009 and 30 in 2010).  For that reason I have not plotted the 2006 to 2008 PPA prices (which would have been 2007 to 2009 if lagged one year), but they also would have been below the Lazard LCOE curve.

What might be behind this extraordinary overlap when the PPA prices are lagged one year?  Two possible explanations present themselves.  One is that power producers, when preparing their PPA bids, know there will be a lag between when a bid is prepared and when the winning project must actually be built (with solar panels purchased and other costs incurred).  With the costs of solar generation falling so quickly, the bids may well price in that anticipated decline in costs.  If that lag is one year, one will see overlap such as that found for the two curves.

Another possible explanation for the one-year shift observed between the PPA prices (by date of contract signing) and the Lazard LCOE figures is that the Lazard estimates labeled for some year (2018 for example) might in fact represent data on the cost of the technologies as of the prior year (2017 in this example).  One cannot be sure from what they report.  Or the remarkable degree of overlap might be a result of some combination of these two possible explanations, or something else.

But for whatever reason, the two estimates move almost exactly in parallel over time, and hence show an almost identical rate of decline for both the cost of generating power from solar photovoltaic sources and in the market PPA prices for such power.  And it is that rapid rate of decline which is important.

It is also worth noting that the “bump up” in the average PPA price curve in 2017 (shown in the chart as 2018 with the one year lag) reflects in part that a significant number of the projects in the 2017 sample of PPAs included, as part of the contract, a power storage component to store a portion of the solar-generated power for use in the evening or night.  But these additional costs for storage were remarkably modest, and were even less in several projects in the partial-year 2018 sample.  Specifically, Nevada Energy (as the offtaker) announced in June 2018 that it had contracted for three major solar projects that would include storage of power of up to one-quarter of generation capacity for four hours, with overall PPA prices (levelized, in 2017 prices) for both the generation and the storage of just $22.8, $23.5, and $26.4 per MWHr (i.e. 2.28 cents, 2.35 cents, and 2.64 cents per kWh, respectively).

The PPA prices reported can also be used to examine how the prices vary by region.  One should expect solar power to be cheaper in southern latitudes than in northern ones, and in dry, sunny, desert areas than in regions with more extensive cloud cover.  And this has led to the criticism by skeptics that solar power can only be competitive in places such as the US Southwest.

But this is less of an issue than one might assume.  Dividing up the PPA contracts by region (with no one-year lag in this chart), one finds:

Prices found in the PPAs are indeed lower in the Southwest, California, and Texas.  But the PPA prices for projects in the Southeast, the Midwest, and the Northwest fell at a similar pace to those in the more advantageous regions (and indeed, at a more rapid pace up to 2014).  And note that the prices in those less advantageous regions are similar to what they were in the more advantageous regions just a year or two before.  Finally, the absolute differences in prices have become relatively modest in the last few years.

The observed market prices for power generated by solar photovoltaic systems therefore appear to be consistent with the bottom-up LCOE estimates of Lazard – indeed remarkably so.  Both show a sharp fall in solar energy prices/costs over the last decade, and sharp falls both for the US as a whole and by region.  The next question is whether we see this reflected in investment in additions to new power generation capacity, and in the power generated by that capacity.

D.  Additions to Power Generation Capacity, and in Power Generation

The cost of power from a new solar or wind plant is now below the cost from gas (while the cost of new coal or nuclear generation capacity is totally uncompetitive).  But the LCOEs indicate that the cost advantage relative to gas is relatively recent in the case of solar (starting from 2016), and while a bit longer for wind, the significant gap in favor of wind only opened up in 2014.  One needs also to recognize that these are average or mid-point estimates of costs, and that in specific cases the relative costs will vary depending on local conditions.  Thus while solar or wind power is now cheaper on average across the US, in some particular locale a gas plant might be less expensive (especially if the costs resulting from its pollution are not charged).  Finally, and as discussed above, the new capacity may be needed to serve particular time-of-day needs, and this will affect the choices made.

Thus while one should expect a shift towards solar and wind over the last several years, and away from traditional fuels, the shift will not be absolute and immediate.  What do we see?

First, in terms of the gross additions to power sector generating capacity:

The chart shows the gross additions to power capacity, in megawatts, with both historical figures (up through 2018) and as reflected in plans filed with the US Department of Energy (for 2019 and 2020, with the plans as filed as of end-2018).  The data for this (and the other charts in this section) come from the most recent release of the Electric Power Annual of the Energy Information Administration (EIA) (which was for 2017, and was released on October 22, 2018), plus from the Electric Power Monthly of February, 2019, also from the EIA (where the February issue each year provides complete data for the prior calendar year, i.e. for 2018 in this case).

The planned additions to capacity (2019 and 2020 in the chart) provide an indication of what might happen over the next few years, but must be interpreted cautiously.  Power producers are required to file their plans for new capacity (as well as for retirements of existing capacity) with the Department of Energy, for transparency and to help ensure capacity (locally as well as nationally) remains adequate.  But a bias enters as one goes further into the future:  projects that require a relatively long lead time (such as gas plants, as well as coal and especially nuclear) will be filed years ahead, while the more flexible, shorter, construction periods for solar and wind plants mean that those plans are filed only close to when the capacity will actually be built.  For the next few years, however, the plans should provide a reasonable indication of how the market is developing.

As seen in the chart, solar and wind taken together accounted for the largest single share of gross additions to capacity, at least through 2017.  While there was then a bump up in new gas generation capacity in 2018, this is expected to fall back to earlier levels in 2019 and 2020.  And these three sources (solar, wind, and gas) accounted for almost all (93%) of the gross additions to new capacity over 2012 to 2018, with this expected to continue.

New coal-burning plants, in contrast, were already low and falling in 2012 and 2013, and there have been no new ones since then.  Nor are any planned.  This is as one would expect based on the LCOE estimates discussed above – new coal plants are simply not cost competitive.  And the additions to nuclear and other capacity have also been low.  “Other” capacity is a miscellaneous category that includes hydro, petroleum-fueled plants such as diesel, as well as other renewables such as from the burning of waste or biomass. The one bump up, in 2016, is due to a nuclear power plant coming on-line that year.  It was unit #2 of the Watts Bar nuclear power plant built by the Tennessee Valley Authority (TVA), and had been under construction for decades.  Indeed the most recent nuclear plant completed in the US before this one was unit #1 at the same TVA plant, which came on-line 20 years before in 1996.  Even aside from any nuclear safety concerns, nuclear plants are simply not economically competitive with other sources of power.

The above are gross additions to power generating capacity, reflecting what new plants are being built.  But old, economically or technologically obsolete, plants are also being retired, so what matters to the overall shift in power generation capacity is what has happened to net generation capacity:

What stands out here is the retirement of coal-burning plants.  And while the retirements might appear to diminish in the plans going forward, this may largely be due to retirement plans only being announced shortly before they happen.  It is also possible that political pressure from the Trump administration to keep coal-burning plants open, despite their higher costs (and their much higher pollution), might be a factor.  We will see what happens.

The cumulative impact of these net additions to capacity (relative to 2010 as the base year) yields:

Solar plus wind accounts for the largest addition to capacity, followed by gas.  Indeed, each of these accounts for more than 100% of the growth in overall capacity, as there has been a net reduction in the nuclear plus other category, and especially in coal.

But what does this mean in terms of the change in the mix of electric power generation capacity in the US?  Actually, less than one might have thought, as one can see in a chart of the shares:

The share of coal has come down, but remains high, and similarly for nuclear (plus miscellaneous other) capacity.  Gas remains the highest and has risen as a share, while solar and wind, although rising at a rapid pace relative to where they started, remain the smallest shares (of the categories used here).

The reason for these relatively modest changes in shares is that while solar and wind plus gas account for more than 100% of the net additions to capacity, that net addition has been pretty small.  Between 2010 and 2018, the net addition to US electric power generation capacity was just 58.8 thousand megawatts, or an increase over eight years of just 5.7% over what capacity was in 2010 (1,039.1 thousand megawatts).  A big share of something small will still be small.

So even though solar and wind are now the lowest cost sources of new power generation, the very modest increase in the total power capacity needed has meant that not that much has been built.  And much of what has been built has been in replacement of nuclear and especially coal capacity.  As we will discuss below, the economic issue then is not whether solar and wind are the cheapest source of new capacity (which they are), but whether new solar and wind are more economic than what it costs to continue to operate existing coal and nuclear plants.  That is a different question, and we will see that while new solar and wind are now starting to be a lower cost option than continuing to operate older coal (but not nuclear) plants, this development (a critically important development) has only been recent.

Why did the US require such a small increase in power generation capacity in recent years?  As seen in the chart below, it is not because GDP has not grown, but rather because energy efficiency (real GDP per MWHr of power) improved tremendously, at least until 2017:

From 2010 to 2017, real GDP rose by 15.7% (2.1% a year on average), but GDP per MWHr of power generated rose by 18.3%.  That meant that power generation (note that generation is the relevant issue here, not capacity) could fall by 2.2% despite the higher level of GDP.  Improving energy efficiency was a key priority during the Obama years, and it appears to have worked well.  It is better for efficiency to rise than to have to produce more power, even if that power comes from a clean source such as solar or wind.
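The arithmetic behind that statement is a simple decomposition, using the figures just cited (power generated equals GDP divided by GDP per MWHr):

```python
gdp_growth = 0.157        # real GDP, 2010 to 2017 (+15.7%)
efficiency_gain = 0.183   # real GDP per MWHr of power, 2010 to 2017 (+18.3%)

# Power generated = GDP / (GDP per MWHr), so its proportional change is:
power_change = (1 + gdp_growth) / (1 + efficiency_gain) - 1
print(f"{power_change:+.1%}")   # about -2.2%
```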

This reversed direction in 2018.  It is not clear why, but might be an early indication that the policies of the Trump administration are harming efficiency in our economy.  However, this is still just one year of data, and one will need to wait to see whether this was an aberration or a start of a new, and worrisome, trend.

Which brings us to generation.  While the investment decision is whether or not to add capacity, and if so then of what form (e.g. solar or gas or whatever), what is ultimately needed is the power generated.  This depends on the capacity available and then on the decision of how much of that capacity to use to generate the power needed at any given moment.  One needs to keep in mind that power in general is not stored (other than still very limited storage of solar and wind power), but rather has to be generated at the moment needed.  And since power demand goes up and down over the course of the day (higher during the daylight hours and lower at night), as well as over the course of the year (generally higher during the summer, due to air conditioning, and lower in other seasons), one needs total generation capacity sufficient to meet whatever the peak load might be.  This means that during all other times there will be excess, unutilized, capacity.  Indeed, since one will want to have a safety margin, one will want to have total power generation capacity of even more than whatever the anticipated peak load might be in any locale.

There will always, then, be excess capacity, just sometimes more and sometimes less.  And hence decisions will be necessary as to what of the available capacity to use at any given moment.  While complex, the ultimate driver of this will be (or at least should be, in a rational system) the short-run costs of producing power from the possible alternative sources available in the region where the power is needed.  These costs will be examined in the next section below.  But for here, we will look at how generation has changed over the last several years.

In terms of the change in power generation by source relative to the levels in 2010, one finds:

Gas now accounts for the largest increment in generation over this period, with solar and wind also growing (steadily) but by significantly less.  Coal-powered generation, in contrast, fell substantially, while nuclear and other sources were basically flat.  And as noted above, due to increased efficiency in the use of power (until 2017), total power use was flat to falling a bit, even as GDP grew substantially.  This reversed in 2018, when efficiency fell, and gas-generated power rose to provide for the resulting increased power demands.  Solar and wind continued on the same path as before, and coal generation still fell at a similar pace as before.  But it remains to be seen whether 2018 marked a change in the previous trend in efficiency gains, or was an aberration.

Why did power generation from gas rise by more than from solar and wind over the period, despite the larger increase in solar plus wind capacity than in gas generation capacity?  In part this reflects the cost factors which we will discuss in the next section below.  But in part one needs also to recognize factors inherent in the technologies.  Solar generation can only happen during the day (and is reduced under cloud cover), while wind generation depends on when the wind blows.  Without major power storage, this will limit how much solar and wind can be used.

The extent to which some source of power is in fact used over some period (say a year), as a share of what would be generated if the power plant operated at 100% of capacity for 24 hours a day, 365 days a year, is defined as the “capacity factor”.  In 2018, the capacity factor realized for solar photovoltaic systems was 26.1% while for wind it was 37.4%.  But for no power source is it 100%.  For natural gas combined cycle plants (the primary source of gas generation), the capacity factor was 57.6% in 2018 (up from 51.3% in 2017, due to the jump in power demand in 2018).  This is well below the theoretical maximum of 100% as in general one will be operating at less than peak capacity (plus plants need to be shut down periodically for maintenance and other servicing).
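The definition can be written out directly.  A sketch (the 100 MW plant and its annual output are hypothetical numbers, chosen to reproduce the 26.1% solar figure):

```python
HOURS_PER_YEAR = 8760   # 24 hours x 365 days

def capacity_factor(mwhr_generated_per_year, capacity_mw):
    """Actual annual generation as a share of what the plant would
    produce running at full capacity every hour of the year."""
    return mwhr_generated_per_year / (capacity_mw * HOURS_PER_YEAR)

# A hypothetical 100 MW solar farm generating 228,636 MWHr in a year:
print(f"{capacity_factor(228_636, 100):.1%}")   # 26.1%
```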

Increments in “capacity”, as measured, will therefore not tell the whole story.  How much such capacity is used also matters.  And the capacity factors for solar and wind will in general be less than those for the other primary sources of power generation, such as gas, coal, and nuclear (excluding the special case of plants designed solely to operate during short periods of peak load, or plants used as back-ups or for emergencies).  But how much less depends only partly on the natural constraints on the clean technologies.  It also depends on marginal operating costs, as we will discuss below.

Finally, while gas plus solar and wind have grown in terms of power generation since 2010, and coal has declined (and nuclear and other sources largely unchanged), coal-fired generation remains important.  In terms of the percentage shares of overall power generation:

While coal has fallen as a share, from about 45% of US power generation in 2010 to 27% in 2018, it remains high.  Only gas is significantly higher (at 35% in 2018).  Nuclear and other sources (such as hydro) account for 29%, with nuclear alone accounting for two-thirds of this and other sources the remaining one-third.  Solar and wind have grown steadily, and at a rapid rate relative to where they were in 2010, but in 2018 still accounted for only about 8% of US power generation.

Thus while coal has come down, there is still very substantial room for further substitution out of coal, by either solar and wind or by natural gas.  The cost factors that will enter into this decision on substituting out of coal will be discussed next.

E.  The Cost Factors That Enter in the Decisions on What Plants to Build, What Plants to Keep in Operation, and What Plants to Use

The Lazard analysis of costs presents estimates not only for the LCOE of newly built power generation plants, but also figures that can be used to arrive at the costs of operating a plant to produce power on any given day, and of operating a plant plus keeping it maintained for a year.  One needs to know these different costs in order to address different questions.  The LCOE is used to decide whether to build a new plant and keep it in operation for a period (20 years is used); the operating cost is used to decide which particular power plant to run at any given time to generate the power then needed (from among all the plants up and available to run that day); while the operating cost plus the cost of regular annual maintenance is used in the decision of whether to keep a particular plant open for another year.
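The dispatch decision in particular lends itself to a simple sketch: run whatever plants are up that day in order of short-run variable cost, cheapest first, until demand is met (a “merit order”).  The costs used here are the mid-point short-run figures from the Lazard-derived tables later in this section; the function itself is only an illustration of the logic, not how any actual grid operator's software works:

```python
# Mid-point short-run variable costs, $/MWHr (tables later in this section).
VARIABLE_COST = {"solar": 0.00, "wind": 0.00, "nuclear": 9.63,
                 "coal": 18.54, "gas": 25.23}

def dispatch(demand_mw, available_mw):
    """Allocate demand across available capacity, cheapest source first.
    available_mw maps source -> MW of capacity up and running that day."""
    dispatched = {}
    remaining = demand_mw
    for source in sorted(available_mw, key=VARIABLE_COST.__getitem__):
        take = min(remaining, available_mw[source])
        if take > 0:
            dispatched[source] = take
            remaining -= take
    return dispatched

# Solar and wind, at zero marginal cost, run whenever available; coal is
# dispatched before gas here because its fuel cost per MWHr is lower.
print(dispatch(100, {"gas": 100, "coal": 50, "solar": 30, "wind": 20}))
```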

The Lazard figures are not ideal for this, as they give cost figures for a newly built plant, using the technology and efficiencies available today.  The cost to maintain and operate an older plant will be higher than this, both because older technologies were less efficient but also simply because they are older and hence more liable to break down (and hence cost more to keep running) than a new plant.  But the estimates for a new plant do give us a sense of what the floor for such costs might be – the true costs for currently existing plants of various ages will be somewhat higher.

Lazard also recognized that there will be a range of such costs for a particular type of plant, depending on the specifics of the particular location and other such factors.  Their report therefore provides both what it labels low end and high end estimates, and with a mid-point estimate then based usually on the average between the two.  The figures shown in the chart at the top of this post are the mid-point estimates, but in the tables below we will show the low and high end cost estimates as well.  These figures are helpful in providing a sense of the range in the costs one should expect, although how Lazard defined the range they used is not fully clear.  They are not of the absolutely lowest possible cost plant nor absolutely highest possible cost plant.  Rather, the low end figures appear to be averages of the costs of some share of the lowest cost plants (possibly the lowest one third), and similarly for the high end figures.

The cost figures below are from the 2018 Lazard cost estimates (the most recent year available).  The operating and maintenance costs are by their nature current expenditures, and hence their costs will be in current, i.e. 2018, prices.  The LCOE estimates of Lazard are different.  As was noted above, these are the levelized prices that would need to be charged for the power generated to cover the costs of building and then operating and maintaining the plant over its assumed (20 year) lifetime.  They therefore need to be adjusted to reflect current prices.  For the chart at the top of this post, they were put in terms of 2017 prices (to make them consistent with the PPA prices presented in the Berkeley report discussed above).  But for the purposes here, we will put them in 2018 prices to ensure consistency with the prices for the operating and maintenance costs.  The difference is small (just 2.2%).

The cost estimates derived from the Lazard figures are then:

(all costs in 2018 prices)

A.  Levelized Cost of Energy from a New Power Plant:  $/MWHr

                 Solar      Wind       Gas        Coal       Nuclear
  low end       $31.23     $22.65     $32.02     $46.85     $87.46
  mid-point     $33.58     $33.19     $44.90     $79.26     $117.52
  high end      $35.92     $43.73     $57.78     $111.66    $147.58

B.  Cost to Maintain and Operate a Plant Each Year, including for Fuel:  $/MWHr

             Solar     Wind      Gas     Coal   Nuclear
low end      $4.00    $9.24   $24.38   $23.19   $23.87
mid-point    $4.66   $10.64   $26.51   $31.30   $25.11
high end     $5.33   $12.04   $28.64   $39.41   $26.35

C.  Short-term Variable Cost to Operate a Plant, including for Fuel:  $/MWHr

             Solar     Wind      Gas     Coal   Nuclear
low end      $0.00    $0.00   $23.16   $14.69    $9.63
mid-point    $0.00    $0.00   $25.23   $18.54    $9.63
high end     $0.00    $0.00   $27.31   $22.40    $9.63

A number of points follow from these cost estimates:

a)  First, and as was discussed above, the LCOE estimates indicate that for the question of what new type of power plant to build, it will in general be cheapest to obtain new power from a solar or wind plant.  The mid-point LCOE estimates for solar and wind are well below the costs of power from gas plants, and especially below the costs from coal or nuclear plants.

But as also noted before, local conditions vary and there will in fact be a range of costs for different types of plants.  The Lazard estimates indicate that a gas plant with costs at the low end of a reasonable range (estimated at about $32 per MWHr) would be competitive with solar or wind plants at the mid-point of their cost ranges (about $33 to $34 per MWHr), and below the costs of a solar plant at the high end of its cost range ($36) and especially of a wind plant at the high end of its range ($44).  However, there are not likely to be many such cases:  Gas plants with costs at the mid-point estimate would not be competitive, and even less so gas plants with costs near the high end estimate.

Furthermore, even the lowest cost coal and nuclear plants would be far from competitive with solar or wind plants when considering the building of new generation capacity.  This is consistent with what we saw in Section D above, of no new coal or nuclear plants being built in recent years (with the exception of one nuclear plant whose construction started decades ago and was only finished in 2016).

b)  More interesting is the question of whether it is economic to build new solar or wind plants to substitute for existing gas, coal, or nuclear plants.  The figures in panel B of the table on the cost to operate and maintain a plant for another year (all in terms of $/MWHr) can give us a sense of whether this is worthwhile.  Keeping in mind that these are going to be low estimates (as they are the costs for newly built plants, using the technologies available today, not for existing ones which were built possibly many years ago), the figures suggest that it would make economic sense to build new solar and wind plants (at their LCOE costs) and decommission all but the most efficient coal burning plants.

However, the figures also suggest that this will not be the case for most of the existing gas or nuclear plants.  For such plants, with their capital costs already incurred, the cost to maintain and operate them for a further year is in the range of $24 to $29 (per MWHr) for gas plants and $24 to $26 for nuclear plants.  Even recognizing that these cost estimates will be low (as they are based on what the costs would be for a new plant, not existing ones), only the more efficient solar and wind plants would have an LCOE that is less.  But the costs are close, and on the cusp of the point where it would be economic to build new solar and wind plants and decommission existing gas and nuclear plants, just as is already the case for most coal plants.
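The replace-or-keep comparison just described reduces to a simple rule: build new clean capacity when its full LCOE falls below the cost of merely operating the existing plant for another year.  A sketch using the coal figures from Panels A and B above (keeping in mind that, as noted, the Panel B figures understate the costs of older existing plants, so actual replacement cases will be more numerous than this comparison alone suggests):

```python
# Mid-point LCOE for new solar capacity (Panel A), $/MWh
lcoe_new_solar = 33.58

# Range of annual operate-and-maintain costs for coal (Panel B), $/MWh
coal_om = {"low end": 23.19, "mid-point": 31.30, "high end": 39.41}

def worth_replacing(om_cost, new_lcoe=lcoe_new_solar):
    """True when building a new solar plant costs less per MWh than
    merely keeping the existing coal plant maintained and running."""
    return new_lcoe < om_cost

for band, cost in coal_om.items():
    print(f"{band} coal (O&M ${cost}/MWh): replace with new solar?",
          worth_replacing(cost))
```

On these new-plant-based figures, only the high end (least efficient) coal plants are undercut outright; it is the higher actual costs of older existing plants that extend the rule's reach, as the text notes.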

c)  Panel C then provides figures to address the question of which power plants to operate, for those which are available for use on any given day.  With no short-term variable cost to generate power from solar or wind sources (they burn no fuel), it will always make sense to use those sources first when they are available.  The short-term cost to operate a nuclear power plant is also fairly low ($9.63 per MWHr in the Lazard estimates, with no significant variation in their estimates).  Unlike other plants, it is difficult to turn nuclear plants on and off, so such plants will generally be operated as baseload plants kept always on (other than for maintenance periods).

But it is interesting that, provided a coal burning plant has been kept active and not decommissioned, the Lazard figures suggest that the next cheapest source of power (if one ignores the pollution costs) will be from burning coal.  The figures indicate coal plants are expensive to maintain (the difference between the figures in panel B and in panel C) but then cheap to run once kept operational.  This would explain why many coal burning plants have been decommissioned in recent years (new solar and wind capacity is cheaper than the cost of keeping a coal burning plant maintained and operating), but also why, if a coal burning plant has been kept operational, it will then typically be cheaper to run than a gas plant.

d)  Finally, existing gas plants will cost between $23 and $27 per MWHr to run, mostly for the cost of the gas itself.  Maintenance costs are low.  These figures are somewhat less than the cost of building new solar or wind capacity, although not by much.

But there is another consideration as well.  Suppose one needs to add to night-time capacity, so solar power will not be of use (assuming storage is not an economic option).  Assume also that wind is not an option for some reason (perhaps the particular locale).  The LCOE figures indicate that a new gas plant would then be the next best alternative.  But once this gas plant is built, it will be available also for use during the day.  The question then is whether it would be cheaper to run that gas plant during the day also, or to build solar capacity to provide the day-time power.

And the answer is that at these costs, which exclude the costs of the pollution generated, it would be cheaper to run the gas plant.  The LCOE costs for new solar power range from $31 to $36 per MWHr (panel A above), while the variable cost of operating a gas plant built to supply night-time capacity ranges between $23 and $27 (panel C).  While the difference is not huge, it is still significant.

This may explain in part why new gas generation capacity is not only being built in the US, but also is then being used more than other sources for additional generation, even though new solar and wind capacity would be cheaper.  And part of the reason for this is that the costs imposed on others from the pollution generated by burning fossil fuels are not being borne by the power plant operators.  This will be examined in the next section below.

F.  The Impact of Including the Cost of Greenhouse Gas Emissions

Burning fossil fuels generates pollution.  Coal is especially polluting, in many different ways.  But I will focus here on just one area of damage caused by the burning of fossil fuels:  that from the generation of greenhouse gases.  These gases are warming the earth’s atmosphere, leading to an increased frequency of extreme weather events, from floods and droughts to severe storms and hurricanes of greater intensity.  While one cannot attribute any particular storm to the impact of a warmer planet, the increased frequency of such storms in recent decades is clearly a consequence of the warming.  It is the same as the relationship of smoking to lung cancer.  While one cannot with certainty attribute a particular case of lung cancer to smoking (there are cases of lung cancer among people who do not smoke), it is well established that the likelihood and frequency of lung cancer is higher among smokers.

When the costs from the damage created from greenhouse gases are not borne by the party responsible for the emissions, that party will ignore those costs.  In the case of power production, they do not take into account such costs in deciding whether to use clean sources (solar or wind) to generate the power needed, or to burn coal or gas.  But the costs are still there and are being imposed on others.  Hence economists have recommended that those responsible for such decisions face a price which reflects such costs.  A specific proposal, discussed in an earlier post on this blog, is to charge a tax of $40 per ton of CO2 emitted.  All the revenue collected by that tax would then be returned in equal per capita terms to the American population.  Applied to all sources of greenhouse gas emissions (not just power), the tax would lead to an annual rebate of almost $500 per person, or $2,000 for a family of four.  And since it is the rich who account most (in per person terms) for greenhouse gas emissions, it is estimated that such a tax and redistribution would lead to those in the lowest seven deciles of the population (the lowest 70%) receiving more on average than what they would pay (directly or indirectly), while only the richest 30% would end up paying more on a net basis.

Such a tax on greenhouse gas emissions would have an important effect on the decision of what sources of power to use when power is needed.  As noted in the section above, at current costs it is cheaper to use gas-fired generation, and even more so coal-fired generation, if those plants have been built and are available for operation, than it would cost to build new solar or wind plants to provide such power.  The costs are getting close to each other, but are not there yet.  If gas and coal burning plants do not need to worry about the costs imposed on others from the burning of their fuels, such plants may be kept in operation for some time.

A tax on the greenhouse gases emitted would change this calculus, even with all other costs as they are today.  One can calculate from figures presented in the Lazard report what the impact would be.  For the analysis here, I have looked at the impact of charging $20 per ton of CO2 emitted, $40 per ton of CO2, or $60 per ton of CO2.  Analyses of the social cost of CO2 emissions come up with a price of around $40 per ton, and my aim here was to examine a generous span around this cost.

Also entering the calculation is how much CO2 is emitted per MWHr of power produced.  Figures in the Lazard report (and elsewhere) put this at 0.51 tons of CO2 per MWHr for gas burning plants, and 0.92 tons of CO2 per MWHr for coal burning plants.  As has been commonly stated, the direct emissions of CO2 from gas burning plants are on the order of half of those from coal burning plants.

[Side note:  This does not take into account that a certain portion of natural gas leaks directly into the air at some point in the process between when it is pulled from the ground, transported via pipelines, and fed into its final use (e.g. at a power plant).  While perhaps small as a percentage of all the gas consumed (the EPA estimates a leak rate of 1.4%, although others estimate it to be more), natural gas (which is primarily methane) is itself a highly potent greenhouse gas, with an impact on atmospheric warming that is 34 times as great as the same weight of CO2 over a 100-year time horizon, and 86 times as great over a 20-year horizon.  If one takes such leakage into account (of even just 1.4%), and adds this warming impact to that of the CO2 produced by the gas that is burned rather than leaked, natural gas turns out to have an atmospheric warming impact similar to, if not greater than, that from the burning of coal.  However, for the calculations below, I will leave out the impact from leakage.  Including it would lead to even stronger results.]
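The emissions charges shown in the panels below are simple products of these emissions intensities and the assumed CO2 price, added onto the mid-point figures of Panels A and C.  A sketch of the calculation, using the figures already given in this post:

```python
# Emissions intensities from the Lazard report (tons of CO2 per MWh)
tons_per_mwh = {"solar": 0.0, "wind": 0.0, "gas": 0.51,
                "coal": 0.92, "nuclear": 0.0}

# Mid-point LCOE (Panel A) and short-term variable cost (Panel C), $/MWh
lcoe_mid = {"solar": 33.58, "wind": 33.19, "gas": 44.90,
            "coal": 79.26, "nuclear": 117.52}
var_mid = {"solar": 0.0, "wind": 0.0, "gas": 25.23,
           "coal": 18.54, "nuclear": 9.63}

def carbon_adder(source, price_per_ton):
    """Emissions charge per MWh at a given CO2 price (Panel D)."""
    return tons_per_mwh[source] * price_per_ton

# Panels E and F simply add this charge to the LCOE and variable costs
for price in (20, 40, 60):
    gas = round(var_mid["gas"] + carbon_adder("gas", price), 2)
    coal = round(var_mid["coal"] + carbon_adder("coal", price), 2)
    print(f"${price}/ton CO2: gas {gas}, coal {coal} ($/MWh variable)")
```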

One then has:

D.  Cost of Greenhouse Gas Emissions:  $/MWHr

                                Solar     Wind      Gas     Coal   Nuclear
Tons of CO2 Emitted per MWHr    0.000    0.000    0.510    0.920    0.000
Cost at $20/ton CO2             $0.00    $0.00   $10.20   $18.40    $0.00
Cost at $40/ton CO2             $0.00    $0.00   $20.40   $36.80    $0.00
Cost at $60/ton CO2             $0.00    $0.00   $30.60   $55.20    $0.00

E.  Levelized Cost of Energy for a New Power Plant, including Cost of Greenhouse Gas Emissions (mid-point figures):  $/MWHr

                        Solar     Wind      Gas     Coal   Nuclear
Cost at $20/ton CO2    $33.58   $33.19   $55.10   $97.66  $117.52
Cost at $40/ton CO2    $33.58   $33.19   $65.30  $116.06  $117.52
Cost at $60/ton CO2    $33.58   $33.19   $75.50  $134.46  $117.52

F.  Short-term Variable Cost to Operate a Plant, including Fuel and Cost of Greenhouse Gas Emissions (mid-point figures):  $/MWHr

                        Solar     Wind      Gas     Coal   Nuclear
Cost at $20/ton CO2     $0.00    $0.00   $35.43   $36.94    $9.63
Cost at $40/ton CO2     $0.00    $0.00   $45.63   $55.34    $9.63
Cost at $60/ton CO2     $0.00    $0.00   $55.83   $73.74    $9.63

Panel D shows what would be paid, per MWHr, if greenhouse gas emissions were charged at a rate of $20 per ton of CO2, of $40 per ton, or of $60 per ton.  The impact would be significant, ranging from $10 to $31 per MWHr for gas and from $18 to $55 for coal.

If these costs are then included in the Levelized Cost of Energy figures (using the mid-point estimates for the LCOE), one gets the costs shown in Panel E.  The costs of new power generation capacity from solar or wind sources (as well as nuclear) are unchanged as they have no CO2 emissions.  But the full costs of new gas or coal fired generation capacity will now mean that such sources are even less competitive than before, as their costs now also reflect, in part, the damage done as a result of their greenhouse gas emissions.

But perhaps most interesting is the impact on the choice of whether to keep burning gas or coal in plants that have already been built and remain available for operation.  This is provided in Panel F, which shows the short-term variable cost (per MWHr) of power generated by the different sources.  These short-term costs were primarily the cost of the fuel used, but now also include the cost to compensate for the damage from the resulting greenhouse gas emissions.

If gas as well as coal plants had to pay for the damages caused by their greenhouse gas emissions, then even at a cost of just $20 per ton of CO2 emitted they would not be competitive with building new solar or wind plants (whose LCOEs, in Panel E, are less).  At a cost of $40 or $60 per ton of CO2 emitted, they would be far from competitive, with costs 40% to 120% higher.  There would then be a strong incentive to build new solar and wind plants to serve what they can (including just the daytime markets), while existing gas plants (primarily) would in the near term be kept in reserve for service at night or at other times when solar and wind generation is not possible.

G.  Summary and Conclusion

The cost of new clean sources of power generation capacity, wind and especially solar, has plummeted over the last decade, and it is now cheaper to build new solar or wind capacity than to build new gas, coal, and especially nuclear capacity.  One sees this not only in estimates based on assessments of the underlying costs, but also in the actual market prices for new generation capacity (the PPA prices in such contracts).  Both have plummeted, and indeed at an identical pace.

While it was only relatively recently that the solar and wind generation costs have fallen below the cost of generation from gas, one does see these relative costs reflected in the new power generation capacity built in recent years.  Solar plus wind (together) account for the largest single source of new capacity, with gas also high.  And there have been no new coal plants since 2013 (nor nuclear, with the exception of one plant coming online which had been under construction for decades).

But while solar plus wind plants accounted for the largest share of new generation capacity in recent years, the impact on the overall mix was low.  And that is because not much new generation capacity has been needed.  Up until at least 2017, efficiency in energy use was improving to such an extent that no net new capacity was needed despite robust GDP growth.  A large share of something small will still be something small.

However, the costs of building new solar or wind generation capacity have now fallen to the point where it is cheaper to build new solar or wind capacity than it costs to maintain and keep in operation many of the existing coal burning power plants.  This is particularly the case for the older coal plants, with their older technologies and higher maintenance costs.  Thus one should see many of these older plants being decommissioned, and one does.

But it is still cheaper, when one ignores the cost of the damage done by the resulting pollution, to maintain and operate existing gas burning plants, than it would cost to build new solar or wind plants to generate the power they are able to provide.  And since some of the new gas burning plants being built may be needed to add to night-time generation capacity, this means that such plants will also be used to generate power by burning gas during the day, instead of installing solar capacity.

This cost advantage only holds, however, because gas-burning plants do not have to pay for the costs resulting from the damage their pollution causes.  While they pollute in many different ways, one is from the greenhouse gases they emit.  But if one charged them just $20 for every ton of CO2 released into the atmosphere when the gas is burned, the result would be different.  It would then be more cost competitive to build new solar or wind capacity to provide power whenever they can, and to save the gas burning plants for those times when such clean power is not possible.

There is therefore a strong case for charging such a fee.  However, many of those who had previously supported such an approach to address global warming have backed away in recent months, arguing that it would be politically impossible.  That assessment of the politics might be correct, but it really makes no sense.  First, it would be politically important that whatever revenues are generated are returned in full to the population, and on an equal per person basis.  While individual situations will of course vary (and those who lose out on a net basis, or perceive that they will, will complain the loudest), assessments based on current consumption patterns indicate that those in the lowest seven deciles of income (the lowest 70%) will on average come out ahead, while only those in the richest 30% will pay more.  It is the rich who, per person, account for the largest share of greenhouse gas emissions, creating costs that others are bearing.  And a redistribution from the richest 30% to the poorest 70% would be a positive redistribution.

But second, the alternative to reducing greenhouse gas emissions would need to be some approach based on top-down directives (central planning in essence), or a centrally directed system of subsidies that aims to offset the subsidies implicit in not requiring those burning fossil fuels to pay for the damages they cause, by subsidizing other sources of power even more.  Such approaches are not only complex and costly, but rarely work well in practice.  And they end up costing more than a fee-based system would.  The political argument being made in their favor ultimately rests on the assumption that by hiding the higher costs they can be made politically more acceptable.  But relying on deception is unlikely to be sustainable for long.

The sharp fall in costs for clean energy of the last decade has created an opportunity to switch our power supply to clean sources at little to no cost.  This would have been impossible just a few years ago.  It would be unfortunate in the extreme if we were to let this opportunity pass.

The Simple Economics of What Determines the Foreign Trade Balance: Econ 101

“There’s no reason that we should have big trade deficits with virtually every country in the world.”

“We’re like the piggybank that everybody is robbing.”

“the United States has been taken advantage of for decades and decades”

“Last year,… [the US] lost  … $817 billion on trade.  That’s ridiculous and it’s unacceptable.”

“Well, if they retaliate, they’re making a mistake.  Because, you see, we have a tremendous trade imbalance. … we can’t lose”

Statements made by President Trump at the press conference held as he left the G-7 meetings in Québec, Canada, June 9, 2018.

 

A.  Introduction

President Trump does not understand basic economics.  While that is not a surprise, nor something necessarily required or expected of a president, one should expect that a president would appoint advisors who do understand, and who would tell him when he is wrong.  Unfortunately, this president has been singularly unwilling to do so.  This is dangerous.

Trump is threatening a trade war.  Not only by his words at the G-7 meetings and elsewhere, but also by a number of his actions on trade and tariffs in recent months, Trump has made clear that he believes that a trade deficit is a “loss” to the nation, that countries with trade surpluses are somehow robbing those (such as the US) with a deficit, that raising tariffs can and will lead to reductions in trade deficits, and that if others then also raise their tariffs, the US will in the end necessarily “win” simply because the US has a trade deficit to start.

This is confused on many levels.  But it does raise the questions of what determines a country’s trade balance, whether a country “loses” if it has a trade deficit, and what role tariffs play.  This Econ 101 blog post will first look at the simple economics of what determines a nation’s trade deficit (hint:  it is not tariffs); will then discuss what tariffs do and where they do indeed matter; and will finally consider the role played by foreign investment (into the US) and whether a trade deficit can be considered a “loss” for the nation (a piggybank being robbed).

B.  What Determines the Overall Trade Deficit?

Let’s start with a very simple case, where government accounts are aggregated together with the rest of the economy.  We will later then separate out government.

The goods and services available in an economy can come either from what is produced domestically (which is GDP, or Gross Domestic Product) or from what is imported.  One can call this the supply of product.  These goods and services can then be used for immediate consumption, or for investment, or for export.  One can call this the demand for product.  And since investment includes any net change in inventories, the goods and services made available will always add up to the goods and services used.  Supply equals demand.

One can put this in a simple equation:

GDP + Imports = Domestic Consumption + Domestic Investment + Exports

Re-arranging:

(GDP – Domestic Consumption) – Domestic Investment = Exports – Imports

The first component on the left is Domestic Savings (what is produced domestically less what is consumed domestically).  And Exports minus Imports is the Trade Balance.  Hence one has:

Domestic Savings – Domestic Investment = Trade Balance

As one can see from the way this was derived, this is simply an identity – it always has to hold.  And what it says is that the Trade Balance will always be equal to the difference between Domestic Savings and Domestic Investment.  If Domestic Savings is less than Domestic Investment, then the Trade Balance (Exports less Imports) will be negative, and there will be a trade deficit.  To reduce the trade deficit, one therefore has to either raise Domestic Savings or reduce Domestic Investment.  It really is as straightforward as that.
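The identity can be verified with any set of numbers that satisfies the supply-demand balance.  A minimal check, using purely hypothetical figures:

```python
# Hypothetical economy (all figures in $ billions)
gdp, consumption, investment = 1000.0, 850.0, 200.0
exports, imports = 150.0, 200.0

# Supply of product equals demand for product:
assert gdp + imports == consumption + investment + exports

domestic_savings = gdp - consumption   # GDP - Domestic Consumption
trade_balance = exports - imports      # Exports - Imports

# Rearranged: Domestic Savings - Domestic Investment = Trade Balance
print(domestic_savings - investment, trade_balance)   # both -50
```

With savings of 150 against investment of 200, the trade balance must be a deficit of 50; no choice of tariffs enters the arithmetic.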

Where this becomes more interesting is in determining how this simple identity is brought about.  But here again, it is relatively straightforward in an economy which, like now, is at full employment.  GDP is then essentially fixed:  Since GDP equals labor employed times the productivity of those workers, it cannot immediately rise either by employing more labor (as all the workers who want a job have one), or by each of those workers suddenly becoming more productive (as productivity changes only gradually through time, by means of better education or investment in capital).

In such a situation, with GDP at its full employment level, Domestic Savings can only rise if Domestic Consumption goes down, as Domestic Savings equals GDP minus Domestic Consumption.  But households want to consume, and saving more will mean less for consumption.  There is a tradeoff.

The only other way to reduce the trade deficit would then be to reduce Domestic Investment.  But one generally does not want to reduce investment.  One needs investment in order to become more productive, and it is only through higher productivity that incomes can rise.

Reducing the trade deficit, if desirable (and whether it is desirable will be discussed below), will therefore not be easy.  There will be tradeoffs.  And note that tariffs do not enter directly into any of this.  Raising tariffs can only have an impact on the trade balance if, for some reason, they have a significant impact on either Domestic Savings or Domestic Investment, and tariffs are not a direct factor in either.  There may be indirect impacts of tariffs, which will be discussed below, but we will see that those indirect effects could actually act in the direction of increasing, not decreasing, the trade deficit.  However, whichever direction they act in, the indirect effects are likely to be small.  Tariffs will not have a significant effect on the trade balance.

But first, it is helpful to expand the simple analysis of the above to include Government as a separate set of accounts.  In the above we simply had the Domestic sector.  We will now divide that into the Domestic Private and the Domestic Public (or Government) sectors.  Note that Government includes government spending and revenues at all levels of government (state and local as well as federal).  But the government deficit is primarily a federal government issue.  State and local government entities are constrained in how much of a deficit they can run over time, and the overall balance they run (whether deficit or surplus) is relatively minor from the perspective of the country as a whole.

It will now also be convenient to write out the equations in symbols rather than words, and we will use:

GDP = Gross Domestic Product

C = Domestic Private Consumption

I = Domestic Private Investment

G = Government Spending (whether for Consumption or for Investment)

X = Exports

M = Imports

T = Taxes net of Transfers

Note that T (Taxes net of Transfers) will be the sum total of all taxes paid by the private sector to government, minus all transfers received by the private sector from government (such as for Social Security or Medicare).  I will refer to this as simply net Taxes (T).

The basic balance of goods or services available (supplied) and goods or services used (demanded) will then be:

GDP + M = C + I + G + X

We will then add and subtract net Taxes (T) on the right-hand side:

GDP + M = (C + T) + I + (G – T) + X

Rearranging:

GDP – (C + T) – (G – T) – I = X – M

(GDP – C – T) – I + (T – G) = X – M

Or in (abbreviated) words:

Dom. Priv. Savings – Dom. Priv. Investment + Govt Budget Balance = Trade Balance

Domestic Private Savings (savings by households and private businesses) is equal to what is produced in the economy (GDP), less what is privately consumed (C), less what is paid in net Taxes (T) by the private sector to the public sector.  Domestic Private Investment is simply I, and includes investment both by private businesses and by households (primarily in homes).  And the Government Budget Balance is equal to what government receives in net Taxes (T), less what Government spends (on either consumption items or on public investment).  Note that government spending on transfers (e.g. Social Security) is already accounted for in net Taxes (T).

This equation is very much like what we had before.  The overall Trade Balance will equal Domestic Private Savings less Domestic Private Investment plus the Government Budget Balance (which will be negative when a deficit, as has normally been the case except for a few years at the end of the Clinton administration).  If desired, one could break down the Government Budget Balance into Public Savings (equal to net Taxes minus government spending on consumption goods and services) less Public Investment (equal to government spending on investment goods and services), to see the parallel with Domestic Private Savings and Domestic Private Investment.  The equation would then read that the Trade Balance will equal Domestic Private Savings less Domestic Private Investment, plus Government Savings less Government Investment.  But there is no need.  The budget deficit, as commonly discussed, includes public spending not only on consumption items but also on investment items.

This is still an identity.  The balance will always hold.  And it says that to reduce the trade deficit (make it less negative) one has to either increase Domestic Private Savings, or reduce Domestic Private Investment, or increase the Government Budget Balance (i.e. reduce the budget deficit).  Raising Domestic Private Savings implies reducing consumption (when the economy is at full employment, as now).  Few want this.  And as discussed above, a reduction in investment is not desirable as investment is needed to increase productivity over time.
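The same accounting, extended to include government, shows directly how a wider budget deficit must show up in the other balances.  A minimal check with hypothetical figures, holding private savings and investment fixed:

```python
def trade_balance(priv_savings, priv_investment, budget_balance):
    """(GDP - C - T) - I + (T - G) = X - M, the identity derived above."""
    return priv_savings - priv_investment + budget_balance

# Hypothetical figures ($ billions): private savings 180, investment 200
print(trade_balance(180.0, 200.0, budget_balance=-30.0))  # deficit of 50

# Widen the budget deficit by 20, all else equal: the trade deficit
# must widen by the same 20
print(trade_balance(180.0, 200.0, budget_balance=-50.0))  # deficit of 70
```

This one-for-one arithmetic (all else equal) is what connects the tax cut discussed below to a larger trade deficit.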

This leaves the budget deficit, and most agree that it really does need to be reduced in an economy that is now at full employment.  Unfortunately, Trump and the Republican Congress have moved the budget in the exact opposite direction, primarily due to the huge tax cut passed last December, and to a lesser extent due to increases in certain spending (primarily for the military).  As discussed in an earlier post on this blog, an increase in the budget deficit to a forecast 5% of GDP at a time when the economy is at full employment is unprecedented in peacetime.

What this implies for the trade balance is clear from the basic identity derived above.  An increase in the budget deficit (a reduction in the budget balance) will lead, all else being equal, to an increase in the trade deficit (a reduction in the trade balance).  And it might indeed be worse, as all else is not equal.  The stated objective of slashing corporate taxes is to spur an increase in corporate investment.  But if private investment were indeed to rise (there is in fact little evidence that it has moved beyond previous trends, at least so far), this would further worsen the trade balance (increase the trade deficit).

Would raising tariffs have an impact?  One might argue that this would raise net Taxes paid, as tariffs on imports are a tax, which (if government spending is not then also changed) would reduce the budget deficit.  While true, the extent of the impact would be trivially small.  The federal government collected $35.6 billion in all customs duties and fees (tariffs and more) in FY2017 (see the OMB Historical Tables).  This was less than 0.2% of FY2017 GDP.  Even if all tariffs (and other fees on imports) were doubled, and the level of imports remained unchanged, this would raise only a further 0.2% of GDP.  But the trade deficit was 2.9% of GDP in FY2017.  It would not make much of a difference, even in such an extreme case.  Furthermore, the new tariffs are not being applied by Trump to all imports, but only to a limited share (and a very limited share so far).  Finally, if Trump’s tariffs in fact lead to lower imports of the items being newly taxed, as he hopes, then the tariffs collected will fall.  In the extreme, if imports of such items go to zero, the tariffs collected will go to zero.
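The orders of magnitude cited here can be checked with back-of-the-envelope arithmetic.  The FY2017 GDP figure of roughly $19.2 trillion is my approximation for this check, not a figure from the post:

```python
customs_receipts = 35.6e9   # FY2017 customs duties and fees (OMB figure)
gdp_fy2017 = 19.2e12        # approximate FY2017 GDP -- my assumption

share = customs_receipts / gdp_fy2017
print(f"Customs receipts: {share:.2%} of GDP")  # just under 0.2%

# Doubling all tariffs (imports unchanged) would add only the same share,
# against a trade deficit of 2.9% of GDP in FY2017
print(f"Extra revenue if doubled: {share:.2%} vs deficit of 2.90% of GDP")
```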

Thus, for several reasons, any impact on government revenues from the new Trump tariffs will be minor.

The notion that raising tariffs would be a way to eliminate the trade deficit is therefore confused.  The trade balance will equal the difference between Domestic Savings and Domestic Investment.  Adding in government, the trade balance will equal the difference between Domestic Private Savings and Domestic Private Investment, plus the equivalent for government (the Government Budget Balance, where a budget deficit will be a negative).  Tariffs have little to no effect on these balances.

C.  What Role Do Tariffs Play, Then?

Do tariffs then matter?  They do, although not in the determination of the overall trade deficit.  Rather, tariffs, which are a tax, will change the price of the particular import relative to the price of other products.  If applied only to imports from some countries and not from others, one can expect to see a shift in imports towards those countries where the tariffs have not been imposed.  And in the case when they are applied globally, on imports of the product from any country, one should expect that prices for similar products made in the US will then also rise.  To the extent there are alternatives, purchases of the now more costly products (whether imported or produced domestically) will be reduced, while purchases of alternatives will increase.  And there will be important distributional changes.  Profits of firms producing the now higher priced products will increase, while the profits of firms using such products as an input will fall.  And the real incomes of households buying any of these products will fall due to the higher prices.

Who wins and who loses can rapidly turn into something very complicated.  Take, for example, the new 25% tariff being imposed by the Trump administration on steel (and 10% on aluminum).  The tariffs were announced on March 8, to take effect on March 23.  Steel imports from Canada and Mexico were at first exempted, but the Trump administration later said those exemptions were only temporary.  On March 22 they then expanded the list of countries with temporary exemptions to include the EU, Australia, South Korea, Brazil, and Argentina, but only through May 1.  Then, on March 28, they said imports from South Korea would receive a permanent exemption, and Australia, Brazil, and Argentina were granted permanent exemptions on May 2.  After a short extension, tariffs were then imposed on steel imports from Canada, Mexico, and the EU on May 31.  And while this is how it stands as I write this, no one knows what further changes might be announced tomorrow.

With this uneven application of the tariffs by country, one should expect to see shifts in the imports by country.  What this achieves is not clear.  But there are also further complications.  There are hundreds if not thousands of different types of steel that are imported – both of different categories and of different grades within each category – and a company using steel in its production process in the US will need a specific type and grade of steel.  Many of these are not even available from a US producer of steel.  There is thus a system where US users of steel can apply for a waiver from the tariff.  As of June 19, there had been more than 21,000 petitions for a waiver.  But there were only 30 evaluators in the US Department of Commerce who would decide which petitions would be granted, and their training started only in the second week of June.  They will be swamped, and one senior Commerce Department official quoted in the Washington Post noted that “It’s going to be so unbelievably random, and some companies are going to get screwed”.  It would not be surprising to find political considerations (based on the interests of the Trump administration) playing a major role.

So far, we have only looked at the effects of one tariff (with steel as the example).  But multiple tariffs on various goods will interact, with difficult to predict consequences.  Take for example the tariff imposed on the imports of washing machines announced in late January, 2018, at a rate of 20% in the first year and at 50% should imports exceed 1.2 million units in the year.  This afforded US producers of washing machines a certain degree of protection from competition, and they then raised their prices by 17% over the next three months (February to May).

But steel is a major input used to make washing machines, and steel prices have risen with the new 25% tariff.  This will partially offset the gains the washing machine producers received from the tariff imposed on their product.  Will the Trump administration now impose an even higher tariff on washing machines to offset this?

More generally, the degree to which any given producer will gain or lose from such multiple tariffs will depend on several factors: the tariff rates applied (both for what they produce and for what they use as inputs), the degree to which they can find substitutes for the inputs they need, the degree to which those using their output will be able to substitute some alternative for it, and more.  Individual firms can end up ahead, or behind.  Economists call the net effect the degree of “net effective protection” afforded the industry, and it can be difficult to figure out.  Indeed, government officials who had thought they were providing positive protection to some industry have often found out later that they were in fact doing the opposite.
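
The standard textbook formula for the effective rate of protection (not spelled out in the text, so the functional form and all numbers here are illustrative) shows how an output tariff and input tariffs net out:

```python
# A sketch of the textbook "effective rate of protection" calculation:
# the net protection afforded to a firm's value added, given a tariff on
# its output and a tariff on its inputs.  Numbers are hypothetical.
def effective_protection(output_tariff, input_tariff, input_cost_share):
    """Effective rate of protection on value added:
    (t_out - a * t_in) / (1 - a), where a is the input cost share
    at free-trade (world) prices."""
    a = input_cost_share
    return (output_tariff - a * input_tariff) / (1 - a)

# A 20% tariff on the product, no tariff on inputs, inputs 50% of cost:
print(f"{effective_protection(0.20, 0.00, 0.5):.0%}")  # 40%

# Same output tariff, but a 25% tariff on the (steel) inputs:
print(f"{effective_protection(0.20, 0.25, 0.5):.0%}")  # 15%

# With a high enough input tariff, "protection" turns negative:
print(f"{effective_protection(0.10, 0.25, 0.5):.0%}")  # -5%
```

The last case is the one officials often discover too late: a tariff nominally protecting an industry can, once input tariffs are counted, leave it worse off than free trade.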

Finally, imposing such tariffs on imports will lead to responses from the countries that had been providing the goods.  Under the agreed rules of international trade, those countries can then impose commensurate tariffs of their own on products they had been importing from the US.  This will harm industries that may otherwise have been totally innocent in whatever was behind the dispute.

An example of what can then happen has been the impact on Harley-Davidson, the American manufacturer of heavy motorcycles (affectionately referred to as “hogs”).  Harley-Davidson is facing what has been described as a “triple whammy” from Trump’s trade decisions.  First, they are facing higher steel (and aluminum) prices for their production in the US, due to the Trump steel and aluminum tariffs.  Harley estimates this will add $20 million to their costs in their US plants.  For a medium-sized company, this is significant.  As of the end of 2017, Harley-Davidson had 5,200 employees in the US (see page 7 of this SEC filing).  With $20 million, they could pay each of their workers $3,850 more.  This is not a small amount.  Instead, the funds will go to bolster the profits of steel and aluminum firms.
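
The per-worker figure follows directly from the two numbers cited:

```python
# The per-worker arithmetic from the Harley-Davidson example in the text.
added_cost = 20_000_000   # estimated added steel/aluminum cost, US plants
us_employees = 5_200      # US employees at end-2017 (per the SEC filing cited)

per_worker = added_cost / us_employees
print(round(per_worker))  # 3846, i.e. roughly the $3,850 per worker cited
```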

Second, the EU has responded to the Trump tariffs on their steel and aluminum by imposing tariffs of their own on US motorcycle imports.  This would add $45 million in costs (or $2,200 per motorcycle) should Harley-Davidson continue to export motorcycles from the US to the EU.  Quite rationally, Harley-Davidson responded that they will now need to shift what had been US production to one of their plants located abroad, to avoid both the higher costs resulting from the new steel and aluminum tariffs, and from the EU tariffs imposed in response.

Third, and from earlier, Trump pulled the US out of the already negotiated (but still to be signed) Trans-Pacific Partnership agreement.  This agreement would have allowed Harley-Davidson to export their US built motorcycles to much of Asia duty-free.  They will now instead be facing high tariffs to sell to those markets.  As a result, Harley-Davidson has had to set up a new plant in Asia (in Thailand), shifting there what had been US jobs.

Trump reacted angrily to Harley-Davidson’s response to his trade policies.  He threatened that “they will be taxed like never before!”.  Yet what Harley-Davidson is doing should not have been a surprise, had any thought been given to what would happen once Trump started imposing tariffs on essential inputs needed in the manufacture of motorcycles (steel and aluminum), coming from our major trade partners (and often closest allies).  And it is positively scary that a president should even think that he should use the powers of the state to threaten an individual private company in this way.  Today it is Harley-Davidson.  Who will it be tomorrow?

There are many other examples of the problems that have already been created by Trump’s new tariffs.  To cite a few, and just briefly:

a)  The National Association of Home Builders estimated that the 20% tariff imposed in 2017 on imports of softwood lumber from Canada added nearly $3,600 to the cost of building an average single-family home in the US and would, over the course of a year, reduce wages of US workers by $500 million and cost 8,200 full-time US jobs.

b)  The largest nail manufacturer in the US said in late June that it has already had to lay off 12% of its workforce due to the new steel tariffs, and that unless it is granted a waiver, it would either have to relocate to Mexico or shut down by September.

c)  As of early June, Reuters estimated that at least $2.5 billion worth of investments in new utility-scale solar installation projects had been canceled or frozen due to the tariffs Trump imposed on the import of solar panel assemblies.  This is far greater than new investments planned for the assembly of such panels in the US.  Furthermore, the jobs involved in such assembly work are generally low-skill and repetitive, and can be automated should wages rise.

So there are consequences from such tariffs.  They might be unintended, and possibly not foreseen, but they are real.

But would the imposition of tariffs necessarily reduce the trade deficit, as Trump evidently believes?  No.  As noted above, the trade deficit would only fall if the tariffs would, for some reason, increase domestic savings or reduce domestic investment.  But tariffs do not enter directly into those factors.  Indirectly, one could map out some chains of possible causation, but changes in some set of tariffs (even if broadly applied to a wide range of imports) would not have a major effect on overall domestic savings or investment.  They could indeed even act in the opposite direction.

Households, to start, will face higher prices from the new tariffs.  To try to maintain their previous standard of living (in real terms) they would then need to spend more on what they consume and hence would save less.  This, by itself, would reduce domestic savings and hence would increase the trade deficit to the extent there was any impact.

The impacts on firms are more various, and depend on whether the firm will be a net winner or loser from the government actions and how they might then respond.  If a net winner, they have been able to raise their prices and hence increase their profits.  If they then save the extra profits (retained earnings), domestic savings would rise and the trade deficit would fall.  But if they increase their investments in what has now become a more profitable activity (and that is indeed the stated intention behind imposing the tariffs), that response would lead to an increase in the trade deficit.  The net effect will depend on whether their savings or their investment increases by more, and one does not know what that net change might be.  Different firms will likely respond differently.

One also has to examine the responses of the firms who will be the net losers from the newly imposed tariffs.  They will be paying more on their inputs and will see a reduction in their profits.  They will then save less and will likely invest less.  Again, the net impact on the trade deficit is not clear.

The overall impact on the trade deficit from these indirect effects is therefore uncertain, as one has effects that will act in opposing directions.  In part for this reason, but also because the tariffs will affect only certain industries and with responses that are likely to be limited (as a tariff increase today can be just as easily reversed tomorrow), the overall impact on the trade balance from such indirect effects is likely to be minor.

Increases in individual tariffs, such as those being imposed now by Trump, will not then have a significant impact on the overall trade balance.  But tariffs still do matter.  They change the mix of what is produced, from where items will be imported, and from where items will be produced for export (as the Harley-Davidson case shows).  They will create individual winners and losers, and hence it is not surprising to see the political lobbying that has grown in Washington under Trump.  Far from “draining the swamp”, Trump’s trade policy has made it critical for firms to step up their lobbying activities.

But such tariffs do not determine what the overall trade balance will be.

D.  What Role Does Foreign Investment Play in the Determination of the Trade Balance?

While tariffs will not have a significant effect on the overall trade balance, foreign investment (into the US) will.  To see this, we need to return to the basic macro balance derived in Section B above, but generalize it a bit to include all foreign financial flows.

The trade balance is the balance between exports and imports.  It is useful to generalize this to take into account two other sources of current flows in the national income and product accounts which add to (or reduce) the net demand for foreign exchange.  First, there will be foreign exchange earned by US nationals working abroad plus that earned by US nationals on investments they have made abroad.  Economists call this “factor services income”, or simply factor income, as labor and capital are referred to as factors of production.  This is then netted against such income earned in the US by foreign nationals either working here or on their investments here.  Second, there will be unrequited transfers of funds, such as by households to their relatives abroad, or by charities, or under government aid programs.  Again, this will be netted against the similar transfers to the US.

Adding the net flows from these to the trade balance will yield what economists call the “current account balance”.  It is a measure of the net demand for dollars (if positive) or for foreign exchange (if a deficit) from current flows.  To put some numbers on this, the US had a foreign trade deficit of $571.6 billion in 2017.  This was the balance between the exports and imports of goods and services (what economists call non-factor services to be more precise, now that we are distinguishing factor services from non-factor services).  It was negative – a deficit.  But the US also had a surplus in 2017 from net factor services income flows of $216.8 billion, and a deficit of $130.2 billion on net transfers (mostly from households sending funds abroad).  The balance on current account is the sum of these (with deficits as negatives and surpluses as positives) and came to a deficit of $485.0 billion in 2017, or 2.5% of GDP.  As a share of GDP, this deficit is significant but not huge.  The UK had a current account deficit of 4.1% of GDP in 2017 for example, while Canada had a deficit of 3.0%.
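
The 2017 current account arithmetic cited above can be laid out explicitly (the dollar GDP level used for the share is an approximation, as the text gives only the percentage):

```python
# The 2017 US current account arithmetic from the text, in $ billions.
# Deficits are entered as negatives, surpluses as positives.
trade_balance = -571.6     # goods and (non-factor) services: a deficit
net_factor_income = 216.8  # net factor services income: a surplus
net_transfers = -130.2     # net unrequited transfers: a deficit

current_account = trade_balance + net_factor_income + net_transfers
print(round(current_account, 1))  # -485.0, a deficit of $485.0 billion

gdp = 19_400  # approximate 2017 GDP in $ billions (assumption)
print(f"{current_account / gdp:.1%}")  # roughly -2.5% of GDP
```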

The current account for foreign transactions, basically a generalization of the trade balance, is significant as it will be the mirror image of the capital account for foreign transactions.  That is, when the US had a current account deficit of $485.0 billion (as in 2017), there had to be a capital account surplus of $485.0 billion to match this, as the overall purchases and sales of dollars in foreign exchange transactions will have to balance out, i.e. sum to zero.  The capital account incorporates all transactions for the purchase or sale of capital assets (investments) by foreign entities into the US, net of the similar purchase or sale of capital assets by US entities abroad.  When the capital account is a net positive (as has been the case for the US in recent decades), there is more such investment going into the US than is going out.  The investments can be into any capital assets, including equity shares in companies, or real estate, or US Treasury or other bonds, and so on.

But while the two (the current account and the capital account) have to balance out, there is an open question of what drives what.  Look at this from the perspective of a foreigner, wishing to invest in some US asset.  They need to get the dollars for this from somewhere.  While this would be done by means of the foreign exchange markets, which are extremely active (with trillions of dollars worth of currencies being exchanged daily), a capital account surplus of $485 billion (as in 2017) means that foreign entities had to obtain, over the course of the year, a net of $485 billion in dollars for their investments into the US.  The only way this could be done is by the US importing that much more than it exported over the course of the year.  That is, the US would need to run a current account deficit of that amount for the US to have received such investment.

If there is an imbalance between the two (the current account and the capital account), one should expect that the excess supply or demand for dollars will lead to changes in a number of prices, most directly foreign exchange rates, but also interest rates and other asset prices.  These interactions are complex, and we will not go into them all here.  Rather, the point to note is that a current account deficit, even if seemingly large, is not a sign of disequilibrium when there is a desire on the part of foreign investors to invest a similar amount in US markets.  And US markets have traditionally been a good place to invest.  The US is a large economy, with markets for assets that are deep and active, and these markets have normally been (with a few exceptions) relatively well regulated.

Foreign nationals and firms thus have good reason to invest a share of their assets in the US markets.  And the US has welcomed this, as all countries do.  But the only way they can obtain the dollars to make these investments is for the US to run a current account deficit.  Thus a current account deficit should not necessarily be taken as a sign of weakness, as Trump evidently does.  Depending on what governments are doing in their market interventions, a current account deficit might rather be a sign of foreign entities being eager to invest in the country.  And that is a good sign, not a bad one.

E.  An “Exorbitant Privilege”

The dollar (and hence the US) has a further, and important, advantage.  It is the world’s dominant currency, with most trade contracts (between all countries, not simply between some country and the US) denominated in dollars, as are contracts for most internationally traded commodities (such as oil).  And as noted above, investments in the US are particularly advantageous due to the depth and liquidity of our asset markets.  For these reasons, foreign countries hold most of their international reserves in dollar assets.  And most of these are held in what have been safe, but low yielding, short-term US Treasury bills.

As noted in Section D above, those seeking to make investments in dollar assets can obtain the dollars required only if the US runs a current account deficit.  This is as true for assets held in dollars as part of a country’s international reserves as for any other investments in US dollar assets.  Valéry Giscard d’Estaing, then the Minister of Finance of France, described this in the 1960s as an “exorbitant privilege” for the US (although the phrase is often mistakenly attributed to Charles de Gaulle, then his boss as president of France).

And it certainly is a privilege.  With the role of the dollar as the preferred reserve currency for countries around the world, the US is able to run current account deficits indefinitely, obtaining real goods and services from those countries while providing pieces of paper generating only a low yield in return.  Indeed, in recent years the rate of return on short-term US Treasury bills has generally been negative in real terms (i.e. after inflation).  The foreign governments buying these US Treasury bills are helping to cover part of our budget deficits, and are receiving little to nothing in return.
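
The notion of a negative real return can be made concrete with the usual adjustment (the yield and inflation figures here are hypothetical, not from the text):

```python
# Real (inflation-adjusted) return on a short-term bill, using the exact
# Fisher adjustment.  Both numbers below are hypothetical illustrations.
nominal_yield = 0.015   # 1.5% nominal T-bill yield (hypothetical)
inflation = 0.021       # 2.1% inflation over the same period (hypothetical)

real_return = (1 + nominal_yield) / (1 + inflation) - 1
print(f"{real_return:.2%}")  # about -0.59%: negative in real terms
```

Whenever inflation exceeds the nominal yield, the holder of the bill is, in real terms, paying for the privilege of lending.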

So is the US a “piggybank that everybody is robbing”, as Trump has asserted must necessarily be the case when the US has a current account deficit?  Not at all.  Indeed, it is the precise opposite.  The current account deficit is the mirror image of the foreign investment inflows coming into the US.  To obtain the dollars needed to do this, those countries must export more real goods to the US than they import from the US.  The US gains real resources (the net exports), while the foreign entities then invest in US markets.  And for governments obtaining dollars to hold as their international reserves, those investments are primarily in the highly liquid and safe, short-term US Treasury bills, despite those assets earning low or even negative returns.  This truly is an “exorbitant privilege”, not a piggybank being robbed.

Indeed, the real concern is that with the mismanagement of our budget (tax cuts increasing deficits at a time when deficits should be reduced) plus the return to an ideologically driven belief in deregulating banks and other financial markets (such as what led to the financial and then economic collapse of 2008), the dollar may lose its position as the place to hold international reserves.  The British pound had this position in the 1800s and then lost it to the dollar due to the financial stresses of World War I.  The dollar has had the lead position since.  But others would like it: most openly China, and more quietly Europeans hoping for such a role for the euro.  They would very much like to enjoy this “exorbitant privilege”, along with the current account deficits that privilege conveys.

F.  Summary and Conclusion

Trump’s beliefs on the foreign trade deficit, on the impact of hiking tariffs, and on who will “win” in a trade war, are terribly confused.  While one should not necessarily expect a president to understand basic economics, one should expect that a president would appoint and listen to advisors who do.  But Trump has not.

To sum up some of the key points:

a)  The foreign trade balance will always equal the difference between domestic savings and domestic investment.  Or with government accounts split out, the trade balance will equal the difference between domestic private savings and domestic private investment, plus the government budget balance.  The foreign trade balance will only move up or down when there is a change in the balance between domestic savings and domestic investment.

b)  One way to change that balance would be for the government budget balance to increase (i.e. for the government deficit to be reduced).  Yet Trump and the Republican Congress have done the precise opposite.  The massive tax cuts of last December, plus (to a lesser extent) the increase in government spending now budgeted (primarily for the military), will increase the budget deficit to record levels for an economy in peacetime at full employment.  This will lead to a bigger trade deficit, not a smaller one.

c)  One could also reduce the trade deficit by making the US a terrible place to invest in.  This would reduce foreign investment into the US, and hence the current account deficit.  In terms of the basic savings/investment balance, it would reduce domestic investment (whether driven by foreign investors or domestic ones).  If domestic savings was not then also reduced (a big if, and dependent on what was done to make the US a terrible place to invest in), this would lead to a similar reduction in the trade deficit.  This is of course not to be taken seriously, but rather illustrates that there are tradeoffs.  One should not simplistically assume that a lower trade deficit achieved by any means possible is good.

d)  It is also not at all clear that one should be overly concerned about the size of the trade and current account deficits where they stand today.  The US had a trade deficit of 2.9% of GDP in 2017 and a current account deficit of 2.5% of GDP.  While significant, these are not huge.  Should they become much larger (due, for example, to the forecast increases in government budget deficits to record levels), they might rise to problematic levels.  But at the current level of the current account deficit, we have seen the markets for foreign exchange and for interest rates functioning pretty well and without overt signs of concern.  The dollars being made available through the current account deficit have been bought up and used for investments in US markets.

e)  Part of the demand for dollars to be invested and held in the US markets comes from the need for international reserves by governments around the world.  The dollar is the dominant currency in the world, and with the depth and liquidity of the US markets (in particular for short-term US Treasury bills) most of these international reserves are held in dollars.  This has given the US what has been called an “exorbitant privilege”, and permits the US to run substantial current account deficits while providing in return what are in essence paper assets yielding just low (or even negative) returns.

f)  The real concern should not be with the consequences of the dollar playing such a role in the system of international trade, but rather with whether the dollar will lose this privileged status.  Other countries have certainly sought this status, most openly China, but also, more quietly, Europeans for the euro; so far the dollar has remained dominant.  But there are increasing concerns that with the mismanagement of the government budget (the recent tax cuts) plus ideologically driven deregulation of banks and the financial markets (as led to the 2008 financial collapse), countries will decide to shift their international reserves out of the dollar towards some alternative.

g)  What will not reduce the overall trade deficit, however, is selective increases in tariff rates, as Trump has started to do.  Such tariff increases will shift around the mix of countries from where the imports will come, and/or the mix of products being imported, but can only reduce the overall trade deficit to the extent such tariffs somehow lead to higher domestic savings and/or lower domestic investment.  Tariffs will not have a direct effect on such balances, and the indirect effects are going to be small and indeed possibly in the wrong direction (if the aim is to reduce the deficits).

h)  What such tariff policies will do, however, is create a mess.  And they already have, as the Harley-Davidson case illustrates.  Tariffs increase costs for US producers, and they will respond as best they can.  While the higher costs will possibly benefit certain companies, they will harm those using the products unless some government bureaucrat grants them a special exemption.

But what this does lead to is officials in government picking winners and losers.  That is a concern.  And it is positively scary to have a president lashing out and threatening individual firms, such as Harley-Davidson, when the firms respond to the mess created as one should have expected.