How Fast is GDP Growing?: A Curiosum

A.  How Fast is GDP Growing?

The Bureau of Economic Analysis released today its first estimate (what it calls its Advance Estimate) for the growth of GDP and its components for the third quarter of 2019.  Most of it looked basically as one would expect, with an estimate of real GDP growth of 1.9% in the quarter, or about the same as the 2.0% growth rate of the second quarter.  There has been a continued slowdown in private investment (which I will discuss below), but this has been offset by an expansion in government spending under Trump, coupled with steady growth in personal consumption expenditures (as one would expect with an economy now at full employment).

But there was a surprise on the last page of the report, in Appendix Table A.  This table provides growth rates of some miscellaneous aggregates that contribute to GDP growth, as well as their contribution to overall GDP growth.  One line shown is for “motor vehicle output”.  What is surprising is that the growth rate shown, at an annualized rate, is an astounding 32.6%!  The table also indicates that real GDP excluding motor vehicle output would have grown at just 1.2% in the quarter.  (I get 1.14% using the underlying, non-rounded, numbers, but these are close.)  The difference is shown in the chart above.

Some points should be noted.  While all these figures provided by the BEA are shown at annualized growth rates, one needs to keep in mind that the underlying figures are for growth in just one quarter.  Hence the quarterly growth will be roughly one-quarter of the annual rate, plus the effects of compounding.  For the motor vehicle output numbers, the estimated growth in the quarter was 7.3%, which if compounded over four quarters would yield the 32.6% annualized rate.  One should also note that the quarterly output figures of this sector are quite volatile historically, and while there has not been a change as large as the 32.6% since 2009/10 (at the time of the economic downturn and recovery) there have been a few quarters when it was in the 20s.
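The compounding arithmetic behind the annualization convention can be checked directly (a quick Python sketch; the 7.3% quarterly figure is from the BEA table):

```python
# Quarterly growth rates are annualized by compounding over four quarters.
quarterly_growth = 0.073  # BEA's estimated Q3 growth for motor vehicle output

annualized = (1 + quarterly_growth) ** 4 - 1
print(f"{annualized:.1%}")  # about 32.6%, matching the reported annualized rate
```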

But what appears especially odd, and also possibly interesting to those trying to understand how the GDP accounts are estimated, is why there should have been such tremendously high growth in the sector, of 32.6%, when the workers at General Motors were on strike for half of September (starting on September 15).  GM is the largest car manufacturer in the US, and its production plummeted during the strike; yet the GDP figures indicate that motor vehicle output not only soared in the quarter, but by itself raised overall GDP growth to 1.9% from the 1.2% rate that would have obtained had the sector been flat.

This is now speculation on my part, but I suspect the reason stems from the warning the BEA regularly provides that the initial GDP estimates, issued just one month after the end of the quarter being covered, really are preliminary and partial.  The BEA receives data on the economy from numerous sources, and a substantial share of that data is incomplete just one month following the end of a quarter.  For motor vehicle production, I would not be surprised if the BEA might only be receiving data for two months (July and August in this case) in time for this initial estimate.  They would then estimate the third month based on past patterns and seasonality.

But because of the strike, past patterns will be misleading.  Production at GM may have been ramped up in July and August in anticipation of the strike, and a mechanical extrapolation of this into September, while normally fine, might have been especially misleading this time.

I stress that this is speculation on my part.  Revised estimates of GDP growth in the third quarter, based on more complete data, will be issued in late November and then again, with even more data, in late December.  We will see what these estimates say.  I would not be surprised if the growth figure for GDP is revised substantially downwards.

B.  Growth in Nonresidential Private Fixed Investment

The figures released by the BEA today also include its estimates for private fixed investment.  The nonresidential portion of this is basically business investment, and it is interesting to track what it has been doing over the last few years.  The argument made for the Trump/Republican tax cuts pushed through Congress in December 2017 was that they would spur business investment.  Corporate profit taxes were basically cut in half.

But the figures show no such spurt in business investment after taxes were slashed.  Nonresidential private fixed investment was growing at a relatively high rate already in the fourth quarter of 2017 (similar to rates seen between mid-2013 and mid-2014, and there was even growth of 11.2% in the second quarter of 2014).  This continued through the first half of 2018.  But growth since then has fallen steadily, and is now even negative, with a decline of 3.0% in the third quarter of 2019:

There is no indication here that slashing corporate profit taxes (and other business taxes) led to greater business investment.

Andrew Yang’s Proposed $1,000 per Month Grant: Issues Raised in the Democratic Debate

A.  Introduction

This is the second in a series of posts on this blog addressing issues that have come up during the campaign of the candidates for the Democratic nomination for president, and which specifically came up in the October 15 Democratic debate.  As flagged in the previous blog post, one can find a transcript of the debate at the Washington Post website, and a video of the debate at the CNN website.

This post will address Andrew Yang’s proposal of a $1,000 per month grant for every adult American (which I will mostly refer to here as a $12,000 grant per year).  This policy is called a universal basic income (or UBI), and has been explored in a few other countries as well.  It has received increased attention in recent years, in part due to the sharp growth in income inequality in the US in recent decades, beginning around 1980.  If properly designed, such a $12,000 grant per adult per year could mark a substantial redistribution of income.  But the degree of redistribution depends directly on how the funding would be raised.  As we will discuss below, Yang’s specific proposals for that are problematic.  There are also other issues with such a program which, even if it were well designed, call into question whether it would be the best approach to addressing inequality.  All this will be discussed below.

First, however, it is useful to address two misconceptions that appear to be widespread.  One is that many appear to believe that the $12,000 per adult per year would not need to come from somewhere.  That is, everyone would receive it, but no one would have to provide the funds to pay for it.  That is not possible.  The economy produces only so much; whatever is produced accrues as income to someone, and if one is to transfer some amount ($12,000 here) to each adult, then the amounts so transferred will need to come from somewhere.  That is, this is a redistribution.  There is nothing wrong with a redistribution, if well designed, but it is not a magical creation of something out of nothing.

The other misconception, and asserted by Yang as the primary rationale for such a $12,000 per year grant, is that a “Fourth Industrial Revolution” is now underway which will lead to widespread structural unemployment due to automation.  This issue was addressed in the previous post on this blog, where I noted that the forecast job losses due to automation in the coming years are not out of line with what has been the norm in the US for at least the last 150 years.  There has always been job disruption and turnover, and while assistance should certainly be provided to workers whose jobs will be affected, what is expected in the years going forward is similar to what we have had in the past.

It is also just as well that workers are not expected to rely on a $12,000 per year grant to make up for a lost job.  Median earnings of a full-time worker were an estimated $50,653 in 2018, according to the Census Bureau.  A grant of $12,000 would not go far in making up for this.

So the issue is one of redistribution, and to be fair to Yang, I should note that he posts on his campaign website a fair amount of detail on how the program would be paid for.  I make use of that information below.  But the numbers do not really add up, and for a candidate who champions math (something I admire), this is disappointing.

B.  Yang’s Proposal of a $1,000 Monthly Grant to All Americans

First of all, the overall cost.  This is easy to calculate, although not much discussed.  The $12,000 per year grant would go to every adult American, whom Yang defines as all those over the age of 18.  There were very close to 250 million Americans over the age of 18 in 2018, so at $12,000 per adult the cost would be $3.0 trillion.

This is far from a small amount.  With GDP of approximately $20 trillion in 2018 ($20.58 trillion to be more precise), such a program would come to 15% of GDP.  That is huge.  Total taxes and revenues received by the federal government (including all income taxes, all taxes for Social Security and Medicare, and everything else) only came to $3.3 trillion in FY2018.  This is only 10% more than the $3.0 trillion that would have been required for Yang’s $12,000 per adult grants.  Or put another way, taxes and other government revenues would need almost to be doubled (raised by 91%) to cover the cost of the program.  As another comparison, the cost of the tax cuts that Trump and the Republican leadership rushed through Congress in December 2017 was forecast to be an estimated $150 billion per year.  That was a big revenue loss.  But the Yang proposal would cost 20 times as much.
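The cost arithmetic above can be laid out explicitly (a Python sketch using the rounded figures cited in the text):

```python
adults = 250e6        # Americans over age 18 in 2018 (approximate)
grant = 12_000        # annual grant per adult under the Yang proposal
gdp = 20.58e12        # US GDP in 2018
fed_revenue = 3.3e12  # total federal taxes and revenues, FY2018

cost = adults * grant
print(f"Total cost: ${cost / 1e12:.1f} trillion")            # $3.0 trillion
print(f"Share of GDP: {cost / gdp:.0%}")                     # about 15%
print(f"Required revenue increase: {cost / fed_revenue:.0%}")  # about 91%
```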

With such amounts to be raised, Yang proposes on his campaign website a number of taxes and other measures to fund the program.  One is a value-added tax (VAT), and from his very brief statements during the debates but also in interviews with the media, one gets the impression that all of the program would be funded by a value-added tax.  But that is not the case.  He in fact says on his campaign website that the VAT, at the rate and coverage he would set, would raise only about $800 billion.  This would come only to a bit over a quarter (27%) of the $3.0 trillion needed.  There is a need for much more besides, and to his credit, he presents plans for most (although not all) of this.

So what does he propose specifically?:

a) A New Value-Added Tax:

First, and as much noted, he is proposing that the US institute a VAT at a rate of 10%.  He estimates it would raise approximately $800 billion a year, and for the parameters for the tax that he sets, that is a reasonable estimate.  A VAT is common in most of the rest of the world as it is a tax that is relatively easy to collect, with internal checks that make underreporting difficult.  It is in essence a tax on consumption, similar to a sales tax but levied only on the added value at each stage in the production chain.  Yang notes that a 10% rate would be approximately half of the rates found in Europe (which is more or less correct – the rates in Europe in fact vary by country and are between 17 and 27% in the EU countries, but the rates for most of the larger economies are in the 19 to 22% range).

A VAT is a tax on what households consume, and for that reason a regressive tax.  The poor and middle classes who have to spend all or most of their current incomes to meet their family needs will pay a higher share of their incomes under such a tax than higher-income households will.  For this reason, VAT systems as implemented will often exempt (or tax at a reduced rate) certain basic goods such as foodstuffs and other necessities, as such goods account for a particularly high share of the expenditures of the poor and middle classes.  Yang is proposing this as well.  But even with such exemptions (or lower VAT rates), a VAT tax is still normally regressive, just less so.

Furthermore, households will in the end be paying the tax, as prices will rise to reflect the new tax.  Yang asserts that some of the cost of the VAT will be shifted to businesses, who would not be able, he says, to pass along the full cost of the tax.  But this is not correct.  In the case where the VAT applies equally to all goods, the full 10% will be passed along as all goods are affected equally by the now higher cost, and relative prices will not change.  To the extent that certain goods (such as foodstuffs and other necessities) are exempted, there could be some shift in demand to such goods, but the degree will depend on the extent to which they are substitutable for the goods which are taxed.  If they really are necessities, such substitution is likely to be limited.

A VAT as Yang proposes thus would raise a substantial amount of revenue, and the $800 billion figure is a reasonable estimate.  This total would be on the order of half of all that is now raised by individual income taxes in the US (which were $1,684 billion in FY2018).  But one cannot avoid the fact that such a tax is paid by households, who will face higher prices on what they purchase, and the tax will almost certainly be regressive, impacting the poor and middle classes the most (with the extent dependent on how many and which goods are designated as subject to a reduced VAT rate, or no VAT at all).  But whether regressive or not, everyone will be affected and hence no one will actually see a net increase of $12,000 in purchasing power from the proposed grant.  Rather, it will be something less.

b)  A Requirement to Choose Either the $12,000 Grants, or Participation in Existing Government Social Programs

Second, Yang’s proposal would require that households who currently benefit from government social programs, such as for welfare or food stamps, would be required to give up those benefits if they choose to receive the $12,000 per adult per year.  He says this will lead to reduced government spending on such social programs of $500 to $600 billion a year.

There are two big problems with this.  The first is that those programs are not that large.  While it is not fully clear how expansive Yang’s list is of the programs which would then be denied to recipients of the $12,000 grants, even if one included all those included in what the Congressional Budget Office defines as “Income Security” (“unemployment compensation, Supplemental Security Income, the refundable portion of the earned income and child tax credits, the Supplemental Nutrition Assistance Program [food stamps], family support, child nutrition, and foster care”), the total spent in FY2018 was only $285 billion.  You cannot save $500 to $600 billion if you are only spending $285 billion.

Second, such a policy would be regressive in the extreme.  Poor and near-poor households, and only such households, would be forced to choose whether to continue to receive benefits under such existing programs, or receive the $12,000 per adult grant per year.  If they are now receiving $12,000 or more in such programs per adult household member, they would receive no benefit at all from what is being called a “universal” basic income grant.  To the extent they are now receiving less than $12,000 from such programs (per adult), they may gain some benefit, but less than $12,000 worth.  For example, if they are now receiving $10,000 in benefits (per adult) from current programs, their net gain would be just $2,000 (setting aside for the moment the higher prices they would also now need to pay due to the 10% VAT).  Furthermore, only the poor and near-poor who are being supported by such government programs will see such an effective reduction in their $12,000 grants.  The rich and others, who benefit from other government programs, will not see such a cut in the programs or tax subsidies that benefit them.

c)  Savings in Other Government Programs 

Third, Yang argues that with his universal basic income grant, there would be a reduction in government spending of $100 to $200 billion a year from lower expenditures on “health care, incarceration, homelessness services and the like”, as “people would be able to take better care of themselves”.  This is clearly more speculative.  There might be some such benefits, and hopefully would be, but without experience to draw on it is impossible to say how important this would be and whether any such savings would add up to such a figure.  Furthermore, much of those savings, were they to follow, would accrue not to the federal government but rather to state and local governments.  It is at the state and local level where most expenditures on incarceration and homelessness, and to a lesser degree on health care, take place.  They would not accrue to the federal budget.

d)  Increased Tax Revenues From a Larger Economy

Fourth, Yang states that with the $12,000 grants the economy would grow larger – by 12.5% he says (or $2.5 trillion in increased GDP).  He cites a 2017 study produced by scholars at the Roosevelt Institute, a left-leaning non-profit think tank based in New York, which examined the impact on the overall economy, under several scenarios, of precisely such a $12,000 annual grant per adult.

There are, however, several problems:

i)  First, under the specific scenario that is closest to the Yang proposal (where the grants would be funded through a combination of taxes and other actions), the impact on the overall economy forecast in the Roosevelt Institute study would be either zero (when net distribution effects are neutral), or small (up to 2.6%, if funded through a highly progressive set of taxes).

ii)  The reason for this result is that the model used by the Roosevelt Institute researchers assumes that the economy is far from full employment, and that economic output is then entirely driven by aggregate demand.  Thus with a new program such as the $12,000 grants, which is fully paid for by taxes or other measures, there is no impact on aggregate demand (and hence no impact on economic output) when net distributional effects are assumed to be neutral.  If funded in a way that is not distributionally neutral, such as through the use of highly progressive taxes, then there can be some effect, but it would be small.

In the Roosevelt Institute model, there is only a substantial expansion of the economy (of about 12.5%) in a scenario where the new $12,000 grants are not funded at all, but rather purely and entirely added to the fiscal deficit and then borrowed.  And with the current fiscal deficit now about 5% of GDP under Trump (unprecedented even at 5% in a time of full employment, other than during World War II), and the $12,000 grants coming to $3.0 trillion or 15% of GDP, this would bring the overall deficit to 20% of GDP!

Few economists would accept that such a scenario is anywhere close to plausible.  First of all, the current unemployment rate of 3.5% is at a 50-year low.  The economy is at full employment.  The Roosevelt Institute researchers are asserting that this is fictitious, and that the economy could expand by a substantial amount (12.5% in their scenario) if the government simply spent more and did not raise taxes to cover any share of the cost.  They also assume that a fiscal deficit of 20% of GDP would not have any consequences, such as on interest rates.  Note also that an implication of their approach is that the government spending could be on anything, including, for example, the military.  They are using a purely demand-led model.

iii)  Finally, even if one assumes the economy will grow to be 12.5% larger as a result of the grants, even the Roosevelt Institute researchers do not assume it will be instantaneous.  Rather, in their model the economy becomes 12.5% larger only after eight years.  Yang is implicitly assuming it will be immediate.

There are therefore several problems in the interpretation and use of the Roosevelt Institute study.  Their scenario for 12.5% growth is not the one that follows from Yang’s proposals (which is funded, at least to a degree), nor would GDP jump immediately by such an amount.  And the Roosevelt Institute model of the economy is one that few economists would accept as applicable in the current state of the economy, with its 3.5% unemployment.

But there is also a further problem.  Even assuming GDP rises instantly by 12.5%, leading to an increase in GDP of $2.5 trillion (from a current $20 trillion), Yang then asserts that this higher GDP will generate between $800 and $900 billion in increased federal tax revenue.  That would imply federal taxes of 32 to 36% on the extra output.  But that is implausible.  Total federal tax (and all other) revenues are only 17.5% of GDP.  While in a progressive tax system the marginal tax revenues received on an increase in income will be higher than at the average tax rate, the US system is no longer very progressive.  And marginal rates are nowhere near twice as high (the 32 to 36% implied) as the average rate (17.5%).  A more plausible estimate of the increased federal tax revenues from an economy that somehow became 12.5% larger would not be the $800 to $900 billion Yang calculates, but rather about half that.
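The implied marginal tax rate can be backed out directly (a Python sketch using the figures cited above):

```python
gdp_increase = 2.5e12              # claimed GDP jump: 12.5% of roughly $20 trillion
claimed_revenue = (800e9, 900e9)   # Yang's claimed extra federal revenue
avg_tax_share = 0.175              # total federal revenues as a share of GDP

# Implied marginal tax rate on the extra output
implied = [r / gdp_increase for r in claimed_revenue]
print(f"Implied marginal rate: {implied[0]:.0%} to {implied[1]:.0%}")

# What the average federal tax share would actually yield
plausible = avg_tax_share * gdp_increase
print(f"At the 17.5% average share: ${plausible / 1e9:.0f} billion")
```

The second figure comes to roughly $440 billion, which is the basis for the "about half" assessment above.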

Might such a universal basic income grant affect the size of the economy through other, more orthodox, channels?  That is certainly possible, although whether it would lead to a higher or to a lower GDP is not clear.  Yang argues that it would lead recipients to manage their health better, to stay in school longer, to less criminality, and to other such social benefits.  Evidence on this is highly limited, but it is in principle conceivable in a program that does properly redistribute income towards those with lower incomes (where, as discussed above, Yang’s specific program has problems).  Over fairly long periods of time (generations really) this could lead to a larger and stronger economy.

But one will also likely see effects working in the other direction.  There might be an increase in spouses (wives usually) who choose to stay home longer to raise their children, or an increase in those who decide to retire earlier than they would have before, or an increase in the average time between jobs by those who lose or quit from one job before they take another, and other such impacts.  Such impacts are not negative in themselves, if they reflect choices voluntarily made and now possible due to a $12,000 annual grant.  But they all would have the effect of reducing GDP, and hence the tax revenues that follow from some level of GDP.

There might therefore be both positive and negative impacts on GDP.  However, the impact of each is likely to be small, will mostly develop only over time, and the two will to some extent cancel each other out.  What is likely is that there will be little measurable change in GDP in either direction.

e)  Other Taxes

Fifth, Yang would institute other taxes to raise further amounts.  He does not specify precisely how much would be raised or what these would be, but provides a possible list and says they would focus on top earners and on pollution.  The list includes a financial transactions tax, ending the favorable tax treatment now given to capital gains and carried interest, removing the ceiling on wages subject to the Social Security tax, and a tax on carbon emissions (with a portion of such a tax allocated to the $12,000 grants).

What would be raised by such new or increased taxes would depend on precisely what the rates would be and what they would cover.  But the total that would be required, under the assumption that the amounts that would be raised (or saved, when existing government programs are cut) from all the measures listed above are as Yang assumes, would then be between $500 and $800 billion (as the revenues or savings from the programs listed above sum to $2.2 to $2.5 trillion).  That is, one might need from these “other taxes” as much as would be raised by the proposed new VAT.
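Summing the funding sources Yang identifies, and comparing against the $3.0 trillion cost, makes the residual to be covered by these "other taxes" explicit (a Python sketch using the ranges cited above, taken at Yang's own claimed values):

```python
needed = 3.0e12                       # total annual cost of the grants
vat = 0.8e12                          # claimed VAT revenue
program_savings = (0.5e12, 0.6e12)    # claimed cuts to existing social programs
other_savings = (0.1e12, 0.2e12)      # claimed savings on health care, incarceration, etc.
growth_revenue = (0.8e12, 0.9e12)     # claimed revenue from a larger economy

low = vat + program_savings[0] + other_savings[0] + growth_revenue[0]
high = vat + program_savings[1] + other_savings[1] + growth_revenue[1]
print(f"Identified sources: ${low / 1e12:.1f} to ${high / 1e12:.1f} trillion")
print(f"Left for other taxes: ${(needed - high) / 1e12:.1f} to ${(needed - low) / 1e12:.1f} trillion")
```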

But as noted in the discussion above, the amounts that would be raised by those measures are often likely to be well short of what Yang says will be the case.  One cannot save $500 to $600 billion in government programs for the poor and near-poor if government is spending only $285 billion on such programs, for example.  A more plausible figure for what might be raised by those proposals would be on the order of $1 trillion, mostly from the VAT, and not the $2.2 to $2.5 trillion Yang says will be the case.

C.  An Assessment

Yang provides a fair amount of detail on how he would implement a universal basic income grant of $12,000 per adult per year, and for a political campaign it is an admirable amount of detail.  But there are still, as discussed above, numerous gaps that prevent anything like a complete assessment of the program.  Even so, a number of points are evident.

To start, the figures provided are not always plausible.  The math just does not add up, and for someone who extols the need for good math (and rightly so), this is disappointing.  One cannot save $500 to $600 billion in programs for the poor and near-poor when only $285 billion is being spent now.  One cannot assume that the economy will jump immediately by 12.5% (which even the Roosevelt Institute model forecasts would only happen after eight years, and under a scenario that is the opposite of that of the Yang program, and in a model that few economists would take as credible in any case).  Even if the economy did jump by so much immediately, one would not see an increase of $800 to $900 billion in federal tax revenues from this, but rather more like half that.  And other such issues.

But while the proposal is still not fully spelled out (in particular on which other taxes would be imposed to fill out the program), we can draw a few conclusions.  One is that the one group in society who will clearly not gain from the $12,000 grants is the poor and near-poor, who currently make use of food stamp and other such programs and decide to stay with those programs.  They would then not be eligible for the $12,000 grants.  And keep in mind that $12,000 per adult grants are not much, if you have nothing else.  One would still be below the federal poverty line if single (where the poverty line in 2019 is $12,490) or in a household with two adults and two or more children (where the poverty line, with two children, is $25,750).  On top of this, such households (like all households) will pay higher prices for at least some of what they purchase due to the new VAT.  So such households will clearly lose.

Furthermore, those poor or near-poor households who do decide to switch, thus giving up their eligibility for food stamps and other such programs, will see a net gain that is substantially less than $12,000 per adult.  The extent will depend on how much they receive now from those social programs.  Those who receive the most (up to $12,000 per adult), who are presumably also most likely to be the poorest among them, will lose the most.  This is not a structure that makes sense for a program that is purportedly designed to be of most benefit to the poorest.

For middle and higher-income households the net gain (or loss) from the program will depend on the full set of taxes that would be needed to fund the program.  One cannot say who will gain and who will lose until the structure of that full set of taxes is made clear.  This is of course not surprising, as one needs to keep in mind that this is a program of redistribution:  Funds will be raised (by taxes) that disproportionately affect certain groups, to be distributed then in the $12,000 grants.  Some will gain and some will lose, but overall the balance has to be zero.

One can also conclude that such a program, providing for a universal basic income with grants of $12,000 per adult, will necessarily be hugely expensive.  It would cost $3 trillion a year, which is 15% of GDP.  Funding it would require raising all federal tax and other revenue by 91% (excluding any offset by cuts in government social programs, which are however unlikely to amount to anything close to what Yang assumes).  Raising funds of such magnitude is completely unrealistic.  And yet despite such costs, the grants provided of $12,000 per adult would be poverty level incomes for those who do not have a job or other source of support.

One could address this by scaling back the grant, from $12,000 to something substantially less, but then it becomes less meaningful to an individual.  The fundamental problem is the design as a universal grant, to all adults.  While this might be thought to be politically attractive, any such program then ends up being hugely expensive.

The alternative is to design a program that is specifically targeted to those who need such support.  Rather than attempting to hide the distributional consequences in a program that claims to be universal (but where certain groups will gain and certain groups will lose, once one takes fully into account how it will be funded), make explicit the redistribution that is being sought.  With this clear, one can then design a focussed program that addresses that redistribution aim.

Finally, one should recognize that there are other policies as well that might achieve those aims without requiring explicit government-intermediated redistribution.  For example, Senator Cory Booker in the October 15 debate noted that a $15 per hour minimum wage would provide more to those now at the minimum wage than a $12,000 annual grant.  This remark was not much noted, but what Senator Booker said was true.  The federal minimum wage is currently $7.25 per hour.  This is low – indeed, it is less (in real terms) than what it was when Harry Truman was president.  If the minimum wage were raised to $15 per hour, a worker now at the $7.25 rate would see an increase in income of $15.00 – $7.25 = $7.75 per hour, and over a year of 40-hour weeks would see an increase in income of $7.75 x 40 x 52 = $16,120.00.  This is well more than a $12,000 annual grant would provide.
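Senator Booker's comparison is simple enough to verify directly (a Python sketch of the arithmetic in the text, assuming a full-time, full-year worker):

```python
current_wage = 7.25    # federal minimum wage, per hour
proposed_wage = 15.00  # proposed minimum wage, per hour
hours_per_week = 40
weeks_per_year = 52

annual_gain = (proposed_wage - current_wage) * hours_per_week * weeks_per_year
print(f"Annual gain: ${annual_gain:,.0f}")  # $16,120, versus a $12,000 grant
```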

Republican politicians have argued that raising the minimum wage by such a magnitude will lead to widespread unemployment.  But there is no evidence that changes in the minimum wage that we have periodically had in the past (whether federal or state level minimum wages) have had such an adverse effect.  There is of course certainly some limit to how much it can be raised, but one should recognize that the minimum wage would now be over $24 per hour if it had been allowed to grow at the same pace as labor productivity since the late 1960s.

Income inequality is a real problem in the US, and needs to be addressed.  But there are problems with Yang’s specific version of a universal basic income.  While one may be able to fix at least some of those problems and come up with something more reasonable, it would still be massively disruptive given the amounts to be raised.  And politically impossible.  A focus on more targeted programs, as well as on issues such as the minimum wage, are likely to prove far more productive.

The “Threat” of Job Losses is Nothing New and Not to be Feared: Issues Raised in the Democratic Debate

A.  Introduction

The televised debate held October 15 between twelve candidates for the Democratic presidential nomination covered a large number of issues.  Some were clear, but many were not.  The debate format does not allow for much explanation or nuance.  And while some of the positions taken reflected sound economics, others did not.

In a series of upcoming blog posts, starting with this one, I will review several of the issues raised, focussing on the economics and sometimes the simple arithmetic (which the candidates often got wrong).  And while the debate covered a broad range of issues, I will limit my attention here to the economic ones.

This post will look at the concern that was raised (initially in a question from one of the moderators) that the US will soon be facing a massive loss of jobs due to automation.  A figure of “a quarter of American jobs” was cited.  All the candidates basically agreed, and offered various solutions.  But there is a good deal of confusion over the issue, starting with the question of whether such job “losses” are unprecedented (they are not) and then in some of the solutions proposed.

A transcript of the debate can be found at the Washington Post website, which one can refer to for the precise wording of the questions and responses.  Unfortunately it does not provide page or line numbers to refer to, but most of the economic issues were discussed in the first hour of the three hour debate.  Alternatively, one can watch the debate at the CNN.com website.  The discussion on job losses starts at the 32:30 minute mark of the first of the four videos CNN posted at its site.

B.  Job Losses and Productivity Growth

A topic on which there was apparently broad agreement across the candidates was that an unprecedented number of jobs will be “lost” in the US in the coming years due to automation, and that this is a horrifying prospect that needs to be addressed with urgency.  Erin Burnett, one of the moderators, introduced it, citing a study that she said concluded that “about a quarter of American jobs could be lost to automation in just the next 10 years”.  While the name of the study was not explicitly cited, it appears to be one issued by the Brookings Institution in January 2019, with Mark Muro as the principal author.  It received a good deal of attention when it came out, with the focus on its purported conclusion that there would be a loss of a quarter of US jobs by 2030 (see here, here, here, here, and/or here, for examples).

[Actually, the Brookings study did not say that.  Nor was its focus on the overall impact on the number of jobs due to automation.  Rather, its purpose was to look at how automation may differentially affect different geographic zones across the US (states and metropolitan areas), as well as different occupations, as jobs vary in their degree of exposure to possible automation.  Some jobs can be highly automated with technologies that already exist today, while others cannot.  And as the Brookings authors explain, they are applying geographically a methodology that had in fact been developed earlier by the McKinsey Global Institute, presented in reports issued in January 2017 and in December 2017.  The December 2017 report is most directly relevant, and found that 23% of “jobs” in the US (measured in terms of hours of work) may be automated by 2030 using technologies that have already been demonstrated as technically possible (although not necessarily financially worthwhile as yet).  And this would have been the total over a 14 year period starting from their base year of 2016.  This was for their “midpoint scenario”, and McKinsey properly stresses that there is a very high degree of uncertainty surrounding it.]

The candidates offered various answers on how to address this perceived crisis (which I will address below), but it is worth looking first at whether this is indeed a pending crisis.

The answer is no.  While the study cited said that perhaps a quarter of jobs could be “lost to automation” by 2030 (starting from their base year of 2016), such a pace of job loss is in fact not out of line with the norm.  It is not that much different from what has been happening in the US economy for the last 150 years, or longer.

Job losses “due to automation” is just another way of saying productivity has grown.  Fewer workers are needed to produce some given level of output, or equivalently, more output can be produced for a given number of workers.  As a simple example, suppose some factory produces 100 units of some product, and to start has 100 employees.  Output per employee is then 100/100, or a ratio of 1.0.  Suppose then that over a 14 year period, automation of some of the tasks reduces the number of employees needed to produce that 100 units of output to just 75 (where that figure of 75 workers includes those who will now be maintaining and operating the new machines, as well as those workers in the economy as a whole who made the machines, scaled to account for the lifetime of the machines).  The productivity of the workers would then have grown to 100/75, or a ratio of 1.333.  Over a 14 year period, that implies growth in productivity of 2.1% a year.  More accurately, the McKinsey estimate was that 23% of jobs might be automated, and with this the increase in productivity would be to 100/77 = 1.30.  The growth rate over 14 years would then be 1.9% per annum.
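The annualized rates can be checked directly (a quick computation using the ratios from the example above):

```python
# Annualized productivity growth implied by automating jobs over a 14 year period.
# With 23% of jobs (measured in hours) automated, output per remaining worker
# rises from 100/100 = 1.0 to 100/77, or about 1.30.
years = 14
ratio = 100 / 77
annual_growth = ratio ** (1 / years) - 1
print(f"{annual_growth:.1%}")                       # 1.9% per year

# And with the round 25% figure (100/75), the rate is about 2.1% per year.
print(f"{(100 / 75) ** (1 / years) - 1:.1%}")       # 2.1% per year
```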

Such an increase in productivity is not outside the norm for the US.  Indeed, it matches what the US has experienced over at least the last century and a half.  The chart at the top of this post shows how GDP per capita has grown since 1870.  The chart is plotted in logarithms, and those of you who remember their high school math will recall that a straight line in such a graph depicts a constant rate of growth.  An earlier version of this chart was originally prepared for a prior post on this blog (where one can find further discussion of its implications), and it has been updated here to reflect GDP growth in recent years (using BEA data, with the earlier data taken from the Maddison Project).

What is remarkable is how steady that rate of growth in GDP per capita has been since 1870.  One straight line fits it extraordinarily well for the entire period, with a growth rate of 1.9% a year (or 1.86% to be more precise).  And while the US is now falling below that long-term trend (since around 2008, from the onset of the economic collapse in the last year of the Bush administration), the deviation of recent years is not that much different from an earlier such deviation between the late 1940s and the mid-1960s.  It remains to be seen whether there will be a similar catch-up to the long-term trend in the coming years.
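For those who wish to replicate the trend fit, the method is simply a straight line fit to the logarithm of the series.  The sketch below uses an illustrative series constructed to grow at exactly 1.86% a year (synthetic data, not the actual Maddison/BEA figures), just to demonstrate the mechanics:

```python
import numpy as np

# Illustrative GDP per capita series: a notional $3,000 base in 1870 growing
# at exactly 1.86% a year (synthetic, used only to demonstrate the method).
years = np.arange(1870, 2019)
gdp_per_capita = 3000 * 1.0186 ** (years - 1870)

# Fit a straight line to log(GDP per capita); the slope is log(1 + growth rate),
# so a straight line in a log chart depicts a constant rate of growth.
slope, intercept = np.polyfit(years, np.log(gdp_per_capita), 1)
implied_growth = np.exp(slope) - 1
print(f"{implied_growth:.2%}")   # recovers the 1.86% growth rate
```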

One might reasonably argue that GDP per capita is not quite productivity, which would be GDP per employee.  Over very long periods of time population and the number of workers in that population will tend to grow at a similar pace, but we could also look at GDP per employee:

This chart is based on BEA data, the agency that issues the official GDP accounts for the US, for both real GDP and the number of employees (in full time equivalent terms, so part-time workers are counted in proportion to the number of hours they work).  The figures unfortunately only go back to 1929, the earliest year for which the BEA has issued estimates.  Note also that the rise in GDP during World War II looks relatively modest here, but that is because measures of “real” GDP (when carefully estimated using standard procedures) can deviate more and more as one goes back in time from the base year for prices (2012 here), coupled with major changes in the structure of production (such as during a major war).  But the BEA figures are the best available.

Once again one finds that the pace of productivity growth was remarkably stable over the period, with a growth rate here of 1.74% a year.  It was lower during the Great Depression years, but then recovered during World War II, and was then above the 1929 to 2018 trend from the early 1950s to 1980.  And the same straight line (meaning a constant growth rate) then fit extremely well from 1980 to 2010.

Since 2010 the growth in labor productivity has been more modest, averaging just 0.5% a year from 2010 to 2018.  An important question going forward is whether the path will return to the previous trend.  If it does, the implication is that there will be more job turnover for at least a temporary period.  If it does not, and productivity growth does not return to the path it has been on since 1929, the US as a whole will not be able to enjoy the growth in overall living standards the economy had made possible before.

The McKinsey numbers for what productivity growth might be going forward, of possibly 1.9% a year, are therefore not out of line with what the economy has actually experienced over the years.  It matches the pace as measured by GDP per capita, and while the 1.74% a year found for the last almost 90 years for the measure based on GDP per employee is a bit less, they are close.  And keep in mind that the McKinsey estimate (of 1.9% growth in productivity over 14 years) is of what might be possible, with a broad range of uncertainty over what will actually happen.

The estimate that “about” a quarter of jobs may be displaced by 2030 is therefore not out of line with what the US has experienced for perhaps a century and a half.  Such disruption is certainly still significant, and should be met with measures to assist workers to transition from jobs that have been automated away to the jobs then in need of more workers.  We have not, as a country, managed this very well in the past.  But the challenge is not new.

What will those new jobs be?  While there are needs that are clear to anyone now (as Bernie Sanders noted, which I will discuss below), most of the new jobs will likely be in fields that do not even exist right now.  A careful study by Daron Acemoglu (of MIT) and Pascual Restrepo (of Boston University), published in the American Economic Review in 2018, found that about 60% of the growth in net new jobs in the US between 1980 and 2015 (an increase of 52 million, from 90 million in 1980 to 142 million in 2015) was in occupations where the specific title of the job (as defined in surveys carried out by the Census Bureau) did not even exist in 1980.  And there was a similar share of those with new job titles over the shorter periods of 1990 to 2015 or 2000 to 2015.  There is no reason not to expect this to continue going forward.  Most new jobs are likely to be in positions that are not even defined at this point.

C.  What Would the Candidates Do?

I will not comment on all the answers provided by the candidates (some of which were indecipherable), but just a few.

Bernie Sanders provided perhaps the best response by saying there is much that needs to be done, requiring millions of workers, and if government were to proceed with the programs needed, there would be plenty of jobs.  He cited specifically the need to rebuild our infrastructure (which he rightly noted is collapsing, and where I would add is an embarrassment to anyone who has seen the infrastructure in other developed economies).  He said 15 million workers would be required for that.  He also cited the Green New Deal (requiring 20 million workers), as well as needs for childcare, for education, for medicine, and in other areas.

There certainly are such needs.  Whether we can organize and pay for such programs is of course critical and would need to be addressed.  But if they can be, there will certainly be millions of workers required.

Sanders was also asked by the moderator specifically about his federal jobs guarantee proposal (and indeed the jobs topic was introduced this way).  But such a policy proposal is more problematic, and separate from the issue of whether the economy will need so many workers.  It is not clear how such a jobs guarantee, provided by the federal government, would work.  The Sanders campaign website provides almost no detail.  But a number of questions need to be addressed.  To start, would such a program be viewed as a temporary backstop for a worker, to be used when he or she cannot find another reasonable job at a wage they would accept, or something permanent?  If permanent, one is really talking more of an expanded public sector, and that does not seem to be the intention of a jobs guarantee program.  But if a backstop, how would the wage be set?  If too high, no workers would want to leave and take a different job, and the program would not be a backstop.  And would all workers in such a program be paid the same, or differently based on their skills?  Presumably one would pay an engineer working on the design of infrastructure projects more than someone with just a high school degree.  But how would these be determined?  Also, with a job guarantee, can someone be fired?  Suppose they often do not show up for work?

So there are a number of issues to address, and the answers are not clear.  But more fundamentally, if there is not a shortage of jobs but rather of workers (keep in mind that the unemployment rate is now at a 50 year low), why does one need such a guarantee?  It might be warranted (on a temporary basis) during an economic downturn, when unemployment is high, but why now, when unemployment is low?  [October 28 update:  The initial version of this post had an additional statement here saying that the federal government already had “something close to a job guarantee”, as you could always join the Army.  However, as a reader pointed out, while that once may have been true, it no longer is.  So that sentence has been deleted.]

Andrew Yang responded next, arguing for his proposal of a universal basic income that would provide every adult in the country with a grant of $1,000 per month, no questions asked.  There are many issues with such a proposal, which I will address in a subsequent blog post, but would note here that his basic argument for such a universal grant follows from his assertion that jobs will be scarce due to automation.  He repeatedly asserted in the debate that we have now entered into what has been referred to as the “Fourth Industrial Revolution”, where automation will take over most jobs and millions will be forced out of work.

But as noted above, what we have seen in the US over the last 150 years (at least) is not that much different from what is now forecast for the next few decades.  Automation will reduce the number of workers needed to produce some given amount, and productivity per worker will rise.  And while this will be disruptive and lead to a good deal of job displacement (important issues that certainly need to be addressed), the pace of this in the coming decades is not anticipated to be much different from what the country has seen over the last 150 years.

A universal basic income is fundamentally a program of redistribution, and given the high and growing degree of inequality in the US, a program of redistribution might well be warranted.  I will discuss this in a separate blog post.  But such a program is not needed to provide income to workers who will be losing jobs to automation, as there will be jobs if we follow the right macro policies.  And $12,000 a year would not nearly compensate for a lost job anyway.

Elizabeth Warren’s response to the jobs question was different.  She argued that jobs have been lost not due to automation, but due to poor international trade policies.  She said:  “the data show that we have had a lot of problems with losing jobs, but the principal reason has been bad trade policy.”

Actually, this is simply not true, and the data do not support it.  There have been careful studies of the issue, but it is easy enough to see in the numbers.  For example, in an earlier post on this blog from 2016, I examined what the impact would have been on the motor vehicle sector if the US had moved to zero net imports in the sector (i.e. limiting car imports to what the US exports, which is not very much).  Employment in the sector would then have been flat, rather than declining by 17%, between the years 1967 and 2014.  But this impact would have been dwarfed by the impact of productivity gains.  The output of the motor vehicle sector (in real terms) was 4.5 times higher in 2014 than it was in 1967.  If productivity had not grown, the sector would then have required 4.5 times as many workers.  But productivity did grow – by 5.4 times.  Hence the number of workers needed to produce the higher output actually went down by the 17% observed.  Banning imports would have had almost no effect relative to this.
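The decomposition is simple enough to check (using the ratios cited above):

```python
# Employment change in the motor vehicle sector, 1967 to 2014, decomposed as
# employment ratio = output ratio / productivity ratio.
output_ratio = 4.5         # real output in 2014 relative to 1967
productivity_ratio = 5.4   # output per worker in 2014 relative to 1967
employment_ratio = output_ratio / productivity_ratio
print(f"{employment_ratio - 1:.0%}")   # -17%, the observed employment decline
```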

D.  Summary and Conclusion

Automation is important, but is nothing new.  The Luddites destroyed factory machinery in the early 1800s in England due to a belief that the machines were taking away their jobs and that they would then be left with no prospects.  And data for the US going back to at least 1870 show that such job “destroying” processes have long been underway.  They have not accelerated now.  Indeed, over the past decade the pace has slowed (i.e. less job “destruction”).  But it is too soon to tell whether this deceleration is similar to fluctuations seen in the past, where there were occasional deviations but then always a return to the long-term path.

Looking forward, careful studies such as those carried out by McKinsey have estimated how many jobs may be exposed to automation (using technologies that we know already to be technically feasible).  While they emphasize that any such forecasts are subject to a great deal of uncertainty, McKinsey’s midpoint scenario estimates that perhaps 23% of jobs may be substituted away by automation between 2016 and 2030.  If so, such a pace (of 1.9% a year) would be similar to what productivity growth has been historically in the US.  There is nothing new here.

But while nothing new, that does not mean it should be ignored.  It will lead, just as it has in the past, to job displacement and disruption.  There is plenty of scope for government to assist workers in finding appropriate new jobs, and in obtaining training for them, but the US has historically never done this all that well.  Countries such as Germany have been far better at addressing such needs.

The candidate responses did not, however, address this (other than Andrew Yang saying government supported training programs in the US have not been effective).  While Bernie Sanders correctly noted there is no shortage of needs for which workers will be required, he has also proposed a jobs guarantee to be provided by the federal government.  Such a guarantee would be more problematic, with many questions not yet answered.  But it is also not clear why it would be needed in current circumstances anyway (with an economy at full employment).

Andrew Yang argued the opposite:  That the economy is facing a structural problem that will lead to mass unemployment due to automation, with a Fourth Industrial Revolution now underway that is unprecedented in US history.  But the figures show this not to be the case, with forecast prospects similar to what the US has faced in the past.  Thus the basis for his argument that we now need to do something fundamentally different (a universal basic income of $1,000 a month for every adult) falls away.  And I will address the $1,000 a month itself in a separate blog post.

Finally, Elizabeth Warren asserted that the problem stems primarily from poor international trade policy.  If we just had better trade policy, she said, there would be no jobs problem.  But this is also not borne out by the data.  Increased imports, even in the motor vehicle sector (which has long been viewed as one of the most exposed sectors to international trade), explains only a small fraction of why there are fewer workers needed in that sector now than was the case 50 years ago.  By far the more important reason is that workers in the sector are now far more productive.

The Growing Fiscal Deficit, the Keynesian Stimulus Policies of Trump, and the FY20/21 Budget Agreement

A.  The Growing Fiscal Deficit Under Trump

Donald Trump, when campaigning for office, promised that he would “quickly” drive down the fiscal deficit to zero.  Few serious analysts believed that he would get it all the way to zero during his term in office, but many assumed that he would at least try to reduce the deficit by some amount.  And this clearly should have been possible, had he sought to do so, when Republicans were in full control of both the House and the Senate, as well as the presidency.

That has not happened.  The deficit has grown markedly, despite the economy being at full employment, and is expected to top $1 trillion this year, reaching over 5% of GDP.  This is unprecedented in peacetime.  Never before in US history, other than during World War II, has the federal deficit hit 5% of GDP with the economy at full employment.  Indeed, the fiscal deficit has never even reached 4% of GDP at a time of full employment (other than, again, World War II).

The chart at the top of this post shows what has happened.  The deficit is the difference between what the government spends (shown as the line in blue) and the revenues it receives (the line in green).  The deficit grew markedly following the financial and economic collapse in the last year of the Bush administration.  A combination of higher government spending and lower taxes (lower both because the economy was depressed but also from legislated tax cuts) was then necessary to stabilize the economy.  As the economy recovered, the fiscal deficit then narrowed.  But it is now widening again, and as noted above, is expected to top $1 trillion in FY2019 (which ends on September 30).

More precisely, the US Treasury publishes monthly a detailed report on what the federal government received in revenues and what was spent in outlays for that month and for the fiscal year up to that point.  See here for the June report, and here for previous monthly reports.  It includes a forecast of what will be received and spent for the fiscal year as a whole, and hence what the deficit will be, based on the budget report released each spring, usually in March.  For FY2019, the forecast was of a deficit of $1.092 trillion.  But these are forecasts, and comparing the forecasts made to the actuals realized over the last three fiscal years (FY2016-18), government outlays were on average overestimated by 2.0% and government revenues by 2.2%.  These are similar, and scaling the forecasts of government outlays and government revenues down by these ratios, the deficit would end up at $1.075 trillion.  I used these scaled figures in the chart above.
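The scaling described above can be sketched as follows.  The outlay and revenue forecast levels shown are illustrative round numbers chosen to imply a deficit forecast of roughly $1.09 trillion, not the actual Treasury line items:

```python
# Adjusting the Treasury's FY2019 budget forecast for its recent average
# forecast errors.  The two levels below are illustrative placeholders,
# not actual Treasury figures.
forecast_outlays = 4.53    # $ trillions (illustrative)
forecast_revenues = 3.44   # $ trillions (illustrative)

# Over FY2016-18, outlay forecasts exceeded actuals by 2.0% on average,
# and revenue forecasts by 2.2%; scale both down accordingly.
adjusted_deficit = forecast_outlays / 1.020 - forecast_revenues / 1.022
print(f"${adjusted_deficit:.3f} trillion")
```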

The widening in the deficit in recent years is evident.  The interesting question is why.  For this one needs counterfactuals, of what the figures would have been if some alternative decisions had been made.

For government revenues (taxes of various kinds), the curve in orange shows what they would have been had taxes remained at the same shares of the relevant income (depending on the tax) as they were in FY2016.  Specifically, individual income taxes were kept at a constant share of personal income (as defined and estimated in the National Income and Product Accounts, or NIPA accounts, assembled by the Bureau of Economic Analysis, or BEA, of the US Department of Commerce); corporate profit taxes were kept at a constant share of corporate profits (as estimated in the NIPA accounts); payroll taxes (primarily Social Security taxes) were kept at a constant share of compensation of employees (again from the NIPA accounts); and all other taxes were kept at a constant share of GDP.  The NIPA accounts (often referred to as the GDP accounts) are available through the second quarter of CY2019, and hence are not yet available for the final quarter of FY2019 (which ends September 30, and hence includes the third quarter of CY2019).  For this, I extrapolated the final quarter’s figures based on what growth had been over the preceding four quarters.

Note also that the base year here (FY2016) already shows a flattening in tax revenues.  If I had used the tax shares of FY2015 as the base for the comparison, the tax losses in the years since then would have been even greater.  Various factors account for the flattening of tax revenues in FY2016, including (according to an analysis by the Congressional Budget Office) passage by Congress of Public Law 114-113 in December 2015, which allowed for more rapid depreciation allowances on business investment.  This had the effect of reducing corporate profit taxes substantially in FY2016.

Had taxes remained at the shares of the relevant income as they were in FY2016, tax revenues would have grown, following the path of the orange curve.  Instead, they were flat in nominal dollar amount (the green curve), indicating they were falling in real terms as well as a share of income.  The largest loss in revenues stemmed from the major tax cut pushed through Congress in December 2017, which took effect on January 1, 2018.  Hence it applied over three of the four quarters in FY2018, and for all of FY2019.

An increase in government spending is also now leading, in FY2019, to a widening of the deficit.  Again, one needs to define a counterfactual for the comparison.  For this I assumed that government spending during Trump’s term in office so far would have grown at the same rate as it had during Obama’s eight years in office (the rate of increase from FY2008 to 16).  That rate of increase during Obama’s two terms was 3.2% a year (in nominal terms), and was substantially less than during Bush’s two terms (which was a 6.6% rate of growth per year).

The rate of growth in government spending in the first two years of Trump’s term (FY2017 and 2018) then almost exactly matched the rate of growth under Obama.  But this has now changed sharply in FY2019, with government spending expected to jump by 8.0% in just one year.

The fiscal deficit is then the difference, as noted above, between the two curves for spending and revenues.  Its change over time may be clearer in a chart of just the deficit itself:

The curve in black shows what the deficit has been, and what is expected for FY2019.  The deficit narrowed to $442 billion in FY2015, and then started to widen.  The deficit grew in FY2016 primarily due to flat tax revenues (spending simply resumed the path it had followed before, after several years of suppression).  And it has continued to grow through FY2019.  The curve in red shows what the deficit would have been had government spending continued to grow under Trump at the pace it had under Obama.  This would have made essentially no difference in FY2017 and FY2018, but would have reduced the deficit in FY2019 from the expected $1,075 billion to $877 billion instead.  Not a small deficit by any means, but not as high.

But more important has been the contribution to the higher deficit from the tax cuts.  The combined effect of the spending and tax counterfactuals is shown in the curve in blue in the chart.  The deficit would have stabilized, and in fact declined a bit.  For FY2019, the deficit would have been $528 billion, or a reasonable 2.5% of GDP.  Instead, at an expected $1,075 billion, it will be over twice as high.  And it is a consequence of Trump’s policies.

B.  Have the Tax Cuts Led to Higher Growth?

The Trump administration claimed that the tax cuts (and specifically the major cuts passed in December 2017) would lead to so much more rapid a pace of GDP growth that they would “pay for themselves”.  This clearly has not happened – tax revenues have fallen in real terms (they were flat in nominal terms).  But a less extreme argument was that the tax cuts, and in particular the extremely sharp cut in corporate profit taxes, would lead to a spurt of new corporate investment in equipment, which would raise productivity and hence GDP.  See, for example, the analysis issued by the White House Council of Economic Advisors in October 2017.

But this has not happened either.  Growth in private investment in equipment has in fact declined since the first quarter of 2018 (when the law went into effect):

The curve in blue shows the quarter to quarter changes (at an annual rate), while the curve in red smooths this out by showing the change over the same quarter of a year earlier.  There is a good deal of volatility in the quarter to quarter figures, while the year on year changes show perhaps some trends that last two years or so, but with no evidence that the tax cut led to a spurt in such investment.  The growth has in fact slowed.

Such investment is in fact driven largely by more fundamental factors, not by taxes.  There was a sharp fall in 2008 as a result of the broad economic and financial collapse at the end of the Bush administration; it then bounced back in 2009/10, and has fluctuated since, driven by various industry factors.  For example, oil prices as well as agricultural prices both fell sharply in 2015, and the NIPA accounts indicate that equipment investment in just these two sectors reduced private investment in equipment by more than 2% points from what the total would have been in 2015.  This continued into 2016, with a reduction of a further 1.3% points.  What matters are the fundamentals.  Taxes are secondary, at best.

What about GDP itself?:

Here again there is quarter to quarter volatility, but no evidence that the tax cuts have spurred GDP growth.  Over the past three years, real GDP growth on a quarter to quarter basis peaked in the fourth quarter of 2017, before the tax cuts went into effect, and has declined modestly since then.  And that peak in the fourth quarter of 2017 was not anything special:  GDP grew at a substantially faster pace in the second and third quarters of 2014, and the year on year rate in early 2015 was higher than anything reached in 2017-19.  Rather, what we see in real GDP growth since late 2009 is significant quarter to quarter volatility, but around an average pace of about 2.3% a year.  There is no evidence that the late 2017 tax cut has raised this.

The argument that tax cuts will spur private investment, and hence productivity and hence GDP, is a supply-side argument.  There is no evidence in the numbers to support this.  But there may also be a demand-side argument, which is basically Keynesian.  The argument would be that tax cuts lead to higher (after-tax) incomes, and that these higher incomes led to higher consumption expenditures by households.  There might be some basis to this, to the extent that a portion of the tax cuts went to low and middle-income households who will spend more upon receiving it.  But since the tax cut law passed in December 2017 went primarily to the rich, whose consumption is not constrained by their current income flows (they save the excess), the impact of the tax cuts on household consumption would be weak.  It still, however, might be something.

But this still did not lead to a more rapid pace of GDP growth, as we saw above.  Why?  One needs to recognize that GDP is a measure of production in the domestic economy (GDP is Gross Domestic Product), and not of demand.  GDP is commonly measured by adding up the components of demand, with any increase in the stock of inventories then added (or any decrease subtracted) to tell us what production must have been.  But this is done because the data are better (and more quickly available) for the components of GDP demand.  One must not forget that GDP is still an estimate of production, and not of total domestic demand.

And what the economy can produce when at full employment is constrained by its capacity at that point in time.  The rate of unemployment has fallen steadily since hitting its peak in 2009 during the downturn:

Aside from the “squiggles” in these monthly figures (the data are obtained from household surveys, and will be noisy), unemployment has fallen at a remarkably steady pace since 2009.  Nor can one discern any sharp change in that pace before and after January 2017, when Trump took office.  But the rate of unemployment is now leveling off, as it must, since there will always be some degree of frictional unemployment when an economy is at “full employment”.

With the economy at full employment, growth will now be constrained by the pace of growth of the labor force (about 0.5% a year) plus the growth in productivity of the average labor force member (which analysts, such as at the Congressional Budget Office, put at about 1.5% a year in the long term, and a bit less over the next decade).  That is, growth in GDP capacity will be 2% a year, or less, on average.

In such situations, Keynesian demand expansion will not raise the growth in GDP beyond that 2% rate.  There will of course be quarter to quarter fluctuations (GDP growth estimates are volatile), but on average over time, one should not expect growth in excess of this.

But growth can be less.  In a downturn, such as that suffered in 2008/09, GDP growth can drop well below capacity.  Unemployment soars, and Keynesian demand stimulus is needed to stabilize the economy and return it to a growth path.  Tax cuts (when focused on low and middle income households) can be stimulative.  But especially stimulative in such circumstances is direct government spending, as such spending leads directly to people being hired and put to work.

Thus the expansion in government spending in 2008/09 (see the chart at the top of this post) was exactly what was needed in those circumstances.  The mistake then was to hold government spending flat in nominal terms (and hence falling in real terms) between 2009 and 2014, even though unemployment, while falling, was still relatively high.  That cut-back in government spending was unprecedented in a period of recovery from a downturn (over at least the past half-century in the US).  And an earlier post on this blog estimated that had government spending been allowed to increase at the same pace as it had under Reagan following the 1982 downturn, the US economy would have fully recovered by 2012.

But the economy is now at full employment.  In these circumstances, extra demand stimulus will not increase production (as production is limited by capacity), but will rather spill over into a drawdown in inventories (in the short term, but there is only so much in inventories that one can draw down) or an increase in the trade deficit (more imports to satisfy the domestic demand, or exports diverted to meet the domestic demand).  One saw this in the initial estimates for the GDP figures for the second quarter of 2019.  GDP is estimated to have grown at a 2.1% rate.  But the domestic final demand components grew at a pace that, by themselves, would have accounted for a 3.6 percentage point increase in GDP.  The difference was accounted for by a drawdown in inventories (accounting for 0.7 percentage points of GDP) and an increase in the trade deficit (accounting for a further reduction of 0.8 percentage points of GDP).  But these are just one quarter's figures, they are volatile, and it remains to be seen whether this will continue.
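The accounting here can be checked with simple arithmetic.  A minimal sketch in Python, using the contribution figures just cited:

```python
# Contributions to Q2 2019 GDP growth (percentage points, annualized),
# per the initial estimates cited above (approximate, rounded figures).
final_domestic_demand = 3.6    # growth contribution of domestic final demand
inventory_drawdown = -0.7      # drawdown in inventories subtracts from GDP
trade_deficit_widening = -0.8  # a wider trade deficit subtracts from GDP

gdp_growth = final_domestic_demand + inventory_drawdown + trade_deficit_widening
print(f"GDP growth: {gdp_growth:.1f}%")  # prints "GDP growth: 2.1%"
```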

It is conceivable that domestic demand might fall back to grow in line with capacity.  But this then brings up what should be considered the second arm of Trump’s Keynesian stimulus program.  While tax cuts led to growing deficits in FY2017 and FY2018, we are now seeing in FY2019, in addition to the tax cuts, an extraordinary growth in government spending.  Based on US Treasury forecasts for FY2019 (as adjusted above), federal government spending this fiscal year is expected to grow by 8.0%.  This will add to domestic demand growth.  And there has not been such growth in government spending during a time of full employment since George H. W. Bush was president.

C.  The Impact of the Bipartisan Budget Act of 2019

Just before leaving for its summer recess, the House and the Senate in late July both passed an important bill setting the budget parameters for fiscal years 2020 and 2021.  Trump signed it into law on August 2.  It was needed because, under the budget sequester process forced on Obama in 2011, there would otherwise have been sharp cutbacks in the discretionary budgets for what government is allowed to spend (other than for programs such as Social Security or Medicare, where spending follows the terms of the programs as established, or for what is spent on interest on the public debt).  The sequesters would have forced sharp cuts in government spending in fiscal years 2020 and 2021, and had they been allowed to take effect, such sudden cuts could have pushed the US economy into a recession.

The impact is clear on a chart:

The figures are derived from the Congressional Budget Office analysis of the impact on government spending from the lifting of the caps.  Without the change in the spending caps, discretionary spending would have been sharply reduced.  At the new caps, spending will increase at a similar pace as it had before.

Note the sharp contrast with the cut-backs in discretionary budget outlays from FY2011 to FY2015.  Unemployment was high then, and the economy struggled to recover from the 2008/09 downturn while confronting these contractionary headwinds.  But the economy is now at full employment, and the extra stimulus on demand from such spending will not, in itself and in the near term, lead to an increase in capacity, and hence not lead to a faster rate of growth than what we have seen in recent years.

But I should hasten to add that lifting the spending caps was not a mistake.  Government spending has been kept too limited for too long – there are urgent public needs (just look at the condition of our roads).  And a sharp and sudden cut in spending could have pushed the economy into a recession, as noted above.

More fundamentally, keeping up a “high pressure” economy is not necessarily a mistake.  One will of course need to monitor what is happening to inventories and the trade deficit, but the pressure on the labor market from a low unemployment rate has been bringing into the labor force workers who had previously been marginalized out of it.  And while there is little evidence as yet that it has spurred higher wages, continued pressure to secure workers should at some point lead to this.  What one does not want would be to reach the point where this leads to higher inflation.  But there is no evidence that we are near that now.  Indeed, the Fed decided on July 31 to reduce interest rates (for the first time since 2008), in part out of concern that inflation has been too low.

D.  Summary, Implications, and Conclusion

Trump campaigned on the promise that he would bring down the government deficit – indeed bring it down to zero.  The opposite has happened.  The deficit has grown sharply, and is expected to reach over $1 trillion this fiscal year, or over 5% of GDP.  This is unprecedented in the US in a time of full employment, other than during World War II.

The increase in the deficit is primarily due to the tax cuts he championed, supplemented (in FY2019) by a sharp rise in government spending.  Without such tax cuts, and with government spending growth the same as it had been under Obama, the deficit in FY2019 would have been $530 billion.  It is instead forecast to be double that (a forecast $1.075 trillion).

The tax cuts were justified by the administration by arguing that they would spur investment and hence growth.  That has not happened.  Growth in private investment in equipment has slowed since the major tax cuts of December 2017 were passed.  So has the pace of GDP growth.

This should not be surprising.  Taxes have at best a marginal effect on investment decisions.  The decision to invest is driven primarily by more fundamental considerations, including whether the extra capacity is needed given demand for the products, by the technologies available, and so on.

But tax cuts (to the extent they go to low and middle income households), and even more so direct government spending, can spur demand in the economy.  At times of less than full employment, this can lead to a higher GDP in standard Keynesian fashion.  But when the economy is at full employment, the constraint is not aggregate demand but rather production capacity.  And that is set by the available labor force and how much each worker can produce (their productivity).  The economy can then grow only as fast as the labor force and productivity grow, and most estimates put that at about 2% or less per year in the US right now.

The spur to demand can, however, act to keep the economy from falling back into a recession.  With the chaos being created in the markets by the trade wars Trump has launched, this is not a small consideration.  Indeed, the Fed, in announcing its July 31 cut in interest rates, indicated that in addition to inflation tracking below its target rate of 2%, concerns regarding “global developments” (interpreted as especially trade issues) were a factor in making the cut.

There are also advantages to keeping high pressure on the labor markets, as it draws in labor that was previously marginalized, and should at some point lead to higher wages.  As long as inflation remains modest (and as noted, it is currently below what the Fed considers desirable), all this sounds like a good situation.  The fiscal policies are therefore providing support to help ensure the economy does not fall back into recession despite the chaos of the trade wars and other concerns, while keeping positive pressure in the labor markets.  Trump should certainly thank Nancy Pelosi for the increases in the government spending caps under the recently approved budget agreement, as this will provide significant, and possibly critical, support to the economy in the period leading up to the 2020 election.

So what is there not to like?

The high fiscal deficit at a time of full employment is not to like.  As noted above, a fiscal deficit of more than 5% of GDP during a time of full employment is unprecedented (other than during World War II).  Unemployment was similarly low in the final few years of the Clinton presidency, but the economy then had fiscal surpluses (reaching 2.3% of GDP in FY2000) as well as a public debt that was falling in dollar amount (and even more so as a share of GDP).

The problem with a fiscal deficit of 5% of GDP with the economy at full employment is that when the economy next goes into a recession (and eventually there always is one), the fiscal deficit will rise (and will need to rise) from this already high base.  The fiscal deficit rose by close to 9 percentage points of GDP between FY2007 and FY2009.  A similar economic downturn starting from a base where the deficit is already 5% of GDP would thus raise the fiscal deficit to 14% of GDP.  And that would certainly lead conservatives to argue, as they did in 2009, that the nation cannot respond to the economic downturn with the increase in government spending that would be required to stabilize and then bring down unemployment.

Is a recession imminent?  No one really knows, but the current economic expansion, which began five months after Obama took office, is now the longest on record in the US – 121 months as of July.  It has just beaten the 120 month expansion during the 1990s, mostly when Clinton was in office.  Of more concern to many analysts is that long-term interest rates (such as on 10-year US Treasury bonds) are now lower than short-term interest rates on otherwise similar US Treasury obligations.  This is termed an “inverted yield curve”, as the yield curve (a plot of interest rates against the term of the bond) will normally be upward sloping.  Longer-term loans normally have to pay a higher interest rate than shorter ones.  But right now, 10-year US Treasury bonds are being sold in the market at a lower interest rate than the interest rate demanded on short-term obligations.  This only makes sense if those in the market expect a downturn (forcing a reduction in interest rates) at some point in the next few years.

The concern is that in every single one of the seven economic recessions since the mid-1960s, the yield curve became inverted prior to that downturn.  While this was typically two or three years before the downturn (and in the case leading up to the 1970 recession, about four years before), in no case was there an inverted yield curve without a subsequent downturn within that time frame.  Some argue that “this time is different”, and perhaps it will be.  But an inverted yield curve has been 100% accurate so far in predicting a coming recession.

The extremely high fiscal deficit under Trump at a time of full employment is therefore leaving the US economy vulnerable when the next recession occurs.  And a growing public debt (it will reach $16.8 trillion, or 79% of GDP, by September 30 of this year, in terms of debt held by the public) cannot keep growing forever.

What then to do?  A sharp cut in government spending might well bring on the downturn that we are seeking to avoid.  Plus government spending is critically needed in a range of areas.  But raising taxes, and specifically raising taxes on the well-off who benefited disproportionately in the series of tax cuts by Reagan, Bush II, and then Trump, would have the effect of raising revenue without causing a contractionary impulse.  The well-off are not constrained in what they spend on consumption by their incomes – they consume what they wish and save the residual.

The impact on the deficit and hence on the debt could also be significant.  While now a bit dated, an analysis on this blog from September 2013 (using Congressional Budget Office figures) found that simply reversing in full the Bush tax cuts of 2001 and 2003 would lead the public debt to GDP ratio to fall and fall sharply (by about half in 25 years).  The Trump tax cuts of December 2017 have now made things worse, but a good first step would be to reverse these.

It was the Bush and now Trump tax cuts that have put the fiscal accounts on an unsustainable trajectory.  As was noted above, the fiscal accounts were in surplus at the end of the Clinton administration.  But we now have a large and unprecedented deficit even when the economy is at full employment.  In a situation like this, one would think the right course is clear: acknowledge the mistake, and revert to what had worked well before.

Managing the fiscal accounts in a responsible way is certainly possible.  But they have been terribly mismanaged by this administration.

The Increasingly Attractive Economics of Solar Power: Solar Prices Have Plunged

A.  Introduction

The cost of solar photovoltaic power has fallen dramatically over the past decade, and it is now, together with wind, a lower cost source of new power generation than either fossil-fuel (coal or gas) or nuclear power plants.  The power generated by a new natural gas-fueled power plant in 2018 would have cost a third more than from a solar or wind plant (in terms of the price they would need to sell the power for in order to break even); coal would have cost 2.4 times as much as solar or wind; and a nuclear plant would have cost 3.5 times as much.

These estimates (shown in the chart above, and discussed in more detail below) were derived from figures estimated by Lazard, the investment bank, and are based on bottom-up estimates of what such facilities would have cost to build and operate, including the fuel costs.  But one also finds a similar sharp fall in solar energy prices in the actual market prices that have been charged for the sale of power from such plants under long-term “power purchase agreements” (PPAs).  These will also be discussed below.

With the costs where they are now, it would not make economic sense to build new coal or nuclear generation capacity, nor even gas in most cases.  In practice, however, the situation is more complex due to regulatory issues and conflicting taxes and subsidies, and also because of variation across regions.  Time of day issues may also enter, depending on when (day or night) the increment in new capacity might be needed.  The figures above are also averages, particular cases vary, and what is most economic in any specific locale will depend on local conditions.  Nevertheless, and as we will examine below, there has been a major shift in new generation capacity towards solar and wind, and away from coal (with old coal plants being retired) and from nuclear (with no new plants being built, but old ones largely remaining).

But natural gas generation remains large.  Indeed, while solar and wind generation have grown quickly (from a low base), and together account for the largest increment in new power capacity in recent years, gas accounts for the largest increment in power production (in megawatt-hours) measured from the beginning of this decade.  Why?  In part this is due to the inherent constraints of solar and wind technologies:  Solar panels can only generate power when the sun shines, and wind turbines when the wind is blowing.  But more interestingly, one also needs to look at the economics behind the choice as to whether or not to build new generation capacity to replace existing capacity, and then what sources of capacity to use.  Critical is what economists call the marginal cost of such production.  A power plant lasts for many years once it is built, and the decision on whether to keep an existing plant in operation for another year depends only on the cost of operating and maintaining the plant.  The capital cost has already been spent and is no longer relevant to that decision.

Details in the Lazard report can be used to derive such marginal cost estimates by power source, and we will examine these below.  While the Lazard figures apply to newly built plants (older plants will generally have higher operational and maintenance costs, both because they are getting old and because technology was less efficient when they were built), the estimates based on new plants can still give us a sense of these costs.  But one should recognize they will be biased towards indicating the costs of the older plants are lower than they in fact are.  However, even these numbers (biased in underestimating the costs of older plants) imply that it is now more economical to build new wind and possibly solar plants, in suitable locales, than it costs to continue to keep open and operate coal-burning power plants.  This will be especially true for the older, less-efficient, coal-burning plants.  Thus we should be seeing old coal-burning plants being shut down.  And indeed we do.  Moreover, while the costs of building new wind and solar plants are not yet below the marginal costs of keeping open existing gas-fueled and nuclear power plants, they are on the cusp of being so.

These costs also do not reflect any special subsidies that solar and wind plants might benefit from.  These vary by state.  Fossil-fueled and nuclear power plants also enjoy subsidies (often through special tax advantages), but these are long-standing and are implicitly being included in the Lazard estimates of the costs of such traditional plants.

But one special subsidy enjoyed by fossil fuel burning power plants, not reflected in the Lazard cost estimates, is the implicit subsidy granted to such plants from not having to cover the cost of the damage from the pollution they generate.  Those costs are instead borne by the general public.  And while such plants pollute in many different ways (especially the coal-burning ones), I will focus here on just one of those ways – their emissions of greenhouse gases that are leading to a warming planet and consequent more frequent and more damaging extreme weather events.  Solar and wind generation of power do not cause such pollution – the burning of coal and gas do.

To account for such costs and to ensure a level playing field between power sources, a fee would need to be charged to reflect the costs being imposed on the general population from this (and indeed other) such pollution.  The revenues generated could be distributed back to the public in equal per capita terms, as discussed in an earlier post on this blog.  We will see that a fee of even just $20 per ton of CO2 emitted would suffice to make it economic to build new solar and wind power plants to substitute not just for new gas and coal burning plants, but for existing ones as well.  Gas and especially coal burning plants would not be competitive with installing new solar or wind generation if they had to pay for the damage done as a result of their greenhouse gas pollution, even on just marginal operating costs.

Two notes before starting:  First, many will note that while solar might be fine for the daytime, it will not be available at night.  Similarly, wind generation will be fine when the wind blows, but it may not always blow even in the windiest locales.  This is of course true, and should solar and wind capacity grow to dominate power generation, there will have to be ways to store that power to bridge the times from when the generation occurs to when the power is used.

But while storage might one day be an issue, it is mostly not an issue now.  In 2018, utility-scale solar only accounted for 1.6% of power generation in the US (and 2.3% if one includes small scale roof-top systems), while wind only accounted for 6.6%.  At such low shares, solar and wind power can simply substitute for other, higher cost, sources of power (such as from coal) during the periods the clean sources are available.  Note also that the cost figures for solar and wind reflected in the chart at the top of this post (and discussed in detail below) take into account that solar and wind cannot be used 100% of the time.  Rather, utilization is assumed to be similar to what their recent actual utilization has been, not only for solar and wind but also for gas, coal and nuclear.  Solar and wind are cheaper than other sources of power (over the lifetime of these investments) despite their inherent constraints on possible utilization.

But where the storage question can enter is in cases where new generation capacity is required specifically to serve evening or night-time needs.  New gas burning plants might then be needed to serve such time-of-day needs if storage of day-time solar is not an economic option.  And once such gas-burning plants are built, the decision on whether they should be run also to serve day-time needs will depend on a comparison of the marginal cost of running these gas plants also during the day, to the full cost of building new solar generation capacity, as was discussed briefly above and will be considered in more detail below.

This may explain, in part, why we see new gas-burning plants still being built nationally.  While less than new solar and wind plants combined (in terms of generation capacity), such new gas-burning plants are still being built despite their higher cost.

More broadly, California and Hawaii (both with solar now accounting for over 12% of power used in those states) are two states (and the only two states) which may be approaching the natural limits of solar generation in the absence of major storage.  During some sunny days the cost of power is being driven down to close to zero (and indeed to negative levels on a few days).  Major storage will be needed in those states (and only those states) to make it possible to extend solar generation much further than where it is now.  But this should not be seen so much as a “problem” but rather as an opportunity:  What can we do to take advantage of cheap day-time power to make it available at all hours of the day?  I hope to address that issue in a future blog post.  But in this blog post I will focus on the economics of solar generation (and to a lesser extent from wind), in the absence of significant storage.

Second, on nomenclature:  A megawatt-hour is a million watts of electric power being produced or used for one hour.  One will see it abbreviated in many different ways, including MWHr, MWhr, MWHR, MWH, MWh, and probably more.  I will try to use MWHr consistently.  A kilowatt-hour (often kWh) is a thousand watts of power for one hour, and is the typical unit used for homes.  A megawatt-hour will thus be one thousand times a kilowatt-hour, so a price of, for example, $20 per MWHr for solar-generated power (which we will see below has in fact been offered in several recent PPA contracts) will be equivalent to 2.0 cents per kWh.  This will be the wholesale price of such power.  The retail price in the US for households is typically around 10 to 12 cents per kWh.
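The conversion itself is just a factor of ten (1 MWHr = 1,000 kWh, and $1 = 100 cents); a trivial sketch:

```python
def mwhr_price_to_cents_per_kwh(price_per_mwhr):
    """Convert a wholesale price in $/MWHr to cents per kWh.

    1 MWHr = 1,000 kWh and $1 = 100 cents, so the net effect
    is a division by 10.
    """
    return price_per_mwhr / 10.0

print(mwhr_price_to_cents_per_kwh(20.0))  # prints 2.0 (cents per kWh)
```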

B.  The Levelized Cost of Energy 

As seen in the chart at the top of this post, the cost of generating power by way of new utility-scale solar photovoltaic panels has fallen dramatically over the past decade, with a cost now similar to that from new on-shore wind turbines, and well below the cost from building new gas, coal, or nuclear power plants.  These costs can be compared in terms of the “levelized cost of energy” (LCOE), which is an estimate of the price that would need to be charged for power from such a plant over its lifetime, sufficient to cover the initial capital cost (at the anticipated utilization rate), plus the cost of operating and maintaining the plant.

Lazard, the investment bank, has published estimates of such LCOEs annually for some time now.  The most recent report, issued in November 2018, is version 12.0.  Lazard approaches the issue as an investment bank would, examining the cost of producing power by each of the alternative sources, with consistent assumptions on financing (with a debt/equity ratio of 60/40, an assumed cost of debt of 8%, and a cost of equity of 12%) and a time horizon of 20 years.  They also include the impact of taxes, and show separately the impact of special federal tax subsidies for clean energy sources.  But the figures I will refer to throughout this post (including in the chart above) are always the estimates excluding any impact from special subsidies for clean energy.  The aim is to see what the underlying actual costs are, and how they have changed over time.
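The break-even logic behind an LCOE can be illustrated with a simplified sketch.  The plant figures below are hypothetical, the 9.6% discount rate is my own rough 60/40 blend of Lazard's stated 8% debt and 12% equity costs, and taxes and other details of Lazard's actual model are omitted:

```python
def simple_lcoe(capital_cost, annual_om_cost, annual_mwhr,
                discount_rate, years):
    """Break-even price in $/MWHr: the constant price at which the
    present value of revenues equals the present value of the initial
    capital cost plus the stream of operating and maintenance costs."""
    pv_factor = sum(1 / (1 + discount_rate) ** t for t in range(1, years + 1))
    pv_costs = capital_cost + annual_om_cost * pv_factor
    pv_output = annual_mwhr * pv_factor
    return pv_costs / pv_output

# Hypothetical plant: $100 million capital cost, $2 million/year O&M,
# 500,000 MWHr/year of output, 20-year horizon.
price = simple_lcoe(100e6, 2e6, 500_000, 0.096, 20)
print(f"Break-even price: ${price:.0f}/MWHr")
```

Note how the capital cost is spread over the discounted lifetime output: this is why a plant that is cheap to build but costly to fuel (gas) and one that is costly to build but cheap to run (nuclear) can be compared on one scale.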

The Lazard LCOE estimates are calculated and presented in nominal terms.  They show the price, in $/MWHr, that would need to be charged over a 20-year time horizon for such a project to break even.  For comparability over time, as well as to produce estimates that can be compared directly to the PPA contract prices that I will discuss below, I have converted those prices from nominal to real terms in constant 2017 dollars.  Two steps are involved.  First, the fixed nominal LCOE prices over 20 years will be falling over time in real terms due to general inflation.  They were adjusted to the prices of their respective initial year (i.e. the relevant year from 2009 to 2018) using an inflation rate of 2.25% (which is the rate used for the PPA figures discussed below, the rate the EIA assumed in its 2018 Annual Energy Outlook report, and the rate which appears also to be what Lazard assumed for general cost escalation factors).  Second, those prices for the years between 2009 and 2018 were all then converted to constant 2017 prices based on actual inflation between those years and 2017.
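A sketch of this two-step conversion.  The present-value levelization method and the 9.6% discount rate (a rough 60/40 blend of the 8% debt and 12% equity costs noted earlier) are my own assumptions for illustration; the exact procedure is not spelled out above:

```python
ASSUMED_INFLATION = 0.0225  # the 2.25% inflation assumption cited above
HORIZON = 20                # years, per Lazard's time horizon

def levelized_real_price(nominal_price, discount_rate=0.096,
                         inflation=ASSUMED_INFLATION, years=HORIZON):
    """Step 1: find the constant real price (in initial-year dollars)
    whose inflating nominal stream has the same present value as a
    constant nominal price held fixed over the horizon."""
    pv_nominal = sum(nominal_price / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    pv_per_real_dollar = sum((1 + inflation) ** t / (1 + discount_rate) ** t
                             for t in range(1, years + 1))
    return pv_nominal / pv_per_real_dollar

def to_2017_dollars(initial_year_price, actual_inflation_to_2017):
    """Step 2: restate an initial-year price in 2017 dollars, using
    cumulative actual inflation between that year and 2017."""
    return initial_year_price * (1 + actual_inflation_to_2017)

# Example: a hypothetical $40/MWHr constant nominal contract price.
real_price = levelized_real_price(40.0)
print(round(real_price, 1))  # below 40, since inflation erodes real value
```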

The result is the chart shown at the top of this post.  The LCOEs in 2018 (in 2017$) were $33 per MWHr for a newly built utility-scale solar photovoltaic system and also for an on-shore wind installation, $44 per MWHr for a new natural gas combined cycle plant, $78 for a new coal-burning plant, and $115 for a new nuclear power plant.  The natural gas plant would cost one-third more than a solar or wind plant, coal would cost 2.4 times as much, and a nuclear plant 3.5 times as much.  Note also that since the adjustments for inflation are the same for each of the power generation methods, their costs relative to each other (in ratio terms) are the same for the LCOEs expressed in nominal cost terms.  And it is their costs relative to each other which most matters.
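The cost ratios cited follow directly from these 2018 LCOE figures:

```python
# 2018 LCOEs in 2017$ per MWHr, per the Lazard-based estimates above.
lcoe = {"solar": 33, "wind": 33, "gas": 44, "coal": 78, "nuclear": 115}

for source in ("gas", "coal", "nuclear"):
    ratio = lcoe[source] / lcoe["solar"]
    print(f"{source}: {ratio:.1f}x the cost of solar")
# prints: gas: 1.3x, coal: 2.4x, nuclear: 3.5x the cost of solar
```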

The solar prices have fallen especially dramatically.  The 2018 LCOE was only one-tenth of what it was in 2009.  The cost of wind generation has also fallen sharply over the period, to about one-quarter in 2018 of what it was in 2009.  The cost from gas combined cycle plants (the most efficient gas technology, now widely used) also fell, but only by about 40%, while the costs of coal and nuclear were roughly flat or rising, depending on precisely what time period is used.

There is good reason to believe the cost of solar technology will continue to decline.  It is still a relatively new technology, and labs around the world are developing solar technologies that are both more efficient and less costly to manufacture and install.

Current solar installations (based on crystalline silicon technology) will typically have conversion efficiencies of 15 to 17%.  And panels with efficiencies of up to 22% are now available in the market – a gain already on the order of 30 to 45% over the 15 to 17% efficiency of current systems.  But a chart of how solar efficiencies have improved over time (in laboratory settings) shows there is good reason to believe that the efficiencies of commercially available systems will continue to improve in the years to come.  While there are theoretical upper limits, labs have developed solar cell technologies with efficiencies as high as 46% (as of January 2019).

Particularly exciting in recent years has been the development of what are called “perovskite” solar technologies.  While their current efficiencies (of up to 28%, for a tandem cell) are just modestly better than purely crystalline silicon solar cells, they have achieved this in work spanning only half a decade.  Crystalline silicon cells only saw such an improvement in efficiencies in research that spanned more than four decades.  And perhaps more importantly, perovskite cells are much simpler to manufacture, and hence much cheaper.

Based on such technologies, one could see solar efficiencies doubling within a few years, from the current 15 to 17% to say 30 to 35%.  And with a doubling in efficiency, one will need only half as many solar panels to produce the same megawatts of power, and thus also only half as many frames to hold the panels, half as much wiring to link them together, and half as much land.  Coupled with simplified and hence cheaper manufacturing processes (such as is possible for perovskite cells), there is every reason to believe prices will continue to fall.

While there can be no certainty in precisely how this will develop, a simple extrapolation of recent cost trends can give an indication of what might come.  Assuming costs continue to change at the same annual rate that they had over the most recent five years (2013 to 2018), one would find for the years up to 2023:

If these trends hold, then the LCOE (in 2017$) of solar power will have fallen to $13 per MWHr by 2023, wind will have fallen to $18, and gas will be at $32 (or 2.5 times the LCOE of solar in that year, and 80% above the LCOE of wind).  And coal (at $70) and nuclear (at $153) will be totally uncompetitive.
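The extrapolation itself is just a compound-rate calculation.  As a sketch, one can work backward from the 2018 LCOEs and the projected 2023 values cited above to recover the implied constant annual rates of change:

```python
# 2018 LCOEs (2017$/MWHr) and the extrapolated 2023 values cited above.
lcoe_2018 = {"solar": 33, "wind": 33, "gas": 44, "coal": 78, "nuclear": 115}
lcoe_2023 = {"solar": 13, "wind": 18, "gas": 32, "coal": 70, "nuclear": 153}

implied_rates = {}
for source, start in lcoe_2018.items():
    end = lcoe_2023[source]
    # A constant annual rate r satisfies: end = start * (1 + r)**5
    implied_rates[source] = (end / start) ** (1 / 5) - 1
    print(f"{source}: {implied_rates[source]:+.1%} per year")
# e.g. solar comes out at about -17% per year, nuclear at about +6%
```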

This is an important transition.  With the dramatic declines in the past decade in the costs for solar power plants, and to a lesser extent wind, these clean sources of power are now more cost competitive than traditional, polluting, sources.  And this is all without any special subsidies for the clean energy.  But before looking at the implications of this for power generation, as a reality check it is good first to examine whether the declining costs of solar power have been reflected in actual market prices for such power.  We will see that they have.

C.  The Market Prices for Solar Generated Power

Power Purchase Agreements (PPAs) are long-term contracts where a power generator (typically an independent power producer) agrees to supply electric power at some contracted capacity and at some price to a purchaser (typically a power utility or electric grid operator).  These are competitively determined (different parties interested in building new power plants will bid for such contracts, with the lowest price winning) and are a direct market measure of the cost of energy from such a source.

The Lawrence Berkeley National Lab, under a contract with the US Department of Energy, produces an annual report that reviews and summarizes PPA contracts for recent utility-scale solar power projects, including the agreed prices for the power.  The most recent was published in September 2018, and covers 2018 (partially) and before.  While the report covers both solar photovoltaic and concentrating solar thermal projects, the figures of interest to us here (and comparable to the Lazard LCOEs discussed above) are the PPAs for the solar photovoltaic projects.

The PPA prices provided in the report were all calculated by the authors on a levelized basis and in terms of 2017 prices.  This was done to put them all on a comparable basis to each other, as the contractual terms of the specific contracts could differ (e.g. some had price escalation clauses and some did not).  Averages by year were worked out with the different projects weighted by generation capacity.

The PPA prices are presented by the year the contracts were signed.  If one then plots these PPA prices with a one-year lag and compares them to the Lazard estimated LCOE prices of that year, one finds a remarkable degree of overlap:

This high degree of overlap is extraordinary.  Only the average PPA price for 2010 (reflecting the 2009 average price lagged one year) is off, but would have been close with a one and a half year lag rather than a one year lag.  Note also that while the Lawrence Berkeley report has PPA prices going back to 2006, the figures for the first several years are based on extremely small samples (just one project in 2006, one in 2007, and three in 2008, before rising to 16 in 2009 and 30 in 2010).  For that reason I have not plotted the 2006 to 2008 PPA prices (which would have been 2007 to 2009 if lagged one year), but they also would have been below the Lazard LCOE curve.

What might be behind this extraordinary overlap when the PPA prices are lagged one year?  Two possible explanations present themselves.  One is that the power producers when making their PPA bids realize that there will be a lag from when the bids are prepared to when the winning bidder is announced and construction of the project begins.  With the costs of solar generation falling so quickly, it is possible that the PPA bids reflect what they know will be a lag between when the bid is prepared and when the project has to be built (with solar panels purchased and other costs incurred).  If that lag is one year, one will see overlap such as that found for the two curves.

Another possible explanation for the one-year shift observed between the PPA prices (by date of contract signing) and the Lazard LCOE figures is that the Lazard estimates labeled for some year (2018 for example) might in fact represent data on the cost of the technologies as of the prior year (2017 in this example).  One cannot be sure from what they report.  Or the remarkable degree of overlap might be a result of some combination of these two possible explanations, or something else.

But for whatever reason, the two estimates move almost exactly in parallel over time, and hence show an almost identical rate of decline for both the cost of generating power from solar photovoltaic sources and in the market PPA prices for such power.  And it is that rapid rate of decline which is important.

It is also worth noting that the “bump up” in the average PPA price curve in 2017 (shown in the chart as 2018 with the one year lag) reflects in part that a significant number of the projects in the 2017 sample of PPAs included, as part of the contract, a power storage component to store a portion of the solar-generated power for use in the evening or night.  But these additional costs for storage were remarkably modest, and were even less in several projects in the partial-year 2018 sample.  Specifically, Nevada Energy (as the offtaker) announced in June 2018 that it had contracted for three major solar projects that would include storage of power of up to one-quarter of generation capacity for four hours, with overall PPA prices (levelized, in 2017 prices) for both the generation and the storage of just $22.8, $23.5, and $26.4 per MWHr (i.e. 2.28 cents, 2.35 cents, and 2.64 cents per kWh, respectively).

The PPA prices reported can also be used to examine how the prices vary by region.  One should expect solar power to be cheaper in southern latitudes than in northern ones, and in dry, sunny, desert areas than in regions with more extensive cloud cover.  And this has led to the criticism by skeptics that solar power can only be competitive in places such as the US Southwest.

But this is less of an issue than one might assume.  Dividing up the PPA contracts by region (with no one-year lag in this chart), one finds:

Prices found in the PPAs are indeed lower in the Southwest, California, and Texas.  But the PPA prices for projects in the Southeast, the Midwest, and the Northwest fell at a pace similar to those in the more advantageous regions (and indeed, at a more rapid pace up to 2014).  And note that the prices in those less advantageous regions are similar to what they were in the more advantageous regions just a year or two before.  Finally, the absolute differences in prices have become relatively modest in the last few years.

The observed market prices for power generated by solar photovoltaic systems therefore appear to be consistent with the bottom-up LCOE estimates of Lazard – indeed remarkably so.  Both show a sharp fall in solar energy prices/costs over the last decade, and sharp falls both for the US as a whole and by region.  The next question is whether we see this reflected in investment in additions to new power generation capacity, and in the power generated by that capacity.

D.  Additions to Power Generation Capacity, and in Power Generation

The cost of power from a new solar or wind plant is now below the cost from gas (while the cost of new coal or nuclear generation capacity is totally uncompetitive).  But the LCOEs indicate that the cost advantage relative to gas is relatively recent in the case of solar (starting from 2016), and while a bit longer for wind, the significant gap in favor of wind only opened up in 2014.  One needs also to recognize that these are average or mid-point estimates of costs, and that in specific cases the relative costs will vary depending on local conditions.  Thus while solar or wind power is now cheaper on average across the US, in some particular locale a gas plant might be less expensive (especially if the costs resulting from its pollution are not charged).  Finally, and as discussed above, the new capacity may be needed to meet particular time-of-day demands, and this will affect the choices made.

Thus while one should expect a shift towards solar and wind over the last several years, and away from traditional fuels, the shift will not be absolute and immediate.  What do we see?

First, in terms of the gross additions to power sector generating capacity:

The chart shows the gross additions to power capacity, in megawatts, with both historical figures (up through 2018) and as reflected in plans filed with the US Department of Energy (for 2019 and 2020, with the plans as filed as of end-2018).  The data for this (and the other charts in this section) come from the most recent release of the Electric Power Annual of the Energy Information Administration (EIA) (which was for 2017, and was released on October 22, 2018), plus from the Electric Power Monthly of February, 2019, also from the EIA (where the February issue each year provides complete data for the prior calendar year, i.e. for 2018 in this case).

The planned additions to capacity (2019 and 2020 in the chart) provide an indication of what might happen over the next few years, but must be interpreted cautiously.  Power producers are required to file their plans for new capacity (as well as for retirements of existing capacity) with the Department of Energy, for transparency and to help ensure capacity (locally as well as nationally) remains adequate.  While these filings are probably a good guide for the next few years, biases enter as one goes further into the future:  projects that require a relatively long lead time (such as gas plants, as well as coal and especially nuclear) will be filed years ahead, while the more flexible, shorter construction periods of solar and wind plants mean that those plans will only be filed with the Department of Energy close to when the capacity will be built.  For the next few years, however, the plans should provide an indication of how the market is developing.

As seen in the chart, solar and wind taken together accounted for the largest single share of gross additions to capacity, at least through 2017.  While there was then a bump up in new gas generation capacity in 2018, this is expected to fall back to earlier levels in 2019 and 2020.  And these three sources (solar, wind, and gas) accounted for almost all (93%) of the gross additions to new capacity over 2012 to 2018, with this expected to continue.

New coal-burning plants, in contrast, were already low and falling in 2012 and 2013, and there have been no new ones since then.  Nor are any planned.  This is as one would expect based on the LCOE estimates discussed above – new coal plants are simply not cost competitive.  And the additions to nuclear and other capacity have also been low.  “Other” capacity is a miscellaneous category that includes hydro, petroleum-fueled plants such as diesel, as well as other renewables such as from the burning of waste or biomass. The one bump up, in 2016, is due to a nuclear power plant coming on-line that year.  It was unit #2 of the Watts Bar nuclear power plant built by the Tennessee Valley Authority (TVA), and had been under construction for decades.  Indeed the most recent nuclear plant completed in the US before this one was unit #1 at the same TVA plant, which came on-line 20 years before in 1996.  Even aside from any nuclear safety concerns, nuclear plants are simply not economically competitive with other sources of power.

The above are gross additions to power generating capacity, reflecting what new plants are being built.  But old, economically or technologically obsolete, plants are also being retired, so what matters to the overall shift in power generation capacity is what has happened to net generation capacity:

What stands out here is the retirement of coal-burning plants.  And while the retirements might appear to diminish in the plans going forward, this may largely be due to retirement plans only being announced shortly before they happen.  It is also possible that political pressure from the Trump administration to keep coal-burning plants open, despite their higher costs (and their much higher pollution), might be a factor.  We will see what happens.

The cumulative impact of these net additions to capacity (relative to 2010 as the base year) yields:

Solar plus wind accounts for the largest addition to capacity, followed by gas.  Indeed, each of these accounts for more than 100% of the growth in overall capacity, as there has been a net reduction in the nuclear plus other category, and especially in coal.

But what does this mean in terms of the change in the mix of electric power generation capacity in the US?  Actually, less than one might have thought, as one can see in a chart of the shares:

The share of coal has come down, but remains high, and similarly for nuclear (plus miscellaneous other) capacity.  Gas remains the highest and has risen as a share, while solar and wind, although rising rapidly relative to their starting point, remain the smallest share (of the categories used here).

The reason for these relatively modest changes in shares is that while solar and wind plus gas account for more than 100% of the net additions to capacity, that net addition has been pretty small.  Between 2010 and 2018, the net addition to US electric power generation capacity was just 58.8 thousand megawatts, or an increase over eight years of just 5.7% over what capacity was in 2010 (1,039.1 thousand megawatts).  A big share of something small will still be small.

So even though solar and wind are now the lowest cost sources of new power generation, the very modest increase in the total power capacity needed has meant that not that much has been built.  And much of what has been built has been in replacement of nuclear and especially coal capacity.  As we will discuss below, the economic issue then is not whether solar and wind are the cheapest source of new capacity (which they are), but whether new solar and wind are more economic than what it costs to continue to operate existing coal and nuclear plants.  That is a different question, and we will see that while new solar and wind are now starting to be a lower cost option than continuing to operate older coal (but not nuclear) plants, this development (a critically important development) has only been recent.

Why did the US require such a small increase in power generation capacity in recent years?  As seen in the chart below, it is not because GDP has not grown, but rather because energy efficiency (real GDP per MWHr of power) improved tremendously, at least until 2017:

From 2010 to 2017, real GDP rose by 15.7% (2.1% a year on average), but GDP per MWHr of power generated rose by 18.3%.  That meant that power generation (note that generation is the relevant issue here, not capacity) could fall by 2.2% despite the higher level of GDP.  Improving energy efficiency was a key priority during the Obama years, and it appears to have worked well.  It is better for efficiency to rise than to have to produce more power, even if that power comes from a clean source such as solar or wind.
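The arithmetic connecting these three growth figures can be checked directly.  Since power generated equals GDP divided by GDP per MWHr, the change in generation follows from the other two:

```python
# Checking the arithmetic in the text: GDP growth and efficiency
# growth together imply the change in power generation.
gdp_growth = 0.157          # real GDP, 2010 to 2017 (from the text)
efficiency_growth = 0.183   # real GDP per MWHr, 2010 to 2017 (from the text)

# Generation = GDP / (GDP per MWHr), so:
power_change = (1 + gdp_growth) / (1 + efficiency_growth) - 1
# roughly -0.022, i.e. generation could fall about 2.2%
```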

This reversed direction in 2018.  It is not clear why, but might be an early indication that the policies of the Trump administration are harming efficiency in our economy.  However, this is still just one year of data, and one will need to wait to see whether this was an aberration or a start of a new, and worrisome, trend.

Which brings us to generation.  While the investment decision is whether or not to add capacity, and if so then of what form (e.g. solar or gas or whatever), what is ultimately needed is the power generated.  This depends on the capacity available and then on the decision of how much of that capacity to use to generate the power needed at any given moment.  One needs to keep in mind that power in general is not stored (other than still very limited storage of solar and wind power), but rather has to be generated at the moment needed.  And since power demand goes up and down over the course of the day (higher during the daylight hours and lower at night), as well as over the course of the year (generally higher during the summer, due to air conditioning, and lower in other seasons), one needs total generation capacity sufficient to meet whatever the peak load might be.  This means that during all other times there will be excess, unutilized, capacity.  Indeed, since one will want to have a safety margin, one will want to have total power generation capacity of even more than whatever the anticipated peak load might be in any locale.
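The capacity-planning logic just described reduces to a simple rule: total capacity must cover the anticipated peak load plus a safety margin.  A sketch, with an illustrative peak load and a 15% reserve margin (both assumptions; actual reserve requirements vary by grid operator):

```python
# Sketch of the peak-load capacity rule described above.  The 15%
# reserve margin and 100,000 MW peak are illustrative assumptions.
def required_capacity(peak_load_mw, reserve_margin=0.15):
    """Total generation capacity needed: peak load plus a safety margin."""
    return peak_load_mw * (1 + reserve_margin)

capacity = required_capacity(100_000)   # 115,000 MW for a 100,000 MW peak
```

At any demand below the peak, the difference between this capacity and the current load is the excess, unutilized capacity discussed in the text.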

There will always, then, be excess capacity, just sometimes more and sometimes less.  And hence decisions will be necessary as to what of the available capacity to use at any given moment.  While complex, the ultimate driver of this will be (or at least should be, in a rational system) the short-run costs of producing power from the possible alternative sources available in the region where the power is needed.  These costs will be examined in the next section below.  But for here, we will look at how generation has changed over the last several years.

In terms of the change in power generation by source relative to the levels in 2010, one finds:

Gas now accounts for the largest increment in generation over this period, with solar and wind also growing (steadily) but by significantly less.  Coal powered generation, in contrast, fell substantially, while nuclear and other sources were basically flat.  And as noted above, due to increased efficiency in the use of power (until 2017), total power use was flat to falling a bit, even as GDP grew substantially.  This reversed in 2018, when efficiency fell, and gas-generated power rose to provide for the resulting increased power demand.  Solar and wind continued on the same path as before, and coal generation still fell at a similar pace as before.  But it remains to be seen whether 2018 marked a change in the previous trend in efficiency gains, or was an aberration.

Why did power generation from gas rise by more than from solar and wind over the period, despite the larger increase in solar plus wind capacity than in gas generation capacity?  In part this reflects the cost factors which we will discuss in the next section below.  But in part one needs also to recognize factors inherent in the technologies.  Solar generation can only happen during the day (and also when there is no cloud cover), while wind generation depends on when the wind blows.  Without major power storage, this will limit how much solar and wind can be used.

The extent to which some source of power is in fact used over some period (say a year), as a share of what would be generated if the power plant operated at 100% of capacity for 24 hours a day, 365 days a year, is defined as the “capacity factor”.  In 2018, the capacity factor realized for solar photovoltaic systems was 26.1% while for wind it was 37.4%.  But for no power source is it 100%.  For natural gas combined cycle plants (the primary source of gas generation), the capacity factor was 57.6% in 2018 (up from 51.3% in 2017, due to the jump in power demand in 2018).  This is well below the theoretical maximum of 100% as in general one will be operating at less than peak capacity (plus plants need to be shut down periodically for maintenance and other servicing).
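The capacity factor definition in the text translates directly into code.  The generation figure in the example below is chosen to illustrate the 26.1% solar figure cited; it is a constructed number, not EIA data:

```python
# Capacity factor as defined in the text: actual generation over a
# year as a share of running flat-out for all 8,760 hours.
HOURS_PER_YEAR = 24 * 365   # 8760

def capacity_factor(generation_mwh, capacity_mw):
    """Share of the theoretical maximum output actually generated."""
    return generation_mwh / (capacity_mw * HOURS_PER_YEAR)

# e.g. a hypothetical 100 MW solar farm generating 228,636 MWHr in a year:
cf = capacity_factor(228_636, 100)   # about 0.261, matching the 26.1% cited
```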

Increments in “capacity”, as measured, will therefore not tell the whole story.  How much such capacity is used also matters.  And the capacity factors for solar and wind will in general be less than what they will be for the other primary sources of power generation, such as gas, coal, and nuclear (and excluding the special case of plants designed solely to operate for short periods of peak load times, or plants used as back-ups or for cases of emergencies).  But how much less depends only partly on the natural constraints on the clean technologies.  It also depends on marginal operating costs, as we will discuss below.

Finally, while gas plus solar and wind have grown in terms of power generation since 2010, and coal has declined (and nuclear and other sources largely unchanged), coal-fired generation remains important.  In terms of the percentage shares of overall power generation:

While coal has fallen as a share, from about 45% of US power generation in 2010 to 27% in 2018, it remains high.  Only gas is significantly higher (at 35% in 2018).  Nuclear and other sources (such as hydro) account for 29%, with nuclear alone accounting for two-thirds of this and other sources the remaining one-third.  Solar and wind have grown steadily, and at a rapid rate relative to where they were in 2010, but in 2018 still accounted only for about 8% of US power generation.

Thus while coal has come down, there is still very substantial room for further substitution out of coal, by either solar and wind or by natural gas.  The cost factors that will enter into this decision on substituting out of coal will be discussed next.

E.  The Cost Factors That Enter in the Decisions on What Plants to Build, What Plants to Keep in Operation, and What Plants to Use

The Lazard analysis of costs presents estimates not only for the LCOE of newly built power generation plants, but also figures that can be used to arrive at the costs of operating a plant to produce power on any given day, and of operating a plant plus keeping it maintained for a year.  One needs to know these different costs in order to address different questions.  The LCOE is used to decide whether to build a new plant and keep it in operation for a period (20 years is used); the operating cost is used to decide which particular power plant to run at any given time to generate the power then needed (from among all the plants up and available to run that day); while the operating cost plus the cost of regular annual maintenance is used in the decision of whether to keep a particular plant open for another year.

The Lazard figures are not ideal for this, as they give cost figures for a newly built plant, using the technology and efficiencies available today.  The cost to maintain and operate an older plant will be higher than this, both because older technologies were less efficient and because older plants are more liable to break down (and hence cost more to keep running) than a new one.  But the estimates for a new plant do give us a sense of what the floor for such costs might be – the true costs for currently existing plants of various ages will be somewhat higher.

Lazard also recognized that there will be a range of such costs for a particular type of plant, depending on the specifics of the particular location and other such factors.  Their report therefore provides both what it labels low end and high end estimates, with a mid-point estimate usually based on the average of the two.  The figures shown in the chart at the top of this post are the mid-point estimates, but in the tables below we will show the low and high end cost estimates as well.  These figures are helpful in providing a sense of the range in the costs one should expect, although how Lazard defined the range they used is not fully clear.  They are not of the absolutely lowest possible cost plant nor absolutely highest possible cost plant.  Rather, the low end figures appear to be averages of the costs of some share of the lowest cost plants (possibly the lowest one third), and similarly for the high end figures.

The cost figures below are from the 2018 Lazard cost estimates (the most recent year available).  The operating and maintenance costs are by their nature current expenditures, and hence their costs will be in current, i.e. 2018, prices.  The LCOE estimates of Lazard are different.  As was noted above, these are the levelized prices that would need to be charged for the power generated to cover the costs of building and then operating and maintaining the plant over its assumed (20 year) lifetime.  They therefore need to be adjusted to reflect current prices.  For the chart at the top of this post, they were put in terms of 2017 prices (to make them consistent with the PPA prices presented in the Berkeley report discussed above).  But for the purposes here, we will put them in 2018 prices to ensure consistency with the prices for the operating and maintenance costs.  The difference is small (just 2.2%).

The cost estimates derived from the Lazard figures are then:

(all costs in 2018 prices)

A.  Levelized Cost of Energy from a New Power Plant:  $/MWHr

                Solar      Wind       Gas      Coal   Nuclear
low end        $31.23    $22.65    $32.02    $46.85    $87.46
mid-point      $33.58    $33.19    $44.90    $79.26   $117.52
high end       $35.92    $43.73    $57.78   $111.66   $147.58

B.  Cost to Maintain and Operate a Plant Each Year, including for Fuel:  $/MWHr

                Solar      Wind       Gas      Coal   Nuclear
low end         $4.00     $9.24    $24.38    $23.19    $23.87
mid-point       $4.66    $10.64    $26.51    $31.30    $25.11
high end        $5.33    $12.04    $28.64    $39.41    $26.35

C.  Short-term Variable Cost to Operate a Plant, including for Fuel:  $/MWHr

                Solar      Wind       Gas      Coal   Nuclear
low end         $0.00     $0.00    $23.16    $14.69     $9.63
mid-point       $0.00     $0.00    $25.23    $18.54     $9.63
high end        $0.00     $0.00    $27.31    $22.40     $9.63

A number of points follow from these cost estimates:

a)  First, and as was discussed above, the LCOE estimates indicate that for the question of what new type of power plant to build, it will in general be cheapest to obtain new power from a solar or wind plant.  The mid-point LCOE estimates for solar and wind are well below the costs of power from gas plants, and especially below the costs from coal or nuclear plants.

But also as noted before, local conditions vary and there will in fact be a range of costs for different types of plants.  The Lazard estimates indicate that a gas plant with costs at the low end of a reasonable range (estimated to be about $32 per MWHr) would be competitive with solar or wind plants at the mid-point of their cost range (about $33 to $34 per MWHr), and below the costs of a solar plant at the high end of its cost range ($36) and especially a wind plant at its high end of its costs ($44).  However, there are not likely to be many such cases:  Gas plants with a cost at their mid-point estimate would not be competitive, and even less so for gas plants with a cost near their high end estimate.

Furthermore, even the lowest cost coal and nuclear plants would be far from competitive with solar or wind plants when considering the building of new generation capacity.  This is consistent with what we saw in Section D above, of no new coal or nuclear plants being built in recent years (with the exception of one nuclear plant whose construction started decades ago and was only finished in 2016).

b)  More interesting is the question of whether it is economic to build new solar or wind plants to substitute for existing gas, coal, or nuclear plants.  The figures in panel B of the table on the cost to operate and maintain a plant for another year (all in terms of $/MWHr) can give us a sense of whether this is worthwhile.  Keeping in mind that these are going to be low estimates (as they are the costs for newly built plants, using the technologies available today, not for existing ones which were built possibly many years ago), the figures suggest that it would make economic sense to build new solar and wind plants (at their LCOE costs) and decommission all but the most efficient coal burning plants.

However, the figures also suggest that this will not be the case for most of the existing gas or nuclear plants.  For such plants, with their capital costs already incurred, the cost to maintain and operate them for a further year is in the range of $24 to $29 (per MWHr) for gas plants and $24 to $26 for nuclear plants.  Even recognizing that these cost estimates will be low (as they are based on what the costs would be for a new plant, not existing ones), only the more efficient solar and wind plants would have an LCOE which is less.  But they are close, and are on the cusp of the point where it would be economic to build new solar and wind plants and decommission existing gas and nuclear plants, just as this is already the case for most coal plants.

c)  Panel C then provides figures to address the question of which power plants to operate, for those which are available for use on any given day.  With no short-term variable cost to generate power from solar or wind sources (they burn no fuel), it will always make sense to use those sources first when they are available.  The short-term cost to operate a nuclear power plant is also fairly low ($9.63 per MWHr in the Lazard estimates, with no significant variation in their estimates).  Unlike other plants, it is difficult to turn nuclear plants on and off, so such plants will generally be operated as baseload plants kept always on (other than for maintenance periods).

But it is interesting that, provided a coal burning plant was kept active and not decommissioned, the Lazard figures suggest that the next cheapest source of power (if one ignores the pollution costs) will be from burning coal.  The figures indicate coal plants are expensive to maintain (the difference between the figures in panel B and in panel C) but then cheap to run if they have been kept operational.  This would explain why we have seen many coal burning plants decommissioned in recent years (new solar and wind capacity is cheaper than the cost of keeping a coal burning plant maintained and operating), but that if a coal burning plant has been kept operational, it will then typically be cheaper to run than a gas plant.
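The dispatch logic just described (use the cheapest available source first) is often called a merit order.  A sketch using the mid-point short-run variable costs from panel C; the available capacities are illustrative assumptions, not data from the text:

```python
# Sketch of merit-order dispatch using the mid-point short-run
# variable costs from panel C ($/MWHr).  Capacities (MW) are
# illustrative assumptions.
variable_cost = {"solar": 0.00, "wind": 0.00, "nuclear": 9.63,
                 "coal": 18.54, "gas": 25.23}
available_mw = {"solar": 20, "wind": 30, "nuclear": 40, "coal": 50, "gas": 60}

def dispatch(demand_mw):
    """Meet demand from the cheapest available sources first."""
    plan = {}
    for source in sorted(variable_cost, key=variable_cost.get):
        used = min(available_mw[source], demand_mw)
        if used > 0:
            plan[source] = used
        demand_mw -= used
    return plan
```

With 100 MW of demand against these assumed capacities, solar, wind, and nuclear run fully and coal covers the remainder, with gas (the highest marginal cost here) not called on at all, matching the ordering the text describes when pollution costs are ignored.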

d)  Finally, existing gas plants will cost between $23 and $27 per MWHr to run, mostly for the cost of the gas itself.  Maintenance costs are low.  These figures are somewhat less than the cost of building new solar or wind capacity, although not by much.

But there is another consideration as well.  Suppose one needs to add to night-time capacity, so solar power will not be of use (assuming storage is not an economic option).  Assume also that wind is not an option for some reason (perhaps the particular locale).  The LCOE figures indicate that a new gas plant would then be the next best alternative.  But once this gas plant is built, it will be available also for use during the day.  The question then is whether it would be cheaper to run that gas plant during the day also, or to build solar capacity to provide the day-time power.

And the answer is that at these costs, which exclude the costs from the pollution generated, it would be cheaper to run the gas plant.  The LCOE costs for new solar power range from $31 to $36 per MWHr (panel A above), while the variable cost of operating a gas plant built to supply nighttime capacity ranges between $23 and $27 (panel C).  While the difference is not huge, it is still significant.

This may explain in part why new gas generation capacity is not only being built in the US, but also is then being used more than other sources for additional generation, even though new solar and wind capacity would be cheaper.  And part of the reason for this is that the costs imposed on others from the pollution generated by burning fossil fuels are not being borne by the power plant operators.  This will be examined in the next section below.

F.  The Impact of Including the Cost of Greenhouse Gas Emissions

Burning fossil fuels generates pollution.  Coal is especially polluting, in many different ways. But I will focus here on just one area of damage caused by the burning of fossil fuels, which is that from their generation of greenhouse gases.  These gases are warming the earth’s atmosphere, with this then leading to an increased frequency of extreme weather events, from floods and droughts to severe storms, and hurricanes of greater intensity.  While one cannot attribute any particular storm to the impact of a warmer planet, the increased frequency of such storms in recent decades is clearly a consequence of a warmer planet.  It is the same as the relationship of smoking to lung cancer.  While one cannot with certainty attribute a particular case of lung cancer to smoking (there are cases of lung cancer among people who do not smoke), it is well established that there is an increased likelihood and frequency of lung cancer among smokers.

When the costs from the damage created from greenhouse gases are not borne by the party responsible for the emissions, that party will ignore those costs.  In the case of power production, they do not take into account such costs in deciding whether to use clean sources (solar or wind) to generate the power needed, or to burn coal or gas.  But the costs are still there and are being imposed on others.  Hence economists have recommended that those responsible for such decisions face a price which reflects such costs.  A specific proposal, discussed in an earlier post on this blog, is to charge a tax of $40 per ton of CO2 emitted.  All the revenue collected by that tax would then be returned in equal per capita terms to the American population.  Applied to all sources of greenhouse gas emissions (not just power), the tax would lead to an annual rebate of almost $500 per person, or $2,000 for a family of four.  And since it is the rich who account most (in per person terms) for greenhouse gas emissions, it is estimated that such a tax and redistribution would lead to those in the lowest seven deciles of the population (the lowest 70%) receiving more on average than what they would pay (directly or indirectly), while only the richest 30% would end up paying more on a net basis.

Such a tax on greenhouse gas emissions would have an important effect on the decision of what sources of power to use when power is needed.  As noted in the section above, at current costs it is cheaper to use gas-fired generation, and even more so coal-fired generation, if those plants have been built and are available for operation, than it would cost to build new solar or wind plants to provide such power.  The costs are getting close to each other, but are not there yet.  If gas and coal burning plants do not need to worry about the costs imposed on others from the burning of their fuels, such plants may be kept in operation for some time.

A tax on the greenhouse gases emitted would change this calculus, even with all other costs as they are today.  One can calculate from figures presented in the Lazard report what the impact would be.  For the analysis here, I have looked at the impact of charging $20 per ton of CO2 emitted, $40 per ton of CO2, or $60 per ton of CO2.  Analyses of the social cost of CO2 emissions come up with a price of around $40 per ton, and my aim here was to examine a generous span around this cost.

Also entering the calculation is how much CO2 is emitted per MWHr of power produced.  Figures in the Lazard report (and elsewhere) put this at 0.51 tons of CO2 per MWHr for gas burning plants, and 0.92 tons of CO2 per MWHr for coal burning plants.  As is commonly stated, the direct emissions of CO2 from gas burning plants are on the order of half of those from coal burning plants.
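The arithmetic behind the carbon charges in Panel D below is simply the emissions intensity times the carbon price.  A minimal sketch (the function and dictionary names are illustrative; the intensities are the Lazard figures just cited):

```python
# Carbon cost per MWHr = emissions intensity (tons CO2 per MWHr) x carbon price ($ per ton).
# Intensities are the Lazard figures cited in the text; solar, wind, and nuclear emit no CO2.
EMISSIONS = {"solar": 0.0, "wind": 0.0, "gas": 0.51, "coal": 0.92, "nuclear": 0.0}

def carbon_cost_per_mwh(source: str, price_per_ton: float) -> float:
    """Dollar cost per MWHr of charging for CO2 at the given price per ton."""
    return EMISSIONS[source] * price_per_ton

# e.g. gas at $20/ton comes to $10.20 per MWHr, and coal at $60/ton to $55.20 per MWHr,
# matching Panel D.
```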

[Side note:  This does not take into account that a certain portion of natural gas leaks out directly into the air at some point in the process from when it is pulled from the ground, then transported via pipelines, and then fed into the final use (e.g. at a power plant).  While perhaps small as a percentage of all the gas consumed (the EPA estimates a leak rate of 1.4%, although others estimate it to be more), natural gas (which is primarily methane) is itself a highly potent greenhouse gas with an impact on atmospheric warming that is 34 times as great as the same weight of CO2 over a 100 year time horizon, and 86 times as great over a 20 year horizon.  If one takes such leakage into account (of even just 1.4%), and adds this warming impact to that of the CO2 that is produced by the gas that has not leaked out but is burned, natural gas turns out to have a similar if not greater atmospheric warming impact as that resulting from the burning of coal.  However, for the calculations below, I will leave out the impact from leakage.  Including this would lead to even stronger results.]
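The side note's claim can be checked with a rough back-of-envelope calculation.  The sketch below uses the 0.51 tons of direct CO2 per MWHr and the GWP and leak-rate figures cited above, plus one piece of basic chemistry: burning a ton of methane yields 44/16 = 2.75 tons of CO2 (the ratio of the molecular weights).  Everything else is arithmetic, and the function name is illustrative:

```python
# Back-of-envelope CO2-equivalent of gas-fired power, including methane leakage.
# Assumptions: 0.51 tons direct CO2 per MWHr (from the text); the leak rate is a
# share of all gas withdrawn; GWP factors (34 over 100 years, 86 over 20 years)
# are those cited in the text.
CO2_PER_MWH = 0.51          # tons CO2 emitted per MWHr from burning the gas
CO2_PER_CH4 = 44.0 / 16.0   # tons CO2 produced per ton of CH4 burned

def gas_co2e_per_mwh(leak_rate: float, gwp: float) -> float:
    ch4_burned = CO2_PER_MWH / CO2_PER_CH4          # tons CH4 actually burned per MWHr
    ch4_withdrawn = ch4_burned / (1.0 - leak_rate)  # tons withdrawn, some of which leaks
    ch4_leaked = leak_rate * ch4_withdrawn
    return CO2_PER_MWH + gwp * ch4_leaked           # direct CO2 plus leaked CH4 in CO2e

# At the EPA's 1.4% leak rate and the 20-year GWP of 86, gas comes to roughly
# 0.74 tons CO2e per MWHr, much closer to coal's 0.92 than the direct 0.51 suggests;
# at the higher leak rates others estimate (around 3%), it exceeds coal.
```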

One then has:

D.  Cost of Greenhouse Gas Emissions:  $/MWHr

|                              | Solar | Wind  | Gas    | Coal   | Nuclear |
|------------------------------|-------|-------|--------|--------|---------|
| Tons of CO2 Emitted per MWHr | 0.000 | 0.000 | 0.510  | 0.920  | 0.000   |
| Cost at $20/ton CO2          | $0.00 | $0.00 | $10.20 | $18.40 | $0.00   |
| Cost at $40/ton CO2          | $0.00 | $0.00 | $20.40 | $36.80 | $0.00   |
| Cost at $60/ton CO2          | $0.00 | $0.00 | $30.60 | $55.20 | $0.00   |

E.  Levelized Cost of Energy for a New Power Plant, including Cost of Greenhouse Gas Emissions (mid-point figures):  $/MWHr

|                     | Solar  | Wind   | Gas    | Coal    | Nuclear |
|---------------------|--------|--------|--------|---------|---------|
| Cost at $20/ton CO2 | $33.58 | $33.19 | $55.10 | $97.66  | $117.52 |
| Cost at $40/ton CO2 | $33.58 | $33.19 | $65.30 | $116.06 | $117.52 |
| Cost at $60/ton CO2 | $33.58 | $33.19 | $75.50 | $134.46 | $117.52 |

F.  Short-term Variable Cost to Operate a Plant, including Fuel and Cost of Greenhouse Gas Emissions (mid-point figures):  $/MWHr

|                     | Solar | Wind  | Gas    | Coal   | Nuclear |
|---------------------|-------|-------|--------|--------|---------|
| Cost at $20/ton CO2 | $0.00 | $0.00 | $35.43 | $36.94 | $9.63   |
| Cost at $40/ton CO2 | $0.00 | $0.00 | $45.63 | $55.34 | $9.63   |
| Cost at $60/ton CO2 | $0.00 | $0.00 | $55.83 | $73.74 | $9.63   |

Panel D shows what would be paid, per MWHr, if greenhouse gas emissions were charged at a rate of $20, $40, or $60 per ton of CO2.  The impact would be significant, ranging from $10 to $31 per MWHr for gas and from $18 to $55 for coal.

If these costs are then included in the Levelized Cost of Energy figures (using the mid-point estimates for the LCOE), one gets the costs shown in Panel E.  The costs of new power generation capacity from solar or wind sources (as well as nuclear) are unchanged as they have no CO2 emissions.  But the full costs of new gas or coal fired generation capacity will now mean that such sources are even less competitive than before, as their costs now also reflect, in part, the damage done as a result of their greenhouse gas emissions.
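The Panel E figures are simply the base LCOE mid-points plus the Panel D carbon charge.  In the sketch below the base LCOE values are inferred by subtracting the carbon charges back out of Panel E (the names are illustrative):

```python
# Full LCOE = base LCOE mid-point + carbon charge per MWHr.
# Base LCOE mid-points ($/MWHr) are inferred from the tables (Panel E minus Panel D).
BASE_LCOE = {"solar": 33.58, "wind": 33.19, "gas": 44.90, "coal": 79.26, "nuclear": 117.52}
EMISSIONS = {"solar": 0.0, "wind": 0.0, "gas": 0.51, "coal": 0.92, "nuclear": 0.0}

def lcoe_with_carbon(source: str, price_per_ton: float) -> float:
    """Levelized cost of new capacity once CO2 emissions are priced in."""
    return BASE_LCOE[source] + EMISSIONS[source] * price_per_ton

# Zero-emission sources are unchanged at any carbon price, while gas and coal
# climb with the price per ton, matching Panel E.
```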

But perhaps most interesting is the impact on the choice of whether to keep burning gas or coal in plants that have already been built and remain available for operation.  This is shown in Panel F, which gives the short-term variable cost (per MWHr) of power generated by the different sources.  These short-term costs are primarily the cost of the fuel used, but would now also include the cost to compensate for the damage from the resulting greenhouse gas emissions.
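The same arithmetic applies here: the Panel F figures are the base variable costs (fuel plus variable operation and maintenance) plus the carbon charge.  In the sketch below the base variable costs are inferred by subtracting the Panel D charges out of Panel F (names illustrative):

```python
# Short-term variable cost = fuel and variable O&M, plus the carbon charge per MWHr.
# Base variable costs ($/MWHr) are inferred from the tables (Panel F minus Panel D).
BASE_VARIABLE = {"solar": 0.0, "wind": 0.0, "gas": 25.23, "coal": 18.54, "nuclear": 9.63}
EMISSIONS = {"solar": 0.0, "wind": 0.0, "gas": 0.51, "coal": 0.92, "nuclear": 0.0}

def variable_cost_with_carbon(source: str, price_per_ton: float) -> float:
    """Cost per MWHr of continuing to run an existing plant, with CO2 priced."""
    return BASE_VARIABLE[source] + EMISSIONS[source] * price_per_ton

# Note the reversal: coal's fuel cost is lower than gas's, but its higher carbon
# charge means that at $40/ton coal ($55.34) already costs more to run than gas ($45.63).
```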

If gas as well as coal plants had to pay for the damages caused by their greenhouse gas emissions, then even at a cost of just $20 per ton of CO2 emitted they would not be competitive with building new solar or wind plants (whose LCOEs, in Panel E, are lower).  At a cost of $40 or $60 per ton of CO2 emitted, they would be far from competitive, with costs that are 40% to 120% higher.  There would then be a strong incentive to build new solar and wind plants to serve what they can (including the daytime markets), while existing gas plants (primarily) would in the near term be kept in reserve for service at night or at other times when solar and wind generation is not possible.

G.  Summary and Conclusion

The cost of new clean sources of power generation capacity, wind and especially solar, has plummeted over the last decade, and it is now cheaper to build new solar or wind capacity than to build new gas, coal, and especially nuclear capacity.  One sees this not only in estimates based on assessments of the underlying costs, but also in the actual market prices for new generation capacity (the PPA prices in such contracts).  Both have fallen sharply, and indeed at an identical pace.

While it was only relatively recently that the solar and wind generation costs have fallen below the cost of generation from gas, one does see these relative costs reflected in the new power generation capacity built in recent years.  Solar plus wind (together) account for the largest single source of new capacity, with gas also high.  And there have been no new coal plants since 2013 (nor nuclear, with the exception of one plant coming online which had been under construction for decades).

But while solar plus wind plants accounted for the largest share of new generation capacity in recent years, the impact on the overall mix was low.  And that is because not that much new generation capacity has been needed.  Up until at least 2017, efficiency in energy use was improving to such an extent that no net new capacity was needed despite robust GDP growth.  A large share of something small will still be something small.

However, the costs of building new solar or wind generation capacity have now fallen to the point where it is cheaper to build new solar or wind capacity than it costs to maintain and keep in operation many of the existing coal burning power plants.  This is particularly the case for the older coal plants, with their older technologies and higher maintenance costs.  Thus one should see many of these older plants being decommissioned, and one does.

But when one ignores the cost of the damage done by the resulting pollution, it is still cheaper to maintain and operate existing gas burning plants than to build new solar or wind plants to generate the power those plants provide.  And since some of the new gas burning plants being built may be needed to add to night-time generation capacity, such plants will also end up being used to generate power by burning gas during the day, instead of installing solar capacity.

This cost advantage only holds, however, because gas-burning plants do not have to pay for the costs resulting from the damage their pollution causes.  While they pollute in many different ways, one is from the greenhouse gases they emit.  But if one charged them just $20 for every ton of CO2 released into the atmosphere when the gas is burned, the result would be different.  It would then be more cost competitive to build new solar or wind capacity to provide power whenever they can, and to save the gas burning plants for those times when such clean power is not possible.

There is therefore a strong case for charging such a fee.  However, many of those who had previously supported such an approach to addressing global warming have backed away in recent months, arguing that it would be politically impossible.  That assessment of the politics might be correct, but the retreat really makes no sense.  First, it would be politically important that whatever revenues are generated be returned in full to the population, and on an equal per person basis.  While individual situations will of course vary (and those who lose out on a net basis, or perceive that they will, will complain the loudest), assessments based on current consumption patterns indicate that those in the lowest seven deciles of income (the lowest 70%) would on average come out ahead, while only those in the richest 30% would pay more.  It is the rich who, per person, account for the largest share of greenhouse gas emissions, creating costs that others are bearing.  And a redistribution from the richest 30% to the poorest 70% would be a progressive one.

But second, the alternative to reducing greenhouse gas emissions would need to be some approach based on top-down directives (central planning in essence), or a centrally directed system of subsidies that aims to offset the subsidies implicit in not requiring those burning fossil fuels to pay for the damages they cause, by subsidizing other sources of power even more.  Such approaches are not only complex and costly, but rarely work well in practice.  And they end up costing more than a fee-based system would.  The political argument being made in their favor ultimately rests on the assumption that by hiding the higher costs they can be made politically more acceptable.  But relying on deception is unlikely to be sustainable for long.

The sharp fall in costs for clean energy of the last decade has created an opportunity to switch our power supply to clean sources at little to no cost.  This would have been impossible just a few years ago.  It would be unfortunate in the extreme if we were to let this opportunity pass.