The Increasingly Attractive Economics of Solar Power: Solar Prices Have Plunged

A.  Introduction

The cost of solar photovoltaic power has fallen dramatically over the past decade, and it is now, together with wind, a lower cost source of new power generation than either fossil-fuel (coal or gas) or nuclear power plants.  The power generated by a new natural gas-fueled power plant in 2018 would have cost a third more than from a solar or wind plant (in terms of the price they would need to sell the power for in order to break even); coal would have cost 2.4 times as much as solar or wind; and a nuclear plant would have cost 3.5 times as much.

These estimates (shown in the chart above, and discussed in more detail below) were derived from figures estimated by Lazard, the investment bank, and are based on bottom-up estimates of what such facilities would have cost to build and operate, including the fuel costs.  But one also finds a similar sharp fall in solar energy prices in the actual market prices that have been charged for the sale of power from such plants under long-term “power purchase agreements” (PPAs).  These will also be discussed below.

With the costs where they are now, it would not make economic sense to build new coal or nuclear generation capacity, nor even gas in most cases.  In practice, however, the situation is more complex due to regulatory issues and conflicting taxes and subsidies, and also because of variation across regions.  Time of day issues may also enter, depending on when (day or night) the increment in new capacity might be needed.  The figures above are also averages, particular cases vary, and what is most economic in any specific locale will depend on local conditions.  Nevertheless, and as we will examine below, there has been a major shift in new generation capacity towards solar and wind, and away from coal (with old coal plants being retired) and from nuclear (with no new plants being built, but old ones largely remaining).

But natural gas generation remains large.  Indeed, while solar and wind generation have grown quickly (from a low base), and together account for the largest increment in new power capacity in recent years, gas accounts for the largest increment in power production (in megawatt-hours) measured from the beginning of this decade.  Why?  In part this is due to the inherent constraints of solar and wind technologies:  Solar panels can only generate power when the sun shines, and wind turbines when the wind is blowing.  But more interestingly, one also needs to look at the economics behind the choice as to whether or not to build new generation capacity to replace existing capacity, and then what sources of capacity to use.  Critical is what economists call the marginal cost of such production.  A power plant lasts for many years once it is built, and the decision on whether to keep an existing plant in operation for another year depends only on the cost of operating and maintaining the plant.  The capital cost has already been spent and is no longer relevant to that decision.

Details in the Lazard report can be used to derive such marginal cost estimates by power source, and we will examine these below.  While the Lazard figures apply to newly built plants (older plants will generally have higher operational and maintenance costs, both because they are getting old and because technology was less efficient when they were built), the estimates based on new plants can still give us a sense of these costs.  But one should recognize they will be biased towards indicating the costs of the older plants are lower than they in fact are.  However, even these numbers (biased in underestimating the costs of older plants) imply that it is now more economical to build new wind and possibly solar plants, in suitable locales, than it costs to continue to keep open and operate coal-burning power plants.  This will be especially true for the older, less-efficient, coal-burning plants.  Thus we should be seeing old coal-burning plants being shut down.  And indeed we do.  Moreover, while the costs of building new wind and solar plants are not yet below the marginal costs of keeping open existing gas-fueled and nuclear power plants, they are on the cusp of being so.

These costs also do not reflect any special subsidies that solar and wind plants might benefit from.  These vary by state.  Fossil-fueled and nuclear power plants also enjoy subsidies (often through special tax advantages), but these are long-standing and are implicitly being included in the Lazard estimates of the costs of such traditional plants.

But one special subsidy enjoyed by fossil fuel burning power plants, not reflected in the Lazard cost estimates, is the implicit subsidy granted to such plants from not having to cover the cost of the damage from the pollution they generate.  Those costs are instead borne by the general public.  And while such plants pollute in many different ways (especially the coal-burning ones), I will focus here on just one of those ways – their emissions of greenhouse gases that are leading to a warming planet and consequent more frequent and more damaging extreme weather events.  Solar and wind generation of power do not cause such pollution – the burning of coal and gas do.

To account for such costs and to ensure a level playing field between power sources, a fee would need to be charged to reflect the costs being imposed on the general population from this (and indeed other) such pollution.  The revenues generated could be distributed back to the public in equal per capita terms, as discussed in an earlier post on this blog.  We will see that a fee of even just $20 per ton of CO2 emitted would suffice to make it economic to build new solar and wind power plants to substitute not just for new gas and coal burning plants, but for existing ones as well.  Gas and especially coal burning plants would not be competitive with installing new solar or wind generation if they had to pay for the damage done as a result of their greenhouse gas pollution, even on just marginal operating costs.

Two notes before starting:  First, many will note that while solar might be fine for the daytime, it will not be available at night.  Similarly, wind generation will be fine when the wind blows, but it may not always blow even in the windiest locales.  This is of course true, and should solar and wind capacity grow to dominate power generation, there will have to be ways to store that power to bridge the times from when the generation occurs to when the power is used.

But while storage might one day be an issue, it is mostly not an issue now.  In 2018, utility-scale solar only accounted for 1.6% of power generation in the US (and 2.3% if one includes small scale roof-top systems), while wind only accounted for 6.6%.  At such low shares, solar and wind power can simply substitute for other, higher cost, sources of power (such as from coal) during the periods the clean sources are available.  Note also that the cost figures for solar and wind reflected in the chart at the top of this post (and discussed in detail below) take into account that solar and wind cannot be used 100% of the time.  Rather, utilization is assumed to be similar to what their recent actual utilization has been, not only for solar and wind but also for gas, coal and nuclear.  Solar and wind are cheaper than other sources of power (over the lifetime of these investments) despite their inherent constraints on possible utilization.

But where the storage question can enter is in cases where new generation capacity is required specifically to serve evening or night-time needs.  New gas burning plants might then be needed to serve such time-of-day needs if storage of day-time solar is not an economic option.  And once such gas-burning plants are built, the decision on whether they should be run also to serve day-time needs will depend on a comparison of the marginal cost of running these gas plants also during the day, to the full cost of building new solar generation capacity, as was discussed briefly above and will be considered in more detail below.

This may explain, in part, why we see new gas-burning plants still being built nationally.  While less than new solar and wind plants combined (in terms of generation capacity), such new gas-burning plants are still being built despite their higher cost.

More broadly, California and Hawaii (both with solar now accounting for over 12% of power used in those states) are two states (and the only two states) which may be approaching the natural limits of solar generation in the absence of major storage.  During some sunny days the cost of power is being driven down to close to zero (and indeed to negative levels on a few days).  Major storage will be needed in those states (and only those states) to make it possible to extend solar generation much further than where it is now.  But this should not be seen so much as a “problem” but rather as an opportunity:  What can we do to take advantage of cheap day-time power to make it available at all hours of the day?  I hope to address that issue in a future blog post.  But in this blog post I will focus on the economics of solar generation (and to a lesser extent from wind), in the absence of significant storage.

Second, on nomenclature:  A megawatt-hour is a million watts of electric power being produced or used for one hour.  One will see it abbreviated in many different ways, including MWHr, MWhr, MWHR, MWH, MWh, and probably more.  I will try to use MWHr consistently.  A kilowatt-hour (often kWh) is a thousand watts of power for one hour, and is the typical unit used for homes.  A megawatt-hour will thus be one thousand times a kilowatt-hour, so a price of, for example, $20 per MWHr for solar-generated power (which we will see below has in fact been offered in several recent PPA contracts) will be equivalent to 2.0 cents per kWh.  This will be the wholesale price of such power.  The retail price in the US for households is typically around 10 to 12 cents per kWh.
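For readers who want the conversion spelled out, here is a minimal sketch in Python (the $20 per MWHr and 10 to 12 cents per kWh figures are the examples from the paragraph above):

```python
def mwhr_price_to_cents_per_kwh(price_dollars_per_mwhr):
    """Convert a wholesale price in $ per MWHr to cents per kWh.

    1 MWHr = 1,000 kWh and $1 = 100 cents, so the factor is 100 / 1,000 = 0.1.
    """
    return price_dollars_per_mwhr * 0.1

print(mwhr_price_to_cents_per_kwh(20.0))   # $20/MWHr -> 2.0 cents/kWh (recent solar PPAs)
print(mwhr_price_to_cents_per_kwh(110.0))  # $110/MWHr -> 11.0 cents/kWh (typical US retail)
```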

B.  The Levelized Cost of Energy 

As seen in the chart at the top of this post, the cost of generating power by way of new utility-scale solar photovoltaic panels has fallen dramatically over the past decade, with a cost now similar to that from new on-shore wind turbines, and well below the cost from building new gas, coal, or nuclear power plants.  These costs can be compared in terms of the “levelized cost of energy” (LCOE), which is an estimate of the price that would need to be charged for power from such a plant over its lifetime, sufficient to cover the initial capital cost (at the anticipated utilization rate) plus the cost of operating and maintaining the plant.

Lazard, the investment bank, has published estimates of such LCOEs annually for some time now.  The most recent report, issued in November 2018, is version 12.0.  Lazard approaches the issue as an investment bank would, examining the cost of producing power by each of the alternative sources, with consistent assumptions on financing (with a debt/equity ratio of 60/40, an assumed cost of debt of 8%, and a cost of equity of 12%) and a time horizon of 20 years.  They also include the impact of taxes, and show separately the impact of special federal tax subsidies for clean energy sources.  But the figures I will refer to throughout this post (including in the chart above) are always the estimates excluding any impact from special subsidies for clean energy.  The aim is to see what the underlying actual costs are, and how they have changed over time.
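To make the LCOE concept concrete, here is a minimal sketch of a break-even price calculation using Lazard's stated financing assumptions (60/40 debt/equity, 8% cost of debt, 12% cost of equity, 20 years).  The plant parameters in the example are purely illustrative placeholders rather than Lazard's inputs, and taxes, depreciation, and cost escalation are ignored, so the result will not match Lazard's published figures:

```python
def simple_lcoe(capex_per_kw, fixed_om_per_kw_yr, variable_om_per_mwhr,
                fuel_per_mwhr, capacity_factor, years=20):
    """Break-even price ($/MWHr): discounted revenues just cover discounted costs."""
    wacc = 0.60 * 0.08 + 0.40 * 0.12                 # blended cost of capital = 9.6%
    annuity = sum(1 / (1 + wacc) ** t for t in range(1, years + 1))
    mwhr_per_kw_yr = 8.76 * capacity_factor          # 8,760 hours per year, per kW of capacity
    pv_energy = mwhr_per_kw_yr * annuity             # discounted lifetime output per kW
    annual_cost = fixed_om_per_kw_yr + (variable_om_per_mwhr + fuel_per_mwhr) * mwhr_per_kw_yr
    pv_costs = capex_per_kw + annual_cost * annuity  # discounted lifetime costs per kW
    return pv_costs / pv_energy

# Hypothetical utility-scale solar plant: $1,000/kW capital cost, $10/kW-yr fixed O&M,
# no fuel or variable cost, 26% capacity factor.
print(round(simple_lcoe(1000, 10, 0, 0, 0.26), 2))   # roughly $55/MWHr under these inputs
```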

The Lazard LCOE estimates are calculated and presented in nominal terms.  They show the price, in $/MWHr, that would need to be charged over a 20-year time horizon for such a project to break even.  For comparability over time, as well as to produce estimates that can be compared directly to the PPA contract prices that I will discuss below, I have converted those prices from nominal to real terms in constant 2017 dollars.  Two steps are involved.  First, the fixed nominal LCOE prices over 20 years will be falling over time in real terms due to general inflation.  They were adjusted to the prices of their respective initial year (i.e. the relevant year from 2009 to 2018) using an inflation rate of 2.25% (which is the rate used for the PPA figures discussed below, the rate the EIA assumed in its 2018 Annual Energy Outlook report, and the rate which appears also to be what Lazard assumed for general cost escalation factors).  Second, those prices for the years between 2009 and 2018 were all then converted to constant 2017 prices based on actual inflation between those years and 2017.
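A sketch of that two-step conversion is below.  The 2.25% inflation assumption is from the text; the discount rate and the price-index values are illustrative placeholders, and the exact procedure used for the chart may differ in detail:

```python
# Step 1: a fixed nominal 20-year price is restated as the constant price in
# initial-year dollars whose inflating nominal stream has the same present value.
# Step 2: that initial-year price is rescaled to 2017 dollars using actual inflation.
INFLATION = 0.0225   # assumed general inflation rate, from the text
DISCOUNT = 0.096     # placeholder discount rate (Lazard's blended cost of capital)
YEARS = 20

def levelized_real_price(nominal_lcoe):
    pv_fixed = sum(1 / (1 + DISCOUNT) ** t for t in range(1, YEARS + 1))
    pv_rising = sum((1 + INFLATION) ** (t - 1) / (1 + DISCOUNT) ** t
                    for t in range(1, YEARS + 1))
    return nominal_lcoe * pv_fixed / pv_rising

def to_2017_dollars(price_initial_year, cpi_initial_year, cpi_2017):
    return price_initial_year * cpi_2017 / cpi_initial_year

# Example: a $40/MWHr nominal LCOE quoted in 2015, with placeholder CPI values.
real_2015 = levelized_real_price(40.0)
print(round(to_2017_dollars(real_2015, cpi_initial_year=237.0, cpi_2017=245.1), 2))
```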

The result is the chart shown at the top of this post.  The LCOEs in 2018 (in 2017$) were $33 per MWHr for a newly built utility-scale solar photovoltaic system and also for an on-shore wind installation, $44 per MWHr for a new natural gas combined cycle plant, $78 for a new coal-burning plant, and $115 for a new nuclear power plant.  The natural gas plant would cost one-third more than a solar or wind plant, coal would cost 2.4 times as much, and a nuclear plant 3.5 times as much.  Note also that since the adjustments for inflation are the same for each of the power generation methods, their costs relative to each other (in ratio terms) are the same for the LCOEs expressed in nominal cost terms.  And it is their costs relative to each other which most matters.

The solar prices have fallen especially dramatically.  The 2018 LCOE was only one-tenth of what it was in 2009.  The cost of wind generation has also fallen sharply over the period, to about one-quarter in 2018 of what it was in 2009.  The cost from gas combined cycle plants (the most efficient gas technology, and the one now widely used) also fell, but only by about 40%, while the costs of coal and nuclear were roughly flat or rising, depending on precisely what time period is used.

There is good reason to believe the cost of solar technology will continue to decline.  It is still a relatively new technology, and labs around the world are developing solar technologies that are both more efficient and less costly to manufacture and install.

Current solar installations (based on crystalline silicon technology) will typically have conversion efficiencies of 15 to 17%.  And panels with efficiencies of up to 22% are now available in the market – a gain already on the order of 30 to 45% over the 15 to 17% efficiency of current systems.  But a chart of how solar efficiencies have improved over time (in laboratory settings) shows there is good reason to believe that the efficiencies of commercially available systems will continue to improve in the years to come.  While there are theoretical upper limits, labs have developed solar cell technologies with efficiencies as high as 46% (as of January 2019).

Particularly exciting in recent years has been the development of what are called “perovskite” solar technologies.  While their current efficiencies (of up to 28%, for a tandem cell) are just modestly better than purely crystalline silicon solar cells, they have achieved this in work spanning only half a decade.  Crystalline silicon cells only saw such an improvement in efficiencies in research that spanned more than four decades.  And perhaps more importantly, perovskite cells are much simpler to manufacture, and hence much cheaper.

Based on such technologies, one could see solar efficiencies doubling within a few years, from the current 15 to 17% to say 30 to 35%.  And with a doubling in efficiency, one will need only half as many solar panels to produce the same megawatts of power, and thus also only half as many frames to hold the panels, half as much wiring to link them together, and half as much land.  Coupled with simplified and hence cheaper manufacturing processes (such as is possible for perovskite cells), there is every reason to believe prices will continue to fall.

While there can be no certainty in precisely how this will develop, a simple extrapolation of recent cost trends can give an indication of what might come.  Assuming costs continue to change at the same annual rate that they had over the most recent five years (2013 to 2018), one would find for the years up to 2023:

If these trends hold, then the LCOE (in 2017$) of solar power will have fallen to $13 per MWHr by 2023, wind will have fallen to $18, and gas will be at $32 (or 2.5 times the LCOE of solar in that year, and 80% above the LCOE of wind).  And coal (at $70) and nuclear (at $153) will be totally uncompetitive.
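As a check on the arithmetic, the compound annual rates of change implied by the 2018 figures and these extrapolated 2023 figures can be backed out directly (a minimal sketch using only the numbers cited in the text):

```python
# Implied compound annual rates of change between the 2018 LCOEs and the
# extrapolated 2023 figures cited in the text (all in 2017$ per MWHr).
lcoe_2018 = {"solar": 33, "wind": 33, "gas": 44, "coal": 78, "nuclear": 115}
lcoe_2023 = {"solar": 13, "wind": 18, "gas": 32, "coal": 70, "nuclear": 153}

for source in lcoe_2018:
    annual_rate = (lcoe_2023[source] / lcoe_2018[source]) ** (1 / 5) - 1
    print(f"{source:8s} {annual_rate:+.1%} per year")
# Solar comes out to roughly -17% a year, wind roughly -11%, gas roughly -6%,
# while coal falls only about 2% a year and nuclear rises about 6% a year.
```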

This is an important transition.  With the dramatic declines in the past decade in the costs for solar power plants, and to a lesser extent wind, these clean sources of power are now more cost competitive than traditional, polluting, sources.  And this is all without any special subsidies for the clean energy.  But before looking at the implications of this for power generation, as a reality check it is good first to examine whether the declining costs of solar power have been reflected in actual market prices for such power.  We will see that they have.

C.  The Market Prices for Solar Generated Power

Power Purchase Agreements (PPAs) are long-term contracts where a power generator (typically an independent power producer) agrees to supply electric power at some contracted capacity and at some price to a purchaser (typically a power utility or electric grid operator).  These are competitively determined (different parties interested in building new power plants will bid for such contracts, with the lowest price winning) and are a direct market measure of the cost of energy from such a source.

The Lawrence Berkeley National Lab, under a contract with the US Department of Energy, produces an annual report that reviews and summarizes PPA contracts for recent utility-scale solar power projects, including the agreed prices for the power.  The most recent was published in September 2018, and covers 2018 (partially) and before.  While the report covers both solar photovoltaic and concentrating solar thermal projects, the figures of interest to us here (and comparable to the Lazard LCOEs discussed above) are the PPAs for the solar photovoltaic projects.

The PPA prices provided in the report were all calculated by the authors on a levelized basis and in terms of 2017 prices.  This was done to put them all on a comparable basis to each other, as the contractual terms of the specific contracts could differ (e.g. some had price escalation clauses and some did not).  Averages by year were worked out with the different projects weighted by generation capacity.
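As an illustration of that capacity weighting, here is a minimal sketch; the projects and prices are hypothetical, and each contract's levelized 2017$ price is assumed to have already been computed, as in the Berkeley report:

```python
# Capacity-weighted average PPA price for a given contract year.
projects_2017 = [
    # (capacity in MW, levelized PPA price in 2017$ per MWHr) -- hypothetical projects
    (100, 29.0),
    (250, 24.5),
    (50,  38.0),
]

total_mw = sum(mw for mw, _ in projects_2017)
weighted_avg = sum(mw * price for mw, price in projects_2017) / total_mw
print(round(weighted_avg, 2))  # capacity-weighted average, $/MWHr
```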

The PPA prices are presented by the year the contracts were signed.  If one then plots these PPA prices with a one year lag and compares them to the Lazard estimated LCOE prices of that year, one finds a remarkable degree of overlap:

This high degree of overlap is extraordinary.  Only the average PPA price for 2010 (reflecting the 2009 average price lagged one year) is off, but would have been close with a one and a half year lag rather than a one year lag.  Note also that while the Lawrence Berkeley report has PPA prices going back to 2006, the figures for the first several years are based on extremely small samples (just one project in 2006, one in 2007, and three in 2008, before rising to 16 in 2009 and 30 in 2010).  For that reason I have not plotted the 2006 to 2008 PPA prices (which would have been 2007 to 2009 if lagged one year), but they also would have been below the Lazard LCOE curve.

What might be behind this extraordinary overlap when the PPA prices are lagged one year?  Two possible explanations present themselves.  One is that the power producers when making their PPA bids realize that there will be a lag from when the bids are prepared to when the winning bidder is announced and construction of the project begins.  With the costs of solar generation falling so quickly, it is possible that the PPA bids reflect what they know will be a lag between when the bid is prepared and when the project has to be built (with solar panels purchased and other costs incurred).  If that lag is one year, one will see overlap such as that found for the two curves.

Another possible explanation for the one-year shift observed between the PPA prices (by date of contract signing) and the Lazard LCOE figures is that the Lazard estimates labeled for some year (2018 for example) might in fact represent data on the cost of the technologies as of the prior year (2017 in this example).  One cannot be sure from what they report.  Or the remarkable degree of overlap might be a result of some combination of these two possible explanations, or something else.

But for whatever reason, the two estimates move almost exactly in parallel over time, and hence show an almost identical rate of decline for both the cost of generating power from solar photovoltaic sources and in the market PPA prices for such power.  And it is that rapid rate of decline which is important.

It is also worth noting that the “bump up” in the average PPA price curve in 2017 (shown in the chart as 2018 with the one year lag) reflects in part that a significant number of the projects in the 2017 sample of PPAs included, as part of the contract, a power storage component to store a portion of the solar-generated power for use in the evening or night.  But these additional costs for storage were remarkably modest, and were even less in several projects in the partial-year 2018 sample.  Specifically, Nevada Energy (as the offtaker) announced in June 2018 that it had contracted for three major solar projects that would include storage of power of up to one-quarter of generation capacity for four hours, with overall PPA prices (levelized, in 2017 prices) for both the generation and the storage of just $22.8, $23.5, and $26.4 per MWHr (i.e. 2.28 cents, 2.35 cents, and 2.64 cents per kWh, respectively).

The PPA prices reported can also be used to examine how the prices vary by region.  One should expect solar power to be cheaper in southern latitudes than in northern ones, and in dry, sunny, desert areas than in regions with more extensive cloud cover.  And this has led to the criticism by skeptics that solar power can only be competitive in places such as the US Southwest.

But this is less of an issue than one might assume.  Dividing up the PPA contracts by region (with no one-year lag in this chart), one finds:

Prices found in the PPAs are indeed lower in the Southwest, California, and Texas.  But the PPA prices for projects in the Southeast, the Midwest, and the Northwest fell at a similar pace as those in the more advantageous regions (and indeed, at a more rapid pace up to 2014).  And note that the prices in those less advantageous regions are similar to what they were in the more advantageous regions just a year or two before.  Finally, the absolute differences in prices have become relatively modest in the last few years.

The observed market prices for power generated by solar photovoltaic systems therefore appear to be consistent with the bottom-up LCOE estimates of Lazard – indeed remarkably so.  Both show a sharp fall in solar energy prices/costs over the last decade, and sharp falls both for the US as a whole and by region.  The next question is whether we see this reflected in investment in additions to new power generation capacity, and in the power generated by that capacity.

D.  Additions to Power Generation Capacity, and in Power Generation

The cost of power from a new solar or wind plant is now below the cost from gas (while the cost of new coal or nuclear generation capacity is totally uncompetitive).  But the LCOEs indicate that the cost advantage relative to gas is relatively recent in the case of solar (starting from 2016), and while a bit longer for wind, the significant gap in favor of wind only opened up in 2014.  One needs also to recognize that these are average or mid-point estimates of costs, and that in specific cases the relative costs will vary depending on local conditions.  Thus while solar or wind power is now cheaper on average across the US, in some particular locale a gas plant might be less expensive (especially if the costs resulting from its pollution are not charged).  Finally, and as discussed above, there may be time-of-day issues that the new capacity may be needed for, with this affecting the choices made.

Thus while one should expect a shift towards solar and wind over the last several years, and away from traditional fuels, the shift will not be absolute and immediate.  What do we see?

First, in terms of the gross additions to power sector generating capacity:

The chart shows the gross additions to power capacity, in megawatts, with both historical figures (up through 2018) and as reflected in plans filed with the US Department of Energy (for 2019 and 2020, with the plans as filed as of end-2018).  The data for this (and the other charts in this section) come from the most recent release of the Electric Power Annual of the Energy Information Administration (EIA) (which was for 2017, and was released on October 22, 2018), plus from the Electric Power Monthly of February 2019, also from the Energy Information Administration (where the February issue each year provides complete data for the prior calendar year, i.e. for 2018 in this case).

The planned additions to capacity (2019 and 2020 in the chart) provide an indication of what might happen over the next few years, but must be interpreted cautiously.  Power producers are required to file their plans for new capacity (as well as for retirements of existing capacity) with the Department of Energy, for transparency and to help ensure capacity (locally as well as nationally) remains adequate.  But a bias enters as one looks further into the future:  projects that require a relatively long lead time (such as gas plants, as well as coal and especially nuclear) will be filed years ahead, while the shorter and more flexible construction periods for solar and wind plants mean that those plans are filed with the Department of Energy only close to when the capacity will be built.  For the next few years, however, the plans should provide a reasonable indication of how the market is developing.

As seen in the chart, solar and wind taken together accounted for the largest single share of gross additions to capacity, at least through 2017.  While there was then a bump up in new gas generation capacity in 2018, this is expected to fall back to earlier levels in 2019 and 2020.  And these three sources (solar, wind, and gas) accounted for almost all (93%) of the gross additions to new capacity over 2012 to 2018, with this expected to continue.

New coal-burning plants, in contrast, were already low and falling in 2012 and 2013, and there have been no new ones since then.  Nor are any planned.  This is as one would expect based on the LCOE estimates discussed above – new coal plants are simply not cost competitive.  And the additions to nuclear and other capacity have also been low.  “Other” capacity is a miscellaneous category that includes hydro, petroleum-fueled plants such as diesel, as well as other renewables such as from the burning of waste or biomass. The one bump up, in 2016, is due to a nuclear power plant coming on-line that year.  It was unit #2 of the Watts Bar nuclear power plant built by the Tennessee Valley Authority (TVA), and had been under construction for decades.  Indeed the most recent nuclear plant completed in the US before this one was unit #1 at the same TVA plant, which came on-line 20 years before in 1996.  Even aside from any nuclear safety concerns, nuclear plants are simply not economically competitive with other sources of power.

The above are gross additions to power generating capacity, reflecting what new plants are being built.  But old, economically or technologically obsolete, plants are also being retired, so what matters to the overall shift in power generation capacity is what has happened to net generation capacity:

What stands out here is the retirement of coal-burning plants.  And while the retirements might appear to diminish in the plans going forward, this may largely be due to retirement plans only being announced shortly before they happen.  It is also possible that political pressure from the Trump administration to keep coal-burning plants open, despite their higher costs (and their much higher pollution), might be a factor.  We will see what happens.

The cumulative impact of these net additions to capacity (relative to 2010 as the base year) yields:

Solar plus wind accounts for the largest addition to capacity, followed by gas.  Indeed, each of these accounts for more than 100% of the growth in overall capacity, as there has been a net reduction in the nuclear plus other category, and especially in coal.

But what does this mean in terms of the change in the mix of electric power generation capacity in the US?  Actually, less than one might have thought, as one can see in a chart of the shares:

The share of coal has come down, but remains high, and similarly for nuclear (plus miscellaneous other) capacity.  Gas remains the highest and has risen as a share, while solar and wind, although rising at a rapid pace relative to where they started, remain the smallest shares (of the categories used here).

The reason for these relatively modest changes in shares is that while solar and wind plus gas account for more than 100% of the net additions to capacity, that net addition has been pretty small.  Between 2010 and 2018, the net addition to US electric power generation capacity was just 58.8 thousand megawatts, or an increase over eight years of just 5.7% over what capacity was in 2010 (1,039.1 thousand megawatts).  A big share of something small will still be small.

So even though solar and wind are now the lowest cost sources of new power generation, the very modest increase in the total power capacity needed has meant that not that much has been built.  And much of what has been built has been in replacement of nuclear and especially coal capacity.  As we will discuss below, the economic issue then is not whether solar and wind are the cheapest source of new capacity (which they are), but whether new solar and wind are more economic than what it costs to continue to operate existing coal and nuclear plants.  That is a different question, and we will see that while new solar and wind are now starting to be a lower cost option than continuing to operate older coal (but not nuclear) plants, this development (a critically important development) has only been recent.

Why did the US require such a small increase in power generation capacity in recent years?  As seen in the chart below, it is not because GDP has not grown, but rather because energy efficiency (real GDP per MWHr of power) improved tremendously, at least until 2017:

From 2010 to 2017, real GDP rose by 15.7% (2.1% a year on average), but GDP per MWHr of power generated rose by 18.3%.  That meant that power generation (note that generation is the relevant issue here, not capacity) could fall by 2.2% despite the higher level of GDP.  Improving energy efficiency was a key priority during the Obama years, and it appears to have worked well.  It is better for efficiency to rise than to have to produce more power, even if that power comes from a clean source such as solar or wind.
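Since power generated is simply GDP divided by GDP per MWHr, these growth rates combine as an identity; a quick check of the figures in the text:

```python
# Power generated = GDP / (GDP per MWHr), so the growth rates combine as an identity.
gdp_growth = 0.157          # real GDP, 2010 to 2017 (+15.7%)
efficiency_growth = 0.183   # real GDP per MWHr of power generated (+18.3%)

generation_change = (1 + gdp_growth) / (1 + efficiency_growth) - 1
print(f"{generation_change:.1%}")  # about -2.2%: generation could fall despite GDP growth
```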

This reversed direction in 2018.  It is not clear why, but might be an early indication that the policies of the Trump administration are harming efficiency in our economy.  However, this is still just one year of data, and one will need to wait to see whether this was an aberration or a start of a new, and worrisome, trend.

Which brings us to generation.  While the investment decision is whether or not to add capacity, and if so then of what form (e.g. solar or gas or whatever), what is ultimately needed is the power generated.  This depends on the capacity available and then on the decision of how much of that capacity to use to generate the power needed at any given moment.  One needs to keep in mind that power in general is not stored (other than still very limited storage of solar and wind power), but rather has to be generated at the moment needed.  And since power demand goes up and down over the course of the day (higher during the daylight hours and lower at night), as well as over the course of the year (generally higher during the summer, due to air conditioning, and lower in other seasons), one needs total generation capacity sufficient to meet whatever the peak load might be.  This means that during all other times there will be excess, unutilized, capacity.  Indeed, since one will want to have a safety margin, one will want to have total power generation capacity of even more than whatever the anticipated peak load might be in any locale.

There will always, then, be excess capacity, just sometimes more and sometimes less.  And hence decisions will be necessary as to what of the available capacity to use at any given moment.  While complex, the ultimate driver of this will be (or at least should be, in a rational system) the short-run costs of producing power from the possible alternative sources available in the region where the power is needed.  These costs will be examined in the next section below.  But for here, we will look at how generation has changed over the last several years.

In terms of the change in power generation by source relative to the levels in 2010, one finds:

Gas now accounts for the largest increment in generation over this period, with solar and wind also growing (steadily) but by significantly less.  Coal-powered generation, in contrast, fell substantially, while nuclear and other sources were basically flat.  And as noted above, due to increased efficiency in the use of power (until 2017), total power use was flat to falling a bit, even as GDP grew substantially.  This reversed in 2018, when efficiency fell and gas-generated power rose to provide for the resulting increased power demands.  Solar and wind continued on the same path as before, and coal generation still fell at a similar pace as before.  But it remains to be seen whether 2018 marked a change in the previous trend in efficiency gains, or was an aberration.

Why did power generation from gas rise by more than from solar and wind over the period, despite the larger increase in solar plus wind capacity than in gas generation capacity?  In part this reflects the cost factors which we will discuss in the next section below.  But in part one needs also to recognize factors inherent in the technologies.  Solar generation can only happen during the day (and also when there is no cloud cover), while wind generation depends on when the wind blows.  Without major power storage, this will limit how much solar and wind can be used.

The extent to which some source of power is in fact used over some period (say a year), as a share of what would be generated if the power plant operated at 100% of capacity for 24 hours a day, 365 days a year, is defined as the “capacity factor”.  In 2018, the capacity factor realized for solar photovoltaic systems was 26.1% while for wind it was 37.4%.  But for no power source is it 100%.  For natural gas combined cycle plants (the primary source of gas generation), the capacity factor was 57.6% in 2018 (up from 51.3% in 2017, due to the jump in power demand in 2018).  This is well below the theoretical maximum of 100% as in general one will be operating at less than peak capacity (plus plants need to be shut down periodically for maintenance and other servicing).
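The capacity factor is simple to compute from annual generation and nameplate capacity; a minimal sketch, with a hypothetical plant sized so the result matches the 2018 US solar average:

```python
# Capacity factor = actual annual generation / generation if run flat out all year.
HOURS_PER_YEAR = 8760

def capacity_factor(generation_mwhr, capacity_mw):
    return generation_mwhr / (capacity_mw * HOURS_PER_YEAR)

# Hypothetical 100 MW solar plant generating 228,600 MWHr over the year:
print(f"{capacity_factor(228_600, 100):.1%}")  # 26.1%, matching the 2018 US solar average
```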

Increments in “capacity”, as measured, will therefore not tell the whole story.  How much such capacity is used also matters.  And the capacity factors for solar and wind will in general be less than those for the other primary sources of power generation, such as gas, coal, and nuclear (excluding the special case of plants designed solely to operate for short periods of peak load, or plants used as back-ups or for emergencies).  But how much less depends only partly on the natural constraints on the clean technologies.  It also depends on marginal operating costs, as we will discuss below.

Finally, while gas plus solar and wind have grown in terms of power generation since 2010, and coal has declined (and nuclear and other sources largely unchanged), coal-fired generation remains important.  In terms of the percentage shares of overall power generation:

While coal has fallen as a share, from about 45% of US power generation in 2010 to 27% in 2018, it remains high.  Only gas is significantly higher (at 35% in 2018).  Nuclear and other sources (such as hydro) account for 29%, with nuclear alone accounting for two-thirds of this and other sources the remaining one-third.  Solar and wind have grown steadily, and at a rapid rate relative to where they were in 2010, but in 2018 still accounted for only about 8% of US power generation.

Thus while coal has come down, there is still very substantial room for further substitution out of coal, by either solar and wind or by natural gas.  The cost factors that will enter into this decision on substituting out of coal will be discussed next.

E.  The Cost Factors That Enter in the Decisions on What Plants to Build, What Plants to Keep in Operation, and What Plants to Use

The Lazard analysis of costs presents estimates not only for the LCOE of newly built power generation plants, but also figures that can be used to arrive at the costs of operating a plant to produce power on any given day, and of operating a plant plus keeping it maintained for a year.  One needs to know these different costs in order to address different questions.  The LCOE is used to decide whether to build a new plant and keep it in operation for a period (20 years is used); the operating cost is used to decide which particular power plant to run at any given time to generate the power then needed (from among all the plants up and available to run that day); while the operating cost plus the cost of regular annual maintenance is used in the decision of whether to keep a particular plant open for another year.

The Lazard figures are not ideal for this, as they give cost figures for a newly built plant, using the technology and efficiencies available today.  The cost to maintain and operate an older plant will be higher than this, both because older technologies were less efficient but also simply because they are older and hence more liable to break down (and hence cost more to keep running) than a new plant.  But the estimates for a new plant do give us a sense of what the floor for such costs might be – the true costs for currently existing plants of various ages will be somewhat higher.

Lazard also recognized that there will be a range of such costs for a particular type of plant, depending on the specifics of the particular location and other such factors.  Their report therefore provides both what it labels low end and high end estimates, and with a mid-point estimate then based usually on the average between the two.  The figures shown in the chart at the top of this post are the mid-point estimates, but in the tables below we will show the low and high end cost estimates as well.  These figures are helpful in providing a sense of the range in the costs one should expect, although how Lazard defined the range they used is not fully clear.  They are not of the absolutely lowest possible cost plant nor absolutely highest possible cost plant.  Rather, the low end figures appear to be averages of the costs of some share of the lowest cost plants (possibly the lowest one third), and similarly for the high end figures.

The cost figures below are from the 2018 Lazard cost estimates (the most recent year available).  The operating and maintenance costs are by their nature current expenditures, and hence their costs will be in current, i.e. 2018, prices.  The LCOE estimates of Lazard are different.  As was noted above, these are the levelized prices that would need to be charged for the power generated to cover the costs of building and then operating and maintaining the plant over its assumed (20 year) lifetime.  They therefore need to be adjusted to reflect current prices.  For the chart at the top of this post, they were put in terms of 2017 prices (to make them consistent with the PPA prices presented in the Berkeley report discussed above).  But for the purposes here, we will put them in 2018 prices to ensure consistency with the prices for the operating and maintenance costs.  The difference is small (just 2.2%).

The cost estimates derived from the Lazard figures are then:

(all costs in 2018 prices)

A.  Levelized Cost of Energy from a New Power Plant:  $/MWHr

             Solar      Wind       Gas        Coal        Nuclear
low end      $30.56     $22.16     $31.33     $45.84      $85.57
mid-point    $32.85     $32.47     $43.93     $77.55      $114.99
high end     $35.15     $42.79     $56.54     $109.26     $144.41

B.  Cost to Maintain and Operate a Plant Each Year, including Fuel:  $/MWHr

             Solar      Wind       Gas        Coal        Nuclear
low end      $4.00      $9.24      $24.38     $23.19      $23.87
mid-point    $4.66      $10.64     $26.51     $31.30      $25.11
high end     $5.33      $12.04     $28.64     $39.41      $26.35

C.  Short-term Variable Cost to Operate a Plant, including Fuel:  $/MWHr

             Solar      Wind       Gas        Coal        Nuclear
low end      $0.00      $0.00      $23.16     $14.69      $9.63
mid-point    $0.00      $0.00      $25.23     $18.54      $9.63
high end     $0.00      $0.00      $27.31     $22.40      $9.63

A number of points follow from these cost estimates:

a)  First, and as was discussed above, the LCOE estimates indicate that for the question of what new type of power plant to build, it will in general be cheapest to obtain new power from a solar or wind plant.  The mid-point LCOE estimates for solar and wind are well below the costs of power from gas plants, and especially below the costs from coal or nuclear plants.

But also as noted before, local conditions vary and there will in fact be a range of costs for different types of plants.  The Lazard estimates indicate that a gas plant with costs at the low end of a reasonable range (estimated to be about $31 per MWHr) would be competitive with solar or wind plants at the mid-point of their cost range (about $32 to $33 per MWHr), and below the costs of a solar plant at the high end of its cost range ($35) and especially a wind plant at its high end of its costs ($43).  However, there are not likely to be many such cases:  Gas plants with a cost at their mid-point estimate would not be competitive, and even less so for gas plants with a cost near their high end estimate.

Furthermore, even the lowest cost coal and nuclear plants would be far from competitive with solar or wind plants when considering the building of new generation capacity.  This is consistent with what we saw in Section D above, of no new coal or nuclear plants being built in recent years (with the exception of one nuclear plant whose construction started decades ago and was only finished in 2016).

b)  More interesting is the question of whether it is economic to build new solar or wind plants to substitute for existing gas, coal, or nuclear plants.  The figures in panel B of the table on the cost to operate and maintain a plant for another year (all in terms of $/MWHr) can give us a sense of whether this is worthwhile.  Keeping in mind that these are going to be low estimates (as they are the costs for newly built plants, using the technologies available today, not for existing ones which were built possibly many years ago), the figures suggest that it would make economic sense to build new solar and wind plants (at their LCOE costs) and decommission all but the most efficient coal burning plants.

However, the figures also suggest that this will not be the case for most of the existing gas or nuclear plants.  For such plants, with their capital costs already incurred, the cost to maintain and operate them for a further year is in the range of $24 to $29 (per MWHr) for gas plants and $24 to $26 for nuclear plants.  Even recognizing that these cost estimates will be low (as they are based on what the costs would be for a new plant, not existing ones), only the more efficient solar and wind plants would have an LCOE which is less.  But they are close, and are on the cusp of the point where it would be economic to build new solar and wind plants and decommission existing gas and nuclear plants, just as is already the case for most coal plants.
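A minimal sketch of the comparison being made here, using the mid-point figures from panels A and B above (and keeping in mind that the panel B figures understate the costs of older existing plants):

```python
# Is it cheaper to build new clean capacity than to keep an existing plant running
# for another year?  Compare the LCOE of a new plant (panel A, mid-points) against
# the cost to maintain and operate an existing plant (panel B, mid-points).
new_build_lcoe = {"solar": 32.85, "wind": 32.47}                   # $/MWHr, panel A
keep_open_cost = {"gas": 26.51, "coal": 31.30, "nuclear": 25.11}   # $/MWHr, panel B

for new, lcoe in new_build_lcoe.items():
    for old, cost in keep_open_cost.items():
        verdict = "replace" if lcoe < cost else "keep running"
        print(f"new {new:5s} (${lcoe}) vs existing {old:7s} (${cost}): {verdict}")

# At these mid-points new wind ($32.47) is just above the coal figure ($31.30);
# but since actual older coal plants cost more to keep running than these
# new-plant-based estimates, all but the most efficient coal plants are candidates
# for replacement, while existing gas and nuclear plants are not quite there yet.
```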

c)  Panel C then provides figures to address the question of which power plants to operate, for those which are available for use on any given day.  With no short-term variable cost to generate power from solar or wind sources (they burn no fuel), it will always make sense to use those sources first when they are available.  The short-term cost to operate a nuclear power plant is also fairly low ($9.63 per MWHr in the Lazard estimates, with no significant variation in their estimates).  Unlike other plants, it is difficult to turn nuclear plants on and off, so such plants will generally be operated as baseload plants kept always on (other than for maintenance periods).

But it is interesting that, provided a coal burning plant was kept active and not decommissioned, the Lazard figures suggest that the next cheapest source of power (if one ignores the pollution costs) will be from burning coal.  The figures indicate coal plants are expensive to maintain (the difference between the figures in panel B and in panel C) but then cheap to run if they have been kept operational.  This would explain why many coal burning plants have been decommissioned in recent years (new solar and wind capacity is cheaper than the cost of keeping a coal burning plant maintained and operating), but also why, if a coal burning plant has been kept operational, it will then typically be cheaper to run than a gas plant.
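The dispatch logic here amounts to a merit order ranked by short-run marginal cost; a minimal sketch using the panel C mid-points (ignoring pollution costs and the difficulty of cycling nuclear plants on and off):

```python
# Merit order: among plants already built and available, run the cheapest first.
# Short-run marginal costs from panel C (mid-points, $/MWHr), ignoring pollution costs.
short_run_cost = {"solar": 0.00, "wind": 0.00, "nuclear": 9.63, "coal": 18.54, "gas": 25.23}

merit_order = sorted(short_run_cost, key=short_run_cost.get)
print(merit_order)
# ['solar', 'wind', 'nuclear', 'coal', 'gas'] -- which is why a coal plant that has
# been kept operational will typically be dispatched ahead of a gas plant.
```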

d)  Finally, existing gas plants will cost between $23 and $27 per MWHr to run, mostly for the cost of the gas itself.  Maintenance costs are low.  These figures are somewhat less than the cost of building new solar or wind capacity, although not by much.

But there is another consideration as well.  Suppose one needs to add to night-time capacity, so solar power will not be of use (assuming storage is not an economic option).  Assume also that wind is not an option for some reason (perhaps the particular locale).  The LCOE figures indicate that a new gas plant would then be the next best alternative.  But once this gas plant is built, it will be available also for use during the day.  The question then is whether it would be cheaper to run that gas plant during the day also, or to build solar capacity to provide the day-time power.

And the answer is that at these costs, which exclude the costs from the pollution generated, it would be cheaper to run that gas plant.  The LCOE costs for new solar power ranges from $31 to $35 per MWHr (panel A above), while the variable cost of operating a gas plant built to supply nighttime capacity ranges between $23 and $27 (panel C).  While the difference is not huge, it is still significant.

This may explain in part why new gas generation capacity is not only being built in the US, but also is then being used more than other sources for additional generation, even though new solar and wind capacity would be cheaper.  And part of the reason for this is that the costs imposed on others from the pollution generated by burning fossil fuels are not being borne by the power plant operators.  This will be examined in the next section below.

F.  The Impact of Including the Cost of Greenhouse Gas Emissions

Burning fossil fuels generates pollution.  Coal is especially polluting, in many different ways. But I will focus here on just one area of damage caused by the burning of fossil fuels, which is that from their generation of greenhouse gases.  These gases are warming the earth’s atmosphere, with this then leading to an increased frequency of extreme weather events, from floods and droughts to severe storms, and hurricanes of greater intensity.  While one cannot attribute any particular storm to the impact of a warmer planet, the increased frequency of such storms in recent decades is clearly a consequence of a warmer planet.  It is the same as the relationship of smoking to lung cancer.  While one cannot with certainty attribute a particular case of lung cancer to smoking (there are cases of lung cancer among people who do not smoke), it is well established that there is an increased likelihood and frequency of lung cancer among smokers.

When the costs from the damage created from greenhouse gases are not borne by the party responsible for the emissions, that party will ignore those costs.  In the case of power production, they do not take into account such costs in deciding whether to use clean sources (solar or wind) to generate the power needed, or to burn coal or gas.  But the costs are still there and are being imposed on others.  Hence economists have recommended that those responsible for such decisions face a price which reflects such costs.  A specific proposal, discussed in an earlier post on this blog, is to charge a tax of $40 per ton of CO2 emitted.  All the revenue collected by that tax would then be returned in equal per capita terms to the American population.  Applied to all sources of greenhouse gas emissions (not just power), the tax would lead to an annual rebate of almost $500 per person, or $2,000 for a family of four.  And since it is the rich who account most (in per person terms) for greenhouse gas emissions, it is estimated that such a tax and redistribution would lead to those in the lowest seven deciles of the population (the lowest 70%) receiving more on average than what they would pay (directly or indirectly), while only the richest 30% would end up paying more on a net basis.

Such a tax on greenhouse gas emissions would have an important effect on the decision of what sources of power to use when power is needed.  As noted in the section above, at current costs it is cheaper to use gas-fired generation, and even more so coal-fired generation, if those plants have been built and are available for operation, than it would cost to build new solar or wind plants to provide such power.  The prices are getting close to each other, but are not there yet.  If gas and coal burning plants do not need to worry about the costs imposed on others from the burning of their fuels, such plants may be kept in operation for some time.

A tax on the greenhouse gases emitted would change this calculus, even with all other costs as they are today.  One can calculate from figures presented in the Lazard report what the impact would be.  For the analysis here, I have looked at the impact of charging $20 per ton of CO2 emitted, $40 per ton of CO2, or $60 per ton of CO2.  Analyses of the social cost of CO2 emissions come up with a price of around $40 per ton, and my aim here was to examine a generous span around this cost.

Also entering is how much CO2 is emitted per MWHr of power produced.  Figures in the Lazard report (and elsewhere) put this at 0.51 tons of CO2 per MWHr for gas burning plants, and 0.92 tons of CO2 per MWHr for coal burning plants.  As has been commonly stated, the direct emissions of CO2 from gas burning plants is on the order of half of that from coal burning plants.

[Side note:  This does not take into account that a certain portion of natural gas leaks out directly into the air at some point in the process from when it is pulled from the ground, then transported via pipelines, and then fed into the final use (e.g. at a power plant).  While perhaps small as a percentage of all the gas consumed (the EPA estimates a leak rate of 1.4%, although others estimate it to be more), natural gas (which is primarily methane) is itself a highly potent greenhouse gas with an impact on atmospheric warming that is 34 times as great as the same weight of CO2 over a 100 year time horizon, and 86 times as great over a 20 year horizon.  If one takes such leakage into account (of even just 1.4%), and adds this warming impact to that of the CO2 produced by the gas that does not leak out but is burned, natural gas turns out to have a similar if not greater atmospheric warming impact than that resulting from the burning of coal.  However, for the calculations below, I will leave out the impact from leakage.  Including it would lead to even stronger results.]

One then has:

D.  Cost of Greenhouse Gas Emissions:  $/MWHr

                                Solar     Wind      Gas        Coal       Nuclear
Tons of CO2 Emitted per MWHr    0.00      0.00      0.51       0.92       0.00
Cost at $20/ton CO2             $0.00     $0.00     $10.20     $18.40     $0.00
Cost at $40/ton CO2             $0.00     $0.00     $20.40     $36.80     $0.00
Cost at $60/ton CO2             $0.00     $0.00     $30.60     $55.20     $0.00

E.  Levelized Cost of Energy for a New Power Plant, including Cost of Greenhouse Gas Emissions (mid-point figures):  $/MWHr

                       Solar      Wind       Gas        Coal        Nuclear
Cost at $20/ton CO2    $32.85     $32.47     $54.13     $95.95      $114.99
Cost at $40/ton CO2    $32.85     $32.47     $64.33     $114.35     $114.99
Cost at $60/ton CO2    $32.85     $32.47     $74.53     $132.75     $114.99

F.  Short-term Variable Cost to Operate a Plant, including Fuel and Cost of Greenhouse Gas Emissions (mid-point figures):  $/MWHr

                       Solar     Wind      Gas        Coal       Nuclear
Cost at $20/ton CO2    $0.00     $0.00     $35.43     $36.94     $9.63
Cost at $40/ton CO2    $0.00     $0.00     $45.63     $55.34     $9.63
Cost at $60/ton CO2    $0.00     $0.00     $55.83     $73.74     $9.63

Panel D shows what would be paid, per MWHr, if greenhouse gas emissions were charged for at a rate of $20 per ton of CO2, of $40 per ton, or of $60 per ton.  The impact would be significant, ranging from $10 to $31 per MWHr for gas and $18 to $55 for coal.
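The figures in Panels D through F are straightforward to reproduce:  the fee per MWHr is simply the emissions intensity times the fee per ton, added onto the mid-point costs from Panels A and C.  A minimal sketch:

```python
# Carbon fee per MWHr = (tons of CO2 per MWHr) x (fee per ton), added onto the
# Lazard mid-point LCOE (panel A) and short-run variable cost (panel C).
emissions = {"gas": 0.51, "coal": 0.92}          # tons CO2 per MWHr
lcoe_midpoint = {"gas": 43.93, "coal": 77.55}    # $/MWHr, panel A mid-points
variable_cost = {"gas": 25.23, "coal": 18.54}    # $/MWHr, panel C mid-points

for fee in (20, 40, 60):
    for fuel in ("gas", "coal"):
        adder = emissions[fuel] * fee
        print(f"${fee}/ton {fuel:4s}: adder ${adder:5.2f}, "
              f"LCOE ${lcoe_midpoint[fuel] + adder:6.2f}, "
              f"variable ${variable_cost[fuel] + adder:5.2f}")

# At $20/ton, the variable cost of existing gas ($35.43) and coal ($36.94) already
# exceeds the LCOE of building new solar ($32.85) or wind ($32.47).
```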

If these costs are then included in the Levelized Cost of Energy figures (using the mid-point estimates for the LCOE), one gets the costs shown in Panel E.  The costs of new power generation capacity from solar or wind sources (as well as nuclear) are unchanged as they have no CO2 emissions.  But the full costs of new gas or coal fired generation capacity will now mean that such sources are even less competitive than before, as their costs now also reflect, in part, the damage done as a result of their greenhouse gas emissions.

But perhaps most interesting is the impact on the choice of whether to keep burning gas or coal in plants that have already been built and remain available for operation.  This is provided in Panel F, which shows the short-term variable cost (per MWHr) of power generated by the different sources.  These short-term costs were primarily the cost of the fuel used, but now also include the cost to compensate for the resulting greenhouse gas emissions.

If gas as well as coal had to pay for the damages caused by their greenhouse gas emissions, then even at a cost of just $20 per ton of CO2 emitted they would not be competitive with building new solar or wind plants (whose LCOEs, in Panel E, are less).  At a cost of $40 or $60 per ton of CO2 emitted, they would be far from competitive, with costs 40% to 130% higher.  There would be a strong incentive then to build new solar and wind plants to serve what they can (including just the day time markets), while existing gas plants (primarily) would in the near term be kept in reserve for service at night or at other times when solar and wind generation is not possible.

G.  Summary and Conclusion

The cost of new clean sources of power generation capacity, wind and especially solar, has plummeted over the last decade, and it is now cheaper to build new solar or wind capacity than to build new gas, coal, and especially nuclear capacity.  One sees this not only in estimates based on assessments of the underlying costs, but also in the actual market prices for new generation capacity (the PPA prices in such contracts).  Both have plummeted, and indeed at an identical pace.

While it was only relatively recently that the solar and wind generation costs have fallen below the cost of generation from gas, one does see these relative costs reflected in the new power generation capacity built in recent years.  Solar plus wind (together) account for the largest single source of new capacity, with gas also high.  And there have been no new coal plants since 2013 (nor nuclear, with the exception of one plant coming online which had been under construction for decades).

But while solar plus wind plants accounted for the largest share of new generation capacity in recent years, the impact on the overall mix was low.  And that is because not that much new generation capacity has been needed.  Up until at least 2017, efficiency in energy use was improving to such an extent that no net new capacity was needed despite robust GDP growth.  A large share of something small will still be something small.

However, the costs of building new solar or wind generation capacity have now fallen to the point where it is cheaper to build new solar or wind capacity than it costs to maintain and keep in operation many of the existing coal burning power plants.  This is particularly the case for the older coal plants, with their older technologies and higher maintenance costs.  Thus one should see many of these older plants being decommissioned, and one does.

But it is still cheaper, when one ignores the cost of the damage done by the resulting pollution, to maintain and operate existing gas-burning plants than to build new solar or wind plants to generate the power they are able to provide.  And since some of the new gas-burning plants being built may be needed to add to night-time generation capacity, this means that such plants will also be used to generate power by burning gas during the day, instead of installing solar capacity.

This cost advantage only holds, however, because gas-burning plants do not have to pay for the costs resulting from the damage their pollution causes.  While they pollute in many different ways, one is from the greenhouse gases they emit.  But if one charged them just $20 for every ton of CO2 released into the atmosphere when the gas is burned, the result would be different.  It would then be more cost-competitive to build new solar or wind capacity to provide power whenever it can, and to save the gas-burning plants for those times when such clean power is not available.

There is therefore a strong case for charging such a fee.  However, many of those who had previously supported such an approach to address global warming have backed away in recent months, arguing that it would be politically impossible.  That assessment of the politics might be correct, but backing away really makes no sense.  First, it would be politically important that whatever revenues are generated are returned in full to the population, and on an equal per person basis.  While individual situations will of course vary (and those who lose out on a net basis, or perceive that they will, will complain the loudest), assessments based on current consumption patterns indicate that those in the lowest seven deciles of income (the lowest 70%) will on average come out ahead, while only those in the richest 30% will pay more.  It is the rich who, per person, account for the largest share of greenhouse gas emissions, creating costs that others are bearing.  And a redistribution from the richest 30% to the poorest 70% would be a progressive redistribution.

But second, the alternative approach to reducing greenhouse gas emissions would need to be one based on top-down directives (central planning in essence), or a centrally directed system of subsidies that aims to offset the subsidies implicit in not requiring those burning fossil fuels to pay for the damages they cause, by subsidizing other sources of power even more.  Such approaches are not only complex and costly, but rarely work well in practice.  And they end up costing more than a fee-based system would.  The political argument being made in their favor ultimately rests on the assumption that by hiding the higher costs they can be made politically more acceptable.  But relying on deception is unlikely to be sustainable for long.

The sharp fall in costs for clean energy of the last decade has created an opportunity to switch our power supply to clean sources at little to no cost.  This would have been impossible just a few years ago.  It would be unfortunate in the extreme if we were to let this opportunity pass.

The Purple Line Ridership Forecasts Are Wrong: An Example of Why We Get Our Infrastructure Wrong

Executive Summary

There are several major problems with the forecast ridership figures for the Purple Line, a proposed 16-mile light rail line that would pass in a partial arc around Washington, DC, in suburban Maryland.  The forecasts, as presented and described in the “Travel Forecasts Results Technical Report” of the Final Environmental Impact Statement for the project, are in a number of cases simply impossible.

Problems include:

a)  Forecast ridership in 2040 between many of the Transit Analysis Zone pairs along the Purple Line corridor would be higher on the Purple Line itself than it would be for total transit ridership (which includes bus, Metrorail, and commuter rail ridership, in addition to ridership on the Purple Line) between these zones.  This is impossible. Such cases are not only numerous (found in more than half of the possible cases for zones within the corridor) but often very large (12 times as high in one case).  If the forecasts for total transit ridership are correct, then correcting for this, with Purple Line ridership some reasonable share of the totals, would lead to far lower figures for Purple Line ridership.

b)  Figures on forecast hours of user benefits (primarily forecast time savings from a rail line) in a scenario where the Purple Line is built as compared to one where it is not, are often implausibly high.  In two extreme cases, the figures indicate average user benefits per trip between two specific zones, should the Purple Line be built, of 9.7 hours and 11.5 hours.  These cannot be right; one could walk faster.  But other figures on overall user benefits are also high, leading to an overall average predicted benefit of 30 minutes per trip.  Even with adjustments to the pure time savings that assign a premium to rail service, this is far too high and overestimates benefits by at least a factor of two or even three.  The user benefit figures are important for two reasons:  1) An overestimate leads to a cost-effectiveness estimate (an estimate of the cost of the project per hour of user benefits) that will be far off;  and 2) The figures used for user benefits from taking the proposed rail line enter directly into the estimation of ridership on the rail line (as part of the choice on whether to take the rail line rather than some other transit option, or to drive).  If the user benefit figures are overstated, the ridership forecasts will be overstated as well.  And if the user benefit figures are overstated by a large margin, the ridership forecasts will be far too high.

c)  Figures on ridership from station to station are clearly incorrect.  They indicate, for example, that far more riders would exit at the Bethesda station (an end point on the line) each day (19,800) than would board there (10,210).  This is impossible.  More significantly, the figures indicate system capacity must be sufficient to handle 21,400 riders each day on the busiest segment (on the segment leaving Silver Spring heading towards Bethesda).  Even if the overall ridership numbers were correct, the figure for ridership on this segment is clearly too high (and it is this number which leads to the far higher number of those exiting the system in Bethesda than would enter there each day).  The figure is important as the rail line has been designed to a capacity sufficient to carry such a load.  With the true number far lower, there is even less of a case for investing in an expensive rail option.  Upgraded bus services could provide the capacity needed, and at far lower cost.

There appear to be other problems as well.  But even just these three indicate there are major issues with these forecasts.  This may also explain why a number of independent observers have noted for some time that the Purple Line ridership forecasts look implausibly high.  The figure for Purple Line ridership in 2040 of 69,300 per day is three times the average daily ridership actually observed in 2012 on 31 light rail lines built in the US over the last three decades.  Purple Line ridership would also be 58% higher than the ridership observed on the highest of those 31 lines.  Yet the Purple Line would pass solely through suburban neighborhoods, of generally medium to low density.  Most of these other light rail lines in the US serve travel to and from downtown areas.

The causes of these errors in the ridership forecasts for the Purple Line are not always clear.  But the issues suggest at a minimum that quality checks were insufficient.  And while the Purple Line is just one example, inadequate attention to such issues might explain in part why ridership forecasts for light rail lines have often proven to be substantially wrong.

 

A.  Introduction

The Purple Line is a proposed light rail line that would be built in Suburban Maryland, stretching in a partial arc from east of Washington, DC, to north of the city.  I have written several posts previously in this blog on the proposed project (see the posts here, here, here, and here) and have been highly critical of it.  It is an extremely expensive project (the total cost to be paid to the private concessionaire to build and then operate the line for 30 years will sum to $5.6 billion, and other costs borne directly by the state and/or local counties will add at least a further $600 million to this).  And the state’s own analyses of the project found that upgraded bus services (including any one of several bus rapid transit, or BRT, options), providing the transit services that are indeed needed in the corridor, would be both cheaper and more cost-effective.  Such alternatives would also avoid the environmental damage that is inevitable with the construction of dual rail lines along the proposed route, including the destruction of 48 acres of forest cover, the filling in of important wetland areas, and the destruction of a linear urban park that has the most visited trail in the state.

The state’s rationale for building a rail line rather than providing upgraded bus services is that ridership will be so high that at some point in the future (beyond 2040) only rail service would be able to handle the load.  But many independent analysts have long questioned those ridership forecasts.  A study from 2015 found that the forecast ridership on the Purple Line would be three times as high as the average ridership actually observed in 2012 on 31 light rail lines built in the US over the last three decades.  Furthermore, the forecast Purple Line ridership would be 58% higher than ridership actually observed on the highest line among those 31.  And with the Purple Line route passing through suburban areas of generally medium to low density, in contrast to routes to and from major downtown areas for most of those 31, many have concluded the Purple Line forecasts are simply not credible.

Why did the Purple Line figures come out so high?  The most complete description the State of Maryland has provided of the ridership forecasts is in the chapter titled “Travel Forecasts Results Technical Report”, which is part of Volume III of the Final Environmental Impact Statement (FEIS) for the Purple Line, dated August 2013 (which I will hereafter often refer to simply as the “FEIS Travel Forecasts chapter”).  A close examination of that material indicates several clear problems with the figures.  This post will discuss three, although there might well be more.

These three are:

a)  In most of the 49 possible combinations of travel between the 7 Transit Analysis Zones (TAZs) defined along the Purple Line route, the FEIS forecast ridership for 2040 on the Purple Line alone would be higher (in a number of cases far higher) than the total number of transit riders among those zones (by bus, Metrorail, commuter rail, and the Purple Line itself).  This is impossible.

b)  Figures on user benefits per Purple Line trip (primarily the time forecast to be saved by use of a rail line) are implausibly high.  In two cases they come to 9.7 hours and 11.5 hours, respectively, per trip.  This cannot be.  One could walk faster.  But these figures for minutes of user benefits per trip were then passed through in the computations to the total forecast hours of user benefits that would accrue as a consequence of building the Purple Line, thus grossly over-estimating the benefits. Such user benefit figures would also have been used in the estimation of how many will choose to ride the Purple Line.  If these user benefit figures are overestimated (sometimes hugely overestimated), then the Purple Line ridership forecasts will be overestimated.

c)  The figure presenting rail ridership by line segment from station to station (which then was used to determine what ridership capacity would be needed to service the proposed route) shows almost twice as many riders exiting at the Bethesda station (an end of the line) as would board there each day (19,800 arriving versus 10,210 leaving each day).  While there could be some small difference (i.e. some people might take transit to work in the morning, and then get a car ride home with a colleague in the evening), it could not be so large.  The figures would imply that Bethesda would be accumulating close to 9,600 new residents each day.  The forecast ridership by line segment (which is what determines these figures) is critical as it determines what the capacity of the transit system will need to be to service such a number of riders.  With these figures over-stated, the design capacity is too high, and there is even less of a rationale for building a rail line as opposed to simply upgrading bus services in the corridor.

These three issues are clear just from an examination of the numbers presented.  But as noted, there might well be more.  We cannot say for sure what all the errors might be as the FEIS Travel Forecasts chapter does not give a complete set of the numbers and assumed relationships needed as inputs to the analysis and then resulting from it, nor more than just a cursory explanation of how the results were arrived at.  But with anomalies such as these, and with no explanations for them, one cannot treat any of the results with confidence.

And while necessarily more speculative, I will also discuss some possible reasons for why the mistakes may have been made.  This matters less than the errors themselves, but might provide a sense for why they arose.  Broadly, while the FEIS Travel Forecasts chapter (and indeed the entire FEIS report) only shows the Maryland Transit Administration (MTA) as the source for the documents, the MTA has acknowledged (and as would be the norm) that major portions of the work – in particular the ridership forecasts – were undertaken or led by hired consulting firms.  The consulting firms use standard but large models to prepare such ridership forecasts, but such models must be used carefully to ensure reliable results.  It is likely that results were generated by what might have been close to a “black box” to the user, that there were then less than sufficient quality checks to ensure the results were reasonable, and that the person assigned to write up the results (who may well have differed from the person generating the numbers) did not detect these anomalous results.

I will readily admit that this is speculation as to the possible underlying causes, and that I could be wrong on this.  But it might explain why figures were presented in the final report which were on their face impossible, with no explanation given.  In any case, what is most important is the problems themselves, regardless of the possible explanations on why they arose.

Each of the three issues will be taken up in turn.

B.  Forecast Ridership on the Purple Line Alone Would Be Higher in Many Cases than Total Transit Ridership

The first issue is that, according to the forecasts presented, there would be more riders on the Purple Line alone between many of the Transit Analysis Zones (TAZs) than the number of riders on all forms of transit.  This is impossible.

Forecast Ridership on All Transit Options in 2040:

Forecast Ridership on Purple Line Alone in 2040:

These two tables are screenshots of the upper left-hand corners of Tables 16 and 22 from the FEIS Travel Forecasts chapter.  While they show the key numbers, I would recommend that the reader examine the full tables in the original FEIS Travel Forecasts chapter.  Indeed, if your computer can handle it, it would be best to open the document twice in two separate browser windows and then scroll down to the two tables to allow them to be compared side by side on your screen.

The tables show forecast ridership in 2040 on all forms of transit in the “Preferred Alternative” scenario where the Purple Line is built (Table 16), or for the sub-group of riders just on the Purple Line (Table 22).  And based on the total ridership figures presented at the bottoms of the full tables, the titles appear to be correct.  That is, Table 16 forecasts that total transit ridership in the Washington metro region would be about 1.5 million trips per day in 2040, which is plausible (Table 13 says it was 1.1 million trips per day in 2005, which is consistent with WMATA bus and rail ridership, where WMATA accounts for 80 to 85% of total ridership in the region).  And Table 22 says the total number of trips per day on the Purple Line in 2040 would be 68,650, which is broadly consistent with (although still somewhat different from, with no explanation given) the figures cited elsewhere in the chapter for forecast total Purple Line trips per day in 2040 (69,330 in Table 24, for example, or 69,300 in Tables 25 and 26, with that small difference probably just rounding).  So it does not appear that the tables were mislabeled, which was my first thought.

The full tables show the ridership between any two pairs of 22 defined Transit Analysis Zones (TAZs), in production/attraction format (which I will discuss below).  The 22 TAZs cover the entire Washington metro region, and are defined as relatively compact geographic zones along the Purple Line corridor and then progressively larger geographic areas as one goes further and further away.  They have seven TAZs defined along the Purple Line corridor itself (starting at the Bethesda zone and ending at the New Carrollton zone), but Northern Virginia has just two zones (where one, labeled “South”, also covers most of Southern Prince George’s County in Maryland).  See the map shown as Figure 4 on page 13 of the FEIS Travel Forecasts chapter for the full picture.  This aggregation to a manageable set of TAZs, with a focus on the Purple Line corridor itself, is reasonable.

The tables then show the forecast ridership between any two TAZ pairs.  For example, Table 16 says there will on average be 1,589 riders on all forms of transit each day in 2040 between Bethesda (TAZ 1, as a “producer” zone) and Silver Spring (TAZ 3, as an “attractor” zone).  But Table 22 says there will be 2,233 riders each day on average between these same two TAZs on the Purple Line alone.  This is impossible.  And there are many such impossibilities.  For the 49 possible pairs (7 x 7) for the 7 TAZs directly on the Purple Line corridor, more than half (29) have more riders on the Purple Line than on all forms of transit.  And for one pair, between Bethesda (TAZ 1) and New Carrollton (TAZ 7), the forecast is that there would be close to 12 times as many riders taking the Purple Line each day as would take all forms of public transit (which includes the Purple Line and more).

Furthermore, if one adds up all the transit ridership between these 49 possible pairs (where the totals are presented at the bottom of the tables; see the FEIS Travel Forecasts chapter), the total number of trips per day on all forms of transit sums to 29,890 among these 7 TAZs (Table 16), while the total for the Purple Line alone sums to 30,560 (Table 22).
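The consistency check being applied here is purely mechanical, and anyone with the two tables in hand could reproduce it.  A minimal sketch follows; the only figures filled in are the Bethesda to Silver Spring pair quoted above, and the full 7 x 7 corridor blocks of Tables 16 and 22 would have to be transcribed to reproduce the counts.

```python
import numpy as np

# Hypothetical placeholders: in practice one would transcribe the full 7 x 7
# corridor blocks of Table 16 (all transit) and Table 22 (Purple Line only).
# The single pair filled in below is the Bethesda (TAZ 1) to Silver Spring
# (TAZ 3) example quoted in the text.
all_transit = np.full((7, 7), np.nan)
purple_line = np.full((7, 7), np.nan)
all_transit[0, 2] = 1589   # Table 16: all transit, Bethesda -> Silver Spring
purple_line[0, 2] = 2233   # Table 22: Purple Line alone, same pair

# Any cell where the Purple Line figure exceeds total transit is impossible,
# since Purple Line riders are a subset of all transit riders.
impossible = purple_line > all_transit
print("Impossible TAZ pairs found:", int(impossible.sum()))
# Run on the full tables, the text reports 29 of the 49 corridor pairs fail
# this test, and the corridor totals fail it as well (30,560 > 29,890).
```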

How could such a mistake have been made?  One can only speculate, as the FEIS chapter had next to no description of the methods they followed.  One instead has to infer a good deal based on what was presented, in what sequence, and from what is commonly done in the profession to produce such forecasts.  This goes into fairly technical issues, and readers not interested in these details can skip directly to the next section below.  But it will likely be of interest at least to some, provides a short review of the modeling process commonly used to generate such ridership forecasts, and will be helpful to an understanding of the other two obvious errors in the forecasts discussed below.

To start, note that the tables say they are being presented in “production/attraction” format.  This is not the more intuitive “origin/destination” format that would have been more useful to show.  And I suspect that over 99% of readers have interpreted the figures as if they are showing travel between origin and destination pairs.  But that is not what is being shown.

The production/attraction format is an intermediate stage in the modeling process that is commonly used for such forecasts.  That modeling process is called the “four-step model”.  See this post from the Metropolitan Washington Council of Governments (MWCOG) for a non-technical short description, or this post for a more academic description.  The first step in the four-step model is to try to estimate (via a statistical regression process normally) how many trips will be “produced” in each TAZ by households and by businesses, based on their characteristics.  Trips to work, for example, will be “produced” by households at the TAZ where they live, and “attracted” by businesses at the TAZ where those businesses are located.  The number of trips so produced will be forecast based on some set of statistical regression equations (with parameters possibly taken from what might have been estimated for some other metro area, if the data does not exist here).  The number of trips per day by household will be some function of average household size in the TAZ, average household income, how many cars the households own, and other such factors.  Trips “attracted” by businesses in some TAZ will similarly be some function of how many people are employed by businesses in that TAZ, perhaps the nature of the businesses, and so on.  Businesses will also “produce” their own trips, for example for delivery of goods to other businesses, and statistical estimates will be made also for such trips.

Such estimates are unfortunately quite rough (statistical error is high), and the totals calculated for the region as a whole of the number of trips “produced” and the number of trips “attracted” will always be somewhat different, and often far different.  But by definition the totals have to be the same, as all trips involve going from somewhere to somewhere. Hence some scaling process will commonly be used to equate the totals.

This will then yield the total number of trips produced in each TAZ, and the total number attracted to each TAZ.  But this does not tell us yet the distribution of the trips.  That is, one will have the total number of trips produced in TAZ 1, say, but not how many go from TAZ 1 to TAZ 2 or to TAZ 3 or to TAZ 4, and so on.  For this, forecasters generally assume the travel patterns will fit what is called a “gravity model”, where it is assumed the trips from each TAZ will be distributed to the “attractor” TAZs in a statistical relationship where trips increase with the “mass” of the attractor (i.e. the number of jobs in the TAZ) and decline with the distance between the zones (typically measured in terms of travel times).  This is also rough, and some iterative rescaling process will be needed to ensure the trips produced in each TAZ and attracted to each TAZ sum to the already determined totals for each.
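As a concrete (and purely illustrative) example of the trip distribution step just described, the sketch below applies a simple gravity model with a travel-time deterrence function and then balances the resulting matrix to predetermined production and attraction totals – the iterative rescaling mentioned above, often called Furness or IPF balancing.  All of the numbers are invented; nothing here reproduces the MWCOG model.

```python
import numpy as np

# Made-up example: 3 zones, with trip productions, attractions, and
# zone-to-zone travel times (minutes).  Purely illustrative values.
productions = np.array([1000.0, 600.0, 400.0])
attractions = np.array([800.0, 700.0, 500.0])
times = np.array([[ 5.0, 20.0, 40.0],
                  [20.0,  5.0, 25.0],
                  [40.0, 25.0,  5.0]])

# Gravity model: trips rise with the attractor's "mass" and fall with travel time
# (here a simple power deterrence function).
deterrence = times ** -2.0
trips = np.outer(productions, attractions) * deterrence

# Iterative proportional fitting (Furness balancing) so that row sums match the
# production totals and column sums match the attraction totals.
for _ in range(50):
    trips *= (productions / trips.sum(axis=1))[:, None]
    trips *= (attractions / trips.sum(axis=0))[None, :]

print(np.round(trips))
print("Row sums:", np.round(trips.sum(axis=1)))   # ~ productions
print("Col sums:", np.round(trips.sum(axis=0)))   # ~ attractions
```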

This all seems crude, and it is.  Many might ask why not determine such trip distributions from a straightforward survey of households asking where they travel to.  Surveys are indeed important, and help inform what the parameters of these functions might be, but one must recognize that any practicable survey could not suffice.  The 22 TAZs defined for the Purple Line analysis were constructed (it appears; see below) from a more detailed set of TAZs defined by the Metropolitan Washington Council of Governments.  But MWCOG now identifies 3,722 separate TAZs for the Washington metro region, and travel between them would potentially involve 13.9 million possible pairs (3,722 squared)!  No survey could cover that.  Hence MWCOG had to use some form of a gravity model to allocate the trips from each zone to each zone, and that is indeed precisely what they say they did.

At this point in the process, one will have the total number of trips produced by each TAZ going to each TAZ as an attractor, which for 2040 appears as Table 8 in the FEIS chapter. This covers trips by all options, including driving.  The next step is to separate the total number of trips between those taken by car from those taken by transit, and then, at the level below, the separation of those taken by transit into each of the various transit options (e.g. Metrorail, bus, commuter rail, and the Purple Line in the scenario where it is built). This is the mode choice issue, and note that these are discrete choices where one chooses one or the other.  (A combined option such as taking a bus to a Metrorail station and then taking the train would be modeled as a separate mode choice.)  This separation into various travel modes is normally then done by what is called a nested logit (or logistic) regression model, where the choice is assumed to be a function of variables such as travel time required, out of pocket costs (such as for fares or tolls or parking), personal income, and so on.
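For readers unfamiliar with this step, the sketch below shows the basic structure of a nested logit split between driving and a transit nest, with the transit nest then divided among bus, Metrorail, and (if built) the Purple Line.  The coefficients, times, fares, and nesting parameter are all invented for illustration; the actual MWCOG/FEIS model would have many more variables and calibrated coefficients.

```python
import math

# Illustrative (invented) coefficients: utility falls with travel time and cost.
BETA_TIME = -0.05   # per minute
BETA_COST = -0.40   # per dollar
MU = 0.6            # nesting parameter for the transit nest (0 < MU <= 1)

def utility(time_min: float, cost_usd: float) -> float:
    return BETA_TIME * time_min + BETA_COST * cost_usd

# Example trip: car versus three transit sub-modes (times and fares are made up).
car = utility(30, 6.0)
transit = {"Bus": utility(55, 2.0), "Metrorail": utility(40, 4.0), "Purple Line": utility(45, 2.0)}

# Lower nest: choice among transit sub-modes, and the nest's "logsum".
exp_t = {m: math.exp(u / MU) for m, u in transit.items()}
logsum = MU * math.log(sum(exp_t.values()))

# Upper nest: car versus the transit nest as a whole.
p_transit = math.exp(logsum) / (math.exp(logsum) + math.exp(car))
shares = {m: p_transit * v / sum(exp_t.values()) for m, v in exp_t.items()}
shares["Car"] = 1 - p_transit
print({m: round(s, 3) for m, s in shares.items()})
```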

Up to this stage, the modeling work as described above would have been carried out by MWCOG as part of its regular work program (although in the scenario of no Purple Line).  Appendix A of the FEIS Travel Forecasts chapter says specifically that the modelers producing the Purple Line ridership forecasts started from the MWCOG model results (Round 8.0 of that model for the FEIS forecasts).  By aggregating from the TAZs used by MWCOG (3,722 currently, but possibly some different number in the Round 8.0 version) to the 22 defined for the Purple Line work, the team doing the FEIS forecasts would have been able to arrive at the table showing total daily trips by all forms of transportation (including driving) between the 22 TAZs (Table 8 of the FEIS chapter), as well as the total trips by some form of transit between the 22 in the base case of no Purple Line being built (the “No Build” alternative; Table 14 of the FEIS chapter).

The next step was then to model how many total transit trips would be taken in the case where the Purple Line has been built and is operating in 2040, as well as how many of such transit trips will be taken on the Purple Line specifically.  The team producing the FEIS forecasts would likely have taken the nested logit model produced by MWCOG, and then adjusted it to incorporate the addition of the Purple Line travel option, with consequent changes in the TAZ to TAZ travel times and costs.  At the top level they then would have modeled the split in travel between by car or by any form of transit, and at the next level then modeled the split of any form of transit between the various transit options (bus, Metrorail, commuter rail, and the Purple Line itself).

This then would have led to the figures shown in Table 16 of the FEIS chapter for total transit trips each day by any transit mode (with the Purple Line built), and Table 22 for trips on the Purple Line only.  Portions of those tables are shown above.  They are still in “production/attraction” format, as noted in their headings.

While understandable as a step in the process by which such ridership forecasts are generated (as just described), trips among TAZs in production/attraction format are not terribly interesting in themselves.  They really should have gone one further step, which would have been to convert from a production/attraction format to an origin/destination format.  The fact that they did not is telling.

As discussed above, a production/attraction format will show the number of trips between each production TAZ and each attraction TAZ.  Thus a regular commute for a worker from home (production TAZ) to work (attraction TAZ) will appear as two trips each day between the production TAZ and the attraction TAZ.  Thus, for example, the 1,589 trips shown as total transit trips (Table 16) between TAZ 1 (Bethesda) and TAZ 3 (Silver Spring) include not only the trips by a commuter from Bethesda to Silver Spring in the morning, but also the return trip from Silver Spring to Bethesda in the evening.  The return trip does not appear in this production/attraction format in the 4,379 trips from Silver Spring (TAZ 3) to Bethesda (TAZ 1) element of the matrix (see the portion of Table 16 shown above).  The latter is the forecast of the number of trips each day between Silver Spring as a production zone and Bethesda as an attractor.

This is easy to confuse, and I suspect that most readers seeing these tables are so confused.  What interests the reader is not this production/attraction format of the trips, which is just an intermediate stage in the modeling process, but rather the final stage showing trips from each origin TAZ to each destination TAZ.  And it only requires simple arithmetic to generate that, if one has the underlying information from the models on how many trips were produced from home to go to work or to shop or for some other purpose (where people will always then return home each day), and separately how many were produced by what they call in the profession “non-home based” activities (such as trips during the workday from business to business).
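The arithmetic involved is indeed simple.  A minimal sketch, with invented numbers, of how a production/attraction matrix would be converted to an origin/destination matrix (on the standard assumption that, over a full day, roughly half of the home-based trips run from the production zone to the attraction zone and half run back):

```python
import numpy as np

# Invented 3-zone example.  Home-based trips in production/attraction format:
# element [i, j] counts ALL daily trips produced by households in zone i and
# attracted to zone j, regardless of the direction actually traveled.
home_based_pa = np.array([[100.0, 400.0, 200.0],
                          [300.0, 150.0, 250.0],
                          [120.0,  80.0,  60.0]])

# Non-home-based trips (e.g. business to business) are already origin/destination.
non_home_based_od = np.array([[10.0, 30.0, 20.0],
                              [25.0, 15.0, 35.0],
                              [40.0, 10.0,  5.0]])

# Over a full day, roughly half of the home-based P/A trips are traveled from the
# production zone to the attraction zone, and half in the reverse direction.
home_based_od = 0.5 * (home_based_pa + home_based_pa.T)

od = home_based_od + non_home_based_od
print(np.round(od))
# Unlike the P/A matrix, the O/D matrix is close to symmetric, and its row
# totals roughly match its column totals -- the check described in the text.
```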

I strongly suspect that the standard software used for such models would have generated such trip distributions in origin/destination format, but they are never presented in the FEIS Travel Forecasts chapter.  Had they been, one would have seen what the forecast travel would have been between each of the TAZ pairs in each of the two possible directions.  One would probably have observed an approximate (but not necessarily exact) symmetry in the matrix, as travel from one TAZ to another in one direction will mostly (but not necessarily fully) be matched by a similar flow in the reverse direction, when added up over the course of a day.  For that reason also, the row totals will match or almost match the corresponding column totals.  But that will not be the case in the production/attraction format.

That the person writing up the results for this FEIS chapter did not understand that an origin/destination presentation of the travel would have been of far greater interest to most readers than the production/attraction format is telling, I suspect.  They did not see the significance.  Rather, what was written up was mostly simply a restatement of some of the key numbers from the tables, with little to no attempt to explain why they were what they were.  It is perhaps then not surprising that the author did not notice the impossibility of the forecast ridership between many of the TAZ pairs being higher on the Purple Line alone (Table 22) than the total ridership on all transit options together (Table 16).

C.  User Benefits and Time Savings

The modeling exercise also produced a forecast of “user benefits” in the target year. These benefits are measured in units of time (minutes or hours) and arise primarily from the forecast savings in the time required for a trip, where estimates are made as to how much less time will be required for a trip if one has built the light rail line.  I would note that there are questions as to whether there would in fact be any time savings at all (light rail lines are slow, particularly in designs where they travel on streets with other traffic, which will be the case here for much of the proposed route), but for the moment let’s look at what the modelers evidently assumed.

“User benefits” then include a time-value equivalent of any out-of-pocket cost savings (to the extent any exists; it will be minor here for most), plus a subjective premium for what is judged to be the superior quality of a ride on a rail car rather than a regular bus. The figures in the AA/DEIS (see Table 6-2 in Chapter 6) indicate a premium of 19% was added in the case of the medium light rail alternative – the alternative that evolved into what is now the Purple Line.  The FEIS Travel Forecasts chapter does not indicate what premium they now included, but presumably it was similar.  User benefits are thus largely time savings, with some markup to reflect a subjective premium.

Forecast user benefits are important for two reasons.  One is that it is such benefits which are, to the extent they in fact exist, the primary driver of predicted ridership on the Purple Line, i.e. travelers switching to the Purple Line from other transit options (as well as from driving, although the forecast shifts out of driving were relatively small).  Second, the forecast user benefits are also important as they provide the primary metric used to estimate the benefit of building the Purple Line. Thus if the inputs used to indicate what the time savings would be by riding the Purple Line as opposed to some other option were over-estimated, one will be both over-estimating ridership on the line and over-estimating the benefits.

And it does appear that those time savings and user benefits were over-estimated.  Table 23 of the FEIS chapter presents what it labels the “Minutes of User Benefits per Project Trip”.  A screenshot of the upper left corner, focussed on the travel within the 7 TAZs through which the Purple Line would pass, is:

Note that while the author of the chapter never says what was actually done, it appears that Table 23 was calculated implicitly by dividing the figures in Table 21 of the FEIS Travel Forecasts chapter (showing calculated total hours of time savings daily for each TAZ pair) by those in Table 22 (showing the number of daily trips on the Purple Line, the same table as was discussed in the section above), and then converting from hours to minutes.  This would have been a reasonable approach, given that the time savings figures include that saved by all the forecast shifts among transit alternatives (as well as from driving) should the new rail line be built.  The Table 23 numbers thus show the overall time saved across all travel modes, per Purple Line trip.
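A quick aggregate check of that relationship is possible from totals cited elsewhere in the chapter – the 33,960 hours of daily user benefits from Table 21 (discussed in Section E below) and the 68,650 daily Purple Line trips from Table 22:

```python
# Aggregate check of the "minutes of user benefits per project trip" figure.
total_user_benefit_hours = 33_960   # Table 21 total, daily hours (cited in Section E below)
daily_purple_line_trips = 68_650    # Table 22 total, daily Purple Line trips

minutes_per_trip = total_user_benefit_hours * 60 / daily_purple_line_trips
print(round(minutes_per_trip, 1))   # ~29.7, i.e. the ~30 minutes per trip shown in Table 23
```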

But the figures are implausible.  Taking the most extreme cases first, the table says that there would be an average of 582 minutes of user benefits per trip for travel on the Purple line between Bethesda (TAZ 1) and Riverdale Park (TAZ 6), and 691 minutes per trip between Bethesda (TAZ 1) and New Carrollton (TAZ 7).  This works out to user benefits per trip of 9.7 hours and 11.5 hours respectively!  One could walk faster!  And this does not even take into account that travel between Bethesda and New Carrollton would be faster on Metrorail (assuming the system is still functioning in 2040).  The FEIS Travel Forecasts chapter itself, in its Table 6, shows that Metrorail between these two stations currently requires 55 minutes.  That time should remain unchanged in the future, assuming Metrorail continues to operate.  But traveling via the Purple Line would require 63 minutes (Table 11) for the same trip.  There would in fact be no time savings at all, but rather a time cost, if there were any riders between those two points.

Perhaps some of these individual cases were coding errors of some sort.  I cannot think of anything else which would have led to such results.  But even if one sets such individual cases aside, I find it impossible to understand how any of these user benefit figures could have followed from building a rail line.  They are all too large.  For example, the FEIS chapter provides in its Table 18 a detailed calculation of how much time would be saved by taking a bus (under the No Build alternative specifically) versus taking the proposed Purple Line.  Including average wait times, walking times, and transfers (when necessary), it found a savings of 11.4 minutes for a trip from Silver Spring (TAZ 3) to Bethesda (TAZ 1); 2.6 minutes for a trip from Bethesda (TAZ 1) to Glenmont (TAZ 9); and 8.0 minutes for a trip from North DC (TAZ 15) to Bethesda (TAZ 1).  Yet the minutes of user benefits per trip for these three examples from Table 23 (see the full table in the FEIS chapter) were 25 minutes, 19 minutes, and 25 minutes, respectively.  Even with a substantial premium for the rail options, I do not see how one could have arrived at such estimates.

And the figures matter.  The overall average minutes of user benefits per project trip (shown at the bottom of Table 23 in the FEIS chapter) came to 30 minutes.  If this were a more plausible average of 10 minutes, say, then with all else equal, the cost-effectiveness ratio would be three times worse.  This is not a small difference.

Importantly, the assumed figures on time savings will also matter to the estimates made of the total ridership on the Purple Line.  The forecast number of daily riders in 2040 of 68,650 (Table 22) or 69,300 (in other places in the FEIS chapter) was estimated based on inputs of travel times required by each of the various modes, and from this how much time would be saved by taking the Purple Line rather than some other option.  With implausibly large figures for travel time savings being fed in, the ridership forecasts will be too high.  If the time savings figures being fed in are far too large, the ridership forecasts will be far too high.  This is not a minor matter.

D.  Ridership by Line Segment

An important estimate is of how many riders there will be between any two station to station line segments, as that will determine what the system capacity will need to be.  Rail lines are inflexible, and completely so when, as would be the case here, the trains would be operated in full from one end of the line to the other.  The rider capacity (size) of the train cars and the spacing between each train (the headway) will then be set to accommodate what is needed to service ridership on what would be the most crowded segment.

Figure 10 of the FEIS Travel Forecasts chapter provides what would be a highly important and useful chart of ridership on each line segment, showing, it says, how many riders would (in terms of the daily average) arrive at each station, how many of those riders would get off at that station, and then how many riders would board at that station.  That would then produce the figure for how many riders will be on board traveling to the next station.  And one needs to work this out for going in each direction on the line.
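The bookkeeping that Figure 10 is meant to present is simple to reproduce.  Here is a minimal sketch, with invented boardings and alightings for a short line running in one direction, of how the on-board load on each segment – and hence the peak-load segment that sets the required capacity – would be derived:

```python
# Invented example: daily boardings ("on") and alightings ("off") at each
# station, for travel in one direction along a short line.
stations = ["A", "B", "C", "D"]
ons  = [500, 700, 300,   0]   # no one boards at the final station
offs = [  0, 200, 400, 900]   # no one alights at the first station

load = 0
segment_loads = []
for i in range(len(stations) - 1):
    load += ons[i] - offs[i]          # riders on board when leaving station i
    segment_loads.append(load)
    print(f"{stations[i]} -> {stations[i+1]}: {load} riders on board")

print("Peak segment load:", max(segment_loads))
# Within each direction, total ons must equal total offs; and over a full day
# boardings and alightings at any one station should roughly balance across the
# two directions (people who ride in generally ride back out).  That is why
# Figure 10's 19,800 exiting versus 10,210 boarding at Bethesda cannot be right.
```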

Here is a portion of that figure, showing the upper left-hand corner:

Focussing on Bethesda (one end of the proposed line), the chart indicates 10,210 riders would board at Bethesda each day, while 19,800 riders would exit each day from arriving trains.  But how could that be?  While there might be a few riders who might take the Purple Line in one direction to go to work or for shopping or for whatever purpose, and then take an alternative transportation option to return home, that number is small, and would to some extent be balanced out by riders going in the opposite direction.  Setting this small possible number aside, the figures in the chart imply that close to twice as many riders will be exiting in Bethesda as will be entering.  They imply that Bethesda would be seeing its population grow by almost 9,600 people per day.  This is not possible.

But what happened is clear.  The tables immediately preceding this figure in the FEIS Travel Forecasts chapter (Tables 24 and 25) purport to show for each of the 21 stations on the proposed rail line, what the daily station boardings will be, with a column labeled “Total On” at each station and a column labeled “Total Off”.  Thus for Bethesda, the table indicates 10,210 riders will be getting on, while 19,800 will be getting off.  While for most of the stations, the riders getting on at that station could be taking the rail line in either direction (and those getting off could be arriving from either direction), for the two stations at the ends of the line (Bethesda, and at the other end New Carrollton) they can only go in one direction.

But as an asterisk for the “Total On” and “Total Off” column headings explicitly indicates, the figures in these two columns of Table 24 are in production/attraction format.  That is, they indicate that Bethesda will be “producing” (mostly from its households) a forecast total of 10,210 riders each day, and will be “attracting” (mostly from its businesses) 19,800 riders each day.  But as discussed above, one must not confuse the production/attraction presentation of the figures, with ridership according to origin/destination.  A household where a worker will be commuting each day to his or her office will be shown, in the production/attraction format, as two trips each day from the production TAZ going to the attraction TAZ.  They will not be shown as one trip in each direction, as they would have been had the figures been converted to an origin/destination presentation.  The person that generated the Figure 10 numbers confused this.

This was a simple and obvious error, but an important one.  Because of this mistake, the figures shown in Figure 10 for ridership between each of the station stops are completely wrong.  This is also important because ridership forecasts by line segment, such as what Figure 10 was supposed to show, are needed in order to determine system capacity.  The calculations depicted in the chart conclude that peak ridership on the line would be 21,400 each day on the segment heading west from the Woodside / 16th Street station (still part of Silver Spring) towards Lyttonsville.  Hence the train car sizes and the train frequency would need to be, according to these figures (but incorrectly), adequate to carry 21,400 riders each day.  That is their forecast of ridership on the busiest segment.  The text of the chapter notes this specifically as well (see page 56).

That figure is critically important because the primary argument given by the State of Maryland for choosing a rail line rather than one of the less expensive as well as more cost-effective bus options, is that ridership will be so high at some point (not yet in 2040, but at some uncertain date not too long thereafter) that buses would be physically incapable of handling the load.  This all depends on whether the 21,400 figure for the maximum segment load in 2040 has any validity.  But it is clearly far too high; it leads to almost twice as many riders going into Bethesda as leave.  It was based on confusing ridership in a production/attraction format with ridership by origin/destination.

Correcting for this would lead to a far lower maximum load, even assuming the rest of the ridership forecasts were correct.  And at a far lower maximum load, there is even less of a case against investing in a far less expensive, as well as more cost-effective, system of upgraded bus services for the corridor.

E.  Other Issues

There are numerous other issues in the FEIS Travel Forecasts chapter which lead one to question how carefully the work was done.  One oddity, as an example and perhaps not important in itself, is that Tables 17 and 19, while titled differently, are large matrices where all the numbers contained therein are identical.  Table 17 is titled “Difference in Daily Transit Trips (2040 Preferred Alternative minus No Build Alternative) (Production/Attraction Format)”, while Table 19 is titled “New Transit Trips with the Preferred Alternative (Production/Attraction Format)”.  That the figures are all identical is not surprising – the titles suggest they should be the same.  But why show them twice?  And why, in the text discussing the tables (pp. 41-42), does the author treat them as if they were two different tables, showing different things?

But more importantly, there are a large number of inconsistencies in key figures between different parts of the chapter.  Examples include:

a)  New transit trips in 2040:  Table 17 (as well as 19) has that there would be 19,700 new transit trips daily in the Washington region in 2040, if the Purple Line is built (relative to the No Build alternative).  But on page 62, the text says the number would be 16,330 new transit trips in 2040 if it is built.  And Table B-1 on page 67 says there would be 28,626 new transit trips in 2040 (again relative to No Build).  Which is correct?  One is 75% higher than another, which is not a small difference.

b)  Total transit trips in 2040:  Table 16 says that there would be a total of 1,470,620 total transit trips in the Washington region in 2040 if the Purple Line is built, but Table B-1 on page 67 puts the figure at 1,683,700, a difference of over 213,000.

c)  Average travel time savings:  Table 23 indicates that average minutes of “user benefits” per project trip would be 30 minutes in 2040 if the Purple Line is built, but the text on page 62 says that average travel time savings would “range between 14 and 18 minutes per project trip”.  This might be explained if they assigned a 100% premium to the time savings for riding a rail line, but if so, such an assumed premium would be huge.  As noted above, the premium assigned in the AA/DEIS for the Medium Light Rail alternative (which was the alternative later chosen for the Purple Line) was just 19%.  And the 14 to 18 minutes figure for average time savings per trip itself looks too large. The simple average of the three representative examples worked out in Table 18 of the chapter was just 7.3 minutes.

d)  Total user benefit hours per day in 2040:  The text on page 62 says that the total user benefit hours per day in 2040 would sum to 17,175.  But Table B-5 says the total would come to 24,073 hours (shown as 1,444,403 minutes, and then divided by 60), while Table 21 gives a figure of 33,960 hours.  The highest figure is almost double the lowest.  Note that the 33,960 hours figure is also shown in Table 20, which then presents it as 203,760 minutes (it should be 2,037,600 minutes – they multiplied by 6, rather than 60, in converting hours to minutes; see the quick check just below).

There are other inconsistencies as well.  Perhaps some can be explained.  But they suggest that inadequate attention was paid to ensure accuracy.
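As a quick check on item (d), the unit conversion error is easy to verify:

```python
hours = 33_960
print(hours * 60)    # 2,037,600 minutes -- the correct conversion
print(hours * 6)     # 203,760 -- the figure actually shown in Table 20
```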

F.  Conclusion

There are major problems with the forecasts of ridership on the proposed Purple Line.  The discussion above examined several of the more obvious ones.  There may well be more. Little explanation was provided in the documentation on how the forecasts were made and on the intermediate steps, so one cannot work through precisely what was done to see if all is reasonable and internally consistent.  Rather, the FEIS Travel Forecasts chapter largely presented just the final outcomes, with little description of why the numbers turned out to be what they were presented to be.

But the problems that are clear even with the limited information provided indicate that the correct Purple Line ridership forecasts would likely be well less than what their exercise produced.  Specifically:

a)  Since the Purple Line share of total transit use can never be greater than 100% (and will in general be far less), a proper division of transit ridership between the Purple Line and other transit modes will result in a figure that is well less than the 30,560 forecast for Purple Line ridership for trips wholly within the Purple Line corridor alone (shown in Table 22).  The corridor covers seven geographic zones which, as defined, stretch often from the Beltway to the DC line (or even into DC), and from Bethesda to New Carrollton.  There is a good deal of transit ridership within and between those zones, which include four Metrorail lines with a number of stations on each, plus numerous bus routes.  Based on the historical estimates for transit ridership (for 2005), the forecasts for total transit ridership in 2040 within and between those zones look reasonable.  The problem, rather, is with the specific Purple Line figures, with figures that are often higher (often far higher) than the figures for total transit use.  This is impossible.  Rather, one would expect Purple Line ridership to be some relatively small share (no more than a quarter or so, and probably well less than that) of all transit users in those zones.  Thus the Purple Line ridership forecasts, if properly done, would have been far lower than what was presented.  And while one cannot say what the precise figure would have been, it is a mathematical certainty that it cannot account for more than 100% of total transit use within and between those zones.

b)  The figures on user benefits per trip (Table 23) appear to be generally high (an overall average of 30 minutes) and sometimes ridiculously high (9.7 hours and 11.5 hours per trip in two cases).  At more plausible figures for time savings, Purple Line ridership would be far less.

c)  Even with total Purple Line ridership at the official forecast level (69,300), there will not be a concentration in ridership on the busiest segment of 21,400 (Figure 10).  The 21,400 figure was derived based on an obvious error – from a confusion in the meaning of the production/attraction format.  Furthermore, as just noted above, correcting for other obvious errors implies that total Purple Line ridership will also be far less than the 69,300 figure forecast, and hence the station to station loads will be far less.  The design capacity required to carry transit users in this corridor can therefore be far less than what these FEIS forecasts said it would need to be.  There is no need for a rail line.

These impossibilities, as well as inconsistencies in the figures cited at different points in the chapter for several of the key results, all suggest insufficient checks in the process to ensure the forecasts were, at a minimum, plausible and internally consistent.  For this reason, or whatever the cause, forecasts that are on their face impossible were nonetheless accepted and used to justify building an expensive rail line in this corridor.

And while the examination here has only been of the Purple Line, I suspect that such issues often arise in other such transit projects, and indeed in many proposed public infrastructure projects in the US.  When agencies responsible for assessing whether the projects are justified instead see their mission as project advocates, a hard look may not be taken at analyses whose results support going ahead.

The consequence is that a substantial share of the scarce funds available for transit and other public infrastructure projects is wasted.  Expensive new projects get funded (although only a few, as money is limited), while boring simple projects, as well as the maintenance of existing transit systems, get short-changed, and we end up with a public infrastructure that is far from what we need.

Fund the Washington Area Transit System With A Mandatory Fee on Commuter Parking Spaces

A.  Introduction

The Washington region’s primary transit authority (WMATA, for Washington Metropolitan Area Transit Authority, which operates both the Metrorail system and the primary bus system in the region) desperately needs additional funding.  While there are critical issues with management and governance which also need to be resolved, everyone agrees that additional funding is a necessary, albeit not sufficient, element of any recovery program. This post will address only the funding issue.  While important, I have nothing to contribute here on the management and governance issues.

WMATA has until now been funded, aside from fares, by a complex set of financial contributions from a disparate set of political jurisdictions in the Washington metropolitan region (four counties, three municipalities, plus Washington, DC, the states of Maryland and Virginia, and the federal government, for a total of 11 separate political jurisdictions).  As with governments everywhere, budgets are limited.  Not surprisingly, the decisions on how to share out the costs of WMATA are politically difficult, and especially so as a higher contribution by one jurisdiction, if not matched by others, will lead to a lower share in the costs by those others.  And unlike most large transit systems in the US, WMATA depends entirely (aside from fares) on funding from political jurisdictions.  It has no dedicated source of tax revenues.

This is clearly not working.  Everyone agrees that additional funding is needed, and most agree that a dedicated funding source needs to be created to supplement the funds available to WMATA.  But there is no agreement on what that additional funding source should be.  There have been several proposals, including an increase in the sales tax rate in the region or a special additional tax on properties located near Metro stations, but each has difficulties and there is no consensus.  As I will discuss below, there are indeed issues with each.  They would not provide a good basis for funding transit.

The recommendation developed here is that a fee on commuter parking spaces would provide the best approach to providing the additional funding needed by the Washington region’s transit system.  This alternative has not figured prominently in the recent discussion, and it is not clear why.  It might be because of an unfounded perception that such a fee would be difficult to implement.  As discussed below, this is not the case at all.  It could be easily implemented as part of the property tax system that is used throughout the Washington region.  It should be considered as an approach to raising the funds needed, and would perhaps serve as an alternative that could break the current impasse resulting from a lack of consensus for any of the other alternatives that have been put forward thus far.

Four factors need to be considered in any assessment of possible options to fund the transit systems.  These are:

  • Feasibility:  Would it be possible to implement the option in practical terms?  If it cannot be implemented, there is no point in considering it further.
  • Effectiveness:  Would the option be able to raise the amount of funds needed, with the parameters (such as the tax rates) at reasonable levels that would not be so high as to create problems themselves?
  • Efficiency:  Would the economic incentives created by the option work in the direction one wants, or the opposite?
  • Fairness:  Would the tax or option be fair in terms of who would pay for it?  Would it be disproportionately paid for by the poor, for example?

This blog post will assess to what degree these four tests are met by each of several major options that have been proposed to provide additional funding to WMATA.  A mandatory fee on parking spaces will be considered first, and in most detail.  Many will call this a tax on parking, and that is OK.  It is just a label.  But I would suggest it should be seen as a fee on rush hour drivers, who make use of our roads and fill them up to the point of congestion.  It can be considered similar to the fees we pay on our water bills – one would be paying a fee for using our roads at the times when their capacity is strained.  But one should not get caught up in the polemics:  Whether tax or mandatory fee, they would be a charge on the parking spaces used by those commuters who drive.

Other options then considered are an increase in the bus and rail fares charged, an increase in the sales tax rate on all goods purchased in the region, and enactment of a special or additional property tax on land and development close to the Metrorail stations in the region.

No one disputes that enactment of any of these taxes or fees or higher fares will be politically difficult.  But the Washington region would collapse if its Metrorail system collapsed.  Metrorail was until recently the second busiest rail transit system in the US in terms of ridership (after New York).  However, Metrorail ridership declined in recent years, to the point that it was 17% lower in FY2016 than it was in FY2010.  The decline is commonly attributed to a combination of relatively high fares, lack of reliability, and the increased safety concerns of recent years, combined most recently with periodic shutdowns on line segments in order to carry out urgent repairs and maintenance.  Despite this, Metrorail in 2016 was still the third busiest rail system in the country (just after Chicago).

But the Washington region cannot afford this decline in transit use.  Its traffic congestion, even with Metro operating, is by various measures either the worst in the nation or one of the worst.  Furthermore, the traffic congestion is not just in or near the downtown area.  As offices have migrated to suburban centers over the last several decades, traffic during rush hour is now horrendous not simply close to the city center, but throughout the region.  See, for example, this screen shot of a Google Maps image I took on a typical weekday afternoon during rush hour (5:30 pm on Tuesday, April 18):

The roads shown in red have traffic backed up.  The congestion is bad not only around downtown and on the notoriously congested Capital Beltway, but also on roads at the very outer reaches of the suburbs.  The problem is region-wide, and it is in the interest of everyone in the region that it be addressed.

A good and well-run transit system will be a necessary component of what is needed to fix this, although it is just the minimum.  And for this, it will be fundamental that there be a change in approach:  from a short-term focus on resolving the immediate crisis with some patch, to a perspective that focuses on how best to utilize, and over time enhance, the overall transportation assets of the Washington region.  These include not only the Metro system assets (where a value of $40 billion has been commonly cited, presumably based on historical cost) but also the highways, bridges, and parking facilities of the region, with a cost and a value that would add up to far more.  These assets are not well utilized now.  A proper funding system for WMATA should take this into account.  If it does not, one can end up with empty seats on transit while the roads are even more congested.

The first question, however, is how much additional funding is required for WMATA.  The next section will examine that.

B.  WMATA’s Additional Funding Needs

How much is needed in additional funding for WMATA?  There is not a simple answer, and any answer will depend not only on the time frame considered but also on what the objective is.

To start, the FY18 budget for WMATA as originally drawn up in the fall of 2016 found there to be a $290 million gap between the expenditures it considered necessary under current plans, and the revenues it forecast it would receive from fares (and other revenue-generating activities such as parking fees at the stations and advertising) plus what would be provided under existing formulae from the political jurisdictions.  This gap was broadly similar in magnitude to the gaps found in recent years at a similar stage in the process.  And as in earlier years, this $290 million gap was largely closed by one-off measures that could not (or at least should not) be used again.  In particular, funds were shifted from planned expenditures to maintain or build up the capital assets of the system, to cover current operating costs instead.

Looking forward, all the estimates of the additional funding needs are far higher.  To start, an analysis by Jeffrey DeWitt, the CFO of Washington, DC, released in October 2016 as part of a Metropolitan Washington Council of Governments (COG) report, estimated that at a minimum, WMATA faced a shortfall over the next ten years averaging $212 million per year on current operations and maintenance, and $330 million per year for capital needs, for a total of $542 million a year.  This estimate was based on an assumption of a capital investment program summing to $12 billion over the ten years.

But the “10-Year Capital Needs” report issued by WMATA a short time later estimated that the 10-year capital needs of WMATA would be $17.4 billion simply to bring Metro assets up to a “state of good repair” and maintain them there.  It estimated an additional $8 billion would be needed for modest new investments – needed in part to address certain safety issues.  But even if one limited the ten-year capital program to the $17.4 billion to get assets to a state of good repair, there would be a need for an additional $540 million a year over the October 2016 DeWitt estimates, i.e. a doubling of the earlier figure to almost $1.1 billion a year.

A more recent, and conservative, figure has been provided by Paul Wiedefeld, the General Manager of WMATA, in a report released on April 19.  While Metro has capital needs totaling $25 billion over the next ten years, he proposed that a minimum of $15.5 billion be covered for the system “to remain safe and reliable”.  Even with this reduced capital investment program, he estimated that if funding from the jurisdictions remained at historical levels, there would be a 10-year funding gap of $7.5 billion.  If jurisdictional funding were instead to rise at 3% a year in nominal terms, he estimated that $500 million a year would still be needed from some new funding source.

But this was just for the capital budget, and a highly constrained one at that.  There would, in addition, be a $100 million a year gap in the operating budget, even with the funding from the jurisdictions for operations rising also at 3% a year.  Wiedefeld suggested that it might be possible to reduce operating costs by that amount.  However, this would require cutting primarily labor expenditures, as direct labor costs account for 74% of operating expenditures.  Not surprisingly, the WMATA labor union is strongly opposed.

Even more recently, the Metropolitan Washington Council of Governments issued on April 26 the final report of a panel it convened (hereafter COG Panel or COG Panel Report) that examined Metro funding options.  The panel was made up of senior local administrative and budget officials.  While the focus of the report was an examination of different funding options (and will be discussed further below), it took as a basis of its estimated needs that WMATA would need to cover a ten-year capital investment program of $15.6 billion (to reach and maintain a “state of good repair” standard).  After assuming a 3% annual increase in what the political jurisdictions would provide, it estimated the funding gap for the capital budget would sum to $6.2 billion. Assuming also a 3% annual increase in funding from the political jurisdictions for operations and maintenance (O&M), it estimated a remaining funding gap of $1.3 billion for O&M.  The total gap for both capital and O&M expenses would thus sum to $7.5 billion over the period.

But while these COG estimates were referred to as 10-year funding gaps (thus averaging $750 million per year), the table on page 13 of its PowerPoint presentation on the report makes clear that these are actually the funding gaps for the eight-year period of FY19 to FY26.  FY17 is already almost over, and the FY18 budget has already been settled.  For the eight-year period from FY19 onward, the additional funding needed averages $930 million per year.  The COG Panel recommended, however, a dedicated funding source that would generate less, at $650 million per year to start (which it assumes would be in 2019).  The reason for this difference is that the COG Panel also recommended that WMATA borrow additional funds in the early years against that new funding stream, so as to cover the higher figure ($930 million on average per year over FY19-26) of what is in fact needed.  While such borrowing would supplement what could be funded in the early years, the resulting debt service would then subtract from what could be funded later.  Prudent borrowing certainly has a proper role, but future funding needs will be higher than they are right now, and thus this will not provide a long-term solution to the funding issue.  More funding will eventually (and soon) be required.

All the figures reviewed thus far assume capital investment programs that would only just suffice to bring existing assets up to a “state of good repair”, with nothing done to add to those assets.  It also appears that the estimates were influenced at least to some extent by what the analysts thought might be politically feasible.  Yet additional capacity will be needed if the Washington region is to continue to grow.  While these additional amounts are much more speculative, there is no doubt that they are large, indeed huge.

The most careful recent study of long-term expansion needs is summarized in a series of reports released by WMATA in early 2016.  A number of rail options were examined (mostly extensions of existing rail lines), with the conclusion that the highest priority for a 2040 time horizon was to enhance capacity at the center of the system.  Portions of these central lines are already strained or at full capacity, in particular the tunnel segment under the Potomac from Rosslyn.  Under this plan, there would be a new underground loop for the Metro lines around downtown Washington, extending across the Potomac to Rosslyn and the Pentagon.  It is not clear that a good estimate has yet been made of what this would cost, but the Washington Post gave a figure of $26 billion for an earlier variant (along with certain other expenditures).  This would clearly be a multi-decade project, and if anything like it is to be done by 2040, work would need to begin within the current 10-year WMATA planning horizon.  Yet given WMATA’s current difficulties, there has been little focus on these long-term needs.  And nothing has been provided for them.

To sum up, how much in additional funding is needed?  While there is no precise number, in part because the focus has been on the immediate crisis and on what might be considered politically feasible, for the purposes of this post we will use the following.  At a minimum, we will look at what would be needed to generate $650 million per year, the same figure arrived at in the COG Panel Report.  But this figure is clearly at the low end of the range of what will be needed.  At best, it will suffice only for a few years.  Our political leaders in the region should recognize that this will need to rise to at least $1 billion per year within a few years if necessary investments are to be made to ensure the system not only reaches a “state of good repair” but also sustains it.  Furthermore, it will need to rise further to perhaps $2.0 billion a year by around 2030 if anything close to the system capacity that will be needed by 2040 is to be achieved.

For the analysis below, we will therefore look at what the rates would need to be to generate $650 million a year at the low end and roughly three times this ($2.0 billion a year in nominal terms, by the year 2030) at the high end.  These figures are of course only illustrative of what might be required.  For the 2030 projections, I will assume (consistent with what the COG Panel did) that inflation from now until then runs at 2% a year while real growth in the region is, conservatively, 1% a year.  Note that $2.0 billion in 2030 in nominal terms would be equivalent to $1.55 billion in dollars of today (2017) if inflation runs at 2% a year.

It is important to recognize that providing just the low-end figure of $650 million a year will not suffice for more than a few years.  It does provide a starting point, and while that is important, when considering such a major reform as moving to a dedicated funding source to supplement government funding sources, one should really be thinking longer term.  Not much would be gained by moving to a funding source which would prove insufficient after just a few years, leading to yet another crisis.

C.  A Mandatory Fee on Commuter Parking Spaces

A fee would be assessed (generally through the property tax system) on all parking spaces used by office and other commuting employees.  It would not be assessed on residential parking, nor on customer parking linked to retail or other such commercial space, but would be limited to the all-day parking spots that commuters use.

It would be straightforward to implement.  The owners of the property with the parking spaces would be assessed a fee for each parking space provided.  For example, if the fee is set at $1 per day per space, a fee of $250 per year would be assessed (based on 250 work-days a year, of 52 weeks at 5 days per week less 10 days for holidays).  It would be paid through the regular property tax system, and collected from the owners of that land along with their regular property taxes on the semi-annual (or quarterly or whatever) basis that they pay their property taxes. The owners of the spaces would be encouraged to pass along the costs to those employees who drive and use the spaces (and owners of commercial parking lots will presumably adjust their monthly fees to reflect this), but it would be the owners of the parking spaces themselves who would be immediately liable to pay the fees.

Property records will generally have the number of parking spaces provided on those plots of land.  This will certainly be so in the case of underground parking provided in modern office buildings and in multi-story commercial parking garages.  And I suspect there will similarly be a record of the number of spaces in surface parking lots.  But even if not, it would be straightforward to determine their number.  Property owners could be required to declare them, subject to spot-checks and fines if they did not declare them honestly.  One can now even use satellite images available on Google Maps to count such spaces.  And a few years ago my water bills started to include a monthly fee for the square footage of impermeable space on my land (primarily from roofs and driveways), as drainage from such surfaces feeds into stormwater drains and must ultimately be treated before being discharged into the Potomac River.  The authorities determined, through the property records system and from satellite images, the square footage of such surfaces on all individual properties.  If that can be done, one can certainly determine the number of parking spaces on open lots.

There are, however, a few special cases where property taxes are not collected and where different arrangements will need to be made.  But this can be done.  Specifically:

  1. Properties owned by federal, state, and local governments will generally not pay property taxes.  But the mandatory fees on parking spaces could still be collected by these government entities and paid into the system, just as by private property owners.  Presumably the governments would support this, since the fee would supplement the funds they already provide to WMATA.
  2. Similarly, international organizations located in the Washington region, such as the World Bank, the IMF, the Inter-American Development Bank, and others (mostly much smaller), operate under international treaties which provide that they do not owe property taxes on properties they own.  But as with governments, they could collect such fees on the parking spaces made available to their employees who drive to work.  They already charge their employees monthly fees for the spaces, and the new fee could be added on.  And while I am not a lawyer, it might well be the case that such a fee on parking spots could be made mandatory.  The institutions do pay the fees charged for the water they use, and their employees do pay sales taxes on the food they purchase in their cafeterias.  Finally, these institutions advise governments to follow good policy; the same standard should apply to them here.
  3. There are also non-profit hospitals, universities, and similar institutions, which are major employers in the region but which may not be charged property taxes. However, the fee on parking spaces, while collected for most through the property tax system, can be seen as separate from regular property taxes.  It is a fee on commuters who make use of our road system and add to its congestion.  The parking fees could still be collected and paid in, even if no regular property taxes are due.
  4. Finally, the Washington region has a large number of embassies and other properties with strict internationally recognized immunities.  It might well be the case that it will not be possible to collect such a mandatory fee on parking spots for their employees (although again, presumably the embassies pay the fees on their water bills).  But the total number employed through such embassies is tiny as a share of total employment in the DC region.  And some embassies might well pay voluntarily, recognizing that they too are members of the local community, making use of the same roads.  Finally, note that embassy employees with diplomatic status also do not pay sales tax on their day-to-day purchases, while the embassy compounds themselves do not pay property taxes.  Proposals to fund WMATA through new or higher property taxes or sales taxes (discussed below) will face similar issues.  But as noted above, the amounts involved are tiny.

How, then, would such a mandatory fee on commuter parking spaces stand up under the four criteria noted above?:

a)  Feasibility:  As just discussed, such a fee on commuter parking spaces, implemented generally through the regular property tax system, would certainly be feasible.  It could be done.  It may well be a lack of recognition of this that explains why such an option has typically not been given much consideration when alternatives are reviewed for how to fund a transit system such as WMATA.  It appears that most believe it would require some system to be set up which would mandate a payment each day as commuters enter their parking lots.  But there is no need for that.  Rather, the fee could be imposed on the owner of the parking space and collected as part of their property tax payments.  It would be up to the owner of that space to decide whether to pass along the cost to the commuters making use of those spaces (although passing along the cost should certainly be encouraged, so that commuters face the cost of their decision to drive).

b)  Effectiveness:  The next question is whether such a fee, at reasonable rates, would generate the funds needed.  To determine this, one first needs to know how many such parking spots there are in the Washington region.  While more precise figures can be generated later, all that is needed at this point is a rough estimate.

As of January 2017, the Bureau of Labor Statistics estimated there were 3,217,400 employees in the Washington region’s Metropolitan Statistical Area (MSA).  While this MSA is slightly larger than the set of jurisdictions that participate in the WMATA regional compact, the additional counties at the fringes of the region are relatively small in population and employment.  This figure on regional employment can then be coupled with the estimate from the most recent (2016) Metropolitan Washington COG “State of the Commute” survey, which concluded that 61.0% of commuters drive alone to work, while an additional 5.4% drive in either car-pools or van-pools.  Assuming an average of 2.5 riders in car-pools and van-pools (van-pools are relatively minor in number), cars carrying commuters to their jobs work out to 63.2% of total employment.  Applying this 63.2% to the 3,217,400 figure for the number employed, an estimated 2,033,400 cars are used to carry commuters.  The total number of parking spaces will be somewhat greater, as parking lots normally have some degree of excess capacity, but this can be ignored for the estimate here.  Rounding down, there are roughly 2 million commuter parking spaces in the DC region.  And this number can be expected to grow over time.
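For readers who want to check or vary this arithmetic, here is a minimal sketch of the calculation (in Python).  The employment figure, mode shares, and assumed car-pool occupancy are the ones cited above; nothing else is added.

```python
# Rough estimate of the number of commuter cars (and hence all-day parking
# spaces) in the Washington region, using the figures cited above.

employment = 3_217_400      # BLS estimate for the Washington MSA, January 2017
share_drive_alone = 0.610   # COG "State of the Commute" survey (2016)
share_pooled = 0.054        # commuters in car-pools or van-pools
avg_pool_occupancy = 2.5    # assumed average riders per pooled vehicle

# Cars per employee: solo drivers, plus pooled vehicles (riders / occupancy).
cars_per_employee = share_drive_alone + share_pooled / avg_pool_occupancy
commuter_cars = employment * cars_per_employee

print(f"Cars per employee: {cars_per_employee:.3f}")   # ~0.632
print(f"Commuter cars:     {commuter_cars:,.0f}")      # ~2.03 million
```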

With 2 million parking spaces, a daily fee of $1 would generate $500 million per year (based on 250 work-days per year).  A fee of $1.30 per day would generate $650 million. And assuming commuter parking spots grow at 1% a year (along with the rest of the regional economy) to 2030, a $3.50 fee in 2030 would generate $2.0 billion in the prices of that year (equivalent to $2.70 per day in the prices of 2017, assuming 2% annual inflation for the period).
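A similar sketch shows how the revenue figures follow from the roughly 2 million spaces.  The fee levels, the 250 work days, and the 1% growth and 2% inflation assumptions are those used in the text; the rest is arithmetic.

```python
# Annual revenue from a per-day fee on roughly 2 million commuter parking spaces.

spaces_2017 = 2_000_000
work_days = 250        # 52 weeks x 5 days, less 10 holidays
growth = 0.01          # assumed annual growth in the number of spaces
inflation = 0.02       # assumed annual inflation

def annual_revenue(fee_per_day, spaces):
    return fee_per_day * spaces * work_days

print(f"$1.00/day in 2017: ${annual_revenue(1.00, spaces_2017)/1e6:,.0f} million")  # ~$500 million
print(f"$1.30/day in 2017: ${annual_revenue(1.30, spaces_2017)/1e6:,.0f} million")  # ~$650 million

years = 2030 - 2017
spaces_2030 = spaces_2017 * (1 + growth) ** years
print(f"$3.50/day in 2030: ${annual_revenue(3.50, spaces_2030)/1e9:.2f} billion (nominal)")  # ~$2.0 billion
print(f"$3.50 in 2030 is ${3.50 / (1 + inflation) ** years:.2f} in 2017 prices")             # ~$2.70
```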

Compared to the cost of driving, fees of $1.30 per day or even $3.50 per day are modest.  While many workers do not pay for their parking (or for the full cost of their parking), the actual cost can be estimated from what commercial parking firms charge for their monthly parking contracts.  For the 33 parking garages listed as “downtown DC” on the Parking Panda website, the average monthly fee (as shown on April 29, 2017) was a bit over $270.  This comes to $13 per work day (based on 250 work days per year).  While the charges will be less in the suburbs, there will still be a cost.  And the full cost to commuters of driving to work is in fact much more.  Assuming the average cost of the cars driven is $36,000, with simple straight-line depreciation over 10 years, the average monthly cost will be $300.  To this one should add the cost of car insurance (on the order of $50 to $100 per month), of expected repair costs (probably of similar magnitude), and of gas.  The full cost of driving would on average then total over $600 per month, or about $29 per work day.  Even if one ignores the cost of the parking spot itself (as drivers will if their employers provide the spots for free), the cost to the driver would still average about $16 per work day.  An added $1.30 per day to cover the funding needs of the public transit system is minor compared to any of these cost estimates, and would still be modest at $3.50 per day (equal to $2.70 in the prices of today).
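Since several of these comparisons convert monthly costs into per-work-day terms, a one-line helper (again assuming 250 work days a year) may be useful.  The monthly figures fed into it are the ones cited above.

```python
# Converting monthly driving costs into per-work-day terms (250 work days a year).

WORK_DAYS_PER_MONTH = 250 / 12    # about 20.8 work days per month

def per_work_day(monthly_cost):
    return monthly_cost / WORK_DAYS_PER_MONTH

print(f"${per_work_day(270):.0f} per work day")   # ~$13: average downtown monthly parking contract
print(f"${per_work_day(300):.0f} per work day")   # ~$14: depreciation on a $36,000 car over 10 years
print(f"${per_work_day(600):.0f} per work day")   # ~$29: the full monthly cost of driving cited above
```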

Thus at reasonable rates on commuter parking spots, it would be possible to collect the $650 million to $2.0 billion a year needed to help fund WMATA.

c)  Efficiency:  Another consideration when choosing how best to provide additional funds to WMATA is the impact on efficiency of that option.  A fee on parking spaces would be a positive for this.  The Washington region stands out for its severe congestion, including not only in the city center but also in the suburbs (and often even more so in the suburbs).  A fee on parking spots, if passed along to the commuters who drive, would serve as an incentive to take transit, and might have some impact on those at the margin. The impact is likely to be modest, as a $1.30 to $3.50 fee per day would not be much.  As just discussed above, given the current cost of driving (even when commuters who drive are not charged for their parking spots), an additional $1.30 to $3.50 would be only a small additional cost, even when it is passed along.  But at least it would operate in the direction one wants to alleviate traffic congestion.

d)  Fairness:  Finally, the fee would be fair relative to the other options being considered in terms of who would be impacted.  Those who drive to work (over 90% of whom drive alone) are generally of higher income.  They can afford the cost of driving, which (as noted above) is high even in those cases where they are provided free parking spaces by their employer.

Some would argue that since the drivers are not taking transit, they should not help pay for that transit.  But that is not correct.  First of all, they have a direct interest in reducing road congestion, and only a well-functioning transit system can help with that.  Drivers benefit directly (by reduced congestion) from every would-be driver who decides instead to take transit.  Second, all the other feasible funding options being considered for WMATA will be paid for in large part by drivers as well.  This is true whether a higher sales tax is imposed on the region, higher property taxes, or just higher government funding from their budgets (with this funding coming from the income taxes as well as sales taxes and property taxes these governments receive).  And as discussed below, higher fares on WMATA passengers to raise the amounts needed is simply not a feasible option.

Some drivers will likely also argue that they have no choice but to drive.  While they would still gain from any reduction in congestion (and would lose in a big way to extreme congestion if WMATA service collapses due to inadequate funding), it is no doubt true that at least some commuters have no alternative but to drive.  However, the number is quite modest.  The 2016 survey of commuters undertaken by the Metropolitan Washington COG, referred to above, asked its sample of commuters whether there was either bus or train service “near” their homes (“near” as they would themselves consider it), and, separately, “near” their place of work.  Fully 89% said there were such transit services near their homes, and 86% said there were such services near their places of work.  Note also that the 11% and 14%, respectively, who did not respond that there was such nearby transit included those who responded that they did not know.  Many of those who drive to work might not know, as they have never had a need to look into it.

The share of the Washington region’s population who do not have access to transit services is therefore relatively small, probably well less than 10% of commuters.  The transit options might not be convenient, and probably take longer than driving in many if not most cases given the current service provision, but transit alternatives exist for the overwhelming share of the regional population.  The issue is that those who can afford the high cost will drive, while the poorer workers who cannot will have no choice but to take transit.  Setting a fee on parking spaces for commuters in order to support the maintenance of decent transit services in the region is socially as well as economically fair.

D.  Alternative Funding Options That Have Been Proposed

1)   Higher Fares:  The first alternative that many would suggest for raising additional funds for the transit system is to charge higher fares.  While certainly feasible in a mechanical sense, such an alternative would fail the effectiveness test.  The fares are already high.  Any increase in fares will lead to yet more transit users choosing to drive instead (for those for whom this is an option).  The increase in fare revenues collected will be less than in proportion to the increase in fare rates set.  And at some point, so many transit users will switch that total fare revenue would in fact decrease.

In the recently passed FY18 budget for WMATA, the forecast revenue to be collected from fares is $709 million.  This is down from an expected $792 million in FY17, despite a fare increase averaging 4%.  Transit users are leaving as fares have increased and service has deteriorated.  To raise an additional $650 million from fares would require an increase of over 90%, even if no riders then left.  But more riders would of course leave, and it is not clear whether anything additional (much less an extra $650 million) would be raised.  And this would be even more so if one tried to raise an extra $2.0 billion.
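The arithmetic behind the 90% figure, together with a purely illustrative sensitivity to ridership losses, is sketched below.  The demand elasticity used is my own hypothetical number, not an estimate from WMATA or the COG reports; it is there only to show how quickly ridership losses erode the revenue gain.

```python
# How much fares would need to rise to raise an extra $650 million, and how
# ridership losses erode the gain.  The elasticity is purely illustrative
# (an assumption of mine, not a cited estimate).

current_fare_revenue = 709e6   # FY18 budget forecast of fare revenue
extra_needed = 650e6

# Required across-the-board fare increase if no riders leave:
required_increase = extra_needed / current_fare_revenue
print(f"Fare increase needed with no ridership loss: {required_increase:.0%}")   # ~92%

# Illustrative: if ridership fell by 0.4% for each 1% fare increase
# (a hypothetical elasticity of -0.4), the revenue gain would be far smaller.
elasticity = -0.4
ridership_change = elasticity * required_increase
new_revenue = current_fare_revenue * (1 + required_increase) * (1 + ridership_change)
print(f"Extra revenue actually raised: ${(new_revenue - current_fare_revenue)/1e6:,.0f} million")
```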

So as all recognize, it will not be possible to resolve the WMATA funding issues by means of higher fares.  Any increase in fares will instead lead to more riders leaving the system for their cars, leading to even greater road congestion.

2)  Increase the Sales Tax Rate:  Mayor Muriel Bowser of Washington has pushed for this alternative, and the recent COG Technical Panel concluded with the recommendation that  “the best revenue solution is an addition to the general sales tax in all localities in the WMATA Compact area in the National Capital Region” (page 4).  This alternative has drawn support from some others in the region as well, but is also opposed by some. There is as yet no consensus.

Sales taxes are already imposed across the region, and it would certainly be feasible to add an extra percentage point to what is now charged.  But each jurisdiction sets the tax in somewhat different ways, in terms of what is covered and at what rates, and it is not clear to what base the additional 1% point would be applied.  For example, Washington, DC, imposes a general rate of 5.75%, but nothing on food or medicines, while liquor and restaurants are charged a sales tax of 10% and hotels a rate of 14.5%.  Would the additional 1% point apply only to the general rate of 5.75%, or would there also be a 1% point increase in what is charged on liquor, restaurants, and the others?  And would there still be a zero rate on food and medicines?  Virginia, in contrast, has a general sales tax rate (in Northern Virginia) of 6.0%, but charges a rate of 2.5% on food.  Would the Virginia rate on food rise to 3.5%, or stay at 2.5%?  There is also a higher sales tax rate on restaurant meals in certain of the local jurisdictions in Virginia (such as a 10% rate in Arlington County) but not in others (just the base 6% rate in Fairfax County).  How would these be affected?  And similar to DC, there are also special rates on hotels and certain other categories.  Maryland has its own set of rules as well, with a base rate of 6.0%, a rate of 9% on alcohol, and no sales tax on food.

Such specifics could presumably be worked out, but the distribution of the burden across individuals as well as the jurisdictions will depend on the specific choices made.  Would food be subject to the tax in Virginia but not in Maryland or DC, for example?  The COG Technical Panel must have made certain assumptions on this, but what they were was not explained in its report.

But the Panel concluded that an additional 1% point on some base would generate $650 million in FY2019.  This is higher than the estimate made last October as part of the COG work, which put the yield of a 1% point increase in the sales tax rate at $500 million annually.  It is not clear what underlies this difference, but the recent estimates might have been more thoroughly done.  Or there might have been differing assumptions on what would be included in the base to be taxed, such as food.

A 1% point rise in the sales tax imposed in the region would, under these estimates, then suffice to raise the minimum $650 million needed now.  But to raise $1.0 billion annually, rising to $2.0 billion a few years later, substantial further increases would soon be needed. The amount would of course depend on the extent to which local sales of taxable goods and services grew over time within the region.  Assuming that sales of items subject to the sales tax were to rise at a 3% annual rate in nominal terms (2% for inflation and 1% for real growth), and that one would need to raise $2.0 billion by 2030 (in terms of the prices of 2030), then the base sales tax rate would need to rise by about 2.2% points.  A 6% rate would need to rise to 8.2%.  A rate that high would likely generate concerns.
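The 2.2-percentage-point figure follows from simple compounding.  A sketch, using the $650 million yield per percentage point in FY2019 and the 3% nominal growth assumption stated above:

```python
# Sales tax rate increase needed to raise $2.0 billion (nominal) in 2030,
# if 1 percentage point raises $650 million in FY2019 and the taxable base
# grows at 3% a year in nominal terms.

revenue_per_point_2019 = 650e6
nominal_growth = 0.03
years = 2030 - 2019

revenue_per_point_2030 = revenue_per_point_2019 * (1 + nominal_growth) ** years
points_needed = 2.0e9 / revenue_per_point_2030

print(f"Revenue per 1% point in 2030: ${revenue_per_point_2030/1e6:,.0f} million")   # ~$900 million
print(f"Rate increase needed by 2030: {points_needed:.1f} percentage points")        # ~2.2
```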

Thus while a sales tax increase would be effective in raising the amounts needed to fund WMATA in the immediate future, with a 1% rise in the tax rate sufficing, the sales tax rate would need to rise further to quite high levels for it to raise the amounts needed a few years later.  Whether such high rates would be politically possible is not clear.

Also likely to be a concern, as the COG Panel itself recognized in its report, is that the distribution of the increased tax burden across the local jurisdictions would differ substantially from what these jurisdictions contribute now to fund WMATA, as well as from what it estimates each jurisdiction would be called on to contribute (under the existing sharing rules) to cover the funding gap anticipated for FY17 – FY26:

Funding Shares:

                 FY17 Actual    FY17-26 Gap    From Sales Tax
  DC                37.3%          35.8%           22.8%
  Maryland          38.4%          33.5%           26.5%
  Virginia          24.3%          30.7%           50.8%

Source:  COG Panel Final Report, pages 9 and 15.

If an extra 1% point were added to the sales tax across the region, 50.8% of the revenues thus generated would come from the Northern Virginian jurisdictions that participate in the WMATA compact.  This is substantially higher than the 24.3% share these jurisdictions contributed in WMATA funding in FY17, or the 30.7% share they would be called on to contribute to cover the anticipated FY17-26 gap (higher than in just FY17 primarily due to the opening of the second phase of the Silver Line).  The mirror image of this is that DC and Maryland would gain, with much lower shares paid in through the sales tax increase than what they are funding now.  Whether this would be politically acceptable remains to be seen.

Use of a higher sales tax to fund WMATA’s needs would also not lead to efficiency gains for the transportation system.  A sales tax on goods and services sold in the region would have no impact on incentives, positive or negative, on the decision of whether to drive to work or to take transit.  It would be neutral in this regard, rather than beneficial.

Finally, and perhaps most importantly, sales taxes are regressive, costing the poor more as a share of their income than they cost the well-off.  A sales tax rise would not meet the fairness test.  Even with exemptions granted for food and medicines, poor households spend a high share of their incomes on items subject to sales taxes, while the well-off spend a lower share.  The well-off are able to devote a higher share of their incomes to items not subject to the general sales tax, such as luxury housing, vacations elsewhere, or services not subject to sales taxes, or they can devote a higher share of their incomes to savings.

Aside from the regressive nature of a sales tax, an increase in the sales tax to fund transit (and through this to reduce road congestion) will be paid by all in the region, including those who do not commute to work.  It would be paid, for example, also by retirees, by students, and by others who may not normally make use of transit or the road system to get to work during rush hour periods.  But they would pay similarly to others, and some may question the fairness of this.

An increase in the sales tax rate would thus be feasible.  And while a 1% point rise in the rate would be effective in raising the amounts needed in the immediate future, it is questionable whether this approach could raise the amounts needed a few years later, given constraints (political and otherwise) on how high the sales tax rate could go.  The region would then likely face another crisis and dilemma over how to fund WMATA adequately.  There are also political issues in the distribution of the sales tax burden across the jurisdictions of the region, with Northern Virginia paying a disproportionate share.  This would be even more of a concern when the tax rate had to be increased further to cover rising WMATA funding needs.  There would also be no efficiency gains through the use of a sales tax.  Finally and importantly, a higher sales tax is regressive and not fair, as it takes a higher share of the income of the poor than of the well-off, as well as of groups who do not use transit or the roads during the rush hour periods of peak congestion.

3)  A Special Property Tax Rate on Properties Near Metro Stations

Some have argued for a special additional property tax to be imposed on properties that are located close to Metro stations.  The largest trade union at WMATA has advocated for this, for example, and the COG Technical Panel looked at this as one option it considered.

The logic is that the value of such properties has been enhanced by their location close to transit, and that therefore the owners of these more valuable properties should pay a higher property tax rate on them.  But while superficially this might look logical, in fact it is not, as we will discuss below.  There are several issues, both practical and in terms of what would be good policy.  I will start with the practical issues.

The special, higher, tax rate would be imposed on properties located “close” to Metro stations, but there is the immediate question of how one defines “close”.  Most commonly, it appears that the proponents would set the higher tax on all properties, residential as well as commercial, that are within a half-mile of a station.  That would mean, of course, that a property near the dividing line would see a sharply higher property tax rate than its neighbor across the street that lies on the other side of the line.

And the difference would be substantial.  The COG Technical Panel estimated that the additional tax rate would need to be 0.43% of the assessed value of all properties within a half mile of the DC area Metro stations to raise the same $650 million that an extra 1% on the sales tax rate would generate.  It was not clear from the COG Panel Report, however, whether the higher tax of 0.43% was determined based on the value of all properties within a half-mile of Metro stations, or only on the base of all such properties which currently pay property tax.  Governmental entities (including international organizations such as the World Bank and IMF) and non-profits (such as hospitals and universities) do not pay this tax (as was discussed above), and such properties account for a substantial share of properties located close to Metro stations in the Washington region.  If the 0.43% rate was estimated based on the value of all such properties, but if (just for the sake of illustration; I do not know what the share actually is) properties not subject to tax make up half of such properties, then the additional tax rate on taxable properties that would be needed to generate the $650 million would be twice as high, or 0.86%.

But even at just the 0.43% rate, the increase in taxes on such properties would be large. For Washington, DC, it would amount to an increase of 50% on the current general residential property tax rate of 0.85%, an increase of 26% on the 1.65% rate for commercial properties valued at less than $3 million, and an increase of 23% on the 1.85% rate for commercial properties valued at more than $3 million.  Property tax rates vary by jurisdiction across the region, but this provides some sense of the magnitudes involved.

The higher tax rate paid would also be the same for properties sitting right on top of the Metro stations and those a half mile away.  But the locational value is highest for those properties that are right at the Metro stations, and then tapers down with distance. One should in principle reflect this in such a tax, but in practice it would be difficult to do. What would the rate of tapering be?  And would one apply the distance based on the direct geographic distance to the Metro station (i.e. “as the crow flies”), or based on the path that one would need to take to walk to the Metro station, which could be significantly different?

Thus while it would be feasible to implement the higher property tax as a fixed amount on all properties within a half-mile (at least on those properties which are not exempt from property tax), the half-mile mark is arbitrary and does not in fact reflect the locational advantages properly.

The rate would also have to be substantially higher if the goal is to ensure WMATA is funded adequately by the new revenue source beyond just the next few years.  Assuming, as was done above for the other options, that property values rise at a 3% rate over time going forward (due both to growth and to price inflation), the 0.43% special tax rate would raise $900 million by 2030.  If one needed, however, $2 billion by that year for WMATA funding needs, the rate would need to rise to 0.96%.  This would mean that residential properties within a half mile would be paying more than double the property tax paid by neighbors just beyond the half-mile mark (assuming basic property tax rates are similar in the future to what they are now, and based on the current DC rates), while commercial rates would be over 50% more.  The effectiveness in raising the amounts required is therefore not clear, given the political constraints on how high one could set such a special tax.
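The 0.96% figure follows from the same compounding logic, assuming (as in the sales tax example) that the $650 million yield of the 0.43% rate applies to 2019 and that assessed values grow at 3% a year in nominal terms:

```python
# Special property tax rate needed in 2030, if a 0.43% rate raises $650 million
# now and the assessed-value base grows at 3% a year in nominal terms.

base_rate = 0.0043
revenue_now = 650e6
nominal_growth = 0.03
years = 2030 - 2019

revenue_2030_at_base_rate = revenue_now * (1 + nominal_growth) ** years   # ~$900 million
required_rate = base_rate * 2.0e9 / revenue_2030_at_base_rate

print(f"Revenue at 0.43% in 2030: ${revenue_2030_at_base_rate/1e6:,.0f} million")
print(f"Rate needed to raise $2.0 billion in 2030: {required_rate:.2%}")   # ~0.96%
```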

But the major drawback would be the impact on efficiency.  With the severe congestion on Washington region roads, one should want to encourage, not discourage, concentrated development near Metro stations.  Indeed, that is a core rationale for investing so much in building and sustaining the Metro system.  To the extent a higher property tax discourages such development, the impact of such a special property tax on real estate near Metro stations would be to discourage precisely what the Metro system was built to encourage.  This is perverse.  One could indeed make the case that properties located close to Metro stations should pay a lower property tax rather than a higher one.  I would not, as it would be complex to implement and difficult to explain.  But technically it would have merit.

Finally, a special additional tax on the current owners of the properties near Metro stations would not meet the fairness test as the current owners, with very few if any exceptions, were not the owners of that land when the Metro system locations were first announced a half century ago.  The owners of the land at that time, in the 1960s, would have enjoyed an increase in the value of their land due to the then newly announced locations of the Metro stations.  And even if the higher values did not immediately materialize when the locations of the new Metro system stations were announced, those higher values certainly would have materialized in the subsequent many decades, as ownership turned over and the properties were sold and resold.  One can be sure the prices they sold for reflected the choice locations.

But those who purchased that land or properties then or subsequently would not have enjoyed the windfall the original owners had.  The current owners would have paid the higher prices following from the locational advantages near the Metro stations, and they are the ones who own those properties now.  While they certainly can charge higher rents for space in properties close to the Metro stations, the prices they paid for the properties themselves would have reflected the fact they could charge such higher rents.  They did not and do not enjoy a windfall from this locational advantage.  Rather, the original owners did, and they have already pocketed those profits and left.

Note that while a special tax imposed now on properties close to Metro stations cannot be justified, this does not mean that such a tax would not have been justified at an earlier stage.  That is, one could justify that or a similar tax that focused on the initial windfall gain on land or properties that would be close to a newly announced Metro line.  When new such rail lines are being built (in the Washington region or elsewhere), part of the cost could be covered by a special tax (time-limited, or perhaps structured as a share of the windfall gain at the first subsequent arms-length sale of the property) that would capture a share of the windfall from the newly announced locations of the stations.

An example of this being done is the special tax assessments on properties close to where the Silver Line stations are being built.  The Silver Line is a new line for the Washington region Metro system, where the first phase opened recently and the second phase is under construction.  A special property tax assessment district was established, with a higher tax rate and with the funds generated used to help construct the line.  One should also consider such a special tax for properties close to the stations on the proposed Purple Line (not part of the WMATA system, but connected to it), should that light rail line be built. The real estate developers with properties along that line have been strong proponents of building that line.  This is understandable; they would enjoy major windfall gains on their properties if the line is built.  But while the windfall gains could easily be in the hundreds of millions of dollars, there has been no discussion of their covering a portion of the cost, which will sum to $5.6 billion in payments to the private contractor to build and then operate the line for 30 years.  Under current plans, the general taxpayer would be obliged to pay this full amount, with only a small share of this (less than one-fourth) recovered in forecast fares.

While setting a special (but temporary) tax for properties close to stations can be justified for new lines, such as the Silver Line or the Purple Line, the issues are quite different for the existing Metro lines.  Such a special, additional, tax on properties close to the Metro stations is not warranted, would be unfair to the current owners, and could indeed have the perverse outcome of discouraging concentrated development near the Metro stations when one should want to do precisely the reverse.

4)  Other Funding Options

There can, of course, be other approaches to raising the funds that WMATA needs.  But there are issues with each, they in general have few advocates, and most agree that one of the options discussed above would be preferable.

The COG Technical Panel reviewed several, but rejected them in favor of its preference for a higher sales tax rate.  For example, the COG Panel estimated that it would be possible to raise its target for WMATA funding of $650 million if all local jurisdictions raised their property tax rates by 0.08% of assessed value on all properties located in the region.  But general property taxes are the primary means by which local jurisdictions raise the funds they need for their local government operations, and it would be best to keep this separate from WMATA funding.  The COG Panel also considered the possibility of creating a new Value-Added Tax (or VAT), a tax that is common elsewhere in the world but has never been instituted in the US.  It is commonly described as similar to a sales tax, but it is imposed only on the extra value created at each stage in the production and sale process.  It would be complicated to develop and implement any new tax such as this, and a VAT has also never been imposed (as far as I am aware) on a regional rather than national basis.  A regional VAT might be especially complicated.  The COG Panel also noted the possibility of a “commuter tax”.  Such a tax would impose income taxes on workers based on where they work rather than where they live.  But since any such taxes would be offset against what the worker would otherwise pay where they are resident, the overall revenues generated for the region as a whole would be essentially nothing.  It would be a wash.  There is also the issue that Congress has by law prohibited Washington, DC, from imposing any such commuter tax.

The COG Panel also looked at the imposition of an additional tax on motor vehicle fuels (gasoline and diesel) sold in the region.  This would in principle be more attractive as a means for funding transit, as it would affect the cost of commuting by car (by raising the cost of fuel) and thus might encourage, at the margin, more to take transit and thus reduce congestion.  Fuel taxes in the US are also extremely low compared to the levels charged in most other developed countries around the world.  And federal fuel taxes have not been changed since 1993, with a consequent fall in real, inflation-adjusted, terms. There is a strong case that the rates should be raised, as has been discussed in an earlier post on this blog.  But such fuel taxes have been earmarked primarily for road construction and maintenance (the Highway Trust Fund at the federal level), and any such funds are desperately needed there.  It would be best to keep such fuel taxes earmarked for that purpose, and separated from the funding needed to support WMATA.

E.  Summary and Conclusion

All agree that there is a need to create a dedicated funding source to provide additional revenue to WMATA.  While there are a number of problems at WMATA, including management and governance issues, no one disagrees that a necessary element in any solution is increased funding.  WMATA has underinvested for decades, with the result that the current system cannot operate reliably or safely.

Estimates for the additional funding required by WMATA vary, but most agree that a minimum of an additional $650 million per annum is required now simply to bring the assets up to a minimum level of reliability and safety.  But estimates of what will in fact be needed once the current most urgent rehabilitation investments are made are substantially higher.  It is likely that the system will need on the order of $2 billion a year more than what would follow under current funding formulae by the end of the next decade, if the system’s capacity is to grow by what will be necessary to support the region’s growth.

A mandatory fee on parking spaces for all commuters in the region would work best to provide such funds.  It would be feasible as it can be implemented largely through the existing property tax system.  It would be effective in raising the amounts needed, as a fee equivalent to $1.30 per day would raise $650 million per year under current conditions, and a fee of $3.50 per day would raise $2 billion per year in the year 2030.  These rates are modest or even low compared to what it costs now to drive.

A mandatory fee on parking spaces would also contribute to a more efficient use of the transportation assets in the region not only by helping to ensure the Metro system can function safely and reliably, but by also encouraging at least some who now drive instead to take transit and hence reduce road congestion.  Finally, such a fee would be fair as it is those of higher income who most commonly drive (in part because driving is expensive), while it is the poor who are most likely to take transit.

An increase in the sales tax rate in the region would not have these advantages.  While an increase in the rate by 1% point was estimated by the COG Panel to generate $650 million a year under current conditions, the rate would need to increase by substantially more to generate the funds that will be needed to support WMATA in the future.  This could be politically difficult.  The revenues generated would also come disproportionately from Northern Virginia, which itself will create political difficulties.  It would also not lead to greater efficiencies in transport use, other than by keeping WMATA operational (as all the options would do).  Most importantly, a sales tax is regressive (even when foods and medicine are not taxed), with the poor bearing a disproportionate share of the costs.

A special property tax on all properties located within a half mile (or whatever specified distance) of existing Metro stations could also be imposed, although readily so only on those properties that are currently subject to property tax.  But there would be arbitrariness in any such rigidly specified distance, with a sharp fall in the tax rate for properties just beyond that artificial border line.  There is also a question as to whether it would be politically feasible to set the rate as high as would be necessary to address WMATA’s funding needs beyond just the next few years.

But most important, such a special tax on the current owners would not be a tax on those who gained a windfall when the locations of the Metro stations were announced many decades ago.  Those original owners have already pocketed their windfall gains and left.  The current owners paid a high price for that land or the developments on it, and are not themselves enjoying a special windfall.  And indeed, a new special property tax on developments near the Metro stations would have the effect of discouraging such new investment.  But that is the precise opposite of what we should want.  The policy aim has long been to encourage, not discourage, concentrated development around the Metro stations.

This does not mean that some such special tax, if time-constrained, would not be a good choice when a new Metro line (or rail line such as the proposed Purple Line) is to be built. The owners of land near the planned future Metro stops would enjoy a windfall gain, and a special tax on that is warranted.  Such a special tax district has been set for the new Silver Line, and would be warranted also if the Purple Line is to be built.  Those who own that land will of course object, as they wish to keep their windfall in full.

To conclude, no one denies that any new tax or fee will be controversial and politically difficult.  But the Metro system is critical to the Washington region, and cannot be allowed to continue to deteriorate.  Increased funding (as well as other measures) will be necessary to fix this.  Among the possible options, the best approach is to set a mandatory fee that would be collected on all commuter parking spaces in the region.

Productivity: Do Low Real Wages Explain the Slowdown?

[Chart:  GDP per Worker, 1947Q1 to 2016Q2]

A.  Introduction, and the Record on Productivity Growth

There is nothing more important to long term economic growth than the growth in productivity.  And as shown in the chart above, productivity (measured here by real GDP in 2009 dollars per worker employed) is now over $115,000 per worker.  This is 2.6 times what it was in 1947 (when it was $44,400 per worker), and largely explains why living standards are so much higher now than then.  But productivity growth in recent decades has not matched what was achieved between 1947 and the mid-1960s, and there has been an especially sharp slowdown since late 2010.  The question is why.

Productivity is not the whole story; distribution also matters.  And as this blog has discussed before, while all income groups enjoyed similar improvements in their incomes between 1947 and 1980 (with those improvements also similar to the growth in productivity over that period), since then the fruits of economic growth have gone only to the higher income groups, while the real incomes of the bottom 90% have stagnated.  The importance of this will be discussed further below.  But for the moment, we will concentrate on overall productivity, and what has happened to it especially in recent years.

As noted, the overall growth in productivity since 1947 has been huge.  The chart above is calculated from data reported by the BEA (for GDP) and the BLS (for employment).  It is productivity at its most basic:  Output per person employed.  Note that there are other, more elaborate, measures of productivity one might often see, which seek to control, for example, for the level of capital or for the education structure of the labor force.  But for this post, we will focus simply on output per person employed.

(Technical Note on the Data: The most reliable data on employment comes from the CES survey of employers of the BLS, but this survey excludes farm employment.  However, this exclusion is small and will not have a significant impact on the growth rates.  Total employment in agriculture, forestry, fishing, and hunting, which is broader than farm employment only, accounts for only 1.4% of total employment, and this sector is 1.2% of GDP.)

While the overall rise in productivity since 1947 has been huge, the pace of productivity growth was not always the same.  There have been year-to-year fluctuations, not surprisingly, but these even out over time and are not significant. There are also somewhat longer term fluctuations tied to the business cycle, and these can be significant on time scales of a decade or so.  Productivity growth slows in the later phases of a business expansion, and may well fall as an economic downturn starts to develop.  But once well into a downturn, with businesses laying off workers rapidly (with the least productive workers the most likely to be laid off first), one will often see productivity (of those still employed) rise.  And it will then rise further in the early stages of an expansion as output grows while new hiring lags.

Setting aside these shorter-term patterns, one can break down productivity growth over the close to 70 year period here into three major sub-periods.  Between the first quarter of 1947 and the first quarter of 1966, productivity rose at a 2.2% annual pace.  There was then a slowdown, for reasons that are not fully clear and which economists still debate, to just a 0.4% pace between the first quarter of 1966 and the first quarter of 1982.  The pace of productivity growth then rose again, to 1.4% a year between the first quarter of 1982 and the second quarter of 2016.  But this was well less than the 2.2% pace the US enjoyed before.

An important question is why productivity growth slowed from a 2.2% pace between the late 1940s and mid-1960s to a 1.4% pace since 1982.  Such a slowdown, if sustained, might not appear like much, but the impact is in fact significant.  Over a 50-year period, for example, real output per worker would end up about 50% higher with growth at a 2.2% pace than with growth at a 1.4% pace.
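A quick check of these magnitudes, using only the figures already given in the chart and text above (the roughly 1.4% average pace over the full period is consistent with the sub-period rates just discussed):

```python
# Compounding of productivity growth over long periods.

gdp_per_worker_1947 = 44_400    # real GDP (2009 dollars) per worker, 1947
gdp_per_worker_2016 = 115_000   # and in mid-2016
years = 69.25                   # 1947Q1 to 2016Q2

implied_growth = (gdp_per_worker_2016 / gdp_per_worker_1947) ** (1 / years) - 1
print(f"Average annual growth, 1947 to 2016: {implied_growth:.2%}")   # ~1.4%

# Output per worker after 50 years at a 2.2% pace versus a 1.4% pace:
ratio = 1.022 ** 50 / 1.014 ** 50
print(f"2.2% pace vs 1.4% pace after 50 years: {ratio:.2f}x")   # ~1.48, i.e. roughly 50% higher
```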

There is also an important question of whether productivity growth has slowed even further in recent years.  This might well still be a business cycle effect, as the economy has recovered from the 2008/09 downturn but only slowly (due to the fiscal drag from cuts in government spending).  The pace of productivity growth has been especially slow since late 2010, as is clear when one blows up the chart above to focus on the period since 2000:

GDP per Worker, 2000Q1 to 2016Q2

Productivity has increased at a rate of just 0.13% a year since late 2010.  This is slow, and a real problem if it continues.  I would hasten to add that the period here (5 1/2 years) is still too short to say with any certainty whether this will remain an issue.  There have been similar multi-year periods since 1947 when the pace of productivity growth appeared to slow, and then bounced back.  Indeed, as seen in the chart above, one would have found a similar pattern had one looked back in early 2009, with a slow pace of productivity growth observed from about 2005.

There has been a good deal of work done by excellent economists on why productivity growth has been what it was, and what it might be in the future.  But there is no consensus.  Robert J. Gordon of Northwestern University, considered by many to be the “dean in the field”, takes a pessimistic view on the prospects in his recently published magnum opus “The Rise and Fall of American Growth”.  Erik Brynjolfsson and Andrew McAfee of MIT, in contrast, argue for a more optimistic view in their recent work “The Second Machine Age” (although “optimistic” might not be the right word, given their concern over the implications of this for jobs).  They see productivity growth progressing rapidly, if not accelerating.

But such explanations focus on the productivity growth that technology makes possible.  A separate factor, I would argue, is whether investment in fact takes place that makes use of the technology that is available.  And this may well be a dominant consideration when examining the change in productivity over the short and medium terms.  A technology is irrelevant if it is not incorporated into the actual production process.  And it is only incorporated into the production process via investment.

To understand productivity growth, and why it has fallen in recent decades and perhaps especially so in recent years, one must therefore also look at the investment taking place, and why it is what it is.  The rest of this blog post will do that.

B.  The Slowdown in the Pace of Investment

The first point to note is that net investment (i.e. after depreciation) has been falling in recent decades when expressed as a share of GDP, with this true for both private and public investment:

Domestic Fixed Investment, Total, Public, and Private, Net, as a Percentage of GDP, 1951 to 2015

Total net investment has been on a clear downward trend since the mid-1960s.  Private net investment has been volatile, falling sharply with the onset of an economic downturn and then recovering.  But since the late 1970s its trend has also clearly been downward.  Net private investment has been less than 3 1/2% of GDP in recent years, less than half the more than 7% of GDP it averaged between 1951 and 1980.  And net public investment, while less volatile, has plummeted over time.  It averaged 3.1% of GDP between 1951 and 1968, but is only 0.5% of GDP now (as of 2015), less than one-sixth of what it was before.

With falling net investment, the rates of growth of public and private capital stocks (fixed assets) have fallen (where 2014 is the most recent year for which the BEA has released such data):

Rate of Growth in Per Capita Net Stock of Private and Government Fixed Assets, 1951 to 2014

Indeed, expressed in per capita terms, the stock of public capital is now falling.  The decrepit state of our highways, bridges, and other public infrastructure should not be a surprise.  And the stock of private capital fell each year between 2009 and 2011, with some recovery since, but its growth remains close to record lows.

Even setting aside the recent low (or even negative) figures, the trend in the pace of growth for both public and private capital has declined since the mid-1960s.  Why might this be?

C.  Why Has Investment Slowed?

The answer is simple and clear for public capital.  Conservative politicians, in both the US Congress and in many states, have forced cuts in public investment over the years to the current low levels.  For whatever reasons, whether ideological or something else, conservative politicians have insisted on cutting or even blocking much of what the United States used to invest in publicly.

Yet public, like private, investment is important to productivity.  It is not only commuters who lose time in traffic jams on inadequate roads, and hence face work days of not 8 1/2 hours but of 10 or 11 or even 12 hours (with consequent adverse impacts on their productivity).  It also affects truck drivers and repairmen, who can accomplish less on their jobs due to time spent in jams.  Or, as a consequence of inadequate public investment in computer technology, a greater number of public sector workers are required than would otherwise be needed, in jobs ranging from issuing driver’s licenses to enrolling people in Medicare.  Inadequate public investment can hold back economic productivity in many ways.

The reasons behind the fall in private investment are less obvious, but more interesting.  A natural first possibility to check is whether private profitability has fallen.  If it has, then a reduction in private investment relative to output would not be a surprise.  But this has in fact not been the case:

Rate of Return on Produced Assets, 1951 to 2015

The nominal rate of return on private investment has not only been high, but also surprisingly steady over the years.  Profits are defined here as the net operating surplus of all private entities, and are taken from the national accounts figures of the BEA.  They are then taken as a ratio to the stock of private produced assets (fixed assets plus inventories) as of the beginning of the year.  This rate of return has varied only between 8 and 13% over the period since at least 1951, and over the last several years has been around 11%.

Many might be surprised by both this high level of profitability and its lack of volatility.  I was.  But it should be noted that the measure of profitability here, net operating surplus, is a broad measure of all the returns to capital.  It includes not only corporate profitability, but also profits of unincorporated businesses, payments of interest (on borrowed capital), and payments of rents (as on buildings). That is, this is the return on all forms of private productive capital in the economy.

The real rates of return have been more volatile, and were especially low between 1974 and 1983, when inflation was high.  They are measured here by adjusting the nominal returns for inflation, using the GDP deflator as the measure for inflation.  But this real rate of return was a good 9.6% in 2015.  That is high for a real rate of return.  It was higher than that only for one year late in the Clinton administration, and for several years between the early 1950s and the mid-1960s.  But it was never higher than 11%.  The current real rate of return on private capital is far from low.
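
For readers who want to see exactly how such returns are computed, here is a minimal sketch following the definitions above; the dollar magnitudes are hypothetical, chosen only to be of roughly the right order, and the variable names are mine:

```python
# Hypothetical magnitudes (in $ trillions), for illustration only; the actual
# series come from the BEA national accounts and fixed asset tables.

def nominal_return(net_operating_surplus: float, produced_assets_start: float) -> float:
    """Nominal return: net operating surplus over the beginning-of-year stock
    of private produced assets (fixed assets plus inventories)."""
    return net_operating_surplus / produced_assets_start

def real_return(nominal: float, inflation: float) -> float:
    """Deflate a nominal return by inflation (here, growth in the GDP deflator)."""
    return (1.0 + nominal) / (1.0 + inflation) - 1.0

nom = nominal_return(net_operating_surplus=4.4, produced_assets_start=40.0)
print(f"Nominal rate of return: {nom:.1%}")                       # 11.0%
print(f"Real rate of return:    {real_return(nom, 0.011):.1%}")   # about 9.8% with 1.1% inflation
```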

Why then has private investment slowed, in relation to output, if profitability is as high now as it has ever been since the 1950s?  One could conceive of several possible reasons. They include:

a)  Along the lines of what Robert Gordon has argued, perhaps the underlying pace of technological progress has slowed, and thus there is less of an incentive to undertake new investments (since the returns to replacing old capital with new capital will be less).  The rate of growth of capital then slows, and this keeps up profitability (as the capital becomes more scarce relative to output) even as the attractiveness of new investment diminishes.

b)  Conservatives might argue that the reduced pace of investment could be due to increased governmental regulation, which makes investment more difficult and raises its cost.  This might be difficult to reconcile with the rate of return on capital nonetheless remaining high, but in principle it could be reconciled if one argues that the slower pace of new investment keeps up profitability as capital then becomes more scarce relative to output.  But note that this argument would require that the increased burden of regulation began during the Reagan years in the early 1980s (when the share of private investment in GDP first started to slow – see the chart above), and built up steadily since then through both Republican and Democratic administrations.  It would not be something that started only recently under Obama.

c)  One could also argue that the reduced investment might be a consequence of “Baumol’s Cost Disease”.  This was discussed in earlier posts on this blog, both for overall government spending and for government investment in infrastructure specifically.  As discussed in those posts, Baumol’s Cost Disease explains why activities in which productivity growth is relatively more difficult to achieve will see their relative costs increase over time.  Construction is an example, where productivity growth has historically been more difficult to achieve than has been the case in manufacturing.  Thus the cost of investing, both public and private, will increase over time relative to the cost of other items.  This can then also be a possible explanation of slowing new investment, with that slower investment then keeping profitability up due to the increasing scarcity of capital.

One problem with each of the possible explanations described above is that they all depend on capital investments becoming less attractive than before, either due to higher costs or due to reduced prospective returns.  If such factors were indeed critical, one would need to take into account also the effect of taxes on investment returns.  And such taxes have been cut sharply over this same period.  As discussed in an earlier blog post, corporate profits, for example, are now taxed at an effective rate of less than 20%, based on what is actually paid after all the legal deductions and credits are included.  And this tax rate has fallen steadily over time.  The current 20% rate is less than half the effective rate that applied in the 1950s and 1960s, when the effective rate averaged almost 45%.  And the tax rate on long-term capital gains, as would apply to returns on capital to individuals, fell from a peak of just below 40% in the mid-1970s to just 15% following the Bush II tax cuts, and to 20% since 2013.

Such sharp cuts in taxes on profits imply that the after-tax rate of return on assets has risen sharply (while the before-tax rate of return, shown in the chart above, has been flat).  Yet despite this, private investment has fallen steadily since the early 1980s as a share of GDP.
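
A rough back-of-the-envelope calculation shows how large this effect is.  It applies the effective tax rates cited above to the roughly 11% pre-tax return; using the corporate rates against the broad measure of the return to capital is a simplification, for illustration only:

```python
# A flat pre-tax return combined with a falling effective tax rate implies a
# sharply rising after-tax return.  The 11% return and the effective rates are
# those cited in the text; this is illustrative only.

def after_tax_return(pre_tax: float, effective_tax_rate: float) -> float:
    return pre_tax * (1.0 - effective_tax_rate)

pre_tax = 0.11  # roughly the pre-tax return shown in the chart above

print(f"1950s-60s (effective rate ~45%):    {after_tax_return(pre_tax, 0.45):.1%}")  # ~6.1%
print(f"Recent years (effective rate <20%): {after_tax_return(pre_tax, 0.20):.1%}")  # ~8.8%
```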

Such explanations for the fall in private investment since the early 1980s are therefore questionable.  However, the purpose of this blog post is not to debate them.  Economists are good at coming up with models, possibly convoluted, which can explain things ex post.  Several could apply here.

Rather, I would suggest that there might be an alternative explanation for why private investment has been declining.  While it is consistent with basic economics, I have not seen it made before.  This explanation focuses on the stagnant real wages seen since the early 1980s, and the impact this has on the decision whether or not to invest.

D.  The Impact of Low Real Wages

Real wages have stagnated in the US since the early 1980s, as has been discussed in earlier posts on this blog (see in particular this post).  The chart below, updated to the most recent figures available, compares the real median wage since 1979 (the earliest year available for this data series) to real GDP per worker employed:

Real GDP per Worker versus Real Median Wage, 1979Q1 to 2016Q2

Real median wages have been flat overall:  Just 3% higher in 2016 than they were 37 years before.  But real GDP per worker is almost 60% higher over this same period.  This has critically important implications for both private investment and for productivity growth.  To sum up in one line the discussion that will follow below, there is less and less reason to invest in new, productivity-enhancing capital if labor is available at a stagnant real wage that has changed little in 37 years.
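
Put in annual terms, the same cumulative figures imply the following back-of-the-envelope growth rates (a simple calculation using the numbers just cited, not a separate data series):

```python
# Implied average annual growth rates from the cumulative 1979-2016 changes
# cited above: about 3% for the real median wage and about 60% for real GDP
# per worker, over roughly 37 years.

def implied_annual_rate(cumulative_change: float, years: float) -> float:
    return (1.0 + cumulative_change) ** (1.0 / years) - 1.0

years = 37.0
print(f"Real median wage:    {implied_annual_rate(0.03, years):.2%} per year")   # about 0.08%
print(f"Real GDP per worker: {implied_annual_rate(0.60, years):.2%} per year")   # about 1.28%
```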

Traditional economics, as commonly taught, would find it difficult to explain the observed stagnation in real wages while productivity has risen (even if at a slower pace than before). A core result taught in microeconomics is that in “perfectly competitive” markets, labor will be paid the value of its marginal product.  One would not then see a divergence such as that seen in this chart between growth in productivity and a lack of growth in the real wage.

(The more careful observers among the readers of this post might note that the productivity curve shown here is for average productivity, and not the marginal productivity of an extra worker.  This is true.  Marginal productivity for the economy as a whole cannot be easily observed, nor indeed even be well defined.  However, one should note that the average productivity curve, as shown here, is rising over time.  This can only happen if marginal productivity on new investments is above average productivity at any point in time.  For other reasons, the real average wage would not rise permanently above average productivity (there would be an “adding-up” problem otherwise), but the theory would still predict a rise in the real wage with the increase in observed productivity.)

There are, however, clear reasons why workers might not be paid the value of their marginal product in the real world.  As noted, the theory applies in markets that are assumed to be perfectly competitive, and there are many reasons why this is not the case in the world we live in.  Perfect competition assumes not only that both parties to the transaction (the workers and the employers) have complete information on the opportunities available in the market and on the abilities of the individual worker, but also that there are no costs to switching to an alternative worker or employer.  If there is a job on the other side of the country that would pay the individual worker a bit more, then the theory assumes the worker will switch to it.  But there are, of course, significant costs to moving to the other side of the country.  Furthermore, there will be uncertainty over what the abilities of any individual worker will be, so employers will normally seek to keep the workers they already have to fill their needs (as they know what these workers can do), rather than take a risk on a largely unknown new worker who might be willing to work for a lower wage.

For these and other reasons, labor markets are not perfectly competitive, and one should not then be surprised to find workers are not being paid the value of their marginal product.  But there is also an important factor coming from the macroeconomy. Microeconomics assumes that all resources, including labor resources, are being fully employed.  But unemployment exists and is often substantial.  Additional workers can then be hired at the current wage, without a need for the firm to raise that wage.  And that will hold whether or not the productivity of those workers has risen.

In such an environment, when unemployment is substantial one should not be surprised to find a divergence between growth in productivity and growth in the real wage.  And while the unemployment rate has of course fluctuated sharply from year to year with the business cycle, its simple average since 1979 has been 6.4%.  This is well in excess of what is normally considered the full employment rate of unemployment (of 5% or less).  Macro policy (both fiscal and monetary) has, in most of the years since 1979, not done a very good job of ensuring sufficient aggregate demand in the economy to allow all workers who want to be employed to in fact be employed.

In such an environment, of workers being available for hire at a stagnant real wage which over time diverges more and more from their productivity, consider the investment decision a private firm faces.  Suppose they see a market opportunity and can sell more. To produce more, they have two options.  They can hire more labor to work with their existing plant and equipment to produce more, or they can invest in new plant and equipment.  If they choose the latter, they can produce more with fewer workers than they would otherwise need at the new level of production.  There will be more output per unit of labor input, or put another way, productivity will rise if the latter option is chosen.

But in an economy where labor is available at a flat real wage that has not changed in decades, the best choice will often simply be to hire more labor.  The labor is cheap.  New investment has a cost, and if the cost of the alternative (hire more labor) is low enough, then it is more profitable for the firm simply to hire more labor.  Productivity in such a case will then not go up, and may indeed even go down.  But this could be the economically wise choice, if labor is cheap enough.

Viewed in this way, one can see that the interpretation of many conservatives on the relationship between productivity growth and the real wage has it backwards.  Real wages have not been stagnant because productivity growth has been slow.  Labor productivity since 1979 has grown by a cumulative 60%, while real median wages have been basically flat.

Rather, the causation may well be going the other way.  Stagnant and low real wages have led to less and less of an incentive for private firms to invest.  And such a cut-back is precisely what we saw in the chart above on private (as well as public) investment as a share of GDP.  With less investment, the pace of productivity growth has then slowed.

As a reflection of this confusion, conservatives have denounced any effort to raise wages, asserting that if this is done, jobs will be lost as firms choose instead to invest and automate.  They assert that raising the minimum wage, which is currently lower in real terms than what it was when Harry Truman was president, would lead to minimum wage workers losing their jobs.  As a former CEO of McDonalds put it in a widely cited news report from last May, a $15 minimum wage would lead to “a job loss like you can’t believe.”   Fast food outlets like McDonalds would then find it better to invest in robotic arms to bag the french fries, he said, rather than hire workers to do this.

This is true.  The confusion comes from the widespread presumption that this is necessarily bad.  Outlets like McDonalds would then require fewer workers, but they would still need workers (including to operate the robotic arms), and those workers would be more productive.  They could be paid more, and would be if the minimum wage is raised.

The error in the argument comes from the presumption that the workers being employed at the current minimum wage of $7.25 an hour do not and can not possess the skills needed to be employed in some other job.  There is no reason to believe this to be the case.  There was no problem with ensuring workers could be fully employed at a minimum wage which in real terms was higher in 1950, when Harry Truman was president, than what it is now.  And average worker productivity is 2.4 times higher now than what it was then.

Ensuring full employment in the economy as a whole is not a responsibility of private business.  Rather, it is a government responsibility.  Fiscal and monetary policy need to be managed so that labor markets are tight enough to ensure all workers who want a job can get a job, while not so tight as to lead to inflation.

Following the economic collapse at the end of the Bush administration in 2008, monetary policy did all it could to try to ensure sufficient aggregate demand in the economy (interest rates were held at or close to zero).  But monetary policy alone could not be enough when the economy had collapsed as far as it did in 2008.  It needed to be complemented by supportive fiscal policy.  While Obama’s initial stimulus package was critical to stabilizing the economy, it did not go far enough and was allowed to run out.  And government spending from 2010 onwards was then cut, acting as a drag which kept the pace of recovery slow.  The economy has only in the past year returned to close to full employment.  It is not a coincidence that real wages are finally starting to rise (as seen in the chart above).

E.  Conclusion

Productivity growth is key in any economy.  Over the long run, living standards can only improve if productivity does.  Hence there is reason to be concerned with the slower pace of productivity growth seen since the early 1980s, and especially in recent years.

Investment, both public and private, is what leads to productivity growth, but the pace of investment has slowed from the levels seen in the 1950s and 60s.  The cause of the decline in public investment is clear:  Conservative politicians have slowed or even blocked public investment.  The result is obvious in our public infrastructure:  It is overused, under-maintained, and often an embarrassment.

The cause of the slowdown in private investment is less obvious, but equally important. First, one cannot blame a decline in private investment on a fall in profitability:  Profitability is higher now than it has been in all but one year since the mid-1960s.

Rather, one needs to recognize that the incentive to invest in productivity-enhancing tools will not be there (or not there to the same extent) if labor can be hired at a wage that has stagnated for decades, and which over time became lower and lower relative to existing productivity.  It then makes more sense for firms to hire more workers to use their existing stock of capital and other equipment, rather than invest in new, productivity-enhancing capital.  And this is what we have observed:  Workers are being hired, but productivity is not growing.

An argument is often made that if firms did indeed invest in capital and equipment that would raise productivity, workers would then lose their jobs.  This is actually true by definition:  If productivity is higher, then the firm needs fewer workers per unit of output than it otherwise would.  But whether more workers would be employed in the economy as a whole does not depend on the actions of any individual firm, but rather on whether fiscal and monetary policy is managed to ensure full employment.

That is, it is the investment decisions of private firms which determine whether productivity will grow or not.  It is the macro management decisions of government which determine whether workers will be fully employed or not.

To put this bluntly, and in simplistic “bumper sticker” type terms, one could say that private businesses are not job creators, but rather job destroyers.  And that is fine.  Higher productivity means that a firm needs fewer workers to produce what they make than would otherwise have been needed, and this is important for ensuring efficiency.  As a necessary complement to this, however, it is the actions of government, through its fiscal and monetary policies, which “creates” jobs by managing aggregate demand to ensure all workers who want to be employed, are employed.

More on the High Cost of the Purple Line: A Comparison to BRT on the Silver Spring to Bethesda Segment

Comparison of Purple Line to BRT Cost, Silver Spring to Bethesda

This is a quick post drawing on a report in today’s Washington Post on the implementation of bus rapid transit (BRT) in Montgomery County, Maryland.  The article notes that one of the early BRT routes planned in the county would run from Burtonsville to Silver Spring down US Highway 29, with an estimated capital cost of $200 million.

This would be a distance of 10.2 miles, so the cost would be $19.6 million per mile on average.  This BRT line is currently slated to stop in Silver Spring, but it would be straightforward to extend it along East-West Highway for a further 3.7 miles to Bethesda. Assuming the same average cost per mile, the capital cost of this addition would be $72 million.

The current plan is for the Purple Line light rail line to cover this same basic route, connecting Silver Spring to Bethesda.  As I have discussed in earlier blog posts, the Purple Line is incredibly expensive, even if one ignores (as the official cost estimates do) the environmental costs of building and operating the line (including the value of parkland destroyed, which is implicitly being valued at zero, as well as the environmental costs from storm water run-off, habitat destruction, hazardous waste issues, higher greenhouse gas emissions, and more).  The current capital cost estimate, following the service and other cuts that Governor Hogan has imposed to bring down costs, is $2.25 billion.  This also does not include the costs that Montgomery County will cover directly for building the Bethesda station, as well as the cost of a utilitarian path to be built adjacent to the train tracks.  The Purple Line would also cost more to operate per rider than the Montgomery County BRT routes are expected to cost, so there would be no offsetting savings from lower operating costs.

The Purple Line would be 16.2 miles long in total.  Using just the $2.25 billion cost figure, this comes to $139 million per mile.  This is extremely high.  Indeed, the Columbia Pike streetcar line in Arlington County, which was recently cancelled due to its high cost, would have cost “only” $117 million per mile, despite being built through a high-density urban corridor for most of its route.

The distance from Silver Spring to Bethesda on the Purple Line will be 4.4 miles if it is built.  This is longer than the direct road route, since the rail line follows a more circuitous path that loops up and around it.  Assuming the cost of this 4.4 miles is the same on average as for the rest of the Purple Line (it might be higher due to the need to build some major bridges, including over Rock Creek), the cost would come to $612 million.

The choice therefore is between spending $612 million to build this segment of the Purple Line from Silver Spring to Bethesda, or spending $72 million by extending the BRT.  The Purple Line cost is 8.5 times as much, and government could save $540 million ( = $612m – $72m) by terminating the Purple Line in Silver Spring and using BRT service instead.
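
The full arithmetic, using only the figures cited above, is simple enough to set out explicitly.  Note that the rounding in the text (to $72 million and $612 million) yields the 8.5 ratio and the $540 million savings; carrying the unrounded per-mile costs through gives essentially the same result:

```python
# Cost comparison using only the figures cited in the text (all in $ millions).

brt_cost_per_mile  = 200.0 / 10.2    # about $19.6 million per mile
rail_cost_per_mile = 2250.0 / 16.2   # about $139 million per mile

brt_extension_cost = brt_cost_per_mile * 3.7    # Silver Spring to Bethesda by BRT
rail_segment_cost  = rail_cost_per_mile * 4.4   # the same segment on the Purple Line

print(f"BRT extension:       ${brt_extension_cost:,.0f} million")               # about $73 million
print(f"Purple Line segment: ${rail_segment_cost:,.0f} million")                # about $611 million
print(f"Ratio:               {rail_segment_cost / brt_extension_cost:.1f}x")    # about 8.4x
print(f"Savings:             ${rail_segment_cost - brt_extension_cost:,.0f} million")  # about $539 million
```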

As an earlier blog post argued, new thinking is necessary if we are to resolve the very real transportation issues we face in this region.  This is one more example of what could be done.  A half billion dollar savings is not small.