The Purple Line Ridership Forecasts Are Wrong: An Example of Why We Get Our Infrastructure Wrong

Executive Summary

There are several major problems with the forecast ridership figures for the Purple Line, a proposed 16-mile light rail line that would pass in a partial arc around Washington, DC, in suburban Maryland.  The forecasts, as presented and described in the “Travel Forecasts Results Technical Report” of the Final Environmental Impact Statement for the project, are in a number of cases simply impossible.

Problems include:

a)  Forecast ridership in 2040 between many of the Transit Analysis Zone pairs along the Purple Line corridor would be higher on the Purple Line itself than it would be for total transit ridership (which includes bus, Metrorail, and commuter rail ridership, in addition to ridership on the Purple Line) between these zones.  This is impossible. Such cases are not only numerous (found in more than half of the possible cases for zones within the corridor) but often very large (12 times as high in one case).  If the forecasts for total transit ridership are correct, then correcting for this, with Purple Line ridership some reasonable share of the totals, would lead to far lower figures for Purple Line ridership.

b)  Figures on forecast hours of user benefits (primarily forecast time savings from a rail line) in a scenario where the Purple Line is built, as compared to one where it is not, are often implausibly high.  In two extreme cases, the figures indicate average user benefits per trip between two specific zones, should the Purple Line be built, of 9.7 hours and 11.5 hours.  These cannot be right; one could walk faster.  But other figures on overall user benefits are also high, leading to an overall average predicted benefit of 30 minutes per trip.  Even with adjustments to the pure time savings that assign a premium to rail service, this is far too high and overestimates benefits by at least a factor of two or even three.  The user benefit figures are important for two reasons:  1) An overestimate leads to a cost-effectiveness estimate (an estimate of the cost of the project per hour of user benefits) that will be far off;  and 2) The figures used for user benefits from taking the proposed rail line enter directly into the estimation of ridership on the rail line (as part of the choice on whether to take the rail line rather than some other transit option, or to drive).  If the user benefit figures are overstated, actual ridership will fall short of the forecast.  If they are overstated by a large margin, actual ridership will fall far short.

c)  Figures on ridership from station to station are clearly incorrect.  They indicate, for example, that far more riders would exit at the Bethesda station (an end point on the line) each day (19,800) than would board there (10,210).  This is impossible.  More significantly, the figures indicate system capacity must be sufficient to handle 21,400 riders each day on the busiest segment (on the segment leaving Silver Spring heading towards Bethesda).  Even if the overall ridership numbers were correct, the figure for ridership on this segment is clearly too high (and it is this number which leads to the far higher number of those exiting the system in Bethesda than would enter there each day).  The figure is important as the rail line has been designed to a capacity sufficient to carry such a load.  With the true number far lower, there is even less of a case for investing in an expensive rail option.  Upgraded bus services could provide the capacity needed, and at far lower cost.

There appear to be other problems as well.  But even just these three indicate there are major issues with these forecasts.  This may also explain why a number of independent observers have noted for some time that the Purple Line ridership forecasts look implausibly high.  The figure for Purple Line ridership in 2040 of 69,300 per day is three times the average daily ridership actually observed in 2012 on 31 light rail lines built in the US over the last three decades.  The forecast would also be 58% higher than the ridership observed on the highest-ridership line among those 31.  Yet the Purple Line would pass solely through suburban neighborhoods, of generally medium to low density.  Most of these other light rail lines in the US serve travel to and from downtown areas.

The causes of these errors in the ridership forecasts for the Purple Line are not always clear.  But the issues suggest at a minimum that quality checks were insufficient.  And while the Purple Line is just one example, inadequate attention to such issues might explain in part why ridership forecasts for light rail lines have often proven to be substantially wrong.

 

A.  Introduction

The Purple Line is a proposed light rail line that would be built in suburban Maryland, stretching in a partial arc from east of Washington, DC, to north of the city.  I have written several posts previously in this blog on the proposed project (see the posts here, here, here, and here) and have been highly critical of it.  It is an extremely expensive project (the total cost to be paid to the private concessionaire to build and then operate the line for 30 years will sum to $5.6 billion, and other costs borne directly by the state and/or local counties will add at least a further $600 million to this).  And the state’s own analyses of the project found that upgraded bus services (including any one of several bus rapid transit, or BRT, options) providing the transit services that are indeed needed in the corridor would be both cheaper and more cost-effective.  Such alternatives would also avoid the environmental damage that is inevitable with the construction of dual rail lines along the proposed route, including the destruction of 48 acres of forest cover, the filling in of important wetland areas, and the destruction of a linear urban park that has the most visited trail in the state.

The state’s rationale for building a rail line rather than providing upgraded bus services is that ridership will be so high that at some point in the future (beyond 2040) only rail service would be able to handle the load.  But many independent analysts have long questioned those ridership forecasts.  A study from 2015 found that the forecast ridership on the Purple Line would be three times as high as the average ridership actually observed in 2012 on 31 light rail lines built in the US over the last three decades.  Furthermore, the forecast Purple Line ridership would be 58% higher than the ridership actually observed on the highest-ridership line among those 31.  And with the Purple Line route passing through suburban areas of generally medium to low density, in contrast to routes to and from major downtown areas for most of those 31, many have concluded the Purple Line forecasts are simply not credible.

Why did the Purple Line figures come out so high?  The most complete description of the ridership forecasts provided by the State of Maryland appears in the chapter titled “Travel Forecasts Results Technical Report”, which is part of Volume III of the Final Environmental Impact Statement (FEIS) for the Purple Line, dated August 2013 (which I will hereafter often refer to simply as the “FEIS Travel Forecasts chapter”).  A close examination of that material indicates several clear problems with the figures.  This post will discuss three, although there might well be more.

These three are:

a)  In most of the 49 possible combinations of travel between the 7 Transit Analysis Zones (TAZs) defined along the Purple Line route, the FEIS forecast of 2040 ridership on the Purple Line alone would be higher (in a number of cases far higher) than the total number of transit riders traveling among those zones (by bus, Metrorail, commuter rail, and the Purple Line itself).  This is impossible.

b)  Figures on user benefits per Purple Line trip (primarily the time forecast to be saved by use of a rail line) are implausibly high.  In two cases they come to 9.7 hours and 11.5 hours, respectively, per trip.  This cannot be.  One could walk faster.  But these figures for minutes of user benefits per trip were then passed through in the computations to the total forecast hours of user benefits that would accrue as a consequence of building the Purple Line, thus grossly over-estimating the benefits. Such user benefit figures would also have been used in the estimation of how many will choose to ride the Purple Line.  If these user benefit figures are overestimated (sometimes hugely overestimated), then the Purple Line ridership forecasts will be overestimated.

c)  The figure presenting rail ridership by line segment from station to station (which was then used to determine what ridership capacity would be needed to service the proposed route) shows almost twice as many riders exiting at the Bethesda station (an end of the line) as would board there each day (19,800 exiting versus 10,210 boarding each day).  While there could be some small difference (i.e. some people might take transit to work in the morning, and then get a car ride home with a colleague in the evening), it could not be so large.  The figures would imply that Bethesda would be accumulating close to 9,600 new residents each day.  The forecast ridership by line segment is critical because it determines the capacity the transit system will need in order to serve such loads.  With these figures overstated, the design capacity is too high, and there is even less of a rationale for building a rail line as opposed to simply upgrading bus services in the corridor.

These three issues are clear just from an examination of the numbers presented.  But as noted, there might well be more.  We cannot say for sure what all the errors might be as the FEIS Travel Forecasts chapter does not give a complete set of the numbers and assumed relationships needed as inputs to the analysis and then resulting from it, nor more than just a cursory explanation of how the results were arrived at.  But with anomalies such as these, and with no explanations for them, one cannot treat any of the results with confidence.

And while necessarily more speculative, I will also discuss some possible reasons for why the mistakes may have been made.  This matters less than the errors themselves, but might provide a sense for why they arose.  Broadly, while the FEIS Travel Forecasts chapter (and indeed the entire FEIS report) only shows the Maryland Transit Administration (MTA) as the source for the documents, the MTA has acknowledged (as would be the norm) that major portions of the work – in particular the ridership forecasts – were undertaken or led by hired consulting firms.  The consulting firms use standard but large models to prepare such ridership forecasts, but such models must be used carefully to ensure reliable results.  It is likely that results were generated by what might have been close to a “black box” to the user, that there were then insufficient quality checks to ensure the results were reasonable, and that the person assigned to write up the results (who may well have differed from the person generating the numbers) did not detect these anomalous results.

I will readily admit that this is speculation as to the possible underlying causes, and that I could be wrong on this.  But it might explain why figures were presented in the final report which were on their face impossible, with no explanation given.  In any case, what is most important is the problems themselves, regardless of the possible explanations on why they arose.

Each of the three issues will be taken up in turn.

B.  Forecast Ridership on the Purple Line Alone Would Be Higher in Many Cases than Total Transit Ridership

The first issue is that, according to the forecasts presented, there would be more riders on the Purple Line alone between many of the Transit Analysis Zones (TAZs) than the number of riders on all forms of transit.  This is impossible.

Forecast Ridership on All Transit Options in 2040:

Forecast Ridership on Purple Line Alone in 2040:

These two tables are screenshots of the upper left-hand corners of Tables 16 and 22 from the FEIS Travel Forecasts chapter.  While they show the key numbers, I would recommend that the reader examine the full tables in the original FEIS Travel Forecasts chapter.  Indeed, if your computer can handle it, it would be best to open the document twice, in two separate browser windows, and then scroll down to the two tables to allow them to be compared side by side on your screen.

The tables show forecast ridership in 2040 on all forms of transit in the “Preferred Alternative” scenario where the Purple Line is built (Table 16), or for the sub-group of riders just on the Purple Line (Table 22).  And based on the total ridership figures presented at the bottoms of the full tables, the titles appear to be correct.  That is, Table 16 forecasts that total transit ridership in the Washington metro region would be about 1.5 million trips per day in 2040, which is plausible (Table 13 says it was 1.1 million trips per day in 2005, which is consistent with WMATA bus and rail ridership, where WMATA accounts for 80 – 85% of total ridership in the region).  And Table 22 says the total number of trips per day on the Purple Line in 2040 would be 68,650, which is broadly consistent with (although somewhat different from, with no explanation for the difference) the figures given elsewhere in the chapter on forecast total Purple Line trips per day in 2040 (69,330 in Table 24, for example, or 69,300 in Tables 25 and 26, where that small difference is probably just rounding).  So it does not appear that the tables were mislabeled, which was my first thought.

The full tables show the ridership between any two pairs of 22 defined Transit Analysis Zones (TAZs), in production/attraction format (which I will discuss below).  The 22 TAZs cover the entire Washington metro region, and are defined as relatively compact geographic zones along the Purple Line corridor and then progressively larger geographic areas as one goes further and further away.  Seven TAZs are defined along the Purple Line corridor itself (starting at the Bethesda zone and ending at the New Carrollton zone), while Northern Virginia has just two zones (one of which, labeled “South”, also covers most of Southern Prince George’s County in Maryland).  See the map shown as Figure 4 on page 13 of the FEIS Travel Forecasts chapter for the full picture.  This aggregation to a manageable set of TAZs, with a focus on the Purple Line corridor itself, is reasonable.

The tables then show the forecast ridership between any two TAZ pairs.  For example, Table 16 says there will on average be 1,589 riders on all forms of transit each day in 2040 between Bethesda (TAZ 1, as a “producer” zone) and Silver Spring (TAZ 3, as an “attractor” zone).  But Table 22 says there will be 2,233 riders each day on average between these same two TAZs on the Purple Line alone.  This is impossible.  And there are many such impossibilities.  For the 49 possible pairs (7 x 7) for the 7 TAZs directly on the Purple Line corridor, more than half (29) have more riders on the Purple Line than on all forms of transit.  And for one pair, between Bethesda (TAZ 1) and New Carrollton (TAZ 7), the forecast is that there would be close to 12 times as many riders taking the Purple Line each day as would take all forms of public transit (which includes the Purple Line and more).

Furthermore, if one adds up all the transit ridership between these 49 possible pairs (where the totals are presented at the bottom of the tables; see the FEIS Travel Forecasts chapter), the total number of trips per day on all forms of transit sums to 29,890 among these 7 TAZs (Table 16), while the total for the Purple Line alone sums to 30,560 (Table 22).

How could such a mistake have been made?  One can only speculate, as the FEIS chapter had next to no description of the methods they followed.  One instead has to infer a good deal based on what was presented, in what sequence, and from what is commonly done in the profession to produce such forecasts.  This goes into fairly technical issues, and readers not interested in these details can skip directly to the next section below.  But it will likely be of interest at least to some, provides a short review of the modeling process commonly used to generate such ridership forecasts, and will be helpful to an understanding of the other two obvious errors in the forecasts discussed below.

To start, note that the tables say they are being presented in “production/attraction” format.  This is not the more intuitive “origin/destination” format that would have been more useful to show.  And I suspect that over 99% of readers have interpreted the figures as if they are showing travel between origin and destination pairs.  But that is not what is being shown.

The production/attraction format is an intermediate stage in the modeling process that is commonly used for such forecasts.  That modeling process is called the “four-step model”.  See this post from the Metropolitan Washington Council of Governments (MWCOG) for a non-technical short description, or this post for a more academic description.  The first step in the four-step model is to try to estimate (via a statistical regression process normally) how many trips will be “produced” in each TAZ by households and by businesses, based on their characteristics.  Trips to work, for example, will be “produced” by households at the TAZ where they live, and “attracted” by businesses at the TAZ where those businesses are located.  The number of trips so produced will be forecast based on some set of statistical regression equations (with parameters possibly taken from what might have been estimated for some other metro area, if the data does not exist here).  The number of trips per day by household will be some function of average household size in the TAZ, average household income, how many cars the households own, and other such factors.  Trips “attracted” by businesses in some TAZ will similarly be some function of how many people are employed by businesses in that TAZ, perhaps the nature of the businesses, and so on.  Businesses will also “produce” their own trips, for example for delivery of goods to other businesses, and statistical estimates will be made also for such trips.
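To make this trip-generation step concrete, here is a minimal sketch of the kind of calculation involved.  All of the coefficients and zone attributes are invented for illustration; they are not the values used in the MWCOG or FEIS work.

```python
# Sketch of the trip-generation step of a four-step model.
# All coefficients and TAZ attributes below are invented for illustration;
# they are not the actual MWCOG model values.

taz_data = {
    # taz_id: (households, avg household size, avg autos per household, jobs)
    1: (12_000, 2.1, 1.2, 40_000),
    2: (18_000, 2.6, 1.6, 15_000),
    3: (15_000, 2.4, 1.4, 30_000),
}

def trips_produced(households, hh_size, autos):
    # Trips "produced" per household, as a simple function of household
    # characteristics (hypothetical coefficients).
    rate_per_household = 3.5 + 1.2 * (hh_size - 2.5) - 0.4 * (autos - 1.5)
    return households * max(rate_per_household, 0.0)

def trips_attracted(jobs):
    # Trips "attracted", driven here only by employment (hypothetical coefficient).
    return 1.7 * jobs

productions = {z: trips_produced(h, s, a) for z, (h, s, a, _) in taz_data.items()}
attractions = {z: trips_attracted(j) for z, (_, _, _, j) in taz_data.items()}

# The two totals never come out equal, so one is scaled to the other
# before the trip-distribution step.
scale = sum(productions.values()) / sum(attractions.values())
attractions = {z: v * scale for z, v in attractions.items()}
print(productions, attractions)
```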

Such estimates are unfortunately quite rough (statistical error is high), and the totals calculated for the region as a whole of the number of trips “produced” and the number of trips “attracted” will always be somewhat different, and often far different.  But by definition the totals have to be the same, as all trips involve going from somewhere to somewhere. Hence some scaling process will commonly be used to equate the totals.

This will then yield the total number of trips produced in each TAZ, and the total number attracted to each TAZ.  But this does not tell us yet the distribution of the trips.  That is, one will have the total number of trips produced in TAZ 1, say, but not how many go from TAZ 1 to TAZ 2 or to TAZ 3 or to TAZ 4, and so on.  For this, forecasters generally assume the travel patterns will fit what is called a “gravity model”, where it is assumed the trips from each TAZ will be distributed across the “attractor” TAZs in a statistical relationship that increases with the “mass” of the attracting TAZ (i.e. the number of jobs there) and decreases with the distance between the zones (typically measured in terms of travel times).  This is also rough, and some iterative rescaling process will be needed to ensure the trips produced in each TAZ and attracted to each TAZ sum to the already determined totals for each.
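As a rough illustration of what this distribution step looks like, here is a toy gravity model with the standard iterative proportional fitting (Furness) balancing loop.  The impedance function and every number in it are invented; the actual models are far larger and more elaborate.

```python
import numpy as np

# Toy gravity-model trip distribution with iterative balancing (Furness / IPF).
# Productions, attractions, and travel times are invented for illustration.
P = np.array([30_000.0, 45_000.0, 25_000.0])       # trips produced by each TAZ
A = np.array([40_000.0, 20_000.0, 40_000.0])       # trips attracted to each TAZ
time = np.array([[ 5.0, 20.0, 35.0],
                 [20.0,  5.0, 25.0],
                 [35.0, 25.0,  5.0]])               # TAZ-to-TAZ travel times (minutes)

friction = time ** -2.0           # attraction falls off with travel time (assumed form)
T = np.outer(P, A) * friction     # unbalanced "seed" trip table

# Rescale rows and columns until row sums match productions and
# column sums match attractions.
for _ in range(50):
    T *= (P / T.sum(axis=1))[:, None]
    T *= (A / T.sum(axis=0))[None, :]

print(np.round(T))                # trips from each production TAZ to each attraction TAZ
```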

This all seems crude, and it is.  Many might ask why not determine such trip distributions from a straightforward survey of households asking where they travel to.  Surveys are indeed important, and help inform what the parameters of these functions might be, but one must recognize that any practicable survey could not suffice.  The 22 TAZs defined for the Purple Line analysis were constructed (it appears; see below) from a more detailed set of TAZs defined by the Metropolitan Washington Council of Governments.  But MWCOG now identifies 3,722 separate TAZs for the Washington metro region, and travel between them would potentially involve 13.9 million possible pairs (3,722 squared)!  No survey could cover that.  Hence MWCOG had to use some form of a gravity model to allocate the trips from each zone to each zone, and that is indeed precisely what they say they did.

At this point in the process, one will have the total number of trips produced by each TAZ going to each TAZ as an attractor, which for 2040 appears as Table 8 in the FEIS chapter. This covers trips by all options, including driving.  The next step is to separate the total number of trips between those taken by car from those taken by transit, and then, at the level below, the separation of those taken by transit into each of the various transit options (e.g. Metrorail, bus, commuter rail, and the Purple Line in the scenario where it is built). This is the mode choice issue, and note that these are discrete choices where one chooses one or the other.  (A combined option such as taking a bus to a Metrorail station and then taking the train would be modeled as a separate mode choice.)  This separation into various travel modes is normally then done by what is called a nested logit (or logistic) regression model, where the choice is assumed to be a function of variables such as travel time required, out of pocket costs (such as for fares or tolls or parking), personal income, and so on.
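As a sketch of the mode-choice step, the following uses a flat multinomial logit rather than the full nested structure, with invented utility coefficients, travel times, and costs; it is meant only to show the mechanics of the split.

```python
import math

# Sketch of a mode-choice split using a simple multinomial logit.
# The real models are nested; all coefficients, times, and costs here are invented.

def utility(in_vehicle_min, wait_walk_min, cost_dollars, mode_constant):
    # Utility falls with time and cost; the mode-specific constant stands in
    # for unmeasured preferences (e.g. a subjective premium for rail over bus).
    return (mode_constant
            - 0.025 * in_vehicle_min
            - 0.050 * wait_walk_min       # out-of-vehicle time weighted more heavily
            - 0.100 * cost_dollars)

modes = {
    # mode: utility for one hypothetical TAZ-to-TAZ trip
    "drive":      utility(in_vehicle_min=25, wait_walk_min=5,  cost_dollars=6.00, mode_constant=0.0),
    "bus":        utility(in_vehicle_min=40, wait_walk_min=15, cost_dollars=2.00, mode_constant=-0.5),
    "light_rail": utility(in_vehicle_min=35, wait_walk_min=10, cost_dollars=2.00, mode_constant=-0.2),
}

denominator = sum(math.exp(v) for v in modes.values())
shares = {mode: math.exp(v) / denominator for mode, v in modes.items()}
print(shares)   # predicted probability of each mode for this TAZ pair
```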

Up to this stage, the modeling work as described above would have been carried out by MWCOG as part of its regular work program (although in the scenario of no Purple Line).  Appendix A of the FEIS Travel Forecasts chapter says specifically that the modelers producing the Purple Line ridership forecasts started from the MWCOG model results (Round 8.0 of that model for the FEIS forecasts).  By aggregating from the TAZs used by MWCOG (3,722 currently, but possibly some different number in the Round 8.0 version) to the 22 defined for the Purple Line work, the team doing the FEIS forecasts would have been able to arrive at the table showing total daily trips by all forms of transportation (including driving) between the 22 TAZs (Table 8 of the FEIS chapter), as well as the total trips by some form of transit between the 22 in the base case of no Purple Line being built (the “No Build” alternative; Table 14 of the FEIS chapter).

The next step was then to model how many total transit trips would be taken in the case where the Purple Line has been built and is operating in 2040, as well as how many of such transit trips will be taken on the Purple Line specifically.  The team producing the FEIS forecasts would likely have taken the nested logit model produced by MWCOG, and then adjusted it to incorporate the addition of the Purple Line travel option, with consequent changes in the TAZ to TAZ travel times and costs.  At the top level they then would have modeled the split in travel between by car or by any form of transit, and at the next level then modeled the split of any form of transit between the various transit options (bus, Metrorail, commuter rail, and the Purple Line itself).

This then would have led to the figures shown in Table 16 of the FEIS chapter for total transit trips each day by any transit mode (with the Purple Line built), and Table 22 for trips on the Purple Line only.  Portions of those tables are shown above.  They are still in “production/attraction” format, as noted in their headings.

While understandable as a step in the process by which such ridership forecasts are generated (as just described), trips among TAZs in production/attraction format are not terribly interesting in themselves.  They really should have gone one further step, which would have been to convert from a production/attraction format to an origin/destination format.  The fact that they did not is telling.

As discussed above, a production/attraction format will show the number of trips between each production TAZ and each attraction TAZ.  Thus a regular commute for a worker from home (production TAZ) to work (attraction TAZ) each day will appear as two trips each day between the production TAZ and the attraction TAZ.  Thus, for example, the 1,589 trips shown as total transit trips (Table 16) between TAZ 1 (Bethesda) and TAZ 3 (Silver Spring) includes not only the trips by a commuter from Bethesda to Silver Spring in the morning, but also the return trip from Silver Spring to Bethesda in the evening.  The return trip does not appear in this production/attraction format in the 4,379 trips from Silver Spring (TAZ 3) to Bethesda (TAZ 1) element of the matrix (see the portion of Table 16 shown above).  The latter is the forecast of the number of trips each day between Silver Spring as a production zone and Bethesda as an attractor.

This is easy to confuse, and I suspect that most readers seeing these tables are so confused.  What interests the reader is not this production/attraction format of the trips, which is just an intermediate stage in the modeling process, but rather the final stage showing trips from each origin TAZ to each destination TAZ.  And it only requires simple arithmetic to generate that, if one has the underlying information from the models on how many trips were produced from home to go to work or to shop or for some other purpose (where people will always then return home each day), and separately how many were produced by what they call in the profession “non-home based” activities (such as trips during the workday from business to business).

I strongly suspect that the standard software used for such models would have generated such trip distributions in origin/destination format, but they are never presented in the FEIS Travel Forecasts chapter.  Had they been, one would have seen what the forecast travel would have been between each of the TAZ pairs in each of the two possible directions.  One would probably have observed an approximate (but not necessarily exact) symmetry in the matrix, as travel from one TAZ to another in one direction will mostly (but not necessarily fully) be matched by a similar flow in the reverse direction, when added up over the course of a day.  For that reason also, each row total will match, or almost match, the corresponding column total.  But that will not be the case in the production/attraction format.
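For home-based trips the conversion really is simple arithmetic.  Here is a minimal sketch using an invented two-zone matrix (non-home-based trips would be handled separately):

```python
import numpy as np

# Sketch of converting home-based trips from production/attraction (P/A) format
# to origin/destination (O/D) format.  The matrix is invented for illustration.
# PA[i, j] = daily home-based trips produced in TAZ i and attracted to TAZ j;
# in this format a round-trip commute contributes 2 to PA[home, work].
PA = np.array([[  500.0, 1_600.0],
               [4_400.0,   800.0]])

# In O/D terms, half of those trips run production -> attraction (the morning
# leg) and half run attraction -> production (the return leg).
OD = 0.5 * PA + 0.5 * PA.T

print(OD)               # the O/D matrix, symmetric for purely home-based travel
print(OD.sum(axis=1))   # trips leaving each TAZ per day
print(OD.sum(axis=0))   # trips arriving in each TAZ per day -- the totals match,
                        # unlike the row and column totals of the P/A table
```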

That the person writing up the results for this FEIS chapter did not understand that an origin/destination presentation of the travel would have been of far greater interest to most readers than the production/attraction format is telling, I suspect.  They did not see the significance.  Rather, what was written up was mostly simply a restatement of some of the key numbers from the tables, with little to no attempt to explain why they were what they were.  It is perhaps then not surprising that the author did not notice the impossibility of the forecast ridership between many of the TAZ pairs being higher on the Purple Line alone (Table 22) than the total ridership on all transit options together (Table 16).

C.  User Benefits and Time Savings

The modeling exercise also produced a forecast of “user benefits” in the target year. These benefits are measured in units of time (minutes or hours) and arise primarily from the forecast savings in the time required for a trip, where estimates are made as to how much less time will be required for a trip if one has built the light rail line.  I would note that there are questions as to whether there would in fact be any time savings at all (light rail lines are slow, particularly in designs where they travel on streets with other traffic, which will be the case here for much of the proposed route), but for the moment let’s look at what the modelers evidently assumed.

“User benefits” then include a time-value equivalent of any out-of-pocket cost savings (to the extent any exists; it will be minor here for most), plus a subjective premium for what is judged to be the superior quality of a ride on a rail car rather than a regular bus. The figures in the AA/DEIS (see Table 6-2 in Chapter 6) indicate a premium of 19% was added in the case of the medium light rail alternative – the alternative that evolved into what is now the Purple Line.  The FEIS Travel Forecasts chapter does not indicate what premium they now included, but presumably it was similar.  User benefits are thus largely time savings, with some markup to reflect a subjective premium.

Forecast user benefits are important for two reasons.  One is that it is such benefits which are, to the extent they in fact exist, the primary driver of predicted ridership on the Purple Line, i.e. travelers switching to the Purple Line from other transit options (as well as from driving, although the forecast shifts out of driving were relatively small).  Second, the forecast user benefits are also important as they provide the primary metric used to estimate the benefit of building the Purple Line. Thus if the inputs used to indicate what the time savings would be by riding the Purple Line as opposed to some other option were over-estimated, one will be both over-estimating ridership on the line and over-estimating the benefits.

And it does appear that those time savings and user benefits were over-estimated.  Table 23 of the FEIS chapter presents what it labels the “Minutes of User Benefits per Project Trip”.  A screenshot of the upper left corner, focussed on the travel within the 7 TAZs through which the Purple Line would pass, is:

Note that while the author of the chapter never says what was actually done, it appears that Table 23 was calculated implicitly by dividing the figures in Table 21 of the FEIS Travel Forecasts chapter (showing calculated total hours of time savings daily for each TAZ pair) by those in Table 22 (showing the number of daily trips on the Purple Line, the same table as was discussed in the section above), and then converting the result from hours to minutes.  This would have been a reasonable approach, given that the time savings figures include the time saved by all the forecast shifts among transit alternatives (as well as from driving) should the new rail line be built.  The Table 23 numbers thus show the overall time saved across all travel modes, per Purple Line trip.

But the figures are implausible.  Taking the most extreme cases first, the table says that there would be an average of 582 minutes of user benefits per trip for travel on the Purple Line between Bethesda (TAZ 1) and Riverdale Park (TAZ 6), and 691 minutes per trip between Bethesda (TAZ 1) and New Carrollton (TAZ 7).  This works out to user benefits per trip of 9.7 hours and 11.5 hours respectively!  One could walk faster!  And this does not even take into account that travel between Bethesda and New Carrollton would be faster on Metrorail (assuming the system is still functioning in 2040).  The FEIS Travel Forecasts chapter itself, in its Table 6, shows that Metrorail between these two stations currently requires 55 minutes.  That time should remain unchanged in the future, assuming Metrorail continues to operate.  But traveling via the Purple Line would require 63 minutes (Table 11) for the same trip.  There would in fact be no time savings at all, but rather a time cost, if there were any riders between those two points.
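The arithmetic behind these extreme cases, and behind the Metrorail comparison, is easy to verify directly using the figures just cited:

```python
# Quick check of the extreme per-trip user-benefit figures cited from Table 23.
for pair, minutes in [("Bethesda (1) - Riverdale Park (6)", 582),
                      ("Bethesda (1) - New Carrollton (7)", 691)]:
    print(f"{pair}: {minutes} minutes = {minutes / 60:.1f} hours per trip")

# Bethesda - New Carrollton in-vehicle times cited from Tables 6 and 11:
metrorail_minutes = 55      # existing Metrorail, Table 6
purple_line_minutes = 63    # proposed Purple Line, Table 11
print("Time 'saved' by the Purple Line vs Metrorail:",
      metrorail_minutes - purple_line_minutes, "minutes")   # comes out negative
```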

Perhaps some of these individual cases were coding errors of some sort.  I cannot think of anything else which would have led to such results.  But even if one sets such individual cases aside, I find it impossible to understand how any of these user benefit figures could have followed from building a rail line.  They are all too large.  For example, the FEIS chapter provides in its Table 18 a detailed calculation of how much time would be saved by taking a bus (under the No Build alternative specifically) versus taking the proposed Purple Line.  Including average wait times, walking times, and transfers (when necessary), it found a savings of 11.4 minutes for a trip from Silver Spring (TAZ 3) to Bethesda (TAZ 1); 2.6 minutes for a trip from Bethesda (TAZ 1) to Glenmont (TAZ 9); and 8.0 minutes for a trip from North DC (TAZ 15) to Bethesda (TAZ 1).  Yet the minutes of user benefits per trip for these three examples from Table 23 (see the full table in the FEIS chapter) were 25 minutes, 19 minutes, and 25 minutes, respectively.  Even with a substantial premium for the rail options, I do not see how one could have arrived at such estimates.

And the figures matter.  The overall average minutes of user benefits per project trip (shown at the bottom of Table 23 in the FEIS chapter) came to 30 minutes.  If this were a more plausible average of 10 minutes, say, then with all else equal, the cost-effectiveness ratio would be three times worse.  This is not a small difference.

Importantly, the assumed figures on time savings will also matter to the estimates made of the total ridership on the Purple Line.  The forecast number of daily riders in 2040 of 68,650 (Table 22) or 69,300 (in other places in the FEIS chapter) was estimated based on inputs of travel times required by each of the various modes, and from this how much time would be saved by taking the Purple Line rather than some other option.  With implausibly large figures for travel time savings being fed in, the ridership forecasts will be too high.  If the time savings figures being fed in are far too large, the ridership forecasts will be far too high.  This is not a minor matter.

D.  Ridership by Line Segment

An important estimate is of how many riders there will be between any two station to station line segments, as that will determine what the system capacity will need to be.  Rail lines are inflexible, and completely so when, as would be the case here, the trains would be operated in full from one end of the line to the other.  The rider capacity (size) of the train cars and the spacing between each train (the headway) will then be set to accommodate what is needed to service ridership on what would be the most crowded segment.

Figure 10 of the FEIS Travel Forecasts chapter provides what would be a highly important and useful chart of ridership on each line segment, showing, it says, how many riders would (in terms of the daily average) arrive at each station, how many of those riders would get off at that station, and then how many riders would board at that station.  That would then produce the figure for how many riders will be on board traveling to the next station.  And one needs to work this out for going in each direction on the line.

Here is a portion of that figure, showing the upper left-hand corner:

Focussing on Bethesda (one end of the proposed line), the chart indicates 10,210 riders would board at Bethesda each day, while 19,800 riders would exit each day from arriving trains.  But how could that be?  While there might be a few riders who might take the Purple Line in one direction to go to work or for shopping or for whatever purpose, and then take an alternative transportation option to return home, that number is small, and would to some extent be balanced out by riders doing the same in the opposite direction.  Setting this small possible number aside, the figures in the chart imply that close to twice as many riders will be exiting in Bethesda as will be entering.  They imply that Bethesda would be seeing its population grow by almost 9,600 people per day.  This is not possible.

But what happened is clear.  The tables immediately preceding this figure in the FEIS Travel Forecasts chapter (Tables 24 and 25) purport to show, for each of the 21 stations on the proposed rail line, what the daily station boardings will be, with a column labeled “Total On” at each station and a column labeled “Total Off”.  Thus for Bethesda, the table indicates 10,210 riders will be getting on, while 19,800 will be getting off.  While for most of the stations the riders getting on at that station could be taking the rail line in either direction (and those getting off could be arriving from either direction), for the two stations at the ends of the line (Bethesda, and at the other end New Carrollton) they can only go in one direction.

But as an asterisk for the “Total On” and “Total Off” column headings explicitly indicates, the figures in these two columns of Table 24 are in production/attraction format.  That is, they indicate that Bethesda will be “producing” (mostly from its households) a forecast total of 10,210 riders each day, and will be “attracting” (mostly from its businesses) 19,800 riders each day.  But as discussed above, one must not confuse the production/attraction presentation of the figures, with ridership according to origin/destination.  A household where a worker will be commuting each day to his or her office will be shown, in the production/attraction format, as two trips each day from the production TAZ going to the attraction TAZ.  They will not be shown as one trip in each direction, as they would have been had the figures been converted to an origin/destination presentation.  The person that generated the Figure 10 numbers confused this.

This was a simple and obvious error, but an important one.  Because of this mistake, the figures shown in Figure 10 for ridership between each of the station stops are completely wrong.  This is also important because ridership forecasts by line segment, such as what Figure 10 was supposed to show, are needed in order to determine system capacity.  The calculations depicted in the chart conclude that peak ridership in the line would be 21,400 each day on the segment heading west from the Woodside / 16th Street station (still part of Silver Spring) towards Lyttonsville.  Hence the train car sizes and the train frequency would need to be, according to these figures (but incorrectly), adequate to carry 21,400 riders each day. That is their forecast of ridership on the busiest segment.  The text of the chapter notes this specifically as well (see page 56).
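To see why the production/attraction confusion wrecks these segment loads, note how loads are normally built up along a line:  the load leaving each station is the load arriving plus boardings minus alightings, and it must fall back to zero at the end of the line.  A sketch, with invented station names and counts:

```python
# Sketch of link loading along one direction of a line.  The load leaving each
# station is the load arriving plus boardings minus alightings.  Station names
# and counts are invented for illustration.
stations = ["Terminal A", "Station B", "Station C", "Terminal D"]
ons  = [3_000, 1_500, 2_500,     0]    # daily boardings, in this direction
offs = [    0, 1_000, 1_200, 4_800]    # daily alightings, in this direction

load = 0
for name, on, off in zip(stations, ons, offs):
    load += on - off
    print(f"leaving {name}: {load:,} riders on board")

# The load must fall to zero at the end of the line, which requires total
# boardings to equal total alightings in each direction.  The "Total On" and
# "Total Off" columns in production/attraction format need not balance that
# way, so feeding them into a link-loading calculation gives invalid loads.
assert load == 0 and sum(ons) == sum(offs)
```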

That figure is critically important because the primary argument given by the State of Maryland for choosing a rail line rather than one of the less expensive as well as more cost-effective bus options, is that ridership will be so high at some point (not yet in 2040, but at some uncertain date not too long thereafter) that buses would be physically incapable of handling the load.  This all depends on whether the 21,400 figure for the maximum segment load in 2040 has any validity.  But it is clearly far too high; it leads to almost twice as many riders going into Bethesda as leave.  It was based on confusing ridership in a production/attraction format with ridership by origin/destination.

Correcting for this would lead to a far lower maximum load, even assuming the rest of the ridership forecasts were correct.  And at a far lower maximum load, the central argument against a far less expensive, as well as more cost-effective, system of upgraded bus services for the corridor (that buses could not handle the load) becomes weaker still.

E.  Other Issues

There are numerous other issues in the FEIS Travel Forecasts chapter which lead one to question how carefully the work was done.  One oddity, as an example and perhaps not important in itself, is that Tables 17 and 19, while titled differently, are large matrices where all the numbers contained therein are identical.  Table 17 is titled “Difference in Daily Transit Trips (2040 Preferred Alternative minus No Build Alternative) (Production/Attraction Format)”, while Table 19 is titled “New Transit Trips with the Preferred Alternative (Production/Attraction Format)”.  That the figures are all identical is not surprising – the titles suggest they should be the same.  But why show them twice?  And why, in the text discussing the tables (pp. 41-42), does the author treat them as if they were two different tables, showing different things?

But more importantly, there are a large number of inconsistencies in key figures between different parts of the chapter.  Examples include:

a)  New transit trips in 2040:  Table 17 (as well as Table 19) indicates that there would be 19,700 new transit trips daily in the Washington region in 2040 if the Purple Line is built (relative to the No Build alternative).  But on page 62, the text says the number would be 16,330 new transit trips in 2040 if it is built.  And Table B-1 on page 67 says there would be 28,626 new transit trips in 2040 (again relative to No Build).  Which is correct?  The highest figure is 75% above the lowest, which is not a small difference.

b)  Total transit trips in 2040:  Table 16 says that there would be a total of 1,470,620 total transit trips in the Washington region in 2040 if the Purple Line is built, but Table B-1 on page 67 puts the figure at 1,683,700, a difference of over 213,000.

c)  Average travel time savings:  Table 23 indicates that average minutes of “user benefits” per project trip would be 30 minutes in 2040 if the Purple Line is built, but the text on page 62 says that average travel time savings would “range between 14 and 18 minutes per project trip”.  This might be explained if they assigned a 100% premium to the time savings for riding a rail line, but if so, such an assumed premium would be huge.  As noted above, the premium assigned in the AA/DEIS for the Medium Light Rail alternative (which was the alternative later chosen for the Purple Line) was just 19%.  And the 14 to 18 minutes figure for average time savings per trip itself looks too large. The simple average of the three representative examples worked out in Table 18 of the chapter was just 7.3 minutes.

d)  Total user benefit hours per day in 2040:  The text on page 62 says that the total user benefit hours per day in 2040 would sum to 17,175.  But Table B-5 says the total would come to 24,073 hours (shown as 1,444,403 minutes, and then divided by 60), while Table 21 gives a figure of 33,960 hours.  The highest figure is almost double the lowest.  Note that the 33,960 hours figure is also shown in Table 20, but that table then reports it as 203,760 minutes, when it should be 2,037,600 minutes – the conversion apparently multiplied by 6, rather than 60, to go from hours to minutes.

There are other inconsistencies as well.  Perhaps some can be explained.  But they suggest that inadequate attention was paid to ensure accuracy.
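Discrepancies of this sort are exactly what a few lines of automated cross-checking, run before publication, would have flagged.  A sketch using the figures cited above:

```python
# Cross-checks of figures cited from different parts of the FEIS chapter.
new_transit_trips   = {"Table 17/19": 19_700, "text p. 62": 16_330, "Table B-1": 28_626}
total_transit_trips = {"Table 16": 1_470_620, "Table B-1": 1_683_700}
user_benefit_hours  = {"text p. 62": 17_175, "Table B-5": 24_073, "Table 21": 33_960}

for label, figures in [("new transit trips/day", new_transit_trips),
                       ("total transit trips/day", total_transit_trips),
                       ("user benefit hours/day", user_benefit_hours)]:
    low, high = min(figures.values()), max(figures.values())
    print(f"{label}: values range from {low:,} to {high:,} "
          f"({high / low - 1:.0%} above the lowest)")

# The Table 20 units slip: 33,960 hours is 2,037,600 minutes, not the
# 203,760 shown -- an apparent multiplication by 6 rather than 60.
assert 33_960 * 60 == 2_037_600
assert 33_960 * 6 == 203_760
```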

F.  Conclusion

There are major problems with the forecasts of ridership on the proposed Purple Line.  The discussion above examined several of the more obvious ones.  There may well be more. Little explanation was provided in the documentation on how the forecasts were made and on the intermediate steps, so one cannot work through precisely what was done to see if all is reasonable and internally consistent.  Rather, the FEIS Travel Forecasts chapter largely presented just the final outcomes, with little description of why the numbers turned out to be what they were presented to be.

But the problems that are clear even with the limited information provided indicate that the correct Purple Line ridership forecasts would likely be well less than what their exercise produced.  Specifically:

a)  Since the Purple Line share of total transit use can never be greater than 100% (and will in general be far less), a proper division of transit ridership between the Purple Line and other transit modes will result in a figure that is well less than the 30,560 forecast for Purple Line ridership for trips wholly within the Purple Line corridor alone (shown in Table 22).  The corridor covers seven geographic zones which, as defined, stretch often from the Beltway to the DC line (or even into DC), and from Bethesda to New Carrollton.  There is a good deal of transit ridership within and between those zones, which include four Metrorail lines with a number of stations on each, plus numerous bus routes.  Based on the historical estimates for transit ridership (for 2005), the forecasts for total transit ridership in 2040 within and between those zones look reasonable.  The problem, rather, is with the specific Purple Line figures, with figures that are often higher (often far higher) than the figures for total transit use.  This is impossible.  Rather, one would expect Purple Line ridership to be some relatively small share (no more than a quarter or so, and probably well less than that) of all transit users in those zones.  Thus the Purple Line ridership forecasts, if properly done, would have been far lower than what was presented.  And while one cannot say what the precise figure would have been, it is a mathematical certainty that it cannot account for more than 100% of total transit use within and between those zones.

b)  The figures on user benefits per trip (Table 23) appear to be generally high (an overall average of 30 minutes) and sometimes ridiculously high (9.7 hours and 11.5 hours per trip in two cases).  At more plausible figures for time savings, Purple Line ridership would be far less.

c)  Even with total Purple Line ridership at the official forecast level (69,300), there would not be a peak load of 21,400 riders on the busiest segment (Figure 10).  The 21,400 figure was derived from an obvious error – a confusion over the meaning of the production/attraction format.  Furthermore, as just noted above, correcting for the other obvious errors implies that total Purple Line ridership will also be far less than the 69,300 figure forecast, and hence the station to station loads will be lower still.  The design capacity required to carry transit users in this corridor can therefore be far less than what these FEIS forecasts said it would need to be.  There is no need for a rail line.

These impossibilities, as well as inconsistencies in the figures cited at different points in the chapter for several of the key results, all suggest insufficient checks in the process to ensure the forecasts were, at a minimum, plausible and internally consistent.  For this reason, or for whatever other reason, forecasts that are on their face impossible were nonetheless accepted and used to justify building an expensive rail line in this corridor.

And while the examination here has only been of the Purple Line, I suspect that such issues often arise in other such transit projects, and indeed in many proposed public infrastructure projects in the US.  When agencies responsible for assessing whether the projects are justified instead see their mission as project advocates, a hard look may not be taken at analyses whose results support going ahead.

The consequence is that a substantial share of the scarce funds available for transit and other public infrastructure projects is wasted.  Expensive new projects get funded (although only a few, as money is limited), while boring simple projects, as well as the maintenance of existing transit systems, get short-changed, and we end up with a public infrastructure that is far from what we need.

Fund the Washington Area Transit System With A Mandatory Fee on Commuter Parking Spaces

A.  Introduction

The Washington region’s primary transit authority (WMATA, for Washington Metropolitan Area Transit Authority, which operates both the Metrorail system and the primary bus system in the region) desperately needs additional funding.  While there are critical issues with management and governance which also need to be resolved, everyone agrees that additional funding is a necessary, albeit not sufficient, element of any recovery program. This post will address only the funding issue.  While important, I have nothing to contribute here on the management and governance issues.

WMATA has until now been funded, aside from fares, by a complex set of financial contributions from a disparate set of political jurisdictions in the Washington metropolitan region (four counties, three municipalities, plus Washington, DC, the states of Maryland and Virginia, and the federal government, for a total of 11 separate political jurisdictions).  As with governments everywhere, budgets are limited.  Not surprisingly, the decisions on how to share out the costs of WMATA are politically difficult, especially as a higher contribution by one jurisdiction, if not matched by others, will lower the share of the costs borne by those others.  And unlike most large transit systems in the US, WMATA depends entirely (aside from fares) on funding from these political jurisdictions.  It has no dedicated source of tax revenues.

This is clearly not working.  Everyone agrees that additional funding is needed, and most agree that a dedicated funding source needs to be created to supplement the funds available to WMATA.  But there is no agreement on what that additional funding source should be.  There have been several proposals, including an increase in the sales tax rate in the region or a special additional tax on properties located near Metro stations, but each has difficulties and there is no consensus.  As I will discuss below, there are indeed issues with each.  They would not provide a good basis for funding transit.

The recommendation developed here is that a fee on commuter parking spaces would provide the best approach to providing the additional funding needed by the Washington region’s transit system.  This alternative has not figured prominently in the recent discussion, and it is not clear why.  It might be because of an unfounded perception that such a fee would be difficult to implement.  As discussed below, this is not the case at all.  It could be easily implemented as part of the property tax system that is used throughout the Washington region.  It should be considered as an approach to raising the funds needed, and would perhaps serve as an alternative that could break the current impasse resulting from a lack of consensus for any of the other alternatives that have been put forward thus far.

Four factors need to be considered in any assessment of possible options to fund the transit systems.  These are:

  • Feasibility:  Would it be possible to implement the option in practical terms?  If it cannot be implemented, there is no point in considering it further.
  • Effectiveness:  Would the option be able to raise the amount of funds needed, with the parameters (such as the tax rates) at reasonable levels that would not be so high as to create problems themselves?
  • Efficiency:  Would the economic incentives created by the option work in the direction one wants, or the opposite?
  • Fairness:  Would the tax or option be fair in terms of who would pay for it?  Would it be disproportionately paid for by the poor, for example?

This blog post will assess to what degree these four tests are met by each of several major options that have been proposed to provide additional funding to WMATA.  A mandatory fee on parking spaces will be considered first, and in most detail.  Many will call this a tax on parking, and that is OK.  It is just a label.  But I would suggest it should be seen as a fee on rush hour drivers, who make use of our roads and fill them up to the point of congestion.  It can be considered similar to the fees we pay on our water bills – one would be paying a fee for using our roads at the times when their capacity is strained.  But one should not get caught up in the polemics:  whether called a tax or a mandatory fee, it would be a charge on the parking spaces used by those commuters who drive.

Other options then considered are an increase in the bus and rail fares charged, an increase in the sales tax rate on all goods purchased in the region, and enactment of a special or additional property tax on land and development close to the Metrorail stations in the region.

No one disputes that enactment of any of these taxes or fees or higher fares will be politically difficult.  But the Washington region would collapse if its Metrorail system collapsed.  Metrorail was until recently the second busiest rail transit system in the US in terms of ridership (after New York).  However, Metrorail ridership declined in recent years, to the point that it was 17% lower in FY2016 than it was in FY2010.  The decline is commonly attributed to a combination of relatively high fares, lack of reliability, and the increased safety concerns of recent years, combined most recently with periodic shutdowns on line segments in order to carry out urgent repairs and maintenance.  Despite this decline, Metrorail in 2016 was still the third busiest rail system in the country (just after Chicago).

But the Washington region cannot afford this decline in transit use.  Its traffic congestion, even with Metro operating, is by various measures either the worst in the nation or one of the worst.  Furthermore, the traffic congestion is not just in or near the downtown area.  As offices have migrated to suburban centers over the last several decades, traffic during rush hour is now horrendous not simply close to the city center, but throughout the region.  See, for example, this screen shot from a Google Maps image I took on a typical weekday afternoon during rush hour (5:30 pm on Tuesday, April 18):

The roads shown in red have traffic backed up.  The congestion is bad not simply around downtown, nor only on the notoriously congested Capital Beltway, but also on roads at the very outer reaches of the suburbs.  The problem is region-wide, and it is in the interest of everyone in the region that it be addressed.

A good and well-run transit system will be a necessary component of what will be needed to fix this, although this is just the minimum.  And for this, it will be fundamental that there be a change in approach from a short-term focus on resolving the immediate crisis by some patch, to a perspective that focuses on how best to utilize, and over time enhance, the overall transportation system assets of the Washington region.  This includes not only the Metro system assets (where a value of $40 billion has been commonly cited, presumably based on its historical cost) but also the value of the highways and bridges and parking facilities of the region, with a cost and a value that would add up to far more.  These assets are not well utilized now.  A proper funding system for WMATA should take this into account.  If it does not, one can end up with empty seats on transit while the roads are even more congested.

The first question, however, is how much additional funding is required for WMATA.  The next section will examine that.

B.  WMATA’s Additional Funding Needs

How much is needed in additional funding for WMATA?  There is not a simple answer, and any answer will depend not only on the time frame considered but also on what the objective is.

To start, the FY18 budget for WMATA as originally drawn up in the fall of 2016 found there to be a $290 million gap between expenditures it considered to be necessary based on the current plans, and the revenues it forecast it would receive from fares (and other revenue generating activities such as parking fees at the stations and from advertising) and what would be provided under existing formulae from the political jurisdictions.  This gap was broadly similar in magnitude to the gaps found in recent years at a similar stage in the process.  And as in earlier years, this $290 million gap was largely closed by one-off measures that could not (or at least should not) be used again.  In particular, funds were shifted from planned expenditures to maintain or build up the capital assets of the system, to cover current operating costs instead.

Looking forward, all the estimates of the additional funding needs are far higher.  To start, an analysis by Jeffrey DeWitt, the CFO of Washington, DC, released in October 2016 as part of a Metropolitan Washington Council of Governments (COG) report, estimated that at a minimum, WMATA faced a shortfall over the next ten years averaging $212 million per year on current operations and maintenance, and $330 million per year for capital needs, for a total of $542 million a year.  This estimate was based on an assumption of a capital investment program summing to $12 billion over the ten years.

But the “10-Year Capital Needs” report issued by WMATA a short time later estimated that the 10-year capital needs of WMATA would be $17.4 billion simply to bring Metro assets up to a “state of good repair” and maintain them there.  It estimated an additional $8 billion would be needed for modest new investments – needed in part to address certain safety issues.  But even if one limited the ten-year capital program to the $17.4 billion to get assets to a state of good repair, there would be a need for an additional $540 million a year over the October 2016 DeWitt estimates, i.e. a doubling of the earlier figure to almost $1.1 billion a year.

A more recent, and conservative, figure has been provided by Paul Wiedefeld, the General Manager of WMATA, in a report released on April 19.  While Metro has capital needs totaling $25 billion over the next ten years, he proposed that a minimum of $15.5 billion of this be covered for the system “to remain safe and reliable”.  Even with this reduced capital investment program, he estimated that if funding from the jurisdictions remained at historical levels, there would remain a 10-year funding gap of $7.5 billion.  If jurisdictional funding were instead to rise at 3% a year in nominal terms, he estimated that $500 million a year would still be needed from some new funding source.

But this was just for the capital budget, and a highly constrained one at that.  There would, in addition, be a $100 million a year gap in the operating budget, even with the funding from the jurisdictions for operations rising also at 3% a year.  Wiedefeld suggested that it might be possible to reduce operating costs by that amount.  However, this would require cutting primarily labor expenditures, as direct labor costs account for 74% of operating expenditures.  Not surprisingly, the WMATA labor union is strongly opposed.

Even more recently, the Metropolitan Washington Council of Governments issued on April 26 the final report of a panel it had convened (hereafter COG Panel or COG Panel Report) to examine Metro funding options.  The panel was made up of senior local administrative and budget officials.  While the focus of the report was an examination of different funding options (which will be discussed further below), it took as the basis of its estimated needs a ten-year capital investment program of $15.6 billion (to reach and maintain a “state of good repair” standard).  After assuming a 3% annual increase in what the political jurisdictions would provide, it estimated the funding gap for the capital budget would sum to $6.2 billion.  Assuming also a 3% annual increase in funding from the political jurisdictions for operations and maintenance (O&M), it estimated a remaining funding gap of $1.3 billion for O&M.  The total gap for both capital and O&M expenses would thus sum to $7.5 billion over the period.

But while these COG estimates were referred to as 10-year funding gaps (which would average $750 million per year), the table on page 13 of the PowerPoint presentation accompanying the report makes clear that these are actually the funding gaps for the eight-year period of FY19 to FY26.  FY17 is already almost over, and the FY18 budget has already been settled.  For the eight-year period from FY19 forward, the additional funding needed averages $930 million per year.  The COG Panel recommended, however, a dedicated funding source that would generate less, at $650 million per year to start (which it assumes would be in 2019).  The reason for this difference is that the COG Panel also recommended that WMATA borrow additional funds in the early years against that new funding stream, so as to cover the higher figure ($930 million on average per year over FY19-26) of what is in fact needed.  While such borrowing would supplement what could be funded in the early years, the resulting debt service would then subtract from what could be funded later.  Prudent borrowing certainly has a proper role, but future funding needs will be higher than they are now, and thus this will not provide a long-term solution to the funding issue.  More funding will eventually (and soon) be required.

All the figures reviewed thus far assume capital investment programs that would only just suffice to bring existing assets up to a “state of good repair”, with nothing done to add to those assets.  It also appears that the estimates were influenced at least to some extent by what the analysts thought might be politically feasible.  Yet additional capacity will be needed if the Washington region is to continue to grow.  While these additional amounts are much more speculative, there is no doubt that they are large, indeed huge.

The most careful recent study of long-term expansion needs is summarized in a series of reports released by WMATA in early 2016.   A number of rail options were examined (mostly extensions of existing rail lines), with the conclusion that the highest priority for a 2040 time horizon was to enhance the capacity at the center of the system.  Portions of these lines are already strained or at full capacity, including in particular the segment for the tunnel under the Potomac from Rosslyn.  Under this plan, there would be a new circular underground loop for the Metro lines around downtown Washington and extending across the Potomac to Rosslyn and the Pentagon.  It is not clear that a good estimate has yet been done on what this would cost, but the Washington Post gave a figure of $26 billion for an earlier variant (along with certain other expenditures).  This would clearly be a multi-decade project, and if anything like it is to be done by 2040, work would need to begin within the current 10-year WMATA planning horizon.  Yet given WMATA’s current difficulties, there has been little focus on these long-term needs.  And nothing has been provided for them.

To sum up, how much in additional funding is needed?  While there is no precise number, in part because the focus has been on the immediate crisis and on what might be considered politically feasible, for the purposes of this post we will use the following.  At a minimum, we will look at what would be needed to generate $650 million per year, the same figure arrived at in the COG Panel Report.  But this figure is clearly at the low end of the range of what will be needed.  At best, it will suffice only for a few years.  Our political leaders in the region should recognize that this will need to rise to at least $1 billion per year within a few years if necessary investments are to be made to ensure the system not only reaches a “state of good repair” but also sustains it.  Furthermore, it will need to rise further to perhaps $2.0 billion a year by around 2030 if anything close to the system capacity that will be needed by 2040 is to be achieved.

For the analysis below, we will therefore look at what the rates would need to be to generate $650 million a year at the low end and roughly three times this ($2.0 billion a year in nominal terms, by the year 2030) at the high end.  These figures are of course only illustrative of what might be required.  And for the forecast figures for 2030, I will assume (consistent with what the COG Panel did) that inflation between now and then will run at 2% a year while the region grows, conservatively, at 1% a year in real terms.  Note that $2.0 billion in 2030 in nominal terms would be equivalent to about $1.55 billion in dollars of today (2017) if inflation runs at 2% a year.
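As a quick check on that conversion (a sketch only; it simply deflates by 2% a year over the 13 years from 2017 to 2030):

```python
# Deflating $2.0 billion of 2030 back to 2017 dollars at 2% annual inflation.
nominal_2030 = 2.0                  # $ billions, in prices of 2030
years = 2030 - 2017                 # 13 years
real_2017 = nominal_2030 / 1.02 ** years
print(round(real_2017, 2))          # 1.55, i.e. roughly $1.55 billion in today's dollars
```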

It is important to recognize that providing just the low-end figure of $650 million a year will not suffice for more than a few years.  It does provide a starting point, and while that is important, when considering such a major reform as moving to a dedicated funding source to supplement government funding sources, one should really be thinking longer term.  Not much would be gained by moving to a funding source which would prove insufficient after just a few years, leading to yet another crisis.

C.  A Mandatory Fee on Commuter Parking Spaces

A fee would be assessed (generally through the property tax system) on all parking spaces used by office and other commuting employees.  It would not be assessed on residential parking, nor on customer parking linked to retail or other such commercial space, but would be limited to the all-day parking spots that commuters use.

It would be straightforward to implement.  The owners of the property with the parking spaces would be assessed a fee for each parking space provided.  For example, if the fee is set at $1 per day per space, a fee of $250 per year would be assessed (based on 250 work-days a year, of 52 weeks at 5 days per week less 10 days for holidays).  It would be paid through the regular property tax system, and collected from the owners of that land along with their regular property taxes on the semi-annual (or quarterly or whatever) basis that they pay their property taxes. The owners of the spaces would be encouraged to pass along the costs to those employees who drive and use the spaces (and owners of commercial parking lots will presumably adjust their monthly fees to reflect this), but it would be the owners of the parking spaces themselves who would be immediately liable to pay the fees.

Property records will generally have the number of parking spaces provided on those plots of land.  This will certainly be so in the case of underground parking provided in modern office buildings and in multi-story commercial parking garages.  And I suspect there will similarly be a record of the number of spaces in surface parking lots.  But even if not, it would be straightforward to determine their number.  Property owners could be required to declare them, subject to spot-checks and fines if they did not declare them honestly.  One can now even use satellite images available on Google Maps to count such spaces.  And a few years ago my water bills started to include a monthly fee based on the square footage of impermeable space on my land (primarily from roofs and driveways), as drainage from such surfaces feeds into stormwater drains and must ultimately be treated before being discharged into the Potomac River.  The water utility determined the square footage of such spaces on all individual properties through the property records system and from satellite images.  If that can be done, one can certainly determine the number of parking spaces on open lots.

There are, however, a few special cases where property taxes are not collected and where different arrangements will need to be made.  But this can be done.  Specifically:

  1. Properties owned by federal, state, and local governments will generally not pay property taxes.  But the mandatory fees on parking spaces could still be collected by these government entities and paid into the system just as by private property owners.  Presumably the governments would support the reform, as it would supplement the funds they already provide to WMATA.
  2. Similarly, international organizations located in the Washington region, such as the World Bank, the IMF, the Inter-American Development Bank, and others (mostly much smaller) operate under international treaties which provide that they do not owe property taxes on properties they own.  But as with governments, they could collect such fees on parking spaces made available to their employees who drive to work.  They already charge their employees monthly fees for the spaces, and the new fee could be added on.  And while I am not a lawyer, it might well be the case that such a fee on parking spots could be made mandatory.  The institutions do pay the fees charged for the water they use, and employees do pay sales taxes on the food they purchase in their cafeterias.  Finally, these institutions advise governments to apply good policy.  The same should apply here.
  3. There are also non-profit hospitals, universities, and similar institutions, which are major employers in the region but which may not be charged property taxes. However, the fee on parking spaces, while collected for most through the property tax system, can be seen as separate from regular property taxes.  It is a fee on commuters who make use of our road system and add to its congestion.  The parking fees could still be collected and paid in, even if no regular property taxes are due.
  4. Finally, the Washington region has a large number of embassies and other properties with strict internationally recognized immunities.  It might well be the case that it will not be possible to collect such a mandatory fee on parking spots for their employees (although again, presumably the embassies pay the fees on their water bills).  But the total number employed through such embassies is tiny as a share of total employment in the DC region.  And some embassies might well pay voluntarily, recognizing that they too are members of the local community, making use of the same roads.  Finally, note that embassy employees with diplomatic status also do not pay sales tax on their day-to-day purchases, while the embassy compounds themselves do not pay property taxes.  Proposals to fund WMATA through new or higher property taxes or sales taxes (discussed below) will face similar issues.  But as noted above, the amounts involved are tiny.

How, then, would such a mandatory fee on commuter parking spaces stand up under the four criteria noted above?

a)  Feasibility:  As just discussed, such a fee on commuter parking spaces, implemented generally through the regular property tax system, would certainly be feasible.  It could be done.  It may well be a lack of recognition of this which explains why such an option has typically not been given much consideration when alternatives are reviewed for how to fund a transit system such as WMATA.  Most appear to believe that it would require setting up some system that mandates a payment each day as commuters enter their parking lots.  But there is no need for that.  Rather, the fee could be imposed on the owner of the parking space, and collected as part of their property tax payments.  It would be up to the owner of that space to decide whether to pass along that cost to the commuters making use of the spaces (although passing along the cost should certainly be encouraged, so that commuters face the cost of their decision to drive).

b)  Effectiveness:  The next question is whether such a fee, at reasonable rates, would generate the funds needed.  To determine this, one first needs to know how many such parking spots there are in the Washington region.  While more precise figures can be generated later, all that is needed at this point is a rough estimate.

As of January 2017, the Bureau of Labor Statistics estimated there were 3,217,400 employees in the Washington region’s Metropolitan Statistical Area (MSA).  While this MSA is slightly larger than the set of jurisdictions that participate in the WMATA regional compact, the additional counties at the fringes of the region are relatively small in population and employment.  This figure on regional employment can then be coupled with the estimate from the most recent (2016) Metropolitan Washington COG “State of the Commute” survey, which concluded that 61.0% of commuters drive alone to work, while an additional 5.4% drive in either car-pools or van-pools.  Assuming an average of 2.5 riders in car-pools and van-pools (van-pools are relatively minor in number), the cars carrying commuters to their jobs work out to 63.2% of total employment.  Applying the 63.2% to the 3,217,400 figure for the number employed, an estimated 2,033,400 cars are used to carry commuters.  The total number of parking spaces will be somewhat more, as parking lots will normally have some degree of excess capacity, but this can be ignored for the estimate here.  Rounding down, there are roughly 2 million commuter parking spaces in the DC region.  And this number can be expected to grow over time.

With 2 million parking spaces, a daily fee of $1 would generate $500 million per year (based on 250 work-days per year).  A fee of $1.30 per day would generate $650 million. And assuming commuter parking spots grow at 1% a year (along with the rest of the regional economy) to 2030, a $3.50 fee in 2030 would generate $2.0 billion in the prices of that year (equivalent to $2.70 per day in the prices of 2017, assuming 2% annual inflation for the period).
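The chain of arithmetic behind these estimates can be laid out explicitly.  The following is a minimal sketch using only the figures cited above (the 63.2% car share, 250 work-days per year, 1% annual growth in the number of spaces, and 2% annual inflation are all assumptions taken from the text):

```python
# Rough revenue estimates for a mandatory fee on commuter parking spaces.
employment = 3_217_400                      # Washington MSA employment, January 2017 (BLS)
car_share = round(0.610 + 0.054 / 2.5, 3)   # drive-alone share plus pooled drivers at 2.5 per car
cars = employment * car_share
print(car_share, round(cars))               # 0.632 and roughly 2,033,400 cars

spaces = 2_000_000                          # rounded down to roughly 2 million spaces
workdays = 250

print(spaces * 1.00 * workdays)             # $1.00/day -> $500 million per year
print(spaces * 1.30 * workdays)             # $1.30/day -> $650 million per year

# By 2030, with the number of spaces growing at 1% a year, a $3.50/day fee (in 2030 prices):
spaces_2030 = spaces * 1.01 ** (2030 - 2017)
print(round(spaces_2030 * 3.50 * workdays / 1e9, 2))   # ~1.99, i.e. about $2.0 billion in 2030 prices
print(round(3.50 / 1.02 ** (2030 - 2017), 2))          # ~2.71, i.e. about $2.70/day in 2017 prices
```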

Compared to the cost of driving, fees of $1.30 per day or even $3.50 per day are modest. While many workers do not pay for their parking (or for the full cost of their parking), the actual cost can be estimated by what commercial parking firms charge for their monthly parking contracts.  For the 33 parking garages listed as “downtown DC” on the Parking Panda website, the average monthly fee (showing on April 29, 2017) was a bit over $270. This would come to $13 per work day (based on 250 work days per year).  While the charges will be less in the suburbs, there will still be a cost.  But the full cost to commuters to drive to work is in fact much more.  Assuming the average cost of the cars driven is $36,000, and with simple straight line depreciation over 10 years, the average monthly cost will be $300. To this one should add the cost of car insurance (on the order of $50 to $100 per month), of expected repair costs (probably of similar magnitude), and of gas. The full cost of driving would on average then total over $600 per month, or about $29 per work day.  Even if one ignores the cost of the parking spot itself (as drivers will if their employers provide the spots for free), the cost to the driver would still average about $16 per work day.  An added $1.30 per day to cover the funding needs of the public transit system is minor compared to any of these cost estimates, and would still be modest at $3.50 per day (equal to $2.70 in the prices of today).

Thus at reasonable rates on commuter parking spots, it would be possible to collect the $650 million to $2.0 billion a year needed to help fund WMATA.

c)  Efficiency:  Another consideration when choosing how best to provide additional funds to WMATA is the impact of that option on efficiency.  A fee on parking spaces would be a positive on this criterion.  The Washington region stands out for its severe congestion, not only in the city center but also in the suburbs (and often even more so in the suburbs).  A fee on parking spots, if passed along to the commuters who drive, would serve as an incentive to take transit, and might have some impact on those at the margin.  The impact is likely to be modest, as a $1.30 to $3.50 fee per day is not much.  As just discussed, given the current cost of driving (even when commuters who drive are not charged for their parking spots), an additional $1.30 to $3.50 would be only a small added cost, even when it is passed along.  But at least it would operate in the direction needed to alleviate traffic congestion.

d)  Fairness:  Finally, the fee would be fair relative to the other options being considered in terms of who would be affected.  Those who drive to work (over 90% of whom drive alone) are generally of higher income.  They can afford the cost of driving, which is high (as noted above) even in those cases where their employer provides free parking.

Some would argue that since the drivers are not taking transit, they should not help pay for that transit.  But that is not correct.  First of all, they have a direct interest in reducing road congestion, and only a well-functioning transit system can help with that.  Drivers benefit directly (through reduced congestion) from every would-be driver who decides instead to take transit.  Second, all the other feasible funding options being considered for WMATA would be paid for in large part by drivers as well.  This is true whether the funds come from a higher sales tax imposed on the region, from higher property taxes, or simply from higher government funding out of the jurisdictions’ budgets (with this funding coming from the income taxes as well as the sales and property taxes these governments receive).  And as discussed below, raising the amounts needed through higher fares on WMATA passengers is simply not a feasible option.

Some drivers will likely also argue that they have no choice but to drive.  While they would still gain by any reduction in congestion (and would lose in a big way due to extreme congestion if WMATA service collapses due to inadequate funding), it is no doubt true that at least some commuters have no alternative but to drive.  However, the number is quite modest.  The 2016 survey of commuters undertaken by the Metropolitan Washington COG, referred to also above, asked their sample of commuters whether there was either bus service or train service “near” their homes (“near” as they would themselves consider it), and separately, “near” their place of work.  The response was 89% who said there were such transit services near their homes, and 86% who said there were such transit services near their places of work.  But note also that the 11% and 14%, respectively, who did not respond that there was such nearby transit, included those who responded that they did not know.  Many of those who drive to work might not know, as they never had a need to look into it.

The share of the Washington region’s population who do not have access to transit services is therefore relatively small, probably well less than 10% of commuters.  The transit options might not be convenient, and probably take longer than driving in many if not most cases given the current service provision, but transit alternatives exist for the overwhelming share of the regional population.  The issue is that those who can afford the high cost will drive, while the poorer workers who cannot will have no choice but to take transit.  Setting a fee on parking spaces for commuters in order to support the maintenance of decent transit services in the region is socially as well as economically fair.

D.  Alternative Funding Options That Have Been Proposed

1)   Higher Fares:  The first alternative that many would suggest for raising additional funds for the transit system is to charge higher fares.  While certainly feasible in a mechanical sense, such an alternative would fail the effectiveness test.  The fares are already high.  Any increase in fares will lead to yet more transit users choosing to drive instead (for those for whom this is an option).  The increase in fare revenues collected will be less than in proportion to the increase in fare rates set.  And at some point, so many transit users will switch that total fare revenue would in fact decrease.

In the recently passed FY18 budget for WMATA, forecast revenue from fares is $709 million.  This is down from an expected $792 million in FY17, despite a fare increase averaging 4%.  Transit users are leaving as fares have increased and service has deteriorated.  To raise an additional $650 million from fares would require an increase of over 90% even if no riders then left.  But more riders would of course leave, and it is not clear that anything additional (much less an extra $650 million) would be raised.  And this would of course be even more so if one tried to raise an extra $2.0 billion.
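The 90% figure follows directly from these budget numbers (a sketch only, ignoring any ridership response to the higher fares):

```python
# Fare increase needed to raise an extra $650 million, assuming (unrealistically) no riders leave.
fy18_fare_revenue = 709     # $ millions, forecast fare revenue in the FY18 budget
extra_needed = 650          # $ millions
print(round(100 * extra_needed / fy18_fare_revenue))   # 92, i.e. an increase of over 90%
```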

So as all recognize, it will not be possible to resolve the WMATA funding issues by means of higher fares.  Any increase in fares will instead lead to more riders leaving the system for their cars, leading to even greater road congestion.

2)  Increase the Sales Tax Rate:  Mayor Muriel Bowser of Washington has pushed for this alternative, and the recent COG Technical Panel concluded with the recommendation that  “the best revenue solution is an addition to the general sales tax in all localities in the WMATA Compact area in the National Capital Region” (page 4).  This alternative has drawn support from some others in the region as well, but is also opposed by some. There is as yet no consensus.

Sales taxes are already imposed across the region, and it would certainly be feasible to add an extra percentage point to what is now charged.  But each jurisdiction sets the tax in somewhat different ways, in terms of what is covered and at what rates, and it is not clear to what the additional 1% rate would be applied.  For example, Washington, DC, imposes a general rate of 5.75%, but nothing on food or medicines, while liquor and restaurants are charged a sales tax of 10% and hotels a rate of 14.5%.  Would the additional 1% rate apply only to the general rate of 5.75%, or would there also be a 1% point increase in what is charged on liquor, restaurants, and the others?  And would there still be a zero rate on food and medicines?  Virginia, in contrast, has a general sales tax rate (in Northern Virginia) of 6.0%, but it charges a rate on food of 2.5%.  Would the Virginia rate on food rise to 3.5%, or stay at 2.5%?  There is also a higher sales tax rate on restaurant meals in certain of the local jurisdictions in Virginia (such as a 10% rate in Arlington County) but not in others (just the base 6% rate in Fairfax County).  How would these be affected?  And similar to DC, there are also special rates on hotels and certain other categories.  Maryland also has its own set of rules, with a base rate of 6.0%, a rate of 9% on alcohol, and no sales tax on food.

Such specifics could presumably be worked out, but the distribution of the burden across individuals as well as the jurisdictions will depend on the specific choices made.  Would food be subject to the tax in Virginia but not in Maryland or DC, for example?  The COG Technical Panel must have made certain assumptions on this, but what they were was not explained in its report.

But it concluded that an additional 1% point on some base would generate $650 million in FY2019.  This is higher than the estimate made last October as part of the COG work (discussed above), which put the revenue from a 1% point increase in the sales tax rate at $500 million annually.  It is not clear what underlies the difference, but the recent estimate may have been more thoroughly done, or there may have been differing assumptions on what would be included in the base to be taxed, such as food.

A 1% point rise in the sales tax imposed in the region would, under these estimates, then suffice to raise the minimum $650 million needed now.  But to raise $1.0 billion annually, rising to $2.0 billion a few years later, substantial further increases would soon be needed. The amount would of course depend on the extent to which local sales of taxable goods and services grew over time within the region.  Assuming that sales of items subject to the sales tax were to rise at a 3% annual rate in nominal terms (2% for inflation and 1% for real growth), and that one would need to raise $2.0 billion by 2030 (in terms of the prices of 2030), then the base sales tax rate would need to rise by about 2.2% points.  A 6% rate would need to rise to 8.2%.  A rate that high would likely generate concerns.
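The 2.2 percentage point figure can be reproduced from the assumptions just stated (a sketch; it takes the $650 million raised per percentage point as a 2019 figure and grows the taxable base at 3% a year in nominal terms):

```python
# Sales tax rate increase needed to raise $2.0 billion by 2030.
revenue_per_point_2019 = 650           # $ millions raised per 1% point (COG estimate for FY2019)
growth = 1.03                          # 3% a year nominal growth in the taxable base
revenue_per_point_2030 = revenue_per_point_2019 * growth ** (2030 - 2019)
print(round(revenue_per_point_2030))   # ~900 ($ millions per 1% point by 2030)

needed_2030 = 2_000                    # $ millions needed by 2030, in 2030 prices
print(round(needed_2030 / revenue_per_point_2030, 1))   # ~2.2 points, so a 6% rate rises to about 8.2%
```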

Thus while a sales tax increase would be effective in raising the amounts needed to fund WMATA in the immediate future, with a 1% rise in the tax rate sufficing, the sales tax rate would need to rise further to quite high levels for it to raise the amounts needed a few years later.  Whether such high rates would be politically possible is not clear.

Also likely to be a concern, as the COG Panel itself recognized in its report, is that the distribution of the increased tax burden across the local jurisdictions would differ substantially from what these jurisdictions contribute now to fund WMATA, as well as from what it estimates each jurisdiction would be called on to contribute (under the existing sharing rules) to cover the funding gap anticipated for FY17 – FY26:

Funding Shares:

                 FY17 Actual     FY17-26 Gap     From Sales Tax
DC                  37.3%           35.8%            22.8%
Maryland            38.4%           33.5%            26.5%
Virginia            24.3%           30.7%            50.8%

Source:  COG Panel Final Report, pages 9 and 15.

If an extra 1% point were added to the sales tax across the region, 50.8% of the revenues thus generated would come from the Northern Virginian jurisdictions that participate in the WMATA compact.  This is substantially higher than the 24.3% share these jurisdictions contributed in WMATA funding in FY17, or the 30.7% share they would be called on to contribute to cover the anticipated FY17-26 gap (higher than in just FY17 primarily due to the opening of the second phase of the Silver Line).  The mirror image of this is that DC and Maryland would gain, with much lower shares paid in through the sales tax increase than what they are funding now.  Whether this would be politically acceptable remains to be seen.

Use of a higher sales tax to fund WMATA’s needs would also not lead to efficiency gains for the transportation system.  A sales tax on goods and services sold in the region would have no impact on incentives, positive or negative, on the decision of whether to drive to work or to take transit.  It would be neutral in this regard, rather than beneficial.

Finally, and perhaps most importantly, sales taxes are regressive, costing the poor more as a share of their income than they cost the well-off.  A sales tax rise would not meet the fairness test.  Even with exemptions for food and medicines, poor households spend a high share of their incomes on items subject to sales taxes, while the well-off spend a lower share.  The well-off are able to devote a higher share of their incomes to items not subject to the general sales tax, such as luxury housing, vacations elsewhere, or untaxed services, or to devote a higher share of their incomes to savings.

Aside from the regressive nature of a sales tax, an increase in the sales tax to fund transit (and through this to reduce road congestion) will be paid by all in the region, including those who do not commute to work.  It would be paid, for example, also by retirees, by students, and by others who may not normally make use of transit or the road system to get to work during rush hour periods.  But they would pay similarly to others, and some may question the fairness of this.

An increase in the sales tax rate would thus be feasible.  And while a 1% point rise in the rate would be effective in raising the amounts needed in the immediate future, there is a question as to whether this approach would be effective in raising the amounts needed a few years later, given constraints (political and otherwise) on how high the sales tax rate could go.  The region would likely then face another crisis and dilemma as to how WMATA can then be adequately funded.  There are also political issues in the distribution of the sales tax burden across the jurisdictions of the region, with Northern Virginia paying a disproportionate share.  This would be even more of a concern when the tax rate would need to be increased further to cover rising WMATA funding needs.  There would also be no efficiency gains through the use of a sales tax.  Finally and importantly, a higher sales tax is regressive and not fair as it taxes a higher share of the income of the poor than of the well-off, as well as of groups who do not use transit or the roads during the rush hour periods of peak congestion.

3)  A Special Property Tax Rate on Properties Near Metro Stations

Some have argued for a special additional property tax to be imposed on properties that are located close to Metro stations.  The largest trade union at WMATA has advocated for this, for example, and the COG Technical Panel looked at this as one option it considered.

The logic is that the value of such properties has been enhanced by their location close to transit, and that therefore the owners of these more valuable properties should pay a higher property tax rate on them.  But while superficially this might look logical, in fact it is not, as we will discuss below.  There are several issues, both practical and in terms of what would be good policy.  I will start with the practical issues.

The special, higher, tax rate would be imposed on properties located “close” to Metro stations, but there is the immediate question of how one defines “close”.  Most commonly, it appears that the proponents would set the higher tax on all properties, residential as well as commercial, that are within a half-mile of a station.  That would mean, of course, that a property near the dividing line would see a sharply higher property tax rate than its neighbor across the street that lies on the other side of the line.

And the difference would be substantial.  The COG Technical Panel estimated that the additional tax rate would need to be 0.43% of the assessed value of all properties within a half mile of the DC area Metro stations to raise the same $650 million that an extra 1% on the sales tax rate would generate.  It was not clear from the COG Panel Report, however, whether the higher tax of 0.43% was determined based on the value of all properties within a half-mile of Metro stations, or only on the base of all such properties which currently pay property tax.  Governmental entities (including international organizations such as the World Bank and IMF) and non-profits (such as hospitals and universities) do not pay this tax (as was discussed above), and such properties account for a substantial share of properties located close to Metro stations in the Washington region.  If the 0.43% rate was estimated based on the value of all such properties, but if (just for the sake of illustration; I do not know what the share actually is) properties not subject to tax make up half of such properties, then the additional tax rate on taxable properties that would be needed to generate the $650 million would be twice as high, or 0.86%.

But even at just the 0.43% rate, the increase in taxes on such properties would be large. For Washington, DC, it would amount to an increase of 50% on the current general residential property tax rate of 0.85%, an increase of 26% on the 1.65% rate for commercial properties valued at less than $3 million, and an increase of 23% on the 1.85% rate for commercial properties valued at more than $3 million.  Property tax rates vary by jurisdiction across the region, but this provides some sense of the magnitudes involved.

The higher tax rate paid would also be the same for properties sitting right on top of the Metro stations and those a half mile away.  But the locational value is highest for those properties that are right at the Metro stations, and then tapers down with distance. One should in principle reflect this in such a tax, but in practice it would be difficult to do. What would the rate of tapering be?  And would one apply the distance based on the direct geographic distance to the Metro station (i.e. “as the crow flies”), or based on the path that one would need to take to walk to the Metro station, which could be significantly different?

Thus while it would be feasible to implement the higher property tax as a fixed amount on all properties within a half-mile (at least on those properties which are not exempt from property tax), the half-mile mark is arbitrary and does not in fact reflect the locational advantages properly.

The rate would also have to be substantially higher if the goal is to ensure WMATA is funded adequately by the new revenue source beyond just the next few years.  Assuming, as was done above for the other options, that property values rise at a 3% rate over time going forward (due both to growth and to price inflation), the 0.43% special tax rate would raise $900 million by 2030.  If one needed, however, $2 billion by that year for WMATA funding needs, the rate would need to rise to 0.96%.  This would mean that residential properties within a half mile would be paying more than double the property tax paid by neighbors just beyond the half-mile mark (assuming basic property tax rates are similar in the future to what they are now, and based on the current DC rates), while commercial rates would be over 50% more.  The effectiveness in raising the amounts required is therefore not clear, given the political constraints on how high one could set such a special tax.
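Again, the arithmetic can be made explicit.  The sketch below uses the COG estimate of $650 million from a 0.43% rate (taken here as a 2019 figure, consistent with the $900 million figure above), 3% annual growth in assessed values, and the current DC base rates cited earlier:

```python
# Special property tax rate near Metro stations needed to raise $2 billion by 2030.
rate_now = 0.43                        # percent of assessed value
revenue_now = 650                      # $ millions raised by the 0.43% rate (COG estimate)
growth = 1.03                          # 3% a year growth in assessed values
revenue_2030_at_current_rate = revenue_now * growth ** (2030 - 2019)
print(round(revenue_2030_at_current_rate))     # ~900 ($ millions by 2030)

rate_2030 = rate_now * 2_000 / revenue_2030_at_current_rate
print(round(rate_2030, 2))                     # ~0.96 percent

# Comparison with current DC base rates (0.85% residential, 1.85% commercial over $3 million):
print(round((0.85 + rate_2030) / 0.85, 2))     # ~2.12, i.e. the residential tax more than doubles
print(round((1.85 + rate_2030) / 1.85, 2))     # ~1.52, i.e. the commercial tax over 50% higher
```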

But the major drawback would be the impact on efficiency.  With the severe congestion on Washington region roads, one should want to encourage, not discourage, concentrated development near Metro stations.  Indeed, that is a core rationale for investing so much in building and sustaining the Metro system.  To the extent a higher property tax discourages such development, a special property tax on real estate near Metro stations would work against precisely what the Metro system was built to encourage.  This is perverse.  One could indeed make the case that properties located close to Metro stations should pay a lower property tax rather than a higher one.  I would not recommend it, as it would be complex to implement and difficult to explain.  But technically it would have merit.

Finally, a special additional tax on the current owners of the properties near Metro stations would not meet the fairness test as the current owners, with very few if any exceptions, were not the owners of that land when the Metro system locations were first announced a half century ago.  The owners of the land at that time, in the 1960s, would have enjoyed an increase in the value of their land due to the then newly announced locations of the Metro stations.  And even if the higher values did not immediately materialize when the locations of the new Metro system stations were announced, those higher values certainly would have materialized in the subsequent many decades, as ownership turned over and the properties were sold and resold.  One can be sure the prices they sold for reflected the choice locations.

But those who purchased that land or properties then or subsequently would not have enjoyed the windfall the original owners had.  The current owners would have paid the higher prices following from the locational advantages near the Metro stations, and they are the ones who own those properties now.  While they certainly can charge higher rents for space in properties close to the Metro stations, the prices they paid for the properties themselves would have reflected the fact they could charge such higher rents.  They did not and do not enjoy a windfall from this locational advantage.  Rather, the original owners did, and they have already pocketed those profits and left.

Note that while a special tax imposed now on properties close to Metro stations cannot be justified, this does not mean that such a tax would not have been justified at an earlier stage.  That is, one could justify that or a similar tax that focused on the initial windfall gain on land or properties that would be close to a newly announced Metro line.  When new such rail lines are being built (in the Washington region or elsewhere), part of the cost could be covered by a special tax (time-limited, or perhaps structured as a share of the windfall gain at the first subsequent arms-length sale of the property) that would capture a share of the windfall from the newly announced locations of the stations.

An example of this being done is the special tax assessments on properties close to where the Silver Line stations are being built.  The Silver Line is a new line for the Washington region Metro system, where the first phase opened recently and the second phase is under construction.  A special property tax assessment district was established, with a higher tax rate and with the funds generated used to help construct the line.  One should also consider such a special tax for properties close to the stations on the proposed Purple Line (not part of the WMATA system, but connected to it), should that light rail line be built. The real estate developers with properties along that line have been strong proponents of building that line.  This is understandable; they would enjoy major windfall gains on their properties if the line is built.  But while the windfall gains could easily be in the hundreds of millions of dollars, there has been no discussion of their covering a portion of the cost, which will sum to $5.6 billion in payments to the private contractor to build and then operate the line for 30 years.  Under current plans, the general taxpayer would be obliged to pay this full amount, with only a small share of this (less than one-fourth) recovered in forecast fares.

While setting a special (but temporary) tax for properties close to stations can be justified for new lines, such as the Silver Line or the Purple Line, the issues are quite different for the existing Metro lines.  Such a special, additional, tax on properties close to the Metro stations is not warranted, would be unfair to the current owners, and could indeed have the perverse outcome of discouraging concentrated development near the Metro stations when one should want to do precisely the reverse.

4)  Other Funding Options

There can, of course, be other approaches to raising the funds that WMATA needs.  But there are issues with each, they in general have few advocates, and most agree that one of the options discussed above would be preferable.

The COG Technical Panel reviewed several, but rejected them in favor of its preference for a higher sales tax rate.  For example, the COG Panel estimated that it would be possible to raise its target for WMATA funding of $650 million if all local jurisdictions raised their property tax rates by 0.08% of the assessed values of all properties located in the region.  But general property taxes are the primary means by which local jurisdictions raise the funds they need for their local government operations, and it would be best to keep this separate from WMATA funding.  The COG Panel also considered the possibility of creating a new Value-Added Tax (or VAT), a tax that is common elsewhere in the world but has never been instituted in the US.  It is commonly described as similar to a sales tax, but is imposed only on the extra value created at each stage in the production and sale process.  But it would be complicated to develop and implement any new tax such as this, and a VAT has also never been imposed (as far as I am aware) on a regional rather than national basis.  A regional VAT might be especially complicated.  The COG Panel also noted the possibility of a “commuter tax”.  Such a tax would impose income taxes on workers based on where they work rather than where they live.  But since any such taxes would be offset against what the worker would otherwise pay where they are resident, the overall revenues generated at the level of the region as a whole would be essentially nothing.  It would be a wash.  There is also the issue that Congress has by law prohibited Washington, DC, from imposing any such commuter tax.

The COG Panel also looked at the imposition of an additional tax on motor vehicle fuels (gasoline and diesel) sold in the region.  This would in principle be more attractive as a means for funding transit, as it would affect the cost of commuting by car (by raising the cost of fuel) and thus might encourage, at the margin, more to take transit and thus reduce congestion.  Fuel taxes in the US are also extremely low compared to the levels charged in most other developed countries around the world.  And federal fuel taxes have not been changed since 1993, with a consequent fall in real, inflation-adjusted, terms. There is a strong case that the rates should be raised, as has been discussed in an earlier post on this blog.  But such fuel taxes have been earmarked primarily for road construction and maintenance (the Highway Trust Fund at the federal level), and any such funds are desperately needed there.  It would be best to keep such fuel taxes earmarked for that purpose, and separated from the funding needed to support WMATA.

E.  Summary and Conclusion

All agree that there is a need to create a dedicated source of additional funding for WMATA.  While there are a number of issues with WMATA, including management and governance issues, no one disagrees that a necessary element in any solution is increased funding.  WMATA has underinvested for decades, with the result that the current system cannot operate reliably or safely.

Estimates for the additional funding required by WMATA vary, but most agree that a minimum of an additional $650 million per annum is required now simply to bring the assets up to a minimum level of reliability and safety.  But estimates of what will in fact be needed once the current most urgent rehabilitation investments are made are substantially higher.  It is likely that the system will need on the order of $2 billion a year more than what would follow under current funding formulae by the end of the next decade, if the system’s capacity is to grow by what will be necessary to support the region’s growth.

A mandatory fee on parking spaces for all commuters in the region would work best to provide such funds.  It would be feasible as it can be implemented largely through the existing property tax system.  It would be effective in raising the amounts needed, as a fee equivalent to $1.30 per day would raise $650 million per year under current conditions, and a fee of $3.50 per day would raise $2 billion per year in the year 2030.  These rates are modest or even low compared to what it costs now to drive.

A mandatory fee on parking spaces would also contribute to a more efficient use of the transportation assets in the region not only by helping to ensure the Metro system can function safely and reliably, but by also encouraging at least some who now drive instead to take transit and hence reduce road congestion.  Finally, such a fee would be fair as it is those of higher income who most commonly drive (in part because driving is expensive), while it is the poor who are most likely to take transit.

An increase in the sales tax rate in the region would not have these advantages.  While an increase in the rate by 1% point was estimated by the COG Panel to generate $650 million a year under current conditions, the rate would need to increase by substantially more to generate the funds that will be needed to support WMATA in the future.  This could be politically difficult.  The revenues generated would also come disproportionately from Northern Virginia, which itself will create political difficulties.  It would also not lead to greater efficiencies in transport use, other than by keeping WMATA operational (as all the options would do).  Most importantly, a sales tax is regressive (even when foods and medicine are not taxed), with the poor bearing a disproportionate share of the costs.

A special property tax on all properties located within a half mile (or whatever specified distance) of existing Metro stations could also be imposed, although readily so only on those properties that are currently subject to property tax.  But such a rigidly specified distance is arbitrary, with a sharp fall in the tax rate for properties just across that artificial border line.  There is also a question as to whether it would be politically feasible to set the rates as high as would be necessary to address WMATA’s funding needs beyond just the next few years.

But most importantly, such a special tax on the current owners would not be a tax on those who gained a windfall when the locations of the Metro stations were announced many decades ago.  Those original owners have already pocketed their windfall gains and left.  The current owners paid a high price for that land or the developments on it, and are not themselves enjoying a special windfall.  And indeed, a new special property tax on developments near the Metro stations would have the effect of discouraging any such new investment.  That is the precise opposite of what we should want.  The policy aim has long been to encourage, not discourage, concentrated development around the Metro stations.

This does not mean that some such special tax, if time-constrained, would not be a good choice when a new Metro line (or rail line such as the proposed Purple Line) is to be built. The owners of land near the planned future Metro stops would enjoy a windfall gain, and a special tax on that is warranted.  Such a special tax district has been set for the new Silver Line, and would be warranted also if the Purple Line is to be built.  Those who own that land will of course object, as they wish to keep their windfall in full.

To conclude, no one denies that any new tax or fee will be controversial and politically difficult.  But the Metro system is critical to the Washington region, and cannot be allowed to continue to deteriorate.  Increased funding (as well as other measures) will be necessary to fix this.  Among the possible options, the best approach is to set a mandatory fee that would be collected on all commuter parking spaces in the region.

Long-Term Structural Change in the US Economy: Manufacturing is Simply Following the Path of Agriculture

A.  Introduction

A major theme of Trump, both during his campaign and now as president, has been that jobs in manufacturing have been decimated as a direct consequence of the free trade agreements that started with NAFTA.  He repeated the assertion in his speech to Congress of February 28, where he complained that “we’ve lost more than one-fourth of our manufacturing jobs since NAFTA was approved”, but that because of him “Dying industries will come roaring back to life”.  He is confused.  But to be fair, there are those on the political left as well who are similarly confused.

All this reflects a sad lack of understanding of history.  Manufacturing jobs have indeed been declining in recent decades, and as the chart above shows, they have been declining as a share of total jobs in the economy since the 1940s.  Of all those employed, the share employed in manufacturing (including mining) fell by 7.6% points between 1994 (when NAFTA entered into effect) and 2015 (the most recent year in the sector data of the Bureau of Economic Analysis, used for consistency throughout this post), a period of 21 years. But the share employed in manufacturing fell by an even steeper 9.2% points in the 21 years before 1994.  The decline in manufacturing jobs (both as a share and in absolute number) is nothing new, and it is wrong to blame it on NAFTA.

It is also the case that manufacturing production has been growing steadily over this period.  Total manufacturing production (measured in real value-added terms) rose by 64% over the 21 years since NAFTA went into effect in 1994.  And this is also substantially higher than the 42% real growth in the 21 years prior to 1994.  Blaming NAFTA (and the other free trade agreements of recent decades) for a decline in manufacturing is absurd.  Manufacturing production has grown.

For those only interested in Trump’s assertion that NAFTA and the other free trade agreements have killed manufacturing in the US, and with it the manufacturing jobs, one could stop here.  Manufacturing has actually grown strongly since NAFTA went into effect, and there are fewer manufacturing jobs now than before not because manufacturing has declined, but because workers in manufacturing are now more productive than ever before (a continuation of the pattern underway over at least the entire post-World War II period, and not something new).  But the full story is a bit more complex, as one also needs to examine why manufacturing production is at the level that it is.  For this, one needs to bring in the rest of the economy, in particular services.  The rest of this blog post will address this broader issue.

Manufacturing jobs have nonetheless indeed declined.  To understand why, one needs to look at what has happened to productivity, not only in manufacturing but also in the other sectors of the economy (in particular in services).  And I would suggest that one could learn much by an examination of the similar factors behind the even steeper decline over the years in the share of jobs in agriculture.  It is not because of adverse effects of free trade.  The US is in fact the largest exporter of food products in the world.  Yet the share of workers employed in the agricultural sectors (including forestry and fishing) is now just 0.9% of the total.  It used to be higher:  4.3% in 1947 and 8.4% in 1929 (using the BEA data).  If one wants to go really far back, academics have estimated that agricultural employment accounted for 74% of all US employment in 1800, with this still at 56% in 1860.

Employment in agriculture has declined so much, from 74% of total employment in 1800 to 8.4% in 1929 to less than 1% today, because those employed in agriculture are far more productive today than they were before.  And while it leads to less employment in the sector, whether as a share of total employment or in absolute numbers, higher productivity is a good thing.  The US could hardly enjoy a modern standard of living if 74% of those employed still had to be working in agriculture in order to provide us food to eat. And while stretching the analysis back to 1800 is extreme, one can learn much by examining and understanding the factors behind the long-term trends in agricultural employment.  Manufacturing is following the same basic path.  And there is nothing wrong with that.  Indeed, that is exactly what one would hope for in order for the economy to grow and develop.

Furthermore, the effects of foreign trade on employment in these sectors, positive or negative, are minor compared to the long-term impacts of higher productivity.  In the post below we will look at what would happen to employment if net trade were somehow forced to zero by Trumpian policies.  The impact relative to the long-term trends would be trivial.

This post will focus on the period since 1947, the earliest date for which the BEA has issued data on both sector outputs and employment.  The shares of agriculture and of manufacturing in both total employment and output (with output measured in current prices) have declined sharply over this period, but not because those sectors are producing less than before.  Indeed, their production in real terms is far higher in both cases.  Employment in those sectors has nevertheless declined in absolute numbers.  The reason is their high rates of productivity growth.  Importantly, productivity in those two sectors has grown at a faster pace than in the services sector (the rest of the economy).  As we will discuss, it is this differential rate of productivity growth (faster in agriculture and in manufacturing than in services) which explains the decline in the share employed in agriculture and manufacturing.

These structural changes, resulting ultimately from the differing rates of productivity growth in the sectors, can nonetheless be disruptive.  With fewer workers needed in a sector because of a high rate of productivity growth, while more workers are needed in those sectors where productivity is growing more slowly (although still positively and possibly strongly, just relatively less strongly), there is a need for workers to transfer from one sector to another.  This can be difficult, in particular for individuals who are older or who have fewer general skills.  But this was achieved before in the US as well as in other now-rich countries, as workers shifted out of agriculture and into manufacturing a century to two centuries ago.  Critically important was the development of the modern public school educational system, leading to almost universal education up through high school. The question the country faces now is whether the educational system can be similarly extended today to educate the workers needed for jobs in the modern services economy.

First, however, is the need to understand how the economy has reached the position it is now in, and the role of productivity growth in this.

B.  Sector Shares and Prices

As Chart 1 at the top of this post shows, employment in agriculture and in manufacturing have been falling steadily as a share of total employment since the 1940s, while jobs in services have risen.

[A note on the data:  The data here comes from the Bureau of Economic Analysis (BEA), which, as part of its National Income and Product Accounts (NIPA), estimates sector outputs as well as employment.  Employment is measured in full-time equivalent terms (so that two half-time workers, say, count as the equivalent of one full-time worker), which is important for measuring productivity growth.

And while the BEA provides figures on its web site for employment going all the way back to 1929, the figures for sector output on its web site only go back to 1947.  Thus while the chart at the top of this post goes back to 1929, all the analysis shown below will cover the period from 1947 only.  Note also that there is a break in the employment series in 1998, when the BEA redefined slightly how some of the detailed sectors would be categorized. They unfortunately did not then go back to re-do the categorizations in a consistent way in the years prior to that, but the changes are small enough not to matter greatly to this analysis.  And there were indeed similar breaks in the employment series in 1948 and again in 1987, but the changes there were so small (at the level of aggregation of the sectors used here) as not to be noticeable at all.

Also, for the purposes here the sector components of GDP have been aggregated to just three, with forestry and fishing included with agriculture, mining included with manufacturing, and construction included with services.  As a short hand, these sectors will at times be referred to simply as agriculture, manufacturing, and services.

Finally, the figures on sector outputs in real terms provided by the BEA data are calculated based on what are called “chain-weighted” indices of prices.  Chain-weighted indices are calculated based on moving shares of sector outputs (whatever the share is in any given period) rather than on fixed shares (i.e. the shares at the beginning or the end of the time period examined).  Chain-weighted indices are the best to use over extended periods, but they are unfortunately not additive:  a sum (such as real GDP) will not necessarily equal exactly the sum of the estimates of the underlying sector figures (in real terms).  The issue is however not an important one for the questions being examined in this post.  While we will show the estimates in the charts for real GDP (based on a sum of the figures for the three sectors), there is no need to focus on it in the analysis.  Now back to the main text.]

The pattern in a chart of sector outputs as shares of GDP (measured in current prices by the value-added of each sector) is similar to that seen in Chart 1 above for the employment shares:

Agriculture is falling, and falling to an extremely small share of GDP (to less than 1% of GDP in 2015).  Manufacturing and mining is similarly falling from the mid-1950s, while services and construction is rising more or less steadily.  On the surface, all this appears to be similar to what was seen in Chart 1 for employment shares.  It also might look like the employment shares are simply following the shifts in output shares.

But there is a critical difference.  The share of workers employed is a measure of the number of workers (in full-time equivalent terms) as a share of the total.  That is, it is a measure in real terms.  But the shares of sector outputs in Chart 2 above are measured in terms of current prices.  They do not tell us what is happening to sector outputs in real terms.

For sector outputs in real terms (based on the prices in the initial year, or 1947 here), one finds a very different chart:

Here, the output shares are not changing all that much.  There is only a small decline in agriculture (from 8% of the total in 1947 to 7% in 2015), some in manufacturing (from 28% to 22%), and then the mirror image of this in services (from 64% to 72%).  The changes in the shares were much greater in Chart 2 above for sector output shares in current prices.

Many might find the relatively modest shifts in the shares of sector outputs when measured in constant price terms to be surprising.  We were all taught about Engel Curve effects in our introductory Economics 101 class.  Ernst Engel was a German statistician who, in 1857, found that at the level of households, the share of expenditures on basic nourishment (food) fell the richer the household.  Poorer households spent a relatively higher share of their income on food, while better-off households spent less.  One might then postulate that as a nation becomes richer, it will see a lower share of expenditures on food items, and hence that the share of agriculture will decline.

But there are several problems with this theory.  First, for various reasons it may not apply to changes over time as general income levels rise (including that consumption patterns might be driven mostly by what one observes other households to be consuming at the time; i.e. “keeping up with the Joneses” dominates).  Second, agricultural production spans a wide range of goods, from basic foodstuffs to luxury items such as steak.  The Engel Curve effects might mostly be appearing in the mix of food items purchased.

Third, and perhaps most importantly, the Engel Curve effects, if they exist, would affect production only in a closed economy where it was not possible to export or import agricultural items.  But one can in fact trade such agricultural goods internationally. Hence, even if domestic demand fell over time (due perhaps to Engel Curve effects, or for whatever reason), domestic producers could shift to exporting a higher share of their production.  There is therefore no basis for a presumption that the share of agricultural production in total output, in real terms, should be expected to fall over time due to demand effects.

The same holds true for manufacturing and mining.  Their production can be traded internationally as well.

If the shares of agriculture and manufacturing fell sharply over time in terms of current prices, but not in terms of constant prices (with services then the mirror image), the implication is that the prices of agriculture as well as of manufacturing fell relative to the price of services.  This is indeed precisely what one sees:

These are the changes in the price indices published by the BEA, with all set to 1947 = 1.0.  Compared to the others, the change in agricultural prices over this 68 year period is relatively small.  The price of manufacturing and mining production rose by far more.  And while a significant part of this was due to the rise in the 1970s of the prices of mined products (in particular oil, with the two oil crises of the period, but also in the prices of coal and other mined commodities), it still holds true for manufacturing alone.  Even if one excludes the mining component, the price index rose by far more than that of agriculture.

But far greater was the change in the price of services.  It rose to an index value of 12.5 in 2015, versus an index value of just 1.6 for agriculture in that year.  And the price of services rose by roughly twice as much as the price of manufacturing and mining (and by even more relative to manufacturing alone).

With the price of services rising relative to the others, the share of services in GDP (in current prices) will then rise, and substantially so given the extent of the increase in its relative price, despite the modest change in its share in constant price terms.  Similarly, the fall in the shares of agriculture and of manufacturing (in current price terms) will follow directly from the fall in their prices (relative to the price of services), despite just a modest reduction in their shares in real terms.

The question then is why have we seen such a change in relative prices.  And this is where productivity enters.

C.  Growth in Output, Employment, and Productivity

First, it is useful to look at what happened to the growth in real sector outputs relative to 1947:

All sector outputs rose, and by substantial amounts.  While Trump has asserted that manufacturing is dying (due to free trade treaties), this is not the case at all.  Manufacturing (including mining) is now producing 5.3 times (in real terms) what it was producing in 1947.  Furthermore, manufacturing production was 64% higher in real terms in 2015 than it was in 1994, the year NAFTA went into effect.  This is far from a collapse.  The 64% increase over the 21 years between 1994 and 2015 was also higher than the 42% increase in manufacturing production of the preceding 21 year period of 1973 to 1994.  There was of course much more going on than any free trade treaties, but to blame a collapse in manufacturing on free trade treaties is absurd.  There was no collapse.

Production in agriculture also rose, and while there was greater volatility (as one would expect due to the importance of weather), the increase in real output over the full period was in fact very similar to the increase seen for manufacturing.

But the biggest increase was for services.  Production of services was 7.6 times higher in 2015 than in 1947.

The second step is to look at employment, with workers measured here in full-time equivalent terms:

Despite the large increases in sector production over this period, employment in agriculture fell as did employment in manufacturing.  One unfortunately cannot say with precision by how much, given the break in the employment series in 1998.  However, there were drops in the absolute numbers employed in manufacturing both before and after the 1998 break in the series, while in agriculture there was a fall before 1998 (relative to 1947) and a fairly flat series after.  The change in the agriculture employment numbers in 1998 was relatively large for the sector, but since agricultural employment was such a small share of the total (only 1%), this does not make a big difference overall.

In contrast to the falls seen for agriculture and manufacturing, employment in the services sector grew substantially.  This is where the new jobs are arising, and this has been true for decades.  Indeed, services accounted for more than 100% of the new jobs over the period.

But one cannot attribute the decline in employment in agriculture and in manufacturing to the effects of international trade.  The points marked with a “+” in Chart 6 show what employment in the sectors would have been in 2015 (relative to 1947) if one had somehow forced net imports in the sectors to zero in 2015, with productivity remaining the same. There would have been an essentially zero change for agriculture (while the US is the world’s largest food exporter, it also imports a lot, including items like bananas which would be pretty stupid to try to produce here).  There would have been somewhat more of an impact on manufacturing, although employment in the sector would still have been well below what it had been decades ago.  And employment in services would have been a bit less. While most production in the services sector cannot be traded internationally, the sector includes businesses such as banking and other finance, movie making, professional services, and other areas where the US is in fact a strong exporter.  Overall, the US is a net exporter of services, and an abandonment of trade that forced all net imports (and hence net exports) to zero would lead to less employment in the sector.  But the impact would be relatively minor.

Labor productivity is then simply production per unit of labor.  Dividing one by the other leads to the following chart:

Productivity in agriculture grew at a strong pace, and by more than in either of the other two sectors over the period.  With higher productivity per worker, fewer workers will be needed to produce a given level of output.  Hence one can find that employment in agriculture declined over the decades, even though agricultural production rose strongly. Productivity in manufacturing similarly grew strongly, although not as strongly as in agriculture.

In contrast, productivity in the services sector grew at only a modest pace.  Most of the activities in services (including construction) are relatively labor intensive, and it is difficult to substitute machinery and new technology for the core work that they do.  Hence it is not surprising to find a slower pace of productivity growth in services.  But productivity in services still grew, at a positive 0.9% annual pace over the 1947 to 2015 period, as compared to a 2.8% annual pace for manufacturing and a 3.3% annual pace in agriculture.

Finally, and for those readers more technically inclined, one can convert this chart of productivity growth onto a logarithmic scale.  As some may recall from their high school math, a straight line path on a logarithmic scale implies a constant rate of growth.  One finds:

While one should not claim too much due to the break in the series in 1998, the path for productivity in agriculture on a logarithmic scale is remarkably flat over the full period (once one abstracts from the substantial year to year variation – short term fluctuations that one would expect from dependence on weather conditions).  That is, the chart indicates that productivity in agriculture grew at a similar pace in the early decades of the period, in the middle decades, and in the later decades.

In contrast, it appears that productivity in manufacturing grew at a certain pace in the early decades up to the early 1970s, that it then leveled off for about a decade until the early 1980s, and that it then moved to a rate of growth that was faster than it had been in the first few decades.  Furthermore, the pace of productivity growth in manufacturing following this turn in the early 1980s was then broadly similar to the pace seen in agriculture in this period (the paths are then parallel so the slope is the same).  The causes of the acceleration in the 1980s would require an analysis beyond the scope of this blog post. But it is likely that the corporate restructuring that became widespread in the 1980s would be a factor.  Some would also attribute the acceleration in productivity growth to the policies of the Reagan administration in those years.  However, one would also then need to note that the pace of productivity growth was similar in the 1990s, during the years of the Clinton administration, when conservatives complained that Clinton introduced regulations that undid many of the changes launched under Reagan.

Finally, and as noted before, the pace of productivity growth in services was substantially less than in the other sectors.  From the chart in logarithms, it appears the pace of productivity growth was relatively robust in the initial years, up to the mid-1960s.  While slower than the pace in manufacturing or in agriculture, it was not that much slower.  But from the mid-1960s, the pace of growth of productivity in services fell to a slower, albeit still positive, pace.  Furthermore, that pace appears to have been relatively steady since then.
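For completeness, the log-scale point can be written out explicitly.  Using $P_t$ for a sector's productivity index in year $t$ and $g$ for an assumed constant annual growth rate (notation introduced here only for this note):

$$
P_t = P_0\,(1+g)^t \quad\Longrightarrow\quad \ln P_t = \ln P_0 + t\,\ln(1+g),
$$

so when plotted against time on a logarithmic scale, a constant growth rate traces out a straight line with slope $\ln(1+g)$.  A steeper segment means faster growth, and parallel segments (as for agriculture and manufacturing after the early 1980s) mean equal rates of growth.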

One can summarize the results of this section with the following table:

Growth Rates, 1947 to 2015:

                 Employment   Productivity   Output
  Total (GDP)       1.5%          1.4%        2.9%
  Agriculture      -0.7%          3.3%        2.6%
  Manufacturing    -0.3%          2.8%        2.5%
  Services          2.1%          0.9%        3.0%

The growth rate of output will be the simple sum of the growth rate of employment in a sector and the growth rate of its productivity (output per worker); this is exact for continuously compounded rates, and holds to a close approximation for the annual rates shown here.  The figures here do indeed add up as they should.  They do not tell us what causes what, however, and that will be addressed next.
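As a quick illustration of that adding-up property, here is a minimal Python sketch using the rounded figures from the table above.  Because the inputs are rounded, the sums hold only to within rounding, and the cumulative factors it prints come out close to (but not exactly equal to) the "5.3 times" and "7.6 times" output figures cited earlier:

```python
# Check the adding-up property: output growth ~ employment growth + productivity growth.
# The rates are the rounded figures from the table above, in percent per year, 1947-2015.

sectors = {
    #                employment, productivity, output
    "Total (GDP)":   ( 1.5, 1.4, 2.9),
    "Agriculture":   (-0.7, 3.3, 2.6),
    "Manufacturing": (-0.3, 2.8, 2.5),
    "Services":      ( 2.1, 0.9, 3.0),
}

YEARS = 2015 - 1947  # 68 years

for name, (emp, prod, out) in sectors.items():
    # For continuously compounded rates the identity is exact; with rounded annual
    # percentage rates it holds only to within rounding.
    print(f"{name:14s} {emp:+.1f}% + {prod:+.1f}% = {emp + prod:+.1f}%  "
          f"(output growth reported as {out:+.1f}%)")
    # Cumulative factor implied by the (rounded) output growth rate over 68 years --
    # roughly 5.4x for manufacturing and 7.5x for services here, versus the 5.3x and
    # 7.6x cited in the text; the gap is just rounding of the rates.
    print(f"{'':14s} 68-year output factor: {(1 + out / 100) ** YEARS:.1f}x")
```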

D.  Pulling It Together:  The Impact on Employment, Prices, and Sector Shares

Productivity is driven primarily by technological change.  While management skills and a willingness to invest to take advantage of what new technologies permit will matter over shorter periods, over the long term the primary driver will be technology.

And as seen in the chart above, technological progress, and the resulting growth in productivity, has proceeded at a different pace in the different sectors.  Productivity (real output per worker) has grown fastest over the last 68 years in agriculture (a pace of 3.3% a year), and fast as well in manufacturing (2.8% a year).  In contrast, the rate of growth of productivity in services, while positive, has been relatively modest (0.9% a year).

But as average incomes have grown, there has been increased domestic demand for what the services sector produces, not only in absolute level but also as a share of rising incomes.  Since services largely cannot be traded internationally (with a few exceptions), the increased demand for services will need to be met by domestic production.  With overall production (GDP) matching overall incomes, and with demand for services growing faster than overall incomes, the growth of services (in real terms) will be greater than the growth of real GDP, and therefore also greater than growth in the rest of the economy (agriculture and manufacturing; see Chart 5).  The share of services in real GDP will then rise (Chart 3).

To produce this, the services sector needed more labor.  With productivity in the services sector growing at a slower pace (in relative terms) than that seen in agriculture and in manufacturing, the only way to obtain the labor input needed was to increase the share of workers in the economy employed in services (Chart 1).  And depending on the overall rate of labor growth as well as the size of the differences in the rates of productivity growth between the sectors, one could indeed find that the shift in workers out of agriculture and out of manufacturing would not only lead to a lower relative share of workers in those sectors, but also even to a lower absolute number of workers in those sectors.  And this is indeed precisely what happened, with the absolute number of workers in agriculture falling throughout the period, and falling in manufacturing since the late 1970s (Chart 6).

Finally, the differential rates of productivity growth account for the relative price changes seen between the sectors.  To be able to hire additional workers into services and out of agriculture and out of manufacturing, despite a lower rate of productivity growth in services, the price of services had to rise relative to agriculture as well as manufacturing. Services became more expensive to produce relative to the costs of agriculture or manufacturing production.  And this is precisely what is seen in Chart 4 above on prices.

To summarize, productivity growth allowed all sectors to grow.  With the higher incomes, there was a shift in demand towards services, which led it to grow at a faster pace than overall incomes (GDP).  But for this to be possible, particularly as its pace of productivity growth was slower than the pace in agriculture and in manufacturing, workers had to shift to services from the other sectors.  The effect was so great (due to the differing rates of growth of productivity) that employment in services rose to the point where services now employs close to 90% of all workers.

To be able to hire those workers, the price of services had to grow relative to the prices of the other sectors.  As a consequence, while there was only a modest shift in sector shares over time when measured in real terms (constant prices of 1947), there was a much larger shift in sector shares when measured in current prices.

The decline in the number of workers in manufacturing should not then be seen as surprising nor as a reflection of some defective policy.  Nor was it a consequence of free trade agreements.  Rather, it was the outcome one should expect from the relatively rapid pace of productivity growth in manufacturing, coupled with an economy that has grown over the decades with this leading to a shift in domestic demand towards services.  The resulting path for manufacturing was then the same basic path as had been followed by agriculture, although it has been underway longer in agriculture.  As a result, fewer than 1% of American workers are now employed in agriculture, with this possible because American agriculture is so highly productive.  One should expect, and indeed hope, that the same eventually becomes true for manufacturing as well.

Taxes to Pay for Highways: A Switch from the Tax on Gallons of Fuel Burned to a Tax on Miles Driven Would Be Stupid

Impact of Switching from Fuel Tax on Gallons Burned to Tax on Miles Driven

A.  Introduction

According to a recent report in the Washington Post, a significant and increasing number of state public officials and politicians are advocating for a change in the tax system the US uses to support highway building and maintenance.  The current system is based on a tax on gallons of fuel burned, and the proposed new system would be based on the number of miles a car is driven.  At least four East Coast states are proposing pilots on how this might be done, some West Coast states have already launched pilots, and states are applying for federal grants to consider the change.  There is indeed even a lobbying group based in Washington now advocating it:  The Mileage-Based User Fee Alliance.

There is no question that the current federal gas tax of 18.4 cents per gallon is woefully inadequate.  It was last changed in 1993, 23 years ago, and has been kept constant in nominal terms ever since.  With general prices (based on the CPI) now 65% higher, 18.4 cents today is worth only 11.2 cents at 1993 prices, a decline of close to 40%.  As a result, the Highway Trust Fund is terribly underfunded, and with all the politics involved in trying to find other sources of funding, our highways are in terrible shape.  Basic maintenance is simply not being done.

An obvious solution would be simply to raise the gas tax back at least to where it was before in real terms.  Based on where the tax was when last set in 1993 and on the CPI for inflation since then, this would be 30.3 cents per gallon now, an increase of 11.9 cents from the current 18.4 cents per gallon.  Going back even further, the gasoline tax was set at 4 cents per gallon in 1959, to fund the construction of the then new Interstate Highway system (as well as for general highway maintenance).  Adjusting for inflation, that tax would be 32.7 cents per gallon now.  Also, looking at what the tax would need to be to fund adequately the Highway Trust Fund, a Congressional Budget Office report issued in 2014 estimated that a 10 to 15 cent increase (hence 28.4 cents to 33.4 cents per gallon) would be needed (based on projections through 2024).

These fuel tax figures are all similar.  Note also that while some are arguing that the Highway Trust Fund is underfunded because cars are now more fuel efficient than before, this is not the case.  Simply bringing the tax rate back in real terms to where it was before (30.3 cents based on the 1993 level or 32.7 cents based on the 1959 level) would bring the rate to within the 28.4 to 33.4 cents range that the CBO estimates is needed to fully fund the Highway Trust Fund.  The problem is not fuel efficiency, but rather the refusal to adjust the per gallon tax rate for inflation.
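For those who want to check the arithmetic, a minimal sketch of the inflation adjustment.  The 1.65 price-level ratio for 1993 is the "65% higher" figure cited above; the 1959 ratio is simply the one implied by the 32.7 cent figure, shown only for completeness:

```python
# Express an old nominal per-gallon tax rate at today's price level using a CPI ratio.
def adjust_for_inflation(nominal_cents: float, cpi_ratio: float) -> float:
    """Nominal tax rate (cents per gallon) restated at today's price level."""
    return nominal_cents * cpi_ratio

# 1993 rate of 18.4 cents, with prices now about 65% higher: ~30.4 cents
# (30.3 in the text; the small difference is round-off in the 65% figure).
print(f"{adjust_for_inflation(18.4, 1.65):.1f} cents")

# 1959 rate of 4 cents, using the price-level ratio implied by the 32.7 cent figure.
print(f"{adjust_for_inflation(4.0, 32.7 / 4.0):.1f} cents")
```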

But Congress has refused to approve any such increase.  Anti-tax hardliners simply refuse to consider what they view as an increase in taxes, even though the measure would simply bring them back in real terms to where they were before.  And it is not even true that the general population is against an increase in the gas tax.  According to a poll sponsored by the Mineta Transportation Institute (a transportation think tank based at San Jose State University in California), 75% of those polled would support an immediate increase in the gas tax of 10 cents a gallon if the funds are dedicated to maintenance of our streets, roads, and highways (see the video clip embedded in the Washington Post article, starting at minute 3:00).

In the face of this refusal by Congress, some officials are advocating for a change in the tax, from a tax per gallon of fuel burned to a new tax per mile each car is driven.  While I do not see how this would address the opposition of the anti-tax politicians (this would indeed be a totally new tax, not an adjustment in the old tax to keep it from falling in real terms), there appears to be a belief among some that this would be accepted.

But even if such a new tax were viewed as politically possible, it would be an incredibly bad public policy move to replace the current tax on fuel burned with such a tax on miles driven.  It would in essence be a tax on fuel efficiency, with major distributional (as well as other) consequences, favoring those who buy gas guzzlers.  And as it would encourage the purchase of heavy gas guzzlers (relative to the policy now in place), it would also lead to more than proportional damage to our roads, meaning that road conditions would deteriorate further rather than improve.

This blog post will discuss why such consequences would follow.  To keep things simple, it will focus on the tax on gasoline (which I will sometimes simply refer to as gas, or as fuel).  There are similar, but separate, taxes on diesel and other fuels, and their levels should be adjusted proportionally with any adjustment for gasoline.  There is also the issue of the appropriate taxes to be paid by trucks and other heavy commercial vehicles.  That is an important, but separate, issue, and is not addressed here.

B.  The Proposed Switch Would Penalize Fuel Efficient Vehicles

The reports indicate that the policy being considered would impose a tax of perhaps 1.5 cents per mile driven in substitution for the current federal tax of 18.4 cents per gallon of gas burned (states have their own fuel taxes in addition, with these varying across states). For the calculations here I will take the 1.5 cent figure as the basis for the comparisons, even though no specific figure is as yet set.

First of all, it should be noted that at the current miles driven in the country and the average fuel economy of the stock of cars being driven, a tax of 1.5 cents per mile would raise substantially more in taxes than the current 18.4 cents per gallon of gas.  That is, at these rates, there would be a substantial tax increase.

Using figures for 2014, the average fuel efficiency (in miles per gallon) of the light duty fleet of motor vehicles in the US was 21.4 miles per gallon, and the average miles driven per driver was 13,476 miles.  At a tax of 1.5 cents per mile driven, the average driver would pay $202.14 (= $.015 x 13,476) in such taxes per year.  With an average fuel economy of 21.4 mpg, such a driver would burn 629.7 gallons per year, and at the current fuel tax of 18.4 cents per gallon, is now paying $115.87 (= $.184 x 629.7) in gas taxes per year. Hence the tax would rise by almost 75% ($202.14 / $115.87).  A 75% increase would be equivalent to raising the fuel tax from the current 18.4 cents to a rate of 32.1 cents per gallon.  While higher tax revenues are indeed needed, why a tax on miles driven would be acceptable to tax opponents while an increase in the tax per gallon of fuel burned is not, is not clear.
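A minimal sketch of that comparison, using the 2014 averages cited above:

```python
# Compare what the average driver would pay under the two tax schemes,
# using the 2014 averages cited in the text.

MPG_AVG = 21.4           # average fuel economy of the US light-duty fleet, mpg
MILES_PER_YEAR = 13_476  # average miles driven per driver per year
GAS_TAX = 0.184          # current federal tax, dollars per gallon
MILE_TAX = 0.015         # proposed tax, dollars per mile

gallons = MILES_PER_YEAR / MPG_AVG               # ~629.7 gallons burned per year
tax_per_gallon_scheme = GAS_TAX * gallons        # ~$115.87 per year
tax_per_mile_scheme = MILE_TAX * MILES_PER_YEAR  # ~$202.14 per year

increase = tax_per_mile_scheme / tax_per_gallon_scheme - 1  # ~75%
equivalent_gas_tax = tax_per_mile_scheme / gallons          # ~32.1 cents per gallon

print(f"Per-gallon scheme: ${tax_per_gallon_scheme:.2f}/yr")
print(f"Per-mile scheme:   ${tax_per_mile_scheme:.2f}/yr  (+{increase:.1%})")
print(f"Equivalent per-gallon rate: {equivalent_gas_tax * 100:.1f} cents")
```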

But the real reason to be opposed to a switch in the tax to miles driven is the impact it would have on incentives.  Taxes matter, and affect how people behave.  And a tax on miles driven would act, in comparison to the current tax on gallons of fuel burned, as a tax on fuel efficiency.

The chart at the top of this post shows how the tax paid would vary across cars of different fuel efficiencies.  It would be a simple linear relationship.  Assuming a switch from the current 18.4 cents per gallon of fuel burned to a new tax of 1.5 cents per mile driven, a driver of a highly fuel efficient car that gets 50 miles per gallon would see their tax increase by over 300%!  A driver of a car getting the average nation-wide fuel efficiency of 21.4 miles per gallon would see their tax increase by 75%, as noted above (and as reflected in the chart).  In contrast, someone driving a gas guzzler getting only 12 miles per gallon or less would see their taxes in fact fall!  They would end up paying less under such a new system based on miles driven than they do now based on gallons of fuel burned.  Drivers of luxury sports cars or giant SUVs could well end up paying less than before, even with rates set such that taxes on average would rise by 75%.
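The relationship behind the chart is straightforward: under the current tax a car pays 18.4/mpg cents per mile, so switching to 1.5 cents per mile multiplies the tax paid by 1.5 × mpg / 18.4.  A short sketch, with the mpg values taken from the illustrative cases in the paragraph above:

```python
# Change in tax paid when switching from 18.4 cents/gallon to 1.5 cents/mile.
# Under the current tax, a car pays (18.4 / mpg) cents per mile driven.

GAS_TAX_CENTS = 18.4
MILE_TAX_CENTS = 1.5

def tax_change(mpg: float) -> float:
    """Proportional change in tax paid, as a fraction (e.g. 0.75 = +75%)."""
    current_cents_per_mile = GAS_TAX_CENTS / mpg
    return MILE_TAX_CENTS / current_cents_per_mile - 1

for mpg in (12, 21.4, 50):
    # prints roughly -2%, +74% (i.e. the ~75% cited above), and +308%
    print(f"{mpg:5.1f} mpg: {tax_change(mpg):+.0%}")

# Break-even fuel economy: below about 12.3 mpg the per-mile tax is actually lower.
print(f"Break-even: {GAS_TAX_CENTS / MILE_TAX_CENTS:.1f} mpg")
```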

Changing the tax structure in this way would, with all else equal, encourage drivers to switch from buying fuel efficient cars to cars that burn more gas.  There are, of course, many reasons why someone buys the car that they do, and fuel efficiency is only one.  But at the margin, changing the basis for the tax to support highway building and maintenance from a tax per gallon to a tax on miles driven would be an incentive to buy less fuel efficient cars.

C.  Other Problems

The change to a tax on miles driven from the tax on gallons of fuel burned would have a number of adverse effects:

a)  A Tax on Fuel Efficiency:  As noted above, this would become basically a tax on fuel efficiency.  More fuel efficient cars would pay higher taxes relative to what they do now, and there will be less of an incentive to buy more fuel efficient cars.  There would then be less of an incentive for car manufacturers to develop the technology to improve fuel efficiency.  This is what economists call a technological externality, and we all would suffer.

b)  Heavier Vehicles Cause Far More Damage to the Roads:  Heavier cars not only get poorer gas mileage, but also tear up the roads much more, leading to greater maintenance needs and expense.  Heavier vehicles also burn more fuel, but there is a critical difference.  As a general rule, vehicles burn fuel in proportion with their weight: A vehicle that weighs twice as much will burn approximately twice as much fuel.  Hence such a vehicle will pay twice as much in fuel taxes (when such taxes are in cents per gallon) per mile driven.

However, the heavier vehicle also causes more damage to the road over time, leading to greater maintenance needs.  And it will not simply be twice as much damage.  A careful early study found that the amount of damage from a heavier vehicle increases not in direct proportion to its weight, but rather approximately according to the fourth power of the ratio of the weights.  That is, a vehicle that weighs twice as much (for the same number of axles distributing the weight) will cause damage equal to 2 to the fourth power (= 16) times as much as the lighter vehicle.  Hence if they were to pay taxes proportionate to the damage they do, a vehicle that is twice as heavy should pay 16 times more in taxes, not simply twice as much.

(Note that some now argue that the 2 to the fourth power figure found before might be an over-estimate, and that the relationship might be more like 2 to the third power.  But this would still imply that a vehicle that weighs twice as much does 8 times the damage (2 to the third power = 8).  The heavier vehicle still accounts for a grossly disproportionate share of damage to the roads.)
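A minimal sketch of that comparison for a vehicle twice the weight of another (with the same number of axles), under both the fourth-power relationship and the more conservative third-power reading:

```python
# Relative road damage vs. relative fuel tax paid for a heavier vehicle.
# Fuel burned (and hence fuel tax paid) scales roughly in proportion to weight,
# while road damage scales approximately with the fourth power of the weight ratio
# (third power under the more conservative estimate noted above).

def relative_damage(weight_ratio: float, exponent: float = 4.0) -> float:
    return weight_ratio ** exponent

def relative_fuel_tax(weight_ratio: float) -> float:
    return weight_ratio  # roughly proportional to weight

w = 2.0  # a vehicle twice as heavy, same number of axles
print(f"Fuel tax paid:       {relative_fuel_tax(w):.0f}x")
print(f"Damage (4th power):  {relative_damage(w):.0f}x")     # 16x
print(f"Damage (3rd power):  {relative_damage(w, 3):.0f}x")  # 8x
```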

A tax that is set based on miles driven would tax heavy and light vehicles the same.  This is the opposite of what should be done:  Heavy vehicles cause far more damage to the roads than light vehicles do.  Encouraging heavy, fuel-thirsty vehicles by switching from a tax per gallon of fuel burned to a tax per mile driven will lead to more road damage, with the cost of repairing that damage far out of proportion to what would be collected in highway taxes to pay for it.

c)  Impact on Greenhouse Gases:  One also wants to promote fuel efficiency because of the impact on greenhouse gases, and hence global warming, from the burning of fuels. By basic chemistry, carbon dioxide (CO2) is a direct product of fuel that is burned.  The more fuel that is burned, the more CO2 will go up into the air and then trap heat. Economists have long argued that the most efficient way to address the issue of greenhouse gases being emitted would be to tax them in proportion to the damage they do.  A tax on gallons of fuel that are burned will do this, while a tax on miles driven (and hence independent of the fuel efficiency of the vehicle) will not.

An interesting question is what level of gasoline tax would do this.  That is, what would the level of fuel tax need to be for that tax to match the damage being done through the associated emission of CO2?  The EPA has come up with estimates of what the social cost of such carbon emissions is (and see here for a somewhat more technical discussion of its estimates).  Unfortunately, given the uncertainties in any such calculations, as well as uncertainty on what the social discount rate should be (needed to discount costs arising in the future that follow from emitting greenhouse gases today), the cost range is quite broad.  Hence the EPA presents figures for the social cost of emitting CO2 using expected values at alternative social discount rates of 2.5%, 3%, and 5%, as well as from a measure of the statistical distribution of one of them (the 95th percentile for the 3% discount rate, meaning there is only an estimated 5% chance that the cost will be higher than this).  The resulting costs per metric ton of CO2 emitted then range from a low of $11 for the expected value (the 50th percentile) at the 5% discount rate, $36 at the expected value for the 3% discount rate, and $56 for the expected value for the 2.5% discount rate, to $105 for the 95th percentile at a 3% discount rate (all for 2015).

With such a range in social costs, one should be cautious in the interpretation of any one figure.  But it may still be of interest to calculate how this would translate into a tax on gasoline burned by automobiles, to see whether the resulting tax is “in the ballpark” of what our fuel taxes are or should be.  Every gallon of gasoline burned emits 19.64 pounds of CO2.  There are 2,204.62 pounds in a metric ton, so one gallon of gas burned emits 0.00891 metric tons of CO2.  At the middle social cost of $36 per metric ton of CO2 emitted (the expected value for the 3% social discount rate scenario), this implies that a fuel tax of 32.1 cents per gallon should be imposed.  This is, surprisingly, almost precisely the fuel tax figure that all the other calculations suggest is warranted.
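A minimal sketch of that conversion, applying the same arithmetic to each of the EPA social-cost figures quoted above (only the 32.1 cent result is cited in the text; the others simply follow from the same formula):

```python
# Convert a social cost of CO2 (dollars per metric ton) into an implied tax per gallon.
CO2_LBS_PER_GALLON = 19.64
LBS_PER_METRIC_TON = 2204.62
TONS_PER_GALLON = CO2_LBS_PER_GALLON / LBS_PER_METRIC_TON  # ~0.00891 metric tons

# EPA social cost of CO2 scenarios cited above (dollars per metric ton, for 2015).
social_costs = {
    "5% discount rate (expected value)":    11,
    "3% discount rate (expected value)":    36,
    "2.5% discount rate (expected value)":  56,
    "3% discount rate (95th percentile)":  105,
}

for label, dollars_per_ton in social_costs.items():
    cents_per_gallon = dollars_per_ton * TONS_PER_GALLON * 100
    # The $36 case works out to ~32.1 cents per gallon, as in the text.
    print(f"{label:38s} -> {cents_per_gallon:5.1f} cents per gallon")
```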

d)  One Could Impose a Similar Tax on Electric Cars:  One of the arguments of the advocates of a switch from taxes on fuel burned to miles driven is that as cars have become more fuel efficient, they pay less (per mile driven) in fuel taxes.  This is true.  But as generally lighter vehicles (one of the main ways to improve fuel economy) they also cause proportionately far less road damage, as discussed above.

There is also an increasing share of electric, battery-powered, cars, which burn no fossil fuel at all.  At least they do not burn fossil fuels directly, as the electricity they need to recharge their batteries comes from the power grid, where fossil fuels dominate.  But this is still close to a non-issue, as the share of electric cars among the vehicles on US roads is still tiny.  However, the share will grow over time (at least one hopes).  If the share does become significant, how will the cost of building and maintaining roads be covered and fairly shared?

The issue could then be addressed quite simply.  And one would want to do this in a way that rewards efficiency (as different electric cars have different efficiencies in the mileage they get for a given charge of electricity) rather than penalizes it.  One could do this by installing on all electric cars a simple meter that keeps track of how much electricity the car takes on when charging (in kilowatt-hours) over, say, a year.  At an annual safety inspection or license renewal, one would then pay a tax based on that measure of power used over the year.  Such a meter would likely have a trivial cost, of perhaps a few dollars.

Note that the amounts involved to be collected would not be large.  According to the 2016 EPA Automobile Fuel Economy Guide (see page 5), all-electric cars being sold in the US have fuel efficiencies (in miles per gallon equivalent) of over 100 mpg, and as high as 124 mpg.  These are on the order of five times the 21.4 average mpg of the US auto stock, for which we calculated that the average tax to be paid would be $202.  Even ignoring that the electric cars will likely be driven for fewer miles per year than the average car (due to their shorter range), the tax per year commensurate with their fuel economy would be roughly $40.  This is not much.  It is also not unreasonable as electric cars are kept quite light (given the limits of battery technology) and hence do little road damage.
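A rough sketch of where the $40 figure comes from, using the 32.1 cents per gallon-equivalent rate from the earlier calculation and taking 107 mpg-equivalent as an illustrative point within the 100 to 124 mpg-e range cited above (a kWh meter reading could be converted in the same way):

```python
# Rough annual tax commensurate with an electric car's fuel economy (mpg-equivalent),
# at the same per-gallon-equivalent burden worked out earlier for the average car.
# The 107 mpg-e value is only an illustrative point in the 100-124 mpg-e range.

MILES_PER_YEAR = 13_476
EQUIVALENT_GAS_TAX = 0.321  # dollars per gallon-equivalent (from the earlier calculation)

def annual_tax(mpg_equivalent: float, miles: float = MILES_PER_YEAR) -> float:
    gallons_equivalent = miles / mpg_equivalent
    return gallons_equivalent * EQUIVALENT_GAS_TAX

print(f"${annual_tax(107):.0f} per year")  # roughly $40
```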

e)  There Are Even Worse Policies That Have Been Proposed:  As discussed above, there are many reasons why a switch from a tax on fuel burned to miles driven would be a bad policy change.  But it should be acknowledged that some have proposed even worse. One example is the idea that there should be a fixed annual tax per registered car that would fund what is needed for highway building and maintenance.  Some states in fact do this now.

The amounts involved are not huge.  As was calculated above, at the current federal gasoline tax of 18.4 cents per gallon, the driver of a car that gets the average mileage (of 21.4 mpg) for the average distance a year (of 13,476 miles) will pay $115.87 a year.  If the fuel tax were raised to 32.1 cents per gallon (or equivalently, if there were a tax of 1.5 cents per mile driven), the average tax paid would be still just $202.14 per year.  These are not huge amounts.  One could pay them as part of an annual license renewal.

But the tax structured in this way would then be the same for a driver who drives a fuel efficient car or a gas guzzler.  And it would be the same for a driver who drives only a few miles each year, or who drives far more than the average each year.  The driver of a heavy gas guzzler, or one who drives more miles each year than others, does more damage to the roads and should pay more to the fund that repairs such damage and develops new road capacity.  The tax should reflect the costs they are imposing on society, and a fixed annual fee does not.

f)  The Cost of Tax Collection Needs to be Recognized:  Finally, one needs to recognize that it will cost something to collect the taxes.  This cost will be especially high for a tax on miles driven.

The current system, of a tax on fuel burned, is efficient and costs next to nothing to collect.  It can be charged at the point where the gasoline and other fuels leave the refinery in bulk, as all of it will eventually be burned.  While the consumer ultimately pays the tax when they pump their gas, the price being charged at the pump simply reflects the tax that had been charged at an earlier stage.

In contrast, a tax on miles driven would need to be worked out at the level of each individual car.  And if the tax is to include shares that are allocated to different states, the equipment will need to keep track of which states the car is being driven in.  As the Washington Post article on a possible tax on miles driven describes, experiments are underway on different ways this might be done.  All would require special equipment to be installed, with a GPS-based system commonly considered.

Such special equipment would have a cost, both up-front for the initial equipment and then recurrent if there is some regular reporting to the center (perhaps monthly) of miles driven.  No one knows right now what such a system might cost if it were in mass use, but one could easily imagine that a GPS tracking and reporting system might cost on the order of $100 up front, and then several dollars a month for reporting.  This would be a significant share of a tax collection that would generate an average of just $202 per driver each year.

There is also the concern that any type of GPS system would allow the overseers to spy on where the car was driven.  While this might well be too alarmist, and there would certainly be promises that this would not be done, some might not be comforted by such promises.

D.  Conclusion

While one should always consider whether given policies can be changed for the better, one needs also to recognize that often the changes proposed would make things worse rather than better.  Switching the primary source of funding for highway building and maintenance from a tax on fuel burned to a tax on miles driven is one example.  It would be a stupid move.

There is no doubt that the current federal tax on gasoline of 18.4 cents per gallon is too low.  The result is insufficient revenues for the Highway Trust Fund, and we end up with insufficient road capacity and roads that are terribly maintained.

What I was surprised by in the research for this blog post was finding that a wide range of signals all pointed to a similar figure for what the gasoline tax should be. Specifically:

  1. The 1959 gas tax of 4 cents per gallon in terms of current prices would be 32.7 cents per gallon;
  2. The 1993 gas tax of 18.4 cents per gallon in terms of current prices would be 30.3 cents per gallon;
  3. The proposal of a 1.5 cent tax per mile driven would be equivalent (given current average car mileage and the average miles driven per year) to 32.1 cents per gallon;
  4. The tax to offset the social cost of greenhouse gas emissions from burning fuel would be (at a 3% social discount rate) 32.1 cents per gallon;
  5. The Congressional Budget Office projected that the gasoline tax needed to fully fund the Highway Trust Fund would be in the range of 28.4 to 33.4 cents per gallon.

All these point in the same direction.  The tax on gasoline should be adjusted to between 30 and 33 cents per gallon, and then indexed for inflation.

The Rates of Return on Funds Paid Into Social Security Are Actually Quite Good

Social Security Real Rates of Return - Various Scenarios

 

A.  Introduction

The rate of return earned on what is paid into our Social Security accounts is actually quite good.  It is especially good when one takes into account that these are investments in safe assets, and thus that the proper comparison should be to the returns on other safe assets, not risky ones.  Yet critics of Social Security, mostly those who believe it should be shut down in its current form with some sort of savings plan invested through the financial markets (such as a 401(k) plan) substituted for it, often assert that the returns earned on the pension savings in Social Security are abysmally poor.

These critics argue that by “privatizing” Social Security, that is by shifting to individual plans invested through the financial markets, returns would be much higher and that thus our Social Security pensions would be “rescued”.  They assert that by privatizing Social Security investments, the system will be able to provide pensions that are either better than what we receive under the current system, or that similar pensions could be provided at lower contribution (Social Security tax) rates.

There are a number of problems with this.  They include that risks of poor financial returns (perhaps due, for example, to a financial collapse such as that suffered in 2008 in the last year of the Bush administration, when many Americans lost much or all of their retirement savings) would then be shifted on to individuals.  Individuals are not in a good position to take on such risks.  Individuals are also not financial professionals, nor normally in a good position to judge the competency of financial professionals who offer them services.  They also often underestimate the impact of high and compounding fees in depleting their savings over time.  For all these reasons, such an approach would be a poor substitute for the Social Security system we have now, which is designed to provide at least a minimum pension that people can rely on in their old age, with little risk.

But there is also a more fundamental problem with this approach.  It presumes that returns in the financial markets will in general be substantially higher than returns that one earns on what we pay into the Social Security system.  This blog post will show that this is simply not true.

The post looks at what the implicit rates of return are under several benchmark cases for individuals.  We pay into Social Security over our life time, and then draw down Social Security pensions in our old age.  The returns will vary for every individual, depending on their specific earnings profile (how much they earn in each year of their working career), their age, their marital situation, and other factors.  Hence there will be over 300 million different cases, one for each of the over 300 million Americans who are either paying into Social Security or are enjoying a Social Security pension now.  But by selecting a few benchmarks, and in particular extreme cases in the direction of where the returns will be relatively low, we can get a sense of the range of what the rates of return normally will be.

The chart at the top of this post shows several such cases.  The rest of this post will discuss each.

B.  Social Security Rates of Return Under Current Tax and Benefit Rates

The scenarios considered are all for an individual who is assumed to work from age 22 to age 65, who then retires at 66.  The individual is assumed to have reached age 65 in 2013 (the most recent year for which we have all the data required for the calculations), and hence reached age 62 in 2010 and was born in 1948.  The historical Social Security tax rates, the ceiling on wages subject to Social Security tax, the wage inflation factors used by Social Security to adjust for average wage growth, and the median earnings of workers by year, are all obtained from the comprehensive Annual Statistical Supplement to the Social Security Bulletin – 2014 (published April 2015).  Information on the parameters needed to calculate what the Social Security pension payments will be is also presented in detail in this Statistical Supplement, or in an easier-to-use form for the specific case of someone reaching age 62 in 2010 in this publication of the Social Security Administration.  It is issued annually.

The Social Security pension for an individual is calculated by first taking the average annual earnings (as adjusted for average wage growth) over the 35 years of highest such earnings in a person’s working career.  For someone who always earned the median wage who reached age 62 in 2010, this would work out to $2,290 per month. The monthly pension (at full retirement age) would then be equal to 90% of the first $761, 32% of the earnings above this up to $4,586 per month, and then (if any is left, which would not be the case in this example of median earnings) 15% of the amount above $4,586.  Note the progressivity in these rates of 90% for the initial earnings, then 32%, and finally 15% for the highest earnings.  The monthly Social Security pension will then be the sum of these three components.  Since it is then adjusted for future inflation (as measured by the CPI), we do not need to make any further adjustments to determine the future pension payments in real terms.  The pensions will then be paid out from age 66 until the end of their life, which we take to be age 84, the current average life expectancy for someone who has reached the age of 65.
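A minimal sketch of that benefit formula in Python, using the bend points cited above for someone reaching age 62 in 2010 (the function name and structure here are mine, not the Social Security Administration's):

```python
# Monthly Social Security pension (at full retirement age) from average indexed
# monthly earnings, using the bend points cited above for someone reaching 62 in 2010:
# 90% of the first $761, 32% of earnings from $761 up to $4,586, and 15% above $4,586.

def monthly_pension(avg_monthly_earnings: float) -> float:
    first = min(avg_monthly_earnings, 761)
    second = min(max(avg_monthly_earnings - 761, 0), 4586 - 761)
    third = max(avg_monthly_earnings - 4586, 0)
    return 0.90 * first + 0.32 * second + 0.15 * third

# Median-earner example from the text: $2,290 per month in wage-indexed earnings.
print(f"${monthly_pension(2290):,.2f} per month")  # works out to roughly $1,174
```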

The historical series of payments made into the Social Security system through Social Security taxes (for Social Security Old-Age pensions only, and so excluding the taxes for Disability insurance and for Medicare) is then calculated by multiplying earnings by the tax rate (currently 10.6%, including the shares paid by both worker and employer).  The stream of payments is then put in terms of 2010 dollars using the historical CPI series from the Bureau of Labor Statistics.

We can thus calculate the real rates of return on Social Security pensions under various scenarios.  The first set of figures (lines A-1) in the chart above is for a worker whose earnings were equal to the median wage throughout his or her working life.  (A table with the specific numbers on the rates of return is provided at the bottom of this post, for those who prefer a numerical presentation.)  The individual paid into the Social Security pension system when working, and will now draw a Social Security pension while in retirement.  One can calculate the real rate of return on this stream of payments in and then payments out, and in such a scenario for a single worker earning median wages throughout his or her career who retired at age 66 in 2014, the real rate of return works out to be 2.9%.  If the person is married, with a spouse receiving the standard spousal benefit, the real rate of return is 4.1%.
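For readers who want to see the mechanics, here is a minimal sketch of the rate-of-return calculation: build a real (inflation-adjusted) stream of taxes paid in and pension drawn out, and solve for the discount rate that sets its present value to zero.  The flat real earnings profile, constant 10.6% tax rate, and pension figure used here are deliberate simplifications for illustration only, not the historical series behind the figures in the chart, so the number this prints (roughly 1.9%) will differ from the 2.9% reported above:

```python
# Internal rate of return on a simplified Social Security cash-flow stream (real terms).
# Simplifications: flat real earnings, constant 10.6% tax rate, single worker, no spousal
# benefit -- the post's own calculation uses the actual historical tax rates and wage series.

def internal_rate_of_return(cash_flows, lo=-0.5, hi=0.5, tol=1e-8):
    """Discount rate that sets the net present value of the stream to zero (bisection)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    # Assumes npv(lo) and npv(hi) bracket the root, which holds for this stream.
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

REAL_ANNUAL_EARNINGS = 40_000  # illustrative flat real wage (not the median-wage series)
TAX_RATE = 0.106               # employer plus employee Old-Age shares
REAL_ANNUAL_PENSION = 18_000   # illustrative real pension (not from the benefit formula)

WORK_YEARS = 66 - 22    # taxes paid in from age 22 through 65
RETIRE_YEARS = 85 - 66  # pension drawn from age 66 through 84

cash_flows = ([-REAL_ANNUAL_EARNINGS * TAX_RATE] * WORK_YEARS
              + [REAL_ANNUAL_PENSION] * RETIRE_YEARS)

print(f"Real rate of return: {internal_rate_of_return(cash_flows):.1%}")
```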

Such rates of return are pretty good, especially on what should be seen as a safe asset (provided the politicians do not kill the system).  Indeed, as discussed in an earlier post on this blog, the real rate of return (before taxes) on an investment in the S&P500 stock market index over the 50 year period 1962 to 2012 would have been just 2.9% per annum assuming fees on 401(k) type retirement accounts of 2.5% (which is typical once one aggregates the fees at all the various levels – see the discussion in section E.3 of this blog post).  But investing in the stock market, even in a broad based index such as the S&P500, is risky due to the volatility.  Retirement accounts in 401(k)’s are generally a mix of equity investments, fixed income securities (bonds of various maturities, CDs, and similar instruments), and cash.  Based on the recent average mix seen in 401(k)’s, and for the same 50 year period of 1962 to 2012, the average real rate of return achieved after the fees typically charged on such accounts would only have been 1.2%.  Social Security for a worker earning median wages is far better.

As noted above, there is a degree of progressivity in the system, as higher income earners will receive only a smaller boost in their pension (at the 15% rate) from the higher end of their earnings.  Thus the rates of return in Social Security for high income earners will be less.  The rates of return they will earn are shown on lines A-2 of the chart.  This extreme case is calculated for a worker who is assumed to have earned throughout his or her entire work life an amount equal to the maximum ceiling on wages subject to Social Security tax (which was $113,700 in 2013).  Note also that anyone earning even more than this will have the same rates of return, as they will not be paying any more into the Social Security system (it is capped at the wage ceiling subject to tax) and hence also not withdrawing any more (or less) in pension.

Such high income earners will nonetheless still see a positive real rate of return on their Social Security contributions, of 1.4% for a single earner and 2.8% if married receiving a spousal benefit.  That is, while there is some progressivity in the Social Security system, it is not such that the returns turn negative.  And the returns achieved are still better than what typical 401(k) retirement accounts earn.

One should also take into account that high income earners are living longer than low income earners.  Indeed, the increase in life expectancies has been substantial in the last 30 years for high income earners, but only modest for those in the bottom half of the earnings distribution.  While I do not have data on what the life expectancy is for a person whose earnings have been at the absolute top of the Social Security wage ceiling over the course of their career, for the purposes here it was assumed their life expectancy (for someone who has reached age 65) would be increased to age 90 from the age of 84 for the overall population.

In such a scenario, the real rates of return for someone who paid into the Social Security system always at the wage ceiling over their entire life time and then drew a Social Security pension up to age 90 would be 2.2% if single and 3.4% if married with a standard spousal benefit.  These are far better than typical 401(k) returns, and indeed are quite good in comparison to an investment in any safe asset (once one takes into account fees).

C.  Social Security Rates of Return Assuming Higher Social Security Tax Rates

The rates of return calculated so far have been based on what the actual historical Social Security tax rates have been, and what the current benefit formula would determine for future pensions.  But as most know, at current tax and benefit rates the Social Security Trust Fund is projected to be depleted by about 2034 according to current estimates.  The reason is that life expectancies are now longer (which is a good thing), but inadequate adjustments have been made in Social Security tax rates to allow for pay-outs which will now need to cover longer lifetimes.  The problem has been gridlock in Washington, where an important faction of politicians opposed to Social Security are able to block any decision on how to pay for longer life expectancies.

There are a number of ways to ensure Social Security could be adequately funded.  One option, which I would recommend, would be simply to lift the ceiling on wages subject to Social Security tax (which was $113,700 in 2013, $118,500 in 2015, and will remain at $118,500 in 2016).  As discussed in section E.2 of this earlier blog post, it turns out that this alone should suffice to ensure the Social Security Trust Fund remains adequate for the foreseeable future.  The extra funding needed is an estimated 19.4% over what is collected now (based on calculations from an earlier post on this blog, but with data now a few years old), and it turns out that ending the wage ceiling would provide this.  At the ceiling on wages subject to Social Security tax of $113,700 in 2013, the share of workers earning at this ceiling or more was just 6.1%, but due to the skewed distribution of income in favor of the rich, untaxed wages in excess of the ceiling accounted for 17.3% of all wages paid.  That is, Social Security taxes were being paid on only 82.7% of all wages.  If the taxes were instead paid on the full 100%, Social Security would be collecting 21% more (= 100.0 / 82.7).

The extremely rich would then pay Social Security taxes at the same rate as most of the population, instead of something lower.  It should also be noted that it is the increase in life expectancy of those at the upper end of the income distribution which is driving the Social Security system into deficit at the current tax rates, as they are the ones living longer while those in the lower part of the income distribution are not.  Thus it is fair that those who will be drawing a Social Security pension for a longer period should be those who should be called on to pay more into the system.

To be highly conservative, however, for the rate of return calculations being discussed here I have assumed that the general Social Security tax rate will be increased by 19.4% on all wages below the ceiling, while the ceiling remains where it has been.  These calculations are for historical scenarios, where the purpose is to determine what the rates of return on payments into Social Security would have been had the tax rates been 19.4% higher on all, to provide for a fully funded system.  Finally, note that while these scenarios assume a higher Social Security tax rate historically, they also set the future pension benefits to be paid out to be the same as what they would be under the current benefit rates.  That is, the pay-out formulae would need to be changed to leave benefits the same despite the higher taxes being paid into the system.

The real rates of return would then be as shown in Panel B of the chart above.  While somewhat less than before, the real returns are still substantial, and still normally better than what is earned in a typical 401(k) plan.  The returns for someone earning the median wage throughout their career would now be 2.4% if single and 3.6% if married (0.5% points less than before).  Someone earning at or above the ceiling for wages subject to Social Security taxes would now earn a real rate of 0.8% if single and 2.2% if married for the age-84 life expectancy (0.6% points less than before), or 1.6% and 2.9% (single and married) if the life expectancy of such high earners is in fact age 90 (also 0.6% points less, before round-off).

The real rates of return all remain positive, and generally good compared to what 401(k)’s typically earn.

D.  Conclusion

As noted above, the actual profile of Social Security taxes paid and pension received will vary by individual.  No two cases will be exactly alike.  But the calculations here indicate that for someone with median earnings, and still even in the extreme case of someone with very high earnings (where a degree of progressivity in the system will reduce the returns), the rates of return earned on what is paid into and then taken out of the Social Security system are actually quite good.  They are generally better than what is earned in a typical 401(k) account (after fees), and indeed often better than what one would earn in a pure equity investment in the S&P500 index (and without the risk and volatility of such an investment).

Social Security is important and has become increasingly important.  Due to the end of many traditional defined benefit pension plans, with a forced switch to 401(k) plans or indeed often to nothing at all from the employer, Social Security now accounts (for those aged 65 or older) for a disturbingly high share of the incomes of many of the aged.  Specifically, Social Security now accounts for half or more of total income for two-thirds of all those age 65 or older, and accounts for 100% of their income for one-quarter of them.  And for the bottom 40% of this population, Social Security accounted for 90% or more of their total income for three-quarters of them, and 100% of their income for over half of them.

The problem is not in the Social Security system itself.  It is highly efficient, with an expense ratio in 2014 of just 0.4% of benefits paid.  Private 401(k) plans, with typical expenses of 2.5% of assets (not benefits) each year, will have expenses over their lifetime that are 90 times as great as what Social Security costs to run.  And as seen in this post, the returns on individual Social Security accounts are quite good.

The problem that Social Security faces is rather that with longer life expectancies (most importantly for those of higher income), the Social Security taxes being paid are no longer sufficient to cover the payouts to cover these longer lifetimes.  They need to be adjusted. There are several options, and my recommendation would be to start by ending the ceiling on wages subject to Social Security taxes.  This would suffice to solve the problem.  But one could go further.  As discussed in an earlier blog post (see Section E.2), not only should all wages be taxed equally, but one should extend this to taxing all forms of income equally (i.e. income from wealth as well as income from wages).  If one did this, one could then either cut the Social Security tax rate sharply, or raise the Social Security benefits that could be paid, or (and most likely) some combination of each.

But something needs to be done, or longer life spans will lead the Social Security Trust Fund to run out by around 2034.  The earlier this is resolved the better, both to ensure less of a shock when the change is finally made (as it could then be phased in over time) and for equity reasons (as it is those paying in now who are not adequately funding the system for what they will eventually draw down).

 

============================================================

Annex:  Summary Table

Real Rates of Return from Social Security Old-Age Taxes and Benefits

                                                          Single   Married
A)  Social Security Scenarios – Current Rates
  1)  Earnings at Median Throughout Career                 2.9%     4.1%
  2)  Earnings at Ceiling Throughout Career                1.4%     2.8%
  3)  Earnings at Ceiling, and Life Expectancy of 90       2.2%     3.4%

B)  Social Security with 19.4% Higher Tax Rate
  1)  Earnings at Median Throughout Career                 2.4%     3.6%
  2)  Earnings at Ceiling Throughout Career                0.8%     2.2%
  3)  Earnings at Ceiling, and Life Expectancy of 90       1.6%     2.9%

C)  Comparison to 401(k) Vehicles
  1)  S&P500 after typical fees                            2.9%
  2)  Average 401(k) mix after typical fees                1.2%