Andrew Yang’s Proposed $1,000 per Month Grant: Issues Raised in the Democratic Debate

A.  Introduction

This is the second in a series of posts on this blog addressing issues that have come up during the campaigns of the candidates for the Democratic presidential nomination, and specifically in the October 15 Democratic debate.  As flagged in the previous blog post, one can find a transcript of the debate at the Washington Post website, and a video of the debate at the CNN website.

This post will address Andrew Yang’s proposal of a $1,000 per month grant for every adult American (which I will mostly refer to here as a $12,000 grant per year).  This policy is called a universal basic income (or UBI), and has been explored in a few other countries as well.  It has received increased attention in recent years, in part due to the sharp growth in income inequality in the US over recent decades, beginning around 1980.  If properly designed, such a $12,000 grant per adult per year could mark a substantial redistribution of income.  But the degree of redistribution depends directly on how the funding would be raised.  As we will discuss below, Yang’s specific proposals for that are problematic.  There are also other issues with such a program which, even if it were well designed, call into question whether it would be the best approach to addressing inequality.  All this will be discussed below.

First, however, it is useful to address two misconceptions that appear to be widespread.  One is that many appear to believe that the $12,000 per adult per year would not need to come from somewhere.  That is, everyone would receive it, but no one would have to provide the funds to pay for it.  That is not possible.  The economy produces only so much, whatever is produced accrues as income to someone, and if one is to transfer some amount ($12,000 here) to each adult, then the amounts so transferred will need to come from somewhere.  That is, this is a redistribution.  There is nothing wrong with a redistribution, if well designed, but it is not a magical creation of something out of nothing.

The other misconception, and asserted by Yang as the primary rationale for such a $12,000 per year grant, is that a “Fourth Industrial Revolution” is now underway which will lead to widespread structural unemployment due to automation.  This issue was addressed in the previous post on this blog, where I noted that the forecast job losses due to automation in the coming years are not out of line with what has been the norm in the US for at least the last 150 years.  There has always been job disruption and turnover, and while assistance should certainly be provided to workers whose jobs will be affected, what is expected in the years going forward is similar to what we have had in the past.

It is also a good thing that workers will not need to rely on a $12,000 per year grant to make up for a lost job.  Median earnings of a full-time worker were an estimated $50,653 in 2018, according to the Census Bureau.  A grant of $12,000 would not go far in making up for such a loss.

So the issue is one of redistribution, and to be fair to Yang, I should note that he posts on his campaign website a fair amount of detail on how the program would be paid for.  I make use of that information below.  But the numbers do not really add up, and for a candidate who champions math (something I admire), this is disappointing.

B.  Yang’s Proposal of a $1,000 Monthly Grant to All Americans

First of all, the overall cost.  This is easy to calculate, although not much discussed.  The $12,000 per year grant would go to every adult American, whom Yang defines as all those over the age of 18.  There were very close to 250 million Americans over the age of 18 in 2018, so at $12,000 per adult the cost would be $3.0 trillion.

This is far from a small amount.  With GDP of approximately $20 trillion in 2018 ($20.58 trillion to be more precise), such a program would come to 15% of GDP.  That is huge.  Total taxes and revenues received by the federal government (including all income taxes, all taxes for Social Security and Medicare, and everything else) only came to $3.3 trillion in FY2018.  This is only 10% more than the $3.0 trillion that would have been required for Yang’s $12,000 per adult grants.  Or put another way, taxes and other government revenues would need almost to be doubled (raised by 91%) to cover the cost of the program.  As another comparison, the cost of the tax cuts that Trump and the Republican leadership rushed through Congress in December 2017 was forecast to be an estimated $150 billion per year.  That was a big revenue loss.  But the Yang proposal would cost 20 times as much.
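
For readers who want to check the arithmetic, the calculation is simple enough to write out.  The short script below (Python, used here just as a calculator) reproduces the figures discussed above; the inputs are the approximate figures cited in the text, not official estimates.

```python
# Rough check of the headline cost figures discussed above.  The inputs are
# the approximate figures cited in the text, not official estimates.
adults = 250e6               # roughly 250 million Americans over the age of 18 in 2018
grant_per_year = 12_000      # $1,000 per month
cost = adults * grant_per_year

gdp_2018 = 20.58e12          # GDP in 2018, as cited above
fed_revenue_fy2018 = 3.3e12  # total federal taxes and other revenues, FY2018

print(f"Annual cost of the grants:      ${cost / 1e12:.1f} trillion")       # ~$3.0 trillion
print(f"Cost as a share of GDP:         {cost / gdp_2018:.0%}")             # ~15%
print(f"Needed rise in federal revenue: {cost / fed_revenue_fy2018:.0%}")   # ~91%
```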

With such amounts to be raised, Yang proposes on his campaign website a number of taxes and other measures to fund the program.  One is a value-added tax (VAT), and from his very brief statements during the debates, as well as in interviews with the media, one gets the impression that all of the program would be funded by a value-added tax.  But that is not the case.  He in fact says on his campaign website that the VAT, at the rate and coverage he would set, would raise only about $800 billion.  This would come only to a bit over a quarter (27%) of the $3.0 trillion needed.  Much more is needed besides, and to his credit, he presents plans for most (although not all) of it.

So what, specifically, does he propose?

a) A New Value-Added Tax:

First, and as much noted, he is proposing that the US institute a VAT at a rate of 10%.  He estimates it would raise approximately $800 billion a year, and for the parameters for the tax that he sets, that is a reasonable estimate.  A VAT is common in most of the rest of the world as it is a tax that is relatively easy to collect, with internal checks that make underreporting difficult.  It is in essence a tax on consumption, similar to a sales tax but levied only on the added value at each stage in the production chain.  Yang notes that a 10% rate would be approximately half of the rates found in Europe (which is more or less correct – the rates in Europe in fact vary by country and are between 17 and 27% in the EU countries, but the rates for most of the larger economies are in the 19 to 22% range).

A VAT is a tax on what households consume, and for that reason a regressive tax.  The poor and middle classes who have to spend all or most of their current incomes to meet their family needs will pay a higher share of their incomes under such a tax than higher-income households will.  For this reason, VAT systems as implemented will often exempt (or tax at a reduced rate) certain basic goods such as foodstuffs and other necessities, as such goods account for a particularly high share of the expenditures of the poor and middle classes.  Yang is proposing this as well.  But even with such exemptions (or lower VAT rates), a VAT tax is still normally regressive, just less so.

Furthermore, households will in the end be paying the tax, as prices will rise to reflect the new tax.  Yang asserts that some of the cost of the VAT will be shifted to businesses, who would not be able, he says, to pass along the full cost of the tax.  But this is not correct.  In the case where the VAT applies equally to all goods, the full 10% will be passed along as all goods are affected equally by the now higher cost, and relative prices will not change.  To the extent that certain goods (such as foodstuffs and other necessities) are exempted, there could be some shift in demand to such goods, but the degree will depend on the extent to which they are substitutable for the goods which are taxed.  If they really are necessities, such substitution is likely to be limited.
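
To illustrate why a flat consumption tax falls more heavily on the poor and middle classes, consider a simple sketch.  The household incomes and spending shares below are purely illustrative assumptions (not data), and the sketch assumes, as discussed above, that the full 10% VAT is passed along in prices.

```python
# Illustrative sketch of why a flat consumption tax is regressive: the share
# of income paid depends on how much of that income is spent.  The household
# figures are assumptions for illustration, not data, and the full 10% VAT
# is assumed to be passed along in prices.
vat_rate = 0.10
households = {
    "lower-income household":  {"income": 30_000,  "share_spent": 1.00},
    "higher-income household": {"income": 300_000, "share_spent": 0.50},
}
for name, h in households.items():
    vat_paid = h["income"] * h["share_spent"] * vat_rate
    print(f"{name}: VAT paid = ${vat_paid:,.0f}, "
          f"or {vat_paid / h['income']:.1%} of income")
```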

A VAT as Yang proposes thus would raise a substantial amount of revenue, and the $800 billion figure is a reasonable estimate.  This total would be on the order of half of all that is now raised by individual income taxes in the US (which came to $1,684 billion in FY2018).  But one cannot avoid that such a tax is paid by households, who will face higher prices on what they purchase, and the tax will almost certainly be regressive, impacting the poor and middle classes the most (with the extent dependent on how many and which goods are designated as subject to a reduced VAT rate, or no VAT at all).  But whether regressive or not, everyone will be affected, and hence no one will actually see a net increase of $12,000 in purchasing power from the proposed grant.  Rather, it will be something less.

b)  A Requirement to Choose Either the $12,000 Grants, or Participation in Existing Government Social Programs

Second, Yang’s proposal would require that households who currently benefit from government social programs, such as welfare or food stamps, give up those benefits if they choose to receive the $12,000 per adult per year.  He says this will lead to reduced government spending on such social programs of $500 to $600 billion a year.

There are two big problems with this.  The first is that those programs are not that large.  While it is not fully clear how expansive Yang’s list is of the programs which would then be denied to recipients of the $12,000 grants, even if one included all those included in what the Congressional Budget Office defines as “Income Security” (“unemployment compensation, Supplemental Security Income, the refundable portion of the earned income and child tax credits, the Supplemental Nutrition Assistance Program [food stamps], family support, child nutrition, and foster care”), the total spent in FY2018 was only $285 billion.  You cannot save $500 to $600 billion if you are only spending $285 billion.

Second, such a policy would be regressive in the extreme.  Poor and near-poor households, and only such households, would be forced to choose whether to continue to receive benefits under such existing programs, or receive the $12,000 per adult grant per year.  If they are now receiving $12,000 or more in such programs per adult household member, they would receive no benefit at all from what is being called a “universal” basic income grant.  To the extent they are now receiving less than $12,000 from such programs (per adult), they may gain some benefit, but less than $12,000 worth.  For example, if they are now receiving $10,000 in benefits (per adult) from current programs, their net gain would be just $2,000 (setting aside for the moment the higher prices they would also now need to pay due to the 10% VAT).  Furthermore, only the poor and near-poor who are being supported by such government programs will see such an effective reduction in their $12,000 grants.  The rich and others, who benefit from other government programs, will not see such a cut in the programs or tax subsidies that benefit them.
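
The arithmetic of that choice can be summarized in a few lines.  The benefit levels below are illustrative assumptions; the point is simply that the net gain per adult is the $12,000 grant minus whatever benefits are given up (and never negative, since a household that would lose out could simply stay with its current programs).

```python
# Illustrative only: net annual gain per adult for a household now receiving
# benefits from existing programs, if it opts into the $12,000 grant and
# gives those benefits up (ignoring the higher prices due to the new VAT).
def net_gain(current_benefits_per_adult, grant=12_000):
    # A household would not switch if the grant is worth less than the
    # benefits forgone, so the gain is never negative.
    return max(0, grant - current_benefits_per_adult)

for benefits in (0, 5_000, 10_000, 12_000, 15_000):
    print(f"current benefits ${benefits:>6,}: net gain ${net_gain(benefits):,}")
```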

c)  Savings in Other Government Programs 

Third, Yang argues that with his universal basic income grant, there would be a reduction in government spending of $100 to $200 billion a year from lower expenditures on “health care, incarceration, homelessness services and the like”, as “people would be able to take better care of themselves”.  This is clearly more speculative.  There might be some such savings, and hopefully there would be, but without experience to draw on it is impossible to say how important this would be and whether any such savings would add up to such a figure.  Furthermore, much of those savings, were they to follow, would accrue not to the federal budget but rather to state and local governments, as it is at the state and local level where most expenditures on incarceration and homelessness, and to a lesser degree on health care, take place.

d)  Increased Tax Revenues From a Larger Economy

Fourth, Yang states that with the $12,000 grants the economy would grow larger – by 12.5% he says (or $2.5 trillion in increased GDP).  He cites a 2017 study produced by scholars at the Roosevelt Institute, a left-leaning non-profit think tank based in New York, which examined the impact on the overall economy, under several scenarios, of precisely such a $12,000 annual grant per adult.

There are, however, several problems:

i)  First, under the specific scenario that is closest to the Yang proposal (where the grants would be funded through a combination of taxes and other actions), the impact on the overall economy forecast in the Roosevelt Institute study would be either zero (when net distribution effects are neutral), or small (up to 2.6%, if funded through a highly progressive set of taxes).

ii)  The reason for this result is that the model used by the Roosevelt Institute researchers assumes that the economy is far from full employment, and that economic output is then entirely driven by aggregate demand.  Thus with a new program such as the $12,000 grants, which is fully paid for by taxes or other measures, there is no impact on aggregate demand (and hence no impact on economic output) when net distributional effects are assumed to be neutral.  If funded in a way that is not distributionally neutral, such as through the use of highly progressive taxes, then there can be some effect, but it would be small.

In the Roosevelt Institute model, there is only a substantial expansion of the economy (of about 12.5%) in a scenario where the new $12,000 grants are not funded at all, but rather purely and entirely added to the fiscal deficit and then borrowed.  And with the current fiscal deficit now about 5% of GDP under Trump (unprecedented even at 5% in a time of full employment, other than during World War II), and the $12,000 grants coming to $3.0 trillion or 15% of GDP, this would bring the overall deficit to 20% of GDP!

Few economists would accept that such a scenario is anywhere close to plausible.  First of all, the current unemployment rate of 3.5% is at a 50 year low.  The economy is at full employment.  The Roosevelt Institute researchers are in effect asserting that this full employment is illusory, and that the economy could expand by a substantial amount (12.5% in their scenario) if the government simply spent more and did not raise taxes to cover any share of the cost.  They also assume that a fiscal deficit of 20% of GDP would not have any consequences, such as on interest rates.  Note also that an implication of their approach is that the government spending could be on anything, including, for example, the military.  They are using a purely demand-led model.

iii)  Finally, even if one assumes the economy will grow to be 12.5% larger as a result of the grants, even the Roosevelt Institute researchers do not assume it will be instantaneous.  Rather, in their model the economy becomes 12.5% larger only after eight years.  Yang is implicitly assuming it will be immediate.

There are therefore several problems in the interpretation and use of the Roosevelt Institute study.  Their scenario for 12.5% growth is not the one that follows from Yang’s proposals (under which the grants would be funded, at least to a degree), nor would GDP jump immediately by such an amount.  And the Roosevelt Institute model of the economy is one that few economists would accept as applicable in the current state of the economy, with its 3.5% unemployment.

But there is also a further problem.  Even assuming GDP rises instantly by 12.5%, leading to an increase in GDP of $2.5 trillion (from a current $20 trillion), Yang then asserts that this higher GDP will generate between $800 and $900 billion in increased federal tax revenue.  That would imply federal taxes of 32 to 36% on the extra output.  But that is implausible.  Total federal tax (and all other) revenues are only 17.5% of GDP.  While in a progressive tax system the marginal tax revenues received on an increase in income will be higher than at the average tax rate, the US system is no longer very progressive.  And marginal rates are nowhere near twice as high (32 to 36%) as the average rate (17.5%), which is what Yang’s figures would require.  A more plausible estimate of the increased federal tax revenues from an economy that somehow became 12.5% larger would not be the $800 to $900 billion Yang calculates, but rather about half that.
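
The implied marginal tax rate is easy to check.  The figures below are the ones cited above (with the 12.5% jump in GDP being Yang’s assumption, not a forecast); the “more plausible” line simply applies something close to the average federal revenue share to the extra output.

```python
# Back-of-the-envelope check of the implied marginal federal tax rate.
gdp_increase = 2.5e12              # the assumed 12.5% jump on a ~$20 trillion GDP
claimed_revenue = (800e9, 900e9)   # the $800 to $900 billion Yang claims
avg_federal_share = 0.175          # federal revenues as a share of GDP, as cited above

for revenue in claimed_revenue:
    print(f"Implied marginal federal take: {revenue / gdp_increase:.0%}")
# Prints 32% and 36%, roughly double the 17.5% average share.

revenue_at_avg_share = avg_federal_share * gdp_increase
print(f"Revenue at roughly the average share: ${revenue_at_avg_share / 1e9:.0f} billion")
# Roughly $440 billion, i.e. about half of what is claimed.
```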

Might such a universal basic income grant affect the size of the economy through other, more orthodox, channels?  That is certainly possible, although whether it would lead to a higher or to a lower GDP is not clear.  Yang argues that it would lead recipients to manage their health better, to stay in school longer, to engage in less criminal activity, and to gain other such social benefits.  Evidence on this is highly limited, but it is in principle conceivable in a program that does properly redistribute income towards those with lower incomes (where, as discussed above, Yang’s specific program has problems).  Over fairly long periods of time (generations really) this could lead to a larger and stronger economy.

But one will also likely see effects working in the other direction.  There might be an increase in spouses (wives usually) who choose to stay home longer to raise their children, or an increase in those who decide to retire earlier than they would have before, or an increase in the average time between jobs for those who lose or quit one job before they take another, and other such impacts.  Such impacts are not negative in themselves, if they reflect choices voluntarily made and now possible due to a $12,000 annual grant.  But they all would have the effect of reducing GDP, and hence the tax revenues that follow from a given level of GDP.

There might therefore be both positive and negative impacts on GDP.  However, the impact of each is likely to be small, will mostly develop only over time, and will to some extent cancel each other out.  What is likely is that there will be little measurable change in GDP in either direction.

e)  Other Taxes

Fifth, Yang would institute other taxes to raise further amounts.  He does not specify precisely how much would be raised or what these would be, but provides a possible list and says they would focus on top earners and on pollution.  The list includes a financial transactions tax, ending the favorable tax treatment now given to capital gains and carried interest, removing the ceiling on wages subject to the Social Security tax, and a tax on carbon emissions (with a portion of such a tax allocated to the $12,000 grants).

What would be raised by such new or increased taxes would depend on precisely what the rates would be and what they would cover.  But the total required from these other taxes, assuming the amounts that would be raised (or saved, when existing government programs are cut) from all the measures listed above are as Yang assumes, would be between $500 and $800 billion (as the revenues and savings from those measures sum to $2.2 to $2.5 trillion, against a cost of $3.0 trillion).  That is, one might need from these “other taxes” as much as would be raised by the proposed new VAT.
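
Adding up Yang’s own estimates makes the residual clear.  The figures below are the ones stated on the campaign website and discussed above, taken at face value.

```python
# Summing Yang's own funding estimates (taken at face value) to see what is
# left to be covered by the unspecified "other taxes".
cost = 3.0e12
vat = 0.8e12                          # the new 10% VAT
program_savings = (0.5e12, 0.6e12)    # claimed offsets from existing social programs
other_savings = (0.1e12, 0.2e12)      # health care, incarceration, homelessness, etc.
growth_revenue = (0.8e12, 0.9e12)     # claimed revenue from a larger economy

low = vat + program_savings[0] + other_savings[0] + growth_revenue[0]
high = vat + program_savings[1] + other_savings[1] + growth_revenue[1]
print(f"Identified so far: ${low / 1e12:.1f} to ${high / 1e12:.1f} trillion")
print(f"Still to be found: ${(cost - high) / 1e12:.1f} to ${(cost - low) / 1e12:.1f} trillion")
```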

But as noted in the discussion above, the amounts that would be raised by those measures are often likely to be well short of what Yang says will be the case.  One cannot save $500 to $600 billion in government programs for the poor and near-poor if government is spending only $285 billion on such programs, for example.  A more plausible figure for what might be raised by those proposals would be on the order of $1 trillion, mostly from the VAT, and not the $2.2 to $2.5 trillion Yang says will be the case.

C.  An Assessment

Yang provides a fair amount of detail on how he would implement a universal basic income grant of $12,000 per adult per year, and for a political campaign it is an admirable amount of detail.  But there are still, as discussed above, numerous gaps that prevent anything like a complete assessment of the program.  Still, a number of points are evident.

To start, the figures provided are not always plausible.  The math just does not add up, and for someone who extolls the need for good math (and rightly so), this is disappointing.  One cannot save $500 to $600 billion in programs for the poor and near-poor when only $285 billion is being spent now.  One cannot assume that the economy will jump immediately by 12.5% (which even the Roosevelt Institute model forecasts would only happen in eight years, and under a scenario that is the opposite of that of the Yang program, and in a model that few economists would take as credible in any case).  Even if the economy did jump by so much immediately, one would not see an increase of $800 to $900 billion in federal tax revenues from this but rather more like half that.  And other such issues.

But while the proposal is still not fully spelled out (in particular on which other taxes would be imposed to fill out the program), we can draw a few conclusions.  One is that the one group in society who will clearly not gain from the $12,000 grants is the poor and near-poor, who currently make use of food stamp and other such programs and decide to stay with those programs.  They would then not be eligible for the $12,000 grants.  And keep in mind that $12,000 per adult grants are not much, if you have nothing else.  One would still be below the federal poverty line if single (where the poverty line in 2019 is $12,490) or in a household with two adults and two or more children (where the poverty line, with two children, is $25,750).  On top of this, such households (like all households) will pay higher prices for at least some of what they purchase due to the new VAT.  So such households will clearly lose.

Furthermore, those poor or near-poor households who do decide to switch, thus giving up their eligibility for food stamps and other such programs, will see a net gain that is substantially less than $12,000 per adult.  The extent will depend on how much they receive now from those social programs.  Those who receive the most (up to $12,000 per adult), who are presumably also most likely to be the poorest among them, will gain the least.  This is not a structure that makes sense for a program that is purportedly designed to be of most benefit to the poorest.

For middle and higher-income households the net gain (or loss) from the program will depend on the full set of taxes that would be needed to fund the program.  One cannot say who will gain and who will lose until the structure of that full set of taxes is made clear.  This is of course not surprising, as one needs to keep in mind that this is a program of redistribution:  Funds will be raised (by taxes) that disproportionately affect certain groups, to be distributed then in the $12,000 grants.  Some will gain and some will lose, but overall the balance has to be zero.

One can also conclude that such a program, providing for a universal basic income with grants of $12,000 per adult, will necessarily be hugely expensive.  It would cost $3 trillion a year, which is 15% of GDP.  Funding it would require raising all federal tax and other revenue by 91% (excluding any offset from cuts in government social programs, which are in any case unlikely to amount to anything close to what Yang assumes).  Raising funds of such magnitude is completely unrealistic.  And yet despite such costs, a grant of $12,000 per adult would still leave those without a job or other source of support at a poverty level of income.

One could address this by scaling back the grant, from $12,000 to something substantially less, but then it becomes less meaningful to an individual.  The fundamental problem is the design as a universal grant, to all adults.  While this might be thought to be politically attractive, any such program then ends up being hugely expensive.

The alternative is to design a program that is specifically targeted to those who need such support.  Rather than attempting to hide the distributional consequences in a program that claims to be universal (but where certain groups will gain and certain groups will lose, once one takes fully into account how it will be funded), make explicit the redistribution that is being sought.  With this clear, one can then design a focussed program that addresses that redistribution aim.

Finally, one should recognize that there are other policies as well that might achieve those aims that may not require explicit government-intermediated redistribution.  For example, Senator Cory Booker in the October 15 debate noted that a $15 per hour minimum wage would provide more to those now at the minimum wage than a $12,000 annual grant.  This remark was not much noted, but what Senator Booker said was true.  The federal minimum wage is currently $7.25 per hour.  This is low – indeed, it is less (in real terms) than what it was when Harry Truman was president.  If the minimum wage were raised to $15 per hour, a worker now at the $7.25 rate would see an increase in income of $15.00 – $7.25 = $7.75 per hour, and over a year of 40 hour weeks would see an increase in income of $7.75 x 40 x 52 = $16,120.00.  This is well more than a $12,000 annual grant would provide.
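
The arithmetic behind Senator Booker’s point is straightforward, assuming a full-time, full-year worker (40 hours a week for 52 weeks):

```python
# The minimum wage arithmetic, assuming a full-time, full-year worker.
current_minimum = 7.25
proposed_minimum = 15.00
hours_per_year = 40 * 52     # 40 hours a week, 52 weeks

raise_per_hour = proposed_minimum - current_minimum   # $7.75
annual_gain = raise_per_hour * hours_per_year         # $16,120
print(f"Annual gain from the higher minimum wage: ${annual_gain:,.0f}")
print(f"Annual UBI grant, for comparison:         $12,000")
```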

Republican politicians have argued that raising the minimum wage by such a magnitude will lead to widespread unemployment.  But there is no evidence that changes in the minimum wage that we have periodically had in the past (whether federal or state level minimum wages) have had such an adverse effect.  There is of course certainly some limit to how much it can be raised, but one should recognize that the minimum wage would now be over $24 per hour if it had been allowed to grow at the same pace as labor productivity since the late 1960s.

Income inequality is a real problem in the US, and needs to be addressed.  But there are problems with Yang’s specific version of a universal basic income.  While one may be able to fix at least some of those problems and come up with something more reasonable, it would still be massively disruptive given the amounts to be raised.  And politically impossible.  A focus on more targeted programs, as well as on issues such as the minimum wage, is likely to prove far more productive.

The “Threat” of Job Losses is Nothing New and Not to be Feared: Issues Raised in the Democratic Debate

A.  Introduction

The televised debate held October 15 between twelve candidates for the Democratic presidential nomination covered a large number of issues.  Some were clear, but many were not.  The debate format does not allow for much explanation or nuance.  And while some of the positions taken reflected sound economics, others did not.

In a series of upcoming blog posts, starting with this one, I will review several of the issues raised, focussing on the economics and sometimes the simple arithmetic (which the candidates often got wrong).  And while the debate covered a broad range of issues, I will limit my attention here to the economic ones.

This post will look at the concern that was raised (initially in a question from one of the moderators) that the US will soon be facing a massive loss of jobs due to automation.  A figure of “a quarter of American jobs” was cited.  All the candidates basically agreed, and offered various solutions.  But there is a good deal of confusion over the issue, starting with the question of whether such job “losses” are unprecedented (they are not) and then in some of the solutions proposed.

A transcript of the debate can be found at the Washington Post website, which one can refer to for the precise wording of the questions and responses.  Unfortunately it does not provide pages or line numbers to refer to, but most of the economic issues were discussed in the first hour of the three hour debate.  Alternatively, one can watch the debate at the CNN.com website.  The discussion on job losses starts at the 32:30 minute mark of the first of the four videos CNN posted at its site.

B.  Job Losses and Productivity Growth

A topic on which there was apparently broad agreement across the candidates was that an unprecedented number of jobs will be “lost” in the US in the coming years due to automation, and that this is a horrifying prospect that needs to be addressed with urgency.  Erin Burnett, one of the moderators, introduced it, citing a study that she said concluded that “about a quarter of American jobs could be lost to automation in just the next 10 years”.  While the name of the study was not explicitly cited, it appears to be one issued by the Brookings Institution in January 2019, with Mark Muro as the principal author.  It received a good deal of attention when it came out, with the focus on its purported conclusion that there would be a loss of a quarter of US jobs by 2030 (see here, here, here, here, and/or here, for examples).

[Actually, the Brookings study did not say that.  Nor was its focus on the overall impact on the number of jobs due to automation.  Rather, its purpose was to look at how automation may differentially affect different geographic zones across the US (states and metropolitan areas), as well as different occupations, as jobs vary in their degree of exposure to possible automation.  Some jobs can be highly automated with technologies that already exist today, while others cannot.  And as the Brookings authors explain, they are applying geographically a methodology that had in fact been developed earlier by the McKinsey Global Institute, presented in reports issued in January 2017 and in December 2017.  The December 2017 report is most directly relevant, and found that 23% of “jobs” in the US (measured in terms of hours of work) may be automated by 2030 using technologies that have already been demonstrated as technically possible (although not necessarily financially worthwhile as yet).  And this would have been the total over a 14 year period starting from their base year of 2016.  This was for their “midpoint scenario”, and McKinsey properly stresses that there is a very high degree of uncertainty surrounding it.]

The candidates offered various answers on how to address this perceived crisis (which I will address below), but it is worth looking first at whether this is indeed a pending crisis.

The answer is no.  While the study cited said that perhaps a quarter of jobs could be “lost to automation” by 2030 (starting from their base year of 2016), such a pace of job loss is in fact not out of line with the norm.  It is not that much different from what has been happening in the US economy for the last 150 years, or longer.

Job losses “due to automation” is just another way of saying productivity has grown.  Fewer workers are needed to produce some given level of output, or equivalently, more output can be produced for a given number of workers.  As a simple example, suppose some factory produces 100 units of some product, and to start has 100 employees.  Output per employee is then 100/100, or a ratio of 1.0.  Suppose then that over a 14 year period, following automation of some of the tasks, the number of workers needed to produce those 100 units of output falls to just 75 (where that figure of 75 workers includes those who will now be maintaining and operating the new machines, as well as those workers in the economy as a whole who made the machines, scaled to account for the lifetime of the machines).  The productivity of the workers would then have grown to 100/75, or a ratio of 1.333.  Over a 14 year period, that implies growth in productivity of 2.1% a year.  More precisely, the McKinsey estimate was that 23% of jobs might be automated, in which case the increase in productivity would be to 100/77 = 1.30.  The growth rate over 14 years would then be 1.9% per annum.
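
The compound growth rate implied by this example can be computed directly: the annual rate is the 14th root of the overall productivity ratio, minus one.

```python
# Annual productivity growth implied by the factory example above.
years = 14
ratio_if_25pct_automated = 100 / 75   # a quarter of the jobs automated
ratio_if_23pct_automated = 100 / 77   # McKinsey's midpoint figure

for ratio in (ratio_if_25pct_automated, ratio_if_23pct_automated):
    annual_growth = ratio ** (1 / years) - 1
    print(f"Productivity ratio {ratio:.3f} over 14 years -> {annual_growth:.1%} a year")
# Prints roughly 2.1% and 1.9% a year, the figures cited in the text.
```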

Such an increase in productivity is not outside the norm for the US.  Indeed, it matches what the US has experienced over at least the last century and a half.  The chart at the top of this post shows how GDP per capita has grown since 1870.  The chart is plotted in logarithms, and those of you who remember your high school math will recall that a straight line in such a graph depicts a constant rate of growth.  An earlier version of this chart was originally prepared for a prior post on this blog (where one can find further discussion of its implications), and it has been updated here to reflect GDP growth in recent years (using BEA data, with the earlier data taken from the Maddison Project).

What is remarkable is how steady that rate of growth in GDP per capita has been since 1870.  One straight line fits it extraordinarily well for the entire period, with a growth rate of 1.9% a year (or 1.86% to be more precise).  And while the US is now falling below that long-term trend (since around 2008, from the onset of the economic collapse in the last year of the Bush administration), the deviation of recent years is not that much different from an earlier such deviation between the late 1940s and the mid-1960s.  It remains to be seen whether there will be a similar catch-up to the long-term trend in the coming years.
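
For those who want to see the point about logarithms concretely, the short sketch below generates a series growing at a constant rate (using the roughly 1.9% trend rate cited above) and shows that the increments in its logarithm are identical every year, which is exactly what a straight line on a log scale depicts.

```python
# Why constant growth appears as a straight line on a log scale: with growth
# at a constant rate g, log(GDP per capita) rises by the same increment every year.
import math

g = 0.0186        # the roughly 1.9% a year trend rate cited above
gdp = 1.0         # index the starting year to 1.0
logs = []
for year in range(5):
    logs.append(math.log(gdp))
    gdp *= 1 + g

increments = [round(later - earlier, 4) for earlier, later in zip(logs, logs[1:])]
print("Yearly increments in log(GDP per capita):", increments)
# All equal to log(1.0186), i.e. a straight line when plotted in logarithms.
```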

One might reasonably argue that GDP per capita is not quite productivity, which would be GDP per employee.  Over very long periods of time population and the number of workers in that population will tend to grow at a similar pace, but we could also look at GDP per employee:

This chart is based on data from the BEA, the agency which issues the official GDP accounts for the US, for both real GDP and the number of employees (in full time equivalent terms, so part-time workers are counted in proportion to the number of hours they work).  The figures unfortunately only go back to 1929, the oldest year for which the BEA has issued estimates.  Note also that the rise in GDP during World War II looks relatively modest here, but that is because measures of “real” GDP (even when carefully estimated using standard procedures) can deviate more and more as one goes back in time from the base year for prices (2012 here), particularly when coupled with major changes in the structure of production (such as during a major war).  But the BEA figures are the best available.

Once again one finds that the pace of productivity growth was remarkably stable over the period, with a growth rate here of 1.74% a year.  It was lower during the Great Depression years, but then recovered during World War II, and was then above the 1929 to 2018 trend from the early 1950s to 1980.  And the same straight line (meaning a constant growth rate) then fit extremely well from 1980 to 2010.

Since 2010 the growth in labor productivity has been more modest, averaging just 0.5% a year from 2010 to 2018.  An important question going forward is whether the path will return to the previous trend.  If it does, the implication is that there will be more job turnover for at least a temporary period.  If it does not, and productivity growth does not return to the path it has been on since 1929, the US as a whole will not be able to enjoy the growth in overall living standards the economy had made possible before.

The McKinsey numbers for what productivity growth might be going forward, of possibly 1.9% a year, are therefore not out of line with what the economy has actually experienced over the years.  It matches the pace as measured by GDP per capita, and while the 1.74% a year found for the last almost 90 years for the measure based on GDP per employee is a bit less, they are close.  And keep in mind that the McKinsey estimate (of 1.9% growth in productivity over 14 years) is of what might be possible, with a broad range of uncertainty over what will actually happen.

The estimate that “about” a quarter of jobs may be displaced by 2030 is therefore not out of line with what the US has experienced for perhaps a century and a half.  Such disruption is certainly still significant, and should be met with measures to assist workers to transition from jobs that have been automated away to the jobs then in need of more workers.  We have not, as a country, managed this very well in the past.  But the challenge is not new.

What will those new jobs be?  While there are needs that are clear to anyone now (as Bernie Sanders noted, which I will discuss below), most of the new jobs will likely be in fields that do not even exist right now.  A careful study by Daron Acemoglu (of MIT) and Pascual Restrepo (of Boston University), published in the American Economic Review in 2018, found that about 60% of the growth in net new jobs in the US between 1980 and 2015 (an increase of 52 million, from 90 million in 1980 to 142 million in 2015) was in occupations where the specific title of the job (as defined in surveys carried out by the Census Bureau) did not even exist in 1980.  And there was a similar share of new job titles over the shorter periods of 1990 to 2015 or 2000 to 2015.  There is no reason to expect this will not continue going forward.  Most new jobs are likely to be in positions that are not even defined at this point.

C.  What Would the Candidates Do?

I will not comment on all the answers provided by the candidates (some of which were indecipherable), but just a few.

Bernie Sanders provided perhaps the best response by saying there is much that needs to be done, requiring millions of workers, and if government were to proceed with the programs needed, there would be plenty of jobs.  He cited specifically the need to rebuild our infrastructure (which he rightly noted is collapsing, and where I would add is an embarrassment to anyone who has seen the infrastructure in other developed economies).  He said 15 million workers would be required for that.  He also cited the Green New Deal (requiring 20 million workers), as well as needs for childcare, for education, for medicine, and in other areas.

There certainly are such needs.  Whether we can organize and pay for such programs is of course critical and would need to be addressed.  But if they can be, there will certainly be millions of workers required.

Sanders was also asked by the moderator specifically about his federal jobs guarantee proposal (and indeed the jobs topic was introduced this way).  But such a policy proposal is more problematic, and separate from the issue of whether the economy will need so many workers.  It is not clear how such a jobs guarantee, provided by the federal government, would work.  The Sanders campaign website provides almost no detail.  But a number of questions need to be addressed.  To start, would such a program be viewed as a temporary backstop for a worker, to be used when he or she cannot find another reasonable job at a wage they would accept, or something permanent?  If permanent, one is really talking more of an expanded public sector, and that does not seem to be the intention of a jobs guarantee program.  But if a backstop, how would the wage be set?  If too high, no workers would want to leave and take a different job, and the program would not be a backstop.  And would all workers in such a program be paid the same, or different based on their skills?  Presumably one would pay an engineer working on the design of infrastructure projects more than someone with just a high school degree.  But how would these be determined?  Also, with a job guarantee, can someone be fired?  Suppose they often do not show up for work?

So there are a number of issues to address, and the answers are not clear.  But more fundamentally, if there is not a shortage of jobs but rather of workers (keep in mind that the unemployment rate is now at a 50 year low), why does one need such a guarantee?  It might be warranted (on a temporary basis) during an economic downturn, when unemployment is high, but why now, when unemployment is low?  [October 28 update:  The initial version of this post had an additional statement here saying that the federal government already had “something close to a job guarantee”, as you could always join the Army.  However, as a reader pointed out, while that once may have been true, it no longer is.  So that sentence has been deleted.]

Andrew Yang responded next, arguing for his proposal of a universal basic income that would provide every adult in the country with a grant of $1,000 per month, no questions asked.  There are many issues with such a proposal, which I will address in a subsequent blog post, but would note here that his basic argument for such a universal grant follows from his assertion that jobs will be scarce due to automation.  He repeatedly asserted in the debate that we have now entered into what has been referred to as the “Fourth Industrial Revolution”, where automation will take over most jobs and millions will be forced out of work.

But as noted above, what we have seen in the US over the last 150 years (at least) is not that much different from what is now forecast for the next few decades.  Automation will reduce the number of workers needed to produce some given amount, and productivity per worker will rise.  And while this will be disruptive and lead to a good deal of job displacement (important issues that certainly need to be addressed), the pace of this in the coming decades is not anticipated to be much different from what the country has seen over the last 150 years.

A universal basic income is fundamentally a program of redistribution, and given the high and growing degree of inequality in the US, a program of redistribution might well be warranted.  I will discuss this in a separate blog post.  But such a program is not needed to provide income to workers who will be losing jobs to automation, as there will be jobs if we follow the right macro policies.  And $12,000 a year would not nearly compensate for a lost job anyway.

Elizabeth Warren’s response to the jobs question was different.  She argued that jobs have been lost not due to automation, but due to poor international trade policies.  She said:  “the data show that we have had a lot of problems with losing jobs, but the principal reason has been bad trade policy.”

Actually, this is simply not true, and the data do not support it.  There have been careful studies of the issue, but it is easy enough to see in the numbers.  For example, in an earlier post on this blog from 2016, I examined what the impact would have been on the motor vehicle sector if the US had moved to zero net imports in the sector (i.e. limiting car imports to what the US exports, which is not very much).  Employment in the sector would then have been flat, rather than declining by 17%, between the years 1967 and 2014.  But this impact would have been dwarfed by the impact of productivity gains.  The output of the motor vehicle sector (in real terms) was 4.5 times as high in 2014 as it was in 1967.  If productivity had not grown, the sector would then have required 4.5 times as many workers.  But productivity did grow – by 5.4 times.  Hence the number of workers needed to produce the higher output actually went down, by the 17% observed.  Banning imports would have had almost no effect relative to this.
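
The arithmetic of that comparison is worth making explicit.  Using the figures just cited (output up 4.5 times and productivity up 5.4 times between 1967 and 2014), employment must have fallen by roughly the 17% observed:

```python
# Reconstruction of the motor vehicle example, using the figures cited above
# for 1967 to 2014.
output_growth = 4.5          # real output in 2014 relative to 1967
productivity_growth = 5.4    # output per worker in 2014 relative to 1967

employment_ratio = output_growth / productivity_growth
print(f"Employment in 2014 relative to 1967: {employment_ratio:.2f}")
print(f"Implied change in employment: {employment_ratio - 1:.0%}")
# Roughly -17%, the decline actually observed; eliminating net imports would
# have changed output, and hence employment, only modestly by comparison.
```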

D.  Summary and Conclusion

Automation is important, but is nothing new.  The Luddites destroyed factory machinery in the early 1800s in England due to a belief that the machines were taking away their jobs and that they would then be left with no prospects.  And data for the US going back to at least 1870 show that such job “destroying” processes have long been underway.  They are not accelerating now.  Indeed, over the past decade the pace has slowed (i.e. less job “destruction”).  But it is too soon to tell whether this deceleration is similar to fluctuations seen in the past, where there were occasional deviations but then always a return to the long-term path.

Looking forward, careful studies such as those carried out by McKinsey have estimated how many jobs may be exposed to automation (using technologies that we know already to be technically feasible).  While they emphasize that any such forecasts are subject to a great deal of uncertainty, McKinsey’s midpoint scenario estimates that perhaps 23% of jobs may be substituted away by automation between 2016 and 2030.  If so, such a pace (of 1.9% a year) would be similar to what productivity growth has been historically in the US.  There is nothing new here.

But while nothing new, that does not mean it should be ignored.  It will lead, just as it has in the past, to job displacement and disruption.  There is plenty of scope for government to assist workers in finding appropriate new jobs, and in obtaining training for them, but the US has historically never done this all that well.  Countries such as Germany have been far better at addressing such needs.

The candidate responses did not, however, address this (other than Andrew Yang saying government supported training programs in the US have not been effective).  While Bernie Sanders correctly noted there is no shortage of needs for which workers will be required, he has also proposed a jobs guarantee to be provided by the federal government.  Such a guarantee would be more problematic, with many questions not yet answered.  But it is also not clear why it would be needed in current circumstances anyway (with an economy at full employment).

Andrew Yang argued the opposite:  That the economy is facing a structural problem that will lead to mass unemployment due to automation, with a Fourth Industrial Revolution now underway that is unprecedented in US history.  But the figures show this not to be the case, with forecast prospects similar to what the US has faced in the past.  Thus the basis for his argument that we now need to do something fundamentally different (a universal basic income of $1,000 a month for every adult) falls away.  And I will address the $1,000 a month itself in a separate blog post.

Finally, Elizabeth Warren asserted that the problem stems primarily from poor international trade policy.  If we just had better trade policy, she said, there would be no jobs problem.  But this is also not borne out by the data.  Increased imports, even in the motor vehicle sector (which has long been viewed as one of the most exposed sectors to international trade), explains only a small fraction of why there are fewer workers needed in that sector now than was the case 50 years ago.  By far the more important reason is that workers in the sector are now far more productive.

End Gerrymandering by Focussing on the Process, Not on the Outcomes

A.  Introduction

There is little that is as destructive to a democracy as gerrymandering.  As has been noted by many, with gerrymandering the politicians are choosing their voters rather than the voters choosing their political representatives.

The diagrams above, in schematic form, show how gerrymandering works.  Suppose one has a state or region with 50 precincts, with 60% that are fully “blue” and 40% that are fully “red”, and where 5 districts need to be drawn.  If the blue party controls the process, they can draw the district lines as in the middle diagram, and win all 5 (100%) of the districts, with just 60% of the voters.  If, in contrast, the red party controls the process for some reason, they could draw the district boundaries as in the diagram on the right.  They would then win 3 of the 5 districts (60%) even though they only account for 40% of the voters.  It works by what is called in the business “packing and cracking”:  With the red party controlling the process, they “pack” as many blue voters as possible into a small number of districts (two in the example here, each with 90% blue voters), and then “crack” the rest by scattering them around in the remaining districts, each as a minority (three districts here, each with 40% blue voters and 60% red).
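
A small numeric version of that schematic may help make the point concrete.  The precinct counts per district below are illustrative assumptions that simply reproduce the 60/40 split described above.

```python
# A minimal numeric version of the packing-and-cracking schematic: 50
# precincts, 60% blue and 40% red, drawn into 5 districts of 10 precincts
# each.  The precinct counts per district are illustrative assumptions.
def districts_won_by_blue(plan):
    # Count the districts in which blue precincts outnumber red precincts.
    return sum(1 for d in plan if d["blue"] > d["red"])

# If the blue party draws the lines, every district can be made 60/40 blue:
blue_drawn_plan = [{"blue": 6, "red": 4} for _ in range(5)]

# If the red party draws the lines, it "packs" blue voters into two districts
# (90% blue) and "cracks" the rest across three districts (40% blue):
red_drawn_plan = [{"blue": 9, "red": 1}, {"blue": 9, "red": 1},
                  {"blue": 4, "red": 6}, {"blue": 4, "red": 6}, {"blue": 4, "red": 6}]

print("Blue-drawn map: blue wins", districts_won_by_blue(blue_drawn_plan), "of 5 districts")
print("Red-drawn map:  blue wins", districts_won_by_blue(red_drawn_plan), "of 5 districts")
```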

Gerrymandering leads to cynicism among voters, with the well-founded view that their votes just do not matter.  Possibly even worse, gerrymandering leads to increased polarization, as candidates in districts with lines drawn to be safe for one party or the other do not need to worry about seeking to appeal to voters of the opposite party.  Rather, their main concern is that a more extreme candidate from their own party will not challenge them in a primary, where only those of their own party (and normally mostly just the more extreme voters in their party) will vote.  And this is exactly what we have seen, especially since 2010 when gerrymandering became more sophisticated, widespread, and egregious than ever before.

Gerrymandering has grown in recent decades both because computing power and data sources have grown increasingly sophisticated, and because a higher share of states have had a single political party able to control the process in full (i.e. with both legislative chambers, and the governor when a part of the process, all under a single party’s control).  And especially following the 2010 elections, this has favored the Republicans.  As a result, while there has been one Democratic-controlled state (Maryland) on common lists of the states with the most egregious gerrymandering, most of the states with extreme gerrymandering were Republican-controlled.  Thus, for example, Professor Samuel Wang of Princeton, founder of the Princeton Gerrymandering Project, has identified a list of the eight most egregiously gerrymandered states (by a set of criteria he has helped develop), where one (Maryland) was Democratic-controlled, while the remaining seven were Republican.  Or the Washington Post calculated across all states an average of the degree of compactness of congressional districts:  Of the 15 states with the least compact districts, only two (Maryland and Illinois) were liberal Democratic-controlled states.  And in terms of the “efficiency gap” measure (which I will discuss below), seven states were gerrymandered following the 2010 elections in such a way as to yield two or more congressional seats each in their favor.  All seven were Republican-controlled.

With gerrymandering increasingly common and extreme, a number of cases have gone to the Supreme Court to try to stop it.  However, the Supreme Court has failed as yet to issue a definitive ruling ending the practice.  Rather, it has so far skirted the issue by resolving cases on more narrow grounds, or by sending cases back to lower courts for further consideration.  This may soon change, as the Supreme Court has agreed to take up two cases (affecting lines drawn for congressional districts in North Carolina and in Maryland), with oral arguments scheduled for March 26, 2019.  But it remains to be seen if these cases will lead to a definitive ruling on the practice of partisan gerrymandering or not.

This is not because of a lack of concern by the court.  Even conservative Justice Samuel Alito has conceded that “gerrymandering is distasteful”.  But he, along with the other conservative justices on the court, has ruled against the court taking a position on the gerrymandering cases brought before it, in part, at least, out of the concern that they do not have a clear standard by which to judge whether any particular case of gerrymandering was constitutionally excessive.  This goes back to a 2004 case (Vieth v. Jubelirer) in which the four most conservative justices of the time, led by Justice Antonin Scalia, opined that there could not be such a standard, while the four liberal justices argued that there could.  Justice Anthony Kennedy, in the middle, issued a concurring opinion agreeing with the conservative justices that there was not then an acceptable such standard before them, but adding that he would not preclude the possibility of such a standard being developed at some point in the future.

Following this 2004 decision, political scientists and other scholars have sought to come up with such a standard.  Many have been suggested, such as a set of three tests proposed by Professor Wang of Princeton, or measures that compare the share of seats won to the share of the votes cast, and more.  Probably most attention in recent years has been given to the “efficiency gap” measure proposed by Professor Nicholas Stephanopoulos and Eric McGhee.  The efficiency gap is the gap between the two main parties in the “wasted votes” each party received in some past election in the state (as a share of total votes in the state), where a party’s wasted votes are the sum of all the votes for its losing candidates, plus the votes in excess of 50% in the districts its candidates won.  This provides a direct measure of the two basic tactics of gerrymandering, as described above, of “packing” as many voters of one party as possible in a small number of districts (where they might receive 80 or 90% of the votes, but with all those above 50% “wasted”), and “cracking” (by splitting up the remaining voters of that party into a large number of districts where they will each be in a minority and hence will lose, with those votes then also “wasted”).
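
A small worked example may help make the measure concrete.  The district vote counts below are made up for illustration (each district has 100 votes), and the definition of wasted votes follows the one just given: all of a party’s votes in a district it loses, plus its votes in excess of 50% in a district it wins.

```python
# A worked example of the efficiency gap, using made-up vote counts in which
# party A has been "packed" into two districts and "cracked" in three.
districts = [  # (votes for party A, votes for party B), 100 votes per district
    (90, 10),
    (90, 10),
    (40, 60),
    (40, 60),
    (40, 60),
]

def wasted(votes_for, votes_against):
    # All of a party's votes in a district it loses, plus its votes in
    # excess of 50% in a district it wins.
    half = (votes_for + votes_against) / 2
    return votes_for - half if votes_for > votes_against else votes_for

wasted_a = sum(wasted(a, b) for a, b in districts)
wasted_b = sum(wasted(b, a) for a, b in districts)
total_votes = sum(a + b for a, b in districts)
efficiency_gap = (wasted_a - wasted_b) / total_votes

print(f"Wasted votes: party A = {wasted_a:.0f}, party B = {wasted_b:.0f}")
print(f"Efficiency gap: {efficiency_gap:.0%}, to the advantage of party B")
```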

But there are problems with each of these measures, including the widely touted efficiency gap measure.  It has often been the case in recent years, in our divided society, that like-minded voters live close to each other, and particular districts in the state then will, as a result, often see the winner of the district receive a very high share of the votes.  Thus, even with no overt gerrymandering, the efficiency gap as measured will appear large.  Furthermore, at the opposite end of this spectrum, the measure will be extremely sensitive if a few districts are close to 50/50.  A shift of just a few percentage points in the vote will then flip a district from one party to the other, and the losing party will see a big jump in its share of wasted votes (the roughly 49% it received in that district).

There is, however, a far more fundamental problem.  And that is that this is simply the wrong question to ask.  With all due respect to Justice Kennedy, and recognizing also that I am an economist and not a lawyer, I do not understand why the focus here is on the voting outcome, rather than on the process by which the district lines were drawn.  The voting outcome is not the standard by which other aspects of voter rights are judged.  Rather, the focus is on whether the process followed was fair and unbiased, with the outcome then whatever it is.

I would argue that the same should apply when district lines are drawn.  Was the process followed fair and unbiased?  The way to ensure that would be to remove the politicians from the process (both directly and indirectly), and to follow instead an automatic procedure by which district lines are drawn in accord with a small number of basic principles.

The next section below will first discuss the basic point that the focus when judging fairness and lack of bias should not be on whether we can come up with some measure based on the vote outcomes, but rather on whether the process that was followed to draw the district lines was fair and unbiased or not.  The section following will then discuss a particular process that illustrates how this could be done.  It would be automatic, and would produce a fair and unbiased drawing of voting district lines that meets the basic principles on which such a map should be based (districts of similar population, compactness, contiguity, and, to the extent consistent with these, respect for the boundaries of existing political jurisdictions such as counties or municipalities).  And while I believe this particular process would be a good one, I would not exclude that others are possible.  The important point is that the courts should require the states to follow some such process, and from the example presented we see that this is indeed feasible.  It is not an impossible task.

The penultimate section of the post will then discuss a few points that arise with any such system, and their implications, and end with a brief section summarizing the key points.

B.  A Fair Voting System Should Be Judged Based on the Process, Not on the Outcomes

Voting rights are fundamental in any democracy.  But in judging whether some aspect of the voting system is proper, we do not try to determine whether or not (by some defined specific measure) the resulting outcomes were improperly skewed.

Thus, for example, we take as a basic right that our ballot may be cast in secret.  No government official, nor anyone else for that matter, can insist on seeing how we voted.  Suppose that some state passed a law saying a government-appointed official will look over the shoulder of each of us as we vote, to determine whether we did it “right” or not.  We would expect the courts to strike this down, as an inappropriate process that contravenes our basic voting rights.  We would not expect the courts to say that they should look at the subsequent voting outcomes, and try to come up with some specific measure which would show, with certainty, whether the resulting outcomes were excessively influenced or not.  That would of course be absurd.

As another absurd example, suppose some state passed a law granting those registered in one of the major political parties access to more days of early voting than those registered in the other.  This would be explicitly partisan, and one would assume that the courts would not insist on limiting their assessment to an examination of the later voting outcomes to see whether, by some proposed measure, those outcomes were excessively affected.  The voting system, to be fair, should not lead to a partisan advantage for one party or the other.  But gerrymandering does precisely that.

Yet the courts have so far declined to issue a definitive ruling on partisan gerrymandering, asking instead whether there might be some measure to determine, in the voting outcomes, whether gerrymandering had led to an excessive partisan advantage for the party drawing the district lines.  And there have been open admissions by senior political figures that district borders were in fact drawn up to provide a partisan advantage.  Indeed, principals involved in the two cases now before the Supreme Court have openly said that partisan advantage was the objective.  In North Carolina, David Lewis, the Republican chair of the committee in the state legislature responsible for drawing up the district lines, said during the debate that “I think electing Republicans is better than electing Democrats. So I drew this map to help foster what I think is better for the country.”

And in the case of Maryland, the Democratic governor of the state in 2010 at the time the congressional district lines were drawn, Martin O’Malley, spoke out in 2018 in writing and in interviews openly acknowledging that he and the Democrats had drawn the district lines for partisan advantage.  But he also now said that this was wrong and that he hoped the Supreme Court would rule against what they had done.

But how to remove partisanship when district lines are drawn?  As long as politicians are directly involved, with their political futures (and those of their colleagues) dependent on the district lines, it is human nature that biases will enter.  And it does not matter whether the biases are conscious and openly expressed, or unconscious and denied.  Furthermore, although possibly diminished, such biases will still enter even with independent commissions drawing the district lines.  There will be some political process by which the commissioners are appointed, and those who are appointed, even if independent, will still be human and will have certain preferences.

The way to address this would rather be to define some automatic process which, given the data on where people live and the specific principles to follow, will be able to draw up district lines that are both fair (follow the stated principles) and unbiased (are not drawn up in order to provide partisan advantage to one party).  In the next section I will present a particular process that would do this.

C.  An Automatic Process to Draw District Lines that are Fair and Unbiased

The boundaries for fair and unbiased districts should be drawn in accord with the following set of principles (and no more):

a)  One Person – One Vote:  Each district should have a similar population;

b)  Contiguity:  Each district must be geographically contiguous.  That is, one continuous boundary line will encompass the entire district and nothing more;

c)  Compactness:  While remaining consistent with the above, districts should be as compact as possible under some specified measure of compactness.

And while not such a fundamental principle, a reasonable objective is also, to the extent possible consistent with the basic principles above, that the district boundaries drawn should follow the lines of existing political jurisdictions (such as of counties or municipalities).

There will still be a need for decisions to be made on the basic process to follow, and then on a number of the parameters and specific rules required for any such process.  Individual states will need to make such decisions, and can do so in accordance with their traditions and with what makes sense for their particular state.  But once these “rules of the game” are fully specified, there should then be a requirement that they remain locked in for a lengthy period (at least until after the next decennial redistricting has been completed), so that games cannot be played with the rules in order to bias a redistricting that may soon be coming up.  This will be discussed further below.

Such specific decisions will need to be made in order to fully define the application of the basic principles presented above.  To start, for the one person – one vote principle the Supreme Court has ruled that a 10% margin in population between the largest and smallest districts is an acceptable standard.  And many states have indeed chosen to follow this standard.  However, a state could, if it wished, choose to use a tighter standard, such as a margin in the populations between the largest and smallest districts of no more than 8%, or perhaps 5% or whatever.  A choice needs to be made.

Similarly, a specific measure of compactness will need to be specified.  Mathematically there are several different measures that could be used, but a good one which is both intuitive and relatively easy to apply is that the sum of the lengths of all the perimeters of each of the districts in the state should be minimized.  Note that since the outside borders of the state itself are fixed, this sum can be limited just to the perimeters that are internal to the state.  In essence, since states are to be divided up into component districts (and exhaustively so), the perimeter lines that do this with the shortest total length will lead to districts that are compact.  There will not be wavy lines, nor lines leading to elongated districts, as such lines will sum to a greater total length than possible alternatives.
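To illustrate, here is a minimal sketch (in Python) of how these two standards could be checked, under the simplifying assumption that a state is represented as a grid of equal-sized cells (census blocks, say) and each district as a set of those cells.  The function and variable names are mine, purely for illustration.

    # Minimal sketch: check the population margin and compute the compactness
    # measure (total internal perimeter), assuming each district is a set of
    # (row, col) grid cells of equal size.  All names here are illustrative.

    def within_population_margin(district_pops, max_margin=0.10):
        # One person - one vote: the gap between the largest and smallest
        # district populations, relative to the ideal (mean) district
        # population, must not exceed the chosen margin (10% here).
        ideal = sum(district_pops) / len(district_pops)
        return (max(district_pops) - min(district_pops)) / ideal <= max_margin

    def internal_perimeter(districts):
        # Compactness: total length of district boundaries internal to the
        # state, counted as the number of cell edges shared by two different
        # districts (each shared edge counted once).  Smaller is more compact.
        owner = {}
        for d, cells in enumerate(districts):
            for cell in cells:
                owner[cell] = d
        length = 0
        for (r, c), d in owner.items():
            for nbr in ((r + 1, c), (r, c + 1)):
                if nbr in owner and owner[nbr] != d:
                    length += 1
        return length

    # Toy example: a 4x4 "state" split vertically into two districts.
    left = {(r, c) for r in range(4) for c in range(2)}
    right = {(r, c) for r in range(4) for c in range(2, 4)}
    print(within_population_margin([8, 8]))   # True: equal populations
    print(internal_perimeter([left, right]))  # 4: one straight internal border

An automatic procedure would then search among feasible maps for one that minimizes this internal perimeter, subject to the population constraint.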

What, then, would be a specific process (or algorithm) which could be used to draw district lines?  I will recommend one here, which should work well and would be consistent with the basic principles for a fair and unbiased set of district boundaries.  But other processes are possible.  A state could choose some such alternative (but then should stick to it).  The important point is that one should define a fully specified, automatic, and neutral process to draw such district lines, rather than try to determine whether some set of lines, drawn based on the “judgment” of politicians or of others, was “excessively” gerrymandered based on the voting outcomes observed.

Finally, the example will be based on what would be done to draw congressional district lines in a state.  But one could follow a similar process for drawing other such district lines, such as for state legislative districts.

The process would follow a series of steps:

Step 1: The first step would be to define a set of sub-districts within each county in a state (parish in Louisiana) and municipality (in those states where municipalities hold similar governmental responsibilities as a county).  These sub-districts would likely be the districts for county boards or legislative councils in most of the states, and one might typically have a dozen or more of these in such jurisdictions.  When those districts are also being redrawn as part of the decennial redistricting process, then they should be drawn first (based on the principles set out here), before the congressional district lines are drawn.

Each state would define, as appropriate for the institutions of that specific state, the sub-districts that will be used for the purpose of drawing the congressional district lines.  And if no such sub-jurisdictions exist in certain counties of certain states, one could draw up such sub-districts, purely for the purposes of this redistricting exercise, by dividing such counties into compact, equal-population districts (with compactness again based on minimizing the sum of the perimeters).  While the number of such sub-districts would be defined (as part of the rules set for the process) based on the population of the affected counties, a reasonable number might generally be around 12 or 15.

These sub-districts will then be used in Step 4 below to even out the congressional districts.

Step 2:  An initial division of each state into a set of tentative congressional districts would then be drawn based on minimizing the sum of the lengths of the perimeter lines for all the districts, and requiring that all of the districts in the state have exactly the same population.  Following the 2010 census, the average population in a congressional district across the US was 710,767, but the exact number will vary by state depending on how many congressional seats the state was allocated.

Step 3: This first set of district lines will not, in general, follow county and municipal lines.  In this step 3, the initial set of district lines would then be shifted to the county or municipal line which is geographically closest to it (as defined by minimizing the geographic area that would be shifted in going to that county or city line, in comparison to whatever the alternative jurisdiction would be).  If the populations in the resulting congressional districts are then all within the 10% margin for the populations (or whatever percent margin is chosen by the state) between the largest and the smallest districts, then one is finished and the map is final.

Step 4:  But in general, there may be one or more districts where the resulting population exceeds or falls short of the 10% limit.  One would then make use of the political subdivisions of the counties and municipalities defined in Step 1 to bring them into line.  A specific set of rules for that process would need to be specified.  One such set would be to first determine which congressional district, as then drawn, deviated most from what the mean population should be for the districts in that state.  Suppose that district had too large of a population.  One would then shift one of the political subdivisions in that district from it to whichever adjacent congressional district had the least population (of all adjacent districts).  And the specific political subdivision shifted would then be the one which would have the least adverse impact on the measure of compactness (the sum of perimeter lengths).  Note that the impact on the compactness measure could indeed be positive (i.e. it could make the resulting congressional districts more compact), if the political subdivision eligible to be shifted were in a bend in the county or city line.

If the resulting congressional districts were all now within the 10% population margin (or whatever margin the state had chosen as its standard), one would be finished.  But if this is not the case, then one would repeat Step 4 over and over as necessary, each time for whatever district was then most out of line with the 10% margin.

That is it.  The result would be contiguous and relatively compact congressional districts, each with a similar population (within the 10% margin, or whatever margin is decided upon), and following borders of counties and municipalities or of political sub-divisions within those entities.
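To make the rebalancing in Step 4 concrete, here is a minimal sketch of the iteration in Python.  The data structures and the compactness_cost callback are hypothetical stand-ins of my own:  in a real implementation the cost would be computed from actual perimeter lengths, and the candidate sub-districts would also be restricted to those bordering the receiving district so that contiguity is preserved.

    # Minimal sketch of the Step 4 loop: shift sub-districts until all
    # districts fall within the allowed population margin.  Names are illustrative.

    def rebalance(district_subdivs, subdiv_pops, adjacent, compactness_cost,
                  max_margin=0.10, max_iters=1000):
        # district_subdivs: dict district -> set of sub-district ids
        # subdiv_pops:      dict sub-district id -> population
        # adjacent:         dict district -> set of adjacent districts
        # compactness_cost: function (subdiv, from_d, to_d) -> change in total
        #                   perimeter length if the sub-district is moved
        def pop(d):
            return sum(subdiv_pops[s] for s in district_subdivs[d])

        mean = sum(subdiv_pops.values()) / len(district_subdivs)
        for _ in range(max_iters):
            pops = {d: pop(d) for d in district_subdivs}
            if (max(pops.values()) - min(pops.values())) / mean <= max_margin:
                return district_subdivs              # within the margin: done
            # Adjust first the district deviating most from the mean population.
            worst = max(pops, key=lambda d: abs(pops[d] - mean))
            if pops[worst] > mean:
                # Too large: move a sub-district to the least populous neighbor.
                src, dst = worst, min(adjacent[worst], key=lambda d: pops[d])
            else:
                # Too small (the symmetric case, not spelled out in the text):
                # pull a sub-district in from the most populous neighbor.
                src, dst = max(adjacent[worst], key=lambda d: pops[d]), worst
            # Of the eligible sub-districts, move the one that harms the
            # compactness measure least (the cost may even be negative).
            best = min(district_subdivs[src],
                       key=lambda s: compactness_cost(s, src, dst))
            district_subdivs[src].remove(best)
            district_subdivs[dst].add(best)
        raise RuntimeError("no feasible map found within max_iters")

Once a state has fixed the margin, the compactness measure, and the tie-breaking rules, a routine of this sort is fully deterministic:  run twice on the same census data, it produces the same map.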

This would of course all be done on a computer, and it can be once the rules and parameters are all decided, as there will then no longer be a role for opinion nor an opportunity for political bias to enter.  And while the initial data entry will be significant (one would need the populations and perimeter lengths of each of the political subdivisions, and of the counties and municipalities they add up to), such data are now available from standard sources.  Indeed, the data entry needed would be far less than what is typically required for the computer programs used by our politicians to draw up their gerrymandered maps.

D.  Further Remarks

A few more points:

a)  The Redistricting Process, Once Decided, Should be Locked In for a Long Period:  As was discussed above, states will need to make a series of decisions to define fully the specific process they choose to follow.  As illustrated in the case discussed above, states will need to decide on matters such as the maximum margin of population between the largest and smallest districts (no more than 10%, by Supreme Court decision, but it could be less).  And rules will need to be set on, also as in the case discussed above, what measure of compactness to use, the criterion for which district should be adjusted first by shifting a sub-district to even out the population differences, and so on.

Such decisions will have an impact on the final districts arrived at.  And some of those districts will favor Republicans and some will favor Democrats, purely by chance.  There would then be a problem if the redistricting were controlled by one party in the state, and that party (through consultants who specialize in this) tried out dozens if not hundreds of possible choices of the parameters to see which would turn out to be most advantageous to it.  While the impact would be far less than what we have now with deliberate gerrymandering, there could still be some effect.

To stem this, one should require that once choices are made on the process to follow and on the rules and other parameters needed to implement it, that process cannot then be changed for the immediately upcoming decennial redistricting.  Any changes would apply only to later redistrictings.  While this would not be possible for the very first application of the system, there will likely be a good deal of public attention paid to these issues initially, so an attempt to bias the system at that point would be difficult.

As noted, this is not likely to be a major problem, and any such system will not introduce the major biases we have seen in the deliberately gerrymandered maps of numerous states following the 2010 census.  But by locking in any decisions made for a long period, where any random bias in favor of one party in a map might well be reversed following the next census, there will be less of a possibility to game the system by changing the rules, just before a redistricting is due, to favor one party.

b)  Independent Commissions Do Not Suffice  – They Still Need to Decide How to Draw the District Maps:  A reform that has been increasingly advocated by many in recent years is to take the redistricting process out of the hands of the politicians, and instead to appoint independent commissions to draw up the maps.  There are seven states currently with non-partisan or bipartisan, nominally independent, commissions that draw the lines for both congressional and state legislative districts, and a further six that do this for state legislative districts only.  Furthermore, several additional states will use such commissions starting with the redistricting that follows the 2020 census.  Finally, there is Iowa.  While technically not an independent commission, district lines in Iowa are drawn up by non-partisan legislative staff, with the state legislature then approving or rejecting them on a straight up-or-down vote.  If not approved, the process starts over, and if not approved after three votes it goes to the Iowa Supreme Court.

While certainly a step in the right direction, a problem with such independent commissions is that the process by which members are appointed can be highly politicized.  And even if not overtly politicized, the members appointed will have personal views on who they favor, and it is difficult even with the best of intentions to ensure such views do not enter.

But more fundamentally, even a well-intentioned independent commission will need to make choices on what is, and what is not, a “good” district map.  While most states list certain objectives for the redistricting process in either their state constitutions or in legislation, these are typically vague, such as saying the maps should try to preserve “communities of interest”, but with no clarity on what this in practice means.  Thirty-eight states also call for “compactness”, but few specify what that really means.  Indeed, only two states (Colorado and Iowa) define a specific measure of compactness.  Both states say that compactness should be measured by the sum of the perimeter lines being minimized (the same measure I used in the process discussed above).  However, in the case of Iowa this is taken along with a second measure of compactness (the absolute value of the difference between the length and the width of a district), and it is not clear how these two criteria are to be judged against each other when they differ.  Furthermore, in all states, including Colorado and Iowa, the compactness objective is just one of many objectives, and how to judge tradeoffs between the diverse objectives is not specified.

Even a well-intentioned independent commission will need to have clear criteria to judge what is a good map and what is not.  But once these criteria are fully specified, there is then no need for further opinion to enter, and hence no need for an independent commission.

c)  Appropriate and Inappropriate Principles to Follow: As discussed above, the basic principles that should be followed are:  1) One person – One vote, 2) Contiguity, and 3) Compactness.  Plus, to the extent possible consistent with this, the lines of existing political jurisdictions of a state (such as counties and municipalities) should be respected.

But while most states do call for this (with one person – one vote required by Supreme Court decision, but decided only in 1964), they also call for their district maps to abide by a number of other objectives.  Examples include the preservation of “communities of interest”, as discussed above, where 21 states call for this for their state legislative districts and 16 for their congressional districts (where one should note that congressional districting is not relevant in 7 states as they have only one member of Congress).  Further examples of what are “required” or “allowed” to be considered include preservation of political subdivision lines (45 states); preservation of “district cores” (8 states); and protection of incumbents (8 states).  Interestingly, 10 states explicitly prohibit consideration of the protection of incumbents.  And various states include other factors to consider or not consider as well.

But many, indeed most, of these considerations are left vague.  What does it mean that “communities of interest” are to be preserved where possible?  Who defines what the relevant communities are?  What is the district “core” that is to be preserved?  And as discussed above, there is a similar issue with the stated objective of “compactness”, as while 38 states call for it, only Colorado and Iowa are clear on how it is defined (but then vague on what trade-offs are to be accepted against other objectives).

The result of such multiple objectives, mostly vaguely defined and with no guidance on trade-offs, is that it is easy to come up with the heavily gerrymandered maps we have seen and the resulting strong bias in favor of one political party over the other.  Any district can be rationalized in terms of at least one of the vague objectives (such as preserving a “community of interest”).  These are loopholes which allow the politicians to draw maps favorable to themselves, and should be eliminated.

d)  Racial Preferences: The US has a long history of using gerrymandering (as well as other measures) to effectively disenfranchise minority groups, in particular African-Americans.  This has been especially the case in the American South, under the Jim Crow laws that were in effect through to the 1960s.  The Voting Rights Act of 1965 aimed to change this.  It required states (in particular under amendments to Section 2 passed in 1982 when the Act was reauthorized) to ensure minority groups would be able to have an effective voice in their choice of political representatives, including, under certain circumstances, through the creation of congressional and other legislative districts where the previously disenfranchised minority group would be in the majority (“majority-minority districts”).

However, it has not worked out that way.  Indeed, the creation of majority-minority districts, with African-Americans packed into as small a number of districts as possible and with the rest then scattered across a large number of remaining districts, is precisely what one would do under classic gerrymandering (packing and cracking) designed to limit, not enable, the political influence of such groups.  With the passage of these amendments to the Voting Rights Act in 1982, and then a Supreme Court decision in 1986 which upheld this (Thornburg v. Gingles), Republicans realized in the redistricting following the 1990 census that they could then, in those states where they controlled the process, use this as a means to gerrymander districts to their political advantage.  Newt Gingrich, in particular, encouraged this strategy, and the resulting Republican gains in the South in 1992 and 1994 were an important factor in leading to the Republican take-over of the Congress following the 1994 elections (for the first time in 40 years), with Gingrich then becoming the House Speaker.

Note also that while the Supreme Court, in a 5-4 decision in 2013, essentially gutted a key section of the Voting Rights Act, the section they declared to be unconstitutional was Section 5.  This was the section that required pre-approval by federal authorities of changes in voting statutes in those jurisdictions of the country (mostly the states of the South) with a history of discrimination as defined in the statute.  Left in place was Section 2 of the Voting Rights Act, the section under which the gerrymandering of districts on racial lines has been justified.  It is perhaps not surprising that Republicans have welcomed keeping this Section 2 while protesting Section 5.

One should also recognize that this racial gerrymandering of districts in the South has not led to most African-Americans in the region being represented in Congress by African-Americans.  One can calculate from the raw data (reported here in Ballotpedia, based on US Census data) that, as of 2015, 12 of the 71 congressional districts in the core South (Louisiana, Mississippi, Alabama, Georgia, South Carolina, North Carolina, Virginia, and Tennessee) had a majority of African-American residents.  There was just one such district in each of these states, other than two in North Carolina and four in Georgia.  But the majority of African-Americans in those states did not live in those twelve districts.  Of the 13.2 million African-Americans in those eight states, just 5.0 million lived in those twelve districts, while 8.2 million were scattered around the remaining districts.  By packing as many African-Americans as possible into a small number of districts, the Republican legislators were able to create a large number of safe districts for their own party, and the African-Americans scattered across those safe districts effectively had little say in who was then elected.

The Voting Rights Act was an important step forward, drafted in reaction to the Jim Crow laws that had effectively undermined the right of African-Americans to vote.  And relative to the Jim Crow system, it was progress.  However, relative to a system that draws up district lines in a fair and unbiased manner, it would be a step backwards.  A system where minorities are packed into a small number of districts, with the rest then scattered across most of the remaining districts, is just standard gerrymandering, designed to minimize, not to ensure, the political rights of the minority groups.

E.  Conclusion

Politicians drawing district lines to favor one party and to ensure their own re-election fundamentally undermines democracy.  Supreme Court justices have themselves called it “distasteful”.  However, to address gerrymandering the court has sought some measure which could be used to ascertain whether the resulting voting outcomes were biased to a degree that could be considered unconstitutional.

But this is not the right question.  One does not judge whether other aspects of the voting process are fair by asking whether the resulting outcomes were, by some measure, “excessively” affected.  It is not clear why such an approach, focused on vote outcomes, should apply to gerrymandering.  Rather, the focus should be on whether the process followed was fair and unbiased.

And one can certainly define a fair and unbiased process to draw district lines.  The key is that the process, once established, should be automatic and follow the agreed set of basic principles that define what the districts should be – that they should be of similar population, compact, contiguous, and where possible and consistent with these principles, follow the lines of existing political jurisdictions.

One such process was outlined above.  But there are other possibilities.  The key is that the courts should require, in the name of ensuring a fair vote, that states must decide on some such process and implement it.  And the citizenry should demand the same.

Impact of the 1994 Assault Weapons Ban on Mass Shootings: An Update, Plus What To Do For a Meaningful Reform

A.  Introduction

An earlier post on this blog (from January 2013, following the horrific shooting at Sandy Hook Elementary School in Connecticut), looked at the impact of the 1994 Federal Assault Weapons Ban on the number of (and number of deaths from) mass shootings during the 10-year period the law was in effect.  The data at that point only went through 2012, and with that limited time period one could not draw strong conclusions as to whether the assault weapons ban (with the law as written and implemented) had a major effect.  There were fewer mass shootings over most of the years in that 1994 to 2004 period, but 1998 and 1999 were notable exceptions.

There has now been another horrific shooting at a school – this time at Marjory Stoneman Douglas High School in Parkland, Florida.  There are once again calls to limit access to the military-style semiautomatic assault weapons that have been used in most of these mass shootings (including the ones at Sandy Hook and Stoneman Douglas).  And essentially nothing positive had been done following the Sandy Hook shootings.  Indeed, a number of states passed laws which made such weapons even more readily available than before.  And rather than limiting access to such weapons, the NRA response following Sandy Hook was that armed guards should be posted at every school.  There are, indeed, now more armed guards at our schools.  Yet an armed guard at Stoneman Douglas did not prevent this tragedy.

With the passage of time, we now have five more years of data than we had at the time of the Sandy Hook shooting.  With this additional data, can we now determine with more confidence whether the Assault Weapons Ban had an impact, with fewer shooting incidents and fewer deaths from such shootings?

This post will look at this.  With the additional five years of data, it now appears clear that the 1994 to 2004 period did represent a change in the sadly rising trend, with a reduction most clearly in the number of fatalities from and total victims of those mass shootings.  This was true even though the 1994 Assault Weapons Ban was a decidedly weak law, with a number of loopholes that allowed continued access to such weapons for those who wished to obtain them.  Any new law should address those loopholes, and I will discuss at the end of this post a few such measures so that such a ban would be more meaningful.

B.  The Number of Mass Shootings by Year

The Federal Assault Weapons Ban (formally the “Public Safety and Recreational Firearms Use Protection Act”, and part of a broader crime control bill) was passed by Congress and signed into law on September 13, 1994.  The Act banned the sale of any newly manufactured or imported “semiautomatic assault weapon” (as defined by the Act), as well as of newly manufactured or imported large capacity magazines (holding more than 10 rounds of ammunition).  The Act had a sunset provision where it would be in effect for ten years, after which it could be modified or extended.

However, it was a weak ban, with many loopholes.  First of all, there was a grandfather clause that allowed manufacturers and others to sell all of their existing inventory.  Not surprisingly, manufacturers scaled up production sharply while the ban was being debated, as those inventories could later be sold, and they were.  Second, and related to this, there was no constraint on shops or individuals selling weapons that had been manufactured before the start date, provided only that they were legally owned at the time the law went into effect.  Third, “semiautomatic assault weapons” (which included handguns and certain shotguns, in addition to rifles such as the AR-15) were defined quite precisely in the Act.  But with that precision, gun manufacturers could make what were essentially cosmetic changes, with the new weapons then not subject to the Act.  And fourth, with the sunset provision after 10 years (i.e. to September 12, 2004), the Republican-controlled Congress of 2004 (and President George W. Bush) could simply allow the Act to expire, with nothing done to replace it.  And they did.

The ban was therefore weak.  But it is still of interest to see whether even such a weak law might have had an impact on the number of, and severity of, mass shootings during the period it was in effect.

The data used for this analysis were assembled by Mother Jones, the investigative newsmagazine and website.  The data are available for download in spreadsheet form, and constitute the most thorough and comprehensive such dataset publicly available.  Sadly, the US government has not assembled and made available anything similar.  A line in the Mother Jones spreadsheet is provided for each mass shooting incident in the US since 1982, with copious information on each incident (as could be gathered from contemporaneous news reports), including the weapons used when reported.  I would encourage readers to browse through the spreadsheet to get a sense of mass shootings in America, the details of which are all too often soon forgotten.  My analysis here is based on various calculations one can then derive from this raw data.

This dataset (through 2012) was used in my earlier blog post on the impact of the Assault Weapons Ban, and has now been updated with shootings through February 2018 (as I write this).  To be included, a mass shooting incident was defined by Mother Jones as a shooting in a public location in which at least four people were killed (other than the killer himself, if he also died; and note that it is almost always a he).  The definition thus excludes incidents in a private home (which are normally domestic violence incidents) as well as shootings in the context of a conventional crime (such as an armed robbery, or gang violence).  While other possible definitions of what constitutes a “mass shooting” could be used, Mother Jones argues (and I would agree) that this definition captures well what most people would consider a “mass shooting”.  It covers only a small subset of all those killed by guns each year, but it is a particularly horrific set.

There was, however, one modification in the updated file, which I adjusted for.  Up through 2012, the definition was as above and included all incidents where four or more people died (other than the killer).  In 2013, the federal government started to refer to mass shootings as those events where three or more people were killed (other than the killer), and Mother Jones adopted this new criterion for the mass shootings it recorded for 2013 and later.  But this added a number of incidents that would not have been included under the earlier criterion (of four or more killed), and would bias any analysis of the trend.  Hence I excluded those cases in the charts shown here.  Including incidents with exactly three killed would have added no additional cases in 2013, but one additional in 2014, three additional in 2015, two additional in 2016, and six additional in 2017 (and none through end-February in 2018).  There would have been a total of 36 additional fatalities (three for each of the 12 additional cases), and 80 additional victims (killed and wounded).
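As an illustration of this adjustment, the following sketch filters the downloaded spreadsheet back to the original four-or-more-killed criterion.  The file name and column names (“year”, “fatalities”, “injured”) are hypothetical stand-ins, since the actual spreadsheet's headers may differ.

    # Minimal sketch: restrict the Mother Jones data to incidents with four or
    # more killed, so the series is comparable across the full 1982-2018 period.
    # The file name and column names are assumed for illustration only.
    import pandas as pd

    df = pd.read_csv("mother_jones_mass_shootings.csv")
    df4 = df[df["fatalities"] >= 4].copy()
    df4["total_victims"] = df4["fatalities"] + df4["injured"]

    # Incidents, fatalities, and total victims per year.
    by_year = df4.groupby("year").agg(
        incidents=("fatalities", "size"),
        fatalities=("fatalities", "sum"),
        total_victims=("total_victims", "sum"),
    )
    print(by_year)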

What, then, was the impact of the assault weapons ban?  We will first look at this graphically, as trends are often best seen by eye, and then take a look at some of the numbers, as they can provide better precision.

The chart at the top of this post shows the number of mass shooting events each year from 1982 through 2017, plus the events so far in 2018 (through end-February).  The numbers were low through the 1980s (zero, one, or two a year), but then rose.  The number of incidents per year was then generally lower during the period the Assault Weapons Ban was in effect, but with the notable exceptions of 1998 (three incidents) and especially 1999 (five).  The Columbine High School shooting was in 1999, when 13 died and 24 were wounded.

The number of mass shootings then rose in the years after the ban was allowed to expire.  This was not yet fully clear when one only had data through 2012, but the more recent data shows that the trend is, sadly, clearly upward.  The data suggest that the number of mass shooting incidents were low in the 1980s but then began to rise in the early 1990s; that there was then some fallback during the decade the Assault Weapons Ban was in effect (with 1998 and 1999 as exceptions); but with the lifting of the ban the number of mass shooting incidents began to grow again.  (For those statistically minded, a simple linear regression for the full 1982 to 2017 period indicates an upward trend with a t-statistic of a highly significant 4.6 – any t-statistic of greater than 2.0 is generally taken to be statistically significant.)
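For readers who want to reproduce that calculation, here is a minimal sketch of the trend regression.  It takes a mapping from year to the number of incidents (as computed from the adjusted data) and returns the OLS slope and its t-statistic.

    # Minimal sketch: OLS regression of annual incident counts on the year,
    # returning the slope and its t-statistic (slope / standard error).
    import numpy as np

    def trend_t_statistic(counts_by_year):
        years = np.array(sorted(counts_by_year), dtype=float)
        y = np.array([counts_by_year[yr] for yr in sorted(counts_by_year)], dtype=float)
        x = years - years.mean()                   # center the regressor
        slope = (x * (y - y.mean())).sum() / (x ** 2).sum()
        resid = (y - y.mean()) - slope * x         # residuals around the fitted line
        se = np.sqrt((resid ** 2).sum() / (len(y) - 2) / (x ** 2).sum())
        return slope, slope / se

    # Usage: pass a dict mapping each year (1982 through 2017) to its incident
    # count; a t-statistic above about 2.0 indicates a statistically significant trend.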

C.  The Number of Fatalities and Number of Victims in Mass Shooting Incidents 

These trends are even more clear when one examines what happened to the total number of those killed each year, and the total number of victims (killed and wounded).

First, a chart of fatalities from mass shootings over time shows:

[Chart:  Number of fatalities in mass shootings per year, 1982 to 2018]

Fatalities fluctuated within a relatively narrow band prior to 1994, but then, with the notable exception of 1999, fell while the Assault Weapons Ban was in effect.  And they rose sharply after the ban was allowed to expire.  There is still a great deal of year to year variation, but the increase over the last decade is clear.

And for the total number of victims:

[Chart:  Total number of victims (killed and wounded) in mass shootings per year, 1982 to 2018]

One again sees a significant reduction during the period the Assault Weapons Ban was in effect (with again the notable exception of 1999, and now 1998 as well).  The number of victims then rose in most years following the end of the ban, and went off the charts in 2017.  This was due largely to the Las Vegas shooting in October, 2017, where there were 604 victims of the shooter.  But even excluding the Las Vegas case, there were still 77 victims in mass shooting events in 2017, more than in any year prior to 2007 (other than 1999).

D.  The Results in Tables

One can also calculate the averages per year for the pre-ban period (13 years, from 1982 to 1994), the period of the ban (September 1994 to September 2004), and then for the post-ban period (again 13 years, from 2005 to 2017):

Number of Mass Shootings and Their Victims – Averages per Year

   Avg per Year   Shootings   Fatalities   Injured   Total Victims
   1982-1994         1.5         12.4        14.2         26.6
   1995-2004         1.5          9.6        10.1         19.7
   2005-2017         3.8         38.6        71.5        110.2

Note:  One shooting in December 2004 (following the lifting of the Assault Weapons Ban in September 2004) is combined here with the 2005 numbers.  And the single shooting in 1994 was in June, before the ban went into effect in September.
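A minimal sketch of how the per-year averages in the table above can be computed from the incident-level data follows.  It continues the hypothetical file and column names used in the earlier sketch, and for simplicity it ignores the month-level adjustments described in the note (the June 1994 and December 2004 incidents).

    # Minimal sketch: average shootings, fatalities, injured, and total victims
    # per year for the pre-ban, ban, and post-ban periods.  File and column
    # names are assumed; the month-level adjustments in the note are ignored.
    import pandas as pd

    df4 = pd.read_csv("mother_jones_mass_shootings.csv").query("fatalities >= 4").copy()
    df4["total_victims"] = df4["fatalities"] + df4["injured"]
    df4 = df4[df4["year"] <= 2017]                 # drop the partial 2018 data

    def period(year):
        if year <= 1994:
            return "1982-1994"
        if year <= 2004:
            return "1995-2004"
        return "2005-2017"

    df4["period"] = df4["year"].map(period)
    years_in_period = pd.Series({"1982-1994": 13, "1995-2004": 10, "2005-2017": 13})

    totals = df4.groupby("period")[["fatalities", "injured", "total_victims"]].sum()
    totals["shootings"] = df4.groupby("period").size()
    print(totals.div(years_in_period, axis=0))     # averages per year, as in the table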

The average number of fatalities per year, as well as the number injured and hence the total number of victims, all fell during the period of the ban.  They all then jumped sharply once the ban was lifted.  While one should acknowledge that these are all correlations in time, where much else was also going on, these results are consistent with the ban having a positive effect on reducing the number killed or wounded in such mass shootings.

The number of mass shooting events per year also stabilized during the period the ban was in effect (at an average of 1.5 per year).  That is, while the number of mass shooting events per year was the same as before, their lethality was less.  And the number of events did level off; indeed, it fell back if one compares the ban period to the immediately preceding half-decade rather than to the full previous 13-year period, as the number had been on a rising trend before.  The number of mass shootings then jumped sharply after the ban was lifted.

The data also allow us to calculate the average number of victims per mass shooting event, broken down by the type of weapon used:

Average Number of Victims per Mass Shooting, by Weapon Used

   Weapon(s) Used                                     Number of Shootings   Fatalities   Injured   Total Victims
   Semiauto Rifle Used                                         26              13.0        34.6        47.6
   Semiauto Rifle Not Used                                     59               7.5         5.6        13.1
   Semiauto Handgun Used                                       63              10.0        17.5        27.5
   Semiauto Handgun (but Not Semiautomatic Rifle) Used         48               7.7         6.0        13.7
   No Semiauto Weapon Used                                     11               6.6         4.0        10.6

There were 26 cases where the dataset Mother Jones assembled allowed one to identify specifically that a semiautomatic rifle was used.  (Some news reports were not clear, saying only that a rifle was used.  Such cases were not counted here.)  This was out of a total of 85 mass shooting events where four or more were killed.  But the use of semiautomatic rifles proved to be especially deadly.  On average, there were 13 fatalities per mass shooting when one could positively identify that a semiautomatic rifle was used, versus 7.5 per shooting when it was not.  And there were close to 48 total victims per mass shooting on average when a semiautomatic rifle was used, versus 13 per shooting when it was not.

The figures when a semiautomatic handgun was used (from what could be identified in the news reports) are very roughly about half-way between these two.  But note that there is a great deal of overlap between mass shootings where a semiautomatic handgun was used and where a semiautomatic rifle was also used.  Mass shooters typically take multiple weapons with them as they plan out their attacks, including semiautomatic handguns along with their semiautomatic rifles.  The fourth line in the table shows the figures when a semiautomatic handgun was used but not also a semiautomatic rifle.  These figures are similar to the averages in all of the cases where a semiautomatic rifle was not used (the second line in the table).

The fewest fatalities and injuries, however, occur when no semiautomatic weapon is used at all.  Unfortunately, in only 11 of the 85 mass shootings (13%) was neither a semiautomatic rifle nor a semiautomatic handgun used.  And these 11 might include a few cases where the news reports did not permit a positive identification that a semiautomatic weapon had been used.

E.  What Would Be Needed for a Meaningful Ban

It thus appears that the 1994 Assault Weapons Ban, as weak as it was, had a positive effect on saving lives.  But as noted before, it was flawed, with a number of loopholes which meant that the “ban” was far from a true ban.  Some of these might have been oversights, or issues only learned with experience, but I suspect most reflected compromises that were necessary to get anything approved by Congress.  That will certainly remain an issue.

But if one were to draft a law addressing these issues, what are some of the measures one would include?  I will make a few suggestions here, but this list should not be viewed as anything close to comprehensive:

a)  First, there should not be a 10-year (or any period) sunset provision.  A future Congress could amend the law, or even revoke it, as with any legislation, but this would then require specific action by that future Congress.  But with a sunset provision, it is easy simply to do nothing, as the Republican-controlled Congress did in 2004.

b)  Second, with hindsight one can see that the 1994 law made a mistake by defining precisely what was considered a “semiautomatic assault weapon”.  This made it possible for manufacturers later to make what were essentially cosmetic changes to the weapons, and then make and sell them.  Rather, a semiautomatic weapon should be defined in any such law by its essential feature, which is that one can fire such a weapon repeatedly simply by pulling the trigger once for each shot, with the weapon reloading itself.

c)  Third, fully automatic weapons (those which fire continuously as long as the trigger is pulled) have been banned since 1986 (if manufactured after May 19, 1986, the day President Reagan signed this into law).  But “bump stocks” have not been banned.  Bump stocks are devices that effectively convert a semiautomatic weapon into a fully automatic one.  Following the horrific shooting in Las Vegas on October 1, 2017, in which 58 were killed and 546 injured, and where the shooter used a bump stock to convert his semiautomatic rifles (he had many) into what were effectively fully automatic weapons, there have been calls for bump stocks to be banned.  This should be done, and indeed it is now being recognized that a change in existing law is not even necessary.  Attorney General Jeff Sessions said on February 27 that the Department of Justice is re-examining the issue, and implied that there would “soon” be an announcement by the department of regulations that recognize that a semiautomatic weapon equipped with a bump stock meets the definition of a fully automatic weapon.

d)  Fourth, a major problem with the 1994 Assault Weapons Ban, as drafted, was that it banned only the sale of newly manufactured (or imported) semiautomatic weapons from the date the act was signed into law (September 13, 1994).  Manufacturers and shops could legally sell any such weapons produced before then.  Not surprisingly, manufacturers ramped up production (and imports) sharply in the months the Act was being debated in Congress, which then provided an ample supply for a substantial period after the law technically went into effect.

But one could set an earlier date of effectiveness, with the ban covering weapons manufactured or imported from that earlier date.  This is commonly done in tax law.  That is, tax laws being debated during some year will often be made effective for transactions starting from the beginning of the year, or from when the new laws were first proposed, so as not to induce negative actions designed to circumvent the purpose of the new law.

e)  Fifth, the 1994 Assault Weapons Ban allowed the sale to the public of any weapons legally owned before the law went into effect on September 13, 1994 (including all those in inventory).  This is related to, but different from, the issue discussed immediately above.  The issue here is that all such weapons, including those manufactured many years before, could then be sold and resold for as long as those weapons existed.  This could continue for decades.  And with millions of such weapons now in the US, it would be many decades before the supply of such weapons would be effectively reduced.

To accelerate this, one could instead create a government-funded program to purchase (and then destroy) any such weapons that the seller wished to dispose of.  And one would couple this with a ban on the sale of any such weapons to anyone other than the government.  There could be no valid legal objection to this, as any sales would be voluntary (although I have no doubt the NRA would object), and it would be consistent with the ban on the sale of any newly manufactured semiautomatic weapon.  The government would also buy the weapons at a generous price, say the original price paid for the weapon (or the list price it then had), without any reduction for depreciation.

Semiautomatic weapons are expensive.  An assault rifle such as the AR-15 can easily cost $1,000.  And one would expect that as those with such weapons in their households grow older and more mature over time, many will recognize that such a weapon does not provide security.  Rather, numerous studies have shown (see, for example, here, here, here, and here) that those most likely to be harmed by weapons in a household are either the owners themselves or their loved ones.  As the gun owners mature, many are likely to see the danger in keeping such weapons at home, and the attractiveness of disposing of them legally at a good price.  Over time, this could lead to a substantial reduction in the type of weapons which have been used in so many of the mass shootings.

F.  Conclusion

Semiautomatic weapons are of no use in a civilian setting other than to massacre innocent people.  They are of no use in self-defense:  one does not walk down the street, or shop in the aisles of a Walmart or a Safeway, with an AR-15 strapped to one's back.  One does not open the front door each time the doorbell rings aiming an AR-15 at whoever is there.  Nor are such weapons of any use in hunting.  First, they are not terribly accurate.  And second, if one succeeded in hitting the animal with multiple shots, all one would have is a bloody mess.

Such weapons are used in the military precisely because they are good at killing people.  But for precisely the same reason as fully automatic weapons have been banned since 1986 (and tightly regulated since 1934), semiautomatic weapons should be similarly banned.

The 1994 Assault Weapons Ban sought to do this.  However, it was allowed to expire in 2004.  It also had numerous loopholes which lessened the effectiveness it could have had.  Despite this, the number of those killed and injured in mass shootings fell back substantially while that law was in effect, and then jumped after it expired.  And the number of mass shooting events per year leveled off or fell while it was in effect (depending on the period it is being compared to), and then also jumped once it expired.

There are, however, a number of ways a new law banning such weapons could be written to close off those loopholes.  A partial list is discussed above.  I fully recognize, however, that the likelihood of such a law passing in the current political environment, where Republicans control both the Senate and the House as well as the presidency, is close to nil.  One can hope that at some point in the future the political environment will change to the point where an effective ban on semiautomatic weapons can be passed.  After all, President Reagan, the hero of Republican conservatives, did sign into law the 1986 act that banned fully automatic weapons.  Sadly, I expect many more school children will die from such shootings before this happens.

Social Security Could be Saved With the Revenues Lost Under the Trump Tax Plan

As is well known, the Social Security Trust Fund will run out in about 2034 (plus or minus a year) if nothing is done.  “Running out” means that the past accumulated stock of funds paid in through Social Security taxes on wages, plus what is paid in each year, will not suffice to cover what is due to be paid out that year to beneficiaries.  If nothing is done, Social Security payments would then be scaled back by 23% (in 2034, rising to 27% by 2091), to match the amount then being paid in each year.

This would be a disaster.  Social Security does not pay out all that much:  An average of just $15,637 annually per beneficiary for those in retirement and their survivors, and an average of just $12,452 per beneficiary for those on disability (all as of August 2017).  But despite such limited amounts, Social Security accounts for almost two-thirds (63%) of the incomes of beneficiaries age 65 or older, and 90% or more of the incomes of fully one-third of them.  Scaling back such already low payments, when so many Americans depend so much on the program, should be unthinkable.

Yet Congress has been unwilling to act, even though the upcoming crisis (if nothing is done) has been forecast for some time.  Furthermore, the longer we wait, the more severe the measures that will then be necessary to fix the problem.  It should be noted that the crisis is not on account of an aging population (one has pretty much known for 64 years how many Americans would be reaching age 65 now), nor because of a surprising jump in life expectancies (indeed, life expectancies have turned out to be lower than what had been forecast).  Rather, as discussed in an earlier post on this blog, the crisis has arisen primarily because wage income inequality has grown sharply (and unexpectedly) since around 1980, and this has pulled an increasing share of wages into the untaxed range above the ceiling for annual earnings subject to Social Security tax ($127,200 currently).

But Congress could act, and there are many different approaches that could be taken to ensure the Social Security Trust Fund remains adequately funded.  This post will discuss just one:  do not approve the Trump proposal for what he accurately calls a huge cut in taxes, and instead use the revenues that would be lost under his tax plan to shore up the Social Security Trust Fund.  As the chart at the top of this post shows (and as will be discussed below), this would more than suffice to ensure the Trust Fund remains in surplus for the foreseeable future.  There would then be no need to consider slashing Social Security benefits in 2034.

The Trump tax plan was submitted to Congress on September 27.  It is actually inaccurate to call it simply the Trump tax plan, as it was worked out over many months of discussions between Trump and his chief economic aides on one side, and the senior Republican leadership in both the Senate and the House on the other side, including the chairs of the tax-writing committees.  This was the so-called “Gang of Six”, who jointly released the plan on September 27 with the full endorsement of all.  But for simplicity, I will continue to call it the Trump tax plan.

The tax plan would sharply reduce government revenues.  The Tax Policy Center (TPC), a respected nonpartisan nonprofit, has provided the most careful forecast of the revenue losses yet released.  It estimated that the plan would reduce government revenues by $2.4 trillion between 2018 and 2027, with this rising to a $3.2 trillion loss between 2028 and 2037.  The lost revenue would come to 0.9% of GDP for the 2018 to 2027 period, and 0.8% of GDP for the 2028 to 2037 period (some of the tax losses under the Trump plan are front-loaded), based on the GDP forecasts of the Social Security Trustees 2017 Annual Report (discussed below).  While less than 1% of GDP might not sound like much, such a revenue loss would be significant.  As we will see, it would suffice to ensure the Social Security Trust Fund remains fully funded.

The chart at the top of this post shows what could be done.  The curve in green is the base case, where nothing is done to shore up the Trust Fund.  It shows what the total stock of funds in the Social Security Trust Fund has been (since 1980), and what it would amount to, as a share of GDP, if full beneficiary payments were to continue as per current law.  Note that I have included here the trust funds for both Old-Age and Survivors Insurance (OASI) and for Disability Insurance (DI).  While technically separate, they are often combined (and then referred to as OASDI).

The figures are calculated from the forecasts released in the most recent (July 2017) mandated regular annual report of the Board of Trustees of the Social Security system.  Their current forecast is that the Trust Fund would run out by around 2034, as seen in the chart.

But suppose that instead of enacting the Trump tax plan proposals, Congress decided to dedicate to the Social Security Trust Funds (OASDI) the revenues that would otherwise be lost as a consequence of those tax cuts.  The curve in the chart shown in red is a forecast of what those tax revenue losses would be each year, as a share of GDP.  These are the Tax Policy Center estimates, extrapolated as follows.  The TPC forecasts as published showed the estimated year-by-year losses over the first ten years (2018 to 2027), but then only the sum of the losses over the next ten years (2028 to 2037).  I assumed a constant rate of growth from the estimate for 2027 sufficient to generate the TPC sum for 2028 to 2037, which worked out to a bit over 6.1%.  I then assumed the revenue losses would continue to grow at this rate for the remainder of the forecast period.
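For those interested in the mechanics, here is a minimal sketch of that extrapolation step:  given the TPC's estimated loss for 2027 and its total for 2028 to 2037, one can solve for the constant growth rate by bisection (the specific numbers are left as inputs).

    # Minimal sketch: solve for the constant nominal growth rate g such that the
    # 2027 loss, grown at g each year, sums to the TPC's 2028-2037 total.
    def solve_growth_rate(loss_2027, total_2028_to_2037, lo=0.0, hi=0.20):
        def decade_sum(g):
            return sum(loss_2027 * (1.0 + g) ** k for k in range(1, 11))
        for _ in range(100):                    # bisection on a monotone function
            mid = 0.5 * (lo + hi)
            if decade_sum(mid) < total_2028_to_2037:
                lo = mid                        # need faster growth
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Losses for 2038 onward are then extrapolated as loss[t] = loss[t-1] * (1 + g);
    # as noted in the text, this procedure gave a g of a bit over 6.1%.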

Note this 6.1% growth is a nominal rate of growth, reflecting both inflation and real growth.  The long-run forecasts in the Social Security Trustees report were for real GDP to grow at a rate of 2.1 or 2.2%, and for inflation (in terms of the GDP price index) of 2.2%, leading to growth in nominal GDP of 4.3 or 4.4%.  Thus the forecast tax revenue losses under the Trump plan would slowly climb over time as a share of GDP, reaching 2% of GDP by about 2090.  This is as one would expect for this tax plan, as the proposals would reduce the progressivity of the tax system.  As I noted before on this blog and will discuss further below, most of the benefits under the Trump tax plan would accrue to those with higher incomes.  One should also note, however, that the very long-term forecasts for the outer years should not be taken too seriously.  While the trends are of interest, the specifics will almost certainly be different.

If the tax revenues that would be lost under the Trump tax plan were instead used to shore up the Social Security Trust Fund, one would get the curve shown in blue (which includes the interest earned on the balance in the Fund, at the interest rates forecast in the Trustees report).  The balance in the fund would remain positive, never dipping below 12% of GDP, and then start to rise as a share of GDP.  Even if the TPC forecasts of the revenues that would be lost under the Trump plan are somewhat off (or if Congress makes changes which will reduce somewhat the tax losses), there is some margin here.  The forecast is robust.
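The blue curve itself comes from a simple year-by-year accumulation.  Here is a minimal sketch, with all quantities expressed as shares of GDP; the input series are placeholders that, in the actual calculation, come from the Trustees' report (the fund's non-interest cash flow and interest rate) and from the extrapolated TPC loss estimates (the redirected revenues).

    # Minimal sketch of the Trust Fund projection, with everything expressed as
    # a share of GDP.  Inputs are placeholders for the Trustees' and TPC figures.
    def project_trust_fund(b0, noninterest_cash_flow, redirected_revenue,
                           interest_rate, nominal_gdp_growth):
        # b0: starting fund balance as a share of GDP.
        # noninterest_cash_flow[t]: payroll tax and other non-interest income
        #   minus benefits paid, as a share of year t's GDP (negative in deficit years).
        # redirected_revenue[t]: the would-be tax losses dedicated to the fund,
        #   as a share of year t's GDP.
        balances = [b0]
        for cash, extra in zip(noninterest_cash_flow, redirected_revenue):
            prev = balances[-1]
            nxt = (prev * (1.0 + interest_rate) + cash + extra) / (1.0 + nominal_gdp_growth)
            balances.append(nxt)                # dividing by GDP growth keeps the ratio to GDP
        return balances

Setting the redirected revenues to zero in a sketch like this reproduces the green base-case curve; feeding in the extrapolated TPC losses produces the blue one.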

The alternative is to follow the Trump tax plan, and cut taxes sharply.  As I noted in my earlier post on this blog on the Trump tax plan, the proposals are heavily weighted to provisions which would especially benefit the rich.  The TPC analysis (which I did not yet have when preparing my earlier blog post) has specific estimates of this.  The chart below shows who would get the tax cuts for the forecast year of 2027:

The estimate is that 87% of the tax revenues lost under the Trump plan would go to the richest 20% of the population (those households with an income of $154,900 or more in 2027, in prices of 2017).  And indeed, almost all of this (80% of the overall total) would accrue just to the top 1%.  The top 1% are already pretty well off, and it is not clear why tax cuts focused on them would spur greater effort on their part or greater growth.  The top 1% are those households who would have an annual income of at least $912,100 in 2027, in prices of 2017.  Most of them would be making more than a million annually.

The Trump people, not surprisingly, do not accept this.  They assert that the tax cuts will spur such a rapid acceleration in growth that tax revenues will not in fact be lost.  Most economists do not agree.  As discussed in earlier posts on this blog, the historical evidence does not support the Trumpian view (the tax cuts under Reagan and Bush II did not lead to any such acceleration in growth; what they did do is reduce tax revenues); the argument that tax cuts will lead to more rapid growth is also conceptually confused and reveals a misunderstanding of basic economics; and with the economy having already reached full employment during the Obama years, there is little basis for the assertion that the economy will now be able to grow at even 3% a year on average (over a multi-year period), much less something significantly faster.  Tax cuts have in the past led to cuts in tax revenues collected, not to increases, and there is no reason to believe this time will be different.

Thus Congress faces a choice.  It can approve the Trump tax plan (already endorsed by the Republican leadership in both chambers), with 80% of the cuts going to the richest 1%.  Or it could use those revenues to shore up the Social Security Trust Fund.  If the latter is done, the Trust Fund would not run out in 2034, and Social Security would be able to continue to pay amounts owed to retired senior citizens and their survivors, as well as to the disabled, in accordance with the commitments it has made.

I would favor the latter.  If you agree, please call or write your Senator and Member of Congress, and encourage others to do so as well.

————————————————————————

Update, October 22, 2017

The US Senate passed on October 19 a budget framework for the FY2018-27 period which would allow for $1.5 trillion in lost tax revenues over this period, and a corresponding increase in the deficit, as a consequence of new tax legislation.  It was almost fully a party-line vote (all Democrats voted against it, while all Republicans other than Senator Rand Paul voted in favor).  Importantly, this vote cleared the way (under Senate rules) for the Senate to pass a new tax law with losses of up to $1.5 trillion over the decade with only Republican votes.  Only 50 votes in favor will be required (with Vice President Pence providing a tie-breaking vote if needed).  Democrats can be ignored.

The loss in tax revenues in this budget framework is somewhat less than the $2.4 trillion that the Tax Policy Center estimates would follow in the first decade under the Trump tax plan.  But it is still sizeable, and it is of interest to see what this lesser amount would achieve if redirected to the Social Security Trust Fund instead of being used for tax cuts.

The chart above shows what would follow.  It still turns out that the Social Security Trust Fund would be saved from insolvency, although just barely this time.

One has to make an assumption as to what would happen to tax revenues after 2027, as well as what the time pattern would be for the $1.5 trillion in losses over the ten years from FY2018 to FY2027.  With nothing else available, I assumed that the losses would grow over time at the same rate as is implied in the Tax Policy Center estimates when comparing the losses in the second decade of the Trump tax plan to the loss in the final year of the first decade.  As discussed above, these estimates implied a nominal rate of growth of 6.1% a year.  I assumed the same rate of growth here, including for the year-to-year growth in the first decade (summing over that decade to $1.5 trillion).
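A minimal sketch of that calculation:  back out the first-year (FY2018) loss so that losses growing at 6.1% a year sum to the $1.5 trillion allowed over the decade.

    # Minimal sketch: the FY2018 loss consistent with a $1.5 trillion decade
    # total and 6.1% a year nominal growth in the annual losses.
    g = 0.061
    decade_total = 1500.0                                 # $ billions, FY2018-FY2027
    loss_2018 = decade_total / sum((1.0 + g) ** k for k in range(10))
    losses = [loss_2018 * (1.0 + g) ** k for k in range(10)]
    print(round(loss_2018, 1))                            # roughly $113 billion in FY2018
    print(round(sum(losses), 1))                          # sanity check: 1500.0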

The result again is that the Social Security Trust Fund would remain solvent for the foreseeable future, although now just marginally.  The Trust Fund (as a share of GDP) would just touch zero in the years around 2080, but would then start to rise.

We therefore have a choice.  The Republican-passed budget framework holds that an increase in the fiscal deficit of $1.5 trillion over the next decade is acceptable.  That amount could be used for tax cuts that would accrue primarily to the rich.  Or it could be used to ensure the Social Security system will be able, for the foreseeable future, to keep its commitments to senior citizens, to their survivors, and to the disabled.