Trump’s Mismanagement of the Covid-19 Crisis: South Korea Shows What Would Have Been Possible

Source:  David Leonhardt, Newsletter of April 13, 2020, The New York Times

I normally include in this blog only charts I have developed myself, but the chart above, from David Leonhardt of the New York Times, is particularly striking.  It comes from his newsletter of April 13, and shows the daily number of deaths (on a seven-day moving average) per 10 million people, from February 19 to now, in the US and in South Korea.

It shows what the US could have achieved had the Trump administration managed this crisis as well as South Korea has.  And one cannot argue that South Korea succeeded because it is a richer country, with resources the US does not have – GDP per capita in the US is double that of South Korea.  Nor was the difference due to travel bans.  Trump repeatedly asserts that the crisis would have been far greater in the US had he not had the singular wisdom to impose a ban on travel (by non-US citizens) from China on February 2 (and from Europe and other countries later).  But the only travel ban South Korea has imposed has been on travel from Hubei Province in China.  And South Korea has far more contact with China – through business and personal travel, as well as trade in goods – than the US has.  Yet despite this, deaths from Covid-19 have been far fewer in South Korea than in the US, even after scaling for population.

And it is not only South Korea that has demonstrated competence in the management of the Covid-19 virus.  Death rates in other countries of East Asia, all similarly heavily exposed to China, have been even lower than South Korea’s.  In terms of the cumulative number of deaths from Covid-19 since the crisis began (as of April 13), there have been 4 deaths per million of population in South Korea, but just 2 per million in Singapore, 1 per million in Japan, 0.5 per million in Hong Kong, and 0.3 per million in Taiwan.  For the US, in contrast, the total is 71 per million.  (Reminder:  The chart above tracks deaths per day, not the cumulative total, and shows the figures per 10 million of population.)

This also shows that Trump’s repeated assertion that the deaths suffered in the US were inevitable – that nothing more could have been done – is simply nonsense.  Sadly, it is deadly nonsense.  South Korea shows what could have been done.  Travel bans were not important.  Rather, what mattered were the basic public health measures:  large-scale testing; identifying those with the virus or who may have been exposed to it; quarantining or isolating those exposed (including self-isolation, with self-monitoring and regular reporting); and then treating in hospitals those who developed severe symptoms.

None of this is new to public health professionals.  And the US has excellent public health professionals.  What was different in the US was Trump, who refused to listen to them and indeed treated many of those in government as enemies to be attacked (as those with expertise were seen as members of the “deep state”).

The US had prepared plans on what to do should an infectious disease such as Covid-19 threaten.  There was, for example, a major effort to develop such plans in 2006/2007, towards the end of the Bush administration.  The work included running exercises similar to war-games of various scenarios (“table-top” exercises), to see how officials would respond and what the likely outcomes then would be.  These plans were further developed during Obama’s two terms in office.  But the Trump administration then ignored this previous preparation, and indeed took pride in dismantling important elements of it.

Dr. James Lawler, now an infectious disease doctor at the University of Nebraska but then serving in the Bush White House, participated in the 2006/2007 task force.  Over the weekend, the New York Times released a trove of over 80 pages of emails (obtained through a Freedom of Information Act request), dating from late January to mid-March, in which Dr. Lawler and other experts, in and out of government, discussed how to address the crisis.  Particularly telling is a March 12 email from Dr. Lawler in which he said:

“We are making every misstep initially made in the table-tops at the outset of pandemic planning in 2006.  We had systematically addressed all of these and had a plan that would work – and has worked in Hong Kong/Singapore.  We have thrown 15 years of institutional learning out the window …”

Throwing those 15 years of institutional learning out the window has had deadly consequences.

The Rapid Growth in Deaths from Covid-19: The Role of Politics

Deaths from Covid-19 have been growing at an extremely rapid rate.  The chart above shows what those rates were in the month of March, averaged over seven-day periods to smooth out day-to-day fluctuations.  The figures are the daily rates of growth over the seven-day period ending on the date indicated.  The curves start with the first period in which there were at least 10 deaths, which for the US as a whole was March 3.  Hence the first growth rate shown is for the one-week period of March 3 to 10.  As I will discuss below, the chart shows the growth rates not only for the US as a whole but also for the set of states that Trump won in 2016 and for the set that Clinton won.  They show an obvious pattern.

The data come from the set assembled by The New York Times, based on a compilation of state and local reports.  The Times updates these figures daily, and has made them available on GitHub.  And it provides a summary report on these figures, with a map, at least daily.

I emphasize that the figures are daily growth rates, even though they are calculated over one-week periods.  And they are huge.  For the US as a whole, the rate was just over 28% a day for the seven-day period ending March 30.  It is difficult to get one’s head around such a rapid rate of growth, but a few figures can be illustrative.  In the New York Times database, 3,066 Americans had died of Covid-19 as of March 30.  If the 28% rate of growth were maintained, then the entire population of the US (330 million) would be dead by May 16.  For many reasons, that will not happen.  The entire population would have been infected well before then (if there were nothing to limit the spread), and the virus is fatal for perhaps 1% of those infected.  And the roughly 99% of those infected who do not die develop an immunity:  once they recover, they cannot spread the virus to others.  For this reason as well, the virus will never reach 100% of those not previously exposed.  Rather, it will reach some lower share, as the spread becomes less and less likely as an increasing share of the population develops an immunity.  This is also the reason why mass vaccination programs are effective in stopping the spread of a virus (including to those not able to receive a vaccination, such as very young children or those with compromised immune systems).
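To see the arithmetic behind the May 16 figure, a short calculation (sketched here in Python, using only the numbers cited above) suffices:

```python
import math

deaths_mar30 = 3_066         # cumulative US deaths as of March 30 (NYT data)
us_population = 330_000_000
daily_growth = 1.28          # 28% growth per day

# Days until cumulative deaths would reach the full population,
# if the 28% daily growth rate were (impossibly) sustained
days = math.log(us_population / deaths_mar30) / math.log(daily_growth)
print(round(days))           # about 47 days after March 30, i.e. around May 16
```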

So that 28% daily rate of growth has to come down, preferably by policy rather than by running out of people to infect.  And there has been a small reduction in the last two days (the seven-day periods ending March 29 and March 30), with the rate falling modestly to 28% from the 30% rate that had held since the seven-day period ending March 22.  But it has much farther to go to get to zero.

The recent modest dip might be an initial sign that the social distancing measures put in place around parts of the nation by March 16 are having a positive effect (and many individuals, myself included, started social distancing some time before that).  It is believed that it takes about 4 to 7 days after infection before one shows any symptoms, and then, in the cases where the symptoms are severe enough to require hospitalization (about 20% of the total), another several days to two weeks before the disease becomes critical in those for whom it will prove fatal.  Hence one might be starting to see the impacts of the policies about now.

But the social distancing measures implemented varied widely across the US.  They were strict and early in some locales, and advisory only and relatively late in others.  Sadly, Trump injected a political element into this.  He belittled the seriousness of Covid-19 until well into March, even calling it a “hoax” conjured up by the Democrats, while insisting the virus would soon go away.  And even since mid-March Trump has been inconsistent, saying on some days that the virus needs to be taken seriously and on others that it is not a big deal.  Fox News hosts and far-right radio hosts such as Rush Limbaugh also belittled the seriousness of the virus.

It is therefore understandable that Trump supporters, and those who follow such outlets for what they consider the news, have not shown as much willingness to implement the social distancing measures that are at this point the only way to reduce the spread of the virus.  And it shows in the death figures.  The red curve in the chart at the top of this post shows the daily growth rates of fatalities from the virus in those states that voted for Trump in the 2016 election.  While the spread of the virus in these states, many of which are relatively rural, started later than in the states that voted for Clinton, their fatalities from the virus have since grown at a substantially faster pace.

The pace of growth in the states that voted for Clinton has also been heavily influenced by the rapid spread of the virus in New York.  As of March 30, more than half (57%) of the fatalities in the Clinton states were in New York alone.  And New York is a special case.  With the dense population of New York City, where a high proportion of residents commute to work on a crowded subway system or buses, often to jobs in tall office buildings requiring long rides in crowded elevators, it should not be surprising that a virus that passes from person to person could spread rapidly.

Excluding New York, the rate of increase in the other states that voted for Clinton (the green curve in the chart above) is more modest.  Those rates are then lower still, by a substantial margin, than the rates in the Trump-voting states.

But all of these growth rates are still extremely high, and they must be brought down to zero quickly.  That will require clear, sustained, and scientifically sound policy from the top.  But Trump has not been providing this.

The Democratic Primaries Thus Far: Bernie Sanders’ Vote Numbers

A.  Introduction

One of the main arguments Bernie Sanders has made for why he should be the nominee of the Democratic Party to run against Trump is that he would spur a much higher turnout, especially among young voters who would not otherwise go to the polls (with those young voters favoring him).  But this has not turned out to be the case in the Democratic primaries held thus far.  While turnout has gone up substantially, Sanders has not been receiving an exceptionally high share of that increased turnout.  And even Sanders has now acknowledged that the wave of younger voters he argued would go to the polls to vote for him has not materialized.

So what has been going on?  To summarize what will be discussed in more detail below:  in the primaries held thus far, the share of the vote going to Sanders has gone down compared to what he received in the same states in 2016.  But the share going to Sanders and Elizabeth Warren combined has been similar (indeed almost identical overall) to what Sanders received in 2016, when it was essentially only him running against Hillary Clinton.  Similarly, the share going to Joe Biden, Amy Klobuchar, Pete Buttigieg, and Michael Bloomberg combined has been similar to the share that had gone to Clinton.  This very much looks like a Democratic primary electorate split between those who hold the more extreme liberal views of Sanders and Warren, and those who hold the more moderate views of Biden, Klobuchar, Buttigieg, and Bloomberg (although it is not really correct to view the latter as moderates – the positions they hold are all well to the left of those held by Obama when he served as president).  Primary turnout has gone up, but with the increased turnout split between those two camps in much the same shares as before.

Pundit commentary, at least until recently, has not focused on this.  Rather, in the Democratic contests held in February before South Carolina (Iowa, New Hampshire, and especially Nevada), all attention was on Sanders winning the vote count (modestly in Iowa and New Hampshire, more significantly in Nevada).  It was not on what the outcomes might be telling us about the broader question of who would, in the end, amass the delegates needed to win the Democratic nomination.  Sanders was deemed the “front-runner”.

And then all were surprised when the vote in the South Carolina primary appeared to be so different.  However, if a comparison had been made to the results of the 2016 primary in that state, one would have seen important similarities.

This has now become clearer with the results from the Super Tuesday primaries.  Turnout (in all but one of the states) has gone up, sometimes quite substantially.  The Democratic base is clearly energized.  But the higher turnout did not come from voters disproportionately supporting Sanders.  Indeed, the share voting for Sanders has gone down compared to the share that voted for him in 2016.  Rather, across the states with primaries held thus far, the share now going to Sanders and Warren together is very close to what Sanders had received before, and the share going to Biden et al. is similarly close to what Clinton had received before.  Thus the higher turnout was composed of similar shares of voters in the two groups.

There were of course differences in several of the individual states.  For the analysis here I looked at the ten states that held primaries rather than caucuses (vote counts in caucuses are different, with far lower participation), did so in both 2016 and 2020, and held their primaries in each of those years on Super Tuesday (March 1 in 2016, March 3 in 2020) or before.  This excluded states such as Colorado and Minnesota (which held caucuses in 2016), as well as states that held their primaries (or caucuses) after Super Tuesday in 2016.  The most important, and largest, state thus excluded is California, which held its primary on June 7 in 2016.  I will discuss the special case of California separately.

The overall results for those ten states are summarized in the chart at the top of this post.  But rather than discuss that one first, it is perhaps better to examine a few of the states individually before looking at the overall totals across the ten.  The vote numbers are all as reported by the New York Times, at this post for 2016 and at this post for 2020.  The 2020 results are as shown at about 2:00 pm on Wednesday, March 4.  At that point, almost all were either complete (with 100% of precincts reporting) or close to it (99% or more in two cases, one at 97.0%, one at 93.8%, and one at 93.4%).  There will be some further changes as the counts reach 100% of precincts reporting and as mail-in ballots are fully counted (rules vary by state), but these will be small, and will likely not affect the vote shares, which are the focus of the analysis here, to any significant degree.  And while it will not change the shares, I did scale up to 100% the figures for the cases where fewer than 100% of the precincts had reported, in order to estimate what the total votes (and hence the change in turnout) will be, and to add up the figures consistently across the states.
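The scaling is the simple proportional one, sketched below in Python with hypothetical figures.  It assumes the precincts yet to report are broadly similar to those already counted, which is why it changes the estimated totals but not the shares:

```python
def scale_to_full_count(reported_votes: int, share_reporting: float) -> int:
    """Estimate the final vote total by scaling up the reported count.

    Assumes unreported precincts resemble reported ones, so candidates'
    shares are unaffected; only the estimated totals change.
    """
    return round(reported_votes / share_reporting)

# Hypothetical example: 900,000 votes counted with 93.8% of precincts reporting
print(scale_to_full_count(900_000, 0.938))   # about 959,000
```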

B.  Individual States

The South Carolina primary, which was critical for Biden, shows well what the pattern has been.  The key results are summarized in this chart:

Sanders received only 26% of the vote in this primary in 2016, losing badly to Clinton, who received 73%.  And Sanders’ share went down further, to 20%, this year, even though there was a 46% increase in turnout.  But Sanders plus Warren together received 27% of the vote, almost the same as what Sanders alone received in 2016.  Despite an increase in turnout of close to half, the share going to the extreme liberal candidates remained about the same – not more, not less.

One saw the same in Virginia:

Here turnout rose by close to 70%.  And the Sanders share fell again, from 35% in 2016 to 23% in 2020.  But Sanders and Warren together received 34%, very close to what Sanders had received before.  Despite the far higher turnout, the shares were close to unchanged (taking Sanders and Warren together).

As noted above, there were a total of ten states where one could make such a comparison.  I won’t go through them all, but there were individual exceptions.  One noteworthy case was New Hampshire, the state with the first primary (Iowa holds a caucus):

Bernie Sanders did exceptionally well in that primary in 2016, receiving 60% of the vote against Hillary Clinton’s 38% (with other candidates receiving the rest).  Sanders won again in 2020, but this time with only 25.7% of the vote (with Pete Buttigieg in second place at 24.4%).  But while the pundits focused on Sanders winning that primary again, I did not see it mentioned that despite an increase in turnout (of a not insignificant 18%), the absolute number of votes Sanders received fell by half (from 151,584 in 2016 to just 76,234 in 2020).  And even adding in the votes that Warren received, the total came only to 103,711, a share of 35%.

There were two other states where Sanders and Warren together in 2020 did significantly worse than Sanders alone had in 2016.  One was Sanders’ home state of Vermont, where Sanders received 86% of the vote in 2016 while Sanders and Warren together received just 63% in 2020 (despite a 17% increase in turnout).  The other was Oklahoma, where Sanders received 52% of the vote in 2016 while Sanders and Warren together received just 39% in 2020 (and Oklahoma is the one state where turnout fell – by 7%).

These states were offset by Texas, where Sanders received 33% of the vote in 2016 (and 30% in 2020), but where Sanders and Warren together received 41% (with turnout rising 47%).  In the other states, the shares of Sanders in 2016 and Sanders plus Warren together in 2020 were pretty much the same.  Especially similar was the case of Massachusetts (the home state of Warren):  Sanders received 48.7% of the vote in 2016, while Sanders plus Warren received 48.3% in 2020.

California is also a special case, but an important one.  In 2016, the California primary was held on June 7, close to the end of the primary season.  Close to 5.1 million people voted in the Democratic primary that year, and Sanders won 45.7% of the vote.  As I write this (on the evening of Friday, March 6, based on what is shown on the New York Times website), California has posted results for only 89% of its precincts.  Why this is less than 100% three days after the primary is not clear to me.  California also accepts mail-in ballots that were mailed on election day or before, and the state allows up to a month for these to come in.

But based on what has been reported as of now, Sanders plus Warren together received 45.9% of the vote, almost exactly the same as the 45.7% Sanders received in 2016.  But there was a big change in turnout, likely tied to the different election date.  While 5.1 million voted in 2016, the total recorded as of today is just 3.3 million votes.  While this will go up as all the mail-in ballots are counted (and as full reports come in from all of the precincts), it will certainly not get anywhere close to the 5.1 million of 2016.

C.  The Ten States as a Whole

The chart at the top of this post reflects the figures added up across all of the ten states.  And one finds that as with most of the states (where the few exceptions basically offset each other), the share of the vote Sanders and Warren together received in 2020 (38%) was very close to what Sanders alone received in 2016 (39%).  The share of Sanders alone went down, with this offset almost exactly by the share Warren received.  And this was despite a substantial increase in turnout – of 34% across the ten states as a group.

In terms of what has been called the “more moderate” wing, the share across the ten states of those voting for Clinton in 2016 was 59%.  The share going to Biden plus Klobuchar plus Buttigieg plus Bloomberg in 2020 was 58%.  Again almost the same.

With turnout up by a third, the Democratic primary electorate appears to be energized.  There are real concerns about Trump, and what he has done to our country.  But the higher turnout is not because Sanders is pulling in a large number of new voters who will vote for him and him only.  Rather, the split in the new voters between those voting for Sanders or Warren on one side, or for Biden, Klobuchar, Buttigieg, or Bloomberg on the other side, is very close to the split between Sanders and Clinton voters in 2016.

With the withdrawal in the past week of all of the major remaining candidates other than Sanders and Biden, we will now see whether this pattern continues.  It is now basically a two-person race, and the results should be clear to all.

End Gerrymandering by Focusing on the Process, Not on the Outcomes

A.  Introduction

There is little that is as destructive to a democracy as gerrymandering.  As has been noted by many, with gerrymandering the politicians are choosing their voters rather than the voters choosing their political representatives.

The diagrams above, in schematic form, show how gerrymandering works.  Suppose one has a state or region with 50 precincts, with 60% that are fully “blue” and 40% that are fully “red”, and where 5 districts need to be drawn.  If the blue party controls the process, they can draw the district lines as in the middle diagram, and win all 5 (100%) of the districts, with just 60% of the voters.  If, in contrast, the red party controls the process for some reason, they could draw the district boundaries as in the diagram on the right.  They would then win 3 of the 5 districts (60%) even though they only account for 40% of the voters.  It works by what is called in the business “packing and cracking”:  With the red party controlling the process, they “pack” as many blue voters as possible into a small number of districts (two in the example here, each with 90% blue voters), and then “crack” the rest by scattering them around in the remaining districts, each as a minority (three districts here, each with 40% blue voters and 60% red).
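The schematic’s arithmetic can be made concrete.  The following minimal sketch (in Python, using the 50-precinct example above) tallies the seats each party would win under the two maps:

```python
# 50 precincts: 30 fully "blue" (60%) and 20 fully "red" (40%),
# divided into 5 districts of 10 precincts each.
# Each map lists, for every district, its number of blue precincts.

blue_drawn_map = [6, 6, 6, 6, 6]   # blue spread evenly: 60% in every district
red_drawn_map  = [9, 9, 4, 4, 4]   # blue "packed" into 2 districts, "cracked" in 3

def blue_seats(district_map, precincts_per_district=10):
    """Count the districts in which blue precincts form a majority."""
    return sum(1 for b in district_map if b > precincts_per_district / 2)

print(blue_seats(blue_drawn_map))  # 5 of 5 seats for blue, with 60% of voters
print(blue_seats(red_drawn_map))   # 2 of 5 seats for blue, despite 60% of voters
```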

Gerrymandering leads to cynicism among voters, with the well-founded view that their votes just do not matter.  Possibly even worse, gerrymandering leads to increased polarization, as candidates in districts with lines drawn to be safe for one party or the other do not need to worry about seeking to appeal to voters of the opposite party.  Rather, their main concern is that a more extreme candidate from their own party will not challenge them in a primary, where only those of their own party (and normally mostly just the more extreme voters in their party) will vote.  And this is exactly what we have seen, especially since 2010 when gerrymandering became more sophisticated, widespread, and egregious than ever before.

Gerrymandering has grown in recent decades both because computing power and data sources have grown increasingly sophisticated, and because a higher share of states have had a single political party able to control the process in full (i.e. with both legislative chambers, and the governor when a part of the process, all under a single party’s control).  And especially following the 2010 elections, this has favored the Republicans.  As a result, while there has been one Democratic-controlled state (Maryland) on common lists of the states with the most egregious gerrymandering, most of the states with extreme gerrymandering were Republican-controlled.  Thus, for example, Professor Samuel Wang of Princeton, founder of the Princeton Gerrymandering Project, has identified a list of the eight most egregiously gerrymandered states (by a set of criteria he has helped develop), of which one (Maryland) was Democratic-controlled while the remaining seven were Republican-controlled.  The Washington Post similarly calculated, across all states, the average compactness of congressional districts:  of the 15 states with the least compact districts, only two (Maryland and Illinois) were liberal Democratic-controlled states.  And in terms of the “efficiency gap” measure (which I will discuss below), seven states were gerrymandered following the 2010 elections in such a way as to yield two or more congressional seats each in their favor.  All seven were Republican-controlled.

With gerrymandering increasingly common and extreme, a number of cases have gone to the Supreme Court in attempts to stop it.  However, the Supreme Court has yet to issue a definitive ruling ending the practice.  Rather, it has so far skirted the issue by resolving cases on narrower grounds, or by sending cases back to lower courts for further consideration.  This may soon change, as the Supreme Court has agreed to take up two cases (affecting lines drawn for congressional districts in North Carolina and in Maryland), with oral arguments scheduled for March 26, 2019.  But it remains to be seen whether these cases will lead to a definitive ruling on the practice of partisan gerrymandering.

This is not because of a lack of concern by the court.  Even conservative Justice Samuel Alito has conceded that “gerrymandering is distasteful”.  But he, along with the other conservative justices on the court, has ruled against the court taking a position on the gerrymandering cases brought before it, in part, at least, out of concern that they do not have a clear standard by which to judge whether any particular case of gerrymandering is constitutionally excessive.  This goes back to a 2004 case (Vieth v. Jubelirer) in which the four most conservative justices of the time, led by Justice Antonin Scalia, opined that there could be no such standard, while the four liberal justices argued that there could.  Justice Anthony Kennedy, in the middle, issued a concurring opinion agreeing with the conservative justices that no acceptable standard was then before them, but declining to preclude the possibility of such a standard being developed at some point in the future.

Following this 2004 decision, political scientists and other scholars have sought to come up with such a standard.  Many have been suggested, such as a set of three tests proposed by Professor Wang of Princeton, or measures that compare the share of seats won to the share of votes cast, and more.  Probably the most attention in recent years has gone to the “efficiency gap” measure proposed by Professor Nicholas Stephanopoulos and Eric McGhee.  The efficiency gap is the gap between the two main parties in the “wasted votes” each received in some past election in the state (expressed as a share of total votes cast in the state), where a party’s wasted votes are the sum of all votes cast for its losing candidates, plus all votes in excess of the 50% needed to win in the districts its candidates carried.  This provides a direct measure of the two basic tactics of gerrymandering described above:  “packing” as many voters of one party as possible into a small number of districts (where they might receive 80 or 90% of the votes, but with all those above 50% “wasted”), and “cracking” (splitting the remaining voters of that party across a large number of districts where they will each be in a minority and hence lose, with those votes then also “wasted”).
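To illustrate the measure (a sketch only – the district vote counts here are hypothetical), the efficiency gap can be computed directly from district-level results:

```python
def efficiency_gap(districts):
    """Efficiency gap between parties A and B.

    `districts` is a list of (votes_A, votes_B) pairs, one per district.
    A party's wasted votes in a district are all of its votes if it lost,
    or its votes in excess of the 50% needed to win if it won.  Returns
    (wasted_A - wasted_B) as a share of total votes; a positive value
    means the map wastes more of party A's votes.
    """
    wasted_a = wasted_b = 0.0
    for a, b in districts:
        needed = (a + b) / 2           # votes needed to carry the district
        if a > b:
            wasted_a += a - needed     # A's surplus above 50% is wasted
            wasted_b += b              # all of losing B's votes are wasted
        else:
            wasted_a += a
            wasted_b += b - needed
    total = sum(a + b for a, b in districts)
    return (wasted_a - wasted_b) / total

# Hypothetical 5-district map (100 voters each): party A "packed" into two
# districts and "cracked" across three, despite a 60% statewide majority
districts = [(90, 10), (90, 10), (40, 60), (40, 60), (40, 60)]
print(f"{efficiency_gap(districts):+.0%}")   # prints +30%
```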

But there are problems with each of these measures, including the widely touted efficiency gap.  It has often been the case in recent years, in our divided society, that like-minded voters live close to one another, and particular districts in the state will then, as a result, often see the winner receive a very high share of the vote.  Thus, even with no overt gerrymandering, the measured efficiency gap will appear large.  At the opposite end of this spectrum, the measure is extremely sensitive when a few districts are close to 50/50.  A shift of just a few percentage points in the vote can then flip which party loses a district, producing a big jump in that party’s share of wasted votes (the 49% or so it received).

There is, however, a far more fundamental problem.  And that is that this is simply the wrong question to ask.  With all due respect to Justice Kennedy, and recognizing also that I am an economist and not a lawyer, I do not understand why the focus here is on the voting outcome, rather than on the process by which the district lines were drawn.  The voting outcome is not the standard by which other aspects of voter rights are judged.  Rather, the focus is on whether the process followed was fair and unbiased, with the outcome then whatever it is.

I would argue that the same should apply when district lines are drawn.  Was the process followed fair and unbiased?  The way to ensure that would be to remove the politicians from the process (both directly and indirectly), and to follow instead an automatic procedure by which district lines are drawn in accord with a small number of basic principles.

The next section below will first discuss the basic point that, in judging fairness and lack of bias, the focus should not be on whether one can come up with some measure based on the vote outcomes, but rather on whether the process followed to draw the district lines was fair and unbiased.  The section following will then discuss a particular process that illustrates how this could be done.  It would be automatic, and would produce a fair and unbiased drawing of voting district lines that meets the basic principles on which such a map should be based (districts of similar population, compactness, contiguity, and, to the extent consistent with these, respect for the boundaries of existing political jurisdictions such as counties or municipalities).  And while I believe this particular process would be a good one, I would not exclude that others are possible.  The important point is that the courts should require the states to follow some such process, and the example presented shows that this is indeed feasible.  It is not an impossible task.

The penultimate section of the post will then discuss a few points that arise with any such system, and their implications.  The post will end with a brief section summarizing the key points.

B.  A Fair Voting System Should Be Judged Based on the Process, Not on the Outcomes

Voting rights are fundamental in any democracy.  But in judging whether some aspect of the voting system is proper, we do not try to determine (by some defined specific measure) whether the resulting outcomes were improperly skewed.

Thus, for example, we take as a basic right that our ballot may be cast in secret.  No government official, nor anyone else for that matter, can insist on seeing how we voted.  Suppose that some state passed a law saying a government-appointed official will look over the shoulder of each of us as we vote, to determine whether we did it “right” or not.  We would expect the courts to strike this down as an inappropriate process that contravenes our basic voting rights.  We would not expect the courts to say that they should look at the subsequent voting outcomes, and try to come up with some specific measure showing, with certainty, whether the resulting outcomes were excessively influenced.  That would of course be absurd.

As another absurd example, suppose some state passed a law granting those registered in one of the major political parties, but not those registered in the other, access to more days of early voting.  This would be explicitly partisan, and one would assume that the courts would not insist on limiting their assessment to an examination of the later voting outcomes, to see whether, by some proposed measure, the results were excessively affected.  The voting system, to be fair, should not lead to a partisan advantage for one party or the other.  But gerrymandering does precisely that.

Yet the courts have so far declined to issue a definitive ruling on partisan gerrymandering, asking instead whether there might be some measure to determine, from the voting outcomes, whether gerrymandering had led to an excessive partisan advantage for the party drawing the district lines.  And there have been open admissions by senior political figures that district borders were in fact drawn to provide a partisan advantage.  Indeed, principals involved in the two cases now before the Supreme Court have openly said that partisan advantage was the objective.  In North Carolina, David Lewis, the Republican chair of the committee in the state legislature responsible for drawing up the district lines, said during the debate that “I think electing Republicans is better than electing Democrats. So I drew this map to help foster what I think is better for the country.”

And in the case of Maryland, the Democratic governor of the state in 2010 at the time the congressional district lines were drawn, Martin O’Malley, spoke out in 2018 in writing and in interviews openly acknowledging that he and the Democrats had drawn the district lines for partisan advantage.  But he also now said that this was wrong and that he hoped the Supreme Court would rule against what they had done.

But how to remove partisanship when district lines are drawn?  As long as politicians are directly involved, with their political futures (and those of their colleagues) dependent on the district lines, it is human nature that biases will enter.  And it does not matter whether the biases are conscious and openly expressed, or unconscious and denied.  Furthermore, although possibly diminished, such biases will still enter even with independent commissions drawing the district lines.  There will be some political process by which the commissioners are appointed, and those who are appointed, even if independent, will still be human and will have certain preferences.

The way to address this would rather be to define some automatic process which, given the data on where people live and the specific principles to follow, will be able to draw up district lines that are both fair (follow the stated principles) and unbiased (are not drawn up in order to provide partisan advantage to one party).  In the next section I will present a particular process that would do this.

C.  An Automatic Process to Draw District Lines that are Fair and Unbiased

The boundaries for fair and unbiased districts should be drawn in accord with the following set of principles (and no more):

a)  One Person – One Vote:  Each district should have a similar population;

b)  Contiguity:  Each district must be geographically contiguous.  That is, one continuous boundary line will encompass the entire district and nothing more;

c)  Compactness:  While remaining consistent with the above, districts should be as compact as possible under some specified measure of compactness.

And while not such a fundamental principle, a reasonable objective is also, to the extent possible consistent with the basic principles above, that the district boundaries drawn should follow the lines of existing political jurisdictions (such as of counties or municipalities).

There will still be a need for decisions to be made on the basic process to follow, and then on a number of the parameters and specific rules required for any such process.  Individual states will need to make such decisions, and can do so in accordance with their traditions and with what makes sense for their particular state.  But once these “rules of the game” are fully specified, there should then be a requirement that they remain locked in for some lengthy period (at least until after the next decennial redistricting), so that games cannot be played with the rules in order to bias a redistricting that may soon be coming up.  This will be discussed further below.

Such specific decisions will need to be made in order to fully define the application of the basic principles presented above.  To start, for the one person – one vote principle the Supreme Court has ruled that a 10% margin in population between the largest and smallest districts is an acceptable standard.  And many states have indeed chosen to follow this standard.  However, a state could, if it wished, choose to use a tighter standard, such as a margin in the populations between the largest and smallest districts of no more than 8%, or perhaps 5% or whatever.  A choice needs to be made.

Similarly, a specific measure of compactness will need to be specified.  Mathematically there are several different measures that could be used, but a good one, which is both intuitive and relatively easy to apply, is to minimize the sum of the lengths of the perimeters of all of the districts in the state.  Note that since the outside borders of the state itself are fixed, this sum can be limited to the perimeter lines internal to the state.  In essence, since states are to be divided up into component districts (and exhaustively so), the perimeter lines that do this with the shortest total length will lead to districts that are compact.  There will not be wavy lines, nor lines leading to elongated districts, as such lines would sum to a greater total length than the possible alternatives.
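This measure is also simple to compute.  A minimal sketch (in Python, with hypothetical figures) uses the fact that, in an exhaustive division of a state, every internal boundary line is shared by exactly two districts:

```python
def internal_perimeter(district_perimeters, state_perimeter):
    """Total length of the district boundary lines internal to the state.

    Summing the full perimeters of all districts counts the state's outer
    border once and every internal line twice, so:
        internal = (sum of district perimeters - state perimeter) / 2
    This is the quantity a compact map minimizes.
    """
    return (sum(district_perimeters) - state_perimeter) / 2

# Hypothetical: 5 districts, perimeters in miles, in a state whose
# outer border is 600 miles long
print(internal_perimeter([220, 180, 240, 200, 260], 600))   # 250.0 miles
```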

What, then, would be a specific process (or algorithm) which could be used to draw district lines?  I will recommend one here, which should work well and would be consistent with the basic principles for a fair and unbiased set of district boundaries.  But other processes are possible.  A state could choose some such alternative (but then should stick to it).  The important point is that one should define a fully specified, automatic, and neutral process to draw such district lines, rather than try to determine whether some set of lines, drawn based on the “judgment” of politicians or of others, was “excessively” gerrymandered based on the voting outcomes observed.

Finally, the example will be based on what would be done to draw congressional district lines in a state.  But one could follow a similar process for drawing other such district lines, such as for state legislative districts.

The process would follow a series of steps:

Step 1: The first step would be to define a set of sub-districts within each county in a state (parish in Louisiana) and municipality (in those states where municipalities hold governmental responsibilities similar to those of a county).  These sub-districts would likely be the districts for county boards or legislative councils in most of the states, and one might typically have a dozen or more of these in such jurisdictions.  When those districts are also being redrawn as part of the decennial redistricting process, they should be drawn first (based on the principles set out here), before the congressional district lines are drawn.

Each state would define, as appropriate for the institutions of that specific state, the sub-districts that will be used for the purpose of drawing the congressional district lines.  And if no such sub-jurisdictions exist in certain counties of certain states, one could draw up such sub-districts, purely for the purposes of this redistricting exercise, by dividing such counties into compact (based on minimization of the sum of the perimeters), equal population, districts.  While the number of such sub-districts would be defined (as part of the rules set for the process) based on the population of the affected counties, a reasonable number might generally be around 12 or 15.

These sub-districts will then be used in Step 4 below to even out the congressional districts.

Step 2:  An initial division of each state into a set of tentative congressional districts would then be drawn based on minimizing the sum of the lengths of the perimeter lines for all the districts, and requiring that all of the districts in the state have exactly the same population.  Following the 2010 census, the average population in a congressional district across the US was 710,767, but the exact number will vary by state depending on how many congressional seats the state was allocated.

Step 3: This first set of district lines will not, in general, follow county and municipal lines.  In this step, each initial district line would be shifted to the county or municipal line geographically closest to it (defined as the shift that minimizes the geographic area moved, in comparison to the alternative jurisdictional lines).  If the populations of the resulting congressional districts are then all within the 10% margin (or whatever percentage margin the state has chosen) between the largest and the smallest, then one is finished and the map is final.

Step 4:  But in general, there may be one or more districts where the resulting population exceeds or falls short of the 10% limit.  One would then make use of the political subdivisions of the counties and municipalities defined in Step 1 to bring them into line.  A specific set of rules for that process would need to be specified.  One such set would be to first determine which congressional district, as then drawn, deviated most from what the mean population should be for the districts in that state.  Suppose that district had too large of a population.  One would then shift one of the political subdivisions in that district from it to whichever adjacent congressional district had the least population (of all adjacent districts).  And the specific political subdivision shifted would then be the one which would have the least adverse impact on the measure of compactness (the sum of perimeter lengths).  Note that the impact on the compactness measure could indeed be positive (i.e. it could make the resulting congressional districts more compact), if the political subdivision eligible to be shifted were in a bend in the county or city line.

If the resulting congressional districts were all now within the 10% population margin (or whatever margin the state had chosen as its standard), one would be finished.  But if this is not the case, then one would repeat Step 4 over and over as necessary, each time for whatever district was then most out of line with the 10% margin.

That is it.  The result would be contiguous and relatively compact congressional districts, each with a similar population (within the 10% margin, or whatever margin is decided upon), and following borders of counties and municipalities or of political sub-divisions within those entities.
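To make the iterative step concrete, here is a minimal sketch of the Step 4 rebalancing loop (in Python; the data structures and the perimeter_cost helper are simplified stand-ins for the real geographic computations):

```python
def perimeter_cost(subunit, from_district, to_district):
    """Hypothetical stand-in: a real implementation would return the change
    in total internal perimeter length (the compactness measure) caused by
    moving `subunit` from one district to the other."""
    return 0.0

def rebalance(districts, populations, adjacent, max_margin=0.10):
    """Step 4 sketch: shift sub-districts until all district populations
    are within the chosen margin.

    districts:   dict of district id -> set of sub-district ids
    populations: dict of sub-district id -> population
    adjacent:    dict of district id -> set of adjacent district ids
    A real implementation would also restrict candidate sub-districts to
    those on the border shared with the receiving district, to preserve
    contiguity; that check is omitted here for brevity.
    """
    while True:
        pops = {d: sum(populations[s] for s in subs)
                for d, subs in districts.items()}
        mean = sum(pops.values()) / len(pops)
        if (max(pops.values()) - min(pops.values())) / mean <= max_margin:
            return districts                    # all within the margin: done

        # The district deviating most from the mean population
        worst = max(pops, key=lambda d: abs(pops[d] - mean))
        if pops[worst] >= mean:
            # Too large: push a sub-district to the least-populated neighbor
            src, dst = worst, min(adjacent[worst], key=lambda d: pops[d])
        else:
            # Too small: pull a sub-district from the most-populated neighbor
            src, dst = max(adjacent[worst], key=lambda d: pops[d]), worst
        moved = min(districts[src],
                    key=lambda s: perimeter_cost(s, src, dst))
        districts[src].remove(moved)
        districts[dst].add(moved)
```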

This would of course all be done on a computer, and it can be once the rules and parameters are all decided, as there will then no longer be a role for opinion, nor an opportunity for political bias to enter.  And while the initial data entry would be significant (one would need the populations and perimeter lengths of each of the political subdivisions, and of the counties and municipalities they add up to), such data are now available from standard sources.  Indeed, the data entry needed would be far less than what is typically required for the computer programs our politicians use to draw up their gerrymandered maps.

D.  Further Remarks

A few more points:

a)  The Redistricting Process, Once Decided, Should be Locked In for a Long Period:  As was discussed above, each state will need to make a series of decisions to define fully the specific process it chooses to follow.  As illustrated in the case discussed above, states will need to decide on matters such as the maximum margin between the populations of the largest and smallest districts (no more than 10%, by Supreme Court decision, but it could be less).  And rules will need to be set, also as in the case discussed above, on what measure of compactness to use, on the criterion for choosing which district should first have a sub-district shifted in order to even out the population differences, and so on.

Such decisions will have an impact on the final districts arrived at.  And some of those districts will favor Republicans and some will favor Democrats, just by chance.  There would then be a problem if the redistricting were controlled by one party in the state, and that party (through consultants who specialize in this) tried out dozens if not hundreds of possible choices of the parameters to see which would turn out to be most advantageous to it.  While the impact would be far less than what we have now with deliberate gerrymandering, there could still be some effect.

To stem this, one should require that once choices are made on the process to follow, and on the rules and other parameters needed to implement that process, that process cannot then be changed for the immediately upcoming decennial redistricting.  Any changes would apply only to later ones.  While this would not be possible for the very first application of the system, there will likely be a good deal of public attention paid to these issues initially, so an attempt to bias the system at that point would be difficult.

As noted, this is not likely to be a major problem, and any such system will not introduce the major biases we have seen in the deliberately gerrymandered maps of numerous states following the 2010 census.  But by locking in any decisions made for a long period, where any random bias in favor of one party in a map might well be reversed following the next census, there will be less of a possibility to game the system by changing the rules, just before a redistricting is due, to favor one party.

b)  Independent Commissions Do Not Suffice – They Still Need to Decide How to Draw the District Maps:  A reform that has been increasingly advocated in recent years is to take the redistricting process out of the hands of the politicians, and instead to appoint independent commissions to draw up the maps.  There are currently seven states with non-partisan or bipartisan, nominally independent, commissions that draw the lines for both congressional and state legislative districts, and a further six that do this for state legislative districts only.  Furthermore, several additional states will use such commissions starting with the redistricting that follows the 2020 census.  Finally, there is Iowa.  While Iowa technically does not use an independent commission, its district lines are drawn up by non-partisan legislative staff, with the state legislature then approving the map or not on a straight up-or-down vote.  If the map is not approved, the process starts over, and if no map has been approved after three votes, the matter goes to the Iowa Supreme Court.

While certainly a step in the right direction, a problem with such independent commissions is that the process by which members are appointed can be highly politicized.  And even if not overtly politicized, the members appointed will have personal views on whom they favor, and it is difficult, even with the best of intentions, to ensure such views do not enter.

But more fundamentally, even a well-intentioned independent commission will need to make choices on what is, and what is not, a “good” district map.  While most states list certain objectives for the redistricting process in either their state constitutions or in legislation, these are typically vague, such as saying the maps should try to preserve “communities of interest”, with no clarity on what this means in practice.  Thirty-eight states also call for “compactness”, but few specify what that really means.  Indeed, only two states (Colorado and Iowa) define a specific measure of compactness.  Both say that compactness should be measured by minimizing the sum of the perimeter lines (the same measure I used in the process discussed above).  However, in the case of Iowa this is taken along with a second measure of compactness (the absolute value of the difference between the length and the width of a district), and it is not clear how these two criteria are to be judged against each other when they differ.  Furthermore, in all states, including Colorado and Iowa, the compactness objective is just one of many objectives, and how to judge tradeoffs among the diverse objectives is not specified.

Even a well-intentioned independent commission will need to have clear criteria to judge what is a good map and what is not.  But once these criteria are fully specified, there is then no need for further opinion to enter, and hence no need for an independent commission.

c)  Appropriate and Inappropriate Principles to Follow: As discussed above, the basic principles that should be followed are:  1) One person – One vote, 2) Contiguity, and 3) Compactness.  Plus, to the extent possible consistent with this, the lines of existing political jurisdictions of a state (such as counties and municipalities) should be respected.

But while most states do call for this (with one person – one vote required by Supreme Court decision, but decided only in 1964), they also call for their district maps to abide by a number of other objectives.  Examples include the preservation of “communities of interest”, as discussed above, where 21 states call for this for their state legislative districts and 16 for their congressional districts (where one should note that congressional districting is not relevant in 7 states as they have only one member of Congress).  Further examples of what are “required” or “allowed” to be considered include preservation of political subdivision lines (45 states); preservation of “district cores” (8 states); and protection of incumbents (8 states).  Interestingly, 10 states explicitly prohibit consideration of the protection of incumbents.  And various states include other factors to consider or not consider as well.

But many, indeed most, of these considerations are left vague.  What does it mean that “communities of interest” are to be preserved where possible?  Who defines what the relevant communities are?  What is the district “core” that is to be preserved?  And as discussed above, there is a similar issue with the stated objective of “compactness”, as while 38 states call for it, only Colorado and Iowa are clear on how it is defined (but then vague on what trade-offs are to be accepted against other objectives).

The result of such multiple objectives, mostly vaguely defined and with no guidance on trade-offs, is that it is easy to come up with the heavily gerrymandered maps we have seen and the resulting strong bias in favor of one political party over the other.  Any district can be rationalized in terms of at least one of the vague objectives (such as preserving a “community of interest”).  These are loopholes which allow the politicians to draw maps favorable to themselves, and should be eliminated.

d)  Racial Preferences: The US has a long history of using gerrymandering (as well as other measures) to effectively disenfranchise minority groups, in particular African-Americans.  This has been especially the case in the American South, under the Jim Crow laws that were in effect through to the 1960s.  The Voting Rights Act of 1965 aimed to change this.  It required states (in particular under amendments to Section 2 passed in 1982 when the Act was reauthorized) to ensure minority groups would be able to have an effective voice in their choice of political representatives, including, under certain circumstances, through the creation of congressional and other legislative districts where the previously disenfranchised minority group would be in the majority (“majority-minority districts”).

However, it has not worked out that way.  Indeed, the creation of majority-minority districts, with African-Americans packed into as small a number of districts as possible and with the rest then scattered across a large number of remaining districts, is precisely what one would do under classic gerrymandering (packing and cracking) designed to limit, not enable, the political influence of such groups.  With the passage of these amendments to the Voting Rights Act in 1982, and then a Supreme Court decision in 1986 which upheld this (Thornburg v. Gingles), Republicans realized in the redistricting following the 1990 census that they could then, in those states where they controlled the process, use this as a means to gerrymander districts to their political advantage.  Newt Gingrich, in particular, encouraged this strategy, and the resulting Republican gains in the South in 1992 and 1994 were an important factor in leading to the Republican take-over of the Congress following the 1994 elections (for the first time in 40 years), with Gingrich then becoming the House Speaker.

Note also that while the Supreme Court, in a 5-4 decision in 2013, essentially gutted a key section of the Voting Rights Act, the section they declared to be unconstitutional was Section 5.  This was the section that required pre-approval by federal authorities of changes in voting statutes in those jurisdictions of the country (mostly the states of the South) with a history of discrimination as defined in the statute.  Left in place was Section 2 of the Voting Rights Act, the section under which the gerrymandering of districts on racial lines has been justified.  It is perhaps not surprising that Republicans have welcomed keeping this Section 2 while protesting Section 5.

One should also recognize that this racial gerrymandering of districts in the South has not led to most African-Americans in the region being represented in Congress by African-Americans.  One can calculate from the raw data (reported here in Ballotpedia, based on US Census data) that as of 2015, 12 of the 71 congressional districts in the core South (Louisiana, Mississippi, Alabama, Georgia, South Carolina, North Carolina, Virginia, and Tennessee) had a majority of African-American residents.  These consisted of a single district in each of the states, except for two in North Carolina and four in Georgia.  But the majority of African-Americans in those states did not live in those twelve districts.  Of the 13.2 million African-Americans in those eight states, just 5.0 million lived in those twelve districts, while 8.2 million were scattered around the remaining districts.  By packing as many African-Americans as possible into a small number of districts, the Republican legislators were able to create a large number of safe districts for their own party, and the African-Americans scattered across those safe districts effectively had little say in who was then elected.

The Voting Rights Act was an important measure forward, drafted in reaction to the Jim Crow laws that had effectively undermined the right to vote of African-Americans.  And defined relative to the Jim Crow system, it was progress.  However, relative to a system that draws up district lines in a fair and unbiased manner, it would be a step backwards.  A system where minorities are packed into a small number of districts, with the rest then scattered across most of the districts, is just standard gerrymandering designed to minimize, not to ensure, the political rights of the minority groups.

E.  Conclusion

Politicians drawing district lines to favor one party and to ensure their own re-election fundamentally undermines democracy.  Supreme Court justices have themselves called it “distasteful”.  However, to address gerrymandering the court has sought some measure which could be used to ascertain whether the resulting voting outcomes were biased to a degree that could be considered unconstitutional.

But this is not the right question.  One does not judge whether other aspects of the voting process are fair by asking whether the resulting outcomes were, by some measure, “excessively” affected.  It is not clear why such an approach, focused on vote outcomes, should apply to gerrymandering.  Rather, the focus should be on whether the process followed was fair and unbiased.

And one can certainly define a fair and unbiased process to draw district lines.  The key is that the process, once established, should be automatic and follow the agreed set of basic principles that define what the districts should be – that they should be of similar population, compact, contiguous, and where possible and consistent with these principles, follow the lines of existing political jurisdictions.

One such process was outlined above.  But there are other possibilities.  The key is that the courts should require, in the name of ensuring a fair vote, that states must decide on some such process and implement it.  And the citizenry should demand the same.

Impact of the 1994 Assault Weapons Ban on Mass Shootings: An Update, Plus What To Do For a Meaningful Reform

A.  Introduction

An earlier post on this blog (from January 2013, following the horrific shooting at Sandy Hook Elementary School in Connecticut), looked at the impact of the 1994 Federal Assault Weapons Ban on the number of (and number of deaths from) mass shootings during the 10-year period the law was in effect.  The data at that point only went through 2012, and with that limited time period one could not draw strong conclusions as to whether the assault weapons ban (with the law as written and implemented) had a major effect.  There were fewer mass shootings over most of the years in that 1994 to 2004 period, but 1998 and 1999 were notable exceptions.

There has now been another horrific shooting at a school – this time at Marjory Stoneman Douglas High School in Parkland, Florida.  There are once again calls to limit access to the military-style semiautomatic assault weapons that have been used in most of these mass shootings (including the ones at Sandy Hook and Stoneman Douglas).  Yet essentially nothing positive has been done following the Sandy Hook shootings.  Indeed, a number of states passed laws that made such weapons even more readily available than before.  And rather than limiting access to such weapons, the NRA response following Sandy Hook was that armed guards should be posted at every school.  There are, indeed, now more armed guards at our schools.  Yet an armed guard at Stoneman Douglas did not prevent this tragedy.

With the passage of time, we now have five more years of data than we had at the time of the Sandy Hook shooting.  With this additional data, can we now determine with more confidence whether the Assault Weapons Ban had an impact, with fewer shooting incidents and fewer deaths from such shootings?

This post will look at this.  With the additional five years of data, it now appears clear that the 1994 to 2004 period did represent a break in the sadly rising trend, with the reduction clearest in the number of fatalities from, and total victims of, such mass shootings.  This was true even though the 1994 Assault Weapons Ban was a decidedly weak law, with a number of loopholes that allowed continued access to such weapons for those who wished to obtain them.  Any new law should address those loopholes, and I will discuss at the end of this post a few measures that would make such a ban more meaningful.

B.  The Number of Mass Shootings by Year

The Federal Assault Weapons Ban (formally the “Public Safety and Recreational Firearms Use Protection Act”, and part of a broader crime control bill) was passed by Congress and signed into law on September 13, 1994.  The Act banned the sale of any newly manufactured or imported “semiautomatic assault weapon” (as defined by the Act), as well as of newly manufactured or imported large capacity magazines (holding more than 10 rounds of ammunition).  The Act had a sunset provision where it would be in effect for ten years, after which it could be modified or extended.

However, it was a weak ban, with many loopholes.  First of all, there was a grandfather clause that allowed manufacturers and others to sell all of their existing inventory.  Not surprisingly, manufacturers scaled up production sharply while the ban was being debated, as those inventories could later be sold – and they were.  Second, and related to this, there was no constraint on shops or individuals selling weapons that had been manufactured before the start date, provided only that the weapons were legally owned at the time the law went into effect.  Third, “semiautomatic assault weapons” (which included handguns and certain shotguns, in addition to rifles such as the AR-15) were defined quite precisely in the Act.  But with that precision, gun manufacturers could make what were essentially cosmetic changes, and the new weapons were then not subject to the Act.  And fourth, with the sunset provision after 10 years (i.e. to September 12, 2004), the Republican-controlled Congress of 2004 (and President George W. Bush) could simply allow the Act to expire, with nothing done to replace it.  And they did.

The ban was therefore weak.  But it is still of interest to see whether even such a weak law might have had an impact on the number of, and severity of, mass shootings during the period it was in effect.

The data used for this analysis were assembled by Mother Jones, the investigative newsmagazine and website.  The data are available for download in spreadsheet form, and constitute the most thorough and comprehensive such dataset publicly available.  Sadly, the US government has not assembled and made available anything similar.  A line in the Mother Jones spreadsheet is provided for each mass shooting incident in the US since 1982, with copious information on each incident (as could be gathered from contemporaneous news reports), including the weapons used when reported.  I would encourage readers to browse through the spreadsheet to get a sense of mass shootings in America, the details of which are all too often soon forgotten.  My analysis here is based on various calculations one can derive from this raw data.

This dataset (through 2012) was used in my earlier blog post on the impact of the Assault Weapons Ban, and has now been updated with shootings through February 2018 (as I write this).  To be included, a mass shooting incident was defined by Mother Jones as a shooting in a public location in which at least four people were killed (other than the killer himself if he also died – and note it is almost always a he).  This definition excludes incidents in a private home, which are normally cases of domestic violence, as well as shootings in the context of a conventional crime (such as an armed robbery, or from gang violence).  While other possible definitions of what constitutes a “mass shooting” could be used, Mother Jones argues (and I would agree) that this definition captures well what most people would consider a “mass shooting”.  It covers only a small subset of all those killed by guns each year, but it is a particularly horrific set.

There was, however, one modification in the updated file, which I adjusted for.  Up through 2012, the definition was as above and included all incidents where four or more people died (other than the killer).  In 2013, the federal government started to refer to mass shootings as those events where three or more people were killed (other than the killer), and Mother Jones adopted this new criterion for the mass shootings it recorded for 2013 and later.  But this added a number of incidents that would not have been included under the earlier criterion (of four or more killed), and would bias any analysis of the trend.  Hence I excluded those cases in the charts shown here.  Including incidents with exactly three killed would have added no additional cases in 2013, but one additional in 2014, three additional in 2015, two additional in 2016, and six additional in 2017 (and none through end-February in 2018).  There would have been a total of 36 additional fatalities (three for each of the 12 additional cases), and 80 additional victims (killed and wounded).
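
For those who wish to replicate this adjustment, a minimal sketch in Python follows.  It assumes the Mother Jones spreadsheet has been saved locally as mother_jones_mass_shootings.csv with columns named year and fatalities – the actual file and column names in the download may differ, so treat this as illustrative rather than as the exact code behind the charts here:

```python
# Minimal sketch, assuming the Mother Jones data has been saved locally.
# The file name and column names ("year", "fatalities") are assumptions;
# check them against the actual download.
import pandas as pd

df = pd.read_csv("mother_jones_mass_shootings.csv")

# Apply the original criterion consistently across all years: four or
# more killed (excluding the shooter).  This drops the incidents with
# exactly three killed that Mother Jones added for 2013 and later
# under the newer federal definition.
consistent = df[df["fatalities"] >= 4]

# Number of qualifying incidents per year
incidents_per_year = consistent.groupby("year").size()
print(incidents_per_year)
```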

What, then, was the impact of the assault weapons ban?  We will first look at this graphically, as trends are often best seen by eye, and then take a look at some of the numbers, as they can provide better precision.

The chart at the top of this post shows the number of mass shooting events each year from 1982 through 2017, plus the events so far in 2018 (through end-February).  The numbers were low through the 1980s (zero, one, or two a year), but then rose.  The number of incidents per year was then generally lower during the period the Assault Weapons Ban was in effect, but with the notable exceptions of 1998 (three incidents) and especially 1999 (five).  The Columbine High School shooting was in 1999, when 13 died and 24 were wounded.

The number of mass shootings then rose in the years after the ban was allowed to expire.  This was not yet fully clear when one only had data through 2012, but the more recent data show that the trend is, sadly, clearly upward.  The data suggest that the number of mass shooting incidents was low in the 1980s but began to rise in the early 1990s; that there was then some fallback during the decade the Assault Weapons Ban was in effect (with 1998 and 1999 as exceptions); but that with the lifting of the ban the number of mass shooting incidents began to grow again.  (For those statistically minded, a simple linear regression for the full 1982 to 2017 period indicates an upward trend with a t-statistic of a highly significant 4.6 – any t-statistic of greater than 2.0 is generally taken to be statistically significant.)
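
Continuing the sketch above, such a trend regression can be run as follows.  This is again only a sketch: it uses the annual counts from the filtered data, filling years with no qualifying incident with zeros:

```python
# Sketch of the trend regression: annual incident counts on year.
import numpy as np
from scipy.stats import linregress

years = np.arange(1982, 2018)  # the full 1982-2017 period

# Years with no qualifying incident get a count of zero.
counts = incidents_per_year.reindex(years, fill_value=0).to_numpy()

res = linregress(years, counts)
t_stat = res.slope / res.stderr  # t-statistic on the slope coefficient
print(f"slope = {res.slope:.3f} incidents per year, t = {t_stat:.1f}")
```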

C.  The Number of Fatalities and Number of Victims in Mass Shooting Incidents 

These trends are even clearer when one examines what happened to the total number of those killed each year, and the total number of victims (killed and wounded).

First, a chart of fatalities from mass shootings over time shows:

[Chart: Fatalities in mass shootings per year, 1982 to 2018]

Fatalities fluctuated within a relatively narrow band prior to 1994, but then, with the notable exception of 1999, fell while the Assault Weapons Ban was in effect.  And they rose sharply after the ban was allowed to expire.  There is still a great deal of year-to-year variation, but the increase over the last decade is clear.

And for the total number of victims:

[Chart: Total victims (killed and wounded) in mass shootings per year, 1982 to 2018]

One again sees a significant reduction during the period the Assault Weapons Ban was in effect (with again the notable exception of 1999, and now 1998 as well).  The number of victims then rose in most years following the end of the ban, and went off the charts in 2017.  This was due largely to the Las Vegas shooting in October 2017, where there were 604 victims of the shooter.  But even excluding the Las Vegas case, there were still 77 victims of mass shooting events in 2017, more than in any year prior to 2007 (other than 1999).

D.  The Results in Tables

One can also calculate the averages per year for the pre-ban period (13 years, from 1982 to 1994), the period of the ban (September 1994 to September 2004), and then for the post-ban period (again 13 years, from 2005 to 2017):

Number of Mass Shootings and Their Victims – Averages per Year

  Period       Shootings   Fatalities   Injured   Total Victims
  1982-1994       1.5         12.4        14.2         26.6
  1995-2004       1.5          9.6        10.1         19.7
  2005-2017       3.8         38.6        71.5        110.2

Note:  One shooting in December 2004 (following the lifting of the Assault Weapons Ban in September 2004) is combined here with the 2005 numbers.  And the single shooting in 1994 was in June, before the ban went into effect in September.

The average number of fatalities per year, as well as the number injured and hence the total number of victims, all fell during the period of the ban.  They all then jumped sharply once the ban was lifted.  While one should acknowledge that these are all correlations in time, where much else was also going on, these results are consistent with the ban having a positive effect on reducing the number killed or wounded in such mass shootings.

The number of mass shooting events per year also stabilized during the period the ban was in effect (at an average of 1.5 per year).  That is, while the number of mass shooting events per year was the same as before, their lethality was less.  Moreover, the events had been following a rising trend before the ban; compared to the immediately preceding half-decade, rather than the full 13-year pre-ban period, the number of events actually fell back while the ban was in effect.  And the number of mass shootings then jumped sharply after the ban was lifted.
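
The period averages in the table above can be computed along the following lines – again only a sketch, building on the filtered data from earlier and assuming an injured column in addition to fatalities.  Note that the fine-tuning described in the note to the table (assigning the December 2004 shooting to the post-ban period, and the June 1994 shooting to the pre-ban period) would require a date-level adjustment not shown here:

```python
# Sketch of the per-year averages by period.  The "injured" column
# name is an assumption; check it against the actual download.
def period(year):
    if year <= 1994:
        return "1982-1994"
    if year <= 2004:
        return "1995-2004"
    return "2005-2017"

by_period = consistent.assign(period=consistent["year"].map(period))
years_in_period = {"1982-1994": 13, "1995-2004": 10, "2005-2017": 13}

for p, grp in by_period.groupby("period"):
    n = years_in_period[p]
    print(p,
          round(len(grp) / n, 1),                 # shootings per year
          round(grp["fatalities"].sum() / n, 1),  # fatalities per year
          round(grp["injured"].sum() / n, 1))     # injured per year
```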

The data also allow us to calculate the average number of victims per mass shooting event, broken down by the type of weapon used:

Average Number of Victims per Mass Shooting, by Weapon Used

  Weapons Used                                     Number of Shootings   Avg Fatalities   Avg Injured   Avg Total Victims
  Semiauto Rifle Used                                      26                 13.0            34.6            47.6
  Semiauto Rifle Not Used                                  59                  7.5             5.6            13.1
  Semiauto Handgun Used                                    63                 10.0            17.5            27.5
  Semiauto Handgun (but Not Semiauto Rifle) Used           48                  7.7             6.0            13.7
  No Semiauto Weapon Used                                  11                  6.6             4.0            10.6

There were 26 cases where the dataset Mother Jones assembled allowed one to identify specifically that a semiautomatic rifle was used.  (Some news reports were not clear, saying only that a rifle was used.  Such cases were not counted here.)  This was out of a total of 85 mass shooting events where four or more were killed.  But the use of semiautomatic rifles proved to be especially deadly.  On average, there were 13 fatalities per mass shooting when one could positively identify that a semiautomatic rifle was used, versus 7.5 per shooting when it was not.  And there were close to 48 total victims per mass shooting on average when a semiautomatic rifle was used, versus 13 per shooting when it was not.

The figures when a semiautomatic handgun was used (from what could be identified in the news reports) fall roughly halfway between these two.  But note that there is a great deal of overlap between mass shootings where a semiautomatic handgun was used and those where a semiautomatic rifle was also used.  Mass shooters typically take multiple weapons with them as they plan out their attacks, including semiautomatic handguns along with their semiautomatic rifles.  The fourth line in the table shows the figures when a semiautomatic handgun was used but not also a semiautomatic rifle.  These figures are similar to the averages in all of the cases where a semiautomatic rifle was not used (the second line in the table).

The numbers of fatalities and injured are lowest, however, when no semiautomatic weapon was used at all.  Unfortunately, in only 11 of the 85 mass shootings (13%) was neither a semiautomatic rifle nor a semiautomatic handgun used.  And these 11 might include a few cases where the news reports did not permit a positive identification that a semiautomatic weapon had been used.
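
The weapon-type breakdown can be computed similarly, though with more hand-work: the Mother Jones file records the weapons used as free text from news reports, so one would first have to code boolean flags for whether a semiautomatic rifle or handgun could be positively identified.  The sketch below assumes hypothetical columns semiauto_rifle and semiauto_handgun have been coded in this way:

```python
# Sketch of the weapon-type breakdown.  The boolean columns
# "semiauto_rifle" and "semiauto_handgun" are hypothetical; they would
# have to be hand-coded from the free-text weapon descriptions.
rifle = consistent["semiauto_rifle"].fillna(False).astype(bool)
handgun = consistent["semiauto_handgun"].fillna(False).astype(bool)

groups = {
    "Semiauto rifle used": rifle,
    "Semiauto rifle not used": ~rifle,
    "Semiauto handgun used": handgun,
    "Handgun but not semiauto rifle": handgun & ~rifle,
    "No semiauto weapon used": ~rifle & ~handgun,
}

for name, mask in groups.items():
    grp = consistent[mask]
    victims = grp["fatalities"] + grp["injured"]
    print(f"{name}: n = {len(grp)}, avg victims = {victims.mean():.1f}")
```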

E.  What Would Be Needed for a Meaningful Ban

It thus appears that the 1994 Assault Weapons Ban, as weak as it was, had a positive effect on saving lives.  But as noted before, it was flawed, with a number of loopholes which meant that the “ban” was far from a true ban.  Some of these might have been oversights, or issues only learned with experience, but I suspect most reflected compromises that were necessary to get anything approved by Congress.  That will certainly remain an issue.

But if one were to draft a law addressing these issues, what are some of the measures one would include?  I will make a few suggestions here, but this list should not be viewed as anything close to comprehensive:

a)  First, there should not be a 10-year (or any period) sunset provision.  A future Congress could amend the law, or even revoke it, as with any legislation, but this would then require specific action by that future Congress.  But with a sunset provision, it is easy simply to do nothing, as the Republican-controlled Congress did in 2004.

b)  Second, with hindsight one can see that the 1994 law made a mistake by defining precisely what was considered a “semiautomatic” weapon.  This made it possible for manufacturers later to make what were essentially cosmetic changes to the weapons, and then make and sell them.  Rather, a semiautomatic weapon should be defined in any such law by its essential feature, which is that one can fire such a weapon repeatedly simply by pulling the trigger once for each shot, with the weapon loading itself.

c)  Third, fully automatic weapons (those which fire continuously as long as the trigger is held) have been banned since 1986 (if manufactured after May 19, 1986, the day President Reagan signed this into law).  But “bump stocks” have not been banned.  Bump stocks are devices that effectively convert a semiautomatic weapon into a fully automatic one.  Following the horrific shooting in Las Vegas on October 1, 2017, in which 58 were killed and 546 injured, and in which the shooter used bump stocks to convert his semiautomatic rifles (he had many) into what were effectively fully automatic weapons, there have been calls for bump stocks to be banned.  This should be done, and indeed it is now being recognized that a change in existing law may not even be necessary.  Attorney General Jeff Sessions said on February 27 that the Department of Justice is re-examining the issue, and implied that the department would “soon” announce regulations recognizing that a semiautomatic weapon equipped with a bump stock meets the definition of a fully automatic weapon.

d)  Fourth, a major problem with the 1994 Assault Weapons Ban, as drafted, was that it banned only the sale of semiautomatic weapons newly manufactured (or imported) from the date the act was signed into law – September 13, 1994.  Manufacturers and shops could legally sell any such weapons produced before then.  Not surprisingly, manufacturers ramped up production (and imports) sharply in the months the Act was being debated in Congress, which then provided an ample supply for a substantial period after the law technically went into effect.

But one could set an earlier date of effectiveness, with the ban covering weapons manufactured or imported from that earlier date.  This is commonly done in tax law.  That is, tax laws being debated during some year will often be made effective for transactions starting from the beginning of the year, or from when the new laws were first proposed, so as not to induce negative actions designed to circumvent the purpose of the new law.

e)  Fifth, the 1994 Assault Weapons Ban allowed the sale to the public of any weapons legally owned before the law went into effect on September 13, 1994 (including all those in inventory).  This is related to, but different from, the issue discussed immediately above.  The issue here is that all such weapons, including those manufactured many years before, could then be sold and resold for as long as those weapons existed.  This could continue for decades.  And with millions of such weapons now in the US, it would be many decades before the supply of such weapons would be effectively reduced.

To accelerate this, one could instead create a government-funded program to purchase (and then destroy) any such weapons that the owner wished to dispose of.  And one would couple this with a ban on the sale of any such weapons to anyone other than the government.  There could be no valid legal objection to this, as any sales would be voluntary (although I have no doubt the NRA would object), and it would be consistent with the ban on the sale of any newly manufactured semiautomatic weapon.  One would also have the government buy the weapons at a generous price – say the original price paid for the weapon (or the list price it then had), without any reduction for depreciation.

Semiautomatic weapons are expensive.  An assault rifle such as the AR-15 can easily cost $1,000.  And one would expect that, as those with such weapons in their households grow older, many will come to recognize that such a weapon does not provide security.  Numerous studies have shown (see, for example, here, here, here, and here) that those most likely to be harmed by weapons in a household are the owners themselves or their loved ones.  As gun owners mature, many are likely to see both the danger in keeping such weapons at home and the attractiveness of disposing of them legally at a good price.  Over time, this could lead to a substantial reduction in the stock of the types of weapons that have been used in so many of the mass shootings.

F.  Conclusion

Semiautomatic weapons are of no use in a civilian setting other than to massacre innocent people.  They are of no use in self-defense:  One does not walk down the street, or shop in the aisles of a Walmart or a Safeway, with an AR-15 strapped to one’s back.  One does not open the front door each time the doorbell rings aiming an AR-15 at whoever is there.  Nor are such weapons of any use in hunting.  First, they are not terribly accurate.  And second, if one succeeded in hitting the animal with multiple shots, all one would have is a bloody mess.

Such weapons are used in the military precisely because they are good at killing people.  But for precisely the same reason as fully automatic weapons have been banned since 1986 (and tightly regulated since 1934), semiautomatic weapons should be similarly banned.

The 1994 Assault Weapons Ban sought to do this.  However, it was allowed to expire in 2004.  It also had numerous loopholes which lessened the effectiveness it could have had.  Despite this, the number of those killed and injured in mass shootings fell back substantially while that law was in effect, and then jumped after it expired.  And the number of mass shooting events per year leveled off or fell while it was in effect (depending on the period it is being compared to), and then also jumped once it expired.

There are, however, a number of ways a new law banning such weapons could be written to close off those loopholes.  A partial list was discussed above.  I fully recognize, however, that the likelihood of such a law passing in the current political environment, where Republicans control both the Senate and the House as well as the presidency, is close to nil.  One can hope that at some point in the future the political environment will change to the point where an effective ban on semiautomatic weapons can be passed.  After all, President Reagan, the hero of Republican conservatives, signed into law the 1986 act that banned fully automatic weapons.  Sadly, I expect many more school children will die from such shootings before this happens.