End Gerrymandering by Focusing on the Process, Not on the Outcomes

A.  Introduction

There is little that is as destructive to a democracy as gerrymandering.  As has been noted by many, with gerrymandering the politicians are choosing their voters rather than the voters choosing their political representatives.

The diagrams above, in schematic form, show how gerrymandering works.  Suppose one has a state or region with 50 precincts, with 60% that are fully “blue” and 40% that are fully “red”, and where 5 districts need to be drawn.  If the blue party controls the process, they can draw the district lines as in the middle diagram, and win all 5 (100%) of the districts, with just 60% of the voters.  If, in contrast, the red party controls the process for some reason, they could draw the district boundaries as in the diagram on the right.  They would then win 3 of the 5 districts (60%) even though they only account for 40% of the voters.  It works by what is called in the business “packing and cracking”:  With the red party controlling the process, they “pack” as many blue voters as possible into a small number of districts (two in the example here, each with 90% blue voters), and then “crack” the rest by scattering them around in the remaining districts, each as a minority (three districts here, each with 40% blue voters and 60% red).
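
For those who find it easier to see in code, here is a minimal sketch of the arithmetic in this schematic example.  The precinct counts and district assignments below are purely hypothetical, constructed only to reproduce the 60/40 example in the diagrams:

```python
# Schematic example: 50 precincts, 60% solidly "blue" and 40% solidly "red",
# to be divided into 5 districts of 10 precincts each.  Each district is
# represented as (blue_precincts, red_precincts).

def districts_won_by_blue(plan):
    """Count the districts in which blue precincts outnumber red precincts."""
    return sum(1 for blue, red in plan if blue > red)

# Map drawn by the blue party: every district is 60% blue, so blue wins all 5.
blue_drawn_map = [(6, 4)] * 5

# Map drawn by the red party: blue voters are "packed" into 2 districts (90%
# blue each) and "cracked" across the other 3 (40% blue each).
red_drawn_map = [(9, 1), (9, 1), (4, 6), (4, 6), (4, 6)]

print(districts_won_by_blue(blue_drawn_map))  # 5 of 5 seats with 60% of voters
print(districts_won_by_blue(red_drawn_map))   # 2 of 5 seats despite 60% of voters
```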

Gerrymandering leads to cynicism among voters, who come to the well-founded view that their votes just do not matter.  Possibly even worse, gerrymandering leads to increased polarization, as candidates in districts drawn to be safe for one party or the other do not need to worry about appealing to voters of the opposite party.  Rather, their main concern is to avoid being challenged in a primary by a more extreme candidate from their own party, in a contest where only those of their own party (and normally mostly just the more extreme voters in their party) will vote.  And this is exactly what we have seen, especially since 2010, when gerrymandering became more sophisticated, widespread, and egregious than ever before.

Gerrymandering has grown in recent decades both because computing power and data sources have grown increasingly sophisticated, and because a higher share of states have had a single political party able to control the process in full (i.e. with both legislative chambers, and the governor when a part of the process, all under a single party’s control).  And especially following the 2010 elections, this has favored the Republicans.  As a result, while there has been one Democratic-controlled state (Maryland) on common lists of the states with the most egregious gerrymandering, most of the states with extreme gerrymandering were Republican-controlled.  Thus, for example, Professor Samuel Wang of Princeton, founder of the Princeton Gerrymandering Project, has identified a list of the eight most egregiously gerrymandered states (by a set of criteria he has helped develop), where one (Maryland) was Democratic-controlled, while the remaining seven were Republican.  Or the Washington Post calculated across all states an average of the degree of compactness of congressional districts:  Of the 15 states with the least compact districts, only two (Maryland and Illinois) were liberal Democratic-controlled states.  And in terms of the “efficiency gap” measure (which I will discuss below), seven states were gerrymandered following the 2010 elections in such a way as to yield two or more congressional seats each in their favor.  All seven were Republican-controlled.

With gerrymandering increasingly common and extreme, a number of cases have gone to the Supreme Court to try to stop it.  However, the Supreme Court has failed as yet to issue a definitive ruling ending the practice.  Rather, it has so far skirted the issue by resolving cases on more narrow grounds, or by sending cases back to lower courts for further consideration.  This may soon change, as the Supreme Court has agreed to take up two cases (affecting lines drawn for congressional districts in North Carolina and in Maryland), with oral arguments scheduled for March 26, 2019.  But it remains to be seen if these cases will lead to a definitive ruling on the practice of partisan gerrymandering or not.

This is not because of a lack of concern by the court.  Even conservative Justice Samuel Alito has conceded that “gerrymandering is distasteful”.  But he, along with the other conservative justices on the court, has ruled against the court taking a position on the gerrymandering cases brought before it, in part, at least, out of the concern that they do not have a clear standard by which to judge whether any particular case of gerrymandering was constitutionally excessive.  This goes back to a 2004 case (Vieth v. Jubelirer) in which the four most conservative justices of the time, led by Justice Antonin Scalia, opined that there could not be such a standard, while the four liberal justices argued that there could.  Justice Anthony Kennedy, in the middle, issued a concurring opinion agreeing with the conservative justices that there was not then an acceptable standard before them, but adding that he would not preclude the possibility of such a standard being developed at some point in the future.

Following this 2004 decision, political scientists and other scholars have sought to come up with such a standard.  Many have been suggested, such as a set of three tests proposed by Professor Wang of Princeton, or measures that compare the share of seats won to the share of votes cast, and more.  Probably most attention in recent years has been given to the “efficiency gap” measure proposed by Professor Nicholas Stephanopoulos and Eric McGhee.  The efficiency gap is the gap between the two main parties in the “wasted votes” each party received in some past election in the state (as a share of total votes in the state), where a party’s wasted votes are the sum of all the votes cast for its losing candidates, plus the votes in excess of the 50% needed to win in districts where its candidate won.  This provides a direct measure of the two basic tactics of gerrymandering, as described above, of “packing” as many voters of one party as possible into a small number of districts (where they might receive 80 or 90% of the votes, but with all those above 50% “wasted”), and “cracking” (by splitting up the remaining voters of that party across a large number of districts where they will each be in a minority and hence will lose, with those votes then also “wasted”).
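
As an illustration of how the measure is computed, here is a short sketch applying the efficiency gap definition to the red-drawn map from the schematic example above (assuming, hypothetically, 100 voters per precinct; the vote counts are illustrative only):

```python
def efficiency_gap(districts):
    """Efficiency gap: the difference between the two parties' wasted votes,
    as a share of all votes cast.  A wasted vote is any vote cast for a losing
    candidate, plus any vote for the winner beyond the 50% needed to win."""
    wasted_blue = wasted_red = total = 0.0
    for blue, red in districts:
        votes = blue + red
        total += votes
        needed = votes / 2
        if blue > red:
            wasted_blue += blue - needed   # winner's surplus votes
            wasted_red += red              # all of the loser's votes
        else:
            wasted_red += red - needed
            wasted_blue += blue
    return (wasted_blue - wasted_red) / total

# Red-drawn map from the diagrams: blue packed into 2 districts at 90%,
# cracked across 3 districts at 40%, with 1,000 voters per district.
red_drawn_map = [(900, 100), (900, 100), (400, 600), (400, 600), (400, 600)]
print(efficiency_gap(red_drawn_map))  # 0.30: blue wastes 30% of all votes more than red
```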

But there are problems with each of these measures, including the widely touted efficiency gap measure.  It has often been the case in recent years, in our divided society, that like-minded voters live close to each other, and particular districts in the state will then often see the winner receive a very high share of the votes.  Thus, even with no overt gerrymandering, the efficiency gap as measured will appear large.  Furthermore, at the opposite end of this spectrum, the measure is extremely sensitive when a few districts are close to 50/50.  A shift of just a few percentage points in the vote will then flip a district from one party to the other, and the losing party will see a big jump in its share of wasted votes (the 49% it received in that district).

There is, however, a far more fundamental problem.  And that is that this is simply the wrong question to ask.  With all due respect to Justice Kennedy, and recognizing also that I am an economist and not a lawyer, I do not understand why the focus here is on the voting outcome, rather than on the process by which the district lines were drawn.  The voting outcome is not the standard by which other aspects of voter rights are judged.  Rather, the focus is on whether the process followed was fair and unbiased, with the outcome then whatever it is.

I would argue that the same should apply when district lines are drawn.  Was the process followed fair and unbiased?  The way to ensure that would be to remove the politicians from the process (both directly and indirectly), and to follow instead an automatic procedure by which district lines are drawn in accord with a small number of basic principles.

The next section below will first discuss the basic point that the focus when judging fairness and lack of bias should not be on whether we can come up with some measure based on the vote outcomes, but rather on whether the process that was followed to draw the district lines was fair and unbiased or not.  The section following will then discuss a particular process that illustrates how this could be done.  It would be automatic, and would produce a fair and unbiased drawing of voting district lines that meets the basic principles on which such a map should be based (districts of similar population, compactness, contiguity, and, to the extent consistent with these, respect for the boundaries of existing political jurisdictions such as counties or municipalities).  And while I believe this particular process would be a good one, I would not exclude that others are possible.  The important point is that the courts should require the states to follow some such process, and from the example presented we see that this is indeed feasible.  It is not an impossible task.

The penultimate section of the post will then discuss a few points that arise with any such system, and their implications, and end with a brief section summarizing the key points.

B.  A Fair Voting System Should Be Judged Based on the Process, Not on the Outcomes

Voting rights are fundamental in any democracy.  But in judging whether some aspect of the voting system is proper, we do not try to determine whether (by some defined specific measure) the resulting outcomes were improperly skewed.

Thus, for example, we take as a basic right that our ballot may be cast in secret.  No government official, nor anyone else for that matter, can insist on seeing how we voted.  Suppose that some state passed a law saying a government-appointed official will look over the shoulder of each of us as we vote, to determine whether we did it “right” or not.  We would expect the courts to strike this down, as an inappropriate process that contravenes our basic voting rights.  We would not expect the courts to say that they should look at the subsequent voting outcomes, and try to come up with some specific measure which would show, with certainty, whether the resulting outcomes were excessively influenced or not.  That would of course be absurd.

As another absurd example, suppose some state passed a law granting those registered in one of the major political parties, but not those registered in the other, access to more days of early voting.  This would be explicitly partisan, and one would assume that the courts would not insist on limiting their assessment to an examination of the later voting outcomes to see whether, by some proposed measure, the resulting outcomes were excessively affected.  The voting system, to be fair, should not lead to a partisan advantage for one party or the other.  But gerrymandering does precisely that.

Yet the courts have so far declined to issue a definitive ruling on partisan gerrymandering, and have asked instead whether there might be some measure to determine, in the voting outcomes, whether gerrymandering had led to an excessive partisan advantage for the party drawing the district lines.  And there have been open admissions by senior political figures that district borders were in fact drawn up to provide a partisan advantage.  Indeed, principals involved in the two cases now before the Supreme Court have openly said that partisan advantage was the objective.  In North Carolina, David Lewis, the Republican chair of the committee in the state legislature responsible for drawing up the district lines, said during the debate that “I think electing Republicans is better than electing Democrats. So I drew this map to help foster what I think is better for the country.”

And in the case of Maryland, the Democratic governor of the state in 2010 at the time the congressional district lines were drawn, Martin O’Malley, spoke out in 2018 in writing and in interviews openly acknowledging that he and the Democrats had drawn the district lines for partisan advantage.  But he also now said that this was wrong and that he hoped the Supreme Court would rule against what they had done.

But how to remove partisanship when district lines are drawn?  As long as politicians are directly involved, with their political futures (and those of their colleagues) dependent on the district lines, it is human nature that biases will enter.  And it does not matter whether the biases are conscious and openly expressed, or unconscious and denied.  Furthermore, although possibly diminished, such biases will still enter even with independent commissions drawing the district lines.  There will be some political process by which the commissioners are appointed, and those who are appointed, even if independent, will still be human and will have certain preferences.

The way to address this would rather be to define some automatic process which, given the data on where people live and the specific principles to follow, will be able to draw up district lines that are both fair (follow the stated principles) and unbiased (are not drawn up in order to provide partisan advantage to one party).  In the next section I will present a particular process that would do this.

C.  An Automatic Process to Draw District Lines that are Fair and Unbiased

The boundaries for fair and unbiased districts should be drawn in accord with the following set of principles (and no more):

a)  One Person – One Vote:  Each district should have a similar population;

b)  Contiguity:  Each district must be geographically contiguous.  That is, one continuous boundary line will encompass the entire district and nothing more;

c)  Compactness:  While remaining consistent with the above, districts should be as compact as possible under some specified measure of compactness.

And while not such a fundamental principle, a reasonable objective is also, to the extent possible consistent with the basic principles above, that the district boundaries drawn should follow the lines of existing political jurisdictions (such as of counties or municipalities).

There will still be a need for decisions to be made on the basic process to follow and then on a number of the parameters and specific rules required for any such process.  Individual states will need to make such decisions, and can do so in accordance with their traditions and with what makes sense for their particular state.  But once these “rules of the game” are fully specified, there should then be a requirement that they will remain locked in for some lengthy period (at least to beyond whenever the next decennial redistricting will be needed), so that games cannot be played with the rules in order to bias a redistricting that may soon be coming up.  This will be discussed further below.

Such specific decisions will need to be made in order to fully define the application of the basic principles presented above.  To start, for the one person – one vote principle the Supreme Court has ruled that a 10% margin in population between the largest and smallest districts is an acceptable standard.  And many states have indeed chosen to follow this standard.  However, a state could, if it wished, choose to use a tighter standard, such as a margin in the populations between the largest and smallest districts of no more than 8%, or perhaps 5% or whatever.  A choice needs to be made.

Similarly, a specific measure of compactness will need to be specified.  Mathematically there are several different measures that could be used, but a good one which is both intuitive and relatively easy to apply is that the sum of the lengths of all the perimeters of each of the districts in the state should be minimized.  Note that since the outside borders of the state itself are fixed, this sum can be limited just to the perimeters that are internal to the state.  In essence, since states are to be divided up into component districts (and exhaustively so), the perimeter lines that do this with the shortest total length will lead to districts that are compact.  There will not be wavy lines, nor lines leading to elongated districts, as such lines will sum to a greater total length than possible alternatives.
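
To make the measure concrete, here is a small sketch of how a plan’s total internal perimeter could be computed from the building-block units and the lengths of the borders they share.  The units and border lengths are hypothetical; in practice they would come from census geography files:

```python
def internal_perimeter(assignment, shared_borders):
    """Sum of the lengths of borders separating units assigned to different
    districts (borders along the state's outside edge are excluded, since
    they are fixed and the same for every plan)."""
    total = 0.0
    for pair, length in shared_borders.items():
        unit_a, unit_b = tuple(pair)
        if assignment[unit_a] != assignment[unit_b]:
            total += length
    return total

# Four hypothetical unit-square blocks arranged in a 2x2 grid.
shared = {frozenset({"NW", "NE"}): 1.0, frozenset({"SW", "SE"}): 1.0,
          frozenset({"NW", "SW"}): 1.0, frozenset({"NE", "SE"}): 1.0}

compact_plan = {"NW": 1, "SW": 1, "NE": 2, "SE": 2}    # split by one straight line
sprawling_plan = {"NW": 1, "SE": 1, "NE": 2, "SW": 2}  # diagonal pairing

print(internal_perimeter(compact_plan, shared))    # 2.0
print(internal_perimeter(sprawling_plan, shared))  # 4.0 -- the less compact plan
```

The plan with the shorter total internal perimeter is the more compact one, which is exactly the criterion described above.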

What, then, would be a specific process (or algorithm) which could be used to draw district lines?  I will recommend one here, which should work well and would be consistent with the basic principles for a fair and unbiased set of district boundaries.  But other processes are possible.  A state could choose some such alternative (but then should stick to it).  The important point is that one should define a fully specified, automatic, and neutral process to draw such district lines, rather than try to determine whether some set of lines, drawn based on the “judgment” of politicians or of others, was “excessively” gerrymandered based on the voting outcomes observed.

Finally, the example will be based on what would be done to draw congressional district lines in a state.  But one could follow a similar process for drawing other such district lines, such as for state legislative districts.

The process would follow a series of steps:

Step 1: The first step would be to define a set of sub-districts within each county in a state (parish in Louisiana) and municipality (in those states where municipalities hold governmental responsibilities similar to those of a county).  These sub-districts would likely be the districts for county boards or legislative councils in most of the states, and one might typically have a dozen or more of these in such jurisdictions.  When those districts are also being redrawn as part of the decennial redistricting process, they should be drawn first (based on the principles set out here), before the congressional district lines are drawn.

Each state would define, as appropriate for the institutions of that specific state, the sub-districts that will be used for the purpose of drawing the congressional district lines.  And if no such sub-jurisdictions exist in certain counties of certain states, one could draw up such sub-districts, purely for the purposes of this redistricting exercise, by dividing those counties into compact, equal-population districts (with compactness again based on minimizing the sum of the perimeters).  While the number of such sub-districts would be defined (as part of the rules set for the process) based on the population of the affected counties, a reasonable number might generally be around 12 or 15.

These sub-districts will then be used in Step 4 below to even out the congressional districts.

Step 2:  An initial division of each state into a set of tentative congressional districts would then be drawn based on minimizing the sum of the lengths of the perimeter lines for all the districts, and requiring that all of the districts in the state have exactly the same population.  Following the 2010 census, the average population in a congressional district across the US was 710,767, but the exact number will vary by state depending on how many congressional seats the state was allocated.

Step 3: This first set of district lines will not, in general, follow county and municipal lines.  In this step, each of the initial district lines would then be shifted to the county or municipal line which is geographically closest to it (as defined by minimizing the geographic area that would be shifted in going to that county or city line, in comparison to whatever the alternative jurisdiction would be).  If the populations in the resulting congressional districts are then all within the 10% margin for the populations (or whatever percent margin is chosen by the state) between the largest and the smallest districts, then one is finished and the map is final.

Step 4:  But in general, there may be one or more districts where the resulting population exceeds or falls short of the 10% limit.  One would then make use of the political subdivisions of the counties and municipalities defined in Step 1 to bring them into line.  A specific set of rules for that process would need to be specified.  One such set would be to first determine which congressional district, as then drawn, deviated most from what the mean population should be for the districts in that state.  Suppose that district had too large of a population.  One would then shift one of the political subdivisions in that district from it to whichever adjacent congressional district had the least population (of all adjacent districts).  And the specific political subdivision shifted would then be the one which would have the least adverse impact on the measure of compactness (the sum of perimeter lengths).  Note that the impact on the compactness measure could indeed be positive (i.e. it could make the resulting congressional districts more compact), if the political subdivision eligible to be shifted were in a bend in the county or city line.

If the resulting congressional districts were all now within the 10% population margin (or whatever margin the state had chosen as its standard), one would be finished.  But if this is not the case, then one would repeat Step 4 over and over as necessary, each time for whatever district was then most out of line with the 10% margin.
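
To show that Step 4 really is mechanical once the rules are fixed, here is a simplified sketch of the adjustment loop.  It assumes the earlier steps have already produced the sub-district populations, an initial assignment of sub-districts to congressional districts, an adjacency map between districts, and a function scoring the compactness impact of a move; all of the names and data structures are illustrative, and contiguity checks are omitted for brevity:

```python
def rebalance(assignment, populations, district_neighbors, compactness_cost,
              mean_pop, max_margin=0.10, max_iterations=10_000):
    """Shift sub-districts between adjacent districts until the spread between
    the largest and smallest district population is within max_margin."""
    for _ in range(max_iterations):
        district_pop = {}
        for unit, dist in assignment.items():
            district_pop[dist] = district_pop.get(dist, 0) + populations[unit]
        if max(district_pop.values()) - min(district_pop.values()) <= max_margin * mean_pop:
            return assignment  # all districts are within the allowed margin

        # The district deviating most from the mean population is adjusted first.
        worst = max(district_pop, key=lambda d: abs(district_pop[d] - mean_pop))
        if district_pop[worst] > mean_pop:
            # Too large: shed a sub-district to the least-populated neighbor.
            source, target = worst, min(district_neighbors[worst], key=district_pop.get)
        else:
            # Too small: take a sub-district from the most-populated neighbor.
            source, target = max(district_neighbors[worst], key=district_pop.get), worst

        # Among the source district's sub-districts, move the one with the
        # least adverse impact on the compactness measure (total perimeter).
        # A full implementation would also restrict candidates to sub-districts
        # bordering the target district, to preserve contiguity.
        movable = [u for u, d in assignment.items() if d == source]
        unit = min(movable, key=lambda u: compactness_cost(u, source, target, assignment))
        assignment[unit] = target
    return assignment
```

Because every rule in the loop is fixed in advance, anyone running it on the same data will arrive at the same map, which is the whole point of an automatic process.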

That is it.  The result would be contiguous and relatively compact congressional districts, each with a similar population (within the 10% margin, or whatever margin is decided upon), and following borders of counties and municipalities or of political sub-divisions within those entities.

This would of course all be done on the computer, and it can be once the rules and parameters are all decided, as there will no longer be a role for opinion nor an opportunity for political bias to enter.  And while the initial data entry will be significant (as one would need to have the populations and perimeter lengths of each of the political subdivisions, and those of the counties and municipalities that they add up to), such data are now available from standard sources.  Indeed, the data entry needed would be far less than what is typically required for the computer programs used by our politicians to draw up their gerrymandered maps.

D.  Further Remarks

A few more points:

a)  The Redistricting Process, Once Decided, Should be Locked In for a Long Period:  As was discussed above, states will need to make a series of decisions to define fully the specific process they choose to follow.  As illustrated in the case discussed above, states will need to decide on matters such as the maximum allowed margin between the populations of the largest and smallest districts (no more than 10%, by Supreme Court decision, but it could be less).  And rules will need to be set, also as in the case discussed above, on what measure of compactness to use, on the criterion for which district should be chosen first to have a sub-district shifted in order to even out the population differences, and so on.

Such decisions will have an impact on the final districts arrived at.  And some of those districts will favor Republicans and some will favor Democrats, purely by chance.  There would then be a problem if the redistricting were controlled by one party in the state, and that party (through consultants who specialize in this) tried out dozens if not hundreds of possible choices on the parameters to see which would turn out to be most advantageous to it.  While the impact would be far less than what we have now with deliberate gerrymandering, there could still be some effect.

To stem this, one should require that once choices are made on the process to follow and on the rules and other parameters needed to implement that process, there could not then be a change in that process for the immediately upcoming decennial redistricting.  Any changes would apply only to later redistrictings.  While this would not be possible for the very first application of the system, there will likely be a good deal of public attention paid to these issues initially, so an attempt to bias the system at that point would be difficult.

As noted, this is not likely to be a major problem, and any such system will not introduce the major biases we have seen in the deliberately gerrymandered maps of numerous states following the 2010 census.  But by locking in any decisions made for a long period, where any random bias in favor of one party in a map might well be reversed following the next census, there will be less of a possibility to game the system by changing the rules, just before a redistricting is due, to favor one party.

b)  Independent Commissions Do Not Suffice – They Still Need to Decide How to Draw the District Maps:  A reform that has been increasingly advocated by many in recent years is to take the redistricting process out of the hands of the politicians, and instead to appoint independent commissions to draw up the maps.  There are currently seven states with non-partisan or bipartisan, nominally independent, commissions that draw the lines for both congressional and state legislative districts, and a further six that do this for state legislative districts only.  Furthermore, several additional states will use such commissions starting with the redistricting that follows the 2020 census.  Finally, there is Iowa.  While technically not an independent commission, district lines in Iowa are drawn up by non-partisan legislative staff, with the state legislature then approving the map or not on a straight up-or-down vote.  If not approved, the process starts over, and if not approved after three votes it goes to the Iowa Supreme Court.

While certainly a step in the right direction, a problem with such independent commissions is that the process by which members are appointed can be highly politicized.  And even if not overtly politicized, the members appointed will have personal views on who they favor, and it is difficult even with the best of intentions to ensure such views do not enter.

But more fundamentally, even a well-intentioned independent commission will need to make choices on what is, and what is not, a “good” district map.  While most states list certain objectives for the redistricting process in either their state constitutions or in legislation, these are typically vague, such as saying the maps should try to preserve “communities of interest”, but with no clarity on what this in practice means.  Thirty-eight states also call for “compactness”, but few specify what that really means.  Indeed, only two states (Colorado and Iowa) define a specific measure of compactness.  Both states say that compactness should be measured by the sum of the perimeter lines being minimized (the same measure I used in the process discussed above).  However, in the case of Iowa this is taken along with a second measure of compactness (the absolute value of the difference between the length and the width of a district), and it is not clear how these two criteria are to be judged against each other when they differ.  Furthermore, in all states, including Colorado and Iowa, the compactness objective is just one of many objectives, and how to judge tradeoffs between the diverse objectives is not specified.

Even a well-intentioned independent commission will need to have clear criteria to judge what is a good map and what is not.  But once these criteria are fully specified, there is then no need for further opinion to enter, and hence no need for an independent commission.

c)  Appropriate and Inappropriate Principles to Follow: As discussed above, the basic principles that should be followed are:  1) One person – One vote, 2) Contiguity, and 3) Compactness.  Plus, to the extent possible consistent with this, the lines of existing political jurisdictions of a state (such as counties and municipalities) should be respected.

But while most states do call for this (with one person – one vote required by Supreme Court decision, but decided only in 1964), they also call for their district maps to abide by a number of other objectives.  Examples include the preservation of “communities of interest”, as discussed above, where 21 states call for this for their state legislative districts and 16 for their congressional districts (where one should note that congressional districting is not relevant in 7 states as they have only one member of Congress).  Further examples of what are “required” or “allowed” to be considered include preservation of political subdivision lines (45 states); preservation of “district cores” (8 states); and protection of incumbents (8 states).  Interestingly, 10 states explicitly prohibit consideration of the protection of incumbents.  And various states include other factors to consider or not consider as well.

But many, indeed most, of these considerations are left vague.  What does it mean that “communities of interest” are to be preserved where possible?  Who defines what the relevant communities are?  What is the district “core” that is to be preserved?  And as discussed above, there is a similar issue with the stated objective of “compactness”, as while 38 states call for it, only Colorado and Iowa are clear on how it is defined (but then vague on what trade-offs are to be accepted against other objectives).

The result of such multiple objectives, mostly vaguely defined and with no guidance on trade-offs, is that it is easy to come up with the heavily gerrymandered maps we have seen and the resulting strong bias in favor of one political party over the other.  Any district can be rationalized in terms of at least one of the vague objectives (such as preserving a “community of interest”).  These are loopholes which allow the politicians to draw maps favorable to themselves, and should be eliminated.

d)  Racial Preferences: The US has a long history of using gerrymandering (as well as other measures) to effectively disenfranchise minority groups, in particular African-Americans.  This has been especially the case in the American South, under the Jim Crow laws that were in effect through to the 1960s.  The Voting Rights Act of 1965 aimed to change this.  It required states (in particular under amendments to Section 2 passed in 1982 when the Act was reauthorized) to ensure minority groups would be able to have an effective voice in their choice of political representatives, including, under certain circumstances, through the creation of congressional and other legislative districts where the previously disenfranchised minority group would be in the majority (“majority-minority districts”).

However, it has not worked out that way.  Indeed, the creation of majority-minority districts, with African-Americans packed into as small a number of districts as possible and with the rest then scattered across a large number of remaining districts, is precisely what one would do under classic gerrymandering (packing and cracking) designed to limit, not enable, the political influence of such groups.  With the passage of these amendments to the Voting Rights Act in 1982, and then a Supreme Court decision in 1986 which upheld this (Thornburg v. Gingles), Republicans realized in the redistricting following the 1990 census that they could then, in those states where they controlled the process, use this as a means to gerrymander districts to their political advantage.  Newt Gingrich, in particular, encouraged this strategy, and the resulting Republican gains in the South in 1992 and 1994 were an important factor in leading to the Republican take-over of the Congress following the 1994 elections (for the first time in 40 years), with Gingrich then becoming the House Speaker.

Note also that while the Supreme Court, in a 5-4 decision in 2013, essentially gutted a key part of the Voting Rights Act, what it struck down was the coverage formula that determined where Section 5 applied, thereby rendering Section 5 inoperative.  Section 5 was the section that required pre-approval by federal authorities of changes in voting statutes in those jurisdictions of the country (mostly the states of the South) with a history of discrimination as defined in the statute.  Left in place was Section 2 of the Voting Rights Act, the section under which the gerrymandering of districts on racial lines has been justified.  It is perhaps not surprising that Republicans have welcomed keeping this Section 2 while protesting Section 5.

One should also recognize that this racial gerrymandering of districts in the South has not led to most African-Americans in the region being represented in Congress by African-Americans.  One can calculate from the raw data (reported here in Ballotpedia, based on US Census data), that as of 2015, 12 of the 71 congressional districts in the core South (Louisiana, Mississippi, Alabama, Georgia, South Carolina, North Carolina, Virginia, and Tennessee) had a majority of African-American residents.  These were all just a single district in each of the states, other than two in North Carolina and four in Georgia.  But the majority of African Americans in those states did not live in those twelve districts.  Of the 13.2 million African-Americans in those eight states, just 5.0 million lived in those twelve districts, while 8.2 million were scattered around the remaining districts.  By packing as many African-Americans as possible in a small number of districts, the Republican legislators were able to create a large number of safe districts for their own party, and the African-Americans in those districts effectively had little say in who was then elected.

The Voting Rights Act was an important measure forward, drafted in reaction to the Jim Crow laws that had effectively undermined the right to vote of African-Americans.  And defined relative to the Jim Crow system, it was progress.  However, relative to a system that draws up district lines in a fair and unbiased manner, it would be a step backwards.  A system where minorities are packed into a small number of districts, with the rest then scattered across most of the districts, is just standard gerrymandering designed to minimize, not to ensure, the political rights of the minority groups.

E.  Conclusion

Politicians drawing district lines to favor one party and to ensure their own re-election fundamentally undermines democracy.  Supreme Court justices have themselves called it “distasteful”.  However, to address gerrymandering the court has sought some measure which could be used to ascertain whether the resulting voting outcomes were biased to a degree that could be considered unconstitutional.

But this is not the right question.  One does not judge other aspects of whether the voting process is fair or not by whether the resulting outcomes were by some measure “excessively” affected or not.  It is not clear why such an approach, focused on vote outcomes, should apply to gerrymandering.  Rather, the focus should be on whether the process followed was fair and unbiased or not.

And one can certainly define a fair and unbiased process to draw district lines.  The key is that the process, once established, should be automatic and follow the agreed set of basic principles that define what the districts should be – that they should be of similar population, compact, contiguous, and where possible and consistent with these principles, follow the lines of existing political jurisdictions.

One such process was outlined above.  But there are other possibilities.  The key is that the courts should require, in the name of ensuring a fair vote, that states must decide on some such process and implement it.  And the citizenry should demand the same.

Market Competition as a Path to Making Medicare Available for All

A.  Introduction

Since taking office just two years ago, the Trump administration has done all it legally could to undermine Obamacare.  The share of the US population without health insurance had been brought down to historic lows under Obama, but it has now moved back up, with roughly half of the gains lost.  The chart above (from Gallup) traces its path.

This vulnerability of health cover gains to an antagonistic administration has led many Democrats to look for a more fundamental reform that would be better protected.  Many are now calling for an expansion of the popular and successful Medicare program to the full population – it is currently restricted just to those aged 65 and above.  Some form of Medicare-for-All has now been endorsed by most of the candidates that have so far announced they are seeking the Democratic nomination to run for president in 2020, although the specifics differ.

But while Medicare-for-All is popular as an ultimate goal, the path to get there as well as specifics on what the final structure might look like are far from clear (and differ across candidates, even when different alternatives are each labeled “Medicare-for-All”).  There are justifiable concerns on whether there will be disruptions along the way.  And the candidates favoring Medicare-for-All have yet to set out all the details on how that process would work.

But there is no need for the process to be disruptive.  The purpose of this blog post is to set out a possible path where personal choice in a system of market competition can lead to a health insurance system where Medicare is at least available for all who desire it, and where the private insurance that remains will need to be at least as efficient and as attractive to consumers as Medicare.

The specifics will be laid out below, but briefly, the proposal is built around two main observations.  One is that Medicare is a far more efficient, and hence lower cost, system than private health insurance is in the US.  As was discussed in an earlier post on this blog, administrative expenses account for only 2.4% of the cost of traditional Medicare.  All the rest (97.6%) goes to health care providers.  Private health insurers, in contrast, have non-medical expenses of 12% of their total costs, or five times as much.  Medicare is less costly to administer as it is a simpler system and enjoys huge economies of scale.  Private health insurers, in contrast, have set up complex systems of multiple plans and networks of health care providers, pay very generous salaries to CEOs and other senior staff who are skilled at operating in the resulting highly fragmented system, and pay out high profits as well (that in normal years account for roughly one-quarter of that 12% margin).

With Medicare so much more efficient, why has it not pushed out the more costly private insurance providers?  The answer is simple:  Congress has legislated that Medicare is not allowed to compete with them.  And that is the second point:  Remove these legislated constraints, and allow Medicare-managed plans to compete with the private insurance companies (at a price set so that it breaks even).  Americans will then be able to choose, and in this way transition to a system where enrollment in Medicare-managed insurance services is available to all.  And over time, such competition can be expected to lead most to enroll in the Medicare-managed options.  They will be cheaper for a given quality, due to Medicare’s greater efficiency.

There will still be a role for private insurance.  For those competing with Medicare straight on, the private insurers that remain will have to be able to provide as good a product at as good a cost.  But also, private insurers will remain to offer insurance services that supplement what a Medicare insurance plan would provide.  Such optional private insurance would cover services (such as dental services) or costs (Medicare covers just 80% after the deductible) that the basic Medicare plan does not cover.  Medicare will then be the primary insurer, and the private insurance the secondary.  And, importantly, note that in this system the individual will still be receiving all the services that they receive under their current health plans.  This addresses the concern of some that a Medicare-like plan would not be as complete or as comprehensive as what they might have now.  With the optional supplemental, their insurance could cover exactly what they have now, or even more.  Medicare would be providing a core level of coverage, and then, for those who so choose, supplemental private plans can bring the coverage to all that they have now.  But the cost will be lower, as they will gain from the low cost of Medicare for those core services.

More specifically, how would this work?

B.  Allow Medicare to Compete in the Market for Individual Health Insurance Plans

A central part of the Obamacare reforms was the creation of a marketplace where individuals, who do not otherwise have access to a health insurance plan (such as through an employer), could choose to purchase an individual health insurance plan.  As originally proposed, and indeed as initially passed by the House of Representatives, a publicly managed health insurance plan would have been made available (at a premium rate that would cover its full costs) in addition to whatever plans were offered by private insurers.  This would have addressed the problem in the Obamacare markets of often excessive complexity (with constantly changing private plans entering or leaving the different markets), as well as limited and sometimes even no competition in certain regions.  A public option would have always been available everywhere.  But to secure the 60 votes needed to pass in the Senate, the public option had to be dropped (at the insistence of Senator Joe Lieberman of Connecticut).

It could, and should, be introduced now.  Such a public option could be managed by Medicare, and could then piggy-back on the management systems and networks of hospitals, doctors, and other health care providers who already work with Medicare.  However, the insurance plan itself would be broader than what Medicare covers for the elderly, and would meet the standards for a comprehensive health care plan as defined under Obamacare.  Medicare for the elderly is, by design, only partial (for example, it covers only 80% of the cost, after a modest deductible), plus it does not cover services such as for pregnancies.  A public option plan administered by Medicare in the Obamacare marketplace would rather provide services as would be covered under the core “silver plan” option in those markets (the option that is the basis for the determination of the subsidies for low-income households).  And one might consider offering as options plans at the “bronze” and “gold” levels as well.

Such a Medicare-managed public option would provide competition in the Obamacare exchanges.  An important difficulty, especially in the Republican states that have not been supportive of offering such health insurance, is that in certain states (or counties within those states) there have been few health insurers competing with each other, and indeed often only one.  The exchanges are organized by state, and even when insurers decide to offer insurance cover within some state, they may decide to offer it only to residents of certain counties within that state.  The private insurers operate with an expensive business model, built typically around organizing networks of doctors with whom they negotiate individual rates for health care services provided.  It is costly to set this up, and not worthwhile unless they have a substantial number of individuals enrolled in their particular plan.

But one should also recognize that there is a strong incentive in the current Obamacare markets for an individual insurer to provide cover in a particular area if no other insurer is there to compete with them.  That is because the federal subsidy to a low-income individual subscribing to an insurance plan depends on the difference between what insurers charge for a silver-level plan (specifically the second-lowest cost for such a plan, if there are two or more insurers in the market) and some given percentage of that individual’s household income (with the subsidy eliminated entirely above a certain income level).  What that means is that with no other insurer providing competition in some locale, the one that is offering insurance can charge very high rates for their plans and then receive high federal subsidies.  The ones who then lose in this (aside from the federal taxpayer) are households of middle or higher income who would want to purchase private health insurance, but whose income is above the cutoff for eligibility for the federal subsidies.

The result is that the states with the most expensive health insurance plan costs are those that have sought to undermine the Obamacare marketplace (leading to less competition), while the lowest costs are in those states that have encouraged the Obamacare exchanges and thus have multiple insurers competing with each other.  For example, the two states with the most expensive premium rates in 2019 (average for the benchmark silver plans) were Wyoming (average monthly premium for a 40-year-old of $865, before subsidies) and Nebraska (premium of $838).  Each had only one health insurance provider on the exchanges.  At the other end, the five states with the least expensive average premia, all with multiple providers, were Minnesota ($326), Massachusetts ($332), Rhode Island ($336), Indiana ($339), and New Jersey ($352).  These are not generally considered to be low-cost states, but the cost of the insurance plans in Wyoming and Nebraska was roughly two and a half times as high.

The competition of a Medicare-managed public provider would bring down those extremely high insurance costs in the states with limited or no competition.  And at such lower rates, the total being spent by the federal government to support access by individuals to health insurance will come down.  But to achieve this, Congress will have to allow such competition from a public provider, and management through Medicare would be the most efficient way to do this.  One would still have any private providers who wish to compete.  But consumers would then have a choice.

C.  Allow Medicare to Compete in the Market for Employer-Sponsored Health Insurance Cover

While the market for individual health insurance cover is important to extending the availability of affordable health care to those otherwise without insurance cover, employer-sponsored health insurance plans account for a much higher share of the population.  Excluding those with government-sponsored plans via Medicare, Medicaid, and other such public programs, employer-sponsored plans accounted for 76% of the remaining population, individual plans for 11%, and the uninsured for 14%.

These employer-sponsored plans are dominant in the US for historical reasons.  They receive special tax breaks, which began during World War II.  Due to the tax breaks, it is cheaper for the firm to arrange for employee health insurance through the firm (even though it is in the end paid for by the employee, as part of their total compensation package), than to pay the employee an overall wage with the employee then purchasing the health insurance on his or her own.  The employer can deduct it as a business expense.  But this has led to the highly fragmented system of health insurance cover in the US, with each employer negotiating with private insurers for what will be provided through their firm, with resulting high costs for such insurance.

As many have noted, no one would design such a health care funding system from scratch.  But it is what the US has now, and there is justifiable concern over whether some individuals might encounter significant disruptions when switching over to a more rational system, whether Medicare-for-All or anything else.  It is a concern which needs to be respected, as we need health care treatment when we need it, and one does not want to be locked out of access, even if temporarily, during some transition.  How can this risk be avoided?

One could manage this by not making a switch in insurance plans compulsory, but rather by providing insurance through a Medicare-managed plan as an option.  That is, a Medicare-managed insurance plan, similar in what is covered to current Medicare, would be allowed to compete with current insurance providers, and employers would have the option to switch to that Medicare plan, either immediately or at some later point, as they wish, to manage health insurance for their employees.

Furthermore, this Medicare-managed insurance could serve as a core insurance plan, to be supplemented by a private insurance plan which could cover costs and health care services that Medicare does not cover (such as dental and vision).  These could be similar to Medicare Supplement plans (often called a Medigap plan), or indeed any private insurance plan that provides additional coverage to what Medicare provides.  Medicare is then the primary insurer, while the private supplemental plan is secondary and covers whatever costs (up to whatever that supplemental plan covers) that are not paid for under the core Medicare plan.
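
As a small numerical illustration of how this primary/secondary split works, the sketch below applies the 80% coinsurance figure cited above; the deductible amount and the charges used are illustrative only:

```python
def medicare_primary_split(allowed_charges, deductible=185.0, medicare_share=0.80):
    """Split a year's allowed charges between Medicare (primary) and whatever
    is left for a supplemental plan (or the patient) to cover."""
    medicare_pays = max(allowed_charges - deductible, 0.0) * medicare_share
    remainder = allowed_charges - medicare_pays   # deductible plus 20% coinsurance
    return medicare_pays, remainder

medicare_pays, gap = medicare_primary_split(10_000.0)
print(medicare_pays)  # 7852.0 paid by Medicare as the primary insurer
print(gap)            # 2148.0 left for the secondary (supplemental) plan to pick up
```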

In this way, an individual’s effective coverage could be exactly the same as what they receive now under their current employer-sponsored plan.  Employers would still sponsor these supplemental plans, as an addition to the core Medicare-managed plan that they would also choose (and pay for, like any other insurance plan).  But the cost of the Medicare-managed plus private supplemental plans would typically be less than the cost of the purely private plans, due to the far greater efficiency of Medicare.  And with this supplemental coverage, one would address the concern of many that what they now receive through their employer-sponsored plan is a level of benefits that are greater than what Medicare itself covers.  They don’t want to lose that.  But with such supplemental plans, one could bring what is covered up to exactly what they are covering now.

This is not uncommon.  Personally, I am enrolled in Medicare, while I have (through my former employer) additional cover from a secondary private insurer.  And I pay monthly premia to Medicare and, through my former employer, to the private insurer for this coverage (with those premia supplemented by my former employer, as part of my retirement package).  With the supplemental coverage, I have exactly the same health care services and share of costs covered as what I had before I became eligible for Medicare.  But the cost to me (and my former employer) is less.  One should recognize that for retirees this is in part due to Medicare for the elderly receiving general fiscal subsidies through the government budget.  But the far greater efficiency of Medicare, which allows it to keep its administrative costs low (at just 2.4% of what it spends, with the rest going to health care service providers, as compared to a 12% cost share for private insurance), would lead to lower costs for Medicare than for private providers even without such fiscal support.

Such supplemental coverage is also common internationally.  Canada and France, for example, both have widely admired single-payer health insurance systems (what Medicare-for-All would be), and in both one can purchase supplemental coverage from private insurers for costs and services that are not covered under the core, government managed, single-payer plans.

Under this proposed scheme for the US, the decision by a company of whether to purchase cover from Medicare need not be compulsory.  The company could, if it wished, choose to remain with its current private insurer.  But what would be necessary would be for Congress to remove the restriction that prohibits Medicare from competing with private insurance providers.  Medicare would then be allowed to offer such plans at a price which covers its costs.  Companies could then, if they so chose, purchase such core cover from Medicare and, additionally, supplement that insurance with a private secondary plan.  One would expect that given the high cost of medical services everywhere (but especially in the US) they will take a close look at the comparative costs and value provided, and choose the plan (or set of plans) which is most advantageous to them.

Over time, one would expect a shift towards the Medicare-managed plans, given its greater efficiency.  And private plans, in order to be competitive for the core (primary) insurance available from Medicare, would be forced to improve their own efficiency, or face a smaller and smaller market share.  If they can compete, that is fine.  But given their track record up to now, one would expect that they will leave that market largely to Medicare, and focus instead on providing supplemental coverage for the firms to select from.

D.  Avoiding Cherry-Picking by the Private Insurers

An issue to consider, but which can be addressed, is whether in such a system the private insurers will be able to “cherry-pick” the more lucrative, lower risk, population, leaving those with higher health care costs to the Medicare-managed options.  The result would be higher expenses for the public options, which would require them either to raise their rates (if they price to break even) or require a fiscal subsidy from the general government budget.  And if the public options were forced to raise their rates, there would no longer be a level playing field in the market, effective competition would be undermined, and lower-efficiency private insurers could then remain in the market, raising our overall health system costs.

This is an issue that needs to be addressed in any insurance system, and was addressed for the Obamacare exchanges as originally structured.  While the Trump administration has sought to undermine these, they do provide a guide to what is needed.

Specifically, all insurers on the Obamacare exchanges are required to take on anyone in the geographic region who chooses to enroll in their particular plan, even if they have pre-existing conditions.  This is the key requirement which keeps private insurers from cherry-picking lower-cost enrollees, and excluding those who will likely have higher costs.  However, this then needs to be complemented with: 1) the individual mandate; 2) minimum standards on what constitutes an adequate health insurance plan; and 3) what is in essence a reinsurance system across insurers to compensate those who ended up with high-cost enrollees, by payments from those insurers with what turned out to be a lower cost pool (the “risk corridor” program).  These were all in the original Obamacare system, but: 1) the individual mandate was dropped in the December 2017 Republican tax cut (after the Trump administration said they would no longer enforce it anyway);  2) the Trump administration has weakened the minimum standards; and 3) Senator Marco Rubio was able in late 2015 to insert a provision in a must-pass budget bill which blocked any federal spending to even out payments in the risk corridor program.

Without these measures, it will be impossible to sustain the requirement that insurers provide access to everyone, at a price which reflects the health care risks of the population as a whole. With no individual mandate, those who are currently healthy could choose to free-ride on the system, and enroll in one of the health care plans only when they might literally be on the way to the hospital, or, in a less extreme example, only aim to enroll at the point when they know they will soon have high medical expenses (such as when they decide to have a baby, or to have some non-urgent but expensive medical procedure done).  The need for good minimum standards for health care plans is related to this.  Those who are relatively healthy might decide to enroll in an insurance plan that covers little, but, when diagnosed with say a cancer or some other such medical condition, then and only then enroll in a medical insurance plan that provides good cover for such treatments.  The good medical insurance plans would either soon go bankrupt, or be forced also to reduce what they cover in a race to the bottom.

Finally, risk sharing across insurers is in fact common (it is called reinsurance), and was especially important in the new Obamacare markets as the mix of those who would enroll in the plans, especially in the early years, could not be known.  Thus, as part of Obamacare, a system of “risk corridors” was introduced where insurers who ended up with an expensive mix of enrollees (those with severe medical conditions to treat) would be compensated by those with an unexpectedly low-cost mix of enrollees, with the federal government in the middle to smooth out the payments over time.  The Congressional Budget Office estimated in 2014 that while the payment flows would be substantial ($186 billion over ten years), the inflows would match the outflows, leaving no net budgetary cost.  However, Senator Rubio’s amendment effectively blocked this, as he (incorrectly) characterized the risk corridor program as a “bailout” fund for the insurers.  The effect of Rubio’s amendment was to lead smaller insurers and newly established health care co-ops to exit the market (as they did not have the financial resources to wait for inflows and outflows to even out), reducing competition by leaving only a limited number of large, deep-pocketed insurers who could survive such a wait and who then, with the more limited competition, jacked up premium rates.  The result, as we will discuss immediately below, was to increase, not decrease, federal budgetary costs, while pricing out of these markets those with incomes too high to receive the federal subsidies.
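
For readers who want to see the mechanics, here is a deliberately stylized illustration of a risk-corridor-style transfer.  It is not the statutory ACA formula (which used specific corridor bands and sharing percentages); the 3% corridor and 50% sharing rate here are made up purely to show how payments in and payments out can net to zero:

```python
def corridor_transfer(actual_costs, target_costs, corridor=0.03, share=0.50):
    """Positive = payment to the insurer; negative = payment by the insurer.
    Insurers within the corridor around their target neither pay nor receive."""
    upper = target_costs * (1 + corridor)
    lower = target_costs * (1 - corridor)
    if actual_costs > upper:
        return share * (actual_costs - upper)
    if actual_costs < lower:
        return -share * (lower - actual_costs)
    return 0.0

# Hypothetical insurers: (actual claims, target claims), in millions of dollars.
insurers = {"A": (108.0, 100.0), "B": (92.0, 100.0), "C": (101.0, 100.0)}
transfers = {name: corridor_transfer(a, t) for name, (a, t) in insurers.items()}
print(transfers)                 # A receives 2.5, B pays 2.5, C is inside the corridor
print(sum(transfers.values()))   # 0.0 -- inflows match outflows in this example
```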

Despite these efforts to kill Obamacare and block the extension of health insurance coverage to those Americans who have not had it, another provision in the Obamacare structure has allowed it to survive, at least so far and albeit in a more restrictive (but higher-cost) form.  And that is due to the way federal subsidies are provided to lower-income households to make it possible for them to purchase health insurance at a price they can afford.  As discussed above, these federal subsidies cover the difference between some percentage of a household’s income (with that percentage depending on their income) and the cost of a benchmark silver-level plan in their region.

More specifically, those with incomes up to 400% of the federal poverty line (400% would be $49,960 for an individual in 2019, or $103,000 for a family of four) are eligible to receive a federal subsidy to purchase a qualifying health insurance plan.  The subsidy is equal to the difference between the cost of the benchmark silver-level plan and a percentage of their income, on a sliding scale that starts at 2.08% of income for those earning 133% of the poverty line, and goes up to 9.86% for those earning 400%.  The mathematical result of this is that if the cost of the benchmark health insurance plan goes up by $1, they will receive an extra $1 of subsidy (as their income, and hence their contribution, is still the same).
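A rough sketch of this calculation is below.  It uses the 2019 figures cited in the text (a 400% cutoff of $49,960 for an individual, and applicable percentages running from 2.08% at 133% of the poverty line to 9.86% at 400%), but the straight-line interpolation between those two endpoints is a simplification: the actual applicable-percentage schedule is a more detailed table published each year.

```python
# Rough sketch of the ACA premium subsidy calculation for an individual,
# using the 2019 figures cited in the text.  The straight-line interpolation
# of the applicable percentage is a simplification of the actual schedule.

FPL_INDIVIDUAL_2019 = 49960 / 4.0   # implied poverty line, since 400% = $49,960

def applicable_percentage(income_pct_of_fpl):
    """Share of income the household is expected to contribute (2.08% to 9.86%)."""
    lo_pct, hi_pct = 133.0, 400.0
    lo_share, hi_share = 0.0208, 0.0986
    frac = (income_pct_of_fpl - lo_pct) / (hi_pct - lo_pct)
    frac = min(max(frac, 0.0), 1.0)
    return lo_share + frac * (hi_share - lo_share)

def subsidy(income, benchmark_premium):
    """Subsidy = benchmark silver premium minus the expected contribution.

    No subsidy above 400% of the poverty line; never negative."""
    pct_fpl = 100.0 * income / FPL_INDIVIDUAL_2019
    if pct_fpl > 400.0:
        return 0.0
    contribution = applicable_percentage(pct_fpl) * income
    return max(0.0, benchmark_premium - contribution)

# Example: $30,000 income, $6,000 benchmark premium.  Note that if the
# benchmark premium rises by $1,000, the subsidy rises by $1,000 as well,
# since the household's required contribution is unchanged.
print(subsidy(30000, 6000), subsidy(30000, 7000))
```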

The result is that measures such as the blocking of the risk corridor program by Senator Rubio’s amendment, the Trump administration’s decision not to enforce (and then to remove altogether) the individual mandate, and the weakening of the standards for what has to be covered in a qualifying health insurance plan, have all had the effect of forcing the insurance companies to raise premium rates sharply.  While those with incomes up to 400% of the poverty line were not affected by this (they pay the same share of their income), those with incomes above the 400% limit have been effectively priced out of these markets.  Only those above that limit with some expensive medical condition might remain, which then further biases the risk pool toward those with high medical expenses.  Finally and importantly, these measures to undermine the markets have led to higher, not lower, federal budgetary costs, as the federal subsidies go up dollar for dollar with the higher premium rates.

So we know how to structure these markets to ensure there will be no cherry-picking of low-risk, low-cost enrollees, which would leave the high-cost patients to the Medicare-managed option.  But it needs to be done.  The requirement that all insurance plans accept any enrollee is the foundation.  It then needs to be complemented with the individual mandate, minimum standards for the health insurance plans, and some form of risk corridor (reinsurance) program.  The issue is not that this is impossible to do, but rather that the Trump administration (and Republicans in Congress) have sought to undermine it.

This discussion has been couched in terms of the market for individual insurance plans, but the same principles apply in the market for employer-sponsored health insurance.  While not as much discussed, the Affordable Care Act also included an employer mandate (phased in over time), with penalties for firms with 50 or more employees that do not offer a health insurance plan meeting minimum standards to their employees.  There were also tax credits provided to smaller firms that offer such insurance plans.

But the cherry-picking concern is less of an issue for such employer-based coverage than it is for coverage of individuals.  This is because there will be a reasonable degree of risk diversification across individuals (the mix of those with more expensive medical needs and those with less) even with just 100 employees or so.  And smaller firms can often subscribe together with others in the industry to a plan that covers them as a group, thus providing a reasonable degree of diversification.  With the insurance covering everyone in the firm (or group of firms), there will be less of a possibility of trying to cherry-pick among them.

The possibility of cherry-picking is therefore something that needs to be considered when designing any insurance system.  If not addressed, it could lead to a loading of the more costly enrollees onto a public option, thus increasing its costs and requiring either higher premia to subscribe to it or government budget support.  But we know how to address the issue.  The primary tool, which we should want in any case, is to require health insurers to be open to all enrollees, and not block those with pre-existing conditions.  This then needs to be balanced with the individual mandate, minimum standards for what qualifies as a genuine health insurance plan, and means to reinsure exceptional risks across insurers.  The Obamacare reforms had these, and one cannot say that we do not know how to address the issue.

E.  Conclusion

These proposals are not radical.  And while there has been much discussion of allowing a public option to provide competition for insurance plans in the Obamacare markets, I have not seen much discussion of allowing a Medicare-managed option in the market for employer-sponsored health insurance plans.  Yet the latter market is far larger than the market for private, individual, plans, and a key part of the proposal is to allow such competition here as well.

Allowing such options would enable a smooth transition to Medicare-managed health insurance that would be available to all Americans.  And over time one would expect many if not most to choose such Medicare-managed options.  Medicare has demonstrated that it is managed with far greater efficiency than the private health insurers, and thus it can offer better plans at lower cost than private insurers currently do.  If the private insurers are then able to improve their competitiveness by reducing their costs to what Medicare has been able to achieve, then they may remain.  But I expect that most of them will choose to compete in the markets for supplemental coverage, offering plans that complement the core Medicare-managed plan and that provide a range of options from which employers can choose for their employer-sponsored health insurance coverage.

Conservatives may question, and indeed likely will question, whether government-managed anything can be as efficient, much less more efficient, than privately provided services.  While the facts are clear (Medicare does exist, we have the data on what it costs, and we have the data on what private health insurance costs), some will still not accept this.  However, with such a belief, conservatives should not then be opposed to allowing Medicare-managed health insurance options to compete with the private insurers.  If what they believe is true, the publicly-managed options would be too expensive for an inferior product, and few would enroll in them.

But I suspect that the private insurers realize they would not be able to compete with the Medicare-managed insurance options unless they were able to bring their costs down to a comparable level.  And they do not want to do this as they (and their senior staff) benefit enormously from the current fragmented, high cost, system.  That is, there are important vested interests who will be opposed to opening up the system to competition from Medicare-managed options.  It should be no surprise that they, and the politicians they contribute generously to, will be opposed.

The Fed is Not to Blame for the Falling Stock Market

Just a quick note on this Christmas Eve.  The US stock markets are falling.  The bull market that had started in March 2009, two months after Obama took office, and which then continued through to the end of Obama’s two terms, may be close to an end.  A bear market is commonly defined as one where the S&P500 index (a broad stock market index that most professionals use) has fallen by 20% or more from its previous peak.  As of the close of the markets this December 24, the S&P500 index is 19.8% below the peak it had reached on September 20.  The NASDAQ index is already in bear market territory, as it is 23.6% lower than its previous peak.  And the Dow Jones Industrial average is also close, at a fall of 18.8% from its previous peak.
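For those who want to check such figures themselves, the calculation is simple: the percentage decline of the index from its running peak, with the conventional 20% threshold marking a bear market.  A minimal sketch follows; the index values in it are made up for illustration.

```python
# Minimal sketch: percentage decline of an index from its running peak,
# with the conventional 20% threshold for a bear market.  The price series
# below is made up for illustration.

def drawdown_from_peak(prices):
    """Return the % decline from the highest value seen so far, for each day."""
    peak = prices[0]
    out = []
    for p in prices:
        peak = max(peak, p)
        out.append(100.0 * (p - peak) / peak)
    return out

index_like = [2900, 2930, 2850, 2700, 2600, 2500, 2351]   # illustrative values
dd = drawdown_from_peak(index_like)
print(f"decline from peak: {dd[-1]:.1f}%   bear market: {dd[-1] <= -20.0}")
```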

Trump is blaming the Fed for this.  The Fed has indeed been raising interest rates, since 2015.  The Fed had kept interest rates at close to zero since the financial collapse in 2008 at the end of the Bush administration in order to spur a recovery.  And it had to keep interest rates low for an especially long time as fiscal policy turned from expansionary, in 2009/10, to contractionary, as the Republican Congress elected in 2010 forced through cuts in government spending even though employment had not yet then fully recovered.

Employment did eventually recover, so the Fed could start to bring interest rates back to more normal levels.  This began in late 2015 with an increase in the Fed’s target for the federal funds rate from the previous range of 0% to 0.25%, to a target range of 0.25% to 0.50%.  The federal funds rate is the rate at which banks borrow or lend federal funds (funds on deposit at the Fed) to each other, so that the banks can meet their deposit reserve requirements.  And the funds are borrowed and lent for literally just one night (even though the rates are quoted on an annualized basis).  The Fed manages this by buying and selling US Treasury bills on the open market (thus loosening or tightening liquidity), to keep the federal funds rate within the targeted range.

Since the 2015 increase, the Fed has steadily raised its target for the federal funds rate to the current range of 2.25% to 2.50%.  It raised the target range once in 2016, three times in 2017, and four times in 2018, always in increments of 0.25% points.  The market has never been surprised.  With unemployment having fallen to 5.0% in late 2015, and to just 3.7% now, this is exactly what one would expect the Fed to do.

The path is shown in blue in the chart at the top of this post.  The path is for the top end of the target range for the rate, which is the figure most analysts focus on; the bottom end will always be 0.25% points below it.  The chart then shows in red the path for the S&P500 index.  For ease of comparison with the path of the federal funds rate, I have rescaled the S&P500 index so that it equals 1.0 on March 16, 2017 (the day the Fed raised the target federal funds rate to a ceiling of 1.0%), and then scaled it around that March 16, 2017, value so that it roughly tracks the path of the federal funds rate.  (The underlying data were all drawn from FRED, the economic database maintained by the Federal Reserve Bank of St. Louis.  The data points are daily, for each day the markets were open, and the S&P 500 is as of the daily market close.)

Those paths were roughly similar up to September 2018, and only then did they diverge.  That is, the Fed has been raising interest rates for several years now, and the stock market was also steadily rising.  Increases in the federal funds rate by the Fed in those years did not cause the stock market to fall.  It is disingenuous to claim that it has now.

Why is the stock market now falling then?  While only fools claim to know with certainty what the stock market will do, or why it has moved as it has, Trump’s claim that it is all the Fed’s fault has no basis.  The Fed has been raising interest rates since 2015.  Rather, Trump should be looking at his own administration, capped over the last few days by the stunning incompetence of his Treasury Secretary, Steven Mnuchin.  With a perceived need to “do something” (probably at Trump’s instigation), Mnuchin made a big show of calling the heads of the six largest US banks on Sunday to ask if they were fine (they were, at least until they got such calls, and might then have been left wondering whether the Treasury Secretary knew something that they didn’t), and then of organizing a meeting of the “Plunge Protection Team” on Monday, Christmas Eve.  This all created the sense of an administration in panic.

This comes on top of the reports over the weekend that Trump wants to fire the Chairman of the Fed, Jerome Powell.  Trump had appointed Powell just last year.  Nor would it likely be legal to fire him (no president ever has), although some dispute that.  Finally, and adding to the sense of chaos, a major part of the federal government has been shut down since last Friday night, as Trump refused to approve a budget extension unless he could also get funding to build a border wall.  As of today, it does not appear this will end until some time after January 1.

But it is not just these recent events which may have affected the markets.  After all, the S&P500 index peaked on September 20.  Rather, one must look at the overall mismanagement of economic policy under Trump, perhaps most importantly with the massive tax cut to corporations and the wealthy of last December.  While a corporate tax cut will lead to higher after-tax corporate profits, all else being equal, all else will not be equal.  The cuts have also contributed to a large and growing fiscal deficit, to a size that is unprecedented (even as a share of GDP) during a time of full employment (other than during World War II).  A federal deficit which is already high when times are good will be massive when the next downturn comes.  This will then constrain our ability to address that downturn.

Plus there are other issues, such as the trade wars that Trump appears to take personal pride in, and the reversal of the regulatory reforms put in place after the 2008 economic and financial collapse in order not to repeat the mistakes that led to that crisis.

What will happen to the stock market now?  I really do not know.  Perhaps it will recover from these levels.  But with the mismanagement of economic policy seen in this administration, and a president who acts on whim and is unwilling to listen, it would not be a surprise to see a further fall.  Just don’t try to shift the blame to the Fed.

How Low is Unemployment in Historical Perspective? – The Impact of the Changing Composition of the Labor Force

A.  Introduction

The unemployment rate is low, which is certainly good, and many commentators have noted it is now (at 3.7% in September and October, and an average of 3.9% so far this year) at the lowest the US has seen since the 1960s.  The rate hit 3.4% in late 1968 and early 1969, and averaged about 3.5% in each of those years.

But are those rates really comparable to what they are now?  This is important, not simply for “bragging rights” (or, more seriously, for understanding what policies led to such rates), but also for understanding how much pressure such rates are creating in the labor market.  The concern is that if the unemployment rate goes “too low”, labor will be able to demand higher nominal wages, and this will then lead to higher price inflation.  Thus the Fed monitors closely what is happening with the unemployment rate, and will start to raise interest rates to cool down the economy if it fears the unemployment rate is falling so low that there soon will be inflationary pressures.  And indeed the Fed has, since late 2015, been raising interest rates (although only modestly so far, with the target federal funds rate up only 2.0% points from the exceptionally low rates it had been reduced to in response to the 2008/09 financial and economic collapse).

A puzzle is why the unemployment rate, at just 3.9% this year, has not in fact led to greater pressures on wages and hence inflation.  It is not because the modestly higher interest rates the Fed has set have led to a marked slowing down of the economy – real GDP grew by 3.0% in the most recent quarter over what it was a year before, in line with the pace of recent years.  Nor are wages growing markedly faster now than they did in recent years.  Indeed, in real terms (after inflation), wages have been basically flat.

What this blog post will explore is that the unemployment rate, at 3.9% this year, is not in fact directly comparable with the levels achieved some decades ago, as the composition of the labor force has changed markedly.  The share of the labor force who have been to college is now much higher than it was in the 1960s.  Also, the share of the labor force who are young is now much less than it was in the 1960s.  And unemployment rates are now, and always have been, substantially less for those who have gone to college than for those who have not.  Similarly, unemployment rates are far higher for the young, who have just entered the labor force, than they are for those of middle age.

Because of these shifts in the shares, a given overall unemployment rate decades ago could only have come about with significantly lower unemployment rates for each of the groups (classified by age and education) than we have now.  Those lower group-level rates would have been necessary to produce the same low overall rate of unemployment, because the groups that have always had relatively higher unemployment rates (the young and the less educated) accounted for a larger share of the labor force then.  This is important, yet I have not seen any mention of the issue in the media.

As we will see, the impact of this changing composition of the labor force on the overall unemployment rate has been significant.  The chart at the top of this post shows what the overall unemployment rate would have been had the composition of the labor force remained at what it was in 1970 (in terms of the education level achieved by those aged 25 and above, plus the share of youth aged 16 to 24 in the labor force).  For 2018 (through the end of the third quarter), the unemployment rate at the 1970 composition of the labor force would then have been 5.2% – substantially higher than the 3.9% with the current composition of the labor force.  We will discuss below how these figures were derived.

At 5.2%, pressures in the labor market for higher wages will be substantially less than what one might expect at 3.9%.  This may explain the lack of such pressure seen so far in 2018 (and in recent years).  Although commonly done, it is just too simplistic to compare the current unemployment rate to what it was decades ago, without taking into account the significant changes in the composition of the labor force since then.

The rest of this blog post will first review this changing composition of the labor force – changes which have been substantial.  There are some data issues, as the Bureau of Labor Statistics (the source of all the data used here) changed its categorization of the labor force by education level in 1992.  Strictly speaking, this means that compositional shares before and after 1992 are not fully comparable.  However, we will see that in practice the changes were not such as to lead to major differences in the calculation of what the overall unemployment rate would be.

We will also look at what the unemployment rates have been for each of the groups in the labor force relative to the overall average.  They have been remarkably steady and consistent, although with some interesting, but limited, trends.  Finally, putting together the changing shares and the unemployment rates for each of the groups, one can calculate the figures for the chart at the top of this post, showing what the unemployment rates would have been over time, had the labor force composition not changed.

B.  The Changing Composition of the Labor Force

The composition of the labor force has changed markedly in the US in the decades since World War II, as indeed it has around the world.  More people have been going to college, rather than ending their formal education with high school.  Furthermore, the post-war baby boom which first led (in the 1960s and 70s) to a bulge in the share of the adult labor force who were young, later led to a reduction in this share as the baby boomers aged.

The compositional shares since 1965 (for age) and 1970 (for education) are shown in this chart (where the groups classified by education are of age 25 or higher, and thus their shares plus the share of those aged 16 to 24 will sum to 100%):

The changes in labor force composition are indeed large.  The share of the labor force who have completed college (including those with an advanced degree) has more than tripled, from 11% of the labor force in 1970 to 35% in 2018.  Those with some college have more than doubled, from 9% of the labor force to 23%.  At the other end of the education range, those who have not completed high school fell from 28% of the labor force to just 6%, while those completing high school (and no more) fell from 30% of the labor force to 22%.  And the share of youth in the labor force first rose from 19% in 1965 to a peak of  24 1/2% in 1978, and then fell by close to half to 13% in 2018.

As we will see below, each of these groups has very different unemployment rates relative to each other.  Unemployment rates are far less for those who have graduated from college than they are for those who have not completed high school, or for those 25 or older as compared to those younger.  Comparisons over time of the overall unemployment rate which do not take this changing composition of the labor force into account can therefore be quite misleading.

But first some explanatory notes on the data.  (Those not interested in data issues can skip this and go directly to the next section below.)  The figures were all calculated from data collected and published by the Bureau of Labor Statistics (BLS).  The BLS asks, as part of its regular monthly survey of households, questions on who in the household is participating in the labor force, whether they are employed or unemployed, and what their formal education has been (as well as much else).  From this one can calculate, both overall and for each group identified (such as by age or education) the figures on labor force shares and unemployment rates.

A few definitions to keep in mind:  Adults are considered to be those age 16 and above; to be employed means you worked the previous week (from when you were being surveyed) for at least one hour in a paying job; and to be unemployed means you were not employed but were actively searching for a job.  The labor force is thus the sum of those employed or unemployed, and the unemployment rate for any group is the number of unemployed in that group as a share of all those in the labor force in that group.  Note also that full-time students, who are not also working in some part-time job, are not part of the labor force.  Nor are those, of whatever age, who are neither in a job nor seeking one.
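For example, in a group with 100 people in the labor force, of whom 95 are employed and 5 are unemployed, the unemployment rate for that group is 5/100, or 5%.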

The education question in the survey asks, for each household member in the labor force, what was the “highest level of school” completed, or the “highest degree” received.  However, the question has been worded this way only since 1992.  Prior to 1992, going back to 1940 when they first started to ask about education, the question was phrased as the “highest grade or year of school” completed.  The presumption was that if the person had gone to school for 12 years, that they had completed high school.  And if 13 years that they had completed high school plus had a year at a college level.

However, this presumption was not always correct.  The respondent might only have completed high school after 13 years, having required an extra year.  Thus the BLS (together with the Census Bureau, which asks similar questions in its surveys) changed the way the question was asked in 1992, to focus on the level of schooling completed rather than the number of years of formal schooling.

For this reason, while all the data here comes from the BLS, the BLS does not make it easy to find the pre-1992 data.  The data series available online all go back only to 1992.  However, for the labor force shares by education category, as shown in the chart above, I was able to find the series under the old definitions in a BLS report on women in the labor force issued in 2015 (see Table 9, with figures that go back to 1970).  But I have not been able to find a similar set of pre-1992 figures for unemployment rates for groups classified by education.  Hence the curve in the chart at the top of this post on the unemployment rate holding constant the composition of the labor force could only start in 1992.

Did the change in education definitions in 1992 make a significant difference for what we are calculating here?  It will matter only to the extent that:  1) the shifts from one education category to another were large; and 2) the unemployment rates of the groups between which there was a significant shift were very different.

As can be seen in the chart above, the only significant shifts in the trends in 1992 were a downward shift (of about 3% points) in the share of the labor force who had completed high school and nothing more, and a similar upward shift (relative to trend) in the share with some college.  There are no noticeable shifts in the trends for the other groups.  And as we will see below, the unemployment rates of the two groups with a shift (completed high school vs. some college) are closer to each other than those of any other pairing of the different groups.  Thus the impact on the calculated unemployment rate of the change in categorization in 1992 should be relatively small.  And we will see below that that is in fact the case.

There was also another, but more minor (in terms of impact), change in 1992.  The BLS has always reported the educational composition of the labor force only for those labor force members who are age 25 or above.  However, prior to 1992 it reported the figures only for those up to age 64, while from 1992 onwards it reported them for all those age 25 or above still in the labor force, including those age 65 or more who had not yet retired.  This was done as an increasing share over time of those in the US of age 65 or higher have remained in the labor force rather than retiring.  However, the impact of this change will be small.  First, the share of the labor force of age 65 or more is small.  And second, this will matter only to the extent that the shares by education level differ between those still in the labor force who are age 65 or more, as compared to those in the labor force of ages 25 to 64.  Those differences in education shares are probably not that large.

C.  Differences in Unemployment Rates by Age and Education 

As noted above, unemployment rates differ between groups depending on age and education.  It should not be surprising that those who are young (ages 16 to 24) who are not in school but are seeking a job will experience a high rate of unemployment relative to those who are older (25 and above).  They are just starting out, probably do not have as high an education level (they are not still in school), and lack experience.  And that is indeed what we observe.

At the other extreme we have those who have completed college and perhaps even hold an advanced degree (masters or doctorate).  They are older, have better contacts, normally have skills that have been much in demand, and may have networks that function at a national rather than just local level.  The labor market works much better for them, and one should expect their unemployment rate to be lower.

And this is what we have seen (although unfortunately, for the reasons noted above on the data, the BLS is only making available the unemployment rates by education category for the years since 1992):

The unemployment rates of each group vary substantially over time, in tune with the business cycle, but their position relative to each other is always the same.  That is, the rates move together: when one is high, the others are high as well.  This is as one would expect, as movements in unemployment rates are driven primarily by the macroeconomy, with all the rates moving up when aggregate demand falls to spark a recession, and moving down in a recovery.

And there is a clear pattern to these relationships, which can be seen when these unemployment rates are all expressed as a ratio to the overall unemployment rate:

The unemployment rate for those just entering the labor force (ages 16 to 24) has always been about double what the overall unemployment rate was at the time.  And it does not appear to be subject to any major trend, either up or down.  Those in the labor force (and over age 25) with less than a high school degree (the curve in blue) also have experienced a higher rate of unemployment than the overall rate at the time – 40 to 60% higher.  There might be some downward trend, but one cannot yet say whether it is significant.  We need some more years of data.

Those in the labor force with just a high school degree (the curve in green in the chart) have had an unemployment rate very close to the average, with some movement from below the average to just above it in recent years.  Those with some college (in red) have remained below the overall average unemployment rate, although less so now than in the 1990s.  And those with a college degree or more (the curve in purple) have had an unemployment rate ranging from about 60% below the average in the 1990s to about half the average now.

There are probably a number of factors behind these trends, and it is not the purpose of this blog post to go into them.  But I would note that these trends are consistent with what a simple supply and demand analysis would suggest.  As seen in the chart in section B of this post, the share of the labor force with a college degree, for example, has risen steadily over time, to 35% of the labor force now from 22% in 1992.  With that much greater supply and share of the labor force, the advantage (in terms of a lower rate of unemployment relative to that of others) can be expected to have diminished.  And we see that.

But what I find surprising is that that impact has been as small as it has.  These ratios have been remarkably steady over the 27 years for which we have data, and those 27 years have included multiple cycles of boom and bust.  And with those ratios markedly different for the different groups, the composition of the labor force will matter a great deal for the overall unemployment rate.

D.  The Unemployment Rate at a Fixed Composition of the Labor Force

As noted above, those in the labor force who are not young, or who have achieved a higher level of formal education, have unemployment rates which are consistently below those who are young or who have less formal education.  Their labor markets differ.  A middle-aged engineer will be considered for jobs across the nation, while someone who is just a high school graduate likely will not.

Secondly, when we say the economy is at “full employment” there will still be some degree of unemployment.  It will never be at zero, as workers may be in transition between jobs and face varying degrees of difficulty in finding a new job.  But this degree of “frictional unemployment” (as economists call it) will vary, as just noted above, depending on age (prior experience in the labor force) and education.  Hence the “full employment rate of unemployment” (which may sound like an oxymoron, but isn’t) will vary depending on the composition of the labor force.  And more broadly and generally, the interpretation given to any level of unemployment needs to take into account that compositional structure of the labor force, as certain groups will consistently experience a higher or lower rate of unemployment than others, as seen in the chart above.

Thus it is misleading simply to compare overall unemployment rates across long periods of time, as the compositional structure of the labor force has changed greatly over time.  Such simple comparisons of the overall rate may be easy to do, but to understand critical issues (such as how close are we to such a low rate of unemployment that there will be inflationary pressure in the labor market), we should control for labor force composition.

The chart at the top of this post does that, and I repeat it here for convenience (with the addition in purple, to be explained below):

The blue line shows the unemployment rate for the labor force since 1965, as conventionally presented.  The red line shows, in contrast, what the unemployment rate would have been had the unemployment rate for each identified group been whatever it was in each year, but with the labor force composition remaining at what it was in 1970.  The red line is a simple weighted average of the unemployment rates of each group, using as weights what their shares would have been had they remained at the shares of 1970.
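A minimal sketch of this calculation is below.  The group shares and unemployment rates used here are placeholders for illustration, not the actual BLS figures; the method is simply a weighted average of the group unemployment rates in a given year, with the weights frozen at their 1970 values.

```python
# Sketch of the fixed-composition unemployment rate:  a weighted average of
# each group's unemployment rate in a given year, using the groups' 1970
# shares of the labor force as weights.  The numbers below are placeholders
# for illustration, not the actual BLS figures.

shares_1970 = {              # group shares of the labor force in 1970 (sum to 1.0)
    "age 16-24":             0.22,
    "less than high school": 0.28,
    "high school only":      0.30,
    "some college":          0.09,
    "college or more":       0.11,
}

rates_2018 = {               # illustrative group unemployment rates in 2018 (%)
    "age 16-24":             8.0,
    "less than high school": 5.6,
    "high school only":      4.1,
    "some college":          3.5,
    "college or more":       2.1,
}

fixed_weight_rate = sum(shares_1970[g] * rates_2018[g] for g in shares_1970)
print(f"unemployment rate at the 1970 composition: {fixed_weight_rate:.1f}%")
```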

The labor force structure of 1970 was taken for this exercise both because it is the earliest year for which I could find the necessary data, and because 1970 is close to 1968 and 1969, when the unemployment rate was at the lowest it has been in the last 60 years.  And the red curve can only start in 1992 because that is the earliest year for which I could find unemployment rates by education category.

The difference is significant.  And while perhaps difficult to tell from just looking at the chart, the difference has grown over time.  In 1992, the overall unemployment rate (with all else equal) at the 1970 compositional shares would have been 23% higher.  By 2018, it would have grown to 33% higher.  Note also that, had we had the data going back to 1970 for the unemployment rates by education category, the blue and red curves would have met at that point and then started to diverge as the labor force composition changed.
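(For 2018, for example, the arithmetic is 5.2% at the 1970 shares versus 3.9% at the actual shares, a ratio of about 1.33 – that is, 33% higher.)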

Also, the change in 1992 in the definitions used by the BLS for classifying the labor force by education did not have a significant effect.  For 1992, we can calculate what the unemployment rate would have been using the compositional shares of 1991, which were measured under the old classification system.  The 1991 shares would have been very close to the 1992 shares had the BLS kept the old system, as labor force shares change only gradually over time.  The unemployment rate calculated with those shares, but using the 1992 unemployment rates for each of the groups as defined under the new BLS system of education categories, was almost identical to the actual overall unemployment rate in that year:  7.6% instead of 7.5%.  It made almost no difference.  The point is shown in purple on the chart, and is almost indistinguishable from the point on the blue curve.  And both are far from what the unemployment rate would have been in that year at the 1970 compositional weights (9.2%).

E.  Conclusion

The structure of the labor force has changed markedly in the post-World War II period in the US, with a far greater share of the labor force now enjoying a higher level of formal education than we had decades ago, and also a significantly lower share who are young and just starting in the labor force.  Since unemployment rates vary systematically by such groups relative to each other, one needs to take into account the changing composition of the labor force when making comparisons over time.

This is not commonly done.  The unemployment rate has come down in 2018, averaging 3.9% so far and reaching 3.7% in September and October.  It is now below the 3.8% rate it hit in 2000, and is at the lowest seen since 1969, when it hit 3.4% for several months.

But it is misleading to make such simple comparisons as the composition of the labor force has changed markedly over time.  At the 1970 labor force shares, the unemployment rate in 2018 would have been 5.2%, not 3.9%.  And at a 5.2% rate, the inflationary pressures expected with an exceptionally low unemployment rate will not be as strong.  This may, at least in part, explain why we have not seen such inflationary pressures grow this past year.

The Economy Under Trump in 8 Charts – Mostly as Under Obama, Except Now With a Sharp Rise in the Government Deficit

A.  Introduction

President Trump is repeatedly asserting that the economy under his presidency (in contrast to that of his predecessor) is booming, with economic growth and jobs numbers that are unprecedented, and all a sign of his superb management skills.  The economy is indeed doing well, from a short-term perspective.  Growth has been good and unemployment is low.  But this is just a continuation of the trends that had been underway for most of Obama’s two terms in office (subsequent to his initial stabilization of an economy that was in freefall as he entered office).

However, and importantly, the recent growth and jobs numbers are only being achieved with a high and rising fiscal deficit.  Federal government spending is now growing (in contrast to sharp cuts between 2010 and 2014, after which it was kept largely flat until mid-2017), while taxes (especially for the rich and for corporations) have been cut.  This has led to standard Keynesian stimulus, helping to keep growth up, but at precisely the wrong time.  Such stimulus was needed between 2010 and 2014, when unemployment was still high and declining only slowly.  Imagine what could have been done then to re-build our infrastructure, employing workers (and equipment) that were instead idle.

But now, with the economy at full employment, such policy instead has to be met with the Fed raising interest rates.  And with rising government expenditures and falling tax revenues, the result has been a rise in the fiscal deficit to a level that is unprecedented for the US at a time when the country is not at war and the economy is at or close to full employment.  One sees the impact especially clearly in the amounts the US Treasury has to borrow on the market to cover the deficit.  It has soared in 2018.

This blog post will look at these developments, tracing them from 2008 (the year before Obama took office) through what the most recent data allow.  With this context, one can see what has been special, or not, under Trump.

First a note on sources:  Figures on real GDP, on foreign trade, and on government expenditures, are from the National Income and Product Accounts (NIPA) produced by the Bureau of Economic Analysis (BEA) of the Department of Commerce.  Figures on employment and unemployment are from the Bureau of Labor Statistics (BLS) of the Department of Labor.  Figures on the federal budget deficit are from the Congressional Budget Office (CBO).  And figures on government borrowing are from the US Treasury.

B.  The Growth in GDP and in the Number Employed, and the Unemployment Rate

First, what has happened to overall output, and to jobs?  The chart at the top of this post shows the growth of real GDP, presented in terms of growth over the same period one year before (in order to even out the normal quarterly fluctuations).  GDP was collapsing when Obama took office in January 2009.  He was then able to turn this around quickly, with positive quarterly growth returning in mid-2009, and by mid-2010 GDP was growing at a pace of over 3% (in terms of growth over the year-earlier period).  It then fluctuated within a range from about 1% to almost 4% for the remainder of his term in office.  It would have been higher had the Republican Congress not forced cuts in fiscal expenditures despite the continued unemployment.  But growth still averaged 2.2% per annum in real terms from mid-2009 to end-2016, despite those cuts.
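For those who want to reproduce such a chart, the growth figures are straightforward to compute from a quarterly series of real GDP (available from the BEA or from FRED): each quarter is compared to the same quarter one year before.  A minimal sketch, with made-up quarterly values, is below.

```python
# Sketch: growth of real GDP over the same quarter one year before,
# computed from a quarterly series.  The values here are made up for
# illustration; the actual series is available from the BEA or FRED.

quarterly_gdp = [18100, 18250, 18350, 18400, 18550, 18700, 18850, 18950]

yoy_growth = [
    100.0 * (quarterly_gdp[i] / quarterly_gdp[i - 4] - 1.0)
    for i in range(4, len(quarterly_gdp))
]
print([f"{g:.1f}%" for g in yoy_growth])
```

The rolling 12-month job growth figures discussed below are computed in the same spirit, by comparing total employment to its level one year before.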

GDP growth under Trump hit 3.0% (over the same period one year before) in the third quarter of 2018.  This is good.  And it is the best such growth since … 2015.  That is not really so special.

Net job growth has followed the same basic path as GDP:

 

Jobs were collapsing when Obama took office, he was quickly able to stabilize this with the stimulus package and other measures (especially by the Fed), and job growth resumed.  By late 2011, net job growth (in terms of rolling 12-month totals, which is the same as the increase over the number of jobs one year before) was over 2 million per year.  It went as high as 3 million by early 2015.  Under Trump, it hit 2 1/2 million by September 2018.  This is pretty good, especially with the economy now at or close to full employment.  And it is the best since … January 2017, the month Obama left office.

Finally, the unemployment rate:

Unemployment was rising rapidly as Obama was inaugurated, and hit 10% in late 2009.  It then fell, and at a remarkably steady pace.  It could have fallen faster had government spending not been cut back, but nonetheless it was falling.  And this has continued under Trump.  While commendable, it is not a miracle.

C.  Foreign Trade

Trump has also launched a trade war.  Starting in late 2017, high tariffs were imposed on imports of certain foreign-produced products, with such tariffs then raised and extended to other products when foreign countries responded (as one would expect) with tariffs of their own on selected US products.  Trump claims his new tariffs will reduce the US trade deficit.  As discussed in an earlier blog post, such a belief reflects a fundamental misunderstanding of how the trade balance is determined.

But what do we see in the data?:

The trade deficit has not been reduced – it has grown in 2018.  While it might appear there had been some recovery (reduction in the deficit) in the second quarter of the year, this was due to special factors.  Exports primarily of soybeans and corn to China (but also of other products, and to other countries where new tariffs were anticipated) were rushed out in that quarter in order to arrive before retaliatory tariffs were imposed (which they were – in July 2018 in the case of China).  But this was simply a bringing forward of products that, under normal conditions, would have been exported later.  And as one sees, the trade balance returned to its previous path in the third quarter.

The growing trade imbalance is a concern.  For 2018, it is on course for reaching 5% of GDP (when measured in constant prices of 2012).  But as was discussed in the earlier blog post on the determination of the trade balance, it is not tariffs which determine what that overall balance will be for the economy.  Rather, it is basic macro factors (the balance between domestic savings and domestic investment) that determine what the overall trade balance will be.  Tariffs may affect the pattern of trade (shifting imports and exports from one country to another), but they won’t reduce the overall deficit unless the domestic savings/investment balance is changed.  And tariffs have little effect on that balance.
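The underlying national accounts identity makes the point: private saving minus private investment, plus taxes minus government spending, equals exports minus imports.  Tariffs appear nowhere in that identity; they can change the overall trade balance only to the extent that they change saving or investment.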

And while the trend of a growing trade imbalance since Trump took office is a continuation of the trend seen in the years before, when Obama was president, there is a key difference.  Under Obama, the trade deficit did increase (become more negative), especially from its lowest point in the middle of 2009.  But this increase in the deficit was not driven by higher government spending – government spending on goods and services (both as a share of GDP and in constant dollar terms) actually fell.  That is, government savings rose (dissavings was reduced, as there was a deficit).  Private domestic savings was also largely unchanged (as a share of GDP).  Rather, what drove the higher trade deficit during Obama’s term was the recovery in private investment from the low point it had reached in the 2008/09 recession.

The situation under Trump is different.  Government spending is now growing, as is the government deficit, and this is driving the trade deficit higher.  We will discuss this next.

D.  Government Accounts

An increase in government spending is needed in an economic downturn to sustain demand so that unemployment will be reduced (or at least not rise by as much otherwise).  Thus government spending was allowed to rise in 2008, in the last year of the Bush administration, in response to the downturn that began in December 2007.  This continued, and was indeed accelerated, as part of the stimulus program passed by Congress soon after Obama took office.  But federal government spending on goods and services peaked in mid-2010, and after that fell.  The Republican Congress forced further expenditure cuts, and by late 2013 the federal government was spending less (in real terms) than it was in early 2008:

This was foolish.  Unemployment was over 9 1/2% in mid-2010, and still over 6 1/2% in late-2013 (see the chart of the unemployment rate above).  And while the unemployment rate did fall over this period, there was justified criticism that the pace of recovery was slow.  The cuts in government spending during this period acted as a major drag on the economy, holding back the pace of recovery.  Never before had a US administration done this in the period after a downturn (at least not in the last half-century for which I have examined the data).  Government spending grew especially rapidly under Reagan following the 1981/82 downturn.

Federal government spending on goods and services was then essentially flat in real terms from late 2013 to the end of Obama’s term in office.  And this more or less continued through FY2017 (the last budget of Obama), i.e. through the third quarter of CY2017.  But then, in the fourth quarter of CY2017 (the first quarter of FY2018, as the fiscal year runs from October to September), under the first full budget of Trump, federal government spending started to rise sharply.  See the chart above.  And this has continued.

There are certainly high priority government spending needs.  But the sequencing has been terribly mismanaged.  Higher government spending (e.g. to repair our public infrastructure) could have been carried out when unemployment was still high.  Utilizing idle resources, one would not only have put people to work, but also would have done this at little cost to the overall economy.  The workers were unemployed otherwise.

But higher government spending now, when unemployment is low, means that workers hired for government-funded projects have to be drawn from other activities.  While the unemployment rate can be squeezed downward some, and has been, there is a limit to how far this can go.  And since we are close to that limit, the Fed is raising interest rates in order to curtail other spending.

One sees this in the numbers.  Overall private fixed investment fell at an annual rate of 0.3% in the third quarter of 2018 (based on the initial estimates released by the BEA in late October), led by a 7.9% fall in business investment in structures (offices, etc.) and by a 4.0% fall in residential investment (homes).  While these are figures only for one quarter (there was a deceleration in the second quarter, but not an absolute fall), and can be expected to eventually change (with the economy growing, investment will at some point need to rise to catch up), the direction so far is worrisome.

And note also that this fall in the pace of investment has happened despite the huge cuts in corporate taxes from the start of this year.  Trump officials and Republicans in Congress asserted that the cuts in taxes on corporate profits would lead to a surge in investment.  Many economists (including myself, in the post cited above) noted that there was little reason to believe such tax cuts would spur corporate investment.  Such investment in the US is not now constrained by a lack of available cash to the corporations, so giving them more cash is not going to make much of a difference.  Rather, that windfall would instead lead corporations to increase dividends as well as share buybacks in order to distribute the excess cash to their shareholders.  And that is indeed what has happened, with share buybacks hitting record levels this year.

Returning to government spending, for the overall impact on the economy one should also examine such spending at the state and local level, in addition to the federal.  The picture is largely similar:

This mostly follows the same pattern as seen above for federal government spending on goods and services, with the exception that there was an increase in total government spending from early 2014 to early-2016, when federal spending was largely flat.  This may explain, in part, the relatively better growth in GDP seen over that period (see the chart at the top of this post), and then the slower pace in 2016 as all spending leveled off.

But then, starting in late-2017, total government expenditures on goods and services started to rise.  It was, however, largely driven by the federal government component.  Even though federal government spending accounted only for a bit over one-third (38%) of total government spending on goods and services in the quarter when Trump took office, almost two-thirds (65%) of the increase in government spending since then was due to higher spending by the federal government.  All this is classical Keynesian stimulus, but at a time when the economy is close to full employment.

So far we have focused on government spending on goods and services, as that is the component of government spending which enters directly as a component of GDP spending.  It is also the component of the government accounts which will in general have the largest multiplier effect on GDP.  But to arrive at the overall fiscal deficit, one must also take into account government spending on transfers (such as for Social Security), as well as tax revenues.  For these, and for the overall deficit, it is best to move to fiscal year numbers, where the Congressional Budget Office (CBO) provides the most easily accessible and up-to-date figures.

Tracing the overall federal fiscal deficit, now by fiscal year and in nominal dollar terms, one finds:

The deficit is now growing (the fiscal balance is becoming more negative) and indeed has been since FY2016.  What happened in FY2016?  Primarily there was a sharp reduction in the pace of tax revenues being collected.  And this has continued through FY2018, spurred further by the major tax cut bill of December 2017.  Taxes had been rising, along with the economic recovery, increasing by an average of $217 billion per year between FY2010 and FY2015 (calculated from CBO figures), but this then decelerated to a pace of just $26 billion per year between FY2015 and FY2018, and just $13 billion in FY2018.  The rate of growth in taxes between FY2015 and FY2018 was just 0.8%, less even than inflation.

Federal government spending, including on transfers, also rose over this period, but the step-up in its growth was smaller than the step-down in the growth of tax collections.  Overall federal government spending rose by an average of just $46 billion per year between FY2010 and FY2015 (a rate of growth of 1.3% per annum, or less than inflation in those years), and then by $140 billion per year (in nominal dollar terms) between FY2015 and FY2018.  But this step-up in overall spending (of $94 billion per year) was well less than the step-down in the pace of tax collection (a reduction of $191 billion per year, the difference between the $217 billion annual growth over FY2010-15 and the $26 billion annual growth over FY2015-18).

That is, about two-thirds (67%) of the increase in the fiscal deficit since FY2015 can be attributed to taxes being cut, and just one-third (33%) to spending going up.
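(The arithmetic:  the step-down in annual tax growth was $217 billion less $26 billion, or $191 billion, while the step-up in annual spending growth was $140 billion less $46 billion, or $94 billion.  Of the total swing of $285 billion, the $191 billion from taxes is about 67%, and the $94 billion from spending about 33%.)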

Looking forward, this is expected to get far worse.  As was discussed in an earlier post on this blog, the CBO is forecasting (in their most recent forecast, from April 2018) that the fiscal deficits under Trump will reach close to $1 trillion in FY2019, and will exceed 5% of GDP for most of the 2020s.  This is unprecedented for the US economy at full employment, other than during World War II.  Furthermore, these CBO forecasts are under the optimistic scenario that there will be no economic downturn over this period.  A stretch that long without a downturn has never happened before in the US.

Deficits need to be funded by borrowing.  And one sees an especially sharp jump in the net amount being borrowed in the markets in CY 2018:

 

These figures are for calendar years, and the number for 2018 includes what the US Treasury announced on October 29 it expects to borrow in the fourth quarter.  Note this borrowing is what the Treasury does in the regular, commercial, markets, and is a net figure (i.e. new borrowing less repayment of debt coming due).  It is the borrowing needed after taking into account the net impact of public trust fund operations (such as those of the Social Security Trust Fund) on Treasury funding needs.

The turnaround in 2018 is stark.  The US Treasury now expects to borrow in the financial markets, net, a total of $1,338 billion in 2018, up from $546 billion in 2017.  And this is at a time of low unemployment, in sharp contrast to 2008 to 2010, when the economy had fallen into the worst economic downturn since the Great Depression.  Tax revenues were then low (incomes were low) while spending needed to be kept up.  The last time unemployment was low and similar to what it is now, in the late-1990s during the Clinton administration, the fiscal accounts were in surplus.  They are far from that now.

E. Conclusion 

The economy has continued to grow since Trump took office, with GDP and employment rising and unemployment falling.  This has been at rates much the same as we saw under Obama.  There is, however, one big difference.  Fiscal deficits are now rising rapidly.  Such deficits are unprecedented for the US at a time when unemployment is low.  And the deficits have led to a sharp jump in Treasury borrowing needs.

These deficits are forecast to get worse in the coming years even if the economy should remain at full employment.  Yet there will eventually be a downturn.  There always has been.  And when that happens, deficits will jump even further, as taxes will fall in a downturn while spending needs will rise.

Other countries have tried populist economic policies such as those Trump is now following, where, despite already high fiscal deficits at a time of full employment, taxes are cut while government spending is raised.  They have always, in the end, led to disasters.