The Survey of Establishments Says Employment Is Rising, But the Survey of Households Says It Is Falling – Why?

A.  Introduction

Those who follow the monthly release of the Employment Situation report of the Bureau of Labor Statistics (with the most recent issue, for April, released on May 3) may have noticed something curious.  While the figures on total employment derived from the BLS survey of establishments reported strong growth, of an estimated 263,000 in April, the BLS survey of households (from which the rate of unemployment is estimated) reported that employment fell by 103,000.  And while there is month-to-month volatility in the figures (they are survey estimates, after all), this has now been happening for several months in a row:  The establishment survey has been reporting strong growth in employment while the household survey has been reporting a fall.  The one exception was February, when the current estimate from the establishment survey is that employment grew by a relatively modest 56,000 (higher than the initial estimate), while the household survey reported strong growth of 255,000 that month.

The chart above shows this graphically, with the figures presented in terms of their change relative to where they were in April 2017, two years ago.  For reasons we will discuss below, there is substantially greater volatility in the employment estimates derived from the household survey than in those derived from the establishment survey.  But even accounting for this, a significant gap appears to have opened up between the estimated growth in employment derived from the two sources.  Note also that the estimated labor force (derived from the household survey) has been going down recently.  The unemployment rate came down to just 3.6% in the most recent month not because estimated employment rose – it in fact fell by 103,000 workers.  Rather, the measured unemployment rate came down because the labor force fell by even more (by 490,000 workers).
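To make the arithmetic behind this concrete, here is a short sketch in Python.  The starting levels are rounded assumptions of my own (not the official figures); only the monthly changes of 103,000 and 490,000 come from the report.  It shows how the unemployment rate can fall even when employment falls, so long as the labor force falls by more:

```python
# Illustrative only: levels in thousands of workers, rounded assumptions.
labor_force_before = 163_000   # assumed starting level of the labor force
employed_before    = 156_800   # assumed starting level of employment

employed_after    = employed_before - 103      # employment fell by 103,000
labor_force_after = labor_force_before - 490   # labor force fell by 490,000

def unemployment_rate(employed, labor_force):
    """Unemployed share of the labor force, in percent."""
    return 100.0 * (labor_force - employed) / labor_force

print(round(unemployment_rate(employed_before, labor_force_before), 1))  # about 3.8
print(round(unemployment_rate(employed_after, labor_force_after), 1))    # about 3.6
```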

There are a number of reasons why the estimates from the two surveys differ, and this blog post will discuss what these are.  To start, and as the BLS tries to make clear, the concept of “employment” as estimated in the establishment survey is different from that as measured in the household survey.  They are measuring different, albeit close, things.  But there are other factors as well.

One can, however, work out estimates where the employment concepts are defined almost, but not quite, the same.  What is needed can be found in figures provided as part of the household survey.  We will look at those below and present the results in a chart similar to that above, but with employment figures from the household survey data adjusted (to the extent possible) to match the employment concept of the establishment survey.  But one finds that the gap that has opened up between the employment estimates of the two surveys remains, similar to that in the chart above.

There are residual differences in the two employment estimates.  And they follow a systematic pattern that appears to be correlated with the unemployment rate.  The final section below will look at this, and discuss what might be the cause.

The issues here are fairly technical ones, and this blog post may be of most interest to those interested in digging into the numbers and seeing what lies behind the headline figures that are the normal focus of news reports.  And while a consistent discrepancy appears to have opened up between the two estimates of employment growth, the underlying cause is not clear.  Nor are the implications for policy yet fully clear.  But the numbers may imply that we should be paying more attention to the much slower growth in the estimates of total employment derived from the household survey than to the figures from the establishment survey that we normally focus on.  We will find out in coming months whether the inconsistency that has developed signals a change in the employment picture, or simply reflects unusual volatility in the underlying data.

B.  The BLS Surveys of Establishments, and of Households

The monthly BLS employment report is based on findings from two monthly surveys the BLS conducts, one of establishments and a second of households.  As described by the BLS in the Technical Note that is released as part of each month’s report (and which we will draw upon here), they need both.  And while the surveys cover a good deal of material other than employment and related issues, we will focus here just on the elements relevant to the employment estimates.

The establishment survey covers primarily business establishments, but also includes government agencies, non-profits, and most other entities that employ workers for a wage.  However, the establishment survey does not include those employed in agriculture (for some reason, possibly some historical bureaucratic issue between agencies), as well as certain employment that cannot be covered by a survey of establishments.  Thus they do not cover the self-employed (if they work in an unincorporated business), nor unpaid family workers.  Nor do they cover those employed directly by households (e.g. for childcare).

But for the business establishments, government agencies, and other entities that they do cover, they are thorough.  They survey more than 142,000 establishments each month, covering 689,000 individual worksites, and this “sample” covers in all approximately one-third of all nonfarm employees.  This means they obtain direct figures each month on the employment of about 50 million workers (out of the approximately 150 million employed in the US), making this closer to a census than to a normal sample survey.  But the extensive coverage is necessary in order to arrive at statistically valid sample sizes for the detailed individual industries for which they provide figures.  And because of this giant sample size, the monthly employment figures cited publicly are normally taken from the establishment survey.

To arrive at unemployment rates and other figures, one must however survey households.  Businesses will know who they employ, but not who is unemployed.  And while the current household sample is 60,000 households, this is far smaller relative to the sample used for establishments (142,000 establishments) than it might appear.  A household will in general have just one or two workers, while a business establishment (or a government agency) could employ thousands.

Thus the much greater volatility seen in the employment estimates from the household survey should not be a surprise.  But they need the household survey to determine who is in the labor force.  They define this to be those adults age 16 or older who were either employed (even for just one hour, if paid) in the preceding week, or who, if not employed, were available for a job and were actively searching for one at some point in the four-week period before the week of the survey.  Only in this way can the BLS determine the share of the labor force that is employed, and the share unemployed.  The survey of establishments by its nature cannot provide such information no matter what its sample size.

For this and other reasons, the definition of what is covered in “employment” between these two surveys will differ.  In particular:

a)  As discussed above, the establishment survey does not cover employment in the agricultural sector.  While they could, in principle, include agriculture, for some reason they do not.  The household survey does include those in agriculture.

b)  The establishment survey also does not include the self-employed (unless they are running an incorporated business).  They only survey businesses (or government agencies and non-profits), and hence cannot capture those who are self-employed.

c)  The establishment survey also does not capture unpaid family workers.  The household survey counts them as part of the labor force and employed if they worked in the family business 15 hours or more in the week preceding the survey.

d)  The establishment survey, since it does not cover households, cannot include private household workers (such as those providing childcare services).  The household survey does.

e)  Each of the above will lead to the count in the household survey of those employed being higher than what is counted in the establishment survey.  Working in the opposite direction, someone holding two or more jobs will be counted in the establishment survey two or more times (once for each job they hold).  The establishment being surveyed will only know who is working for them, and not whether they are also working elsewhere.  The household survey, however, will count such a worker as just one employed person.

f) The household survey also counts as employed those who are on unpaid leave (such as maternity leave).  The establishment survey does not (although it is not clear to me why they couldn’t – it would improve comparability if they would).

g)  The household survey also only includes those aged 16 or older as possibly in the labor force and employed.  The establishment survey covers all its workers, whatever their age.

There are therefore important differences between the two surveys as to who is covered in the figures provided for “total employment”.  And while the BLS tries to make this clear, the differences are often ignored in references by, for example, the news media.  One can, however, adjust for most, but not all, of these differences.  The data required are provided in the BLS monthly report (for recent months), or online (for the complete series).  But how to do so is not made obvious, as the data series required are scattered across several different tables in the report.

I will discuss in more detail in the next section below what I did to adjust the household survey figures to the employment concept as used in the establishment survey.  Adjustments could be made for each of the categories (a) through (e) in the list above, but this was not possible for (f) and (g).  However, the latter are relatively small, with the residual difference following an interesting pattern that we will examine.

When those adjustments are made, the number of employed as estimated from the household survey, but reflecting (almost) the concept as estimated in the establishment survey, looks as follows:

 

While there are some differences between the adjusted estimates here and those in the chart at the top of this post (which used the household survey employment figures as published), the basic pattern remains.  Employment as estimated from the household survey (excluding those in agriculture, the self-employed, unpaid family workers, and household employees, and adjusted for multiple jobholders) is still growing, but over the last half year it grew at a much slower pace than what the establishment survey suggests.

C.  Adjustments Made to the Employment Estimates So They Will Reflect Similar Concepts

As noted above, adjustments were made to the employment figures to bring the two concepts of the different surveys into line with each other, to the extent possible.  While in principle one could have adjusted either, I chose to adjust the employment concept of the household survey to reflect the narrower employment concept of the establishment survey.  This was because the underlying data needed to make the adjustments all came from the household survey, and it was better to keep all of the adjustment figures from the same source.

Adjustments could be made to reflect each of the issues listed above in (a) through (e), but not for (f) or (g).  But there were still some issues among the (a) through (e) adjustments.  Specifically:

1)  I sought to work out the series going back to January 1980, in order to capture several business cycles, but not all of the data required went back that far.  Specifically, the series on those holding multiple jobs started only in January 1994, and the series on household employees only started in January 2000.

2)  I also worked, to the extent possible, with the seasonally adjusted figures (for the establishment survey as well as the household survey).  However, the figures on unpaid family workers and on household employees were only available without seasonal adjustment.  I was therefore forced to use the unadjusted series.  But since the numbers in these categories are quite small relative to the overall number employed, one does not see a noticeable difference in the graphs.

One can then compare, as a ratio, the figures for total employment as adjusted from the household survey to those from the establishment survey.  The ratio will equal 1.0 when the figures are the same.  This was done in steps (depending on how far back one could go with the data), with the result:

 

The curve in black, which can go back all the way to 1980, shows the ratio when the employment figure in the household survey is adjusted by taking out those who are self-employed (in unincorporated businesses) and those employed in agriculture.  The curve in blue, from 1994 onwards, then adds in one job for each of those holding multiple jobs.  The assumption being made is that those with multiple jobs almost always have two jobs.  The establishment survey would count these as two employees (at two different establishments), while the household survey will only count these as one person (holding more than one job).  Therefore adding a count of one for each person holding multiple jobs will bring the employment concepts used in the two surveys into alignment (and on the basis used in the establishment survey).

Finally, the curve in red subtracts out unpaid family workers in non-agricultural sectors (as those in the agricultural sector will have already been taken out when total employees in agriculture were subtracted), plus subtracts out household employees.  Neither of these series is available in seasonally adjusted form, but they are small relative to total employment, so this makes little difference.
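As a sketch of the bookkeeping these steps involve, the following lays out the full set of adjustments and the resulting ratio.  The variable names are mine and the numbers (in thousands) are purely illustrative; they are not the official BLS series or values:

```python
# Adjust the household-survey employment concept toward the establishment-survey
# concept, as described above.  All figures in thousands and purely illustrative.
household_employment   = 156_700  # total employed, household survey
agricultural_workers   =   2_400  # subtract: not covered by the establishment survey
self_employed_unincorp =   9_600  # subtract: self-employed in unincorporated businesses
unpaid_family_nonag    =      80  # subtract: unpaid family workers outside agriculture
private_household_emps =     700  # subtract: those employed directly by households
multiple_jobholders    =   7_800  # add: each holds (roughly) one additional job

adjusted_household_employment = (
    household_employment
    - agricultural_workers
    - self_employed_unincorp
    - unpaid_family_nonag
    - private_household_emps
    + multiple_jobholders
)

establishment_employment = 151_000  # illustrative establishment-survey total

# Ratio of the adjusted household figure to the establishment figure;
# it equals 1.0 when the two measures agree exactly.
ratio = adjusted_household_employment / establishment_employment
print(adjusted_household_employment, round(ratio, 3))
```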

What is interesting is that even with all these adjustments, the ratio of the adjusted figures for employment from the household survey to those from the establishment survey follows a regular pattern.  The ratio is low when unemployment is low (as it was in 2000, at the end of the Clinton administration, and to a lesser extent now).  And it is high when unemployment is high, such as in the mid-1980s during the Reagan administration (with a downturn that started in 1982) and again during the downturn of 2008/09 that began at the end of the Bush administration, with unemployment then peaking in 2010 before it started its steady recovery.

Keep in mind that the relative difference in the employment figures between the household survey (as adjusted) and the establishment survey is not large:  about 1% now and a peak of about 3% in 2009/10.  But there is a consistent difference.

Why?  In part there are still two categories of workers where we had no estimates available to adjust the figures from the household survey to align them with the employment concept of the establishment survey:  for those on unpaid leave (who are included as “employed” in the household survey but not in the establishment survey), and for those under age 16 who are working (who are not counted in the household survey but are counted as employees in the establishment survey).

These two categories of workers might account for the difference, but with no estimates available we cannot know whether they would fully account for it.  A more interesting question is whether these two categories might account for the correlation observed with unemployment.  We could speculate that during periods of high unemployment (such as 2009/10), the number taking unpaid leave might be relatively high (thus bumping up the ratio), and that those under age 16 may find it particularly hard, relative to others, to find jobs when unemployment is high (as employers can easily hire older workers then, with this then also bumping up the ratio relative to times when overall unemployment is low).  But this would just be speculation, and indeed more like an ex-post rationalization of what is observed than an explanation.

Still, despite the statistical noise seen in the chart, the basic pattern is clear.  And that is of a ratio that goes up and down with unemployment.  But it is not large.  Based on the change in the ratio from its average over May 2010 to April 2011 (a 12-month average, to smooth out the monthly fluctuations) to its average over May 2018 to April 2019, the monthly divergence in the employment growth figures would come to only 23,000 workers.  That is, the unexplained residual difference in recent years between the growth in employment as estimated by the household survey and as estimated by the establishment survey would be about 23,000 jobs per month.

But the differences in the estimates of the monthly change in employment between the (adjusted) series from the household survey and that from the establishment survey are much greater than this.  Between October 2018 and April 2019, employment in the adjusted household survey series grew by 65,000 per month on average.  In the establishment survey series the growth was 207,000 per month.  The difference (142,000) is much greater than the 23,000 that can be explained by whatever has been driving down the ratio between the two series since 2010 as unemployment has come down.  Or put another way, the 65,000 figure can be increased by 23,000 per month to 88,000 per month, from adding in the unexplained residual change we observe in the ratio between the two series in recent years.  That 88,000 increase in employment per month from the (adjusted) household survey figures is still substantially less than the 207,000 per month figure found in the establishment survey.
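The reconciliation arithmetic of this paragraph, laid out explicitly (all figures as cited above, in thousands of jobs per month):

```python
# Average monthly employment growth, October 2018 to April 2019 (thousands).
household_adjusted_growth = 65    # adjusted household survey series
establishment_growth      = 207   # establishment survey series
residual_trend            = 23    # unexplained trend in the ratio since 2010

print(establishment_growth - household_adjusted_growth)    # 142: the raw gap
print(household_adjusted_growth + residual_trend)          # 88: still far below 207
```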

D.  Conclusion

Due to the statistical noise in the employment estimates of the household series, one has to be extremely cautious in drawing any conclusions.  While a gap has opened up in the last half year between the growth in the employment estimates of the household survey and those of the establishment survey, it is still too early to say whether that gap reflects something significant or not.

The gap is especially large if one just looks at the “employment” figures as published.  Employment as recorded in the household survey has fallen between December 2018 and now, and has been essentially flat since October.  But the total employment concepts of the two surveys differ, so such a direct comparison is not terribly meaningful.  However, if the figures from the household survey are adjusted (to the extent possible) to match the employment concept of the establishment survey, there is still a large difference.  Employment (under this concept) grew by 207,000 per month in the establishment survey, but by just 88,000 per month in the adjusted household survey figures.

Whether this difference is significant is not yet clear, due to the statistical noise in the household survey figures.  But it might be a sign that employment growth has been less than the headline figures from the establishment survey suggest.  We will see in coming months whether this pattern continues, or whether one series starts tracking the other more closely (and if so, which one tracks which).

Allow the IRS to Fill In Our Tax Forms For Us – It Can and It Should

A.  Introduction

Having recently completed and filed this year’s income tax forms, it is timely to examine what impact the Republican tax bill, pushed quickly through Congress in December 2017 along largely party-line votes, has had on the taxes we pay and on the process by which we figure out what they are.  I will refer to the bill as the Trump/GOP tax bill as the new law reflected both what the Republican leadership in Congress wanted and what the Trump administration pushed for.

We already know well that the cuts went largely to the very well-off.  The chart above is one more confirmation of this.  It was calculated from figures in a recent report by the staff of the Joint Committee on Taxation of the US Congress, released on March 25, 2019 (report #JCX-10-19).  While those earning more than $1 million in 2019 will, on average, see their taxes cut by $64,428 per tax filing unit (i.e. generally households), those earning $10,000 or less will see a reduction of just $21.  And on the scale of the chart, it is indeed difficult to impossible even to see the bars depicting the reductions in taxes for those earning less than $50,000 or so.

The sharp bias in favor of the rich was discussed in a previous post on this blog, based there on estimates from a different group (the Tax Policy Center, a non-partisan think tank) but with similar results.  And while it is of course true that those who are richer will have more in taxes that can be cut (one could hardly cut $64,428 from a taxpayer earning less than $10,000), it is not simply the absolute amounts:  the taxes of the rich were also cut by far more than those of the poor as a share of income.  According to the Joint Committee on Taxation report cited above, those earning $30,000 or less will only see their taxes cut by 0.5% of their incomes, while those earning between $0.5 million and $1.0 million will see a cut of 3.1%.  That is more than six times as much as a share of incomes.  That is perverse.

And the overall average reduction in individual income taxes will only be a bit less than 10% of the tax revenues being paid before.  This is in stark contrast to the more than 50% reduction in corporate income taxes that we have already observed in what was paid by corporations in 2018.

Furthermore, while taxes for households in some income category may have on average gone down, the numerous changes made to the tax code in the Trump/GOP bill meant that for many it did not.  Estimates provided in the Joint Committee on Taxation report cited above (see Table 2 of the report) indicate that for 2019 a bit less than two-thirds of tax filing units (households) will see a reduction in their taxes of $100 or more, but more than one-third will see either no significant change (less than $100) or a tax increase.  The impacts vary widely, even for those with the same income, depending on a household’s particular situation.

But the Trump/GOP tax bill promised not just a reduction in taxes, but also a reduction in tax complexity, by eliminating loopholes and from other such measures.  The claim was that most Americans would then be able to fill in their tax returns “on a postcard”.  But as is obvious to anyone who has filed their forms this year, it is hardly that.  This blog post will discuss why this is so and why filling in one’s tax returns remains such a headache.  The fundamental reason is simple:  The tax system is not less complex than before, but more.

There is, however, a way to address this, and not solely by ending the complexity (although that would in itself be desirable).  Even with the tax code as complicated as it now is (and more so after the Trump/GOP bill), the IRS could complete for each of us a draft of what our filing would look like based on the information that the IRS already collects.  Those draft forms would match what would be due for perhaps 80 to 85% of us (basically almost all of those who take the standard deduction).  For that 80 to 85% one would simply sign the forms and return them along with a payment if taxes are due or a request for a refund if a refund is due.  Most remaining taxpayers would also be able to use these initial draft forms from the IRS, but for them as the base for what they would need to file.  In their cases, additions or subtractions would be made to reflect items such as itemized deductions (mostly) and certain special tax factors (for some) where the information necessary to complete such calculations would not have been provided in the normal flow of reports to the IRS.  And a small number of filers might continue to fill in all their forms as now.  That small number would be no worse than now, while life would be much simpler for the 95% or more (perhaps 99% or more) who could use the pre-filled in forms from the IRS either in their entirety or as a base to start from.

The IRS receives most of the information required to do this already for each of us (and all that is required for most of us).  But what would be different is that instead of the IRS using such information to check what we filed after the fact, and then impose a fine (or worse) if we made a mistake, the IRS would now use that same information to fill in the forms for us.  We would then review and check them, and if necessary or advantageous to our situation we could then adjust them.  We will discuss how such a tax filing system could work below.

B.  Our Tax Forms are Now Even More Complex Than Before

Trump and the Republican leaders in Congress promised that with the Trump/GOP tax bill, the tax forms we would need to file could, for most of us, fit just on a postcard.  And Treasury Secretary Steven Mnuchin then asserted that the IRS (part of Treasury) did just that.  But this is simply nonsense, as anyone who has had to struggle with the new Form 1040s (or even just looked at them) could clearly see.

Specifically:

a)  Form 1040 is not a postcard, but a sheet of paper (front and back), to which one must attach up to six separate schedules.  This previously all fit on one sheet of paper, but now one has to complete and file up to seven sheets just for the 1040 itself.

b)  Furthermore, there are no longer the forms 1040-EZ or 1040-A which were used by those with less complex tax situations.  Now everyone needs to work from a fully comprehensive Form 1040, and try to figure out what may or may not apply in their particular circumstances.

c)  The number of labeled lines on the old 1040 came to 79.  On the new forms (including the attached schedules) they come to 75.  But this is misleading, as what used to be counted as lines 1 through 6 on the old 1040 are now no longer counted (even though they are still there and are needed).  Including these, the total number of numbered lines comes to 81, or basically the same as before (and indeed more).

d)  Spreading out the old Form 1040 from one sheet of paper to seven does, however, lead to a good deal of extra white space.  This was likely done to give it (the first sheet) the “appearance” of a postcard.  But the forms would have been much easier to fill in, with less likelihood of error, if some of that white space had been used instead for sub-totals and other such entries so that all the steps needed to calculate one’s taxes were clear.

e)  Specifically, with the six new schedules, one has to carry over computations or totals from five of them (all but the last) to various lines on the 1040 itself.  But this was done, confusingly, in several different ways:  1)  The total from Schedule 4 was carried over to its own line (line 14) on the 1040.  It would have been best if all of them had been done this way, but they weren’t.  Instead, 2) The total from Schedule 2 was added to a number of other items, with the sum of those separate items then shown on line 11 of the 1040.  And 3) The total from Schedule 1 was added to the sum of what is shown on the lines above it (lines 1 through 5b of the 1040) and then recorded on line 6 of the 1040.

If this looks confusing, it is because it is.  I made numerous mistakes on this when completing my own returns (yes – I do these myself, as I really want to know how they are done).  I hope my final returns were done correctly.  And it is not simply me.  Early indications (as of early March) were that errors on this year’s tax forms were up by 200% over last year’s (i.e. they tripled).

f)  There is also the long-standing issue that the actual forms that one has to fill out are in fact substantially greater than those that one files, as one has to fill in numerous worksheets in order to calculate certain of the figures.  These worksheets should be considered part of the returns, and not hidden in the directions, in order to provide an honest picture of what is involved.  And they don’t fit on a postcard.

g)  But possibly what is most misleading about what is involved in filling out the returns is not simply what is on the 1040 itself, but also the need to include on the 1040 figures from numerous additional forms (for those that may apply).  Few if any of them will be applicable to one’s particular tax situation, but to know whether they do or not one has to review each of those forms and make such a determination.  How does one know whether some form applies when there is a statement on the 1040 such as “Enter the amount, if any, from Form xxxx”?  The only way to know is to look up the form (fortunately this can now be done on the internet), read through it along with the directions, and then determine whether it may apply to you.  Furthermore, in at least a few cases the only way to know whether the form applies to your situation is by filling it in and then comparing the result found to some other item to see whether filing that particular form is required.

There are more than a few such forms.  By my count, on the Form 1040 plus its Schedules 1 through 5 alone, there are amounts that might need to be entered from Forms 8814, 4972, 8812, 8863, 4797, 8889, 2106, 3903, SE, 6251, 8962, 2441, 8863, 8880, 5695, 3800, 8801, 1116, 4137, 8919, 5329, 5405, 8959, 8960, 965-A, 8962, 4136, 2439, and 8885.  Each of these forms may apply to certain taxpayers, but mostly only to a tiny fraction of them.  But all taxpayers will need to know whether they might apply to their particular situation.  They can often guess that they probably won’t (and it likely would be a good guess, as most of these forms only apply to a tiny sliver of Americans), but the only way to know for sure is to check each one out.

Filling out one’s individual income tax forms has, sadly, never been easy.  But it has now become worse.  And while the new look of the Form 1040 appears to be a result of a political decision by the Trump administration (“make it look like it could fit on a postcard”), the IRS should mostly not be blamed for the complexity.  That complexity is a consequence of tax law, as written by Congress, which finds it politically advantageous to reward what might be a tiny number of supporters (and campaign contributors) with some special tax break.  And when Congress does this, the IRS must then design a new form to reflect that new law, and incorporate it into the Form 1040 and now the new attached schedules.  And then everyone, not simply the tiny number of tax filers to whom it might in fact apply, must then determine whether or not it applies to them.

There are, of course, also more fundamental causes of the complexity in the tax code, which must then be reflected in the forms.  The most important is the decision by our Congress to tax different forms of income differently, where wages earned will in general be taxed at the highest rates (up to 37%) while capital gains (including dividends on stocks held for more than 60 days) are taxed at rates of just 20% or less.  And there are a number of other forms of income that are taxed at various rates (including now, under the Trump/GOP tax bill, an effectively lower tax rate for certain company owners on the incomes they receive from their companies, as well as new special provisions of benefit to real estate developers).  As discussed in an earlier post on this blog, there is no good rationale, economic or moral, to justify this.  It leads to complex tax calculations as the different forms of income must each be identified and then taxed at rates that interact with each other.  And it leads to tremendous incentives to try to shift your type of income, when you are in a position to do so, from wages, say, to a type taxed at a lower rate (such as stock options that will later be taxed only at the long-term capital gains rate).

Given this complexity, it is no surprise that most Americans turn either to professional tax preparers (accountants and others) to fill in their tax forms for them, or to special tax preparation software such as TurboTax.  Based on statistics for the 2018 tax filing season (for 2017 taxes), 72.1 million tax filers hired professionals to prepare their tax forms, or 51% of the 141.5 million tax returns filed.  The cost varies by what needs to be filed, but even assuming an average fee of just $500 per return, this implies a total of over $36 billion is being paid by taxpayers for just this service.

Most of the remaining 49% of tax filers (a bit over three-quarters of them) use tax preparation software for their returns.  But these are problematic as well.  There is also a cost (other than for extremely simple returns), and the software itself may not be that good.  A recent review by Consumer Reports found problems with each of the four major tax preparation software packages it tested (TurboTax, H&R Block, TaxSlayer, and TaxAct), and concluded they are not to be trusted.

And on top of this, there is the time the taxpayer must spend to organize all the records that will be needed in order to complete the tax returns – whether by a hired professional tax preparer, or by software, or by one’s own hand.  A 2010 report by a presidential commission examining options for tax reform estimated that Americans spend about 2.5 billion hours a year to do what is necessary to file their individual income tax returns, equivalent to $62.5 billion at an average time cost of $25 per hour.

Finally there are the headaches.  Figuring one’s taxes, even if a professional is hired to fill in the forms, is not something anyone wants to spend time on.

There is a better way.  With the information that is already provided to the IRS each year, the IRS could complete and provide to each of us a draft set of tax forms which would suffice (i.e. reflect exactly what our tax obligation is) for probably 80% or more of households.  And most of the remainder could use such draft forms as a base and then provide some simple additions or subtractions to arrive at what their tax obligation is.  The next section will discuss how this could be done.

C.  Have the IRS Prepare Draft Tax Returns for Each of Us

The IRS already receives, from employers, financial institutions, and others, information on the incomes provided to each of us during the tax year.  And these institutions then tell us each January what they provided to the IRS.  Employers tell us on W-2 forms what wages were paid to us, and financial institutions will tell us through various 1099 forms what was paid to us in interest, in dividends, in realized capital gains, in earnings from retirement plans, and from other such sources of returns on our investments.  Reports are also filed with the IRS for major transactions such as from the sale of a home or other real estate.

The IRS thus has very good information on our income each year.  Our family situation is also generally stable from year to year, although it can vary sometimes (such as when a child is born).  But basing an initial draft estimate on the household situation of the previous year will generally be correct, and can be amended when needed.  One could also easily set up an online system through which tax filers could notify the IRS when such events occur, to allow the IRS to incorporate those changes into the draft tax forms they next produce.

For most of those who take the standard deduction, the IRS could then fill in our tax forms exactly.  And most Americans take the standard deduction. Prior to the Trump/GOP tax bill, about 70% of tax filers did, and it is now estimated that with the changes resulting from the new tax bill, about 90% will.  Under the Trump/GOP tax bill, the basic standard deduction was doubled (while personal exemptions were eliminated, so not all those taking the standard deduction ended up better off).  And perhaps of equal importance, the deduction that could be taken on state and local taxes was capped at $10,000 while how much could be deducted on mortgage interest was also narrowed, so itemization was no longer advantageous for many (with these new limitations primarily affecting those living in states that vote for Democrats – not likely a coincidence).

The IRS could thus prepare filled in tax forms for each of us, based on information contained in what we had filed in earlier years and assuming the standard deduction is going to be taken.  But they would just be drafts.  They would be sent to us for our review, and if everything is fine (and for most of the 90% taking the standard deduction they would be) we would simply sign the forms and return them (along with a check if some additional tax is due, or information on where to deposit a refund if a tax refund is due).

But for the 10% where itemized deductions are advantageous, and for a few others who are in some special tax situation, one could either start with the draft forms and make additions or subtractions to reflect simple adjustments, or, if one wished, prepare a new set of forms reflecting one’s tax situation.  There would likely not be many of the latter, but it would be an option, and no worse than what is currently required of everyone.

For those making adjustments, the changes could simply be made at the end.  For example (and likely the most common such situation), suppose it was advantageous to take itemized deductions rather than the standard deduction.  One would fill in the regular Schedule A (as now), but then rather than recomputing all of the forms, one could subtract from the taxes due an amount based on what the excess was of the itemized deductions over the standard deduction, and one’s tax rate.  Suppose the excess of the itemized deductions over the standard deduction for the filer came to $1,000.  Then for the very rich (households earning over $600,000 a year after deductions), one would reduce the taxes due by 37%, or $370.  Those earning $400,000 to $600,000, in the 35% bracket, would subtract $350.  And so on down to the lower brackets, where those in the 12% bracket (those earning $19,050 to $77,400) would subtract $120 (and those earning less than $19,050 are unlikely to itemize).

[Side Note:  Why do the rich receive what is in effect a larger subsidy from the government than the poor do for what they itemize, such as for contributions to charities?  That is, why do the rich effectively pay just $630 for their contribution to a charity ($1,000 minus $370), while the poor pay $880 ($1,000 minus $120) for their contribution to possibly the exact same charity?  There really is no economic, much less moral, reason for this, but that is in fact how the US tax code is currently written.  As discussed in an earlier post on this blog, the government subsidy for such deductions could instead be set to be the same for all, at say a rate of 20% or so.  There is no reason why the rich should receive a bigger subsidy than the poor receive for the contributions they make.]

Another area where the IRS would not have complete information to compute the taxes due would be where the tax filer had sold a capital asset which had been purchased before 2010.  The IRS only started in 2010 to require that financial institutions report the cost basis for assets sold, and this cost basis is needed to compute capital gains (or losses).  But as time passes, a smaller and smaller share of assets sold will have been purchased before 2010.  The most important, for most people, will likely be the cost of the home they bought if before 2010, but such a sale will happen only once (unless they owned multiple real estate assets in 2010).

But a simple adjustment could be made to reflect the cost basis of such assets, similar to the adjustment for itemized deductions.  The draft tax forms filled in by the IRS would leave as blank (zero) the cost basis of the assets sold in the year for which it did not have a figure reported.  The tax filer would then determine what the cost basis of all such assets should be (as they do now), add them up, and then subtract 20% of that total cost basis from the taxes due (for those in the 20% bracket for long term capital gains, as most people with capital gains are, or use 15% or 0% if those tax brackets apply in their particular cases).
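As a sketch (and only a sketch) of how such end-of-form adjustments might work, the following applies a single marginal rate to the excess of itemized deductions over the standard deduction, and the long-term capital gains rate to the unreported cost basis.  The function names and example amounts are my own for illustration, not IRS procedure, and applying one rate to the whole amount is a simplification:

```python
# Illustrative adjustments to the taxes due on the IRS draft forms.

def itemized_deduction_adjustment(excess_over_standard, marginal_rate):
    """Reduction in taxes due from itemizing, e.g. a $1,000 excess at 37% -> $370."""
    return excess_over_standard * marginal_rate

def cost_basis_adjustment(unreported_cost_basis, capital_gains_rate=0.20):
    """Reduction in taxes due once the filer supplies the pre-2010 cost basis."""
    return unreported_cost_basis * capital_gains_rate

print(itemized_deduction_adjustment(1_000, 0.37))  # 370.0 for a filer in the top bracket
print(itemized_deduction_adjustment(1_000, 0.12))  # 120.0 for a filer in the 12% bracket
print(cost_basis_adjustment(50_000))               # 10000.0 off the taxes due at the 20% rate
```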

There will still be a few tax filers with more complex situations where the IRS draft computations are not helpful, who will want to do their own forms.  This is fine – there would always be that option.  But such individuals would still be no worse off than what is required now.  And their number is likely to be very small.  While a guess, I would say that what the IRS could provide to tax filers would be fully sufficient and accurate for 80 to 85% of Americans, and that simple additions or subtractions to the draft forms (as described above) would work for most of the rest.  Probably less than 5% of filers would need to complete a full set of forms themselves, and possibly less than 1%.

D. Final Remarks

Such an approach would be new for the US.  But there is nothing revolutionary about it.  Indeed, it is common elsewhere in the world.  Much of Western Europe already follows such an approach or some variant of it, in particular all of the Scandinavian countries as well as Germany, Spain, and the UK, and also Japan.  Small countries, such as Chile and Estonia, have it, as do large ones.

It has also often been proposed for the US.  Indeed, President Reagan proposed it as part of his tax reduction and simplification bill in 1985, then candidate Barack Obama proposed it in 2007 in a speech on middle class tax fairness, a presidential commission in 2010 included it as one of the proposals in its report on simplifying the tax system, and numerous academics and others have also argued in its favor.

It would also likely save money at the IRS.  The IRS already collects most of the information needed.  But that information is not then sent back to us in fully or partially filled in tax forms, but rather is used by the IRS after we file to check whether we got anything wrong.  And if we did, we then face a fine or possibly worse.  Completing our tax returns should not be a game of “gotcha” with the IRS, but rather an effort to ensure we have them right.

Such a reform has, however, been staunchly opposed by narrow interests who benefit from the current frustrating system.  Intuit, the seller of TurboTax software, has been particularly aggressive through its congressional lobbying and campaign contributions in using Congress to block the IRS from pursuing this, as has H&R Block.  They of course realize that if tax filing were easy, with the IRS completing most or all of the forms for us, there would be no need to spend what comes to billions of dollars for software from Intuit and others.  But the morality of a business using its lobbying and campaign contributions to ensure life is made particularly burdensome for the citizenry, so that it can then sell a product to make it easier, is something to be questioned.

One can, however, understand the narrow commercial interests of Intuit and the tax software companies.  One can also, sadly, understand the opposition of a number of conservative political activists, with Grover Norquist the most prominent and in the lead.  They have also aggressively lobbied Congress to block the IRS from making tax filing simpler.  They are ideologically opposed to taxes, and see the burden and difficulty in figuring out one’s taxes as a positive, not as a negative.  The hope is that the more people complain about how difficult it is to fill in their tax forms, the more they will be in favor of cutting taxes.  While that view of how people see taxes might well be accurate, what many may not realize is that the tax cuts of recent decades have led to greater complexity and difficulty, not less.  With new loopholes for certain narrow interests, and with income taxed differently depending on the source of that income (with income from wealth taxed at a much lower rate than income from labor), the system has become more complex while generating less revenue overall.

But it is perverse that Congress should legislate in favor of making life more difficult.  The tax system is indeed necessary and crucial, as Reagan correctly noted in his 1985 speech on tax reform, but as he also noted in that speech, there is no need to make taxes difficult to figure and file.  Most Americans, Reagan argued, should be able, and would be able under his proposals, to use what he called a “return-free” system, with the IRS working out the taxes due.

The system as proposed above would do this.  It would also be voluntary.  If one disagreed with the pre-filled in forms sent by the IRS, and could not make the simple adjustments (up or down) to the taxes due through the measures as discussed above, one could always fill in the entire set of forms oneself.  But for that small number of such cases this would just be the same as is now required for all.  Furthermore, if one really was concerned about the IRS filling in one’s forms for some reason (it is not clear what that might be), one could easily have a system of opting-out, where one would notify the IRS that one did not want the service.

The tax code itself should still be simplified.  There are many reforms that can and should be implemented, if there was the political will.  The 2010 presidential commission presented numerous options for what could be done.  But even with the current complex system, or rather especially because of the current complex system, there is no valid reason why figuring out and filing our taxes should be so difficult.  Let the IRS do it for us.

Taxes on Corporate Profits Have Continued to Collapse

 

The Bureau of Economic Analysis (BEA) released earlier today its second estimate of GDP growth in the fourth quarter of 2018.  (Confusingly, it was officially called the “third” estimate, but was only the second, as what would have been the first, due in January, was never done due to Trump shutting down most agencies of the federal government in December and January over his border wall dispute.)  Most public attention was rightly focussed on the downward revision in the estimate of real GDP growth in the fourth quarter, from a 2.6% annual rate estimated last month, to 2.2% now.  And current estimates are that growth in the first quarter of 2019 will be substantially less than that.

But there is much more in the BEA figures than just GDP growth.  The second report of the BEA also includes initial estimates of corporate profits and the taxes they pay (as well as much else).  The purpose of this note is to update an earlier post on this blog that examined what happened to corporate profit tax revenues following the Trump / GOP tax cuts of late 2017.  That earlier post was based on figures for just the first half of 2018.

We now have figures for the full year, and they confirm what had earlier been found – corporate profit tax revenues have indeed plummeted.  As seen in the chart at the top of this post, corporate profit taxes were in the range of only $150 to $160 billion (at annual rates) in the four quarters of 2018.  This was less than half the $300 to $350 billion range in the years before 2018.  And there is no sign that this collapse in revenues was due to special circumstances of one quarter or another.  We see it in all four quarters.

The collapse shows through even more clearly when one examines what they were as a share of corporate profits:

 

The rate fell from a range of generally 15 to 16%, and sometimes 17%, in the earlier years, to just 7.0% in 2018.  And it was an unusually steady rate of 7.0% throughout the year.  Note that under the Trump / GOP tax bill, the standard rate for corporate profit tax was cut from 35% previously to a new headline rate of 21%.  But the actual rate paid turned out (on average over all firms) to come to just 7.0%, or only one-third as much.  The tax bill proponents claimed that while the headline rate was being cut, they would close loopholes so the amount collected would not go down.  But instead loopholes were not only kept, but expanded, and revenues collected fell by more than half.

If the average corporate profit tax rate paid in 2018 had been not 7.0%, but rather at the rate it was on average over the three prior fiscal years (FY2015 to 2017) of 15.5%, an extra $192.2 billion in revenues would have been collected.

There was also a reduction in personal income taxes collected.  While the proportional fall was less, a much higher share of federal income taxes are now borne by individuals than by corporations.  (They were more evenly balanced decades ago, when the corporate profit tax rates were much higher – they reached over 50% in terms of the amount actually collected in the early 1950s.)  Federal personal income tax as a share of personal income was 9.2% in 2018, and again quite steady at that rate over each of the four quarters.  Over the three prior fiscal years of FY2015 to 2017, this rate averaged 9.6%.  Had it remained at that 9.6%, an extra $77.3 billion would have been collected in 2018.

The total reduction in tax revenues from these two sources in 2018 was therefore $270 billion.  While it is admittedly simplistic to extrapolate this out over ten years, if one nevertheless does (assuming, conservatively, real growth of 1% a year and price growth of 2%, for a total growth of about 3% a year), the total revenue loss would sum to $3.1 trillion.  And if one adds to this, as one should, the extra interest expense on what would now be a higher public debt (and assuming an average interest rate for government borrowing of 2.6%), the total loss grows to $3.5 trillion.
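The extrapolation is simple to reproduce.  Here is a sketch using the $270 billion first-year loss, growth of about 3% a year, and a 2.6% interest rate on the accumulated additional debt; the inputs come from the figures above, but the year-by-year path (treating 2018 as the first of the ten years) is an assumption of mine:

```python
# Ten-year extrapolation of the revenue loss, plus interest on the added debt.
first_year_loss = 270.0   # $ billions, from the 2018 figures above
growth_rate     = 0.03    # about 1% real growth plus 2% inflation
interest_rate   = 0.026   # assumed average rate on government borrowing

losses = [first_year_loss * (1 + growth_rate) ** t for t in range(10)]
total_revenue_loss = sum(losses)

debt = 0.0
total_interest = 0.0
for loss in losses:
    debt += loss                             # each year's loss adds to the debt
    total_interest += interest_rate * debt   # interest accrues on the accumulated debt

print(round(total_revenue_loss / 1000, 1))                     # about 3.1 ($ trillions)
print(round((total_revenue_loss + total_interest) / 1000, 1))  # about 3.5 ($ trillions)
```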

This is huge.  To give a sense of the magnitude, an earlier post on this blog found that revenues equal to the original forecast loss under the Trump / GOP tax plan (summing to $1.5 trillion over the next decade, and then continuing) would suffice to ensure the Social Security Trust Fund would be fully funded forever.  As things are now, if nothing is done the Trust Fund will run out in about 2034.  And Republicans insist that the gap is so large that nothing can be done, and that the system will have to crash unless retired seniors accept a sharp reduction in what are already low benefits.

But with losses under the Trump / GOP tax bill of $3.1 trillion over ten years, less than half of those losses would suffice to ensure Social Security could survive at contracted benefit levels.  One cannot argue that we can afford such a huge tax cut, but cannot afford what is needed to ensure Social Security remains solvent.

In the nearer term, the tax cuts have led to a large growth in the fiscal deficit.  Even the US Treasury itself is currently forecasting that the federal budget deficit will reach $1.1 trillion in FY2019 (5.2% of GDP), up from $779 billion in FY2018.  It is unprecedented to have such high fiscal deficits at a time of full employment, other than during World War II.  Proper fiscal management would call for something closer to a balanced budget, or even a surplus, in those periods when the economy is at full employment, while deficits should be expected (and indeed called for) during times of economic downturns, when unemployment is high.  But instead we are doing the opposite.  This will put the economy in a precarious position when the next economic downturn comes.  And eventually it will, as it always has.

End Gerrymandering by Focussing on the Process, Not on the Outcomes

A.  Introduction

There is little that is as destructive to a democracy as gerrymandering.  As has been noted by many, with gerrymandering the politicians are choosing their voters rather than the voters choosing their political representatives.

The diagrams above, in schematic form, show how gerrymandering works.  Suppose one has a state or region with 50 precincts, with 60% that are fully “blue” and 40% that are fully “red”, and where 5 districts need to be drawn.  If the blue party controls the process, they can draw the district lines as in the middle diagram, and win all 5 (100%) of the districts, with just 60% of the voters.  If, in contrast, the red party controls the process for some reason, they could draw the district boundaries as in the diagram on the right.  They would then win 3 of the 5 districts (60%) even though they only account for 40% of the voters.  It works by what is called in the business “packing and cracking”:  With the red party controlling the process, they “pack” as many blue voters as possible into a small number of districts (two in the example here, each with 90% blue voters), and then “crack” the rest by scattering them around in the remaining districts, each as a minority (three districts here, each with 40% blue voters and 60% red).
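The seat counts in this schematic example follow from simple arithmetic.  Here is a small sketch, with the district compositions taken from the diagrams above (ten precincts per district; a district goes to whichever party holds a majority of its precincts):

```python
# 50 precincts, 60% fully blue and 40% fully red, drawn into 5 districts of 10.
def seats_won_by_blue(districts):
    """districts: list of blue precinct counts, out of 10 precincts per district."""
    return sum(1 for blue_precincts in districts if blue_precincts > 5)

blue_drawn_map = [6, 6, 6, 6, 6]   # blue spread evenly: 60% of every district
red_drawn_map  = [9, 9, 4, 4, 4]   # blue "packed" into two districts, "cracked" across three

print(seats_won_by_blue(blue_drawn_map))  # 5: blue wins all 5 districts with 60% of voters
print(seats_won_by_blue(red_drawn_map))   # 2: blue wins only 2, so red wins 3 with 40% of voters
```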

Gerrymandering leads to cynicism among voters, with the well-founded view that their votes just do not matter.  Possibly even worse, gerrymandering leads to increased polarization, as candidates in districts with lines drawn to be safe for one party or the other do not need to worry about seeking to appeal to voters of the opposite party.  Rather, their main concern is that a more extreme candidate from their own party will not challenge them in a primary, where only those of their own party (and normally mostly just the more extreme voters in their party) will vote.  And this is exactly what we have seen, especially since 2010 when gerrymandering became more sophisticated, widespread, and egregious than ever before.

Gerrymandering has grown in recent decades both because computing power and data sources have grown increasingly sophisticated, and because a higher share of states have had a single political party able to control the process in full (i.e. with both legislative chambers, and the governor when a part of the process, all under a single party’s control).  And especially following the 2010 elections, this has favored the Republicans.  As a result, while there has been one Democratic-controlled state (Maryland) on common lists of the states with the most egregious gerrymandering, most of the states with extreme gerrymandering were Republican-controlled.  Thus, for example, Professor Samuel Wang of Princeton, founder of the Princeton Gerrymandering Project, has identified a list of the eight most egregiously gerrymandered states (by a set of criteria he has helped develop), where one (Maryland) was Democratic-controlled, while the remaining seven were Republican.  Or the Washington Post calculated across all states an average of the degree of compactness of congressional districts:  Of the 15 states with the least compact districts, only two (Maryland and Illinois) were liberal Democratic-controlled states.  And in terms of the “efficiency gap” measure (which I will discuss below), seven states were gerrymandered following the 2010 elections in such a way as to yield two or more congressional seats each in their favor.  All seven were Republican-controlled.

With gerrymandering increasingly common and extreme, a number of cases have gone to the Supreme Court to try to stop it.  However, the Supreme Court has failed as yet to issue a definitive ruling ending the practice.  Rather, it has so far skirted the issue by resolving cases on more narrow grounds, or by sending cases back to lower courts for further consideration.  This may soon change, as the Supreme Court has agreed to take up two cases (affecting lines drawn for congressional districts in North Carolina and in Maryland), with oral arguments scheduled for March 26, 2019.  But it remains to be seen if these cases will lead to a definitive ruling on the practice of partisan gerrymandering or not.

This is not because of a lack of concern by the court.  Even conservative Justice Samuel Alito has conceded that “gerrymandering is distasteful”.  But he, along with the other conservative justices on the court, has ruled against the court taking a position on the gerrymandering cases brought before it, in part, at least, out of the concern that they do not have a clear standard by which to judge whether any particular case of gerrymandering was constitutionally excessive.  This goes back to a 2004 case (Vieth v. Jubelirer) in which the four most conservative justices of the time, led by Justice Antonin Scalia, opined that there could not be such a standard, while the four liberal justices argued that there could.  Justice Anthony Kennedy, in the middle, issued a concurring opinion agreeing with the conservative justices that there was not then an acceptable such standard before them, but adding that he would not preclude the possibility of such a standard being developed at some point in the future.

Following this 2004 decision, political scientists and other scholars have sought to come up with such a standard.  Many have been suggested, such as a set of three tests proposed by Professor Wang of Princeton, or measures that compare the share of seats won to the share of votes cast, and more.  Probably the most attention in recent years has been given to the “efficiency gap” measure proposed by Professor Nicholas Stephanopoulos and Eric McGhee.  The efficiency gap is the gap between the two main parties in the “wasted votes” each party received in some past election in the state (as a share of total votes in the state), where a party’s wasted votes are the sum of all the votes for its losing candidates, plus the votes in excess of 50% in the districts its candidates won.  This provides a direct measure of the two basic tactics of gerrymandering, as described above:  “packing” as many voters of one party as possible into a small number of districts (where they might receive 80 or 90% of the votes, but with all those above 50% “wasted”), and “cracking” (splitting the remaining voters of that party across a large number of districts where they will each be in a minority and hence will lose, with those votes then also “wasted”).
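
To make the arithmetic concrete, here is a minimal sketch (in Python) of the efficiency gap calculation, using hypothetical two-party vote totals rather than data from any actual state.  It simply tallies each party’s wasted votes, as defined above, and divides the difference by the total votes cast.

```python
# A minimal sketch of the efficiency gap calculation.  The district vote
# totals below are invented for illustration only.

def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) pairs, one per district.
    Returns (gap, wasted_a, wasted_b); a positive gap means the map wastes
    more of party A's votes than party B's."""
    wasted_a = wasted_b = total_votes = 0
    for votes_a, votes_b in districts:
        district_total = votes_a + votes_b
        total_votes += district_total
        threshold = district_total / 2  # votes needed to win the district
        if votes_a > votes_b:
            wasted_a += votes_a - threshold  # winner's surplus above 50%
            wasted_b += votes_b              # all of the loser's votes
        else:
            wasted_b += votes_b - threshold
            wasted_a += votes_a
    return (wasted_a - wasted_b) / total_votes, wasted_a, wasted_b

# Party A "packed" into one district and "cracked" across the other three:
gap, _, _ = efficiency_gap([(90, 10), (45, 55), (45, 55), (45, 55)])
print(f"efficiency gap = {gap:.1%}")  # 37.5%, to party A's disadvantage
```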

But there are problems with each of these measures, including the widely touted efficiency gap measure.  It has often been the case in recent years, in our divided society, that like-minded voters live close to each other, and particular districts in the state will then, as a result, often see the winner of the district receive a very high share of the votes.  Thus, even with no overt gerrymandering, the efficiency gap as measured will appear large.  Furthermore, at the opposite end of this spectrum, the measure will be extremely sensitive if a few districts are close to 50/50.  A shift of just a few percentage points in the vote will then lead one party or the other to lose, and hence to see a big jump in its share of wasted votes (the 49% received by whichever party lost).

There is, however, a far more fundamental problem.  And that is that this is simply the wrong question to ask.  With all due respect to Justice Kennedy, and recognizing also that I am an economist and not a lawyer, I do not understand why the focus here is on the voting outcome, rather than on the process by which the district lines were drawn.  The voting outcome is not the standard by which other aspects of voter rights are judged.  Rather, the focus is on whether the process followed was fair and unbiased, with the outcome then whatever it is.

I would argue that the same should apply when district lines are drawn.  Was the process followed fair and unbiased?  The way to ensure that would be to remove the politicians from the process (both directly and indirectly), and to follow instead an automatic procedure by which district lines are drawn in accord with a small number of basic principles.

The next section below will first discuss the basic point that the focus when judging fairness and lack of bias should not be on whether we can come up with some measure based on the vote outcomes, but rather on whether the process that was followed to draw the district lines was fair and unbiased or not.  The section following will then discuss a particular process that illustrates how this could be done.  It would be automatic, and would produce a fair and unbiased drawing of voting district lines that meets the basic principles on which such a map should be based (districts of similar population, compactness, contiguity, and, to the extent consistent with these, respect for the boundaries of existing political jurisdictions such as counties or municipalities).  And while I believe this particular process would be a good one, I would not exclude that others are possible.  The important point is that the courts should require the states to follow some such process, and from the example presented we see that this is indeed feasible.  It is not an impossible task.

The penultimate section of the post will then discuss a few points that arise with any such system, and their implications.  The post will end with a brief section summarizing the key points.

B.  A Fair Voting System Should Be Judged Based on the Process, Not on the Outcomes

Voting rights are fundamental in any democracy.  But in judging whether some aspect of the voting system is proper, we do not try to determine whether (by some defined specific measure) the resulting outcomes were improperly skewed.

Thus, for example, we take as a basic right that our ballot may be cast in secret.  No government official, nor anyone else for that matter, can insist on seeing how we voted.  Suppose that some state passed a law saying a government-appointed official will look over the shoulder of each of us as we vote, to determine whether we did it “right” or not.  We would expect the courts to strike this down, as an inappropriate process that contravenes our basic voting rights.  We would not expect the courts to say that they should look at the subsequent voting outcomes, and try to come up with some specific measure which would show, with certainty, whether the resulting outcomes were excessively influenced or not.  That would of course be absurd.

As another absurd example, suppose some state passed a law granting those registered in one of the major political parties, but not those registered in the other, access to more early days of voting.  This would be explicitly partisan, and one would assume that the courts would not insist on limiting their assessment to an examination of the later voting outcomes to see whether, by some proposed measure, the resulting outcomes were excessively affected.  The voting system, to be fair, should not lead to a partisan advantage for one party or the other.  But gerrymandering does precisely that.

Yet the courts have so far declined to issue a definitive ruling on partisan gerrymandering, and have asked instead whether there might be some measure to determine, in the voting outcomes, whether gerrymandering had led to an excessive partisan advantage for the party drawing the district lines.  And there have been open admissions by senior political figures that district borders were in fact drawn up to provide a partisan advantage.  Indeed, principals involved in the two cases now before the Supreme Court have openly said that partisan advantage was the objective.  In North Carolina, David Lewis, the Republican chair of the committee in the state legislature responsible for drawing up the district lines, said during the debate that “I think electing Republicans is better than electing Democrats. So I drew this map to help foster what I think is better for the country.”

And in the case of Maryland, the Democratic governor of the state in 2010 at the time the congressional district lines were drawn, Martin O’Malley, spoke out in 2018 in writing and in interviews openly acknowledging that he and the Democrats had drawn the district lines for partisan advantage.  But he also now said that this was wrong and that he hoped the Supreme Court would rule against what they had done.

But how to remove partisanship when district lines are drawn?  As long as politicians are directly involved, with their political futures (and those of their colleagues) dependent on the district lines, it is human nature that biases will enter.  And it does not matter whether the biases are conscious and openly expressed, or unconscious and denied.  Furthermore, although possibly diminished, such biases will still enter even with independent commissions drawing the district lines.  There will be some political process by which the commissioners are appointed, and those who are appointed, even if independent, will still be human and will have certain preferences.

The way to address this would rather be to define some automatic process which, given the data on where people live and the specific principles to follow, will be able to draw up district lines that are both fair (follow the stated principles) and unbiased (are not drawn up in order to provide partisan advantage to one party).  In the next section I will present a particular process that would do this.

C.  An Automatic Process to Draw District Lines that are Fair and Unbiased

The boundaries for fair and unbiased districts should be drawn in accord with the following set of principles (and no more):

a)  One Person – One Vote:  Each district should have a similar population;

b)  Contiguity:  Each district must be geographically contiguous.  That is, one continuous boundary line will encompass the entire district and nothing more;

c)  Compactness:  While remaining consistent with the above, districts should be as compact as possible under some specified measure of compactness.

And while not such a fundamental principle, a reasonable objective is also, to the extent possible consistent with the basic principles above, that the district boundaries drawn should follow the lines of existing political jurisdictions (such as of counties or municipalities).

There will still be a need for decisions to be made on the basic process to follow and then on a number of the parameters and specific rules required for any such process.  Individual states will need to make such decisions, and can do so in accordance with their traditions and with what makes sense for their particular state.  But once these “rules of the game” are fully specified, there should then be a requirement that they will remain locked in for some lengthy period (at least to beyond whenever the next decennial redistricting will be needed), so that games cannot be played with the rules in order to bias a redistricting that may soon be coming up.  This will be discussed further below.

Such specific decisions will need to be made in order to fully define the application of the basic principles presented above.  To start, for the one person – one vote principle the Supreme Court has ruled that a 10% margin in population between the largest and smallest districts is an acceptable standard.  And many states have indeed chosen to follow this standard.  However, a state could, if it wished, choose to use a tighter standard, such as a margin in the populations between the largest and smallest districts of no more than 8%, or perhaps 5% or whatever.  A choice needs to be made.

Similarly, a specific measure of compactness will need to be specified.  Mathematically there are several different measures that could be used, but a good one which is both intuitive and relatively easy to apply is that the sum of the lengths of all the perimeters of each of the districts in the state should be minimized.  Note that since the outside borders of the state itself are fixed, this sum can be limited just to the perimeters that are internal to the state.  In essence, since states are to be divided up into component districts (and exhaustively so), the perimeter lines that do this with the shortest total length will lead to districts that are compact.  There will not be wavy lines, nor lines leading to elongated districts, as such lines will sum to a greater total length than possible alternatives.
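
As an illustration of how this measure behaves, the sketch below scores two toy districting plans laid out on a small grid of equal-sized blocks, counting each shared edge between blocks assigned to different districts as one unit of internal perimeter.  A real application would of course use actual census geography and boundary lengths; the grid, the district assignments, and the function name here are purely illustrative assumptions.

```python
# A toy illustration of the perimeter-based compactness measure: the total
# length of district boundary lines internal to the state, approximated here
# by counting shared edges between adjacent grid blocks in different districts.

def internal_perimeter(assignment):
    """assignment[r][c] is the district id of the block in row r, column c."""
    rows, cols = len(assignment), len(assignment[0])
    length = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and assignment[r][c] != assignment[r][c + 1]:
                length += 1  # edge between horizontally adjacent blocks
            if r + 1 < rows and assignment[r][c] != assignment[r + 1][c]:
                length += 1  # edge between vertically adjacent blocks
    return length

# Two ways to split a 4x4 "state" into two equal-population districts:
compact_plan = [[1, 1, 2, 2] for _ in range(4)]   # a straight split
snaking_plan = [[1, 2, 1, 2] for _ in range(4)]   # interleaved, elongated strips
print(internal_perimeter(compact_plan), internal_perimeter(snaking_plan))  # 4 vs 12
```

The compact plan has the shorter total internal perimeter, which is exactly what the minimization criterion rewards.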

What, then, would be a specific process (or algorithm) which could be used to draw district lines?  I will recommend one here, which should work well and would be consistent with the basic principles for a fair and unbiased set of district boundaries.  But other processes are possible.  A state could choose some such alternative (but then should stick to it).  The important point is that one should define a fully specified, automatic, and neutral process to draw such district lines, rather than try to determine whether some set of lines, drawn based on the “judgment” of politicians or of others, was “excessively” gerrymandered based on the voting outcomes observed.

Finally, the example will be based on what would be done to draw congressional district lines in a state.  But one could follow a similar process for drawing other such district lines, such as for state legislative districts.

The process would follow a series of steps:

Step 1: The first step would be to define a set of sub-districts within each county in a state (parish in Louisiana) and municipality (in those states where municipalities hold similar governmental responsibilities as a county).  These sub-districts would likely be the districts for county boards or legislative councils in most of the states, and one might typically have a dozen or more of these in such jurisdictions.  When those districts are also being redrawn as part of the decennial redistricting process, then they should be drawn first (based on the principles set out here), before the congressional district lines are drawn.

Each state would define, as appropriate for the institutions of that specific state, the sub-districts that will be used for the purpose of drawing the congressional district lines.  And if no such sub-jurisdictions exist in certain counties of certain states, one could draw up such sub-districts, purely for the purposes of this redistricting exercise, by dividing such counties into compact (based on minimization of the sum of the perimeters), equal population, districts.  While the number of such sub-districts would be defined (as part of the rules set for the process) based on the population of the affected counties, a reasonable number might generally be around 12 or 15.

These sub-districts will then be used in Step 4 below to even out the congressional districts.

Step 2:  An initial division of each state into a set of tentative congressional districts would then be drawn based on minimizing the sum of the lengths of the perimeter lines for all the districts, and requiring that all of the districts in the state have exactly the same population.  Following the 2010 census, the average population in a congressional district across the US was 710,767, but the exact number will vary by state depending on how many congressional seats the state was allocated.

Step 3: This first set of district lines will not, in general, follow county and municipal lines.  In this step 3, the initial set of district lines would then be shifted to the county or municipal line which is geographically closest to it (as defined by minimizing the geographic area that would be shifted in going to that county or city line, in comparison to whatever the alternative jurisdiction would be).  If the populations in the resulting congressional districts are then all within the 10% margin for the populations (or whatever percent margin is chosen by the state) between the largest and the smallest districts, then one is finished and the map is final.

Step 4:  But in general, there may be one or more districts where the resulting population exceeds or falls short of the 10% limit.  One would then make use of the political subdivisions of the counties and municipalities defined in Step 1 to bring them into line.  A specific set of rules for that process would need to be specified.  One such set would be to first determine which congressional district, as then drawn, deviated most from what the mean population should be for the districts in that state.  Suppose that district had too large of a population.  One would then shift one of the political subdivisions in that district from it to whichever adjacent congressional district had the least population (of all adjacent districts).  And the specific political subdivision shifted would then be the one which would have the least adverse impact on the measure of compactness (the sum of perimeter lengths).  Note that the impact on the compactness measure could indeed be positive (i.e. it could make the resulting congressional districts more compact), if the political subdivision eligible to be shifted were in a bend in the county or city line.

If the resulting congressional districts were all now within the 10% population margin (or whatever margin the state had chosen as its standard), one would be finished.  But if this is not the case, then one would repeat Step 4 over and over as necessary, each time for whatever district was then most out of line with the 10% margin.

That is it.  The result would be contiguous and relatively compact congressional districts, each with a similar population (within the 10% margin, or whatever margin is decided upon), and following borders of counties and municipalities or of political sub-divisions within those entities.
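
To illustrate that Step 4 is a purely mechanical procedure once the rules are fixed, here is a simplified sketch (in Python) of the rebalancing loop.  It handles only the case where the most deviant district is over-populated, and it relies on hypothetical helper functions (adjacent_districts and compactness_cost) standing in for the geographic data and perimeter calculations a real implementation would need.  It is meant as an illustration of the logic, not a definitive implementation.

```python
# A simplified sketch of the Step 4 rebalancing loop.  The helper functions
# and the termination rule are illustrative assumptions, not a full spec.

def rebalance(districts, pop, adjacent_districts, compactness_cost, max_margin=0.10):
    """districts: dict {district_id: set of sub-district ids}
    pop: dict {sub_district_id: population}
    adjacent_districts(sub, current): ids of districts, other than `current`,
        that sub-district `sub` borders
    compactness_cost(sub, src, dst): change in total perimeter if `sub` moves src -> dst
    Shifts sub-districts until the largest and smallest district populations
    are within max_margin (measured against the mean district population)."""
    mean_pop = sum(pop[s] for subs in districts.values() for s in subs) / len(districts)

    while True:
        pops = {d: sum(pop[s] for s in subs) for d, subs in districts.items()}
        if (max(pops.values()) - min(pops.values())) / mean_pop <= max_margin:
            return districts  # within the allowed population margin

        # The district deviating most above the mean (over-populated case only).
        worst = max(pops, key=lambda d: pops[d] - mean_pop)
        # Possible moves: sub-districts of `worst` that border another district.
        moves = [(sub, dst) for sub in districts[worst]
                 for dst in adjacent_districts(sub, worst)]
        if not moves:
            return districts  # no feasible shift; a fuller rule set would handle this

        # As in Step 4: shift into whichever adjacent district has the least
        # population, choosing the sub-district whose move hurts compactness
        # least (the impact could even be positive).
        target = min({dst for _, dst in moves}, key=lambda d: pops[d])
        sub = min((s for s, d in moves if d == target),
                  key=lambda s: compactness_cost(s, worst, target))
        districts[worst].remove(sub)
        districts[target].add(sub)
```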

This would of course all be done on the computer, and it can be, once the rules and parameters are all decided, as there will then no longer be a role for opinion nor an opportunity for political bias to enter.  And while the initial data entry will be significant (as one would need to have the populations and perimeter lengths of each of the political subdivisions, and those of the counties and municipalities that they add up to), such data are now available from standard sources.  Indeed, the data entry needed would be far less than what is typically required for the computer programs used by our politicians to draw up their gerrymandered maps.

D.  Further Remarks

A few more points:

a)  The Redistricting Process, Once Decided, Should be Locked In for a Long Period:  As was discussed above, each state will need to make a series of decisions to define fully the specific process it chooses to follow.  As illustrated in the case discussed above, states will need to decide on matters such as what the maximum margin of the populations between the largest and smallest districts will be (no more than 10%, by Supreme Court decision, but it could be less).  And rules will need to be set on, also as in the case discussed above, what measure of compactness to use, or the criterion by which a district is chosen first to have a shift of a sub-district in order to even out the population differences, and so on.

Such decisions will have an impact on the final districts arrived at.  And some of those districts will favor Republicans and some will favor Democrats, purely by chance.  There would then be a problem if the redistricting were controlled by one party in the state, and that party (through consultants who specialize in this) tried out dozens if not hundreds of possible choices on the parameters to see which would turn out to be most advantageous to it.  While the impact would be far less than what we have now with deliberate gerrymandering, there could still be some effect.

To stem this, one should require that once choices are made on the process to follow and on the rules and other parameters needed to implement that process, there could not then be a change in that process for the immediately upcoming decennial redistricting.  Any changes would apply only to later redistrictings.  While this would not be possible for the very first application of the system, there will likely be a good deal of public attention paid to these issues initially, so such an attempt to bias the system would be difficult.

As noted, this is not likely to be a major problem, and any such system will not introduce the major biases we have seen in the deliberately gerrymandered maps of numerous states following the 2010 census.  But by locking in any decisions made for a long period, where any random bias in favor of one party in a map might well be reversed following the next census, there will be less of a possibility to game the system by changing the rules, just before a redistricting is due, to favor one party.

b)  Independent Commissions Do Not Suffice  – They Still Need to Decide How to Draw the District Maps:  A reform that has been increasingly advocated by many in recent years is to take the redistricting process out of the hands of the politicians, and instead to appoint independent commissions to draw up the maps.  There are seven states currently with non-partisan or bipartisan, nominally independent, commissions that draw the lines for both congressional and state legislative districts, and a further six who do this for state legislative districts only.  Furthermore, several additional states will use such commissions starting with the redistricting that follows the 2020 census.  Finally, there is Iowa.  While technically not an independent commission, district lines in Iowa are drawn up by non-partisan legislative staff, with the state legislature then approving it or not on a straight up or down vote.  If not approved, the process starts over, and if not approved after three votes it goes to the Iowa Supreme Court.

While certainly a step in the right direction, a problem with such independent commissions is that the process by which members are appointed can be highly politicized.  And even if not overtly politicized, the members appointed will have personal views on who they favor, and it is difficult even with the best of intentions to ensure such views do not enter.

But more fundamentally, even a well-intentioned independent commission will need to make choices on what is, and what is not, a “good” district map.  While most states list certain objectives for the redistricting process in either their state constitutions or in legislation, these are typically vague, such as saying the maps should try to preserve “communities of interest”, but with no clarity on what this in practice means.  Thirty-eight states also call for “compactness”, but few specify what that really means.  Indeed, only two states (Colorado and Iowa) define a specific measure of compactness.  Both states say that compactness should be measured by the sum of the perimeter lines being minimized (the same measure I used in the process discussed above).  However, in the case of Iowa this is taken along with a second measure of compactness (the absolute value of the difference between the length and the width of a district), and it is not clear how these two criteria are to be judged against each other when they differ.  Furthermore, in all states, including Colorado and Iowa, the compactness objective is just one of many objectives, and how to judge tradeoffs between the diverse objectives is not specified.

Even a well-intentioned independent commission will need to have clear criteria to judge what is a good map and what is not.  But once these criteria are fully specified, there is then no need for further opinion to enter, and hence no need for an independent commission.

c)  Appropriate and Inappropriate Principles to Follow: As discussed above, the basic principles that should be followed are:  1) One person – One vote, 2) Contiguity, and 3) Compactness.  Plus, to the extent possible consistent with this, the lines of existing political jurisdictions of a state (such as counties and municipalities) should be respected.

But while most states do call for this (with one person – one vote required by Supreme Court decision, but decided only in 1964), they also call for their district maps to abide by a number of other objectives.  Examples include the preservation of “communities of interest”, as discussed above, where 21 states call for this for their state legislative districts and 16 for their congressional districts (where one should note that congressional districting is not relevant in 7 states as they have only one member of Congress).  Further examples of what are “required” or “allowed” to be considered include preservation of political subdivision lines (45 states); preservation of “district cores” (8 states); and protection of incumbents (8 states).  Interestingly, 10 states explicitly prohibit consideration of the protection of incumbents.  And various states include other factors to consider or not consider as well.

But many, indeed most, of these considerations are left vague.  What does it mean that “communities of interest” are to be preserved where possible?  Who defines what the relevant communities are?  What is the district “core” that is to be preserved?  And as discussed above, there is a similar issue with the stated objective of “compactness”, as while 38 states call for it, only Colorado and Iowa are clear on how it is defined (but then vague on what trade-offs are to be accepted against other objectives).

The result of such multiple objectives, mostly vaguely defined and with no guidance on trade-offs, is that it is easy to come up with the heavily gerrymandered maps we have seen and the resulting strong bias in favor of one political party over the other.  Any district can be rationalized in terms of at least one of the vague objectives (such as preserving a “community of interest”).  These are loopholes which allow the politicians to draw maps favorable to themselves, and should be eliminated.

d)  Racial Preferences: The US has a long history of using gerrymandering (as well as other measures) to effectively disenfranchise minority groups, in particular African-Americans.  This has been especially the case in the American South, under the Jim Crow laws that were in effect through to the 1960s.  The Voting Rights Act of 1965 aimed to change this.  It required states (in particular under amendments to Section 2 passed in 1982 when the Act was reauthorized) to ensure minority groups would be able to have an effective voice in their choice of political representatives, including, under certain circumstances, through the creation of congressional and other legislative districts where the previously disenfranchised minority group would be in the majority (“majority-minority districts”).

However, it has not worked out that way.  Indeed, the creation of majority-minority districts, with African-Americans packed into as small a number of districts as possible and with the rest then scattered across a large number of remaining districts, is precisely what one would do under classic gerrymandering (packing and cracking) designed to limit, not enable, the political influence of such groups.  With the passage of these amendments to the Voting Rights Act in 1982, and then a Supreme Court decision in 1986 which upheld this (Thornburg v. Gingles), Republicans realized in the redistricting following the 1990 census that they could then, in those states where they controlled the process, use this as a means to gerrymander districts to their political advantage.  Newt Gingrich, in particular, encouraged this strategy, and the resulting Republican gains in the South in 1992 and 1994 were an important factor in leading to the Republican take-over of the Congress following the 1994 elections (for the first time in 40 years), with Gingrich then becoming the House Speaker.

Note also that while the Supreme Court, in a 5-4 decision in 2013, essentially gutted a key section of the Voting Rights Act, the section they declared to be unconstitutional was Section 5.  This was the section that required pre-approval by federal authorities of changes in voting statutes in those jurisdictions of the country (mostly the states of the South) with a history of discrimination as defined in the statute.  Left in place was Section 2 of the Voting Rights Act, the section under which the gerrymandering of districts on racial lines has been justified.  It is perhaps not surprising that Republicans have welcomed keeping this Section 2 while protesting Section 5.

One should also recognize that this racial gerrymandering of districts in the South has not led to most African-Americans in the region being represented in Congress by African-Americans.  One can calculate from the raw data (reported here in Ballotpedia, based on US Census data) that, as of 2015, 12 of the 71 congressional districts in the core South (Louisiana, Mississippi, Alabama, Georgia, South Carolina, North Carolina, Virginia, and Tennessee) had a majority of African-American residents.  These consisted of a single district in each of the states, except for two in North Carolina and four in Georgia.  But the majority of African-Americans in those states did not live in those twelve districts.  Of the 13.2 million African-Americans in those eight states, just 5.0 million lived in those twelve districts, while 8.2 million were scattered around the remaining districts.  By packing as many African-Americans as possible into a small number of districts, the Republican legislators were able to create a large number of safe districts for their own party, and the African-Americans in those districts effectively had little say in who was then elected.

The Voting Rights Act was an important step forward, drafted in reaction to the Jim Crow laws that had effectively undermined the right to vote of African-Americans.  And defined relative to the Jim Crow system, it was progress.  However, relative to a system that draws up district lines in a fair and unbiased manner, it would be a step backwards.  A system where minorities are packed into a small number of districts, with the rest then scattered across most of the districts, is just standard gerrymandering designed to minimize, not to ensure, the political rights of the minority groups.

E.  Conclusion

Politicians drawing district lines to favor one party and to ensure their own re-election fundamentally undermines democracy.  Supreme Court justices have themselves called it “distasteful”.  However, to address gerrymandering the court has sought some measure which could be used to ascertain whether the resulting voting outcomes were biased to a degree that could be considered unconstitutional.

But this is not the right question.  One does not judge other aspects of whether the voting process is fair or not by whether the resulting outcomes were by some measure “excessively” affected or not.  It is not clear why such an approach, focused on vote outcomes, should apply to gerrymandering.  Rather, the focus should be on whether the process followed was fair and unbiased or not.

And one can certainly define a fair and unbiased process to draw district lines.  The key is that the process, once established, should be automatic and follow the agreed set of basic principles that define what the districts should be – that they should be of similar population, compact, contiguous, and where possible and consistent with these principles, follow the lines of existing political jurisdictions.

One such process was outlined above.  But there are other possibilities.  The key is that the courts should require, in the name of ensuring a fair vote, that states must decide on some such process and implement it.  And the citizenry should demand the same.

Market Competition as a Path to Making Medicare Available for All

A.  Introduction

Since taking office just two years ago, the Trump administration has done all it legally could to undermine Obamacare.  The share of the US population without health insurance had been brought down to historic lows under Obama, but it has now moved back up, with roughly half of the gains now lost.  The chart above (from Gallup) traces its path.

This vulnerability of health cover gains to an antagonistic administration has led many Democrats to look for a more fundamental reform that would be better protected.  Many are now calling for an expansion of the popular and successful Medicare program to the full population – it is currently restricted just to those aged 65 and above.  Some form of Medicare-for-All has now been endorsed by most of the candidates that have so far announced they are seeking the Democratic nomination to run for president in 2020, although the specifics differ.

But while Medicare-for-All is popular as an ultimate goal, the path to get there as well as specifics on what the final structure might look like are far from clear (and differ across candidates, even when different alternatives are each labeled “Medicare-for-All”).  There are justifiable concerns on whether there will be disruptions along the way.  And the candidates favoring Medicare-for-All have yet to set out all the details on how that process would work.

But there is no need for the process to be disruptive.  The purpose of this blog post is to set out a possible path where personal choice in a system of market competition can lead to a health insurance system where Medicare is at least available for all who desire it, and where the private insurance that remains will need to be at least as efficient and as attractive to consumers as Medicare.

The specifics will be laid out below, but briefly, the proposal is built around two main observations.  One is that Medicare is a far more efficient, and hence lower cost, system than private health insurance is in the US.  As was discussed in an earlier post on this blog, administrative expenses account for only 2.4% of the cost of traditional Medicare.  All the rest (97.6%) goes to health care providers.  Private health insurers, in contrast, have non-medical expenses of 12% of their total costs, or five times as much.  Medicare is less costly to administer as it is a simpler system and enjoys huge economies of scale.  Private health insurers, in contrast, have set up complex systems of multiple plans and networks of health care providers, pay very generous salaries to CEOs and other senior staff who are skilled at operating in the resulting highly fragmented system, and pay out high profits as well (that in normal years account for roughly one-quarter of that 12% margin).

With Medicare so much more efficient, why has it not pushed out the more costly private insurance providers?  The answer is simple:  Congress has legislated that Medicare is not allowed to compete with them.  And that is the second point:  Remove these legislated constraints, and allow Medicare-managed plans to compete with the private insurance companies (at a price set so that it breaks even).  Americans will then be able to choose, and in this way transition to a system where enrollment in Medicare-managed insurance services is available to all.  And over time, such competition can be expected to lead most to enroll in the Medicare-managed options.  They will be cheaper for a given quality, due to Medicare’s greater efficiency.

There will still be a role for private insurance.  For those competing with Medicare straight on, the private insurers that remain will have to be able to provide as good a product at as good a cost.  But also, private insurers will remain to offer insurance services that supplement what a Medicare insurance plan would provide.  Such optional private insurance would cover services (such as dental services) or costs (Medicare covers just 80% after the deductible) that the basic Medicare plan does not cover.  Medicare will then be the primary insurer, and the private insurance the secondary.  And, importantly, note that in this system the individual will still be receiving all the services that they receive under their current health plans.  This addresses the concern of some that a Medicare-like plan would not be as complete or as comprehensive as what they might have now.  With the optional supplemental, their insurance could cover exactly what they have now, or even more.  Medicare would be providing a core level of coverage, and then, for those who so choose, supplemental private plans can bring the coverage to all that they have now.  But the cost will be lower, as they will gain from the low cost of Medicare for those core services.

More specifically, how would this work?

B.  Allow Medicare to Compete in the Market for Individual Health Insurance Plans

A central part of the Obamacare reforms was the creation of a marketplace where individuals, who do not otherwise have access to a health insurance plan (such as through an employer), could choose to purchase an individual health insurance plan.  As originally proposed, and indeed as initially passed by the House of Representatives, a publicly managed health insurance plan would have been made available (at a premium rate that would cover its full costs) in addition to whatever plans were offered by private insurers.  This would have addressed the problem in the Obamacare markets of often excessive complexity (with constantly changing private plans entering or leaving the different markets), as well as limited and sometimes even no competition in certain regions.  A public option would have always been available everywhere.  But to secure the 60 votes needed to pass in the Senate, the public option had to be dropped (at the insistence of Senator Joe Lieberman of Connecticut).

It could, and should, be introduced now.  Such a public option could be managed by Medicare, and could then piggy-back on the management systems and networks of hospitals, doctors, and other health care providers who already work with Medicare.  However, the insurance plan itself would be broader than what Medicare covers for the elderly, and would meet the standards for a comprehensive health care plan as defined under Obamacare.  Medicare for the elderly is, by design, only partial (for example, it covers only 80% of the cost, after a modest deductible), plus it does not cover services such as for pregnancies.  A public option plan administered by Medicare in the Obamacare marketplace would rather provide services as would be covered under the core “silver plan” option in those markets (the option that is the basis for the determination of the subsidies for low-income households).  And one might consider offering as options plans at the “bronze” and “gold” levels as well.

Such a Medicare-managed public option would provide competition in the Obamacare exchanges.  An important difficulty, especially in the Republican states that have not been supportive of offering such health insurance, is that in certain states (or counties within those states) there have been few health insurers competing with each other, and indeed often only one.  The exchanges are organized by state, and even when insurers decide to offer insurance cover within some state, they may decide to offer it only to residents of certain counties within that state.  The private insurers operate with an expensive business model, built typically around organizing networks of doctors with whom they negotiate individual rates for health care services provided.  It is costly to set this up, and not worthwhile unless they have a substantial number of individuals enrolled in their particular plan.

But one should also recognize that there is a strong incentive in the current Obamacare markets for an individual insurer to provide cover in a particular area if no other insurer is there to compete with them.  That is because the federal subsidy to a low-income individual subscribing to an insurance plan depends on the difference between what insurers charge for a silver-level plan (specifically the second lowest cost for such a plan, if there are two or more insurers in the market) and some given percentage of that individual’s household income (with that share phased out for higher incomes).  What that means is that with no other insurer providing competition in some locale, the one that is offering insurance can charge very high rates for their plans and then receive high federal subsidies.  The ones who then lose in this (aside from the federal taxpayer) are households of middle or higher income who would want to purchase private health insurance, but whose income is above the cutoff for eligibility for the federal subsidies.

The result is that the states with the most expensive health insurance plan costs are those that have sought to undermine the Obamacare marketplace (leading to less competition), while the lowest costs are in those states that have encouraged the Obamacare exchanges and thus have multiple insurers competing with each other.  For example, the two states with the most expensive premium rates in 2019 (average for the benchmark silver plans) were Wyoming (average monthly premium for a 40-year-old of $865, before subsidies) and Nebraska (premium of $838).  Each had only one health insurer provider on the exchanges.  At the other end, the five states with the least expensive average premia, all with multiple providers, were Minnesota ($326), Massachusetts ($332), Rhode Island ($336), Indiana ($339), and New Jersey ($352).  These are not generally considered to be low-cost states, but the cost of the insurance plans in Wyoming and Nebraska was roughly two and a half times as high.

The competition of a Medicare-managed public provider would bring down those extremely high insurance costs in the states with limited or no competition.  And at such lower rates, the total being spent by the federal government to support access by individuals to health insurance will come down.  But to achieve this, Congress will have to allow such competition from a public provider, and management through Medicare would be the most efficient way to do this.  One would still have any private providers who wish to compete.  But consumers would then have a choice.

C.  Allow Medicare to Compete in the Market for Employer-Sponsored Health Insurance Cover

While the market for individual health insurance cover is important to extending the availability of affordable health care to those otherwise without insurance cover, employer-sponsored health insurance plans account for a much higher share of the population.  Excluding those with government-sponsored plans via Medicare, Medicaid, and other such public programs, employer-sponsored plans accounted for 76% of the remaining population, individual plans for 11%, and the uninsured for 14%.

These employer-sponsored plans are dominant in the US for historical reasons.  They receive special tax breaks, which began during World War II.  Due to the tax breaks, it is cheaper for the firm to arrange for employee health insurance through the firm (even though it is in the end paid for by the employee, as part of their total compensation package), than to pay the employee an overall wage with the employee then purchasing the health insurance on his or her own.  The employer can deduct it as a business expense.  But this has led to the highly fragmented system of health insurance cover in the US, with each employer negotiating with private insurers for what will be provided through their firm, with resulting high costs for such insurance.

As many have noted, no one would design such a health care funding system from scratch.  But it is what the US has now, and there is justifiable concern over whether some individuals might encounter significant disruptions when switching over to a more rational system, whether Medicare-for-All or anything else.  It is a concern which needs to be respected, as we need health care treatment when we need it, and one does not want to be locked out of access, even if temporarily, during some transition.  How can this risk be avoided?

One could manage this by avoiding any compulsory switch in insurance plans, and instead providing, as an option, insurance through a Medicare-managed plan.  That is, a Medicare-managed insurance plan, similar in what is covered to current Medicare, would be allowed to compete with current insurance providers, and employers would have the option to switch to that Medicare plan, either immediately or at some later point, as they wish, to manage health insurance for their employees.

Furthermore, this Medicare-managed insurance could serve as a core insurance plan, to be supplemented by a private insurance plan which could cover costs and health care services that Medicare does not cover (such as dental and vision).  These could be similar to Medicare Supplement plans (often called a Medigap plan), or indeed any private insurance plan that provides additional coverage to what Medicare provides.  Medicare is then the primary insurer, while the private supplemental plan is secondary and covers whatever costs (up to whatever that supplemental plan covers) that are not paid for under the core Medicare plan.

In this way, an individual’s effective coverage could be exactly the same as what they receive now under their current employer-sponsored plan.  Employers would still sponsor these supplemental plans, as an addition to the core Medicare-managed plan that they would also choose (and pay for, like any other insurance plan).  But the cost of the Medicare-managed plus private supplemental plans would typically be less than the cost of the purely private plans, due to the far greater efficiency of Medicare.  And with this supplemental coverage, one would address the concern of many that what they now receive through their employer-sponsored plan is a level of benefits that are greater than what Medicare itself covers.  They don’t want to lose that.  But with such supplemental plans, one could bring what is covered up to exactly what they are covering now.

This is not uncommon.  Personally, I am enrolled in Medicare, while I have (through my former employer) additional cover by a secondary private insurer.  And I pay monthly premia to Medicare and, through my former employer, to the private insurer for this coverage (with those premia supplemented by my former employer, as part of my retirement package).  With the supplemental coverage, I have exactly the same health care services and share of costs covered as what I had before I became eligible for Medicare.  But the cost to me (and my former employer) is less.  One should recognize that for retirees this is in part due to Medicare for the elderly receiving general fiscal subsidies through the government budget.  But the far greater efficiency of Medicare that allows it to keep its administrative costs low (at just 2.4% of what it spends, with the rest going to health care service providers, as compared to a 12% cost share for private insurance) would lead to lower costs for Medicare than for private providers even without such fiscal support.

Such supplemental coverage is also common internationally.  Canada and France, for example, both have widely admired single-payer health insurance systems (what Medicare-for-All would be), and in both one can purchase supplemental coverage from private insurers for costs and services that are not covered under the core, government managed, single-payer plans.

Under this proposed scheme for the US, the decision by a company of whether to purchase cover from Medicare need not be compulsory.  The company could, if it wished, choose to remain with its current private insurer.  But what would be necessary would be for Congress to remove the restriction that prohibits Medicare from competing with private insurance providers.  Medicare would then be allowed to offer such plans at a price which covers its costs.  Companies could then, if they so chose, purchase such core cover from Medicare and, additionally, supplement such insurance with a private secondary plan.  One would expect that given the high cost of medical services everywhere (but especially in the US) they will take a close look at the comparative costs and value provided, and choose the plan (or set of plans) which is most advantageous to them.

Over time, one would expect a shift towards the Medicare-managed plans, given Medicare’s greater efficiency.  And private plans, in order to be competitive for the core (primary) insurance available from Medicare, would be forced to improve their own efficiency, or face a smaller and smaller market share.  If they can compete, that is fine.  But given their track record up to now, one would expect that they will leave that market largely to Medicare, and focus instead on providing supplemental coverage for the firms to select from.

D.  Avoiding Cherry-Picking by the Private Insurers

An issue to consider, but which can be addressed, is whether in such a system the private insurers will be able to “cherry-pick” the more lucrative, lower risk, population, leaving those with higher health care costs to the Medicare-managed options.  The result would be higher expenses for the public options, which would require them either to raise their rates (if they price to break even) or require a fiscal subsidy from the general government budget.  And if the public options were forced to raise their rates, there would no longer be a level playing field in the market, effective competition would be undermined, and lower-efficiency private insurers could then remain in the market, raising our overall health system costs.

This is an issue that needs to be addressed in any insurance system, and was addressed for the Obamacare exchanges as originally structured.  While the Trump administration has sought to undermine these, they do provide a guide to what is needed.

Specifically, all insurers on the Obamacare exchanges are required to take on anyone in the geographic region who chooses to enroll in their particular plan, even if they have pre-existing conditions.  This is the key requirement which keeps private insurers from cherry-picking lower-cost enrollees, and excluding those who will likely have higher costs.  However, this then needs to be complemented with: 1) the individual mandate; 2) minimum standards on what constitutes an adequate health insurance plan; and 3) what is in essence a reinsurance system across insurers to compensate those who ended up with high-cost enrollees, by payments from those insurers with what turned out to be a lower cost pool (the “risk corridor” program).  These were all in the original Obamacare system, but: 1) the individual mandate was dropped in the December 2017 Republican tax cut (after the Trump administration said they would no longer enforce it anyway);  2) the Trump administration has weakened the minimum standards; and 3) Senator Marco Rubio was able in late 2015 to insert a provision in a must-pass budget bill which blocked any federal spending to even out payments in the risk corridor program.

Without these measures, it will be impossible to sustain the requirement that insurers provide access to everyone, at a price which reflects the health care risks of the population as a whole. With no individual mandate, those who are currently healthy could choose to free-ride on the system, and enroll in one of the health care plans only when they might literally be on the way to the hospital, or, in a less extreme example, only aim to enroll at the point when they know they will soon have high medical expenses (such as when they decide to have a baby, or to have some non-urgent but expensive medical procedure done).  The need for good minimum standards for health care plans is related to this.  Those who are relatively healthy might decide to enroll in an insurance plan that covers little, but, when diagnosed with say a cancer or some other such medical condition, then and only then enroll in a medical insurance plan that provides good cover for such treatments.  The good medical insurance plans would either soon go bankrupt, or be forced also to reduce what they cover in a race to the bottom.

Finally, risk sharing across insurers is in fact common (it is called reinsurance), and was especially important in the new Obamacare markets as the mix of those who would enroll in the plans, especially in the early years, could not be known.  Thus, as part of Obamacare, a system of “risk corridors” was introduced where insurers who ended up with an expensive mix of enrollees (those with severe medical conditions to treat) would be compensated by those with an unexpectedly low-cost mix of enrollees, with the federal government in the middle to smooth out the payments over time.  The Congressional Budget Office estimated in 2014 that while the payment flows would be substantial ($186 billion over ten years), the inflows would match the outflows, leaving no net budgetary cost.  However, Senator Rubio’s amendment effectively blocked this, as he (incorrectly) characterized the risk corridor program as a “bailout” fund for the insurers.  The effect of Rubio’s amendment was to lead smaller insurers and newly established health care co-ops to exit the market (as they did not have the financial resources to wait for inflows and outflows to even out), reducing competition by leaving only a limited number of large, deep-pocketed insurers who could survive such a wait and who then, with the more limited competition, jacked up the insurance premium rates.  The result, as we will discuss immediately below, was to increase, not decrease, federal budgetary costs, while pricing out of the markets those with incomes too high to receive the federal subsidies.

Despite these efforts to kill Obamacare and block the extension of health insurance coverage to those Americans who have not had it, another provision in the Obamacare structure has allowed it to survive, at least so far and albeit in a more restrictive (but higher cost) form.  And that is due to the way the federal subsidies are provided to lower-income households in order to make it possible for them to purchase health insurance at a price they can afford.  As discussed above, these federal subsidies cover the difference between some percentage of a household’s income (with that percentage depending on their income) and the cost of a benchmark silver-level plan in their region.

More specifically, those with incomes up to 400% of the federal poverty line (400% would be $49,960 for an individual in 2019, or $103,000 for a family of four) are eligible to receive a federal subsidy to purchase a qualifying health insurance plan.  The subsidy is equal to the difference between the cost of the benchmark silver-level plan and a percentage of their income, on a sliding scale that starts at 2.08% of income for those earning 133% of the poverty line, and goes up to 9.86% for those earning 400%.  The mathematical result of this is that if the cost of the benchmark health insurance plan goes up by $1, they will receive an extra $1 of subsidy (as their income, and hence their contribution, is still the same).
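
A rough sketch of that arithmetic, using the 2019 figures cited above (a poverty line of $12,490 for an individual, and applicable percentages running from 2.08% of income at 133% of the poverty line up to 9.86% at 400%), is shown below.  The straight-line interpolation between those two endpoints is a simplification of the actual bracketed schedule, and the premium figure in the example is invented, but the dollar-for-dollar link between the benchmark premium and the subsidy comes through the same way.

```python
# A simplified sketch of the ACA premium subsidy arithmetic for a single
# individual, using the 2019 parameters cited in the text.  The linear
# interpolation of the applicable percentage is an approximation.

POVERTY_LINE_2019 = 12_490  # federal poverty line for an individual

def applicable_pct(income):
    """Approximate share of income the household is expected to contribute."""
    fpl_ratio = income / POVERTY_LINE_2019
    if fpl_ratio > 4.0:
        return None  # above 400% of the poverty line: no subsidy
    fpl_ratio = max(fpl_ratio, 1.33)
    # Interpolate between 2.08% at 133% of poverty and 9.86% at 400% (simplified).
    return 0.0208 + (0.0986 - 0.0208) * (fpl_ratio - 1.33) / (4.0 - 1.33)

def annual_subsidy(income, benchmark_annual_premium):
    pct = applicable_pct(income)
    if pct is None:
        return 0.0
    return max(0.0, benchmark_annual_premium - pct * income)

# An individual earning $30,000 facing a hypothetical $10,000/year benchmark
# silver premium:
print(round(annual_subsidy(30_000, 10_000)))   # about $8,439
# If the benchmark premium rises by $1, the subsidy rises by $1, since the
# expected contribution (a share of income) is unchanged:
print(annual_subsidy(30_000, 10_001) - annual_subsidy(30_000, 10_000))  # 1.0
```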

The result is that measures such as the blocking of the risk corridor program by Senator Rubio’s amendment, or the Trump administration’s decision not to enforce (and then to remove altogether) the individual mandate, or the weakening of the standards for what has to be covered in a qualifying health insurance plan, have all had the effect of forcing the insurance companies to raise their premium rates sharply.  While those with incomes up to 400% of the poverty line were not affected by this (they pay the same share of their income), those with incomes higher than the 400% limit have been effectively priced out of these markets.  Only those (whose incomes are above that 400%) with some expensive medical condition might remain, but this then further biases the risk pool to those with high medical expenses.  Finally and importantly, these measures to undermine the markets have led to higher, not lower, federal budgetary costs, as the federal subsidies go up dollar for dollar with the higher premium rates.

So we know how to structure the markets to ensure there will be no cherry-picking of low risk, low cost, enrollees, leaving the high-cost patients for the Medicare-managed option.  But it needs to be done.  The requirement that all the insurance plans accept any enrollee will stop this.  This then needs to be complemented with the individual mandate, minimum standards for the health insurance plans, and some form of risk corridors (reinsurance) program.  The issue is not that this is impossible to do, but rather that the Trump administration (and Republicans in Congress) have sought to undermine it.

This discussion has been couched in terms of the market for individual insurance plans, but the same principles apply in the market for employer-sponsored health insurance.  While not as much discussed, the Affordable Care Act also included an employer mandate (phased in over time), with penalties for firms with 50 employees or more who do not offer a health insurance plan meeting minimum standards to their employees.  There were also tax credits provided to smaller firms who offer such insurance plans.

But the cherry-picking concern is less of an issue for such employer-based coverage than it is for coverage of individuals.  This is because there will be a reasonable degree of risk diversification across individuals (the mix of those with more expensive medical needs and those with less) even with just 100 employees or so.  And smaller firms can often subscribe together with others in the industry to a plan that covers them as a group, thus providing a reasonable degree of diversification.  With the insurance covering everyone in the firm (or group of firms), there will be less of a possibility of trying to cherry-pick among them.

The possibility of cherry-picking is therefore something that needs to be considered when designing some insurance system.  If not addressed, it could lead to a loading of the more costly enrollees onto a public option, thus increasing its costs and requiring either higher premia to subscribe to it or government budget support.  But we know how to address the issue.  The primary tool, which we should want in any case, is to require health insurers to be open to any enrollees, and not block those with pre-existing conditions.  But this then needs to be balanced with the individual mandate, minimum standards for what qualifies as a genuine health insurance plan, and means to reinsure exceptional risks across insurers.  The Obamacare reforms had these, and one cannot say that we do not know how to address the issue.

E.  Conclusion

These proposals are not radical.  And while there has been much discussion of allowing a public option to provide competition for insurance plans in the Obamacare markets, I have not seen much discussion of allowing a Medicare-managed option in the market for employer-sponsored health insurance plans.  Yet the latter market is far larger than the market for private, individual, plans, and a key part of the proposal is to allow such competition here as well.

Allowing such options would enable a smooth transition to Medicare-managed health insurance that would be available to all Americans.  And over time one would expect many if not most to choose such Medicare-managed options.  Medicare has demonstrated that it is managed with far greater efficiency than private health insurers, and thus it can offer better plans at lower cost than private insurers currently do.  If the private insurers are then able to improve their competitiveness by reducing their costs to what Medicare has been able to achieve, then they may remain.  But I expect that most of them will choose to compete in the markets for supplemental coverage, offering plans that complement the core Medicare-managed plan and which would offer a range of options from which employers can choose for their employer-sponsored health insurance cover.

Conservatives may question, and indeed likely will question, whether government-managed anything can be as efficient, much less more efficient, than privately provided services.  While the facts are clear (Medicare does exist, we have the data on what it costs, and we have the data on what private health insurance costs), some will still not accept this.  However, with such a belief, conservatives should not then be opposed to allowing Medicare-managed health insurance options to compete with the private insurers.  If what they believe is true, the publicly-managed options would be too expensive for an inferior product, and few would enroll in it.

But I suspect that the private insurers realize they would not be able to compete with the Medicare-managed insurance options unless they were able to bring their costs down to a comparable level.  And they do not want to do this as they (and their senior staff) benefit enormously from the current fragmented, high cost, system.  That is, there are important vested interests who will be opposed to opening up the system to competition from Medicare-managed options.  It should be no surprise that they, and the politicians they contribute generously to, will be opposed.