Andrew Yang’s Proposed $1,000 per Month Grant: Issues Raised in the Democratic Debate

A.  Introduction

This is the second in a series of posts on this blog addressing issues that have come up during the campaign of the candidates for the Democratic nomination for president, and which specifically came up in the October 15 Democratic debate.  As flagged in the previous blog post, one can find a transcript of the debate at the Washington Post website, and a video of the debate at the CNN website.

This post will address Andrew Yang’s proposal of a $1,000 per month grant for every adult American (which I will mostly refer to here as a $12,000 grant per year).  This policy is called a universal basic income (or UBI), and has been explored in a few other countries as well.  It has received increased attention in recent years, in part due to the sharp growth in income inequality in the US in recent decades, which began around 1980.  If properly designed, such a $12,000 grant per adult per year could mark a substantial redistribution of income.  But the degree of redistribution depends directly on how the funding would be raised.  As we will discuss below, Yang’s specific proposals for that are problematic.  There are also other issues with such a program which, even if it were well designed, call into question whether it would be the best approach to addressing inequality.  All this will be discussed below.

First, however, it is useful to address two misconceptions that appear to be widespread.  One is that many appear to believe that the $12,000 per adult per year would not need to come from somewhere.  That is, everyone would receive it, but no one would have to provide the funds to pay for it.  That is not possible.  The economy produces only so much, whatever is produced accrues as income to someone, and if one is to transfer some amount ($12,000 here) to each adult, then the amounts so transferred will need to come from somewhere.  That is, this is a redistribution.  There is nothing wrong with a redistribution, if well designed, but it is not a magical creation of something out of nothing.

The other misconception, and asserted by Yang as the primary rationale for such a $12,000 per year grant, is that a “Fourth Industrial Revolution” is now underway which will lead to widespread structural unemployment due to automation.  This issue was addressed in the previous post on this blog, where I noted that the forecast job losses due to automation in the coming years are not out of line with what has been the norm in the US for at least the last 150 years.  There has always been job disruption and turnover, and while assistance should certainly be provided to workers whose jobs will be affected, what is expected in the years going forward is similar to what we have had in the past.

It is also just as well that workers would not be expected to rely on a $12,000 per year grant to make up for a lost job.  Median earnings of a full-time worker were an estimated $50,653 in 2018, according to the Census Bureau.  A grant of $12,000 would not go far in making up for this.

So the issue is one of redistribution, and to be fair to Yang, I should note that he posts on his campaign website a fair amount of detail on how the program would be paid for.  I make use of that information below.  But the numbers do not really add up, and for a candidate who champions math (something I admire), this is disappointing.

B.  Yang’s Proposal of a $1,000 Monthly Grant to All Americans

First of all, the overall cost.  This is easy to calculate, although not much discussed.  The $12,000 per year grant would go to every adult American, whom Yang defines as all those over the age of 18.  There were very close to 250 million Americans over the age of 18 in 2018, so at $12,000 per adult the cost would be $3.0 trillion.

This is far from a small amount.  With GDP of approximately $20 trillion in 2018 ($20.58 trillion to be more precise), such a program would come to 15% of GDP.  That is huge.  Total taxes and revenues received by the federal government (including all income taxes, all taxes for Social Security and Medicare, and everything else) only came to $3.3 trillion in FY2018.  This is only 10% more than the $3.0 trillion that would have been required for Yang’s $12,000 per adult grants.  Or put another way, taxes and other government revenues would need almost to be doubled (raised by 91%) to cover the cost of the program.  As another comparison, the cost of the tax cuts that Trump and the Republican leadership rushed through Congress in December 2017 was forecast to be an estimated $150 billion per year.  That was a big revenue loss.  But the Yang proposal would cost 20 times as much.
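The headline figures in this paragraph can be checked in a few lines.  The following is a sketch, using the approximate 2018 values cited above as inputs:

```python
# Back-of-the-envelope check of the cost of the $12,000 per adult grant.
# All inputs are the approximate 2018 figures cited in the text.

adults = 250e6            # Americans over age 18 (approx., 2018)
grant = 12_000            # $ per adult per year
gdp = 20.58e12            # US GDP, 2018
fed_revenue = 3.3e12      # total federal revenues, FY2018

cost = adults * grant
print(f"Annual cost: ${cost / 1e12:.1f} trillion")            # $3.0 trillion
print(f"Share of GDP: {cost / gdp:.0%}")                      # 15%
print(f"Required rise in revenues: {cost / fed_revenue:.0%}") # 91%
```

The 91% figure is simply the program's cost as a share of all federal revenues now collected, which is why funding it would require nearly doubling federal taxes and other revenues.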

With such amounts to be raised, Yang proposes on his campaign website a number of taxes and other measures to fund the program.  One is a value-added tax (VAT), and from his very brief statements during the debates but also in interviews with the media, one gets the impression that all of the program would be funded by a value-added tax.  But that is not the case.  He in fact says on his campaign website that the VAT, at the rate and coverage he would set, would raise only about $800 billion.  This would come only to a bit over a quarter (27%) of the $3.0 trillion needed.  There is a need for much more besides, and to his credit, he presents plans for most (although not all) of this.

So what does he propose specifically?:

a) A New Value-Added Tax:

First, and as much noted, he is proposing that the US institute a VAT at a rate of 10%.  He estimates it would raise approximately $800 billion a year, and for the parameters for the tax that he sets, that is a reasonable estimate.  A VAT is common in most of the rest of the world as it is a tax that is relatively easy to collect, with internal checks that make underreporting difficult.  It is in essence a tax on consumption, similar to a sales tax but levied only on the added value at each stage in the production chain.  Yang notes that a 10% rate would be approximately half of the rates found in Europe (which is more or less correct – the rates in Europe in fact vary by country and are between 17 and 27% in the EU countries, but the rates for most of the larger economies are in the 19 to 22% range).
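To illustrate how a VAT is levied only on the value added at each stage of the production chain, yet ends up equivalent to a tax on final consumption, consider a toy example.  The three-stage chain and its prices are invented purely for illustration; the 10% rate is the one Yang proposes:

```python
# Hypothetical three-stage chain (e.g. farmer -> miller -> baker) showing
# that a VAT on the value added at each stage sums to the same total as a
# sales tax on the final price. Prices are invented for illustration.

VAT_RATE = 0.10
sale_prices = [40, 70, 100]   # pre-tax sale price at each stage

vat_collected = 0.0
prev_price = 0.0
for price in sale_prices:
    value_added = price - prev_price   # what this stage adds
    vat_collected += VAT_RATE * value_added
    prev_price = price

print(f"{vat_collected:.2f}")                 # 10.00
print(f"{VAT_RATE * sale_prices[-1]:.2f}")    # 10.00 -- same as a 10% sales tax
```

This stage-by-stage structure is what creates the internal checks noted above: each firm reports its purchases (to claim credit for VAT already paid) as well as its sales, making underreporting by any one firm difficult.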

A VAT is a tax on what households consume, and for that reason a regressive tax.  The poor and middle classes, who have to spend all or most of their current incomes to meet their family needs, will pay a higher share of their incomes under such a tax than higher-income households will.  For this reason, VAT systems as implemented will often exempt (or tax at a reduced rate) certain basic goods such as foodstuffs and other necessities, as such goods account for a particularly high share of the expenditures of the poor and middle classes.  Yang is proposing this as well.  But even with such exemptions (or lower VAT rates), a VAT is still normally regressive, just less so.

Furthermore, households will in the end be paying the tax, as prices will rise to reflect the new tax.  Yang asserts that some of the cost of the VAT will be shifted to businesses, who would not be able, he says, to pass along the full cost of the tax.  But this is not correct.  In the case where the VAT applies equally to all goods, the full 10% will be passed along as all goods are affected equally by the now higher cost, and relative prices will not change.  To the extent that certain goods (such as foodstuffs and other necessities) are exempted, there could be some shift in demand to such goods, but the degree will depend on the extent to which they are substitutable for the goods which are taxed.  If they really are necessities, such substitution is likely to be limited.

A VAT as Yang proposes thus would raise a substantial amount of revenues, and the $800 billion figure is a reasonable estimate.  This total would be on the order of half of all that is now raised by individual income taxes in the US (which was $1,684 billion in FY2018).  But one cannot avoid that such a tax is paid by households, who will face higher prices on what they purchase, and the tax will almost certainly be regressive, impacting the poor and middle classes the most (with the extent dependent on how many and which goods are designated as subject to a reduced VAT rate, or no VAT at all).  But whether regressive or not, everyone will be affected and hence no one will actually see a net increase of $12,000 in purchasing power from the proposed grant.  Rather, it will be something less.

b)  A Requirement to Choose Either the $12,000 Grants, or Participation in Existing Government Social Programs

Second, Yang’s proposal would require that households who currently benefit from government social programs, such as for welfare or food stamps, would be required to give up those benefits if they choose to receive the $12,000 per adult per year.  He says this will lead to reduced government spending on such social programs of $500 to $600 billion a year.

There are two big problems with this.  The first is that those programs are not that large.  While it is not fully clear how expansive Yang’s list is of the programs which would then be denied to recipients of the $12,000 grants, even if one included all those included in what the Congressional Budget Office defines as “Income Security” (“unemployment compensation, Supplemental Security Income, the refundable portion of the earned income and child tax credits, the Supplemental Nutrition Assistance Program [food stamps], family support, child nutrition, and foster care”), the total spent in FY2018 was only $285 billion.  You cannot save $500 to $600 billion if you are only spending $285 billion.

Second, such a policy would be regressive in the extreme.  Poor and near-poor households, and only such households, would be forced to choose whether to continue to receive benefits under such existing programs, or receive the $12,000 per adult grant per year.  If they are now receiving $12,000 or more in such programs per adult household member, they would receive no benefit at all from what is being called a “universal” basic income grant.  To the extent they are now receiving less than $12,000 from such programs (per adult), they may gain some benefit, but less than $12,000 worth.  For example, if they are now receiving $10,000 in benefits (per adult) from current programs, their net gain would be just $2,000 (setting aside for the moment the higher prices they would also now need to pay due to the 10% VAT).  Furthermore, only the poor and near-poor who are being supported by such government programs will see such an effective reduction in their $12,000 grants.  The rich and others, who benefit from other government programs, will not see such a cut in the programs or tax subsidies that benefit them.
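The arithmetic of the choice facing a poor or near-poor household can be sketched in one line.  The function and its name are mine, for illustration only; it ignores the higher prices such households would also face from the VAT:

```python
# Net annual gain per adult for a household now receiving benefits from
# existing programs, if it switches to the $12,000 grant (ignoring the VAT).

def net_gain(current_benefits_per_adult: float, grant: float = 12_000) -> float:
    """Gain from switching; zero means no benefit at all from the grant."""
    return grant - current_benefits_per_adult

print(net_gain(10_000))  # 2000 -- the example in the text
print(net_gain(12_000))  # 0    -- the poorest recipients gain nothing
```

Note the perverse gradient: the more a household now receives from existing programs (presumably the poorest households), the less it gains from the supposedly universal grant.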

c)  Savings in Other Government Programs 

Third, Yang argues that with his universal basic income grant, there would be a reduction in government spending of $100 to $200 billion a year from lower expenditures on “health care, incarceration, homelessness services and the like”, as “people would be able to take better care of themselves”.  This is clearly more speculative.  There might be some such benefits, and hopefully would be, but without experience to draw on it is impossible to say how important this would be and whether any such savings would add up to such a figure.  Furthermore, much of those savings, were they to follow, would accrue not to the federal government but rather to state and local governments.  It is at the state and local level where most expenditures on incarceration and homelessness, and to a lesser degree on health care, take place.  They would not accrue to the federal budget.

d)  Increased Tax Revenues From a Larger Economy

Fourth, Yang states that with the $12,000 grants the economy would grow larger – by 12.5% he says (or $2.5 trillion in increased GDP).  He cites a 2017 study produced by scholars at the Roosevelt Institute, a left-leaning non-profit think tank based in New York, which examined the impact on the overall economy, under several scenarios, of precisely such a $12,000 annual grant per adult.

There are, however, several problems:

i)  First, under the specific scenario that is closest to the Yang proposal (where the grants would be funded through a combination of taxes and other actions), the impact on the overall economy forecast in the Roosevelt Institute study would be either zero (when net distribution effects are neutral), or small (up to 2.6%, if funded through a highly progressive set of taxes).

ii)  The reason for this result is that the model used by the Roosevelt Institute researchers assumes that the economy is far from full employment, and that economic output is then entirely driven by aggregate demand.  Thus with a new program such as the $12,000 grants, which is fully paid for by taxes or other measures, there is no impact on aggregate demand (and hence no impact on economic output) when net distributional effects are assumed to be neutral.  If funded in a way that is not distributionally neutral, such as through the use of highly progressive taxes, then there can be some effect, but it would be small.

In the Roosevelt Institute model, there is only a substantial expansion of the economy (of about 12.5%) in a scenario where the new $12,000 grants are not funded at all, but rather purely and entirely added to the fiscal deficit and then borrowed.  And with the current fiscal deficit now about 5% of GDP under Trump (unprecedented even at 5% in a time of full employment, other than during World War II), and the $12,000 grants coming to $3.0 trillion or 15% of GDP, this would bring the overall deficit to 20% of GDP!

Few economists would accept that such a scenario is anywhere close to plausible.  First of all, the current unemployment rate of 3.5% is at a 50 year low.  The economy is at full employment.  The Roosevelt Institute researchers are asserting that this is fictitious, and that the economy could expand by a substantial amount (12.5% in their scenario) if the government simply spent more and did not raise taxes to cover any share of the cost.  They also assume that a fiscal deficit of 20% of GDP would not have any consequences, such as on interest rates.  Note also that an implication of their approach is that the government spending could be on anything, including, for example, the military.  They are using a purely demand-led model.

iii)  Finally, even if one assumes the economy will grow to be 12.5% larger as a result of the grants, even the Roosevelt Institute researchers do not assume it will be instantaneous.  Rather, in their model the economy becomes 12.5% larger only after eight years.  Yang is implicitly assuming it will be immediate.

There are therefore several problems in the interpretation and use of the Roosevelt Institute study.  Their scenario for 12.5% growth is not the one that follows from Yang’s proposals (which is funded, at least to a degree), nor would GDP jump immediately by such an amount.  And the Roosevelt Institute model of the economy is one that few economists would accept as applicable in the current state of the economy, with its 3.5% unemployment.

But there is also a further problem.  Even assuming GDP rises instantly by 12.5%, leading to an increase in GDP of $2.5 trillion (from a current $20 trillion), Yang then asserts that this higher GDP will generate between $800 and $900 billion in increased federal tax revenue.  That would imply federal taxes of 32 to 36% on the extra output.  But that is implausible.  Total federal tax (and all other) revenues are only 17.5% of GDP.  While in a progressive tax system the marginal tax revenues received on an increase in income will be higher than at the average tax rate, the US system is no longer very progressive.  And rates are nowhere near twice as high at the margin (32 to 36%) as they are at the average (17.5%).  A more plausible estimate of the increased federal tax revenues from an economy that somehow became 12.5% larger would not be the $800 to $900 billion Yang calculates, but rather about half that.
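The implied marginal tax rate behind this revenue claim can be checked directly.  The following is a sketch using the figures cited above:

```python
# Implied marginal federal tax rate behind the claimed revenue gain,
# versus the actual average rate. Inputs are the figures cited in the text.

extra_gdp = 2.5e12                 # claimed 12.5% rise on a $20 trillion GDP
claimed_revenue = (800e9, 900e9)   # claimed extra federal revenue

implied = ", ".join(f"{r / extra_gdp:.0%}" for r in claimed_revenue)
print(f"Implied marginal rates: {implied}")        # 32%, 36%

average_rate = 0.175               # federal revenues as a share of GDP
plausible = average_rate * extra_gdp
print(f"At the average rate: ${plausible / 1e9:.0f} billion")  # $438 billion
```

The $438 billion figure at the average rate is roughly half of Yang's $800 to $900 billion, which is the basis for the "about half that" estimate above.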

Might such a universal basic income grant affect the size of the economy through other, more orthodox, channels?  That is certainly possible, although whether it would lead to a higher or to a lower GDP is not clear.  Yang argues that it would lead recipients to manage their health better, to stay in school longer, to less criminality, and to other such social benefits.  Evidence on this is highly limited, but it is in principle conceivable in a program that does properly redistribute income towards those with lower incomes (where, as discussed above, Yang’s specific program has problems).  Over fairly long periods of time (generations really) this could lead to a larger and stronger economy.

But one will also likely see effects working in the other direction.  There might be an increase in spouses (wives usually) who choose to stay home longer to raise their children, or an increase in those who decide to retire earlier than they would have before, or an increase in the average time between jobs by those who lose or quit from one job before they take another, and other such impacts.  Such impacts are not negative in themselves, if they reflect choices voluntarily made and now possible due to a $12,000 annual grant.  But they all would have the effect of reducing GDP, and hence the tax revenues that follow from some level of GDP.

There might therefore be both positive and negative impacts on GDP.  However, the impact of each is likely to be small, will mostly only develop over time, and will to some extent cancel each other out.  What is likely is that there will be little measurable change in GDP in whichever direction.

e)  Other Taxes

Fifth, Yang would institute other taxes to raise further amounts.  He does not specify precisely how much would be raised or what these would be, but provides a possible list and says they would focus on top earners and on pollution.  The list includes a financial transactions tax, ending the favorable tax treatment now given to capital gains and carried interest, removing the ceiling on wages subject to the Social Security tax, and a tax on carbon emissions (with a portion of such a tax allocated to the $12,000 grants).

What would be raised by such new or increased taxes would depend on precisely what the rates would be and what they would cover.  But the total that would be required, under the assumption that the amounts that would be raised (or saved, when existing government programs are cut) from all the measures listed above are as Yang assumes, would then be between $500 and $800 billion (as the revenues or savings from the programs listed above sum to $2.2 to $2.5 trillion).  That is, one might need from these “other taxes” as much as would be raised by the proposed new VAT.

But as noted in the discussion above, the amounts that would be raised by those measures are often likely to be well short of what Yang says will be the case.  One cannot save $500 to $600 billion in government programs for the poor and near-poor if government is spending only $285 billion on such programs, for example.  A more plausible figure for what might be raised by those proposals would be on the order of $1 trillion, mostly from the VAT, and not the $2.2 to $2.5 trillion Yang says will be the case.

C.  An Assessment

Yang provides a fair amount of detail on how he would implement a universal basic income grant of $12,000 per adult per year, and for a political campaign it is an admirable amount of detail.  But there are still, as discussed above, numerous gaps that prevent anything like a complete assessment of the program.  Even so, a number of points are evident.

To start, the figures provided are not always plausible.  The math just does not add up, and for someone who extols the need for good math (and rightly so), this is disappointing.  One cannot save $500 to $600 billion in programs for the poor and near-poor when only $285 billion is being spent now.  One cannot assume that the economy will jump immediately by 12.5% (which even the Roosevelt Institute model forecasts would only happen in eight years, and under a scenario that is the opposite of that of the Yang program, and in a model that few economists would take as credible in any case).  Even if the economy did jump by so much immediately, one would not see an increase of $800 to $900 billion in federal tax revenues from this but rather more like half that.  And there are other such issues.

But while the proposal is still not fully spelled out (in particular on which other taxes would be imposed to fill out the program), we can draw a few conclusions.  One is that the one group in society who will clearly not gain from the $12,000 grants is the poor and near-poor, who currently make use of food stamp and other such programs and decide to stay with those programs.  They would then not be eligible for the $12,000 grants.  And keep in mind that $12,000 per adult grants are not much, if you have nothing else.  One would still be below the federal poverty line if single (where the poverty line in 2019 is $12,490) or in a household with two adults and two or more children (where the poverty line, with two children, is $25,750).  On top of this, such households (like all households) will pay higher prices for at least some of what they purchase due to the new VAT.  So such households will clearly lose.

Furthermore, those poor or near-poor households who do decide to switch, thus giving up their eligibility for food stamps and other such programs, will see a net gain that is substantially less than $12,000 per adult.  The extent will depend on how much they receive now from those social programs.  Those who receive the most (up to $12,000 per adult), who are presumably also most likely to be the poorest among them, will lose the most.  This is not a structure that makes sense for a program that is purportedly designed to be of most benefit to the poorest.

For middle and higher-income households the net gain (or loss) from the program will depend on the full set of taxes that would be needed to fund the program.  One cannot say who will gain and who will lose until the structure of that full set of taxes is made clear.  This is of course not surprising, as one needs to keep in mind that this is a program of redistribution:  Funds will be raised (by taxes) that disproportionately affect certain groups, to be distributed then in the $12,000 grants.  Some will gain and some will lose, but overall the balance has to be zero.

One can also conclude that such a program, providing for a universal basic income with grants of $12,000 per adult, will necessarily be hugely expensive.  It would cost $3 trillion a year, which is 15% of GDP.  Funding it would require raising all federal tax and other revenue by 91% (excluding any offset by cuts in government social programs, which are however unlikely to amount to anything close to what Yang assumes).  Raising funds of such magnitude is completely unrealistic.  And yet despite such costs, the grants provided of $12,000 per adult would be poverty level incomes for those who do not have a job or other source of support.

One could address this by scaling back the grant, from $12,000 to something substantially less, but then it becomes less meaningful to an individual.  The fundamental problem is the design as a universal grant, to all adults.  While this might be thought to be politically attractive, any such program then ends up being hugely expensive.

The alternative is to design a program that is specifically targeted to those who need such support.  Rather than attempting to hide the distributional consequences in a program that claims to be universal (but where certain groups will gain and certain groups will lose, once one takes fully into account how it will be funded), make explicit the redistribution that is being sought.  With this clear, one can then design a focused program that addresses that redistribution aim.

Finally, one should recognize that there are other policies as well that might achieve those aims that may not require explicit government-intermediated redistribution.  For example, Senator Cory Booker in the October 15 debate noted that a $15 per hour minimum wage would provide more to those now at the minimum wage than a $12,000 annual grant.  This remark was not much noted, but what Senator Booker said was true.  The federal minimum wage is currently $7.25 per hour.  This is low – indeed, it is less (in real terms) than what it was when Harry Truman was president.  If the minimum wage were raised to $15 per hour, a worker now at the $7.25 rate would see an increase in income of $15.00 – $7.25 = $7.75 per hour, and over a year of 40 hour weeks would see an increase in income of $7.75 x 40 x 52 = $16,120.00.  This is well more than a $12,000 annual grant would provide.
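Senator Booker's comparison is simple arithmetic, which can be verified directly:

```python
# Annual gain for a full-time minimum wage worker if the federal minimum
# rose from $7.25 to $15.00 per hour, compared with the $12,000 grant.

old_wage, new_wage = 7.25, 15.00
hours_per_week, weeks_per_year = 40, 52

annual_gain = (new_wage - old_wage) * hours_per_week * weeks_per_year
print(f"${annual_gain:,.2f}")   # $16,120.00 -- well above the $12,000 grant
```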

Republican politicians have argued that raising the minimum wage by such a magnitude will lead to widespread unemployment.  But there is no evidence that changes in the minimum wage that we have periodically had in the past (whether federal or state level minimum wages) have had such an adverse effect.  There is of course certainly some limit to how much it can be raised, but one should recognize that the minimum wage would now be over $24 per hour if it had been allowed to grow at the same pace as labor productivity since the late 1960s.

Income inequality is a real problem in the US, and needs to be addressed.  But there are problems with Yang’s specific version of a universal basic income.  While one may be able to fix at least some of those problems and come up with something more reasonable, it would still be massively disruptive given the amounts to be raised.  And politically impossible.  A focus on more targeted programs, as well as on issues such as the minimum wage, is likely to prove far more productive.

Allow the IRS to Fill In Our Tax Forms For Us – It Can and It Should

A.  Introduction

Having recently completed and filed this year’s income tax forms, it is timely to examine what impact the Republican tax bill, pushed quickly through Congress in December 2017 along largely party-line votes, has had on the taxes we pay and on the process by which we figure out what they are.  I will refer to the bill as the Trump/GOP tax bill as the new law reflected both what the Republican leadership in Congress wanted and what the Trump administration pushed for.

We already know well that the cuts went largely to the very well-off.  The chart above is one more confirmation of this.  It was calculated from figures in a recent report by the staff of the Joint Committee on Taxation of the US Congress, released on March 25, 2019 (report #JCX-10-19).  While those earning more than $1 million in 2019 will, on average, see their taxes cut by $64,428 per tax filing unit (i.e. generally households), those earning $10,000 or less will see a reduction of just $21.  And on the scale of the chart, it is difficult, if not impossible, even to see the bars depicting the reductions in taxes for those earning less than $50,000 or so.

The sharp bias in favor of the rich was discussed in a previous post on this blog, based there on estimates from a different group (the Tax Policy Center, a non-partisan think tank) but with similar results.  And while it is of course true that those who are richer will have more in taxes that can be cut (one could hardly cut $64,428 from a taxpayer earning less than $10,000), the taxes of the rich were cut by more not simply in absolute amounts but also as a share of income.  According to the Joint Committee on Taxation report cited above, those earning $30,000 or less will only see their taxes cut by 0.5% of their incomes, while those earning between $0.5 million and $1.0 million will see a cut of 3.1%.  That is more than six times as much as a share of incomes.  That is perverse.

And the overall average reduction in individual income taxes will only be a bit less than 10% of the tax revenues being paid before.  This is in stark contrast to the more than 50% reduction in corporate income taxes that we have already observed in what was paid by corporations in 2018.

Furthermore, while taxes for households in a given income category may on average have gone down, the numerous changes made to the tax code in the Trump/GOP bill meant that for many households they did not.  Estimates provided in the Joint Committee on Taxation report cited above (see Table 2 of the report) indicate that for 2019 a bit less than two-thirds of tax filing units (households) will see a reduction in their taxes of $100 or more, but more than one-third will see either no significant change (less than $100) or a tax increase.  The impacts vary widely, even for those with the same income, depending on a household’s particular situation.

But the Trump/GOP tax bill promised not just a reduction in taxes, but also a reduction in tax complexity, by eliminating loopholes and from other such measures.  The claim was that most Americans would then be able to fill in their tax returns “on a postcard”.  But as is obvious to anyone who has filed their forms this year, it is hardly that.  This blog post will discuss why this is so and why filling in one’s tax returns remains such a headache.  The fundamental reason is simple:  The tax system is not less complex than before, but more.

There is, however, a way to address this, and not solely by ending the complexity (although that would in itself be desirable).  Even with the tax code as complicated as it now is (and more so after the Trump/GOP bill), the IRS could complete for each of us a draft of what our filing would look like based on the information that the IRS already collects.  Those draft forms would match what would be due for perhaps 80 to 85% of us (basically almost all of those who take the standard deduction).  For that 80 to 85% one would simply sign the forms and return them along with a payment if taxes are due or a request for a refund if a refund is due.  Most remaining taxpayers would also be able to use these initial draft forms from the IRS, but for them as the base for what they would need to file.  In their cases, additions or subtractions would be made to reflect items such as itemized deductions (mostly) and certain special tax factors (for some) where the information necessary to complete such calculations would not have been provided in the normal flow of reports to the IRS.  And a small number of filers might continue to fill in all their forms as now.  That small number would be no worse off than now, while life would be much simpler for the 95% or more (perhaps 99% or more) who could use the pre-filled forms from the IRS either in their entirety or as a base to start from.

The IRS receives most of the information required to do this already for each of us (and all that is required for most of us).  But what would be different is that instead of the IRS using such information to check what we filed after the fact, and then impose a fine (or worse) if we made a mistake, the IRS would now use that same information to fill in the forms for us.  We would then review and check them, and if necessary or advantageous to our situation we could then adjust them.  We will discuss how such a tax filing system could work below.

B.  Our Tax Forms are Now Even More Complex Than Before

Trump and the Republican leaders in Congress promised that with the Trump/GOP tax bill, the tax forms we would need to file could, for most of us, fit just on a postcard.  And Treasury Secretary Steven Mnuchin then asserted that the IRS (part of Treasury) did just that.  But this is simply nonsense, as anyone who has had to struggle with the new Form 1040s (or even just looked at them) could clearly see.

Specifically:

a)  Form 1040 is not a postcard, but a sheet of paper (front and back), to which one must attach up to six separate schedules.  This previously all fit on one sheet of paper, but now one has to complete and file up to seven sheets just for the 1040 itself.

b)  Furthermore, Forms 1040-EZ and 1040-A, which were used by those with less complex tax situations, no longer exist.  Now everyone must work from the fully comprehensive Form 1040 and try to figure out what may or may not apply in their particular circumstances.

c)  The number of labeled lines on the old 1040 came to 79.  On the new forms (including the attached schedules) they come to 75.  But this is misleading, as what used to be counted as lines 1 through 6 on the old 1040 are no longer counted (even though they are still there and are still needed).  Including these, the total number of numbered lines comes to 81, essentially the same as before (indeed slightly more).

d)  Spreading out the old Form 1040 from one sheet of paper to seven does, however, lead to a good deal of extra white space.  This was likely done to give it (the first sheet) the “appearance” of a postcard.  But the forms would have been much easier to fill in, with less likelihood of error, if some of that white space had been used instead for sub-totals and other such entries so that all the steps needed to calculate one’s taxes were clear.

e)  Specifically, with the six new schedules, one has to carry over computations or totals from five of them (all but the last) to various lines on the 1040 itself.  But this was done, confusingly, in several different ways:  1)  The total from Schedule 4 was carried over to its own line (line 14) on the 1040.  It would have been best if all of them had been handled this way, but they weren't.  Instead, 2) the total from Schedule 2 was added to a number of other items, with the sum of those separate items then shown on line 11 of the 1040.  And 3) the total from Schedule 1 was added to the sum of what is shown on the lines above it (lines 1 through 5b of the 1040) and then recorded on line 6 of the 1040.

If this looks confusing, it is because it is.  I made numerous mistakes on this when completing my own returns (yes – I do these myself, as I really want to know how they are done).  I hope my final returns were done correctly.  And it is not simply me.  Early indications (as of early March) were that errors on this year’s tax forms were up by 200% over last year’s (i.e. they tripled).

f)  There is also the long-standing issue that what one actually must fill out goes substantially beyond the forms one files, as one has to complete numerous worksheets in order to calculate certain of the figures.  These worksheets should be considered part of the returns, and not hidden in the directions, in order to provide an honest picture of what is involved.  And they don't fit on a postcard.

g)  But possibly what is most misleading about what is involved in filling out the returns is not simply what is on the 1040 itself, but the need to include on the 1040 figures from numerous additional forms (for those that may apply).  Few if any of them will be applicable to one's particular tax situation, but to know whether they are or not one has to review each of those forms and make that determination.  How does one know whether some form applies when there is a statement on the 1040 such as "Enter the amount, if any, from Form xxxx"?  The only way to know is to look up the form (fortunately this can now be done on the internet), read through it along with the directions, and then determine whether it may apply to you.  Furthermore, in at least a few cases the only way to know whether a form applies to your situation is to fill it in and then compare the result to some other item.

There are more than a few such forms.  By my count, one has just on the Form 1040 plus its Schedules 1 through 5 amounts that might need to be entered from Forms 8814, 4972, 8812, 8863, 4797, 8889, 2106, 3903, SE, 6251, 8962, 2441, 8863, 8880, 5695, 3800, 8801, 1116, 4137, 8919, 5329, 5405, 8959, 8960, 965-A, 8962, 4136, 2439, and 8885.  Each of these forms may apply to certain taxpayers, but mostly only a tiny fraction of them.  But all taxpayers will need to know whether they might apply to their particular situation.  They can often guess that they probably won't (and it likely would be a good guess, as most of these forms only apply to a tiny sliver of Americans), but the only way to know for sure is to check each one out.

Filling out one’s individual income tax forms has, sadly, never been easy.  But it has now become worse.  And while the new look of the Form 1040 appears to be a result of a political decision by the Trump administration (“make it look like it could fit on a postcard”), the IRS should mostly not be blamed for the complexity.  That complexity is a consequence of tax law, as written by Congress, which finds it politically advantageous to reward what might be a tiny number of supporters (and campaign contributors) with some special tax break.  And when Congress does this, the IRS must then design a new form to reflect that new law, and incorporate it into the Form 1040 and now the new attached schedules.  And then everyone, not simply the tiny number of tax filers to whom it might in fact apply, must then determine whether or not it applies to them.

There are, of course, also more fundamental causes of the complexity in the tax code, which must then be reflected in the forms.  The most important is the decision by our Congress to tax different forms of income differently, where wages earned will in general be taxed at the highest rates (up to 37%) while capital gains (including dividends on stocks held for more than 60 days) are taxed at rates of just 20% or less.  And there are a number of other forms of income that are taxed at various rates (including now, under the Trump/GOP tax bill, an effectively lower tax rate for certain company owners on the incomes they receive from their companies, as well as new special provisions of benefit to real estate developers).  As discussed in an earlier post on this blog, there is no good rationale, economic or moral, to justify this.  It leads to complex tax calculations as the different forms of income must each be identified and then taxed at rates that interact with each other.  And it leads to tremendous incentives to try to shift your type of income, when you are in a position to do so, from wages, say, to a type taxed at a lower rate (such as stock options that will later be taxed only at the long-term capital gains rate).

Given this complexity, it is no surprise that most Americans turn either to professional tax preparers (accountants and others) to fill in their tax forms for them, or to special tax preparation software such as TurboTax.  Based on statistics for the 2018 tax filing season (for 2017 taxes), 72.1 million tax filers hired professionals to prepare their tax forms, or 51% of the 141.5 million tax returns filed.  The cost varies by what needs to be filed, but even assuming an average fee of just $500 per return, this implies a total of over $36 billion is being paid by taxpayers for just this service.

Most of the remaining 49% of tax filers (a bit over three-quarters of them) use tax preparation software for their returns.  But this route is problematic as well.  There is again a cost (other than for extremely simple returns), and the software itself may not be that good.  A recent review by Consumer Reports found problems with each of the four major tax preparation software packages it tested (TurboTax, H&R Block, TaxSlayer, and TaxAct), and concluded they are not to be trusted.

And on top of this, there is the time the taxpayer must spend to organize all the records that will be needed in order to complete the tax returns – whether by a hired professional tax preparer, or by software, or by one's own hand.  A 2010 report by a presidential commission examining options for tax reform estimated that Americans spend about 2.5 billion hours a year to do what is necessary to file their individual income tax returns, equivalent to $62.5 billion at an average time cost of $25 per hour.
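The compliance-cost arithmetic in the last two paragraphs is easy to verify (a sketch using the figures and assumptions cited in the text; the $500 average fee and $25/hour time cost are the text's own assumptions, not measured values):

```python
# Figures from the text above.
preparer_fees = 72.1e6 * 500   # 72.1 million filers using paid preparers, at $500 each
time_cost = 2.5e9 * 25         # 2.5 billion hours of record-keeping at $25 per hour

print(preparer_fees / 1e9)  # 36.05 -- "over $36 billion"
print(time_cost / 1e9)      # 62.5  -- "$62.5 billion"
```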

Finally there are the headaches.  Figuring one’s taxes, even if a professional is hired to fill in the forms, is not something anyone wants to spend time on.

There is a better way.  With the information that is already provided to the IRS each year, the IRS could complete and provide to each of us a draft set of tax forms which would suffice (i.e. reflect exactly what our tax obligation is) for probably 80% or more of households.  And most of the remainder could use such draft forms as a base and then provide some simple additions or subtractions to arrive at what their tax obligation is.  The next section will discuss how this could be done.

C.  Have the IRS Prepare Draft Tax Returns for Each of Us

The IRS already receives, from employers, financial institutions, and others, information on the incomes provided to each of us during the tax year.  And these institutions then tell us each January what they provided to the IRS.  Employers tell us on W-2 forms what wages were paid to us, and financial institutions will tell us through various 1099 forms what was paid to us in interest, in dividends, in realized capital gains, in earnings from retirement plans, and from other such sources of returns on our investments.  Reports are also filed with the IRS for major transactions such as from the sale of a home or other real estate.

The IRS thus has very good information on our income each year.  Our family situation is also generally stable from year to year, although it can vary sometimes (such as when a child is born).  But basing an initial draft estimate on the household situation of the previous year will generally be correct, and can be amended when needed.  One could also easily set up an online system through which tax filers could notify the IRS when such events occur, to allow the IRS to incorporate those changes into the draft tax forms they next produce.

For most of those who take the standard deduction, the IRS could then fill in our tax forms exactly.  And most Americans take the standard deduction.  Prior to the Trump/GOP tax bill, about 70% of tax filers did, and it is now estimated that with the changes resulting from the new bill, about 90% will.  Under the Trump/GOP tax bill, the basic standard deduction was doubled (while personal exemptions were eliminated, so not all those taking the standard deduction ended up better off).  And perhaps of equal importance, the deduction for state and local taxes was capped at $10,000, while the amount of mortgage interest that could be deducted was also narrowed, so itemization is no longer advantageous for many (with these new limitations primarily affecting those living in states that vote for Democrats – not likely a coincidence).

The IRS could thus prepare filled in tax forms for each of us, based on information contained in what we had filed in earlier years and assuming the standard deduction is going to be taken.  But they would just be drafts.  They would be sent to us for our review, and if everything is fine (and for most of the 90% taking the standard deduction they would be) we would simply sign the forms and return them (along with a check if some additional tax is due, or information on where to deposit a refund if a tax refund is due).

But for the 10% where itemized deductions are advantageous, and for a few others who are in some special tax situation, one could either start with the draft forms and make additions or subtractions to reflect simple adjustments, or, if one wished, prepare a new set of forms reflecting one’s tax situation.  There would likely not be many of the latter, but it would be an option, and no worse than what is currently required of everyone.

For those making adjustments, the changes could simply be made at the end.  For example (and likely the most common such situation), suppose it was advantageous to take itemized deductions rather than the standard deduction.  One would fill in the regular Schedule A (as now), but then, rather than recomputing all of the forms, one could subtract from the taxes due an amount equal to the excess of the itemized deductions over the standard deduction, multiplied by one's marginal tax rate.  Suppose the excess of the itemized deductions over the standard deduction came to $1,000.  Then the very rich (households earning over $600,000 a year after deductions) would reduce the taxes due by 37%, or $370.  Those earning $400,000 to $600,000, in the 35% bracket, would subtract $350.  And so on down through the lower brackets, where those in the 12% bracket (earning $19,050 to $77,400) would subtract $120 (and those earning less than $19,050 are unlikely to itemize).

[Side Note:  Why do the rich receive what is in effect a larger subsidy from the government than the poor do for what they itemize, such as for contributions to charities?  That is, why do the rich effectively pay just $630 for their contribution to a charity ($1,000 minus $370), while the poor pay $880 ($1,000 minus $120) for their contribution to possibly the exact same charity?  There really is no economic, much less moral, reason for this, but that is in fact how the US tax code is currently written.  As discussed in an earlier post on this blog, the government subsidy for such deductions could instead be set to be the same for all, at say a rate of 20% or so.  There is no reason why the rich should receive a bigger subsidy than the poor receive for the contributions they make.]
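The end-of-return adjustment described above can be sketched as a small calculation.  This is purely illustrative: the bracket thresholds below are the 2018 rates for married couples filing jointly (the brackets the text's examples draw on), and the function names are hypothetical:

```python
# Hypothetical sketch of the end-of-return itemization adjustment.
# Thresholds are the 2018 married-filing-jointly brackets (used here
# for illustration; a real system would use the filer's actual bracket).
BRACKETS = [          # (taxable income floor, marginal rate)
    (600_000, 0.37),
    (400_000, 0.35),
    (315_000, 0.32),
    (165_000, 0.24),
    (77_400, 0.22),
    (19_050, 0.12),
    (0, 0.10),
]

def marginal_rate(taxable_income):
    """Return the filer's top marginal rate."""
    for floor, rate in BRACKETS:
        if taxable_income >= floor:
            return rate
    return 0.0

def itemization_adjustment(taxable_income, itemized, standard):
    """Reduction in taxes due when itemized deductions exceed the
    standard deduction: the excess times the filer's marginal rate."""
    return max(0.0, itemized - standard) * marginal_rate(taxable_income)

# The text's examples: a $1,000 excess of itemized over standard deductions.
print(round(itemization_adjustment(700_000, 25_000, 24_000)))  # 370, top bracket
print(round(itemization_adjustment(50_000, 25_000, 24_000)))   # 120, 12% bracket
```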

Another area where the IRS would not have complete information to compute the taxes due is where the tax filer had sold a capital asset purchased before 2010.  The IRS only began in 2010 to require that financial institutions report the cost basis of assets sold, and this cost basis is needed to compute capital gains (or losses).  But as time passes, a smaller and smaller share of assets sold will have been purchased before 2010.  The most important, for most people, will likely be a home bought before 2010, but such a sale will happen only once (unless they owned multiple real estate assets in 2010).

But a simple adjustment could be made to reflect the cost basis of such assets, similar to the adjustment for itemized deductions.  The draft tax forms filled in by the IRS would leave as blank (zero) the cost basis of the assets sold in the year for which it did not have a figure reported.  The tax filer would then determine what the cost basis of all such assets should be (as they do now), add them up, and then subtract 20% of that total cost basis from the taxes due (for those in the 20% bracket for long term capital gains, as most people with capital gains are, or use 15% or 0% if those tax brackets apply in their particular cases).

There will still be a few tax filers with more complex situations where the IRS draft computations are not helpful, who will want to do their own forms.  This is fine – there would always be that option.  But such individuals would still be no worse off than what is required now.  And their number is likely to be very small.  While a guess, I would say that what the IRS could provide to tax filers would be fully sufficient and accurate for 80 to 85% of Americans, and that simple additions or subtractions to the draft forms (as described above) would work for most of the rest.  Probably less than 5% of filers would need to complete a full set of forms themselves, and possibly less than 1%.

D. Final Remarks

Such an approach would be new for the US.  But there is nothing revolutionary about it.  Indeed, it is common elsewhere in the world.  Much of Western Europe already follows such an approach or some variant of it, in particular all of the Scandinavian countries as well as Germany, Spain, and the UK; Japan does as well.  Small countries, such as Chile and Estonia, have it, as do large ones.

It has also often been proposed for the US.  Indeed, President Reagan proposed it as part of his tax reduction and simplification bill in 1985, then candidate Barack Obama proposed it in 2007 in a speech on middle class tax fairness, a presidential commission in 2010 included it as one of the proposals in its report on simplifying the tax system, and numerous academics and others have also argued in its favor.

It would also likely save money at the IRS.  The IRS already collects most of the information needed.  But that information is not then sent back to us in fully or partially pre-filled tax forms; rather, it is used by the IRS after we file to check whether we got anything wrong.  And if we did, we then face a fine or possibly worse.  Completing our tax returns should not be a game of "gotcha" with the IRS, but rather an effort to ensure we have them right.

Such a reform has, however, been staunchly opposed by narrow interests who benefit from the current frustrating system.  Intuit, the seller of TurboTax software, has been particularly aggressive through its congressional lobbying and campaign contributions in using Congress to block the IRS from pursuing this, as has H&R Block.  They of course realize that if tax filing were easy, with the IRS completing most or all of the forms for us, there would be no need to spend what comes to billions of dollars for software from Intuit and others.  But the morality of a business using its lobbying and campaign contributions to ensure life is made particularly burdensome for the citizenry, so that it can then sell a product to make it easier, is something to be questioned.

One can, however, understand the narrow commercial interests of Intuit and the tax software companies.  One can also, sadly, understand the opposition of a number of conservative political activists, with Grover Norquist the most prominent and in the lead.  They too have aggressively lobbied Congress to block the IRS from making tax filing simpler.  They are ideologically opposed to taxes, and see the burden and difficulty of figuring out one's taxes as a positive, not a negative.  The hope is that the more people complain about how difficult it is to fill in their tax forms, the more they will favor cutting taxes.  While that view of how people see taxes might well be accurate, what many may not realize is that the tax cuts of recent decades have led to greater complexity and difficulty, not less.  With new loopholes for certain narrow interests, and with income taxed differently depending on its source (income from wealth taxed at a much lower rate than income from labor), the system has become more complex while generating less revenue overall.

But it is perverse that Congress should legislate in favor of making life more difficult.  The tax system is indeed necessary and crucial, as Reagan correctly noted in his 1985 speech on tax reform, but, as he also noted in that speech, there is no need to make taxes difficult to figure and file.  Most Americans, Reagan argued, should be able, and would be able under his proposals, to use what he called a "return-free" system, with the IRS working out the taxes due.

The system as proposed above would do this.  It would also be voluntary.  If one disagreed with the pre-filled forms sent by the IRS, and could not make the simple adjustments (up or down) to the taxes due through the measures discussed above, one could always fill in the entire set of forms oneself.  But for that small number of cases this would just be the same as is now required of all.  Furthermore, if one really were concerned about the IRS filling in one's forms for some reason (it is not clear what that might be), one could easily have an opt-out system, where one would notify the IRS that one did not want the service.

The tax code itself should still be simplified.  There are many reforms that can and should be implemented, if there was the political will.  The 2010 presidential commission presented numerous options for what could be done.  But even with the current complex system, or rather especially because of the current complex system, there is no valid reason why figuring out and filing our taxes should be so difficult.  Let the IRS do it for us.

Taxes on Corporate Profits Have Continued to Collapse

[Chart: taxes on US corporate profits, at annual rates, by quarter]

The Bureau of Economic Analysis (BEA) released earlier today its second estimate of GDP growth in the fourth quarter of 2018.  (Confusingly, it was officially called the "third" estimate, but it was only the second, as what would have been the first, due in January, was never done, due to Trump shutting down most agencies of the federal government in December and January over his border wall dispute.)  Most public attention was rightly focussed on the downward revision in the estimate of real GDP growth in the fourth quarter, from the 2.6% annual rate estimated last month to 2.2% now.  And current estimates are that growth in the first quarter of 2019 will be substantially less than that.

But there is much more in the BEA figures than just GDP growth.  The second report of the BEA also includes initial estimates of corporate profits and the taxes they pay (as well as much else).  The purpose of this note is to update an earlier post on this blog that examined what happened to corporate profit tax revenues following the Trump / GOP tax cuts of late 2017.  That earlier post was based on figures for just the first half of 2018.

We now have figures for the full year, and they confirm what had earlier been found – corporate profit tax revenues have indeed plummeted.  As seen in the chart at the top of this post, corporate profit taxes were in the range of only $150 to $160 billion (at annual rates) in the four quarters of 2018.  This was less than half the $300 to $350 billion range in the years before 2018.  And there is no sign that this collapse in revenues was due to special circumstances of one quarter or another.  We see it in all four quarters.

The collapse shows through even more clearly when one examines what they were as a share of corporate profits:

[Chart: corporate profit taxes as a share of corporate profits]

The rate fell from a range of generally 15 to 16%, and sometimes 17%, in the earlier years, to just 7.0% in 2018.  And it was an unusually steady rate of 7.0% throughout the year.  Note that under the Trump / GOP tax bill, the standard rate for corporate profit tax was cut from 35% previously to a new headline rate of 21%.  But the actual rate paid turned out (on average over all firms) to come to just 7.0%, or only one-third as much.  The tax bill proponents claimed that while the headline rate was being cut, they would close loopholes so the amount collected would not go down.  But instead loopholes were not only kept, but expanded, and revenues collected fell by more than half.

If the average corporate profit tax rate paid in 2018 had been not 7.0%, but rather at the rate it was on average over the three prior fiscal years (FY2015 to 2017) of 15.5%, an extra $192.2 billion in revenues would have been collected.
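The arithmetic behind that $192.2 billion can be checked directly.  In this sketch the 2018 profit base is inferred from the figures in the text (it is not taken from the BEA release itself):

```python
# An 8.5 point gap in effective rates (15.5% vs 7.0%) that accounts for
# $192.2 billion implies a profit base of about $2.26 trillion.
extra_revenue = 192.2   # $ billions foregone, from the text
prior_rate = 0.155      # average effective rate, FY2015 to FY2017
rate_2018 = 0.070       # effective rate actually paid in 2018

implied_profits = extra_revenue / (prior_rate - rate_2018)
taxes_paid = implied_profits * rate_2018

print(round(implied_profits))  # 2261 ($ billions of profits)
print(round(taxes_paid))       # 158 -- inside the $150 to $160 billion
                               # range of revenues cited above
```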

There was also a reduction in personal income taxes collected.  While the proportional fall was less, a much higher share of federal income taxes are now borne by individuals than by corporations.  (They were more evenly balanced decades ago, when the corporate profit tax rates were much higher – they reached over 50% in terms of the amount actually collected in the early 1950s.)  Federal personal income tax as a share of personal income was 9.2% in 2018, and again quite steady at that rate over each of the four quarters.  Over the three prior fiscal years of FY2015 to 2017, this rate averaged 9.6%.  Had it remained at that 9.6%, an extra $77.3 billion would have been collected in 2018.

The total reduction in tax revenues from these two sources in 2018 was therefore $270 billion.  While it is admittedly simplistic to extrapolate this out over ten years, if one nevertheless does (assuming, conservatively, real growth of 1% a year and price growth of 2%, for a total growth of about 3% a year), the total revenue loss would sum to $3.1 trillion.  And if one adds to this, as one should, the extra interest expense on what would now be a higher public debt (and assuming an average interest rate for government borrowing of 2.6%), the total loss grows to $3.5 trillion.
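That extrapolation can be reproduced with a short simulation (a sketch using the text's own assumptions: a $270 billion first-year loss growing about 3% a year, with 2.6% interest accruing on the accumulated extra debt):

```python
def ten_year_loss(first_year=270.0, growth=0.03, interest_rate=0.026, years=10):
    """Cumulative revenue loss, and the loss including extra interest on
    the resulting additional public debt, both in $ billions."""
    revenue_loss = 0.0
    extra_debt = 0.0
    for t in range(years):
        annual_loss = first_year * (1 + growth) ** t
        interest = extra_debt * interest_rate  # interest on debt built up so far
        revenue_loss += annual_loss
        extra_debt += annual_loss + interest
    return revenue_loss, extra_debt

loss, loss_with_interest = ten_year_loss()
print(round(loss))                # 3095 -- about the $3.1 trillion cited
print(round(loss_with_interest))  # 3462 -- about the $3.5 trillion cited
```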

This is huge.  To give a sense of the magnitude, an earlier post on this blog found that revenues equal to the original forecast loss under the Trump / GOP tax plan (summing to $1.5 trillion over the next decade, and then continuing) would suffice to ensure the Social Security Trust Fund would be fully funded forever.  As things are now, if nothing is done the Trust Fund will run out in about 2034.  And Republicans insist that the gap is so large that nothing can be done, and that the system will have to crash unless retired seniors accept a sharp reduction in what are already low benefits.

But with losses under the Trump / GOP tax bill of $3.1 trillion over ten years, less than half of those losses would suffice to ensure Social Security could survive at contracted benefit levels.  One cannot argue that we can afford such a huge tax cut, but cannot afford what is needed to ensure Social Security remains solvent.

In the nearer term, the tax cuts have led to a large growth in the fiscal deficit.  Even the US Treasury itself is currently forecasting that the federal budget deficit will reach $1.1 trillion in FY2019 (5.2% of GDP), up from $779 billion in FY2018.  It is unprecedented to have such high fiscal deficits at a time of full employment, other than during World War II.  Proper fiscal management would call for something closer to a balanced budget, or even a surplus, in those periods when the economy is at full employment, while deficits should be expected (and indeed called for) during times of economic downturns, when unemployment is high.  But instead we are doing the opposite.  This will put the economy in a precarious position when the next economic downturn comes.  And eventually it will, as it always has.

End Gerrymandering by Focussing on the Process, Not on the Outcomes

A.  Introduction

There is little that is as destructive to a democracy as gerrymandering.  As has been noted by many, with gerrymandering the politicians are choosing their voters rather than the voters choosing their political representatives.

The diagrams above, in schematic form, show how gerrymandering works.  Suppose one has a state or region with 50 precincts, with 60% that are fully “blue” and 40% that are fully “red”, and where 5 districts need to be drawn.  If the blue party controls the process, they can draw the district lines as in the middle diagram, and win all 5 (100%) of the districts, with just 60% of the voters.  If, in contrast, the red party controls the process for some reason, they could draw the district boundaries as in the diagram on the right.  They would then win 3 of the 5 districts (60%) even though they only account for 40% of the voters.  It works by what is called in the business “packing and cracking”:  With the red party controlling the process, they “pack” as many blue voters as possible into a small number of districts (two in the example here, each with 90% blue voters), and then “crack” the rest by scattering them around in the remaining districts, each as a minority (three districts here, each with 40% blue voters and 60% red).
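The schematic can be reproduced in a few lines (a sketch; each district is represented simply by its share of blue voters, mirroring the two maps described above):

```python
def seats_won_by_blue(districts):
    """Count districts in which blue voters form the majority.
    Each district is given as its share of blue voters."""
    return sum(1 for blue_share in districts if blue_share > 0.5)

# Statewide: 60% blue, 40% red, five districts of equal population.
blue_drawn = [0.6, 0.6, 0.6, 0.6, 0.6]  # blue draws the lines
red_drawn = [0.9, 0.9, 0.4, 0.4, 0.4]   # red packs blue into two districts
                                        # and cracks the rest
print(seats_won_by_blue(blue_drawn))  # 5 of 5 seats for 60% of the voters
print(seats_won_by_blue(red_drawn))   # 2 of 5 -- red wins 3 with 40% of voters
```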

Gerrymandering leads to cynicism among voters, with the well-founded view that their votes just do not matter.  Possibly even worse, gerrymandering leads to increased polarization, as candidates in districts with lines drawn to be safe for one party or the other do not need to worry about seeking to appeal to voters of the opposite party.  Rather, their main concern is that a more extreme candidate from their own party will not challenge them in a primary, where only those of their own party (and normally mostly just the more extreme voters in their party) will vote.  And this is exactly what we have seen, especially since 2010 when gerrymandering became more sophisticated, widespread, and egregious than ever before.

Gerrymandering has grown in recent decades both because computing power and data sources have grown increasingly sophisticated, and because a higher share of states have had a single political party able to control the process in full (i.e. with both legislative chambers, and the governor when a part of the process, all under a single party’s control).  And especially following the 2010 elections, this has favored the Republicans.  As a result, while there has been one Democratic-controlled state (Maryland) on common lists of the states with the most egregious gerrymandering, most of the states with extreme gerrymandering were Republican-controlled.  Thus, for example, Professor Samuel Wang of Princeton, founder of the Princeton Gerrymandering Project, has identified a list of the eight most egregiously gerrymandered states (by a set of criteria he has helped develop), where one (Maryland) was Democratic-controlled, while the remaining seven were Republican.  Or the Washington Post calculated across all states an average of the degree of compactness of congressional districts:  Of the 15 states with the least compact districts, only two (Maryland and Illinois) were liberal Democratic-controlled states.  And in terms of the “efficiency gap” measure (which I will discuss below), seven states were gerrymandered following the 2010 elections in such a way as to yield two or more congressional seats each in their favor.  All seven were Republican-controlled.

With gerrymandering increasingly common and extreme, a number of cases have gone to the Supreme Court to try to stop it.  However, the Supreme Court has failed as yet to issue a definitive ruling ending the practice.  Rather, it has so far skirted the issue by resolving cases on more narrow grounds, or by sending cases back to lower courts for further consideration.  This may soon change, as the Supreme Court has agreed to take up two cases (affecting lines drawn for congressional districts in North Carolina and in Maryland), with oral arguments scheduled for March 26, 2019.  But it remains to be seen if these cases will lead to a definitive ruling on the practice of partisan gerrymandering or not.

This is not because of a lack of concern by the court.  Even conservative Justice Samuel Alito has conceded that "gerrymandering is distasteful".  But he, along with the other conservative justices on the court, has ruled against the court taking a position on the gerrymandering cases brought before it, in part, at least, out of concern that they do not have a clear standard by which to judge whether any particular case of gerrymandering is constitutionally excessive.  This goes back to a 2004 case (Vieth v. Jubelirer) in which the four most conservative justices of the time, led by Justice Antonin Scalia, opined that there could not be such a standard, while the four liberal justices argued that there could.  Justice Anthony Kennedy, in the middle, issued a concurring opinion agreeing with the conservative justices that there was not then an acceptable standard before them, but adding that he would not preclude the possibility of such a standard being developed at some point in the future.

Following this 2004 decision, political scientists and other scholars have sought to come up with such a standard.  Many have been suggested, such as a set of three tests proposed by Professor Wang of Princeton, or measures that compare the share of seats won to the share of the votes cast, and more.  Probably the most attention in recent years has been given to the "efficiency gap" measure proposed by Professor Nicholas Stephanopoulos and Eric McGhee.  The efficiency gap is the gap between the two main parties in the "wasted votes" each party received in some past election in the state (as a share of total votes in the state), where a party's wasted votes are the sum of all the votes cast for its losing candidates, plus the votes in excess of the 50% needed to win where its candidate won.  This provides a direct measure of the two basic tactics of gerrymandering described above: "packing" as many voters of one party as possible into a small number of districts (where they might receive 80 or 90% of the votes, but with all those above 50% "wasted"), and "cracking" (splitting the remaining voters of that party across a large number of districts where they will each be in a minority and hence will lose, with those votes then also "wasted").
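The efficiency gap just described is straightforward to compute.  A sketch, using hypothetical district vote totals for illustration:

```python
def efficiency_gap(districts):
    """Efficiency gap from a list of (party_a_votes, party_b_votes) pairs.
    A wasted vote is any vote for a losing candidate, plus any winning
    vote in excess of the 50% needed to win.  A positive result means
    more A votes were wasted, i.e. the map favors party B."""
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        needed = (a + b) / 2        # votes needed to carry the district
        total += a + b
        if a > b:
            wasted_a += a - needed  # A's surplus above 50%
            wasted_b += b           # all of B's losing votes
        else:
            wasted_a += a
            wasted_b += b - needed
    return (wasted_a - wasted_b) / total

# Five hypothetical districts of 100 votes each, as (A, B):
districts = [(75, 25), (60, 40), (43, 57), (48, 52), (49, 51)]
print(efficiency_gap(districts))  # 0.2 -- A wastes 20 points more; map favors B
```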

But there are problems with each of these measures, including the widely touted efficiency gap.  It has often been the case in recent years, in our divided society, that like-minded voters live close to one another, and particular districts in the state will then, as a result, often see the winner receive a very high share of the votes.  Thus, even with no overt gerrymandering, the measured efficiency gap will appear large.  Furthermore, at the opposite end of the spectrum, the measure is extremely sensitive when a few districts are close to 50/50.  A shift of just a few percentage points in the vote can then flip such a district from one party to the other, producing a big jump in the losing party’s share of wasted votes (the 49% it received).

There is, however, a far more fundamental problem.  And that is that this is simply the wrong question to ask.  With all due respect to Justice Kennedy, and recognizing also that I am an economist and not a lawyer, I do not understand why the focus here is on the voting outcome, rather than on the process by which the district lines were drawn.  The voting outcome is not the standard by which other aspects of voter rights are judged.  Rather, the focus is on whether the process followed was fair and unbiased, with the outcome then whatever it is.

I would argue that the same should apply when district lines are drawn.  Was the process followed fair and unbiased?  The way to ensure that would be to remove the politicians from the process (both directly and indirectly), and to follow instead an automatic procedure by which district lines are drawn in accord with a small number of basic principles.

The next section below will first discuss the basic point that the focus when judging fairness and lack of bias should not be on whether we can come up with some measure based on the vote outcomes, but rather on whether the process that was followed to draw the district lines was fair and unbiased or not.  The section following will then discuss a particular process that illustrates how this could be done.  It would be automatic, and would produce a fair and unbiased drawing of voting district lines that meets the basic principles on which such a map should be based (districts of similar population, compactness, contiguity, and, to the extent consistent with these, respect for the boundaries of existing political jurisdictions such as counties or municipalities).  And while I believe this particular process would be a good one, I would not exclude that others are possible.  The important point is that the courts should require the states to follow some such process, and from the example presented we see that this is indeed feasible.  It is not an impossible task.

The penultimate section of the post will then discuss a few points that arise with any such system, and their implications, and end with a brief section summarizing the key points.

B.  A Fair Voting System Should Be Judged Based on the Process, Not on the Outcomes

Voting rights are fundamental in any democracy.  But in judging whether some aspect of the voting system is proper, we do not try to determine whether (by some defined specific measure) the resulting outcomes were improperly skewed.

Thus, for example, we take as a basic right that our ballot may be cast in secret.  No government official, nor anyone else for that matter, can insist on seeing how we voted.  Suppose that some state passed a law saying a government-appointed official will look over the shoulder of each of us as we vote, to determine whether we did it “right” or not.  We would expect the courts to strike this down, as an inappropriate process that contravenes our basic voting rights.  We would not expect the courts to say that they should look at the subsequent voting outcomes, and try to come up with some specific measure which would show, with certainty, whether the resulting outcomes were excessively influenced or not.  That would of course be absurd.

As another absurd example, suppose some state passed a law granting those registered in one of the major political parties, but not those registered in the other, access to more days of early voting.  This would be explicitly partisan, and one would assume that the courts would not insist on limiting their assessment to an examination of the later voting outcomes to see whether, by some proposed measure, the resulting outcomes were excessively affected.  The voting system, to be fair, should not lead to a partisan advantage for one party or the other.  But gerrymandering does precisely that.

Yet the courts have so far declined to issue a definitive ruling on partisan gerrymandering, asking instead whether there might be some measure to determine, in the voting outcomes, whether gerrymandering had led to an excessive partisan advantage for the party drawing the district lines.  And there have been open admissions by senior political figures that district borders were in fact drawn up to provide a partisan advantage.  Indeed, principals involved in the two cases now before the Supreme Court have openly said that partisan advantage was the objective.  In North Carolina, David Lewis, the Republican chair of the committee in the state legislature responsible for drawing up the district lines, said during the debate that “I think electing Republicans is better than electing Democrats. So I drew this map to help foster what I think is better for the country.”

And in the case of Maryland, the Democratic governor of the state in 2010 at the time the congressional district lines were drawn, Martin O’Malley, spoke out in 2018 in writing and in interviews openly acknowledging that he and the Democrats had drawn the district lines for partisan advantage.  But he also now said that this was wrong and that he hoped the Supreme Court would rule against what they had done.

But how to remove partisanship when district lines are drawn?  As long as politicians are directly involved, with their political futures (and those of their colleagues) dependent on the district lines, it is human nature that biases will enter.  And it does not matter whether the biases are conscious and openly expressed, or unconscious and denied.  Furthermore, although possibly diminished, such biases will still enter even with independent commissions drawing the district lines.  There will be some political process by which the commissioners are appointed, and those who are appointed, even if independent, will still be human and will have certain preferences.

The way to address this would rather be to define some automatic process which, given the data on where people live and the specific principles to follow, will be able to draw up district lines that are both fair (follow the stated principles) and unbiased (are not drawn up in order to provide partisan advantage to one party).  In the next section I will present a particular process that would do this.

C.  An Automatic Process to Draw District Lines that are Fair and Unbiased

The boundaries for fair and unbiased districts should be drawn in accord with the following set of principles (and no more):

a)  One Person – One Vote:  Each district should have a similar population;

b)  Contiguity:  Each district must be geographically contiguous.  That is, one continuous boundary line will encompass the entire district and nothing more;

c)  Compactness:  While remaining consistent with the above, districts should be as compact as possible under some specified measure of compactness.

And while not such a fundamental principle, a reasonable objective is also, to the extent possible consistent with the basic principles above, that the district boundaries drawn should follow the lines of existing political jurisdictions (such as of counties or municipalities).

There will still be a need for decisions to be made on the basic process to follow and then on a number of the parameters and specific rules required for any such process.  Individual states will need to make such decisions, and can do so in accordance with their traditions and with what makes sense for their particular state.  But once these “rules of the game” are fully specified, there should then be a requirement that they will remain locked in for some lengthy period (at least to beyond whenever the next decennial redistricting will be needed), so that games cannot be played with the rules in order to bias a redistricting that may soon be coming up.  This will be discussed further below.

Such specific decisions will need to be made in order to fully define the application of the basic principles presented above.  To start, for the one person – one vote principle the Supreme Court has ruled that a 10% margin in population between the largest and smallest districts is an acceptable standard.  And many states have indeed chosen to follow this standard.  However, a state could, if it wished, choose to use a tighter standard, such as a margin in the populations between the largest and smallest districts of no more than 8%, or perhaps 5% or whatever.  A choice needs to be made.

Similarly, a specific measure of compactness will need to be specified.  Mathematically there are several different measures that could be used, but a good one which is both intuitive and relatively easy to apply is that the sum of the lengths of all the perimeters of each of the districts in the state should be minimized.  Note that since the outside borders of the state itself are fixed, this sum can be limited just to the perimeters that are internal to the state.  In essence, since states are to be divided up into component districts (and exhaustively so), the perimeter lines that do this with the shortest total length will lead to districts that are compact.  There will not be wavy lines, nor lines leading to elongated districts, as such lines will sum to a greater total length than possible alternatives.
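To illustrate how this perimeter-minimization measure behaves, here is a toy sketch that models a state as a grid of unit cells (a deliberate simplification of real census geography; the district assignments are invented).  A compact split produces a much shorter internal border than a gerrymander-like striped split of the same population:

```python
def internal_perimeter(assignment):
    """Total length of district borders internal to the 'state',
    modeled as a grid where assignment[(x, y)] gives the district
    of cell (x, y).  Each pair of neighboring cells in different
    districts contributes one unit of shared border, counted once
    (state-boundary edges are excluded, as they are fixed)."""
    length = 0
    for (x, y), dist in assignment.items():
        for nbr in ((x + 1, y), (x, y + 1)):  # check right and up once each
            if nbr in assignment and assignment[nbr] != dist:
                length += 1
    return length

# A 4x4 "state" split into two equal-population districts:
# a compact left/right split versus an interleaved striped split.
compact = {(x, y): (0 if x < 2 else 1) for x in range(4) for y in range(4)}
striped = {(x, y): (x % 2) for x in range(4) for y in range(4)}
print(internal_perimeter(compact))  # → 4
print(internal_perimeter(striped))  # → 12
```

Both maps split the population exactly in half, but the striped map needs three times the internal border length, so minimizing total perimeter rules it out.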

What, then, would be a specific process (or algorithm) which could be used to draw district lines?  I will recommend one here, which should work well and would be consistent with the basic principles for a fair and unbiased set of district boundaries.  But other processes are possible.  A state could choose some such alternative (but then should stick to it).  The important point is that one should define a fully specified, automatic, and neutral process to draw such district lines, rather than try to determine whether some set of lines, drawn based on the “judgment” of politicians or of others, was “excessively” gerrymandered based on the voting outcomes observed.

Finally, the example will be based on what would be done to draw congressional district lines in a state.  But one could follow a similar process for drawing other such district lines, such as for state legislative districts.

The process would follow a series of steps:

Step 1: The first step would be to define a set of sub-districts within each county in a state (parish in Louisiana) and municipality (in those states where municipalities hold similar governmental responsibilities as a county).  These sub-districts would likely be the districts for county boards or legislative councils in most of the states, and one might typically have a dozen or more of these in such jurisdictions.  When those districts are also being redrawn as part of the decennial redistricting process, then they should be drawn first (based on the principles set out here), before the congressional district lines are drawn.

Each state would define, as appropriate for the institutions of that specific state, the sub-districts that will be used for the purpose of drawing the congressional district lines.  And if no such sub-jurisdictions exist in certain counties of certain states, one could draw up such sub-districts, purely for the purposes of this redistricting exercise, by dividing such counties into compact (based on minimization of the sum of the perimeters), equal population, districts.  While the number of such sub-districts would be defined (as part of the rules set for the process) based on the population of the affected counties, a reasonable number might generally be around 12 or 15.

These sub-districts will then be used in Step 4 below to even out the congressional districts.

Step 2:  An initial division of each state into a set of tentative congressional districts would then be drawn based on minimizing the sum of the lengths of the perimeter lines for all the districts, and requiring that all of the districts in the state have exactly the same population.  Following the 2010 census, the average population in a congressional district across the US was 710,767, but the exact number will vary by state depending on how many congressional seats the state was allocated.

Step 3: This first set of district lines will not, in general, follow county and municipal lines.  In this step, the initial set of district lines would be shifted to the county or municipal line which is geographically closest (as defined by minimizing the geographic area that would be shifted in going to that county or city line, in comparison to whatever the alternative jurisdiction would be).  If the populations in the resulting congressional districts are then all within the 10% margin (or whatever percent margin is chosen by the state) between the largest and the smallest districts, then one is finished and the map is final.

Step 4:  But in general, there may be one or more districts where the resulting population exceeds or falls short of the 10% limit.  One would then make use of the political subdivisions of the counties and municipalities defined in Step 1 to bring them into line.  A specific set of rules for that process would need to be specified.  One such set would be to first determine which congressional district, as then drawn, deviated most from what the mean population should be for the districts in that state.  Suppose that district had too large of a population.  One would then shift one of the political subdivisions in that district from it to whichever adjacent congressional district had the least population (of all adjacent districts).  And the specific political subdivision shifted would then be the one which would have the least adverse impact on the measure of compactness (the sum of perimeter lengths).  Note that the impact on the compactness measure could indeed be positive (i.e. it could make the resulting congressional districts more compact), if the political subdivision eligible to be shifted were in a bend in the county or city line.

If the resulting congressional districts were all now within the 10% population margin (or whatever margin the state had chosen as its standard), one would be finished.  But if this is not the case, then one would repeat Step 4 over and over as necessary, each time for whatever district was then most out of line with the 10% margin.

That is it.  The result would be contiguous and relatively compact congressional districts, each with a similar population (within the 10% margin, or whatever margin is decided upon), and following borders of counties and municipalities or of political sub-divisions within those entities.
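The iterative evening-out of Step 4 can be sketched as follows.  This is a toy model with invented district names and populations: the real procedure would choose which sub-district to shift by its effect on the compactness measure, and would need explicit tie-breaking rules, both of which are omitted here:

```python
def rebalance(districts, adjacent, margin=0.10):
    """Sketch of Step 4.  districts maps a district name to a list
    of sub-district populations; adjacent maps each district to a
    tuple of its neighbors.  Repeatedly shift one sub-district out
    of (or into) the district deviating most from the mean, until
    the largest-to-smallest population spread is within the margin."""
    mean = sum(sum(subs) for subs in districts.values()) / len(districts)
    while True:
        pops = {d: sum(subs) for d, subs in districts.items()}
        if (max(pops.values()) - min(pops.values())) / mean <= margin:
            return districts
        # District farthest from the mean population...
        worst = max(districts, key=lambda d: abs(pops[d] - mean))
        if pops[worst] > mean:
            # ...sheds a sub-district to its least-populated neighbor.
            target = min(adjacent[worst], key=lambda d: pops[d])
            piece = min(districts[worst])  # stand-in for "least compactness impact"
            districts[worst].remove(piece)
            districts[target].append(piece)
        else:
            # ...or absorbs one from its most-populated neighbor.
            source = max(adjacent[worst], key=lambda d: pops[d])
            piece = min(districts[source])
            districts[source].remove(piece)
            districts[worst].append(piece)

# Hypothetical three-district state; A starts 30% over the mean of 100.
districts = {"A": [40, 30, 30, 30], "B": [50, 30, 20], "C": [40, 20, 10]}
adjacent = {"A": ("B", "C"), "B": ("A", "C"), "C": ("A", "B")}
result = rebalance(districts, adjacent)
print({d: sum(subs) for d, subs in result.items()})  # → {'A': 100, 'B': 100, 'C': 100}
```

In this invented example a single shift of the smallest sub-district from the over-populated district A to the under-populated district C brings all three districts within the margin.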

This would of course all be done on a computer, and it can be once the rules and parameters are all decided, as there will then no longer be a role for opinion nor an opportunity for political bias to enter.  And while the initial data entry will be significant (one would need the populations and perimeter lengths of each of the political subdivisions, and of the counties and municipalities that they add up to), such data are now available from standard sources.  Indeed, the data entry needed would be far less than what is typically required for the computer programs used by our politicians to draw up their gerrymandered maps.

D.  Further Remarks

A few more points:

a)  The Redistricting Process, Once Decided, Should be Locked In for a Long Period:  As was discussed above, each state will need to make a series of decisions to define fully the specific process it chooses to follow.  As illustrated in the case discussed above, states will need to decide on matters such as the maximum margin of population between the largest and smallest districts (no more than 10%, by Supreme Court decision, but it could be less).  And rules will need to be set, also as in the case discussed above, on what measure of compactness to use, on the criterion by which a district is chosen first to have a sub-district shifted in order to even out the population differences, and so on.

Such decisions will have an impact on the final districts arrived at.  And some of those districts will favor Republicans and some will favor Democrats, just by chance.  There would then be a problem if the redistricting were controlled by one party in the state, and that party (through consultants who specialize in this) tried out dozens if not hundreds of possible choices of the parameters to see which would turn out to be most advantageous to it.  While the impact would be far less than what we have now with deliberate gerrymandering, there could still be some effect.

To stem this, one should require that once choices are made on the process to follow and on the rules and other parameters needed to implement it, the process could not then be changed for the immediately upcoming decennial redistricting; any changes would apply only to those following.  While this would not be possible for the very first application of the system, there will likely be a good deal of public attention paid to these issues initially, so an attempt to bias the system at that point would be difficult.

As noted, this is not likely to be a major problem, and any such system will not introduce the major biases we have seen in the deliberately gerrymandered maps of numerous states following the 2010 census.  But by locking in any decisions made for a long period, where any random bias in favor of one party in a map might well be reversed following the next census, there will be less of a possibility to game the system by changing the rules, just before a redistricting is due, to favor one party.

b)  Independent Commissions Do Not Suffice – They Still Need to Decide How to Draw the District Maps:  A reform that has been increasingly advocated in recent years is to take the redistricting process out of the hands of the politicians, and instead to appoint independent commissions to draw up the maps.  There are currently seven states with non-partisan or bipartisan, nominally independent, commissions that draw the lines for both congressional and state legislative districts, and a further six that do this for state legislative districts only.  Furthermore, several additional states will use such commissions starting with the redistricting that follows the 2020 census.  Finally, there is Iowa.  While technically not using an independent commission, Iowa has its district lines drawn up by non-partisan legislative staff, with the state legislature then approving the resulting map or not on a straight up or down vote.  If not approved, the process starts over, and if a map is not approved after three votes the matter goes to the Iowa Supreme Court.

While certainly a step in the right direction, a problem with such independent commissions is that the process by which members are appointed can be highly politicized.  And even if not overtly politicized, the members appointed will have personal views on who they favor, and it is difficult even with the best of intentions to ensure such views do not enter.

But more fundamentally, even a well-intentioned independent commission will need to make choices on what is, and what is not, a “good” district map.  While most states list certain objectives for the redistricting process in either their state constitutions or in legislation, these are typically vague, such as saying the maps should try to preserve “communities of interest”, but with no clarity on what this in practice means.  Thirty-eight states also call for “compactness”, but few specify what that really means.  Indeed, only two states (Colorado and Iowa) define a specific measure of compactness.  Both states say that compactness should be measured by the sum of the perimeter lines being minimized (the same measure I used in the process discussed above).  However, in the case of Iowa this is taken along with a second measure of compactness (the absolute value of the difference between the length and the width of a district), and it is not clear how these two criteria are to be judged against each other when they differ.  Furthermore, in all states, including Colorado and Iowa, the compactness objective is just one of many objectives, and how to judge tradeoffs between the diverse objectives is not specified.

Even a well-intentioned independent commission will need to have clear criteria to judge what is a good map and what is not.  But once these criteria are fully specified, there is then no need for further opinion to enter, and hence no need for an independent commission.

c)  Appropriate and Inappropriate Principles to Follow: As discussed above, the basic principles that should be followed are:  1) One person – One vote, 2) Contiguity, and 3) Compactness.  Plus, to the extent possible consistent with this, the lines of existing political jurisdictions of a state (such as counties and municipalities) should be respected.

But while most states do call for this (with one person – one vote required by Supreme Court decision, but decided only in 1964), they also call for their district maps to abide by a number of other objectives.  Examples include the preservation of “communities of interest”, as discussed above, where 21 states call for this for their state legislative districts and 16 for their congressional districts (where one should note that congressional districting is not relevant in 7 states as they have only one member of Congress).  Further examples of what are “required” or “allowed” to be considered include preservation of political subdivision lines (45 states); preservation of “district cores” (8 states); and protection of incumbents (8 states).  Interestingly, 10 states explicitly prohibit consideration of the protection of incumbents.  And various states include other factors to consider or not consider as well.

But many, indeed most, of these considerations are left vague.  What does it mean that “communities of interest” are to be preserved where possible?  Who defines what the relevant communities are?  What is the district “core” that is to be preserved?  And as discussed above, there is a similar issue with the stated objective of “compactness”, as while 38 states call for it, only Colorado and Iowa are clear on how it is defined (but then vague on what trade-offs are to be accepted against other objectives).

The result of such multiple objectives, mostly vaguely defined and with no guidance on trade-offs, is that it is easy to come up with the heavily gerrymandered maps we have seen and the resulting strong bias in favor of one political party over the other.  Any district can be rationalized in terms of at least one of the vague objectives (such as preserving a “community of interest”).  These are loopholes which allow the politicians to draw maps favorable to themselves, and should be eliminated.

d)  Racial Preferences: The US has a long history of using gerrymandering (as well as other measures) to effectively disenfranchise minority groups, in particular African-Americans.  This has been especially the case in the American South, under the Jim Crow laws that were in effect through to the 1960s.  The Voting Rights Act of 1965 aimed to change this.  It required states (in particular under amendments to Section 2 passed in 1982 when the Act was reauthorized) to ensure minority groups would be able to have an effective voice in their choice of political representatives, including, under certain circumstances, through the creation of congressional and other legislative districts where the previously disenfranchised minority group would be in the majority (“majority-minority districts”).

However, it has not worked out that way.  Indeed, the creation of majority-minority districts, with African-Americans packed into as small a number of districts as possible and with the rest then scattered across a large number of remaining districts, is precisely what one would do under classic gerrymandering (packing and cracking) designed to limit, not enable, the political influence of such groups.  With the passage of these amendments to the Voting Rights Act in 1982, and then a Supreme Court decision in 1986 which upheld this (Thornburg v. Gingles), Republicans realized in the redistricting following the 1990 census that they could then, in those states where they controlled the process, use this as a means to gerrymander districts to their political advantage.  Newt Gingrich, in particular, encouraged this strategy, and the resulting Republican gains in the South in 1992 and 1994 were an important factor in leading to the Republican take-over of the Congress following the 1994 elections (for the first time in 40 years), with Gingrich then becoming the House Speaker.

Note also that while the Supreme Court, in a 5-4 decision in 2013, essentially gutted a key section of the Voting Rights Act, the section they declared to be unconstitutional was Section 5.  This was the section that required pre-approval by federal authorities of changes in voting statutes in those jurisdictions of the country (mostly the states of the South) with a history of discrimination as defined in the statute.  Left in place was Section 2 of the Voting Rights Act, the section under which the gerrymandering of districts on racial lines has been justified.  It is perhaps not surprising that Republicans have welcomed keeping this Section 2 while protesting Section 5.

One should also recognize that this racial gerrymandering of districts in the South has not led to most African-Americans in the region being represented in Congress by African-Americans.  One can calculate from the raw data (reported in Ballotpedia, based on US Census data) that, as of 2015, 12 of the 71 congressional districts in the core South (Louisiana, Mississippi, Alabama, Georgia, South Carolina, North Carolina, Virginia, and Tennessee) had a majority of African-American residents.  These comprised a single district in each of these states, other than two in North Carolina and four in Georgia.  But the majority of African-Americans in those states did not live in those twelve districts.  Of the 13.2 million African-Americans in those eight states, just 5.0 million lived in the twelve districts, while 8.2 million were scattered across the remaining districts.  By packing as many African-Americans as possible into a small number of districts, Republican legislators were able to create a large number of safe districts for their own party, and the African-Americans in those safe districts effectively had little say in who was then elected.
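The arithmetic behind these shares is simple (figures in millions, as cited above):

```python
# Figures in millions, as cited above for the eight core-South
# states as of 2015.
total_black_pop = 13.2     # African-Americans in the eight states
in_majority_black = 5.0    # living in the 12 majority-Black districts

outside = total_black_pop - in_majority_black
share_inside = in_majority_black / total_black_pop

print(round(outside, 1))       # → 8.2 million living in other districts
print(round(share_inside, 2))  # → 0.38, i.e. only about 38% inside
```

That is, well under half of the region's African-American population lived in the majority-minority districts that were nominally created on their behalf.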

The Voting Rights Act was an important measure forward, drafted in reaction to the Jim Crow laws that had effectively undermined the right to vote of African-Americans.  And defined relative to the Jim Crow system, it was progress.  However, relative to a system that draws up district lines in a fair and unbiased manner, it would be a step backwards.  A system where minorities are packed into a small number of districts, with the rest then scattered across most of the districts, is just standard gerrymandering designed to minimize, not to ensure, the political rights of the minority groups.

E.  Conclusion

Politicians drawing district lines to favor one party and to ensure their own re-election fundamentally undermines democracy.  Supreme Court justices have themselves called it “distasteful”.  However, to address gerrymandering the court has sought some measure which could be used to ascertain whether the resulting voting outcomes were biased to a degree that could be considered unconstitutional.

But this is not the right question.  One does not judge other aspects of whether the voting process is fair or not by whether the resulting outcomes were by some measure “excessively” affected or not.  It is not clear why such an approach, focused on vote outcomes, should apply to gerrymandering.  Rather, the focus should be on whether the process followed was fair and unbiased or not.

And one can certainly define a fair and unbiased process to draw district lines.  The key is that the process, once established, should be automatic and follow the agreed set of basic principles that define what the districts should be – that they should be of similar population, compact, contiguous, and where possible and consistent with these principles, follow the lines of existing political jurisdictions.

One such process was outlined above.  But there are other possibilities.  The key is that the courts should require, in the name of ensuring a fair vote, that states must decide on some such process and implement it.  And the citizenry should demand the same.

Market Competition as a Path to Making Medicare Available for All

A.  Introduction

Since taking office just two years ago, the Trump administration has done all it legally could to undermine Obamacare.  The share of the US population without health insurance had been brought down to historic lows under Obama, but it has since moved back up, with roughly half of the gains now lost.  The chart above (from Gallup) traces its path.

This vulnerability of health cover gains to an antagonistic administration has led many Democrats to look for a more fundamental reform that would be better protected.  Many are now calling for an expansion of the popular and successful Medicare program to the full population – it is currently restricted just to those aged 65 and above.  Some form of Medicare-for-All has now been endorsed by most of the candidates that have so far announced they are seeking the Democratic nomination to run for president in 2020, although the specifics differ.

But while Medicare-for-All is popular as an ultimate goal, the path to get there as well as specifics on what the final structure might look like are far from clear (and differ across candidates, even when different alternatives are each labeled “Medicare-for-All”).  There are justifiable concerns on whether there will be disruptions along the way.  And the candidates favoring Medicare-for-All have yet to set out all the details on how that process would work.

But there is no need for the process to be disruptive.  The purpose of this blog post is to set out a possible path where personal choice in a system of market competition can lead to a health insurance system where Medicare is at least available for all who desire it, and where the private insurance that remains will need to be at least as efficient and as attractive to consumers as Medicare.

The specifics will be laid out below, but briefly, the proposal is built around two main observations.  One is that Medicare is a far more efficient, and hence lower cost, system than private health insurance is in the US.  As was discussed in an earlier post on this blog, administrative expenses account for only 2.4% of the cost of traditional Medicare.  All the rest (97.6%) goes to health care providers.  Private health insurers, in contrast, have non-medical expenses of 12% of their total costs, or five times as much.  Medicare is less costly to administer as it is a simpler system and enjoys huge economies of scale.  Private health insurers, in contrast, have set up complex systems of multiple plans and networks of health care providers, pay very generous salaries to CEOs and other senior staff who are skilled at operating in the resulting highly fragmented system, and pay out high profits as well (that in normal years account for roughly one-quarter of that 12% margin).
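A quick back-of-the-envelope calculation shows what these overhead shares imply for premiums.  If overhead is a given share of total costs, then to deliver the same $100 of payments to health care providers an insurer must collect 100 / (1 − overhead share).  Using the figures cited above:

```python
# Overhead (non-medical expense) shares as cited above: 2.4% of
# total costs for traditional Medicare, 12% for private insurers.
medicare_overhead = 0.024
private_overhead = 0.12

# Premium needed to deliver $100 of payments to providers:
medicare_premium = 100 / (1 - medicare_overhead)
private_premium = 100 / (1 - private_overhead)

print(round(medicare_premium, 2))  # → 102.46
print(round(private_premium, 2))   # → 113.64
```

On these figures, delivering the same $100 of care costs roughly $11 more per enrollee when routed through a private insurer, which is the efficiency margin the proposal would let Medicare compete on.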

With Medicare so much more efficient, why has it not pushed out the more costly private insurance providers?  The answer is simple:  Congress has legislated that Medicare is not allowed to compete with them.  And that is the second point:  Remove these legislated constraints, and allow Medicare-managed plans to compete with the private insurance companies (at a price set so that it breaks even).  Americans will then be able to choose, and in this way transition to a system where enrollment in Medicare-managed insurance services is available to all.  And over time, such competition can be expected to lead most to enroll in the Medicare-managed options.  They will be cheaper for a given quality, due to Medicare’s greater efficiency.

There will still be a role for private insurance.  For those competing with Medicare straight on, the private insurers that remain will have to be able to provide as good a product at as good a cost.  But also, private insurers will remain to offer insurance services that supplement what a Medicare insurance plan would provide.  Such optional private insurance would cover services (such as dental services) or costs (Medicare covers just 80% after the deductible) that the basic Medicare plan does not cover.  Medicare will then be the primary insurer, and the private insurance the secondary.  And, importantly, note that in this system the individual will still be receiving all the services that they receive under their current health plans.  This addresses the concern of some that a Medicare-like plan would not be as complete or as comprehensive as what they might have now.  With the optional supplemental, their insurance could cover exactly what they have now, or even more.  Medicare would be providing a core level of coverage, and then, for those who so choose, supplemental private plans can bring the coverage to all that they have now.  But the cost will be lower, as they will gain from the low cost of Medicare for those core services.

More specifically, how would this work?

B.  Allow Medicare to Compete in the Market for Individual Health Insurance Plans

A central part of the Obamacare reforms was the creation of a marketplace where individuals, who do not otherwise have access to a health insurance plan (such as through an employer), could choose to purchase an individual health insurance plan.  As originally proposed, and indeed as initially passed by the House of Representatives, a publicly managed health insurance plan would have been made available (at a premium rate that would cover its full costs) in addition to whatever plans were offered by private insurers.  This would have addressed the problem in the Obamacare markets of often excessive complexity (with constantly changing private plans entering or leaving the different markets), as well as limited and sometimes even no competition in certain regions.  A public option would have always been available everywhere.  But to secure the 60 votes needed to pass in the Senate, the public option had to be dropped (at the insistence of Senator Joe Lieberman of Connecticut).

It could, and should, be introduced now.  Such a public option could be managed by Medicare, and could then piggy-back on the management systems and networks of hospitals, doctors, and other health care providers who already work with Medicare.  However, the insurance plan itself would be broader than what Medicare covers for the elderly, and would meet the standards for a comprehensive health care plan as defined under Obamacare.  Medicare for the elderly is, by design, only partial (for example, it covers only 80% of the cost, after a modest deductible), plus it does not cover services such as for pregnancies.  A public option plan administered by Medicare in the Obamacare marketplace would rather provide services as would be covered under the core “silver plan” option in those markets (the option that is the basis for the determination of the subsidies for low-income households).  And one might consider offering as options plans at the “bronze” and “gold” levels as well.

Such a Medicare-managed public option would provide competition in the Obamacare exchanges.  An important difficulty, especially in the Republican states that have not been supportive of offering such health insurance, is that in certain states (or counties within those states) there have been few health insurers competing with each other, and indeed often only one.  The exchanges are organized by state, and even when insurers decide to offer insurance cover within some state, they may decide to offer it only to residents of certain counties within that state.  The private insurers operate with an expensive business model, built typically around organizing networks of doctors with whom they negotiate individual rates for health care services provided.  It is costly to set this up, and not worthwhile unless they have a substantial number of individuals enrolled in their particular plan.

But one should also recognize that there is a strong incentive in the current Obamacare markets for an individual insurer to provide cover in a particular area if no other insurer is there to compete with them.  That is because the federal subsidy to a low-income individual subscribing to an insurance plan depends on the difference between what insurers charge for a silver-level plan (specifically the second lowest cost for such a plan, if there are two or more insurers in the market) and some given percentage of that individual’s household income (with that share phased out for higher incomes).  What that means is that with no other insurer providing competition in some locale, the one that is offering insurance can charge very high rates for their plans and then receive high federal subsidies.  The ones who then lose in this (aside from the federal taxpayer) are households of middle or higher income who would want to purchase private health insurance, but whose income is above the cutoff for eligibility for the federal subsidies.

The result is that the states with the most expensive health insurance plan costs are those that have sought to undermine the Obamacare marketplace (leading to less competition), while the lowest costs are in those states that have encouraged the Obamacare exchanges and thus have multiple insurers competing with each other.  For example, the two states with the most expensive premium rates in 2019 (average for the benchmark silver plans) were Wyoming (average monthly premium for a 40-year-old of $865, before subsidies) and Nebraska (premium of $838).  Each had only one health insurer offering plans on the exchanges.  At the other end, the five states with the least expensive average premia, all with multiple providers, were Minnesota ($326), Massachusetts ($332), Rhode Island ($336), Indiana ($339), and New Jersey ($352).  These are not generally considered to be low-cost states, yet the average premia in Wyoming and Nebraska were roughly two and a half times as high.
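
As a check, the group averages and the ratio between them can be computed directly from the 2019 figures cited above:

```python
# 2019 average monthly benchmark silver-plan premia for a 40-year-old,
# before subsidies, as cited above.
single_insurer = {"Wyoming": 865, "Nebraska": 838}
multiple_insurers = {"Minnesota": 326, "Massachusetts": 332,
                     "Rhode Island": 336, "Indiana": 339, "New Jersey": 352}

avg_high = sum(single_insurer.values()) / len(single_insurer)
avg_low = sum(multiple_insurers.values()) / len(multiple_insurers)

print(f"One-insurer states, average premium:   ${avg_high:.2f}")
print(f"Multi-insurer states, average premium: ${avg_low:.2f}")
print(f"Ratio: {avg_high / avg_low:.2f}x")   # roughly two and a half
```

The single-insurer states average $851.50 per month against $337.00 for the multi-insurer states, a ratio of about 2.5.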

The competition of a Medicare-managed public provider would bring down those extremely high insurance costs in the states with limited or no competition.  And at such lower rates, the total being spent by the federal government to support access by individuals to health insurance will come down.  But to achieve this, Congress will have to allow such competition from a public provider, and management through Medicare would be the most efficient way to do this.  One would still have any private providers who wish to compete.  But consumers would then have a choice.

C.  Allow Medicare to Compete in the Market for Employer-Sponsored Health Insurance Cover

While the market for individual health insurance cover is important to extending the availability of affordable health care to those otherwise without insurance cover, employer-sponsored health insurance plans account for a much higher share of the population.  Excluding those with government-sponsored plans via Medicare, Medicaid, and other such public programs, employer-sponsored plans accounted for 76% of the remaining population, individual plans for 11%, and the uninsured for 14%.

These employer-sponsored plans are dominant in the US for historical reasons.  They receive special tax breaks, which began during World War II.  Due to the tax breaks, it is cheaper for a firm to arrange health insurance for its employees through the firm (even though it is in the end paid for by the employee, as part of the total compensation package) than to pay the employee a higher overall wage with the employee then purchasing health insurance on his or her own.  The employer can deduct the cost as a business expense.  But this has led to the highly fragmented system of health insurance cover in the US, with each employer negotiating with private insurers over what will be provided through their firm, with resulting high costs for such insurance.

As many have noted, no one would design such a health care funding system from scratch.  But it is what the US has now, and there is justifiable concern over whether some individuals might encounter significant disruptions when switching over to a more rational system, whether Medicare-for-All or anything else.  It is a concern which needs to be respected, as we need health care treatment when we need it, and one does not want to be locked out of access, even if temporarily, during some transition.  How can this risk be avoided?

One could manage this not by compelling a switch in insurance plans, but by offering insurance through a Medicare-managed plan as an option.  That is, a Medicare-managed insurance plan, similar in what is covered to current Medicare, would be allowed to compete with current insurance providers, and employers would have the option to switch to that Medicare plan, either immediately or at some later point, as they wish, to manage health insurance for their employees.

Furthermore, this Medicare-managed insurance could serve as a core insurance plan, to be supplemented by a private insurance plan which could cover costs and health care services that Medicare does not cover (such as dental and vision).  These could be similar to Medicare Supplement plans (often called Medigap plans), or indeed any private insurance plan that provides additional coverage to what Medicare provides.  Medicare is then the primary insurer, while the private supplemental plan is secondary and covers whatever costs (up to whatever that supplemental plan covers) are not paid for under the core Medicare plan.

In this way, an individual’s effective coverage could be exactly the same as what they receive now under their current employer-sponsored plan.  Employers would still sponsor these supplemental plans, as an addition to the core Medicare-managed plan that they would also choose (and pay for, like any other insurance plan).  But the cost of the Medicare-managed plus private supplemental plans would typically be less than the cost of the purely private plans, due to the far greater efficiency of Medicare.  And with this supplemental coverage, one would address the concern of many that what they now receive through their employer-sponsored plan is a level of benefits that are greater than what Medicare itself covers.  They don’t want to lose that.  But with such supplemental plans, one could bring what is covered up to exactly what they are covering now.

This is not uncommon.  Personally, I am enrolled in Medicare, while I have (through my former employer) additional cover from a secondary private insurer.  And I pay monthly premia to Medicare and, through my former employer, to the private insurer for this coverage (with those premia supplemented by my former employer, as part of my retirement package).  With the supplemental coverage, I have exactly the same health care services and share of costs covered as what I had before I became eligible for Medicare.  But the cost to me (and my former employer) is less.  One should recognize that for retirees this is in part due to Medicare for the elderly receiving general fiscal subsidies through the government budget.  But the far greater efficiency of Medicare that allows it to keep its administrative costs low (at just 2.4% of what it spends, with the rest going to health care service providers, as compared to a 12% cost share for private insurance) would lead to lower costs for Medicare than for private providers even without such fiscal support.

Such supplemental coverage is also common internationally.  Canada and France, for example, both have widely admired single-payer health insurance systems (what Medicare-for-All would be), and in both one can purchase supplemental coverage from private insurers for costs and services that are not covered under the core, government managed, single-payer plans.

Under this proposed scheme for the US, the decision by a company whether to purchase cover from Medicare need not be compulsory.  The company could, if it wished, choose to remain with its current private insurer.  But what would be necessary would be for Congress to remove the restriction that prohibits Medicare from competing with private insurance providers.  Medicare would then be allowed to offer such plans at a price which covers its costs.  Companies could then, if they so chose, purchase such core cover from Medicare and, additionally, supplement that insurance with a private secondary plan.  One would expect that given the high cost of medical services everywhere (but especially in the US), they will take a close look at the comparative costs and value provided, and choose the plan (or set of plans) which is most advantageous to them.

Over time, one would expect a shift towards the Medicare-managed plans, given Medicare's greater efficiency.  And private plans, in order to remain competitive with the core (primary) insurance available from Medicare, would be forced to improve their own efficiency, or face a smaller and smaller market share.  If they can compete, that is fine.  But given their track record up to now, one would expect that they will leave that market largely to Medicare, and focus instead on providing supplemental coverage for the firms to select from.

D.  Avoiding Cherry-Picking by the Private Insurers

An issue to consider, but which can be addressed, is whether in such a system the private insurers will be able to “cherry-pick” the more lucrative, lower risk, population, leaving those with higher health care costs to the Medicare-managed options.  The result would be higher expenses for the public options, which would require them either to raise their rates (if they price to break even) or require a fiscal subsidy from the general government budget.  And if the public options were forced to raise their rates, there would no longer be a level playing field in the market, effective competition would be undermined, and lower-efficiency private insurers could then remain in the market, raising our overall health system costs.

This is an issue that needs to be addressed in any insurance system, and was addressed for the Obamacare exchanges as originally structured.  While the Trump administration has sought to undermine these, they do provide a guide to what is needed.

Specifically, all insurers on the Obamacare exchanges are required to take on anyone in the geographic region who chooses to enroll in their particular plan, even if they have pre-existing conditions.  This is the key requirement which keeps private insurers from cherry-picking lower-cost enrollees, and excluding those who will likely have higher costs.  However, this then needs to be complemented with: 1) the individual mandate; 2) minimum standards on what constitutes an adequate health insurance plan; and 3) what is in essence a reinsurance system across insurers to compensate those who ended up with high-cost enrollees, by payments from those insurers with what turned out to be a lower cost pool (the “risk corridor” program).  These were all in the original Obamacare system, but: 1) the individual mandate was dropped in the December 2017 Republican tax cut (after the Trump administration said they would no longer enforce it anyway);  2) the Trump administration has weakened the minimum standards; and 3) Senator Marco Rubio was able in late 2015 to insert a provision in a must-pass budget bill which blocked any federal spending to even out payments in the risk corridor program.

Without these measures, it will be impossible to sustain the requirement that insurers provide access to everyone, at a price which reflects the health care risks of the population as a whole. With no individual mandate, those who are currently healthy could choose to free-ride on the system, and enroll in one of the health care plans only when they might literally be on the way to the hospital, or, in a less extreme example, only aim to enroll at the point when they know they will soon have high medical expenses (such as when they decide to have a baby, or to have some non-urgent but expensive medical procedure done).  The need for good minimum standards for health care plans is related to this.  Those who are relatively healthy might decide to enroll in an insurance plan that covers little, but, when diagnosed with, say, cancer or some other serious medical condition, then and only then enroll in a medical insurance plan that provides good cover for such treatments.  The good medical insurance plans would either soon go bankrupt, or be forced also to reduce what they cover in a race to the bottom.

Finally, risk sharing across insurers is in fact common (it is called reinsurance), and was especially important in the new Obamacare markets as the mix of those who would enroll in the plans, especially in the early years, could not be known.  Thus, as part of Obamacare, a system of “risk corridors” was introduced where insurers who ended up with an expensive mix of enrollees (those with severe medical conditions to treat) would be compensated by those with an unexpectedly low-cost mix of enrollees, with the federal government in the middle to smooth out the payments over time.  The Congressional Budget Office estimated in 2014 that while the payment flows would be substantial ($186 billion over ten years) the inflows would match the outflows, leaving no net budgetary cost.  However, Senator Rubio’s amendment effectively blocked this, as he (incorrectly) characterized the risk corridor program as a “bailout” fund for the insurers.  The effect of Rubio’s amendment was to lead smaller insurers and newly established health care co-ops to exit the market (as they did not have the financial resources to wait for inflows and outflows to even out).  This reduced competition to a limited number of large, deep-pocketed insurers who could survive such a wait, and who could then, facing less competition, jack up their premium rates.  The result, as we will discuss immediately below, was to increase, not decrease, federal budgetary costs, while pricing out of the markets those with incomes too high to receive the federal subsidies.

Despite these efforts to kill Obamacare and block the extension of health insurance coverage to those Americans who have not had it, another provision in the Obamacare structure has allowed it to survive, at least so far and albeit in a more restrictive (but higher cost) form.  And that is due to the way federal subsidies are provided to lower-income households in order to make it possible for them to purchase health insurance at a price they can afford.  As discussed above, these federal subsidies cover the difference between some percentage of a household’s income (with that percentage depending on their income) and the cost of a benchmark silver-level plan in their region.

More specifically, those with incomes up to 400% of the federal poverty line (400% would be $49,960 for an individual in 2019, or $103,000 for a family of four) are eligible to receive a federal subsidy to purchase a qualifying health insurance plan.  The subsidy is equal to the difference between the cost of the benchmark silver-level plan and a percentage of their income, on a sliding scale that starts at 2.08% of income for those earning 133% of the poverty line, and goes up to 9.86% for those earning 400%.  The mathematical result of this is that if the cost of the benchmark health insurance plan goes up by $1, they will receive an extra $1 of subsidy (as their income, and hence their contribution, is still the same).
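
The formula described above, and its dollar-for-dollar result, can be sketched as follows; the income, contribution percentage, and premium figures used here are hypothetical, chosen only to illustrate the mechanics:

```python
# Sketch of the Obamacare subsidy formula described above: the subsidy
# equals the benchmark silver-plan premium minus a fixed percentage of
# household income (that percentage runs on a sliding scale from 2.08%
# at 133% of the poverty line to 9.86% at 400%).

def subsidy(benchmark_premium: float, income: float,
            contribution_pct: float) -> float:
    """Annual federal subsidy; zero if the income share exceeds the premium."""
    return max(0.0, benchmark_premium - contribution_pct * income)

income = 30_000    # hypothetical annual household income
pct = 0.0655       # hypothetical required contribution share of income
premium = 6_000    # hypothetical annual benchmark silver-plan premium

base = subsidy(premium, income, pct)
bumped = subsidy(premium + 1, income, pct)  # benchmark premium rises by $1

# The household's own contribution (pct * income) is unchanged, so the
# subsidy rises dollar for dollar with the benchmark premium.
print(f"Subsidy: ${base:,.2f}; after a $1 premium rise: ${bumped:,.2f}")
```

This is the mechanism behind the point in the text: when premiums rise, an eligible household’s payment stays fixed at its income share, and the federal subsidy absorbs the entire increase.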

The result is that measures such as the blocking of the risk corridor program by Senator Rubio’s amendment, or the Trump administration’s decision not to enforce (and then to remove altogether) the individual mandate, or the weakening of the standards for what has to be covered in a qualifying health insurance plan, have all forced the insurance companies to raise premium rates sharply.  While those with incomes up to 400% of the poverty line were not affected by this (they pay the same share of their income), those with incomes above the 400% limit have been effectively priced out of these markets.  Among those above the limit, only those with some expensive medical condition might remain, and this then further biases the risk pool toward those with high medical expenses.  Finally and importantly, these measures to undermine the markets have led to higher, not lower, federal budgetary costs, as the federal subsidies go up dollar for dollar with the higher premium rates.

So we know how to structure the markets to ensure there will be no cherry-picking of low risk, low cost, enrollees, leaving the high-cost patients for the Medicare-managed option.  But it needs to be done.  The requirement that all the insurance plans accept any enrollee will stop this.  This then needs to be complemented with the individual mandate, minimum standards for the health insurance plans, and some form of risk corridors (reinsurance) program.  The issue is not that this is impossible to do, but rather that the Trump administration (and Republicans in Congress) have sought to undermine it.

This discussion has been couched in terms of the market for individual insurance plans, but the same principles apply in the market for employer-sponsored health insurance.  While not as much discussed, the Affordable Care Act also included an employer mandate (phased in over time), with penalties for firms with 50 employees or more who do not offer a health insurance plan meeting minimum standards to their employees.  There were also tax credits provided to smaller firms who offer such insurance plans.

But the cherry-picking concern is less of an issue for such employer-based coverage than it is for coverage of individuals.  This is because there will be a reasonable degree of risk diversification across individuals (the mix of those with more expensive medical needs and those with less) even with just 100 employees or so.  And smaller firms can often subscribe together with others in the industry to a plan that covers them as a group, thus providing a reasonable degree of diversification.  With the insurance covering everyone in the firm (or group of firms), there will be less of a possibility of trying to cherry-pick among them.

The possibility of cherry-picking is therefore something that needs to be considered when designing some insurance system.  If not addressed, it could lead to a loading of the more costly enrollees onto a public option, thus increasing its costs and requiring either higher premia to subscribe to it or government budget support.  But we know how to address the issue.  The primary tool, which we should want in any case, is to require health insurers to be open to any enrollees, and not block those with pre-existing conditions.  But this then needs to be balanced with the individual mandate, minimum standards for what qualifies as a genuine health insurance plan, and means to reinsure exceptional risks across insurers.  The Obamacare reforms had these, and one cannot say that we do not know how to address the issue.

E.  Conclusion

These proposals are not radical.  And while there has been much discussion of allowing a public option to provide competition for insurance plans in the Obamacare markets, I have not seen much discussion of allowing a Medicare-managed option in the market for employer-sponsored health insurance plans.  Yet the latter market is far larger than the market for private, individual, plans, and a key part of the proposal is to allow such competition here as well.

Allowing such options would enable a smooth transition to Medicare-managed health insurance that would be available to all Americans.  And over time one would expect many if not most to choose such Medicare-managed options.  Medicare has demonstrated that it is managed with far greater efficiency than private health insurers, and thus it can offer better plans at lower cost than private insurers currently do.  If the private insurers are then able to improve their competitiveness by reducing their costs to what Medicare has been able to achieve, then they may remain.  But I expect that most of them will choose to compete in the markets for supplemental coverage, offering plans that complement the core Medicare-managed plan and which would offer a range of options from which employers can choose for their employer-sponsored health insurance cover.

Conservatives may question, and indeed likely will question, whether government-managed anything can be as efficient, much less more efficient, than privately provided services.  While the facts are clear (Medicare does exist, we have the data on what it costs, and we have the data on what private health insurance costs), some will still not accept this.  However, with such a belief, conservatives should not then be opposed to allowing Medicare-managed health insurance options to compete with the private insurers.  If what they believe is true, the publicly-managed options would be too expensive for an inferior product, and few would enroll in them.

But I suspect that the private insurers realize they would not be able to compete with the Medicare-managed insurance options unless they were able to bring their costs down to a comparable level.  And they do not want to do this as they (and their senior staff) benefit enormously from the current fragmented, high cost, system.  That is, there are important vested interests who will be opposed to opening up the system to competition from Medicare-managed options.  It should be no surprise that they, and the politicians they contribute generously to, will be opposed.