The Human Condition:

Thoughts on Public Health Care (I) – November 18, 2012

My conservative friends are not going to like this posting. Come to think of it, neither will my progressive friends. But here are my thoughts, from what I conceive to be the political center, on the issue of public health care.

With the Supreme Court decision in June and the results of the national election this month, the Patient Protection and Affordable Care Act of 2010 will stand as the law of the land. Whatever the details of this legislation, some of which are still being worked out, its import and, I believe, its intention are to move the country toward a single-payer system of publicly provided medicine. (More next week on how a bill requiring universal insurance coverage achieves this.) This will align the United States with the health care systems used in most of the rest of the world—at least in those countries whose governments address the issue at all. But is this really such a huge departure for our country?

1. We Publicly Pay for Education

Health care and education are two of the personal-, family-, and society-level necessities that cannot be supplied on an ad hoc basis—not in the way that you can build houses one at a time, or provide dry clothes and hot meals adequately from any number of possible supply channels. Unless you’re a parent providing home schooling, or a tribe wandering in the wilderness and teaching its young to hunt and fish, you need a complex infrastructure to educate a generation of people: locally situated classrooms, a plethora of introductory books on different subjects, teachers trained to present those subjects, and a supporting network that extends beyond the purely local level to decide on and shape the curriculum, approve the books, train the teachers, and accredit the institutions.

From our earliest days, Americans have agreed that providing basic schooling is a community function, because raising a generation of literate adults with a common base of knowledge is essential to democratic government. Grammar schools and high schools have always been public. Yes, certain communities also host parochial schools and elite “preparatory” schools, which are privately funded, but these have functioned in addition to the local public school, for families who felt the need for something different or better. And those families still pay the taxes that support the public school. Higher education—at the college level and above—started out as a collection of privately or religiously backed institutions. In the middle of the 19th century, however, the states began establishing public institutions of higher learning: “normal schools” for educating teachers and, under the Morrill Act of 1862, land-grant colleges.1 Today, public colleges and universities are a big part of the education mix.

Health care is not that different from education, in that it requires a complex infrastructure of hospitals, clinics, testing laboratories and services, primary care physicians, treatment specialists, researchers, medical technicians, nurses and orderlies, and administrators, as well as pharmaceuticals and medical supplies, and a supporting network of teaching hospitals, training programs, accreditation, and drug and equipment manufacturing. Even if personal health is not properly a community concern in the way we agree that educating the young is, providing for people’s various health requirements and caring for them when they are sick or injured is a hugely complex business. And we can all agree, I think, that any society functions better when its people are healthy and strong.

At current reckoning, health care in the U.S. represents roughly 17% of the national economy. That’s why many people hesitate to turn our patchwork of private providers (doctors’ practice groups, for-profit and religiously funded hospital organizations, insurance companies, and drug and equipment companies) and public providers (Medicare and Medicaid funding, community hospitals, county clinics, and various national institutes managing health issues) over to a single government system operated at the federal level. When all the eggs are in one basket—who’s watching that basket?

2. You Get What You Pay For

Right now, the average American experiences health care in a world of choice—well, some choice. Most people get health insurance through their employer, and most employers offer a variety of insurance plans with varying options for coverage, copays, premiums, provider lists, and other pieces of the puzzle. If you don’t have coverage, then you take whatever you can get from an emergency room visit or the charity of a public hospital. No one, except the truly rich, gets everything he or she wants.

When you buy something for yourself, you get to make choices. You weigh what you’ll pay against what you ideally want, what you actually need, and what makes the most sense to you. When you get insurance through an employer, the company is paying part of the premium and selecting the menu of plans, so your options are more limited. You can buy the gold-plated health plan with all the bells and whistles, but you’ll pay more out of your own pocket for it.

Advocates of a single-payer system—universal health care provided at government expense—tend to forget about the choice side of the equation. When a third party pays the piper, you dance to his tune. When you eat at public expense, you don’t get to choose between steak and chicken. Under a single-payer system, you may have a strong, bonded relationship with a certain doctor, but he or she might be assigned elsewhere. You may be in pain and need a new hip now, but resources are limited and so you might have to wait. You may be a vigorous, healthy 78-year-old with a lot still to contribute, but if you get cancer after the mandated cutoff age for aggressive treatment, then you will get only palliatives and hospice care.

If health care in America follows the British model, where private insurance and religious-based or for-profit medical providers are permitted to function, then you will be able to exercise choice by paying more outside the system—in the same way that parents can choose to put their children in a parochial or private school. If the system limits competition from private resources, as in the Canadian model, then you will accept the choices made for you by bureaucrats in the state capital or in Washington, DC.

3. Insurance Isn’t the Right Business Model

In the early days, certainly at the beginning of the 19th century, only two kinds of people went to doctors: the rich and royalty. With their theories of the four humors and their remedies of bleeding and purging, the most that doctors were really good for was setting broken bones or stitching up wounds. For the rest of human ailments, their practice involved administering placebos, watching, and waiting. Oh, surgeons could cut out a large tumor or amputate a mangled limb in extremis. And hospitals were filthy places where you went to recuperate or die—usually the latter.

Not until the melding of science and medicine—which began with the germ theory of disease in the late 19th century and progressed to antibiotics with the discovery of penicillin in 1928—did medicine gain a solid and respectable footing. But still, for most people, health was a matter of good luck and a strong constitution. Serious illnesses and injuries were considered natural catastrophes, and death usually followed quickly.

It made sense, in this medical environment, to insure yourself against the catastrophes. You took out hospital insurance, also called “major medical,” against surgery or an acute condition like cancer involving a long hospital stay. But for routine aches and pains, fevers, and the occasional broken bone, a family would pay out of pocket to visit their neighborhood doctor.2

In the late 1930s, while building Grand Coulee Dam, the Henry J. Kaiser organization set up the first comprehensive health care system—the forerunner of today’s Kaiser Permanente—which tended to its workers’ complete medical needs through a network of dedicated facilities and physicians, paid for through insurance-style premiums. Providing complete medical services through insurance at the workplace became an employment perk during World War II, because health benefits were not subject to federally imposed wage caps. Gradually, with the rise of Health Maintenance Organizations in the 1970s, the emphasis moved from catastrophic coverage to complete medical coverage. This was generally a good thing, because regular checkups and preventive care were now encouraged and paid for, but the transition had some bad consequences, too.

Insurance is a means of alleviating the potentially crippling costs of an unexpected catastrophe, such as your house burning down, your car being wrecked, or a family member being badly hurt and needing surgery and months of therapy. Insurance is not meant to pay for routine and expected expenses like replacing the roof, painting the porch, or changing the oil and tires. To be sure, medical checkups, routine diagnostics, booster shots, and birth control all have to do with a person’s health, but they are not unexpected expenses and therefore not properly paid for by insurance. If you insist on paying for these services through insurance premiums, then you are no longer making an actuarial bet—like betting that the house won’t burn or that you’ll stay healthy—but instead you are simply structuring your monthly expenses through an exotically complex and costly method of payment.
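To see why, it helps to put rough numbers on the actuarial bet. Here is a minimal sketch in Python; every figure in it is hypothetical (the 0.1% chance of a $200,000 catastrophe, the $500 of certain routine care, the 15% administrative load, and the fair_premium helper are all invented for illustration). It compares the premium for a rare catastrophe with the cost of routing a certain, routine expense through the same insurance machinery.

    # Illustrative sketch only: every figure below is hypothetical.

    def fair_premium(probability, payout, overhead=0.15):
        """Expected payout plus an assumed 15% administrative load."""
        return probability * payout * (1 + overhead)

    # A true insurable risk: rare and catastrophic.
    catastrophe = fair_premium(probability=0.001, payout=200_000)

    # A certain, routine expense routed through the same machinery.
    routine_care = fair_premium(probability=1.0, payout=500)

    print(f"Premium against a rare catastrophe:    ${catastrophe:,.2f}/yr")
    print(f"Premium covering certain routine care: ${routine_care:,.2f}/yr")
    print("Paying the routine bill directly:      $500.00/yr")

Under these made-up numbers, the premium against the rare catastrophe comes to $230 a year, a small fraction of the potential loss, while “insuring” the certain $500 expense costs $575 a year; the extra $75 is pure administrative overhead, which is exactly the costly structuring of monthly expenses described above.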

And even if you are perfectly healthy, eventually you will die. In the early days, this was a fairly uncomplicated process. Some part of the organism was damaged by disease or injury, began to break down, the patient failed to thrive, and—on a time scale dictated by the nature of the disease or damage—further living became impossible and you said good-bye. Now, through the miracles of modern medicine, we can hold off that final act for an unnaturally long time and can even keep a brain-dead shell breathing and pumping blood in perpetuity. If you had to pay for such “heroic measures” out of pocket, you would eventually call it quits and succumb to the inevitable. But so long as someone else is footing the bill out of premiums you’ve been paying all along—why spare the expense?

It’s commonly understood that about half of the medical costs a person incurs in a lifetime come in the last six months of life. From a personal point of view, the natural feeling is “Why not stick it to the insurance company? We’ve been paying the bloodsuckers all these years!” But from a societal point of view, “Why spend these resources on the dying, who ultimately can’t be helped? There are more important medical needs to be served!”

Insurance is a bad business model for covering routine monthly expenses. It’s a really lousy model for trying to delay or override the inevitable.

So that’s the background to my thoughts on public health care. Next week I’ll explore where I think medicine is going in the future and why a better system of paying for it will become inevitable.

1. I went to a land-grant university, Penn State.

2. This was back in the days when a doctor, lawyer, or other trained professional expected to earn a solidly middle-class living. Doctors still made house calls and charged for their services according to the wealth of the community in which they lived. Training as a physician was not considered a sure route to a six-figure income, high society, and a vacation home in the Hamptons. Hospital stays cost a lot back then—more than a local hotel room, certainly—but they were not like moving into the Waldorf Astoria with concierge and room service.