[Ed: Here's an old post from the 2012 election cycle which, sadly, remains very applicable today.]
THE ELEPHANT IN THE DEBATE ROOM
Eliot
Spitzer, of all people, nailed the reason why Obama is deadlocked in the polls
against an uncharismatic Wall Street tycoon despite a relatively productive
first term: Obama’s accomplishments all amount to steering the economy back onto the path it was on before the
financial crash of 2008. It’s a good thing that President Obama bailed out the
auto industry, pushed for an $800 billion stimulus, and started to draw down
troops from the Middle East. Had he not done so, our economy would currently be
in shambles, and so Obama deserves full credit for averting a short-term
disaster. None of Obama’s achievements, though, offer significant hope of changing the prevailing economic trend:
declining real purchasing power, declining investment power, and declining job
security for America’s middle classes. Today, in 2012, we’re every bit as much
on course for a long-term economic disaster as we were in 2008. Obama doesn’t
even seem to recognize that problem, let alone offer concrete policy solutions
that might help to address it. Consciously or unconsciously, people realize
that Obama’s not addressing their biggest worries, and that’s why he’s not
winning in the polls.
Spitzer’s
post was very short, so let me elaborate a bit on the logic that we apparently
share. Because of steadily improving technology and steadily globalizing
markets, there is far less demand for the typical American worker than ever
before. Technology creates wealth, in the abstract, because it lets us produce
more goods with fewer resources. For the very same reason, though, technology
destroys jobs. If one engineer and one robot
(suppose it takes another engineer to build and maintain the robot) can
build as many widgets as ten men, then that’s two jobs where there used to be
ten jobs. Even if the better technology drives the price of widgets down so far that
demand for widgets doubles, so we have two robots and four engineers, that’s
still only four jobs where there used to be ten. There are inherent limits on consumer
demand based on the planet’s natural resources and on common sense – nobody
wants seventeen skateboards, even if they only cost $1 each; they’d just
clutter up your garage. There are no inherent limits on the ability of
technology to get more work done with fewer people – 70 years ago, a farmer
could feed ten people; 40 years ago, a farmer could feed 400; today, a farmer
can feed 4,000 people. By contrast, even though people are eating more food
today than they were in 1940, they’re not eating 100 times more food, nor are
there 100 times more people. On balance, the rate at which we can make food
vastly exceeds the rate at which we want to eat food. There’s no reason to
think this trend will stop, and so there are fewer and fewer jobs available for
the same or larger numbers of people. By the law of supply and demand, that
drives down people’s real wages. When there are six or seven people competing
for every new job, there’s no reason for employers to offer a high salary – you
can attract competent, diligent workers just by posting an ad on Craigslist.
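To make the widget arithmetic concrete, here is a minimal sketch in Python. Every number is the hypothetical one from the example above (one robot team of two engineers matching ten men, cheaper widgets doubling demand); none of it is real labor data.

```python
# Toy model of the widget example: technology cuts jobs even when
# cheaper goods double demand. All figures are hypothetical.

workers_before = 10       # ten men hand-building widgets
jobs_per_robot_team = 2   # one operating engineer + one build/maintain engineer

# One robot team matches the output of all ten men.
demand_multiplier = 2               # cheaper widgets double demand
teams_needed = demand_multiplier    # each team covers the old crew's output
jobs_after = teams_needed * jobs_per_robot_team

print(f"jobs before automation: {workers_before}")  # 10
print(f"jobs after automation:  {jobs_after}")      # 4
```
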
Similarly,
weakening barriers to trade and the growth of the educated global work force
are expanding the pool of people who can potentially get a job done well. When
99% of the world’s certified engineers lived in America, Western Europe, Japan, and
Russia, those engineers were in high demand – if they didn’t get good terms at
home, they could go overseas to South America, Asia, Oceania, etc. and enjoy
good living conditions and pay there. These regions didn’t educate nearly
enough skilled technicians to meet local demand. That meant US employers had to
offer premium wages to keep smart people at home. Today, the situation is
inverted: other regions of the world are educating more professionals than they
themselves have an economic need for. This means that US employers can get away
with offering discount wages to American professionals – if the professionals
won’t accept low wages, the employers can always outsource the jobs to
professionals in other countries who will see even a low US wage as unusually
generous. Note that this has nothing to do with allegations of ‘cheating’ or
‘currency manipulation’ often tossed about by xenophobic populists. There are
real problems associated with global free trade, but even if we completely
eliminated all of those problems, we should still
expect that engaging in meaningful commerce with nations who have a much
lower GDP per capita and who educate
a surplus of professionals relative to local demand will tend to push down US
wages and relocate jobs from the US to other countries.
Another
way that middle-class and upper-middle-class people used to help themselves
economically is by saving. They would put money into savings accounts and CDs
and bonds, they would accumulate equity in their homes, and they would enroll
themselves or their kids in formal education programs so they could get
promotions and earn higher salaries. None of these paths to accumulating wealth
are working anymore for the median American. The interest rate on savings has
been stuck at 1% for years; long-term bonds might pay 3%, but the default rate
even among bonds currently rated AA is probably high enough to wipe out the 2%
difference.
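As a rough way to test that claim, here is a back-of-the-envelope break-even calculation. The 1% and 3% rates come from the paragraph above; the loss-given-default figure is an assumption picked purely for illustration.

```python
# How often would AA bonds have to default each year to erase their
# 2% premium over a savings account? Illustrative numbers only.

savings_rate = 0.01        # interest on a savings account (from the text)
bond_yield = 0.03          # long-term AA bond yield (from the text)
loss_given_default = 0.60  # assumed fraction of principal lost in a default

# Expected bond return ~= yield - default_probability * loss.
# Setting that equal to the savings rate and solving:
breakeven_default_prob = (bond_yield - savings_rate) / loss_given_default

print(f"break-even annual default probability: {breakeven_default_prob:.1%}")
# ~3.3%/year -- the claim above holds only if AA defaults are at least
# this frequent under whatever stress scenario you have in mind.
```
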
Home
equity can still offer high returns to some investors, but those returns are
more volatile than ever before, because the bulk of the value of a home is now
in the real estate (and building permit) underneath it, rather than in the
construction itself. Decades of NIMBY squabbling (and, to be fair, increased
interest in conservation and urban planning) have made the right to build on a
piece of property far more valuable than the lumber and concrete actually
erected on the property. The exact value of those rights, however, is difficult
to assess. They depend on how desirable the area is as a place to live, which
in turn depends on the quality of schools, roads, police forces, the
availability of local jobs, and even the ethnic composition of the neighborhood
– all of which are subject to change over the 10 or 30-year time frame of a
mortgage. Worse, the fact that there is now a robust market in residential real
estate investment – including buying and selling of partial stakes and second
mortgages – means that some people will buy up real estate not for future use
or even for future sale to a new generation of homeowners, but because they
hope to sell the house to other investors. This makes the real estate market
inherently vulnerable to speculative bubbles. Realtors tend to estimate the
value of a piece of property in large part based on what neighboring properties
have recently sold for, but there’s no guarantee that those sales were an
accurate estimate of the homes’ real value: even one mistake can create a
positive feedback spiral that drastically overvalues the real estate for an
entire neighborhood – and then anyone who buys into that neighborhood is just
not going to see significant returns on their real estate savings, because
they’ve seriously overpaid for their building permit. Because even one house
can consume the bulk of a middle-class family’s savings, the volatility in home
prices is a huge problem for families who are trying to generate wealth by
saving and investing.
Finally,
certain kinds of education used to be a pretty reliable route to higher income.
Thanks to subsidized public universities and restrained pricing behavior on the
part of private universities, the tuition for an entire degree used to be
roughly comparable to a year’s salary. Now tuition is closer to three years’
salary, making the economic payoff of an advanced degree much less of a
sure thing. If you earn an extra $10,000 a year for the next 30 years, is that
worth borrowing $100,000, paying interest on it at 8% for the next 10 years,
and giving up two years of income? Maybe it is and maybe it isn’t. There’s not
necessarily any way to know in advance when you enroll in a program whether it
will pay off for you. Obviously degrees like computer science are likely to
have a higher value than English literature… but even degrees that were once
considered safe and responsible, like law and graphic design, are now turning
out masses of unemployed graduates. 53% of college graduates this year are not working in
jobs that require a college degree. Americans increasingly face the dilemma
that they can’t support the lifestyle they’ve been used to in their present
income bracket – but their expected value from moving up into the next bracket
is roughly zero after accounting for tuition, loans, and the risk of post-graduate
under-employment.
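For readers who want to run that calculation, here is a rough net-present-value sketch. The $100,000 loan at 8% over 10 years, the $10,000 wage premium over 30 years, and the two years of foregone income are the figures from the text; the $40,000 base salary and the 3% discount rate are assumptions.

```python
# Rough NPV of the degree question posed above. Inputs marked
# "(assumed)" are illustrative; the rest come from the text.

loan, loan_rate, loan_years = 100_000, 0.08, 10   # borrowed tuition
extra_income, earning_years = 10_000, 30          # post-degree wage premium
base_salary = 40_000                              # foregone salary (assumed)
r = 0.03                                          # real discount rate (assumed)

def pv(amount, year):
    """Present value of a cash flow received `year` years from now."""
    return amount / (1 + r) ** year

# Standard amortizing payment on the student loan.
payment = loan * loan_rate / (1 - (1 + loan_rate) ** -loan_years)

benefit = sum(pv(extra_income, y) for y in range(3, 3 + earning_years))
cost = sum(pv(payment, y) for y in range(1, loan_years + 1))
cost += pv(base_salary, 1) + pv(base_salary, 2)   # two years out of work

print(f"loan payment: ${payment:,.0f}/year")
print(f"net payoff:   ${benefit - cost:,.0f}")
# Comes out negative (about -$19,000) with these assumptions --
# "maybe it is and maybe it isn't" is exactly right.
```
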
What,
if anything, in the Obama agenda will begin to ease this dilemma? There is a
small but significant difference in terms of Pell grants, stimulus, grants for
job retraining, etc. in the Democratic platform vs. the Republican platform.
None of these programs, though, is new, and not one of them does more than
treat the symptoms. They are old programs from an old economy. They do not help
fix the problem of a systematic, structural, rolling decline in the number of
good jobs and good savings plans available to Americans, because they were
originally conceived as Keynesian Band-Aids to help shepherd a basically
healthy economy through temporary recessions, or to help vulnerable
subpopulations break through social barriers to take their place in a thriving
middle class. We should not be surprised that these anti-recession programs do
little to create an increase in the
number of jobs available (rather than just slightly slow the decline of jobs),
any more than we would be surprised that filling a car’s gas tank failed to
stop the transmission from wearing out.
The
only programs with a chance of seriously addressing the economic decline of
America’s middle classes are the ones that confront head-on the reality that
there are fewer man-hours of work than before, and that there will be even
fewer man-hours demanded next year. Given that fact, how should we respond? One
obvious answer is to reduce the number of man-hours associated with each job.
Work is an essential part of human dignity, but there’s no psychological law
saying that each worker needs to have 3,000 or even 2,000 hours of work each
year. People in France and Scandinavia seem perfectly content working
1,500-hour years, and we might be fine working as little as, say, 20 hours a week for
40 weeks a year. There would be inefficiencies from needing to train more
people to perform each task – if you need 10 firemen to staff the local station
instead of just 3, the extra 7 firemen have to get a year of EMT training each.
But that’s nothing compared to the inefficiency of taking healthy, alert adult
workers and consigning them to long-term unemployment. Each worker who sits and
does nothing is wasting a minimum of 18 years of nurturing and education –
better to spend one more year of training to salvage that investment than to
let the whole thing go to waste. Besides, the unemployed often receive welfare,
food stamps, unemployment benefits – and sometimes run into health problems
because of drug abuse, depression, etc. If we can move an unemployed person
totally off the public dole at the cost of one or two years of otherwise
unnecessary training, society still comes out way ahead.
Many of the new skilled workers could probably be trained by borderline retirees
whose labor would otherwise be wasted (and who would otherwise themselves be
idle mouths to feed). Drastically cutting the hours worked by each person
would probably cause the demand for ‘convenience’ services like fast food and
wash-n-fold laundry to plummet, and that would put some people out of work, but
again – this is the kind of effect that would be more than offset by tripling
the number of ‘jobs’ available in categories where demand didn’t plummet.
Convenience jobs, even after the multiplier effect, don’t account for 70% of
the workforce. Besides, demand would be stimulated for the goods that people
would once again use to perform erstwhile convenience activities at home, such
as cooking equipment and washing machines. Longer vacation times would also
promote demand in the tourism and travel industries.
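To put stylized numbers on the fire-station trade-off, here is a sketch under loudly labeled assumptions: only the 3-versus-10 staffing figures come from the text, and the training and benefits costs are placeholders to be swapped for real data.

```python
# Stylized cost comparison for the fire-station example: train 7 extra
# firemen vs. leave 7 people on long-term unemployment. The dollar
# figures are placeholders, not real data.

full_time_crew = 3      # staffing under the long-hours status quo (from text)
shared_crew = 10        # staffing with shorter work-years (from text)
training_cost = 50_000  # assumed one-time cost of a year of EMT training
dole_cost = 20_000      # assumed annual welfare/benefits per unemployed person
horizon_years = 5

extra_hires = shared_crew - full_time_crew   # 7 people pulled off the dole

training_bill = extra_hires * training_cost
benefits_avoided = extra_hires * dole_cost * horizon_years

print(f"one-time training bill:         ${training_bill:,}")
print(f"benefits avoided over {horizon_years} years: ${benefits_avoided:,}")
# With these placeholders the training pays for itself in ~2.5 years;
# the qualitative point survives a wide range of cost assumptions.
```
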
Why
don’t we naturally see this kind of labor distribution arising in the market,
if it’s so wonderful? Well, it’s good for society overall if we make jobs take
up fewer hours per year, but no one employer has an incentive to offer lots of
part-time jobs. Unless a worker is completely burned out, their productivity
per hour will probably be roughly comparable at 60 vs. 20 hours per week, and
so the company saves on fixed costs like office space and fringe benefits,
training, and the need to bring people up to speed on new teams and projects by
just having a few of the usual suspects get lots of work done. Similarly, even
though employees overall would be better off if they could all reduce their
hours, nobody wants to go first. Asking for part-time work sends a signal that one is less reliable or less
dedicated to the company’s mission than other employees, who are competing for
scarce raises, promotions, or (increasingly) for the privilege of not being
laid off. There’s basically a nationwide Prisoner’s Dilemma at work – everyone
would be better off if everyone had fewer hours, but anyone would be worse off
if they alone asked for them.
Shifting
society into this collectively better equilibrium thus requires collective
action. One way that society used to promote caps on the hours worked in
situations where long hours weren’t absolutely necessary was by requiring
overtime pay and instituting a large minimum wage. With mandatory time-and-a-half
for overtime, it becomes individually rational for an employer to hire 3
workers at 40 hours each instead of 2 workers at 60 hours each unless the cost
of training, office space, etc. is at least 50% of their salary. In principle,
that time-and-a-half penalty could kick in at 30 hours a week, or even 20 hours
a week. Instead, we’re letting it fade away into irrelevance by classifying
virtually all jobs as ‘professional’ or ‘managerial.’ When front-line retail
clerks are considered ‘associate sales managers,’ it’s not just the elegance of
the English language that loses out…the sales clerks lose their rights to
overtime pay. The handful of employers with jobs that are so obviously menial
as to remain subject to overtime tend to avoid overtime pay by fiat, simply
ordering their employees not to work more than 40 hours a week. Because these
jobs pay so little, though – because the legal minimum wage is much less than the
practical living wage – employees ordered to work only 40 hours a week simply
take on between 1.5 and 2.5 jobs, each of which involves no more than 40 hours
a week. Menial workers may be working 100-hour weeks after accounting for
commute times – but they’re still not being paid overtime, because their choice
to work 40 hours at each location is technically considered voluntary. Raising
the minimum wage would destroy some jobs – those jobs that truly are right on
the margin, where there is no more room to pay higher wages without making the
position unprofitable for the employer. Many menial jobs, though, involve a
huge difference in bargaining power. The restaurant owner, e.g., can replace or
do without an extra janitor for weeks on end with little problem, but the
janitor is depending on that week’s paycheck to pay the rent. When there is a large
difference in bargaining power, we should expect most of the surplus to go to
the employer – and so raising the minimum wage will more often tend to transfer
wages from the employer to the employee rather than simply destroying jobs.
Even better, if people can earn a living wage with only one job, they are much
less likely to search for 1.5, 2, or 2.5 jobs. Even if an increase in the
minimum wage reduced the supply of jobs, it would probably decrease the demand for jobs even more, thereby
lowering unemployment!
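The time-and-a-half arithmetic is easy to verify with a minimal sketch. Here `fixed_frac`, an illustrative parameter, is the weekly per-employee overhead (training, office space, benefits) as a fraction of a 40-hour weekly salary; the two staffing plans cost the same exactly when it reaches 50%, as claimed above.

```python
# When does hiring 3 workers at 40h beat 2 workers at 60h under
# mandatory time-and-a-half? Wages normalized to 1.0 per hour.

def weekly_cost(workers, hours, fixed_frac, wage=1.0, threshold=40):
    straight = min(hours, threshold)
    overtime = max(hours - threshold, 0)
    pay = wage * (straight + 1.5 * overtime)   # time-and-a-half past 40h
    fixed = fixed_frac * threshold * wage      # per-employee weekly overhead
    return workers * (pay + fixed)

for frac in (0.0, 0.25, 0.50, 0.75):
    two = weekly_cost(2, 60, frac)
    three = weekly_cost(3, 40, frac)
    print(f"overhead {frac:.0%} of salary: 2x60h={two:.0f}  3x40h={three:.0f}")

# overhead  0%: 2x60h=140  3x40h=120   (3 workers cheaper)
# overhead 50%: 2x60h=180  3x40h=180   (break-even, as the text says)
# overhead 75%: 2x60h=200  3x40h=210   (2 workers cheaper)
```
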
Another
obvious answer – this time to the problem of savings – is to remove the
obstacles that keep the US savings rate artificially low. Currently, the US
financial sector is occupied by a handful of “too-big-to-fail” banks that enjoy
the implicit and explicit guarantees of the federal government. So long as they
stay roughly within government regulations (and sometimes even when they
don’t), these banks are free to speculate as wildly as they please using a
minimum of sound retail deposits. Whereas a bank that was exposed to the
downside of its own risk would want to attract more depositors so that it would
have a cushion and a margin for error in case a string of lucrative commercial
investments went badly, a US-backed bank that wants to make more high-risk
investments can simply increase its leverage. Such banks simply don’t need
depositors, which drives down the interest rate paid to depositors.
Worse, banks in the US can borrow as much money as they want from the Federal
Reserve at about 0.75% interest, and then lend it out to very safe
customers at 3.25% interest. This tactic
is explicitly authorized as a way for banks to make up shortfalls in their
government-mandated portfolios of safe holdings. There’s no way 0.75% comes anywhere
near the true rate of inflation, no matter which index you use. Official
statistics suggest an inflation rate of around 2%, but obviously if you include
items like health care, gasoline, or tuition, it’s going to be much higher.
That means that banks can supplement their deposits by borrowing unlimited
amounts of money from the government at less
than the rate of inflation – again, they have no reason to seek (or compete
for) deposits because they can get all the money they want at a price that no
self-interested individual consumer would agree to accept. Trust-bust the banks
so that they’re small enough to be allowed to fail, raise the Federal Reserve
rate above the rate of inflation, and you’ll see a stampede of banks falling
over themselves to offer positive real interest rates to attract individual
American depositors. Corporations and cities, forced to compete with the new
higher interest rates for ordinary savings accounts, will likewise offer higher
spreads on their bonds. Then, once people realize what’s happened, expect a
marked increase in savings rates, and therefore in people’s ability to lift
themselves into a higher income bracket.
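Here is the rate arithmetic of this paragraph worked out explicitly; all four rates are the ones quoted above, and real returns are computed as (1 + nominal)/(1 + inflation) − 1.

```python
# Real-rate arithmetic for the paragraph above, using its own figures.

fed_rate = 0.0075      # bank's cost to borrow from the Federal Reserve
safe_lending = 0.0325  # rate earned lending to very safe customers
deposit_rate = 0.01    # rate paid to retail depositors
inflation = 0.02       # official inflation figure cited in the text

def real(nominal):
    """Real return implied by a nominal rate at the given inflation."""
    return (1 + nominal) / (1 + inflation) - 1

print(f"bank spread over the Fed window: {safe_lending - fed_rate:+.2%}")
print(f"bank's real borrowing cost:      {real(fed_rate):+.2%}")     # ~ -1.2%
print(f"depositor's real return:         {real(deposit_rate):+.2%}") # ~ -1.0%

# Both the Fed and the depositor lend to the bank below inflation,
# which is why the bank never has to compete for deposits.
```
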
Will
there be short-term pain caused by a contraction in easy credit? Of course. But
if the change is announced a year or two in advance of when it’s actually going
to happen (which would be needed anyway to provide enough time to trustbust and
unwind the banking conglomerates), it won’t come with a flood of bankruptcies.
People will have a chance to cut back on their spending before their credit
cards cut them off, and the short-term pain will be promptly outweighed by a
long-term increase in useful investments – all that increased saving has to go
somewhere, and much of it will likely go to infrastructure, modernization,
efficiency projects, etc. that have been sorely lagging behind the available
technology. Because some of those projects involve public property, tax rates
may have to go up – but, actually, you could just put the whole extra tax
burden on savings. You would increase
savings slightly less that way, but you would still free up financial resources
that are currently being dedicated to shuffling paper on Wall Street and spend
them on badly needed bridges, trains, hospitals, power lines, sewer lines, etc.
You could also finance the infrastructure investments through increased fees
for services – it doesn’t matter if electricity costs twice as much in ten
years if houses, etc. are geared so that they only require half as much
electricity.
Somewhat
more creatively, we could think about what government initiatives could grow
the overall size of the economy, even in an environment where few new
industries are opening up and international competition is fierce. Two avenues
for real growth that have been little exploited and less discussed are
environmental reclamation, and sociological reclamation. Environmental
reclamation means taking polluted areas or damaged ecosystems and
rehabilitating them such that they become a sustainable source of economic
profits. A desert is almost worthless, but a healthy watershed provides
ecotourism, a modest surplus of clean fresh water, filtration and
storm-surge-containment capabilities, and terrain for harvesting semi-wild
plants and animals. A landfill full of heavy metals has negligible real estate
value, but if the metals are responsibly catalogued and neutralized, the same
site might be in high demand as a good place for new townhouses. Many of these
sites are not snapped up by private investors simply because they sit on public
land – perhaps a Navy base, or National Forest Land, or a county dump. A bill
that got the land out of public ownership and into mostly private hands in
order to improve its environmental status ought to be a bipartisan winner.
Other sites are not developed because there is no way to adequately insure the
investors against the risk that enough of the toxins remain to cause serious
damage to human health. The solution here is to offer a sort of qualified
immunity – if the developers broadly follow appropriate testing procedures, and
honestly and conspicuously advertise the results of those procedures and the
known (not the knowable) risks associated with those results, they are immune
from lawsuits based on the toxins. Anyone who chooses to live in the area (and
there will be plenty of people who will think it beats living in their parents’
basement or in a gang-ridden slum) is understood to have accepted the risk
that, despite the best efforts of well-meaning ecologists, something deadly
slipped through the cracks.
Sociologically,
we could try a return to the “War on Poverty” of the 1960s and 70s. The key
insight here is that it could be cheaper, in the long-run, for society to truly
rehabilitate criminals, the insane, and the seriously ill than it would be to
warehouse them for the rest of their lives or to shuttle them among an alphabet
soup of stop-gap welfare programs. Studies show that putting people in jail for
as long as 20 years actually makes those people likely to commit more crimes
over the course of their lives than people who are paroled without jail time. Vanishingly few people
kill, maim, steal, or deal hard drugs because they enjoy it – most people
commit crimes because they are angry, desperate, or crazy. People who commit
crime for fun might in theory be deterred and brought in line by harsh
punishments, but people who commit crimes as a reaction to their circumstances
or brain chemistry just aren’t going to care what the sentence is. Rather than
stick to the delusion that long jail terms work, we should invest in mental
health services, remedial education, and electronic monitoring that will
increase the odds of detecting a
repeat offense – which studies show does actually tend to deter crime.
Even
people who haven’t actually done anything wrong could be served more
cost-effectively by addressing the root cause of their homelessness, chronic
illness, etc. When people can’t get medical care to fix their TB and poorly
healed broken legs, or can’t get into a drug rehab program, they wind up being
a burden on society (in terms of policing, security, emergency room services)
and also losing any chance they might have had to contribute to society.
Regardless of whether such people ‘deserve’ our help, a temporary ‘surge’ in
tax revenues assigned and designed to try to drag these people out of the
gutter and back into society by offering proactive, comprehensive care might
more than pay for itself. Our last war on poverty didn’t so much fail as run
out of political willpower – it cut the poverty rate in a nice steady linear
drop from 23% in 1960 down to 11% in 1973, and there’s no reason to think it
couldn’t have gone on further if we hadn’t started devoting 10% of GDP to the
Vietnam war in the midst of an oil shock. Although unpopular, deliberate
anti-poverty programs are demonstrably effective, and could increase the size
of the real economy by freeing up land (prisons, asylums, slums, etc.) and
labor (criminals, beggars, and, ultimately, social workers) for more productive
uses.
I’m not
myself an economist, and I don’t have access to rooms full of geniuses the way
major Presidential candidates do. I can’t be sure that all or even any of the
ideas I’ve sketched out would work. But hopefully I’ve highlighted what’s wrong
with the kind of thinking on display
in both parties’ economic platforms, and how much more effective a plan to fix
America’s economy could aspire to be. I understand that a candidate’s job is to
win an election, and not to lay out every detail of what they plan to do in
office. Some details – especially unpopular details – can be withheld or played
down. But voters have a right to know at least the general tenor of what a
President plans to do and how a President plans to do it. If either candidate
has any interest in fixing or even in studying the structural problems in the
American economy – they’ve yet to show it.