Editors’ note: This article is from NPQ’s spring 2015 edition, “Inequality’s Tipping Point and the Pivotal Role of Nonprofits.”
One hundred years ago, progressive thinkers and activists who called for women’s suffrage, an end to lynching, the right of workers to form unions, health and safety standards for workplaces, the eight-hour workday, a federal minimum wage, a progressive income tax, old-age insurance, and government-subsidized healthcare were considered impractical idealists, utopian dreamers, or dangerous socialists. Fifty years ago, those who called for women’s equality, laws protecting the environment, civil rights for gays and lesbians, and greater numbers of black and Hispanic/Latino elected officials were also considered clueless or hopelessly radical. Now we take all these ideas for granted. The radical ideas of one generation have become the common sense of the next.
Just three years ago, the idea of a $15/hour minimum wage was also considered a crazy notion; but in 2014, Seattle passed a citywide minimum wage at that level. This “radical” idea has now become almost mainstream, and in a growing number of cities, local elected officials are proposing similar policies. The dramatic change in so short a time didn’t happen by accident. It is the culmination of years of grassroots activism, changes in public opinion, and frustration with the political gridlock in Washington.
Significant changes come about when people dare to think beyond the immediate crisis, propose bold solutions, and work for steppingstone reforms that improve people’s lives and whet their appetites for further reform.
Helen Keller was once asked if there was anything that could have been worse than losing her sight. Keller replied: “Yes, I could have lost my vision.” Keller was a lifelong radical who participated in the great movements for social justice of her time. In her investigations into the causes of blindness she discovered that the poor were more likely than the rich to be blind, and she soon connected the mistreatment of the blind to the oppression of workers, women, and other groups, leading her to embrace socialism, feminism, and pacifism.1 In a 1924 letter to Senator Robert M. La Follette Sr., Keller wrote: “Superficial charities make smooth the way of the prosperous; but to advocate that all human beings should have leisure and comfort, the decencies and refinements of life, is a Utopian dream, and one who seriously contemplates its realization indeed must be deaf, dumb, and blind.”
Four decades later, Reverend Martin Luther King Jr. made a similar observation: “Philanthropy is commendable, but it must not cause the philanthropist to overlook the circumstances of economic injustice which make philanthropy necessary.”
Keller and King were both practical visionaries. They reflected a long-standing American tradition of radical reform. They wanted philanthropy to be bold and to challenge the system of economic exploitation and social injustice that created so much misery. But they also wanted to see immediate changes that would improve people’s lives today, without waiting for an overhaul of society.
Reformers and Radicals Confront Inequality
That radical reform tradition came of age in the late 1800s and early 1900s. At the time, America was a country dominated by rampant, unregulated capitalism, during what was sometimes called the “Gilded Age.” It was a period of merger mania, an increasing concentration of wealth among the privileged few, and growing political influence by corporate power brokers known as the “robber barons.” New technologies made possible new industries, which generated great riches for the fortunate few—but at the expense of workers, many of them immigrants, who worked long hours and under dangerous conditions for little pay.
American cities were a cauldron of seething problems—poverty, slums, child labor, epidemics, sweatshops, and ethnic conflict. Corruption was widespread. Businesses routinely bribed local officials to grant favored corporations private monopolies over key public services, which were typically run inefficiently. Cities were starved for cash, but businesses paid little in taxes.
Out of that turmoil, activists created a progressive movement, forging a coalition of immigrants, unionists, muckraking journalists, settlement-house workers, middle-class civic reformers and suffragists, and upper-class philanthropists; while these activists spoke many languages, the movement found its united voice through organizers, clergy, and sympathetic politicians.
Some wealthy Americans—mostly college-educated women—contributed their time, talent, and money to battles to improve the lives of the immigrant poor. Jane Addams, Alice Hamilton, Florence Kelley, Lillian Wald, and others founded the settlement-house movement—the nation’s first generation of community organizers—and embraced crusades for workers’ rights, public health, housing reform, women’s suffrage, civil rights, and peace. During the great “Uprising of the 20,000” in 1909 and 1910 (the largest strike by American women workers at the time), upper-class women affiliated with the Women’s Trade Union League (WTUL) raised money for the workers’ strike fund, lawyers, and bail, and even joined the union members on picket lines. It was through her work with the WTUL that a young Eleanor Roosevelt was first exposed to the suffering of the poor, an experience that transformed her into a lifelong progressive. Frances Perkins was a recent college graduate working for the Consumers League in New York City when the Triangle Shirtwaist factory fire in March 1911 took the lives of 146 garment workers, most of them young immigrant girls. Perkins led the campaign to get New York State to adopt laws protecting workers from dangerous sweatshop conditions. When she became Secretary of Labor during FDR’s New Deal, she championed reforms such as the minimum wage, workers’ rights, and Social Security. Another ally was Anne Morgan, the daughter of Wall Street chieftain J. P. Morgan. She recruited other upper-class women—and a few men—to walk picket lines and raise money for families whose daughters were killed in the Triangle Shirtwaist fire. Some of them came to the picket lines in their fancy clothes, so union organizer Rose Schneiderman referred to them as the “mink brigade.”
One of the Progressive Era’s great crusades focused on improving living conditions of the urban poor. Jacob Riis’s book, How the Other Half Lives: Studies among the Tenements of New York (1890), helped catalyze campaigns to improve housing conditions. Philanthropists joined forces with civic reformers, immigrant activists, and liberal politicians to “clean up” the slums—physically, socially, economically, and even aesthetically.2 They were motivated by different values—religious faith, social idealism, noblesse oblige, and a concern for protecting or expanding the property of the affluent in city centers and adjacent areas. Some philanthropic reformers believed that cleaning up the slums required changing the behavior and the values of the poor themselves. Others sought to create philanthropy-sponsored “model tenements,” assuming that improving the physical conditions of housing in the slums would improve the lives of the inhabitants. A third group pushed to reform public policy to give the government a stronger role in regulating housing conditions and providing subsidies to house the poor.3
Ever since the Progressive Era, philanthropy, government, and intellectuals have debated those three approaches to addressing the problems of cities and the poor. In the 1960s, American foundations, catalyzed by civil rights protests and tenants’ rights activism, again focused attention on the problems of urban slums. The major goals of those efforts included providing job skills to the “hard-core” underclass; nurturing nonprofit community development organizations to build affordable housing; and empowering poor residents to gain a voice in urban renewal and other neighborhood improvement initiatives, challenge slumlords, and hold local politicians accountable.4
In the late 1970s and 1980s, foundation efforts (with notable exceptions) reflected the retreat from government activism and community organizing, focusing instead on neighborhood-based self-help initiatives. This approach was boosted in the 1990s by academic studies about the impacts of the concentration of poverty. As a consequence, philanthropic funders have devoted substantial resources to addressing poverty in specific geographic areas. The major focus of these recent efforts has been on “place-based” antipoverty initiatives. The most well-known example is the Harlem Children’s Zone, but there have been hundreds of others, documented in several reports by the Aspen Institute called Voices from the Field.5
Seeking to understand the lessons from these initiatives, in 2014 the University of Southern California’s Center on Philanthropy & Public Policy convened a series of meetings in New York, Los Angeles, and Washington, D.C., of academics, foundation staff, and policy practitioners to discuss urban poverty. Those provocative discussions led to the publication of a report, Place-Based Initiatives in the Context of Public Policy and Markets, that summarized the ideas generated during the gatherings and the current thinking about urban poverty and place.6 Those discussions and the report generally reflect the narrow perspective on poverty that, with some notable exceptions, mainstream philanthropy (as well as many policy-makers and academics) has applied to these issues over the past few decades. That thinking focuses on the poor rather than on the super-rich, and on places (geography) rather than on the larger economic system in which those places are embedded. Although the discussions and the report gave lip service to the problem of widening inequality, the prescriptions avoided any challenge to this reality.
Indeed, since the 1980s, most discussions within the philanthropic world of the “urban crisis” or of what to do about “ghetto poverty” miss the larger picture of economic inequality and the concentration of income, wealth, and political power. When most philanthropists and policy experts look at low-income neighborhoods, they miss the broader picture—that these places are part of a system of economic segregation resulting from government policies that embrace free-market ideas.7
Social scientists tend to study the “underclass,” but they pay much less attention to the “overclass.” The two are connected. That was one of the lessons of Occupy Wall Street. It is also one of the basic points of the book that I wrote with John Mollenkopf and Todd Swanstrom, Place Matters: Metropolitics for the Twenty-First Century.8 The book’s title indicates that we recognize the power of place in shaping the lives and destinies of people, but our focus is not simply on the people who live in areas of concentrated poverty but rather on the broader dynamics of geographic segregation by wealth, income, and race. Poor ghettos are the flip side of rich ghettos. Poverty is the flip side of super-wealth. The solution is shared prosperity, and that never happens without strong rules that limit market forces. It requires government—and government run by people who believe in the power of laws and rules—to change human behavior, institutions, and society.
Widening Wealth and Income Inequality
The problem of widening inequality has become a central issue in American politics and culture. The Occupy Wall Street movement, which began in New York City in September 2011 and quickly spread to cities and towns around the country, changed the national conversation. At kitchen tables, in coffee shops, in offices and factories, and in newsrooms, Americans are increasingly talking about economic inequality, corporate greed, and how America’s super-rich have damaged our economy and our democracy. Catch-phrases adopted by Occupy Wall Street—the “1 percent” and the “99 percent”—provided Americans with a language to explain the nation’s widening economic divide, the super-rich’s undue political influence, and the damage—a crashed economy and enormous suffering and hardship—triggered by Wall Street’s reckless behavior.
To those concerned with nuance, the Occupy Wall Street rhetoric may have seemed simplistic; but its basic message resonated with the American public and was soon being echoed by a growing number of elected officials and civic leaders. In 2006, five years before the Occupy movement, a survey conducted by psychologists at Duke and Harvard found that 92 percent of Americans preferred the wealth distribution of Sweden to that of the United States. In Sweden, the wealthiest fifth of the population has 36 percent of all wealth, compared to the United States, where the wealthiest fifth has 84 percent.9 The reality of widening inequality and declining living standards, the activism of low-wage workers and Occupy Wall Street radicals, and increasing media coverage of these matters solidified public opinion. Two months after Occupy Wall Street began, a poll from the Public Religion Research Institute found that 60 percent of Americans agreed that “[American] society would be better off if the distribution of wealth was more equal.” A Pew Research Center survey around the same time found that most Americans (77 percent)—including a majority (53 percent) of Republicans—agreed that “there is too much power in the hands of a few rich people and corporations.”10
Those attitudes have persisted. In a national survey conducted in 2014, Pew found that 60 percent of Americans—including 75 percent of Democrats, 60 percent of independents, and even 42 percent of Republicans—think that the economic system unfairly favors the wealthy. The poll discovered that 69 percent of Americans believe that the government should do “a lot” or “some” to reduce the gap between the rich and everyone else. Nearly all Democrats (93 percent) and large majorities of independents (83 percent) and Republicans (64 percent) said they favor government action to reduce poverty. Over half (54 percent) of Americans support “raising taxes on the wealthy and corporations in order to expand programs for the poor,” compared with one-third (35 percent) who believe that “lowering taxes on the wealthy to encourage investment and economic growth would be the more effective approach.” Overall, 73 percent of the public—including 90 percent of Democrats, 71 percent of independents, and 53 percent of Republicans—favor raising the federal minimum wage from its current level of $7.25 an hour to $10.10 an hour.11
The expanding number of Americans who constitute the “working poor” has stimulated growing concern among policy-makers, academics, and workers themselves. The majority of new jobs created since 2010 pay just $13.83 an hour or less, according to the National Employment Law Project.12 The Institute for Policy Studies, in a March 2014 report, found that the $26.7 billion in bonuses handed to 165,200 executives by Wall Street banks in 2013 would be enough to more than double the pay for all 1,085,000 Americans who work full time at the current federal minimum wage of $7.25 per hour.13 The low wages paid to employees of the ten largest fast-food chains cost taxpayers an estimated $3.8 billion a year by forcing employees to rely on public assistance to afford food, healthcare, and other basic necessities.14 Even after local officials had pushed Occupy protestors out of parks and public spaces, the movement’s excitement and energy were soon harnessed and co-opted by labor unions and community organizers. Not surprisingly, the past few years have seen an explosion of worker unrest (especially among Walmart employees, workers at fast-food chains, janitors, and hospital workers, demanding that employers pay them a living wage) and a growing number of cities and states adopting minimum wage laws significantly higher than the federal level of $7.25 an hour.15
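The arithmetic behind the Institute for Policy Studies comparison above is easy to verify. The sketch below checks it using the figures from the report, with one added assumption: a standard 2,080-hour (40 hours × 52 weeks) full-time work year.

```python
# Rough check of the IPS bonus comparison (assumes a 2,080-hour
# full-time work year; dollar figures are from the 2014 report).

wall_street_bonuses = 26.7e9          # total 2013 Wall Street bonus pool
minimum_wage = 7.25                   # federal minimum wage, dollars/hour
hours_per_year = 40 * 52              # 2,080 hours for a full-time worker
full_time_min_wage_workers = 1_085_000

# Annual pay for one full-time minimum-wage worker: $15,080.
annual_min_wage_pay = minimum_wage * hours_per_year

# Combined annual payroll of all full-time minimum-wage workers.
total_min_wage_payroll = annual_min_wage_pay * full_time_min_wage_workers

# Doubling everyone's pay would cost one extra payroll's worth of money,
# so the claim holds if the bonus pool exceeds the combined payroll.
print(f"Combined minimum-wage payroll: ${total_min_wage_payroll / 1e9:.1f} billion")
print(f"Bonus pool could more than double it: {wall_street_bonuses > total_min_wage_payroll}")
```

The combined payroll comes to roughly $16.4 billion, comfortably below the $26.7 billion bonus pool, which is why the report could say the bonuses would "more than double" those workers' pay.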
Candidates for office and elected officials began echoing some of the same themes. Progressive mayors like Seattle’s Ed Murray, New York’s Bill de Blasio, Minneapolis’s Betsy Hodges, Newark’s Ras Baraka, Boston’s Marty Walsh, and Jackson, Mississippi’s Chokwe Lumumba (who died in 2014), and hundreds of city council and school board members, embraced the idea of using local government to address income inequality and low wages. In a major address in Kansas in December 2011, two months after the first Occupy protests, President Barack Obama criticized the “breathtaking greed” of the super-rich. He pointed out that the average income of the wealthiest 1 percent had increased by more than 250 percent, to $1.2 million a year. He also described the nation’s widening inequality and the decline of economic mobility as “the defining issue of our time.”
The Rich and the Super-Rich
What Obama and a growing number of Americans understood is that within the United States there is a growing divide between the super-rich and the rest of society. America’s super-rich are also part of a small global elite whose total wealth dwarfs that of most of the world’s population.16 Among the world’s 7 billion people, the richest 10 percent own 83 percent of the world’s wealth, with the top 1 percent alone accounting for 43 percent of global assets. In contrast, the bottom half of the global population together possess less than 2 percent of global wealth.
There are about 84,500 individuals in the world whose net worth exceeds $50 million. Almost half of them (37,950) live in the United States. According to the 2013 annual Forbes billionaires list, there are 1,426 billionaires in the world with a total net worth of $5.4 trillion. The United States leads the list with 442 billionaires, followed by Asia-Pacific (386), Europe (366), the Americas (129), and the Middle East and Africa (103).
At the very pinnacle, the world’s richest 200 people have about $2.7 trillion in total wealth, which is more than the world’s poorest 3.5 billion people, who have only $2.2 trillion combined, many of them living in extreme poverty and destitution.17
Moreover, the chasm between rich and poor—both people and nations—has widened over the past several decades. Almost all of the world’s super-elite live in a handful of global cities, where the headquarters of the world’s large transnational corporations are located. These global cities include New York, London, Tokyo, Sydney, Stockholm, Paris, Singapore, Hong Kong, Chicago, San Francisco, Los Angeles, Zurich, Beijing, Seoul, Copenhagen, Boston, Berlin, Frankfurt, Buenos Aires, and Amsterdam, with a growing number of big cities in Asia, Latin America, and Africa soon to join the list.
The distribution of wealth is even more unequal than the distribution of income. In 2010, the top 1 percent of households controlled a larger share of national wealth than the bottom 90 percent. Between 1983 and 2010, the top 5 percent captured nearly three-quarters of the growth in household wealth.18
The typical household has two-thirds of its wealth in home equity, and the bursting of the housing bubble had devastating consequences for many middle- and lower-income Americans. Between 2006 and 2009, American households lost $7 trillion in household wealth. The impact was disproportionately felt by low-income families that had been victims of predatory lending and subprime loans. Since the beginning of the recovery, in June 2009, housing values have increased, but most Americans, particularly the poor, have not recovered the assets that they lost in the recession.19
Ranking sixth out of 187 nations in gross domestic product (GDP) per capita, the United States is one of the richest nations in the world. The United States is also referred to at times as the “land of opportunity”—and indeed, historically, American society has been based on an implicit social contract: If you work hard, you will get ahead. Substantiating this contract was not only the belief but also the experience that economic growth benefits all social classes. President John F. Kennedy’s memorable phrase, “A rising tide lifts all boats,” makes a great bumper sticker but happens to be false: Rising prosperity, on its own, does not guarantee greater equality or opportunity; only government policy committed to shared prosperity can do that.
Economic Segregation: Place-Based Inequality
For decades, journalists, sociologists, and philanthropists have studied the lives and neighborhoods of the poor but downplayed the broader dynamics of inequality of income, wealth, and power that trapped many low-income families in urban (and now, increasingly, suburban) ghettos. A turning point in recent social science was William Julius Wilson’s 1987 book, The Truly Disadvantaged: The Inner City, the Underclass, and Public Policy, which examined the “neighborhood effects” of living in areas with a large number of other poor people.20 Wilson looked not only at the conditions of the poor but at the larger forces—such as the decline of good-paying manufacturing jobs in urban centers—that led to the increased concentration of poverty.
Wilson’s study spawned a cottage industry of research devoted to understanding the geography of poverty—the consequences of living in areas of concentrated poverty, often compounded by racial segregation.21 But most of those studies paid little attention to the dynamic of widening economic inequality of income and wealth, the proliferation of low-wage jobs, the excessive compensation of top corporate executives, and the growing geographic isolation of America’s wealthy living in urban and suburban enclaves.
Few social scientists, foundation staffers, or policy-makers were asking, What about the consequences of living in areas of concentrated wealth? Who studies the lives of people in our wealthiest communities like San Marino, Bel Air, Greenwich, Lake Forest, and Bloomfield Hills, where the 1 percent (or, more accurately, the .01 percent) live?22 Why don’t foundations fund more research about the overlapping networks of corporate board members and the decisions made by top executives that have devastating impacts on the entire society, including middle-class and low-income people and their communities? Why don’t more social scientists explore the “culture of the rich” to learn how their daily lives and routines make most (though not all) of them immune to understanding (or caring about) the consequences of their corporate decisions on the lives of the poor and middle class?23 Why do we have to rely on after-the-fact reports by journalists and academics to get a glimpse into the decisions by top Wall Street executives that caused financial havoc, recession, layoffs, the epidemic of foreclosures, and the reality that, several years into the “recovery,” millions of Americans are still drowning in debt with “underwater” mortgages?24
In recent decades, places—neighborhoods, cities and suburbs, and regions—have become more unequal. Economic classes are becoming more separate from each other as the rich increasingly live with other rich people and the poor live with other poor people. Over the last half-century, the poor have become concentrated in central cities and distressed inner suburbs, while the rich live mostly in exclusive central-city neighborhoods and outer suburbs.
Living in high-poverty neighborhoods isolates residents from job opportunities, restricts them to bad schools, imposes unhealthy environments, and makes them pay high grocery prices. Such factors strongly influence individual life chances. Many studies show that most people leave such places whenever they can, suggesting they have little doubt about the negative consequences of living in such places.
Rising economic and geographic segregation reinforces disadvantage in central-city neighborhoods, speeds the deterioration of central cities and inner suburbs, and heightens the cost of suburban sprawl. A 2013 study examining variation in economic mobility across metropolitan areas got op-ed-page attention from the New York Times and columnist and Nobel Prize–winning economist Paul Krugman. Based on a massive data set of all tax filers in the United States from 1996 to 2011, the study found that—other things being equal—upward mobility was significantly higher in metropolitan areas with lower levels of economic segregation. The most likely explanation is that poor people, stuck in central cities and inner-ring suburbs, become isolated from economic opportunity when jobs sprawl out to distant suburbs.25
This dynamic would be bad enough if it simply reflected individual and household choices in free markets, but it does not. Federal and state policies have favored suburban sprawl, concentrated urban poverty, and promoted economic and racial segregation.26 Only new policies that level the metropolitan playing field and bring all parts of the metropolis into a dialogue can stop the drift toward greater spatial inequality. America needs central-city and suburban residents to unite in a new coalition to support shared prosperity.
Many cities are enjoying something of a revival. Young professionals and empty nesters are moving back to cities in search of pedestrian-friendly urban environments. These positive trends present opportunities for creating mixed-income neighborhoods and reversing decades of rising economic segregation. But this will not happen automatically. Indeed, the renewed vitality of many cities is generating new forms of economic segregation as gentrification pushes poor people, minorities, and immigrants out of cities into new suburban zones. This partly explains the explosion of suburban poverty in the past decade. Policies such as inclusionary zoning, which requires developers to build affordable housing along with market-rate housing, can ensure that urban revival moves toward equity.
However, cities by themselves cannot capture enough of the wealth generated within their borders to significantly reduce concentrated poverty. We need metropolitan-wide as well as federal policies to do that.
The problems of the different parts of metropolitan areas are interconnected. No part occupies the moral high ground. Overall progress will come only when the different parts of metropolitan areas work together and push for federal policies that create incentives for regional cooperation rather than beggar-thy-neighbor competition. But powerful interests have a stake in a status quo that allows developers and businesses to pit cities against cities and regions against regions.
Democracy cannot flourish under conditions of extreme income inequality and residential segregation. The huge and growing gap between rich and poor communities results in tremendous differences in the quality of our schools, parks, garbage collection, and police and fire protection—as well as economic and social opportunities—across our metropolitan areas.
In the context of extreme local political fragmentation, economic and racial segregation has turned local governments into privatized interest groups concerned with the narrow self-interests of their residents. This cuts off those living in low-income neighborhoods and distressed suburbs from access to jobs and decent schools—or even the same kind of shopping and household services available to most Americans—and subjects them to unhealthy environments and poor healthcare. In this context, freedom of residential choice has little meaning. Growing economic segregation exacerbates income inequality and worsens its effects.
The pattern of metropolitan development in the United States helps explain why the United States has significantly lower levels of upward mobility than other developed countries.
Full Employment and Good Jobs: The Best Antipoverty Policy
As indicated above, place-based policies cannot on their own address the major trends that have led to widening inequality, a decline in the overall standard of living for most Americans, and an increase in poverty. Twenty years ago, research by economists Richard Freeman and Paul Osterman demonstrated that the most important factor in increasing the employment opportunities for inner-city youth and helping them escape poverty is a tight labor market—that is, full employment. When unemployment is low, employers hire workers who in looser labor markets struggle to get jobs. The so-called “hard to employ,” including workers with fewer skills and less education and black workers who had previously been victimized by employer discrimination, get “pulled” into the labor market.27
This is exactly what occurred in Boston and other cities during the late 1990s. Aided by a tight labor market and the expansion of the federal Earned Income Tax Credit, the nation’s poverty rate dropped to 11.8 percent by 1999—the lowest rate since 1979. In central cities, the poverty rate fell from 21.5 percent in 1993 to 16.4 percent in 1999. For black Americans, the poverty rate dropped significantly.28
American workers today face declining job security and dwindling earnings as companies downsize, move overseas, and shift more jobs to part-time workers. A 2009 survey by the Economic Policy Institute found that 44 percent of American families had experienced either the job loss of one or more members, a reduction in hours, or a cut in pay over the previous year. For the vast majority of workers, the costs of basic necessities are rising faster than incomes. Productivity is also increasing faster than incomes, meaning that workers are not sharing in the benefits of economic growth.29
Government has ample powers to change these trends for the better. Back in the days of President Lyndon B. Johnson’s War on Poverty, Republican critics liked to say that the best antipoverty program was a job. The federal government has the capacity—and responsibility—to promote full employment, where everyone who wants to work has a job. But the kind of job—the pay, benefits, security, and prospects for advancement—is as important as the job itself.
A good job means one that pays enough to allow a family to buy or rent a decent home, put food on the table and clothes on their backs, afford health insurance and child care, send the kids to college, take a yearly vacation, and retire with dignity. A good job means that parents don’t have to juggle two or three jobs to stay afloat, and that they still have time to spend with their kids.
Economic security means more than having a job. It means not getting wiped out by illness, rising college tuition, a workplace injury, or a layoff. A few years ago, Yale political scientist Jacob Hacker calculated that one in five American households—the highest level in the past twenty-five years—is financially insecure. One in five Americans has lost at least one-quarter of his or her income within a year due to a job loss and/or large out-of-pocket medical expenses, and doesn’t have enough savings to replace those losses.30
Joblessness and economic insecurity lead to personal and economic disaster. People often lose their health insurance, lose their homes through eviction and foreclosure, suffer depression, and fall into poverty. And high unemployment weakens the bargaining power and reduces the wages of those who do have jobs.
Dr. Harvey Brenner, a sociologist and public-health expert at Johns Hopkins University and the University of North Texas Health Science Center, is a longtime student of the correlations between economic fluctuations and mental and physical health. According to Brenner, for every 1 percent rise in the unemployment rate (about 1.5 million more people out of work), society can anticipate 47,000 more deaths, including 26,000 from fatal heart attacks, 1,200 from suicide, 831 from murders, and 635 related to alcohol consumption.31 The National Institute of Justice reported in a 2004 study that violence against women increases as male unemployment rises. When a woman’s male partner is employed, the average rate of violence is 4.7 percent; but the average rises to 7.5 percent when the male partner experiences one bout of unemployment, and to 12.3 percent when he suffers two or more periods of joblessness.32
Moreover, much like post-traumatic stress disorder in wartime, for some people the symptoms become chronic, lasting even after they find work again. Psychological depression, troubled marriages, and loss of self-confidence don’t just go away when the economic recession ends. Economic hardship leaves behind a trail of wounded people who never fully recover.
Decent wages are necessary for social stability and for the purchasing power that the economy needs to trigger and sustain a strong recovery. The explosion of low-wage jobs is not the result of workers having inadequate education or skills. Over the past two decades, both education levels and skills have improved, while incomes have stagnated. This troubling trend is due, for the most part, to the declining bargaining power of America’s employees.
Consider the case of two newly hired security guards with the same level of education who work in downtown Los Angeles for Securitas—the nation’s largest security company, with $8.7 billion in revenues last year. José and Bill each work in one of L.A.’s large office buildings. José’s starting pay is $12.50 an hour, with paid health insurance as well as two sick days, five paid holidays, five vacation days (increasing to ten days after five years), three paid bereavement days, and a uniform maintenance allowance of $2 a day. Bill starts at $9 an hour (the state minimum wage) and gets no health insurance or any other benefits. What accounts for the difference? José is a member of the Service Employees International Union (SEIU), which has a collective bargaining agreement with Securitas, while Bill is on his own, with no union contract.
Multiply this example millions of times, across different job categories and industries, and you get a sense that, contrary to business propaganda, unions are actually good for the economy. According to the Economic Policy Institute, union workers earn 13.6 percent more in wages than nonunion workers in the same occupations and with the same level of experience and education. The “union premium” is considerably higher when total compensation is included, because unionized workers are much more likely to get health insurance and pension benefits. A strong labor movement would do more to address the problems of the poor—urban and suburban—than all place-based policies together.33
Los Angeles provides a good illustration of how unions strengthen worker purchasing power and the economy. According to a December 2007 study by the Economic Roundtable, union workers in Los Angeles County earn 27 percent more than nonunion workers performing the same jobs. The higher wages for the L.A. union workers—who number about 800,000, or 15 percent of the workforce—add $7.2 billion a year in earnings. And there is a multiplier effect. As these workers purchase housing, food, clothing, child care, and other items, their spending supports an additional 307,200 jobs—64,800 more than would have been produced without the higher union wages. The union wages also yield about $7 billion in taxes to various levels of government.34 If unionization rates were higher, these positive ripple effects would increase across the economy.
Unions not only raise wages but also reduce workplace inequities based on race. The union wage premium is especially high for Hispanic/Latino employees (23.1 percent), black employees (17.3 percent), and Asian employees (14.7 percent). The union wage premium is 10.9 percent for white employees. In other words, unions help to close racial wage gaps by making it tougher for employers to discriminate.
Likewise, unions reduce workplace inequities based on gender. The union wage premium is 15.8 percent for black women, 14.7 percent for Hispanic/Latino women, 12.7 percent for Asian women, and 7 percent for white women. Unions also reduce overall wage inequalities, because they raise wages more at the bottom and middle than at the top.35
If unions are good for workers and good for the economy, why are so few employees union members? Some business leaders argue that American employees are simply antiunion, a consequence of our culture’s strong individualistic ethic and opposition to unions as uninvited “third parties” between employers and their employees. Antiunion attitudes, business groups claim, account for the decline in union membership, which peaked at 35 percent in the 1950s and is now about 11 percent.
But this story leaves out four decades of corporate union bashing that has increased the risk that workers take when they seek union representation. In general, polls reveal that American workers have positive attitudes toward unions, and these positive views are increasing as anxiety about job security, wages, and pensions grows.
A majority of American employees say they would join a union if they could; but they won’t vote for a union—much less participate openly in a union-organizing drive—if they fear they will lose their job or be otherwise punished or harassed at work for doing so.
And there’s the rub. Americans have far fewer rights at work than employees in other democratic societies. Current federal laws are an impediment to union organizing rather than a protector of workers’ rights. The rules are stacked against workers, making it extremely difficult for even the most talented organizers to win union elections. Under current National Labor Relations Board regulations, any employer with a clever attorney can stall union elections, giving management time to scare the living daylights out of potential recruits. According to Cornell University’s Kate Bronfenbrenner, it is standard practice for corporations to subject workers to threats, interrogation, harassment, surveillance, and retaliation for union activity during organizing campaigns.36
During the summer of 2014, Los Angeles mayor Eric Garcetti proposed adopting a citywide minimum wage that would begin at $10.25 in 2015, increase to $11.75 in 2016 and $13.25 in 2017, and rise with inflation after that. He called it “the biggest anti-poverty program in the city’s history.” According to an analysis commissioned by the mayor’s office and conducted by researchers from the University of California, Berkeley, Garcetti’s plan would increase incomes for an estimated 567,000 workers by an average of $3,200 (or 21 percent) a year. Predictably, the Los Angeles Chamber of Commerce warned that “this proposal would actually cost jobs, would cause people to lose jobs and would cause people to have cutbacks in hours.” It said the same thing in 1997, when Los Angeles adopted a much narrower “living wage” law that covered only employers with municipal contracts. It was crying wolf. There’s no evidence that the living-wage law has had such negative consequences, but the Chamber of Commerce keeps repeating the “job killer” mantra and the media keep reporting businesses’ warnings as though they had any credibility.
Indeed, one of the biggest barriers to adopting effective antipoverty laws—at the federal, state, regional, and local levels—is the propaganda campaign waged by big business against policies that would require corporations to be more socially responsible. When activists propose policies to raise wages or regulate business practices, corporate lobbyists and their consultants-for-hire warn that these policies will scare away private capital, increase unemployment, and undermine a city’s tax base. When a politician (like the aforementioned Mayor Eric Garcetti) suggests that we raise the minimum wage, chambers of commerce and other business lobby groups warn that it will kill jobs. Ditto with inclusionary zoning, laws to strengthen oversight of banks’ predatory lending and racial redlining, and efforts to require companies to reduce emissions of dangerous toxics into the environment (such as L.A.’s Clean Truck Program). In every instance, the business groups’ warnings have proved bogus; but so long as elected officials and the media take them seriously, they can cause policy paralysis.
What Cities Can and Can’t Do
The role of the federal government in addressing issues of poverty in general and concentrated poverty in particular has ebbed and flowed in sync with political and ideological fluctuations. With some exceptions, states have generally been even less committed to dealing with these issues, particularly since the 1970s, as suburban voters have dominated state government. Cities and city officials have to deal with the realities of poverty in their backyards; but progressive, liberal, and conservative urban officials have differed in their approaches to urban poverty.37 Some academics have argued that cities are in no position to address questions of poverty and, more broadly, redistribution. In his 1981 book City Limits, political scientist Paul Peterson argued that both capital mobility and people mobility made it difficult for cities to engage in redistribution policy to help the poor.38 Cities, Peterson claimed, cannot tax or regulate businesses too much because they could then leave, taking their jobs and tax base with them. And if cities help the poor too much, they will attract even more poor people, further increasing the costs to local governments and triggering an even greater exodus of well-off people and businesses.
There are certainly limits to what local governments can accomplish when it comes to addressing poverty; the federal government has many more tools to deal with these issues. But experience over the past few decades suggests that Peterson was too pessimistic. Even in a global economy, local governments have considerable leverage over business practices, job creation, and workplace quality. Most jobs and industries are relatively immobile. Private hospitals, universities, hotels, utilities, and other “sticky” industries—as well as public enterprises such as airports, ports, transit systems, and government-run utilities—aren’t about to flee to Mexico or China if government policy requires them to raise wages, pay higher taxes, or reduce pollution. This makes threats to pull up stakes less compelling and gives cities (and progressives) more negotiating power.
The past few years have seen an upsurge of activism, such as the wave of strikes at Walmart and in the fast-food industry, with employee protests backed by a broad coalition of consumers, community groups, and unions calling for a $15 minimum wage (or, in the case of Walmart workers, a full-time salary of $25,000). A growing number of cities have adopted living-wage and minimum-wage laws.