Effect of Social Benefit Programs Chronically Underestimated by Public


November 2, 2015; Washington Post, “WonkBlog”

On the 50th anniversary of President Johnson’s War on Poverty, there’s new evidence that social safety net programs are working better than most people believed. The new understanding comes from looking at experiential data.

The Washington Post explains what’s behind a series of stories that have appeared over the past several months. In the Post story, Dr. Bruce D. Meyer, a professor at the Harris School of Public Policy at the University of Chicago, describes how he and a colleague matched data from the Census Bureau’s Current Population Survey against actual service data from the State of New York:

“We’re talking about a huge gap,” said Meyer, whose findings are in a recently published National Bureau of Economic Research paper. “When the numbers are corrected, we see that government programs have about twice the effect that we think they do.”

Dr. Meyer’s paper is also the basis for a CityLab story, “The Benefits of Housing Vouchers Have Been Grossly Understated.” Earlier this year, the Center on Budget and Policy Priorities had a similar story that found that Housing Choice Vouchers (HCV) were “the most effective tool to help homeless families with children find and keep stable housing.” The article by Douglas Rice, “Major Study: Housing Vouchers Most Effective Tool to End Family Homelessness,” was based on a HUD study of a range of HUD-funded efforts to prevent homelessness. Then, this past week, the Center on Budget and Policy Priorities issued a new report designed to urge Congress to expand the HCV program in the 2016 budget. “New Research Reinforces Case for Restoring Lost Housing Vouchers” builds on another study by Harvard’s Raj Chetty, who analyzed HUD’s Moving to Opportunity (MTO) data. Douglas Rice of CBPP writes:

Children whose families moved to low-poverty neighborhoods when they were young were more likely to attend college and less likely to become single parents as adults than children in control group families that did not receive an MTO voucher; they also earned significantly more as adults.

What ties all these stories together is that the studies are based on real experience, not opinion or ideology.

So why does the Washington Post describe the discovery of program effectiveness as having “serious implications for the poor”? After all, these new analyses make the case that some social welfare programs are doing better than policymakers thought at reducing or preventing poverty. The danger, the Post article suggests, is that decision-makers might conclude that things are not so bad and that they can relax efforts on behalf of the poor. Citing Dr. Meyer, the Washington Post writer Roberto A. Ferdman observes:

The official poverty rate now is higher than it was three decades ago, but by almost any measure the poor are better off than they were then. Meyer believes that a more accurate gauge would show that things are better or, at the very least, not worse.

However, if the message is that some programs are working, Congress seems not to be getting it—at least, not yet. Just this summer, Representative Jeb Hensarling, chair of the House Financial Services Committee, which oversees HUD authorizations, invited Americans to offer alternatives to the 50 years of failure of HUD programs. “For whatever good HUD does, it clearly has not won the War on Poverty. Only economic growth and equal opportunity can do that.” What if the data now show clearly that some programs do reduce or prevent poverty? What if the limiting factor is, as the Center on Budget and Policy Priorities argues, that the programs deserve more funding in order to achieve more success?

There’s another interesting angle to the Washington Post’s story. Dr. Meyer’s explanation of “the growing problem” has an eerie resonance in the context of the repeated political polling failures of the past year:

The truth is that surveys in general are becoming problematic. They are widely used in social science, and regularly relied upon in public policy, but they are fickle things. When people tell the truth and eagerly take part, as Americans did for many years, they tend to be wonderfully accurate. When people grow tired of answering questions, when they shy away from sharing truthful information about themselves, the gap between what surveys suggest and what is actually true begins to grow.

This hypothesis seems relevant to the recent failures of political polling in the Israeli, United Kingdom, and Canadian parliamentary elections and, most recently, in the statewide races in Kentucky, Mississippi, and Ohio. Pollsters have struggled to explain this pattern of failure to the media, blaming it on all sorts of technical breakdowns. Pollsters’ credibility and livelihoods could be on the line, but the idea that respondents are lying to pollsters has the ring of common sense.

While the hidden impact of social programs seems like good news, social service progressives may want to step lightly for now. Overpromotion of preliminary findings has a way of backfiring as academic researchers sift through more and more experiential data. It’s also important to understand that, judged by data based on experience, some HUD programs did not show great success. Still, there’s a strong argument that some social programs can’t simply be dismissed as ineffective boondoggles. Can you imagine if Congresspersons were required to distinguish between social initiatives based on real data, not opinion or ideology?—Spencer Wells