The first step for nonprofits toward using Big Data is getting access to large data sets that correlate many different attributes. NPQ wrote about where and why to get some Big Data last fall, and since then, more voices have joined the conversation about how to acquire this data and turn it into information. Data is now being created far faster than we can evaluate it: a new IBM report tells us that over 90 percent of the world's recorded data was created in the last two years, at a rate of 2.5 quintillion bytes daily.
The point of these overwhelming numbers isn't to send underfunded nonprofits cowering because the mountain of data seems so big. The hope is that we can learn something this year that we couldn't have learned even a couple of years ago. We have to ask the right questions and be willing to put a stake in the ground based on intuition until we can be certain the data backs us up. Now that nonprofits are using not only the data they collect directly but also related data sets collected in larger operations, we have to surge past some of the early-adopter problems that can stymie the sector.
A key challenge of using Big Data, according to Rae Ann Fera on Fast Company's Co.Create blog, is that data cannot substitute for action. One must be willing to take creative stabs at action, and then measure with data. (Rinse and repeat, probably many times.) The data itself cannot dictate a course of action in a community; it can only help measure change. Nonprofits need to hypothesize what actions will create measurable change, but a hypothesis doesn't just float out of a pivot table. As flawed as our gut guesses may be, we can't wait for perfect data analysis to dump an undisputed answer into a work plan.
When we take our stab at delivering work meant to bring about social change, we have to measure, and we must be ready to abandon that gut reaction if the data tells us we're barking up the wrong tree. Experts in Big Data analysis, however, caution against leaping to conclusions too quickly. Correlation does not equal causation, and before concluding that we are either right or wrong, we may need a few more attempts. Fera's interview with Joe Rospars about his work on the Obama campaign emphasized that nonprofits need creativity to understand Big Data. We have to be "humans understanding humans" using Big Data, not data scientists understanding data structures (though, of course, we love those folks, too).
Nonprofit missions tend toward Big Audacious Goals, which can make them more prone to data paralysis than a more focused business might be, one looking simply to increase sales in one region next quarter. Maybe your nonprofit does job training, and you want to measure reductions in unemployment. If we see a change in unemployment numbers, is it really because our job-training program got people to work? Or is it because some people simply gave up and are no longer counted as job-seeking individuals in one data set? Either could produce the same result in a single data source. But if we look to Big Data and delve further into the numbers of all people in a service area who are not working, not just those counted as "unemployed," we may be able to draw better conclusions. High concentrations of retired or disabled individuals could skew one data set, as could large numbers of new Americans not yet working because of language barriers but definitely trying to join the paid workforce. Numbers of jobs created could be less significant if many of those jobs are going to people already employed. Most importantly, we can't stop providing services until someone figures out the perfect model. We have to admit errors when we can find them, talk about them so others can learn from our missteps, and then try another path.
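The gap between the headline unemployment rate and the broader count of people not working can be sketched in a few lines. All of the numbers and category names below are invented for illustration; they are not drawn from any real data set.

```python
# Hypothetical service-area figures. A person who stops searching for work
# ("discouraged") drops out of the official unemployment count, so the
# headline rate can fall with no one actually getting a job.
service_area = {
    "working_age_adults": 10_000,
    "employed": 7_000,
    "counted_unemployed": 900,       # actively seeking work, per official survey
    "discouraged_not_seeking": 600,  # gave up searching; not counted as unemployed
    "retired_or_disabled": 1_100,
    "new_americans_not_yet_working": 400,
}

# Narrow view: the headline unemployment rate (unemployed / labor force).
labor_force = service_area["employed"] + service_area["counted_unemployed"]
unemployment_rate = service_area["counted_unemployed"] / labor_force

# Broader view: everyone of working age who is not working, whatever the label.
not_working = service_area["working_age_adults"] - service_area["employed"]
not_working_share = not_working / service_area["working_age_adults"]

print(f"Headline unemployment rate: {unemployment_rate:.1%}")
print(f"Share of adults not working: {not_working_share:.1%}")
```

With these invented figures, the headline rate is about 11 percent while nearly a third of working-age adults are not working, which is exactly why a single data source can mislead a program evaluation.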
Nonprofits should welcome engagement from supporters and critics who are churning through the same Big Data. Every new criticism can be met with "let's look at that, but we have to keep going in the interim" rather than "that doesn't fit our model." The more we exchange data that doesn't fit, the closer we come to finding solutions backed by more than one source.
Steve Boland is a nonprofit technology and operations specialist. Steve holds a Master of Nonprofit Management degree from Hamline University and is a regular contributor to Nonprofit Quarterly. He can be reached at [email protected] or twitter.com/steveboland.