January 4, 2018; The Conversation

Carolyn Axtell, part of a research team at the University of Sheffield, reports in The Conversation that in the United Kingdom, 25.7 million working days were lost to work-related ill health in 2016–2017. “Analysis across different data sets held within organizations could possibly improve our ability to predict and prevent such problems arising,” Axtell adds.

Fitness tracking devices, Axtell notes, which some employers already provide to their workers, are one way to generate health data. “But such an approach,” Axtell cautions, “focuses on limited aspects of individual well-being and not on the organizational systems and pressures that contribute to these outcomes. Concentrating on individuals without also considering organizational factors,” Axtell adds, “is likely to be less effective in the long term.”

What other tools are available? Well, as Axtell notes, there are many possible data sources in today’s digital world. Axtell explains,

Employees provide huge amounts of data about their activity through their use of modern information and communication technologies. Such data includes log-in and log-off times, email traffic, use of mobile devices for work purposes, use of work-based systems and web access.

This could be linked with other data sources to find work patterns that relate to well-being. For instance, growing workload might be highlighted in part of the organization through analysis that reveals rising work hours, fewer breaks, logging in more often at weekends (or during holidays) and more sick days. Emails could also be processed using sentiment analysis for language that reveals well-being problems. This could provide an early warning sign, allowing the problem to be fixed before employees reach breaking point.

So, how do you feel about your boss looking through your email to, for example, determine whether your attitude is “positive, negative, or neutral” as a way to help identify “well-being problems”? Would you feel protected—or, to the contrary, horrified?
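To make concrete what this kind of email screening involves, here is a minimal sketch of lexicon-based sentiment scoring, the simplest form of the technique Axtell describes. The word lists and labels below are hypothetical illustrations, not drawn from any real workplace monitoring product; production systems would typically use trained statistical models rather than hand-picked word lists.

```python
import re

# Hypothetical word lists for illustration only; a real system would use a
# much larger lexicon or a trained model.
POSITIVE = {"great", "thanks", "happy", "good", "excellent"}
NEGATIVE = {"overwhelmed", "exhausted", "stressed", "impossible", "burnout"}

def sentiment(text: str) -> str:
    """Classify a message as positive, negative, or neutral by counting
    matches against the two word lists."""
    words = re.findall(r"[a-z]+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Thanks, the draft looks great"))   # positive
print(sentiment("I am exhausted and overwhelmed"))  # negative
print(sentiment("Meeting moved to 3pm"))            # neutral
```

Even this toy version makes the privacy stakes plain: the classifier only works by reading every word of every message it scores.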

Axtell suggests that “using big data for monitoring well-being could have positive effects if conducted in a culture of care and trust, where employees and the organization co-own and co-design the data collection effort, analysis and resulting actions.” A code of conduct that includes the right of employees to “opt out” of participation would be an essential element, Axtell says, adding that, in the best-case scenario, “If the data analysis reveals work processes that are a risk to well-being, then the organization would take responsibility to change the way work is done or managed and would design this intervention with employees.”

Of course, Axtell acknowledges, “If data is collected within a culture of fear and distrust, then concerns about ‘Big Brother’ and how data might be used could create an environment where employee well-being suffers dramatically.”

But do we really believe that a code of conduct and the right to opt out would be sufficient protection? The Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society identifies four principles for guiding the use of digital data. In many respects, these principles accord with those Axtell enunciates. The first two, protection and privacy, parallel Axtell’s code of conduct and opt-out provisions. The last two, openness and pluralism, speak more to the co-ownership and co-design principles that Axtell raises.

That said, the Digital Civil Society Lab is a good deal more specific: “Pluralism,” for example, is not only about including “diverse voices and approaches to governing digital data” but also speaks to ensuring “the transparency and auditability of data and algorithms.” As NPQ’s Cyndi Suarez noted last June, algorithms can easily encode prejudices and biases, as MIT graduate student Joy Buolamwini’s research has documented.

Regarding data openness, the Digital Civil Society Lab also warns that “designing data practices with openness and sharing in mind from the beginning requires the development of appropriate consent and privacy practices.” As for privacy, the Lab notes “the best approach to data privacy for most [civil society organizations] is minimum viable data collection. Don’t collect what you can’t protect.”

As Agata Piekut—who runs a nonprofit research organization out of Warsaw, Poland, called the Healthy Culture Action Tank—noted in a 2016 article published by Digital Impact, nonprofits and others tend to “focus too much on the benefits of data and not enough on the potential risks.” Yet, she adds, “The civil society sector is strategically placed to help people regain control over their data at a time when their data are more valuable than ever. An ethical approach to data management in the nonprofit and civil society space—particularly in the healthcare sector—must be a top priority.”—Steve Dubb