Crowds

August 19, 2014; ThinkAdvisor

The rise of crowdfunding raises a host of questions. Will the method produce better or worse results than funding decisions left to experts at venture capital firms or grantmaking organizations? How will the decisions differ? And what will be the social consequences of those differences?

In this fascinating working paper, entitled “Wisdom or Madness? Comparing Crowds with Expert Evaluation in Funding the Arts,” Ethan Mollick of the Wharton School and Ramana Nanda of Harvard University compare the funding decisions for theater projects made by the “crowd” on Kickstarter to those made by a panel of “experts.” The experts were drawn from the ranks of those who had participated in formal grantmaking for the arts. Here is an excerpt from the paper, which is well worth reading:

“We find that crowds and experts broadly agree on project quality, and that the main difference between the crowd and experts appears to be that the crowd is willing to fund projects that experts are not, even when experts are given unlimited funds. Looking at the outcomes, these projects seem to have similar final results to those selected by experts, meaning that the crowd expands the number, and potentially the type, of projects that have a chance of success.

“Based on this evidence, the change from a hierarchical expert‐led system to a mixed expert and crowd‐based one may have large positive effects on the types of innovations that the system produces (Sah & Stiglitz, 1986), as allowing more ideas to come to fruition has been shown to lead to increased innovation quality (Kornish & Ulrich, 2011; Terwiesch & Ulrich, 2009). Similarly, a crowdfunding approach has the ability to include individuals who would not otherwise have access to funds because of the potential challenges of applying for NEA grants: they may not have experience or knowledge of grant writing, may have the wrong skillset to apply, or may be proposing programs that are not within NEA guidelines. A more diverse pool of individuals can further increase innovation (Østergaard, Timmermans, & Kristinsson, 2011). Finally, there are some suggestions in the data that the crowd may be more willing to take a chance on projects with higher variance outcomes than experts might be comfortable with. Though it is not statistically testable, we find that the crowd funded a higher percentage of hits (27%) than the experts (7%), and also the only project that failed due to a quality issue. Increasing the number of high variance projects may lead to more breakthrough ideas.”

So, in short, the research suggests that, at least in the arts, the crowd’s judgment is not only as sound as the experts’ but also likely to result in a broader distribution of funding and, potentially, more innovation and diversity.

We’d love to hear your thoughts on these findings; we find them intriguing and worthy of discussion.—Ruth McCambridge