Is Charity Navigator’s Revised Rating System an Improvement?

May 31, 2016; The Journal News and the New York Times

NPQ has been covering the controversial role of nonprofit watchdog agencies for years. Among other things, we’ve looked at the confusing picture they present to donors, as each has its own methods of measurement. Rick Cohen took this on in 2013 in an article that looked at how the three top watchdogs rated the embattled Wounded Warrior Project very differently.

One of these watchdogs, Charity Navigator, announced changes to its rating system that went into effect yesterday, and we would love to hear what our readers think of the changes.

Charity Navigator looks at two broad performance areas: financial health, and accountability and transparency. In this update, called CN 2.1, only the financial health rating system, which comprises seven metrics, was changed. To summarize:

  • Program Expenses, Administration Expenses, Fundraising Expenses, Fundraising Efficiency, and Working Capital Ratio went from being calculated based solely on the previous fiscal year to an average of the last three fiscal years.
  • Primary Revenue Growth, which measures growth in income from the work that a charity does (e.g., grants, donations, program fees), was removed from the rating system. Charity Navigator decided this metric did not truly represent the financial health of an organization, as a nonprofit might recognize a large gift in one year yet actually receive the funds over several years.
  • Program Expense Growth, conversely, measures the growth in expenses incurred in doing the work of the nonprofit over the last three to five fiscal years. There was no change to this metric.
  • A new metric, the Liabilities to Assets Ratio, was added to the rating system, based on the most recently completed fiscal year. The intent of including this ratio in the assessment is to call attention to excessive debt.
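The arithmetic behind these two changes can be sketched in a few lines. The function names, field names, and example figures below are hypothetical illustrations, not Charity Navigator’s actual scoring formulas:

```python
# Illustrative sketch only: names and values are hypothetical,
# not Charity Navigator's published scoring formulas.

def three_year_average(yearly_values):
    """Average a metric over the last three fiscal years,
    as CN 2.1 now does for the five expense/efficiency metrics."""
    last_three = yearly_values[-3:]
    return sum(last_three) / len(last_three)

def liabilities_to_assets(total_liabilities, total_assets):
    """The new CN 2.1 metric, based on the most recently completed
    fiscal year; a high ratio signals excessive debt."""
    return total_liabilities / total_assets

# Example: a program expense ratio averaged over three fiscal years
avg_program_ratio = three_year_average([0.78, 0.81, 0.75])

# Example: a balance sheet with $200k in liabilities, $1M in assets
debt_ratio = liabilities_to_assets(200_000, 1_000_000)
```

Averaging smooths out one-off spikes in any single year, which is exactly the rationale Charity Navigator gives for moving away from single-year snapshots.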

An important change that is not reflected in the metrics themselves is how Charity Navigator now treats overhead costs. Previously, an organization could not get a perfect score if it had any administrative expenses. This isn’t a realistic standard to hold nonprofits to, as Charity Navigator’s former CEO Ken Berger acknowledged in a 2013 letter he co-signed addressing the “overhead myth.” Under CN 2.1, as long as an organization’s overhead expenditure falls within a given range, which varies by type of nonprofit, it can receive a perfect score.
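This range-based approach can be sketched as a simple lookup. The category names and threshold values here are hypothetical placeholders (the 15% and 3% figures echo ones a commenter below attributes to Charity Navigator’s methodology page, but they are not confirmed by this article):

```python
# Hypothetical sketch of CN 2.1's range-based overhead scoring.
# Categories and thresholds are illustrative, not CN's published ranges.

# Maximum overhead ratio that still permits a perfect score,
# keyed by (hypothetical) nonprofit category.
PERFECT_SCORE_OVERHEAD_CAP = {
    "general": 0.15,
    "food_bank": 0.03,
}

def overhead_allows_perfect_score(nonprofit_type, overhead_ratio):
    """Return True if the overhead ratio falls within the range
    that still permits a perfect score for this nonprofit type."""
    cap = PERFECT_SCORE_OVERHEAD_CAP[nonprofit_type]
    return overhead_ratio <= cap
```

The design change is the key point: some overhead is now compatible with a perfect score, whereas the old system penalized any administrative spending at all.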

Overall, the new system seems fair and offers a more accurate picture of the health of an organization, which is exactly what Charity Navigator was aiming for. Elizabeth Searing, a nonprofit expert and member of the task force charged with updating the rating system, said, “All of the changes they made are definitely improvements.”

For instance, looking at expenses over time as opposed to just one year allows for consideration of outlier years, such as anniversary years, when organizations tend to spend more on fundraising and promotion.

Even with the new system, only about 25 percent of the organizations rated by Charity Navigator saw their rating change. However, donors and others involved with nonprofit organizations are encouraged to take these ratings with a grain of salt. According to Sandra Miniutti, vice president of marketing for Charity Navigator, “We don’t advocate that a donor jump ship if a rating changes. Rather, a change suggests that donors should seek more information from the charity and see if it offers a good explanation. In that way, donors can educate themselves about the charity and decide over time if they want to continue supporting it.”

One thing that Charity Navigator did not change in the update is its stance on Joint Program Allocations, which has been controversial in some instances (as with the Wounded Warrior Project) but in our opinion is justified. Here is what they say about that:

Consistent with Generally Accepted Accounting Principles (GAAP), some organizations that follow SOP 98-2 or ASC 958-720-45 report a portion of their specific joint costs from combined educational campaigns and fundraising solicitations as program costs. The IRS requires that these organizations disclose the allocation on the Form 990. In most cases, charities utilizing this technique allocate a small percentage of their solicitation costs to program expenses from fundraising expenses. However, we believe that donors are not generally aware of this accounting technique and that they would not embrace it if they knew a charity was employing it, nor does Charity Navigator. Therefore, as an advisor and advocate for donors, when we see charities using this technique we factor out the joint costs allocated to program expenses and add them to fundraising. The exceptions to this policy are determined based on a review of the 990 and the charity’s website (in some cases we review data provided to us from the charity directly). We analyze these items to see if the organization’s mission includes a significant education/advocacy program or other type of program that would directly be associated with joint costs. If that is the case, we inspect in further detail the charity’s expenses in regards to those specific programs. Finally, we review the charity’s website to confirm that there is clarity for a potential donor that the organization in question employs the types of programs that entail joint cost activity.

While we may not agree with the direction Charity Navigator is taking, its intent, which has not yet come to fruition, is to reflect programmatic outcomes on its platform. Michael Thatcher, president and CEO of Charity Navigator, says that the organization’s long-term goal is to include a “results reporting” metric that would take into account how effective nonprofits are at achieving their missions. However, as each organization is different and may use different criteria to assess mission progress, creating such a standardized rating system has been difficult. NPQ’s opinion is that if the watchdogs have taken this long to understand the problems of emphasizing overhead, their ability to measure the more complex picture of outcomes may be nearly nil.—Sheela Nimishakavi and Ruth McCambridge

  • Jodi Segal

    I find the update to be barely an improvement. They still expect an unrealistic one year of operating reserves, judge the growth of your program budget, and deem fundraising and overhead to be such a burden that larger organizations can skew their reporting to boost ratings. I think they need an overhaul, not just a tweak in a couple of categories.

  • It’s a slight improvement, like taking a half-teaspoon of sugar out of a can of Coke. They continue to re-allocate joint costs in contradiction to federal standards, and the methodology has other problems as well, beyond the judgment choices noted in this article.

  • SophieB

    I have to disagree with NPQ’s assertion that Charity Navigator has at long last come to understand the problems of emphasizing overhead. Look a little deeper at the methodology pages on Charity Navigator’s website regarding administrative costs. They still say “Lower is better.” And the ranges for administrative costs for the different scores are just absurd. For a general nonprofit, you need to have administrative costs below 15% to get the highest rating. Heaven forbid you are a food bank: to get a perfect score, you can’t spend more than 3% on overhead. These expectations continue to feed the nonprofit starvation cycle (sorry about the pun). This seems contrary to the letter they signed with the BBB and GuideStar that said more nonprofits should actually be spending more on these costs in order to be efficient and effective.

    But if you want a chuckle, look up Charity Navigator’s most recent 990. Depending on which category of nonprofit they put themselves in, they’ve got a lot of work to do to bring down their fundraising spending in order to get a decent score. I find it hard to believe that the costs of what they do should be compared to the costs of a foster care provider.

  • anitagjen

    I have never considered CN’s rating system to be a fair or accurate way to assess charitable organizations, or to decide whether or not to support one. These changes to their rating system do not change my mind, and I will continue to tell anyone who’s considering donating to any charity to ignore them and their rating system. I will continue to check a charity’s financial health and effectiveness via GuideStar and Googling.

  • Scott Schaffer

    Charity Navigator’s adjustments are tiny relative to the problems the organization should have been trying to solve. Consider that in the “old” system, only one of the seven component metrics of the Financial Health Rating (working capital ratio) had anything to do with financial health. The other six relate to “financial efficiency” and “financial capacity,” which can (and often do) flow counter to financial health. “Financial efficiency” metrics actually discourage smart investments in organizational capacity, making Charity Navigator one of the biggest perpetrators of the idea that “overhead” spending is bad. With these tweaks, just two of seven component metrics in the “new” system relate to financial health. Five do not, and since Charity Navigator nonsensically weights each metric equally in the formula, 71% of the Financial Health Rating is off topic. Charity Navigator has overly complicated the task of determining financial health, and thereby muddled it.

    My organization, Public Interest Management Group, published a white paper on Charity Navigator’s methodology earlier this year. Many hoped forthcoming updates would correct the core problems, but it’s disappointing to report that the paper’s major findings all remain true.

    What’s most interesting to me about this story is the extent to which the nonprofit sector has accepted a deeply flawed methodology. What does this say about nonprofit culture?