When a charitable donor wants to do “due diligence” on a potential grantee, a number of sources are available. GuideStar provides access to basic information, largely drawn from grantees’ IRS Form 990 filings. Rating services such as Charity Navigator, the American Institute of Philanthropy (AIP), and the Better Business Bureau Wise Giving Alliance evaluate and grade thousands of nonprofits on a few standardized measures of organizational efficiency. The rating services have drawn much criticism, though their findings can be useful in explaining the financial troubles of some nonprofits, and all are trying to reduce their reliance on financial indicators in favor of measures that say more about organizations’ accomplishments, impacts, and outcomes. But what about someone interested in investing in community development financial institutions (CDFIs)?
CDFIs, the vast majority of which are nonprofits, provide financial services—lending and investment—in low-income neighborhoods. Most people know them as community development banks or community development loan funds that lend for affordable housing and economic development, taking on higher-risk projects that commercial lenders typically eschew. Among the better known CDFIs are Boston Community Capital, Enterprise Corporation of the Delta in Jackson, Miss., the Federation of Appalachian Housing Enterprises in Berea, Ky., IFF (which used to be known as the Illinois Facilities Fund), the Reinvestment Fund in Philadelphia, and Southern Bancorp in Arkadelphia, Ark. Some 860 CDFIs around the nation have been certified at one time or another by the U.S. Department of the Treasury for funding through its CDFI Fund.
But other than checking to make sure a CDFI has been certified by Treasury for funding, how might a private investor—a bank, a pension fund, a foundation—ascertain whether or not a particular CDFI was worth the risk? With more socially motivated capital in search of community ventures, CDFIs have been proving their mettle as solid and reputable actors in community lending, but how could investors conduct due diligence on them?
Enter the Opportunity Finance Network (OFN), the national trade association for many CDFIs. OFN realized that if the industry were to grow, it would have to provide a tool that would standardize the key information about CDFIs and make that information transparent. In 2004, with the help of an advisory board of investors, OFN assembled a team of eight to begin exploring and testing how to assess and rate CDFIs. Paige Chapel, formerly a senior staff member at Shorebank Advisory Services, was one of those eight. OFN made the opportunity to be a ratings guinea pig available to much of its membership. Approximately 50 CDFIs signed up, Chapel told Nonprofit Quarterly, and eight were selected to be the first to be subjected to the CARS review.
Chapel says the obvious assumption was that they “were going to get an A from us” and that ratings would “open up new doors to capital.” This triggered two surprising responses. First, “there were a lot of upset people” when they were “told that they weren’t the best,” Chapel says. It was exceptionally difficult to get a top rating. Next, after going through the process, Chapel says, some CDFIs asked, essentially, “Okay, we got rated, so where’s the money?”
While obviously useful to the rated CDFIs, the ratings were for investors as clients or customers, not for the CDFIs themselves. And at least at the outset, subjecting one’s organization to this rigorous analysis didn’t necessarily lead to the deep pockets of investors.
The program, known as the CDFI Assessment and Ratings System, or CARS, has since issued over 300 ratings opinions on 70 CDFIs, addressing their impact, performance, and financial strength. These aren’t quickie reviews based on 990s. They are deep analyses meant to stand for three years: a 60-page “deep dive” followed by a third-year desk review. Chapel says the CARS ratings are “driving performance among rated institutions...dealing with issues exposing investors to risk.” Although generated for investors, the CARS ratings identify for the CDFIs where they have weaknesses, giving them a tool that they can use for self-improvement.
According to Chapel, the ratings have multiple components (including elements that sound useful as frames for nonprofits in general but are rarely incorporated into charity ratings). For investors, the impact performance rating addresses “how well the CDFI does what it says it is doing”—that is, how investors might know whether or not the organization is achieving its mission. That involves examining whether the organizations are generating data that is actually relevant to their outcomes. The issue isn’t just whether they collect data. Chapel says that the rating assesses how the CDFIs use the data they collect and “whether they are changing their programs and products and services” based on the data feedback. She notes that a common denominator of many of the early rated CDFIs was that they “didn’t have a formal feedback loop. They collected data because they had to, and there was no demonstration that they were adjusting their behavior in response.” The big shift among rated CDFIs, accordingly, has been the creation of formal feedback loops.
CDFIs are lenders, so the financial performance ratings are similar to those the FDIC applies to banks—focusing on capitalization, asset quality, management, earnings, and liquidity, known by the acronym CAMEL. Unlike analyses in the banking sector, the CARS program didn’t use comparative benchmarks, because of the huge variety in the size and complexity of the groups. Chapel says that the smallest rated CDFI in her program has assets of $1.5 million, while the largest has over $1 billion, making industry-level benchmarks pretty meaningless. Rather, CARS examines each CDFI individually for the areas of risk it would present to a potential investor and the mitigation efforts that the organization has undertaken. Chapel and her team found that the CDFIs all did their business differently, so CARS standardized their financials for a reasonable comparative analysis. The most common weakness was asset quality—how secure and collateralized the loans were, whether loans were restructured, and rigor in portfolio monitoring and management. How much were the CDFIs—like the banks—on top of risk management?
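To make the CAMEL framework concrete, the sketch below computes the kinds of ratios such an analysis rests on. This is purely illustrative: CARS does not publish its formulas, the field names and figures here are hypothetical, and the “M” (management) component is a qualitative judgment that no ratio can capture.

```python
from dataclasses import dataclass

@dataclass
class CdfiFinancials:
    """Hypothetical year-end figures for one CDFI, in dollars."""
    total_assets: float
    net_assets: float             # the equity cushion absorbing loan losses
    loans_outstanding: float
    loans_90d_delinquent: float   # loans 90+ days past due
    net_income: float
    cash_and_equivalents: float

def camel_style_indicators(f: CdfiFinancials) -> dict:
    """Compute illustrative CAMEL-style ratios (not CARS's actual method).

    Management ("M") is assessed qualitatively and is omitted here.
    """
    return {
        "capitalization": f.net_assets / f.total_assets,                 # C
        "asset_quality": f.loans_90d_delinquent / f.loans_outstanding,   # A
        "earnings": f.net_income / f.total_assets,                       # E
        "liquidity": f.cash_and_equivalents / f.total_assets,            # L
    }

# Example with made-up numbers for a mid-sized loan fund:
fund = CdfiFinancials(
    total_assets=10_000_000,
    net_assets=3_000_000,
    loans_outstanding=7_000_000,
    loans_90d_delinquent=350_000,
    net_income=200_000,
    cash_and_equivalents=1_500_000,
)
print(camel_style_indicators(fund))
# capitalization 0.30, asset_quality 0.05, earnings 0.02, liquidity 0.15
```

The asset-quality ratio is where the article notes CDFIs were weakest: a rising share of delinquent loans, without offsetting reserves or restructuring discipline, is exactly the kind of risk a CARS-style review surfaces for investors.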
Seventy currently rated institutions isn’t a large proportion of the CDFI sector, much like the few thousand nonprofits rated by Charity Navigator, AIP, and the Wise Giving Alliance. But of the roughly 600 CDFIs currently certified by Treasury, with roughly $10 billion in assets, the 70 rated by CARS manage some $4.5 billion. The obvious bias, then, is size. The rigor of the CARS analysis compels rated groups to do “a ton of work,” Chapel notes. “They spend weeks or months getting ready to get ready,” she says. “A tiny organization would have to stop doing some of its work to prepare for the rating process.”
How does CARS differ from the ratings systems of the Wall Street investment rating firms? The customer is the investor, who buys the ratings reports. While the CDFIs have some “skin in the game,” Chapel says, the primary customer base is the investor, and, “if the ratings aren’t objective and reliable, the investors won’t use them.” Based on surveys (the last one done in 2010), the CARS system has improved from the initial eight-group test run in 2004. CARS has helped the larger CDFIs (those with over $50 million in assets) to better raise capital, generating some of the investment response that the initial eight had hoped for. For the smaller CDFIs, the ratings have helped them understand the weaknesses they needed to address.
The Design Tweaks
Eventually, CARS spun off from the trade association, as OFN had intended. Now separate from OFN, CARS can overcome the perception of a structural conflict of interest, for example, in considering a role in underwriting CDFI bonds, which it probably couldn’t have done at OFN since OFN was an advocate for CDFI bonds. Today, it would be no surprise to see the impact investing movement turn to CARS for help in assessing CDFI risk.
Nonetheless, there is room for expansion and further development of the rating system. Although CARS has collected a deep, standardized data set on rated CDFIs, drawing on some 300 data points collected annually, Chapel says that it would be much better for investors and CDFIs to collect the data quarterly. CARS could also work with CDFIs on a data collection platform, something like bank call reports but with more of a focus on financial indicators than on transactions. And it might do well to expand the list of investors that subscribe to the CARS ratings; currently, half of the subscribers are financial institutions and more than one-fourth are foundations making Program Related Investments (PRIs), with the balance being social investment funds.
While the CARS ratings have helped advance the CDFI industry, CARS’ implementation occurred during the national “meltdown in the real estate sector [that] has reversed so much of the good work” of community developers, Chapel says. She describes the impact of foreclosures as “clefts in the dental work of neighborhoods.” The areas of poverty in which CDFIs invest are no longer “ghettoized,” she notes, but are much more geographically dispersed beyond inner-city neighborhoods. With the limitations on public subsidies and the reluctance of private lenders to do much in poor neighborhoods, CDFIs have had to adjust to the fact that their moneys—often upfront predevelopment lending, construction financing, site purchase funds, etc.—will be in the deals longer. This is consistent with the trend in the affordable housing world as state governments that used to fill financing gaps no longer have the money, thereby delaying closings for permanent financing. As Chapel says, “everything now takes longer.”
The Nonprofit Test Drive?
Are there lessons for nonprofits from the CDFI rating experience? Yes, and one is the importance of courage. A retreat into defensiveness or a resistance to external review is a signal to investors—or donors—that an organization might have something to hide. Sticking out one’s organizational neck by allowing an entity like Chapel’s to do a “deep dive” is a statement of the courage of nonprofit convictions. Not all organizations will emerge from this kind of 300-data-point analysis with top-notch ratings. However, the willingness to do so, and the ability to take advantage of the ratings to develop a program of organizational self-improvement, sends a positive message to investors—or donors.
Everyone knows that nonprofit transparency that depends solely on 990s is pretty thin. Until nonprofits start regularly making audits public and opening themselves up to qualified teams comparable to Chapel’s staff, they will be missing opportunities in the realm of social investment funding. Ultimately, CARS examines more than just a CDFI’s real estate and loan portfolio. It is geared to evaluating CDFIs as organizations—how well they articulate their goals, how well they collect and use feedback, whether they are smart enough and capable enough to use feedback to make necessary program and product changes. Nonprofits would be well advised to explore the CARS system—with investors (or donors) as the customers—and discover how they would benefit from independent, objective experts conducting similar “deep dives” on their own organizations.