La Alianza Hispana, a community-based, multi-service organization in Boston, has used evaluation as a central management practice to transform the organization from one that “was a little scary and dreadful,” “very hierarchical, stiff, and heavily fractionalized” into a place “that I love to come to,” and where “everyone participates in our mission.” Because evaluation can be confusing to newcomers, we are extensively featuring one organization’s experience with instituting evaluation to give you a more thorough view of what it means in real life. Without question Alianza has experienced profound changes in its operations, which gives us all hope that evaluation is truly beneficial to constituents. Listen in on the conversation Molly Weis had with some Alianza staff members: Rosita Colon, director of Central Services, Administrative Support Team; Carlos Martinez, executive director, Administrative Support Team; David Ortiz, program manager, Youth Department; Carlos Telles, director of personnel, Administrative Support Team; Doralisa Torres, administrative assistant, Public Health Unit and Senior Services.
Carlos M.: Alianza was a classic nonprofit. We had no systems to do any type of evaluation. We were doing a lot of output measurements: just looking at the numbers of people served and workshops provided. Program managers would say, “I know how the programs run. I know the benefits we provide.” But we had no documentation to back up their claims. Or I would hear “You’re not trusting me. These foundations and funders are asking us to change and we are doing a good job. Why can’t they understand that? The people who are going to suffer are the clients themselves, because I am no longer able to provide the direct care I was providing before.” The staff worried mostly about too much paperwork and time away from their existing responsibilities. In part, this is true: starting an initiative that involves change creates more work, but over time it has just melded into our everyday operations and does not feel like extra work.
Instituting evaluation also created conflict between the old and the new way of thinking. Shifting to evaluation really is more about changing people’s mental models or perspective about their work. It’s about helping them understand the why of evaluation. I had to get to know those who resisted and understand their concerns and build trust. Get to know them on a personal basis. In addition, I had to model the behavior I wanted to see in the organization. Modeling was very, very important. It’s amazing how far we have come since 1995. One person said to me, “You can tell whose people have not bought into evaluation and teams.” For me that was wonderful to hear because if everyone in the organization knows who’s thinking in the old way, then we have come to a new beginning–a new set of cultural values.
Internally, it has been wonderful. It’s been a lot of work. But to see people thinking and designing programs differently–they are thinking outcomes and models, and not just the activity. They are thinking long term and the bigger picture. People now see the benefits and its relevance. It’s really amazing how we have come along over the years. A few years ago, people would wait to be told what to do. I had to come up with ideas. Now they are generating new ideas. We are able to unleash people’s creativity.
Doralisa: I would add that the organization feels free and relaxed and by feeling that way people work harder. If I’m treated with respect, in exchange, I will work hard or may work late one day. Before I was a strict nine-to-five person.
Carlos M.: As a community-based organization, for me the driving force behind outcomes measurement is to create social change. Social change in the sense that people become more social and become economic participants in our community, as well as get involved with civic participation. If we cannot demonstrate that our programs are contributing to social change then we are not doing our job as a community-based organization.
Carlos M.: Before I discuss the process experience, it’s important to understand that we worked on two issues–changing to a team-based, participatory management system, and beginning evaluation. The team system must go hand in hand with evaluation. When the information comes back from measurements, the entire staff needs to be engaged in the redesign of the program. Participation is essential.
So to give you a feel for our change process, in December of 1995 we hosted a voluntary retreat to introduce a total quality management initiative–which is process evaluation–and we introduced a team-based management system that involved considerable collaboration and exchange of ideas among all staff. Initially few staff participated in our improvement teams–at some meetings only five or six people attended. At the time it was very discouraging. But we stuck to our guns and little by little more people joined us.
Then in the third year, 1998, we started our outcome initiative. With two years’ worth of experience in quality improvement, the staff was at a different level now. They had bought into change. In terms of outcome evaluation, we wanted to know how we were really going to document and measure the good stuff. Previously, we had no documents that told us the benefits that we accomplished. Today, you can really see the difference. Our last retreat had full staff participation.
Carlos T.: I would add that we relied heavily upon third parties to keep us on track because we were stumbling a lot in implementation. The real change came when Carlos Martinez restructured central administration as a model for the rest of the organization. After that the departmental work really followed.
Carlos M.: The staff had to see how evaluation improvements could play out. That took a lot of time. One day I may have one perspective. A couple of months later, because I learned something new, I changed my mind. And you need to illustrate how the new information changed your mind. I emphasized that we learn from the practices we implement. We look to see how the intervention worked. We built time for reflection, to discuss if it was working. We’re not basing our work on a model in a best-practice book.
Some people–including funders–think as they go into this that we can start measuring things in a year. That is really impossible. After a year, people just start figuring it all out. That’s if all goes well. We are in the second year of a three-year process. The first year was for getting people’s mental models into place, the second is for starting to develop the tools for measuring the outcomes, and in the third year we’ll finally implement those and test them. Finally we hope to get good data so we can say, “Wow, this is how we can redesign our programs for the future.” We want to institutionalize quality improvement and outcome evaluation as a way of thinking. That, for me, is a key issue. How do you sustain evaluation improvement for the long term?
Carlos M.: Technology is critical to the collected information, and you need to do something with it right away. For us that means creating databases so that we can immediately query for information and people can see how it informs decisions. To get this done, we’ve hired consultants to design databases and invested in technology. Each department is at a different level. Our education department is using computers. Our employment and training division already has their database in place. Our clinical unit is doing that right now. So we’re doing one program at a time.
Carlos T.: We’ve had one training in outcome measurements, so this is very new. I liked outcomes evaluation from the outset. But outcomes measurement forces you to think about just what you do. Our job descriptions have really been nothing more than a to-do list. If you could do this laundry list, then you could claim you were a good employee and getting the job done. Frankly, anybody could do their job description and not produce a single positive outcome and, in fact, could produce totally negative outcomes. So for human resources, forcing you to think about what you do gets you to think very differently about your job. It forces you to think about customer orientation; now the customer is not just the client but anybody you have contact with–face-to-face, hand-to-hand, or through the mail. How will my work impact other people’s work and how do I relate to others in their department, the organization, and out in the community? So over the last year we have been looking at our jobs and outcomes. Now the to-do list is set in a larger context of outcomes. You’re hired to produce positive outcomes.
David: In the after-school program, being able to see the improvements in the kids is great because we’re not thinking anymore about what’s right for the kids–like providing space after school. Now we are seeing what’s happening to them. That’s the great part of it. So now we receive all the report cards and take their total grade average and track it. When we find weakness in subjects, our computer instructor may create a curriculum for that student. The computer also keeps track of their improvement, if any, and the parents, the child, and the teacher receive a copy of the report.
In terms of quality improvement, we have weekly meetings with staff, volunteers, and the students and work on ways to improve activities and learning.
Doralisa: Today we think differently about the senior citizens’ activity schedule. They love bingo, but we realized it limited them from learning and participating with each other and the community. Now we’re taking field trips to festivals and having activities like Let’s Speak English. They joke about practicing English, but they really are improving.
Rosita: Also, we are implementing reading and English as a second language (ESL)–three people are enrolled in classes. We also have time to read newspapers and listen to the radio and this keeps them involved. Recently, the governor of Puerto Rico and Scott Harshbarger visited us, and the members felt so proud when their picture made the newspaper. The members also help achieve our goals: when one member was not participating, the other members picked up their table and moved it over to the quiet person.
In terms of quality improvement, I’m more conscious now to take minutes as a recording method.
Carlos M.: We have a new method for a department-generated operating plan for the fiscal year. We start with the outcomes logic model that United Way structured, look at our desired outcomes, and then work backward. This determines the kinds of activities, like planning, evaluation, and training, that we have to put in place to be able to accomplish our outcomes. Do we need to redesign our curricula? Perhaps invest in other areas?
Each department’s operating plan covers its vision for the department; strategic positioning and core competencies; target population; services; outcomes and outputs; investment in improvements; competitors; funders; operating budget; technology plan; and internal collaborations with other departments and outside agencies.
Carlos T.: Each department developed surveys and questioned its clients and community, inviting them to inform the development of the operating plan.
Carlos M.: Last year it was amazing how close our estimated budget was to our actuals. Last year we estimated $1.995 million and we ended up with $2.015 million. I should add each department does their plan a little bit differently because they are at different stages of their development.
Carlos T.: The operating plan keeps everyone informed and in the loop, so they can be courageous about being creative and offering input, and can feel that they impact what we are going to do and how we are going to do it.
Carlos M.: Those who understand outcomes think it’s great. For those who don’t, it’s more difficult to know their expectations.
Some funders are not realistic about what an agency can measure. Some want you to measure long-term outcomes–that’s not feasible. For example, in a training program, the long-term goal is for people to become self-sufficient. Now to what degree do I measure it, and for how long do I monitor and follow up? Some funders want to see more than a year of follow-up. I could do that, but not with the funding they currently provide. State funders don’t pay for evaluation; they just pay for the direct service. So if you want to become a viable player in a certain service that you provide, and the funding source is not willing to fund evaluation, how do you institute outcomes measurement in a way that can yield some type of evaluation, but that does not drain the organization–financially or in terms of human resources?
Carlos M.: Funders are asking us to collect this information, but I’m not quite sure to what degree funders are going to be using it. And how uniform are funders going to be among themselves? Is this going to drive competition or lessen competition between agencies? Is it going to mean that everyone who provides youth services for boys will be measured using the same information? How are funders going to really use this information?