You are a professional sociologist. In your experience, is it common for evaluators to have a background in sociology or psychology and a professional mastery of SPSS and similar programs? Or is that really not necessary?
Fortunately, there is additional training available. Much of my sociological education was strictly academic, and I had to learn evaluation by doing. When I started, there were no courses and no seminars. I had to take sociological methods such as interviews and surveys and adapt them to evaluation. Psychologists had to do the same with tests, and economists with cost-benefit analysis. Working in evaluation requires the ability to adapt these academic research methodologies; that is a key skill.
Let's talk about the so-called soft skills. Are they really important for an evaluator?
That's a good question. The American Evaluation Association has established a set of competencies that are essential for an evaluator. Of course, these include methodological competence, project management, and knowledge of professional and ethical standards.
And then there are interpersonal competencies. Evaluators must be good listeners, able to work with people from different cultures. They must treat people with respect, be able to make connections, communicate well, solve problems, and manage conflicts.
One of my recent books focuses on the relevant skills for working with stakeholders at various levels. This is an important change, as evaluation today is not only about techniques, but also about interpersonal relationships.
In some of your publications, you mentioned the concept of evaluative thinking and suggested sharing this type of thinking with the project team and with the client. Doesn't this threaten the existence of evaluators as a profession? Wouldn't evaluation become just a management function?
I don't consider this a danger. You described the situation very well, but evaluative thinking in management only means that you have better communication because you speak the same language. People understand the concepts of logical frameworks, theories of change, SMART goals, and different types of evaluation for different purposes. But to do the actual evaluation, you still need a fresh perspective; you still need people who can look at things differently.
When people do have an evaluative mindset, they begin to appreciate having a specialist who helps them look at things through the eyes of an outsider. It is something I do often. As an evaluator, I help managers, employees, and even shareholders understand how evaluators think, how they define criteria and interpret data, and how they draw conclusions. But they would still need an evaluator to make their job easier, to have another pair of eyes looking at things from a different perspective. Helping people think in an evaluative way is about improving communication and understanding of what's going on.
So you don't think that evaluation will become just a part of management, because the evaluator has other functions besides helping you understand what's really going on.
Yes, that's right. Evaluators also provide an additional point of view. Even where there are internal evaluators in management, their job is to do the evaluation, to make sure there's good data, and to make sure people take the time to interpret it. I believe the evaluator function will remain, but the better management understands evaluation, the better the communication and use of evaluation results will be.
In your 2009 article, "The Future of Program Evaluation," you wrote that the gold standard of evaluation methodologies would be hotly debated over the next few decades. Can we say that the answer to this question has changed significantly since then? Or do we still have the same leader, randomized controlled trials?
The debate continues, but it's becoming increasingly clear that in a rapidly changing world, we need fast-turnaround techniques that produce real data in real time. Randomized controlled trials are really not very useful in a complex, dynamic, rapidly changing world. They work best in a stable situation where there are clear, fixed interventions.
If you have to work on topics like pandemics, climate change, or civil unrest, when things are changing rapidly, randomized controlled trials are not appropriate: they are too rigid and too time-consuming.
The speed of change means that the evaluation must produce results quickly, in real time, and provide them just as quickly to people who need to make decisions in a time-constrained environment.
The evaluator's profession is changing in response to what is going on, but evaluators themselves are also changing the world. What do you think this profession has already given to the world?
I think we are living in a time of a worldwide battle between approaches based on evidence, facts, and science, on the one hand, and, on the other, a part of the world that ignores them and seeks only to promote ideology. Evaluation stands on the side of evidence in this worldwide battle. Evaluation proves that if you use data, you make better decisions and help people more.
We have seen this at work with people who refused COVID vaccination and who don't believe the pandemic and vaccination data. In the so-called post-truth or anti-science world, where people can say whatever they believe and assume it to be true, evaluators help people appreciate the importance of evidence, of looking at reality and working with it rather than with subjective perceptions and beliefs. This is our contribution.
You are the author of many program evaluation concepts used by evaluators around the world. We would like to discuss some of them. The first question concerns the concept of utilization-focused evaluation. In it, you emphasize the client's role and interest, their desire to participate in the use of the evaluation results. Nowadays, investors in social projects and even the state (as in the social impact bond model, SIB) often become the evaluator's clients. They are not involved in the project itself, but are its customers. Is it possible to apply the principle of utilization-focused evaluation in this specific situation? Are there any peculiarities to its application in this case?
This is an important question. Utilization-focused evaluation requires considering different levels and different types of stakeholders, whom we call intended users. Who is this evaluation for? If the intended users are social investors, then I would want to engage them in defining the criteria they are looking for. What do they want to know to make a better investment?
When I work with social investors, part of what I do is help them understand, as clients, that there are different ways to do evaluation, because there are many different types of programs. For example, when I work with boards of directors investing in social projects or charitable foundations, I do an exercise with them. I make a list of different types of financial instruments: blue-chip stocks, high-growth stocks, undervalued stocks (that is, stocks of companies in transition or struggling), long-term bonds, short-term bonds, and mutual funds. These people are well aware of the different types of financial instruments and the differences in their profitability. Then I take a set of projects and draw parallels between the different kinds of projects and the different kinds of financial instruments.