Lies, Statistics, and the PMO
3. Data becomes valid, or meaningless, in the analysis phase.
Especially when a survey includes qualitative questions (open-ended questions that elicit individualized answers), analyzing the results is a task not just of number-crunching but of creative, critical thinking. Good numbers don't always translate into sound conclusions. That's why you should...
4. Go to the source.
What is the source of the statistic in question trying to sell you? On the surface, it may seem like disinterested information, but in today's society, we are frequently the targets of marketing disguised as information. When collecting research studies, go to the source instead of relying on quotes. I've recently seen some of our Center for Business Practices findings misrepresented in press releases—and from a university, at that. More than that, look at the numbers yourself, not just at the packaged results offered in an executive summary. Your knowledge of your own industry may allow you to draw conclusions from the data that the report writer missed.
5. Remember that even the best surveys raise more questions than they answer.
Otherwise, research would have ended a long time ago. The thoughtful researcher, or consumer of research, won't accept as final any results that seem counterintuitive. When I see an assertion such as "instituting career paths has no effect on project success rates," I immediately wonder: Why not? How long have the career paths been in place? Do they offer technical and non-technical tracks? Are professional development costs supported by the organization? Are project managers more interested in other forms of reward than promotion or advancement? This is why articles in scholarly journals normally end with a section listing all the further research needed to explain their results!
6. When doing internal research, be mindful of the Hawthorne Effect.
Merely by the way you structure your metrics-gathering, you can alter the processes and outcomes that generate the data. The best example (with the worst result) that I can think of comes not from project management but from the human services field: when a state child protection agency began reviewing the metric of "cases closed," suddenly closing cases became more important than actually protecting children, with predictable results for the kids. Research on project management, thankfully, isn't likely to create such dire consequences, but it's easy to imagine that project managers interviewed about, say, whether their use of reward incentives translated into higher project success rates might start looking around for a new job.
For more good tips and resources on statistics and internal research, see the December 2002 issue of People On Projects. (This issue isn't available online, but back issues can be ordered in PDF format by contacting email@example.com.)
About the Author
Jeannette Cabanis-Brewin is editor-in-chief of the Center for Business Practices, the research arm of project management consultancy PM Solutions.