By Ali Jaffer and Mona Mourshed
The United States has thousands of workforce development and training programs, run by the public, social, and private sectors. Some are excellent; others, not so much. The problem is that we don’t know which are which.
That lack of knowledge is costly. According to the Georgetown University Center on Education and the Workforce, spending on programs in the U.S. for those not going to four-year colleges — everything from federal and state jobs initiatives to on-the-job training, certifications, community college, and employer training — is at least $300 billion a year. But according to the World Bank, only 30% of youth employment programs are successful, with many of those offering only marginal benefit. And most programs have no positive effect at all.
Yet workplace training is more necessary than ever, as technology and globalization continue to change the types of jobs that are available. In a dynamic economy workers are expected to adapt, to change not just jobs but sometimes careers, to pick up new skills when necessary. That requires successful training programs, which means we need to know which ones work.
Most existing training programs do try to assess their effectiveness. Many measure cost per student. Some measure job placement rates. A minority track on-the-job retention. These metrics are useful but miss the big picture, in part because they mistake a program’s cost for its value.
Think about it. If a program has a low cost per student but fails to actually help people forge a solid career, then the fact that the failure is cheap does not make it any less of a failure. Conversely, some programs may promise high rates of job retention, but at such a high cost per student that the program proves impractical or impossible to scale. Or, if the jobs themselves are low paying and don’t offer students a viable career path, they may not be worth it regardless of the high retention rates. Cost per student is good to know, but it doesn’t mean much if students don’t succeed in the workplace. Job placement matters, but a high placement rate is meaningless if the participant leaves after a week or if the job itself is temporary or doesn’t pay well.
Conducting an accurate cost-benefit analysis requires a holistic approach, one that incorporates costs and job placement and also accounts for how participants are doing after they leave the program. We need to adopt something similar to a “total cost of ownership” (TCO) analysis. Now common in industry, TCO considers both direct and indirect costs over time. Applying a form of TCO to workforce programs makes sense because, instead of concentrating on inputs (in the form of spending), this approach emphasizes outcomes (in the form of long-term results).
We have come to this realization the hard way — through experience. For the last two years we have been implementing Generation, a youth employment program that is part of the McKinsey Social Initiative. So far, Generation has served nearly 10,000 young people in five countries: India, Kenya, Mexico, Spain, and the United States. As we sought to measure Generation’s results, we began to understand the limitations of current practice.
We developed a new metric — cost per employed day (CPED) over the first six months — that we believe better defines how well employment programs work.
CPED combines elements of existing measures into a powerful, readily understandable one. It measures the social and economic benefits of employment programs with much greater precision.
Here’s an example. Program X serves 1,000 students at a cost of $1,000 each, or $1 million total. Five hundred individuals are placed into work (a 50% “job placement” rate), and they stay employed for an average of 60 days in the first six months. That adds up to 30,000 days on the job, at a cost of $33 per employed day. Program Y, on the other hand, has an up-front cost of $2,000 per student, but a placement rate of 80%, and graduates stay on the job for an average of 120 days. That comes out to 96,000 working days, or $21 per employed day. Clearly, Program Y, which at first blush looks twice as expensive as Program X, provides far more value in terms of helping participants find and keep gainful employment. At Generation, the CPED figure varies depending on the market, ranging from about $5 in India to $26 in the United States.
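The arithmetic above can be sketched as a short calculation. The figures are taken from the example in the text; the function name and structure are illustrative, not part of any Generation tooling:

```python
def cost_per_employed_day(cost_per_student, students, placement_rate, avg_days_employed):
    """Cost per employed day (CPED): total program cost divided by
    the total number of days graduates spent employed in the tracking window."""
    total_cost = cost_per_student * students
    employed_days = students * placement_rate * avg_days_employed
    return total_cost / employed_days

# Program X: 1,000 students at $1,000 each, 50% placed, 60 days employed on average
program_x = cost_per_employed_day(1000, 1000, 0.50, 60)   # ~$33 per employed day

# Program Y: 1,000 students at $2,000 each, 80% placed, 120 days employed on average
program_y = cost_per_employed_day(2000, 1000, 0.80, 120)  # ~$21 per employed day
```

Note that despite Program Y's higher up-front cost, its higher placement and retention yield a lower cost per employed day, which is exactly the trade-off CPED is designed to surface.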
Debating the utility of specific metrics might seem like a minor thing. But adopting more-accurate measures of success increases accountability. And accountability drives results.
For example, once Generation managers realized the power of CPED, they used it to make operational improvements. On the basis of what we learned from CPED, we began to work more closely with employers to track retention rates and we increased our emphasis on mentoring in the first days on the job. Generation is also developing tools to improve data collection and management. While the data needed to make comparisons with other job training programs does not yet exist, our sense is that using CPED would reveal tens of billions of dollars in inefficient spending, in the form of programs with subpar CPED performance.
Perhaps the biggest challenge to widespread use of CPED is that workforce development programs are fragmented, with thousands of providers and almost as many ways of doing things. That makes getting basic information next to impossible. And because reporting requirements vary from place to place, practitioners spend an inordinate amount of time fulfilling compliance obligations that may be pointless.
CPED, by contrast, provides a simple and effective way to measure performance. For it to be adopted more widely, or even to become standard, all programs would need to collect data on cost per student, job placement, and retention. In addition, to enable everyone to learn what works, there should be a centralized database in which this information can be gathered and then easily accessed. Funders could help by adopting CPED and mandating that programs collect the necessary data.
Despite the promise shown by CPED, we have significant work ahead to improve this new metric and make it the standard across training programs. Today, for instance, many programs would struggle to measure CPED at the three-month mark, let alone at the six-month mark. Our hope is that once Generation and other programs take this next step, we can extend the timeline for CPED, and perhaps even incorporate wages, both of which would make CPED a richer, even more accurate metric. While CPED can continue to be improved, it is a big step in the right direction and can help us better measure the effectiveness of worker training programs.
“What gets measured gets managed” has become a cliché. Like many clichés, this one earned its status because there is a large element of truth to it. In a world in which 73 million young people are unemployed and over 200 million more struggle in unstable or dead-end jobs, it is surely possible to do much better. Data and metrics are part of the solution.
Ali Jaffer is the global head of operations for Generation and is based in Toronto with the McKinsey Social Initiative.
Mona Mourshed is global executive director of Generation and a senior partner with McKinsey & Company in Washington, DC, where she leads the education practice.
This article originally appeared on HBR.org.