Economics & Research Blog

Can We Trust the Economic Data We Get?

By Dr. Joe Webb
Published: October 17, 2014

In 2012, the New York Post economics writer John Crudele reported that there were problems with the employment surveys being conducted in the Philadelphia office of the Bureau of Labor Statistics. The reports were from a confidential source, and that source was proven to be correct.

A Congressional review of the matter was initiated, and the investigation's findings were not encouraging. The bottom line is that the employment field survey data can be manipulated if someone wants to do it. Here are some of the comments from the report to Congress:

“the Current Population Survey is vulnerable to data falsification and that the Census Bureau needs to make common sense reforms to protect the integrity of survey data.

“It is impossible to match logged activity with the employee who performed it with certainty. Some records and case notes can also be edited or deleted with no record of the changes made.

“...the Bureau’s current practices make it difficult to report or track potential data falsification and, in some cases, create clear incentives to disregard potential data falsification.

“The inefficient, paper-based investigative procedures lack consistency and make tracking suspected falsification difficult.”

If you download the report, page 8 has the major findings on one page.

The most closely guarded report published by the US government, excluding Department of Defense reports, is the unemployment report; it has been that way since its beginning. The report states that the problem was confined to Philadelphia and was not rampant nationally. But it is disturbing in the number of vulnerabilities it identifies in the system, and in how supervisors traded data quality for response rates.

This is a reminder that poorly directed incentives produce counterproductive results. I've seen it many times: hold a sales contest to push a product and the most conniving salespeople will rack up sales, only to have dissatisfied customers return the products once they realize they weren't right for them.

Workers tend to do what the incentives encourage them to do. Short-term incentives usually disrupt the natural flow of business and can undercut critical long-term values. The employment data are the bedrock of the models behind many other government data series and initial estimates, including initial reports of printing shipments until other data become available. If there is a problem with the employment data, it ripples through all kinds of data series whose users never suspect that employment data are involved. That is, unless they've read the footnotes or the methodology documents.

This report gives good reason for healthy skepticism about government data, but data conspiracy theorists should be disappointed. The data are discouraging enough without malfeasance. The Philadelphia episode was one of seriously flawed management, and I would not be surprised if some of that management direction was to cut corners in order to be considered for a promotion or to qualify for a bonus program. The report says basically that:

“Incentive structures for reviewers discourage the identification of falsification. The falsification investigation still occurs in a cumbersome, paper-based process... the Census Bureau still mostly uses response rates to determine performance ratings... There are few incentives for reporting suspected falsification, and the process for doing so is difficult.”

It's disheartening that the most important economics report issued by the Federal government can have problems like this and lacks modern processes, but there are lessons here for businesses and managers.

The first is to beware of the unintended consequences of performance incentives. The purpose of BLS research is accurate information, not the number of questionnaires collected. Many sales organizations focus on the number of sales calls per day: make five or six calls a day and you're rewarded, yet it's the content of the calls that matters. Do we create perverse incentives in our own businesses? Over time I've come to see that many traditional incentive programs create inflexibility in management, resistance to change, and impediments to the implementation of new strategy. The big publishing companies had big bonuses at the top, and those were all put at risk by the rise of digital media. Executives were paid to perpetuate their usual ways of doing business, and they disparaged the digital competitors. Everyone sees where that got them. There were numerous parallels in the printing business.

Second, incentives in one place can cause problems in a different place. I was at a company that had manufacturing problems. The solution? Widen the specs to make more product qualify as “good.” What happened? The manufacturing department got bonuses. The tech support department needed more budget to hire staff handling the complaints.

Third, be on the lookout for things that don't make sense. Management needs good information, and processes and measurements can drift out of range. Don't assume that anything out of range is good or bad; search objectively for the reasons. Forcing things to stay within averages can hide problems, but when something moves outside its expected range it should ignite curiosity, not blame. Fixing problems is good; fixing blame can be really bad when it's handled poorly and does not lead to solutions and insight.

# # #

Dr. Joe Webb is one of the graphic arts industry's best-known consultants, forecasters, and commentators. He is the director of WhatTheyThink's Economics and Research Center.




Copyright © 2018 WhatTheyThink. All Rights Reserved