Hugh L. McColl Distinguished Professor, Area Chair of Organizational Behavior, UNC Kenan-Flagler
UNC Kenan-Flagler Business School
Truck drivers don’t seek out accidents. Nurses and doctors don’t intend to hurt patients. Pilots don’t plan to make errors.
But sometimes they forget an important rule, make a mistake or take shortcuts.
The results can be costly interruptions in business operations, damage to a company’s reputation, serious injuries or illnesses, even deaths.
“Having operated in the high-risk safety space for a long time, I’m fairly convinced that there are not very many people in the world who intentionally do things they think will result in harm for themselves or others,” said David Hofmann, the Hugh L. McColl Distinguished Professor and area chair of organizational behavior at UNC Kenan-Flagler.
Researchers have long sought to understand why otherwise well-intentioned people fail to follow policies designed to protect them and those around them. If employees could be trained to follow rules consistently, workplace accidents, job errors and other problems could be reduced, saving time and money.
The decline of willpower
So why don’t people follow rules and procedures?
One theory is that willpower and the ability to regulate our behavior — including conscientiously following safety rules and procedures — declines as we get tired.
Just as muscles get tired after exercise, willpower or the “self-regulatory muscle” gets weaker after a person has been working for a while – especially if the job requires mental focus or emotional regulation and the constant switching of gears between different types of tasks. The theory predicts that when people rest, their self-regulatory muscles will recover, enabling them to follow procedures more faithfully when they return to work after a break.
Hofmann and colleagues Brad Staats of UNC Kenan-Flagler and Hengchen Dai and Katherine Milkman of the Wharton School used a large dataset to study this theory in the real world.
Clean hands, high stakes
Hand-washing compliance is a big problem in health care. Worldwide, the World Health Organization estimates that health-care workers follow hand-hygiene protocols about 38 percent of the time, on average.
In hospitals, where sick patients are especially vulnerable to infection, cleaner hands translate into patients who recover faster. Clean hands mean fewer hospital-acquired infections, which increase the cost of treatment, lead to longer stays and can threaten patients’ lives. Poor hand hygiene is one factor that contributes to hospital-acquired infections, which result in an estimated 100,000 deaths per year in the U.S.
Doctors and nurses know all this, and many hospitals go to considerable lengths to enforce hand-washing rules. They place hand sanitizer dispensers in every room, post reminder signs and even ask patients to remind their health-care providers to wash their hands.
But keeping their hands clean is just one of the many critical tasks health-care workers do. They fill out reports, monitor vital signs, administer and update prescriptions, interact with patients and their families and deal with emergencies. Other critical tasks compete with hand washing for their willpower.
Hand hygiene data
Proventix makes a radio-frequency ID system that uses badges and monitors to measure how frequently doctors and nurses wash their hands on entering or leaving patients’ rooms. It provided a database of more than 13 million observations – covering 4,157 caregivers at 35 hospitals – for Hofmann and his colleagues to analyze.
The researchers used it to calculate how long individual health-care providers had been at work and how much time they had off between shifts.
Were doctors and nurses more likely to wash their hands earlier in their work shifts than later? When employees had a few days off between shifts did they comply at a higher rate with hand hygiene rules?
Yes to both. Over a typical 12-hour shift, the data showed, hand hygiene compliance dropped from 42.6 percent during the first hour to 34.8 percent during the last hour, a 7.8-percentage-point decrease over the course of the shift.
When workers had particularly intense shifts, working harder, hand-washing rates fell faster.
So what happened when workers had time off — from a few hours to a few days — between shifts? Sure enough, hand washing compliance rates went back up.
Implications for business
The researchers’ findings appear in the Journal of Applied Psychology article “The Impact of Time at Work and Time off from Work on Rule Compliance: The Case of Hand Hygiene in Healthcare.”
They have implications for other health-and-safety occupations – and any work in which compliance with routine rules and procedures is critical.
In the last few decades, rules have been implemented to limit the number of hours that truck drivers go without a break, how long pilots can fly and even how long doctors can stay on duty. Hofmann’s research supports those rules, reinforcing the idea that fatigue not only can lead to mistakes but also reduces compliance with rules and procedures.
“There are a lot of conversations around fatigue, sleep deprivation and work performance,” Hofmann said. “How many hours can you work consecutively without a break?”
To improve compliance, the research suggests adjusting how long people work, when and how often they get breaks, and how much time they get between shifts.
That could mean shortening shifts or building in mandatory breaks during shifts, Hofmann suggested. Reducing overtime also could make a difference.
But given that compliance – even among caregivers just starting their shifts – is relatively low, job design by itself isn’t a cure-all.
Other techniques – such as developing technology that automatically prompts people to wash their hands or follow procedures – are likely to be required.
Just knowing that they were being monitored resulted in more compliance with hand-washing rules by health-care workers, according to Hofmann’s data.
Disappointingly, that uptick didn’t “stick.” When the monitoring stopped, compliance rates dropped again.
So mitigating the impact of fatigue is one part of getting employees to follow rules more consistently, but only one part.
“It’s a whole system of things,” Hofmann said. “It’s a systemic problem that needs a systemic solution.”
(Printed with permission from UNC Kenan-Flagler Business School)
University of New Mexico
Research papers and data products are key outcomes of the science enterprise. Governmental, nongovernmental, and private foundation sponsors of research are increasingly recognizing the value of research data. As a result, most funders now require that sufficiently detailed data management plans be submitted as part of a research proposal. A data management plan (DMP) is a document that describes how you will treat your data during a project and what happens with the data after the project ends. Such plans typically cover all or portions of the data life cycle—from data discovery, collection, and organization (e.g., spreadsheets, databases), through quality assurance/quality control, documentation (e.g., data types, laboratory methods) and use of the data, to data preservation and sharing with others (e.g., data policies and dissemination approaches). Fig 1 illustrates the relationship between hypothetical research and data life cycles and highlights the links to the rules presented in this paper. The DMP undergoes peer review and is used in part to evaluate a project’s merit. Plans also document the data management activities associated with funded projects and may be revisited during performance reviews.
Fig 1. Relationship of the research life cycle (A) to the data life cycle (B); note: highlighted circles refer to the rules that are most closely linked to the steps of the data life cycle.
As part of the research life cycle (A), many researchers (1) test ideas and hypotheses by (2) acquiring data that are (3) incorporated into various analyses and visualizations, leading to interpretations that are then (4) published in the literature and disseminated via other mechanisms (e.g., conference presentations, blogs, tweets), and that often lead back to (1) new ideas and hypotheses. During the data life cycle (B), researchers typically (1) develop a plan for how data will be managed during and after the project; (2) discover and acquire existing data and (3) collect and organize new data; (4) assure the quality of the data; (5) describe the data (i.e., ascribe metadata); (6) use the data in analyses, models, visualizations, etc.; and (7) preserve and (8) share the data with others (e.g., researchers, students, decision makers), possibly leading to new ideas and hypotheses.
Earlier articles in the Ten Simple Rules series of PLOS Computational Biology provided guidance on getting grants [1], writing research papers [2], presenting research findings [3], and caring for scientific data [4]. Here, I present ten simple rules that can help guide the process of creating an effective plan for managing research data—the basis for the project’s findings, research papers, and data products. I focus on the principles and practices that will result in a DMP that can be easily understood by others and put to use by your research team. Moreover, following the ten simple rules will help ensure that your data are safe and sharable and that your project maximizes the funder’s return on investment.
Rule 1: Determine the Research Sponsor Requirements
Research communities typically develop their own standard methods and approaches for managing and disseminating data. Likewise, research sponsors often have very specific DMP expectations. For instance, the Wellcome Trust, the Gordon and Betty Moore Foundation (GBMF), the United States National Institutes of Health (NIH), and the US National Science Foundation (NSF) all fund computational biology research but differ markedly in their DMP requirements. The GBMF, for instance, requires that potential grantees develop a comprehensive DMP in conjunction with their program officer that answers dozens of specific questions. In contrast, NIH requirements are much less detailed and primarily ask that potential grantees explain how data will be shared or provide reasons as to why the data cannot be shared. Furthermore, a single research sponsor (such as the NSF) may have different requirements that are established for individual divisions and programs within the organization. Note that plan requirements may not be labeled as such; for example, the National Institutes of Health guidelines focus largely on data sharing and are found in a document entitled “NIH Data Sharing Policy and Implementation Guidance” (http://grants.nih.gov/grants/policy/data_sharing/data_sharing_guidance.htm).
Significant time and effort can be saved by first understanding the requirements set forth by the organization to which you are submitting a proposal. Research sponsors normally provide DMP requirements in either the public request for proposals (RFP) or in an online grant proposal guide. The DMPTool (https://dmptool.org/) and DMPonline (https://dmponline.dcc.ac.uk/) websites are also extremely valuable resources that provide updated funding agency plan requirements (for the US and United Kingdom, respectively) in the form of templates that are usually accompanied by annotated advice for filling in the template. The DMPTool website also includes numerous example plans that have been published by DMPTool users. Such examples provide an indication of the depth and breadth of detail that are normally included in a plan and often lead to new ideas that can be incorporated in your plan.
Regardless of whether you have previously submitted proposals to a particular funding program, it is always important to check the latest RFP, as well as the research sponsor’s website, to verify whether requirements have recently changed and how. Furthermore, don’t hesitate to contact the responsible program officer(s) who are listed in a specific solicitation to discuss sponsor requirements or to address specific questions that arise as you are creating a DMP for your proposed project. Keep in mind that the principal objective should be to create a plan that will be useful for your project. Thus, good data management plans can and often do contain more information than is minimally required by the research sponsor. Note, though, that some sponsors constrain the length of DMPs (e.g., two-page limit); in such cases, a synopsis of your more comprehensive plan can be provided, and it may be permissible to include an appendix, supplementary file, or link.
Rule 2: Identify the Data to Be Collected
Every component of the DMP depends upon knowing how much and what types of data will be collected. Data volume is clearly important, as it normally costs more in terms of infrastructure and personnel time to manage 10 terabytes of data than 10 megabytes. But other characteristics of the data also affect costs, as well as metadata, data quality assurance and preservation strategies, and even data policies. A good plan will include information that is sufficient to understand the nature of the data that will be collected, including the types of data, their sources, the anticipated volume, and the formats in which the data will be acquired and stored.
The precise types, sources, volume, and formats of data may not be known beforehand, depending on the nature and uniqueness of the research. In such cases, the solution is to iteratively update the plan (see Rule 9).
Rule 3: Define How the Data Will Be Organized
Once there is an understanding of the volume and types of data to be collected, a next obvious step is to define how the data will be organized and managed. For many projects, a small number of data tables will be generated that can be effectively managed with commercial or open source spreadsheet programs like Excel and OpenOffice Calc. Larger data volumes and usage constraints may require a relational database management system (RDBMS) such as Oracle or MySQL for linked data tables, or a Geographic Information System (GIS) such as ArcGIS, GRASS, or QGIS for geospatial data layers.
The details about how the data will be organized and managed could fill many pages of text and, in fact, should be recorded as the project evolves. However, in drafting a DMP, it is most helpful to initially focus on the types and, possibly, names of the products that will be used. The software tools that are employed in a project should be amenable to the anticipated tasks. A spreadsheet program, for example, would be insufficient for a project in which terabytes of data are expected to be generated, and a sophisticated RDBMS may be overkill for a project in which only a few small data tables will be created. Furthermore, projects dependent upon a GIS or RDBMS may entail considerable software costs and design and programming effort that should be planned and budgeted for upfront (see Rules 9 and 10). Depending on sponsor requirements and space constraints, it may also be useful to specify conventions for file naming, persistent unique identifiers (e.g., Digital Object Identifiers [DOIs]), and version control (for both software and data products).
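To make the idea of a file-naming convention concrete, here is a minimal sketch of one possible scheme; the project, site, and variable identifiers are hypothetical, and the pattern itself is an assumption for illustration rather than a community standard.

```python
from datetime import date

def make_filename(project: str, site: str, variable: str,
                  collected: date, version: int, ext: str = "csv") -> str:
    """Compose a name following a <project>_<site>_<variable>_<date>_v<NNN>.<ext> pattern."""
    return f"{project}_{site}_{variable}_{collected.isoformat()}_v{version:03d}.{ext}"

# Example usage (hypothetical identifiers):
print(make_filename("streamtemp", "siteA", "watertemp", date(2015, 6, 1), 2))
# -> streamtemp_siteA_watertemp_2015-06-01_v002.csv
```

Whatever convention you adopt, the point is that it be documented in the DMP and applied consistently across the project.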
Rule 4: Explain How the Data Will Be Documented
Rows and columns of numbers and characters have little to no meaning unless they are documented in some fashion. Metadata—the details about what, where, when, why, and how the data were collected, processed, and interpreted—provide the information that enables data and files to be discovered, used, and properly cited. Metadata include descriptions of how data and files are named, physically structured, and stored as well as details about the experiments, analytical methods, and research context. It is generally the case that the utility and longevity of data relate directly to how complete and comprehensive the metadata are. The amount of effort devoted to creating comprehensive metadata may vary substantially based on the complexity, types, and volume of data.
A sound documentation strategy can be based on three steps. First, identify the types of information that should be captured to enable a researcher like you to discover, access, interpret, use, and cite your data. Second, determine whether there is a community-based metadata schema or standard (i.e., preferred sets of metadata elements) that can be adopted. As examples, variations of the Dublin Core Metadata Initiative Abstract Model are used for many types of data and other resources, ISO (International Organization for Standardization) 19115 is used for geospatial data, ISA-Tab file format is used for experimental metadata, and Ecological Metadata Language (EML) is used for many types of environmental data. In many cases, a specific metadata content standard will be recommended by a target data repository, archive, or domain professional organization. Third, identify software tools that can be employed to create and manage metadata content (e.g., Metavist, Morpho). In lieu of existing tools, text files (e.g., readme.txt) that include the relevant metadata can be included as headers to the data files.
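Where no metadata tool is available, the readme-style header mentioned above can be generated with a few lines of code. The sketch below is a minimal example; the field names loosely echo Dublin Core elements, and all values are invented for illustration.

```python
# Minimal sketch: write a readme.txt-style metadata header for a data file.
# Field names loosely echo Dublin Core elements; values here are invented.
metadata = {
    "Title": "Hourly stream temperature observations, 2014-2015",
    "Creator": "A. Researcher, Example University",
    "Date": "2015-06-01",
    "Description": "Water temperature logged hourly at three monitoring sites",
    "Methods": "Submersible logger, calibrated monthly against a reference thermometer",
    "Units": "degrees Celsius",
    "Rights": "CC0 1.0 Universal",
}

with open("readme.txt", "w") as fh:
    for field, value in metadata.items():
        fh.write(f"{field}: {value}\n")
```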
A best practice is to assign a responsible person to maintain an electronic lab notebook, in which all project details are maintained. The notebook should ideally be routinely reviewed and revised by another team member, as well as duplicated (see Rules 6 and 9). The metadata recorded in the notebook provide the basis for the metadata that will be associated with data products that are to be stored, reused, and shared.
Rule 5: Describe How Data Quality Will Be Assured
Quality assurance and quality control (QA/QC) refer to the processes that are employed to measure, assess, and improve the quality of products (e.g., data, software). It may be necessary to follow specific QA/QC guidelines depending on the nature of a study and research sponsorship; such requirements, if they exist, are normally stated in the RFP. Regardless, it is good practice to describe the QA/QC measures that you plan to employ in your project. Such measures may encompass training activities, instrument calibration and verification tests, double-blind data entry, and statistical and visualization approaches to error detection. Simple graphical data exploration approaches (e.g., scatterplots, mapping) can be invaluable for detecting anomalies and errors.
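As a simple illustration of combining rule-based and graphical error detection, the sketch below flags values outside a plausible range and plots the series so anomalies stand out; the readings and the acceptable range are assumptions you would replace with your own domain knowledge.

```python
import matplotlib.pyplot as plt

# Hypothetical hourly readings (degrees C) and a plausible range set from domain knowledge.
temps = [12.1, 12.4, 11.9, 54.0, 12.2, -9.0, 12.6]
low, high = 0.0, 35.0

# Rule-based check: flag values outside the plausible range.
flagged = [(i, t) for i, t in enumerate(temps) if not low <= t <= high]
print("Suspect observations (index, value):", flagged)

# Graphical check: scatterplots make outliers easy to spot.
plt.scatter(range(len(temps)), temps)
plt.axhline(low, linestyle="--")
plt.axhline(high, linestyle="--")
plt.xlabel("Observation index")
plt.ylabel("Temperature (deg C)")
plt.show()
```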
Rule 6: Present a Sound Data Storage and Preservation Strategy
A common mistake of inexperienced (and even many experienced) researchers is to assume that their personal computer and website will live forever. They fail to routinely duplicate their data during the course of the project and do not see the benefit of archiving data in a secure location for the long term. Inevitably, though, papers get lost, hard disks crash, URLs break, and tapes and other media degrade, with the result that the data become unavailable for use by both the originators and others. Thus, data storage and preservation are central to any good data management plan. Give careful consideration to three questions: (1) Which data will be preserved, and for how long? (2) How will data be stored and protected over the course of the project? (3) How will data be preserved and made available for use beyond the life of the project?
The answer to the first question depends on several factors. First, determine whether the research sponsor or your home institution has any specific requirements. Usually, not all data need to be retained, and those that do need not be retained forever. Second, consider the intrinsic value of the data. Observations of phenomena that cannot be repeated (e.g., astronomical and environmental events) may need to be stored indefinitely. Data from easily repeatable experiments may only need to be stored for a short period. Simulations may only need to have the source code, initial conditions, and verification data stored. In addition to explaining how data will be selected for short-term storage and long-term preservation, remember to also highlight your plans for the accompanying metadata and related code and algorithms that will allow others to interpret and use the data (see Rule 4).
Develop a sound plan for storing and protecting data over the life of the project. A good approach is to store at least three copies in at least two geographically distributed locations (e.g., original location such as a desktop computer, an external hard drive, and one or more remote sites) and to adopt a regular schedule for duplicating the data (i.e., backup). Remote locations may include an offsite collaborator’s laboratory, an institutional repository (e.g., your departmental, university, or organization’s repository if located in a different building), or a commercial service, such as those offered by Amazon, Dropbox, Google, and Microsoft. The backup schedule should also include testing to ensure that stored data files can be retrieved.
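The retrieval testing mentioned above can be as simple as periodically recomputing a checksum for each stored copy and comparing it with the original. A minimal sketch follows; the file locations are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical locations: working copy, external drive, remote mirror.
original = Path("data/observations.csv")
copies = [Path("/mnt/backup/observations.csv"),
          Path("/remote/mirror/observations.csv")]

reference = sha256(original)
for copy in copies:
    if not copy.exists():
        print(f"{copy}: MISSING")
    elif sha256(copy) != reference:
        print(f"{copy}: CORRUPTED OR STALE")
    else:
        print(f"{copy}: OK")
```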
Accessing the data 20 years beyond the life of the project will likely require a more robust solution (i.e., question 3 above). Seek advice from colleagues and librarians to identify an appropriate data repository for your research domain. Many disciplines maintain specific repositories such as GenBank for nucleotide sequence data and the Protein Data Bank for three-dimensional protein structures. Likewise, many universities and organizations also host institutional repositories, and there are numerous general science data repositories such as Dryad (http://datadryad.org/), figshare (http://figshare.com/), and Zenodo (http://zenodo.org/). Alternatively, one can easily search for discipline-specific and general-use repositories via online catalogs such as http://www.re3data.org/ (i.e., REgistry of REsearch data REpositories) and http://www.biosharing.org (i.e., BioSharing). It is often considered good practice to deposit code in a host repository like GitHub that specializes in source code management as well as some types of data like large files and tabular data (see https://github.com/). Make note of any repository-specific policies (e.g., data privacy and security, requirements to submit associated code) and costs for data submission, curation, and backup that should be included in the DMP and the proposal budget.
Rule 7: Define the Project’s Data Policies
Despite what may be a natural proclivity to avoid policy and legal matters, researchers cannot afford to do so when it comes to data. Research sponsors, institutions that host research, and scientists all have a role in and obligation for promoting responsible and ethical behavior. Consequently, many research sponsors require that DMPs include explicit policy statements about how data will be managed and shared. Such policies include licensing and sharing arrangements for any preexisting materials that will be used, how and when data and other research products will be made available, and how human subject and other sensitive data will be protected.
Unfortunately, policies and laws often appear to be, or in fact are, confusing or contradictory. Furthermore, policies that apply within a single organization or in a given country may not apply elsewhere. When in doubt, consult your institution’s office of sponsored research, the relevant Institutional Review Board, or the program officer(s) assigned to the program to which you are applying for support.
Despite these caveats, it is usually possible to develop a sound policy by following a few simple steps. First, if preexisting materials, such as data and code, are being used, identify and include a description of the relevant licensing and sharing arrangements in your DMP. Explain how third party software or libraries are used in the creation and release of new software. Note that proprietary and intellectual property rights (IPR) laws and export control regulations may limit the extent to which code and software can be shared.
Second, explain how and when the data and other research products will be made available. Be sure to explain any embargo periods or delays, such as those related to publication or patents. A common practice is to make data broadly available at the time of publication or, in the case of graduate students, at the time the graduate degree is awarded. Whenever possible, apply standard rights waivers or licenses, such as those established by Open Data Commons (ODC) and Creative Commons (CC), that guide subsequent use of data and other intellectual products (see http://creativecommons.org/ and http://opendatacommons.org/licenses/pddl/summary/). The CC0 public domain dedication and the ODC Public Domain Dedication and License, for example, promote unrestricted sharing and data use. Nonstandard licenses and waivers can be a significant barrier to reuse.
Third, explain how human subject and other sensitive data will be treated (e.g., see http://privacyruleandresearch.nih.gov/ for information pertaining to human health research regulations set forth in the US Health Insurance Portability and Accountability Act). Many research sponsors require that investigators engaged in human subject research seek and receive approval from the appropriate Institutional Review Board before a grant proposal is submitted and, certainly, before the actual research is undertaken. Approvals may require that informed consent be granted, that data are anonymized, or that use is restricted in some fashion.
Rule 8: Describe How the Data Will Be Disseminated
The best-laid preservation plans and data sharing policies do not necessarily mean that a project’s data will see the light of day. Reviewers and research sponsors will be reassured that this will not be the case if you have spelled out how and when the data products will be disseminated to others, especially people outside your research group. There are passive and active ways to disseminate data. Passive approaches include posting data on a project or personal website or mailing or emailing data upon request, although the latter can be problematic when dealing with large data and bandwidth constraints. More active, robust, and preferred approaches include: (1) publishing the data in an open repository or archive (see Rule 6); (2) submitting the data (or subsets thereof) as appendices or supplements to journal articles, such as is commonly done with the PLOS family of journals; and (3) publishing the data, metadata, and relevant code as a “data paper” [5]. Data papers can be published in various journals, including Scientific Data (from Nature Publishing Group), the GeoScience Data Journal (a Wiley publication on behalf of the Royal Meteorological Society), and GigaScience (a joint BioMed Central and Springer publication that supports big data from many biology and life science disciplines).
A good dissemination plan includes a few concise statements. State when, how, and what data products will be made available. Generally, making data available to the greatest extent and with the fewest possible restrictions at the time of publication or project completion is encouraged. The more proactive approaches described above are greatly preferred over mailing or emailing data and will likely save significant time and money in the long run, as the data curation and sharing will be supported by the appropriate journals and repositories or archives. Furthermore, many journals and repositories provide guidelines and mechanisms for how others can appropriately cite your data, including digital object identifiers, and recommended citation formats; this helps ensure that you receive credit for the data products you create. Keep in mind that the data will be more usable and interpretable by you and others if the data are disseminated using standard, nonproprietary approaches and if the data are accompanied by metadata and associated code that is used for data processing.
Rule 9: Assign Roles and Responsibilities
A comprehensive DMP clearly articulates the roles and responsibilities of every named individual and organization associated with the project. Roles may include data collection, data entry, QA/QC, metadata creation and management, backup, data preparation and submission to an archive, and systems administration. Consider time allocations and levels of expertise needed by staff. For small to medium-sized projects, a single student or postdoctoral associate who is collecting and processing the data may easily assume most or all of the data management tasks. In contrast, large, multi-investigator projects may benefit from having dedicated data management staff.
Treat your DMP as a living document and revisit it frequently (e.g., quarterly). Assign a project team member to revise the plan, reflecting any changes in protocols and policies. It is good practice to track changes in a revision history that lists the dates any changes were made, the details of those changes, and who made them.
Reviewers and sponsors may be especially interested in knowing how adherence to the data management plan will be assessed and demonstrated, as well as how, and by whom, data will be managed and made available after the project concludes. With respect to the latter, it is often sufficient to include a pointer to the policies and procedures that are followed by the repository where you plan to deposit your data. Be sure to note any contributions by nonproject staff, such as any repository, systems administration, backup, training, or high-performance computing support provided by your institution.
Rule 10: Prepare a Realistic Budget
Creating, managing, publishing, and sharing high-quality data is as much a part of the 21st century research enterprise as is publishing the results. Data management is not new—rather, it is something that all researchers already do. Nonetheless, a common mistake in developing a DMP is forgetting to budget for the activities. Data management takes time and costs money in terms of software, hardware, and personnel. Review your plan and make sure that there are lines in the budget to support the people that manage the data (see Rule 9) as well as pay for the requisite hardware, software, and services. Check with the preferred data repository (see Rule 6) so that requisite fees and services are budgeted appropriately. As space allows, help reviewers by pointing to specific lines or sections in the budget and budget justification pages. Experienced reviewers will be on the lookout for unfunded components, but they will also recognize that greater or lesser investments in data management depend upon the nature of the research and the types of data.
Conclusion
A data management plan should provide you and others with an easy-to-follow road map that will guide and explain how data are treated throughout the life of the project and after the project is completed. The ten simple rules presented here are designed to aid you in writing a good plan that is logical and comprehensive, that will pass muster with reviewers and research sponsors, and that you can put into practice should your project be funded. A DMP provides a vehicle for conveying information to and setting expectations for your project team during both the proposal and project planning stages, as well as during project team meetings later, when the project is underway. That said, no plan is perfect. Plans do become better through use. The best plans are “living documents” that are periodically reviewed and revised as necessary according to needs and any changes in protocols (e.g., metadata, QA/QC, storage), policy, technology, and staff, as well as reused, in that the most successful parts of the plan are incorporated into subsequent projects. A public, machine-readable, and openly licensed DMP is much more likely to be incorporated into future projects and to have higher impact; such increased transparency in the research funding process (e.g., publication of proposals and DMPs) can assist researchers and sponsors in discovering data and potential collaborators, educating about data management, and monitoring policy compliance [6].
Acknowledgments
This article is the outcome of a series of training workshops provided for new faculty, postdoctoral associates, and graduate students.
References
(Originally published: http://dx.doi.org/10.1371/journal.pcbi.1004525; This work was supported by NSF IIA-1301346, IIA-1329470, and ACI-1430508 (http://nsf.gov). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.)
Professor of Finance, UNC Kenan-Flagler Business School
University of British Columbia
It’s not exactly a formula for success – or at least it wouldn’t seem to be.
In the months leading up to the global financial crisis of 2008 and the Great Recession, some companies in the U.K. signed labor agreements with unionized workers, agreeing to pay higher wages. In doing so, they locked themselves into higher pay just before the economy tilted into the deepest downturn since the Great Depression.
With higher costs and a financial crisis, you might expect those firms would perform poorly compared to their peers. But you’d be wrong, based on research from Paige Ouimet, a UNC Kenan-Flagler finance professor, and Elena Simintzi of the University of British Columbia.
From 2010-2012 – the years following the downturn – those firms outperformed their peers.
When the researchers compared post-crisis changes in sales to a pre-crisis baseline, firms locked into higher wage agreements realized higher sales growth than their peers did.
The firms bound by higher wage agreements also earned higher returns on assets (ROA) during some of these post-crisis years, suggesting that they performed better even after accounting for higher labor costs, although the ROA boost wasn’t as dramatic as the sales numbers.
So paying workers more during a downturn could lead to improved business performance, the findings suggest.
“The main result is quite surprising,” Ouimet says. “It was the constrained firms that outperformed the unconstrained firms. That the firms with limited options actually ended up doing better is a very unusual finding.”
She writes about their findings in “Wages and Firm Performance: Evidence from the 2008 Financial Crisis.”
Their results are in line with other research that shows paying workers more can increase their job performance.
But what’s striking is that they suggest that even during very difficult economic periods, businesses might be able to gain an advantage by paying employees better.
The effect was more pronounced when the employees subject to the labor agreement were supervisory or management workers, says Ouimet. Giving line managers a raise, for example, seemed to deliver a bigger boost to performance than giving janitorial workers a pay increase. That’s in line with the idea that workers more involved in a company’s core business processes would have a greater impact on firm performance.
What’s not clear is the mechanism at work. Are workers more motivated because they feel grateful for the higher wages and are returning the favor? Do higher wages during a recession – when jobs are hard to come by – reduce turnover and allow companies to hire better employees? Do workers making more money work harder (or smarter) because they’re more motivated to avoid layoffs? Or do the data merely indicate that workers at firms without the same upward pressure on pay didn’t perform as well, perhaps due to weaker incentives?
“This goes to strengthen the argument that your workforce is an important generator of value in the firm,” Ouimet says. “So keeping the workforce happy is important for long-run profits.”
The paper taps data about several hundred public and private U.K. companies that had unionized workforces. Ouimet and Simintzi collected data about labor agreements, when they were signed, how long they lasted, the relative impact of those agreements on pay and the types of workers covered. Then they examined the financial performance of those companies in the years following the financial crisis.
Using two different private databases, they compiled a set of 606 labor agreements affecting 344 companies. They defined the beginning of the downturn as September 2008 — the month that Lehman Brothers declared bankruptcy.
The constrained group comprised companies that signed agreements to increase wages before September 2008 and whose labor agreements remained in effect until at least January 2010. That meant the companies had to pay the higher wages for all of 2009. Labor agreements are very difficult to unwind in the U.K., so it was unlikely these companies subsequently reduced pay for workers covered by the agreements.
Ouimet and Simintzi also ran a number of statistical tests on the data to account for other variables, such as differences in industry, operating leverage levels and the financial health of the companies prior to the recession and at the time of signing the long-term wage agreements. They found no evidence that the results were driven by coincidental firm traits associated with the timing of the wage agreements.
Ouimet cautions against extrapolating the results too broadly outside the setting of unionized U.K. firms during the Great Recession. Labor practices in other parts of Europe, she points out, are different.
Different countries also have different unemployment benefits, as well as differences in how employers and unions negotiate, which could affect worker productivity in other ways.
“Continental Europe would be a very different situation and it would be hard to say we’d find the same results there,” she says. About 40 percent of U.K. workers belong to unions – almost four times the U.S. rate.
Nonetheless, the study does offer tantalizing evidence that cutting pay during a recession might not always be the best long-term strategy.
“These results certainly show the importance of maintaining a properly motivated workforce,” Ouimet says. “They suggest costs to cutting wages even during a downturn. It gives you something to think about when setting wages.”
(Printed with permission from UNC Kenan-Flagler Business School)
College of Information and Computer Science, Long Island University, New York, USA
Professor & Dean, Palmer School of Library and Information Science, Long Island University, New York, USA
Since 1995, there has been an explosion in the literature surrounding the developing concept of knowledge management. Today, hardly anyone can attend a conference or read a journal without seeing literature referring to the concept. Despite its popularity, the jury is still out as to whether knowledge management will become a significant and permanent component of management, or just another management fad.
The concept has been defined broadly with a number of definitions being touted. For example, Ponelis and Fair-Wessels (1998) assert that knowledge management is a new dimension of strategic information management. Davenport and Prusak (1998) claim that knowledge management is the process of capturing, distributing, and effectively using knowledge. Skyrme (1997) suggests that knowledge management is the explicit and systematic management of vital knowledge along with its associated processes of creating, gathering, organizing, diffusing, using, and exploiting that knowledge.
This paper’s objective is not to provide another knowledge management definition but to illuminate its current state of development. It uses annual frequency counts of articles on three well-known fads to demonstrate that management fads generally peak in approximately five years. Applying this technique to the concept of knowledge management allows the paper to shed light on the field as a whole.
Analytical framework for examining management fads
In this section, we present empirical evidence that management fads generally peak in approximately five years. We provide support by applying the simple bibliometric technique of article counting to three well-known management fads.
A management fad can be considered an innovative concept or technique that is promoted as the forefront of management progress and then diffuses very rapidly among early adopters eager to gain a competitive advantage. After organizational leaders come to the realization that the concept has fallen short of its expected benefits, the concept is quickly discontinued or drops back to very modest usage.
The graphing of article counts annually is a bibliometric technique that determines how many articles have been devoted to a given concept over time. The rationale for this method is that bibliographic records are a relatively objective indicator for measuring discourse popularity. In other words, the higher the article counts, the larger the volume of discussion.
The initial result of the article-counting technique is time-series data that can be charted into a lifecycle (Abrahamson & Fairchild, 1999). The most well-known lifecycle shape is an S-curve. It depicts an ideal representation for the emergence, growth, maturity, and decline of an idea or product. In reality, however, not all ideas and products exhibit an S-shaped lifecycle (Rogers, 1995). Our concern is the lifecycles of fads and fashions. As illustrated in Figure 1, fads emerge quickly and are adopted with great zeal, then peak and decline just as fast. Fashions, on the other hand, are fads that briefly show signs of maturity before declining (Wasson, 1978).
Figure 1: Fad & fashion lifecycles (Source: Wasson 1978)
The theory of management fashion primarily draws from the work of Eric Abrahamson. Abrahamson’s (1991, 1996) theory describes the process by which “fashion setters,” or fashion evangelists, which are generally consulting firms, management gurus, mass-media publications, and business schools, disseminate beliefs that certain management techniques are at the forefront of management progress.
Once information is published in the form of articles, annual counts can be captured to provide time-series data that can be charted and analyzed. Based on the work of Abrahamson (1991, 1996) and Abrahamson & Fairchild (1999), the bibliometric technique of article counting is a reliable analytical approach to begin an analysis of the published literature in order to illuminate and trace the development of a concept.
In recent years, the academic and industry communities have observed numerous management fads – for example, Quality Circles, Total Quality Management, and Business Process Reengineering (Hilmer & Donaldson, 1996). The Quality Circles movement is graphed below for illustration.
Quality Circles
In the early 1980s, Quality Circles became of interest to American manufacturers as a competitive tool in response to the quality gap with Japan. This management technique emphasized the importance of shared organizational goals in achieving greater quality and labour productivity.
A literature review shows that between 1980 and 1982, 90% of the Fortune 500 companies had adopted the Quality Circles management approach (Lawler & Mohrman, 1985). Afterwards, a survey conducted by Castorina and Wood (1988) revealed that more than 80% of the Fortune 500 companies that originally adopted Quality Circle programs in the early 1980s had abandoned them by 1987.
In 1996, Abrahamson created a Quality Circles lifecycle that independently confirmed the literature’s claims that the Quality Circles movement was indeed a management fad. Retrieving article counts from ABI Inform, Abrahamson graphed a ten-year trend line representing articles that include the phrase ‘Quality Circles’ in either the title or abstract.
Abrahamson’s results revealed the Quality Circles movement to have a bell-shaped pattern, depicting rapid growth starting in 1978 that reversed in 1982 (see Figure 2). By 1986, this measure had returned to its pre-popularity levels, which indicates a management fad.
Figure 2: The lifecycle of quality circles, 1977-1986 (Source: Abrahamson, 1996)
We also observed that Quality Circles’ momentum peaked in five years, and we wanted to know whether this time period was consistent across other management fashions. To test this proposition, we developed lifecycles for an accepted management fad and a management fashion, namely, Total Quality Management and Business Process Reengineering.
Total Quality Management & Business Process Reengineering
Total Quality Management and Business Process Reengineering were quality movements that became popular in the 1980s and 1990s. To date, neither lifecycle has been charted from a bibliometric perspective.
To capture a broader lifecycle image than Abrahamson’s work, article counts were retrieved on March 16, 2002 from three DIALOG files: Science Citation Index (File 34), Social Science Citation Index (File 7), and ABI Inform (File 15). (See Appendix for DIALOG search strings and commands.) These files were selected because of their comprehensive and broad coverage of the academic and industry literature.
Counts were captured annually by querying for each key phrase in the title, abstract, or descriptor fields; after duplicates were removed, the results were graphed using Microsoft Excel. The assumption made is that retrieved records containing a key search phrase in these bibliographic fields are representative writings focused on the subject.
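For readers who wish to replicate the counting step outside DIALOG, the following minimal sketch (with invented records standing in for exported search results) shows the core of the technique: tally, per year, the de-duplicated records whose title, abstract, or descriptor fields contain the key phrase.

```python
# Minimal sketch of the article-counting technique on exported bibliographic records.
# The records below are invented; real counts would come from database exports.
records = [
    {"year": 1996, "title": "Knowledge management in practice",
     "abstract": "", "descriptors": []},
    {"year": 1997, "title": "Intranets and intellectual capital",
     "abstract": "A knowledge management case study", "descriptors": []},
    {"year": 1997, "title": "Decision support revisited",
     "abstract": "", "descriptors": ["knowledge management"]},
]

phrase = "knowledge management"
seen = set()   # crude duplicate removal on (year, title)
counts = {}

for rec in records:
    key = (rec["year"], rec["title"].lower())
    text = " ".join([rec["title"], rec["abstract"], *rec["descriptors"]]).lower()
    if phrase in text and key not in seen:
        seen.add(key)
        counts[rec["year"]] = counts.get(rec["year"], 0) + 1

for year in sorted(counts):
    print(year, counts[year])   # time series ready for charting
```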
The resulting Total Quality Management and Business Process Reengineering lifecycle graphs clearly resemble the bell-shape fashion pattern noted earlier in the Quality Circles movement (see Figure 3 and Figure 4). The graphs strikingly illustrate the way in which these movements grew and fell in popularity as represented in the academic and industry literature.
Figure 3: Total Quality Management, 1990-2001
Figure 4: Business Process Reengineering, 1990-2001
When comparing Figures 2, 3, and 4, each management fashion peaked from four to six years after some momentum had started. More specifically, in 1979 Quality Circles appeared to have momentum, only to peak in five years. The same holds true for Total Quality Management (starting in the late 1980s and peaking in 1993) and Business Process Reengineering (starting in 1991 and peaking in 1995). On this evidence, it is reasonable to assume that management fads begin to lose popularity in about five years.
The limitations to this assumption are, of course, that this phenomenon has been tested in only three cases and that the article counts were limited to just three databases. The following section discusses the above approach in the context of knowledge management.
The case of knowledge management
To a large extent, knowledge management is being considered by many as an emerging multidisciplinary field associated with the likes of system engineering, organizational learning, and decision support, to mention a few. Skeptics, on the other hand, are claiming that knowledge management is just another fad like Total Quality Management or Business Process Reengineering. In this section, the article-counting technique is applied to the concept of knowledge management in order to illuminate its current state of development.
Using the same approach employed in the earlier cases, article counts were retrieved from the three DIALOG files, i.e., Science Citation Index (File 34), Social Science Citation Index (File 7), and ABI Inform (File 15). The retrieved counts were of articles that included the phrase ‘knowledge management’ in their title, abstract, or descriptor fields. The assumption made is that retrieved records that included ‘knowledge management’ in these fields represent writings focused on knowledge management.
Figure 5: Knowledge management, 1991-2001
The results, which are graphed above in Figure 5, suggest that knowledge management has weathered the five-year mark and perhaps is becoming a permanent addition to management practice. The diagram illustrates that the popularity of knowledge management expanded rapidly from 1997 through 1999, contracted in 2000, and then rebounded in 2001. To explore the growth period of the knowledge management lifecycle further, an additional bibliometric technique was used to reveal interdisciplinary activity.
Interdisciplinary activity indicates the exportation and integration of theories or methods into other disciplines (Pierce, 1999; Klein, 1996), in our case, into the development of the emerging field of knowledge management. The method ranks the journal names of the knowledge management source articles retrieved above and then assigns each journal an ISI Subject Category Code. These codes have been operationalized by ISI and are commonly assumed to be indicators of disciplines (White, 1996). This study assumed a threshold count of three or greater; that is, a discipline’s journals needed to contain three or more source articles in a given year to be included in the analysis. This threshold reduces the number of random occurrences in journals and indicates where publication activity is concentrated.
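As a concrete illustration of this ranking-and-threshold step, the sketch below counts source articles per journal, maps each journal to a subject category through an assumed lookup table, and keeps only the categories that meet the threshold; the journal names and category assignments are invented for illustration.

```python
from collections import Counter

# Sketch of the interdisciplinary-activity step: count source articles per journal,
# map each journal to a subject category, and keep categories meeting the threshold.
# Journal names and the category lookup are invented for illustration.
source_journals = ["JASIS", "MIS Quarterly", "JASIS", "Expert Systems",
                   "JASIS", "Expert Systems", "Expert Systems"]
category_of = {
    "JASIS": "Information Science & Library Science",
    "MIS Quarterly": "Management",
    "Expert Systems": "Computer Science",
}

THRESHOLD = 3  # three or more source articles, as assumed in the study
per_category = Counter(category_of[j] for j in source_journals)
active = {cat: n for cat, n in per_category.items() if n >= THRESHOLD}
print(active)
# -> {'Information Science & Library Science': 3, 'Computer Science': 3}
```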
Table 1: Interdisciplinary activity by column percentage, 1996-2001
Discipline | 1996 | 1997 | 1998 | 1999 | 2000 | 2001 |
Computer Science | 35.7% | 43.1% | 42.0% | 38.8% | 28.7% | 36.2% |
Business | 21.4% | 16.9% | 32.4% | 25.6% | 18.0% | 20.7% |
Management | 42.9% | 7.7% | 5.3% | 12.8% | 13.2% | 17.2% |
Information Science & Library Science | - | 15.4% | 10.6% | 7.9% | 16.9% | 14.2% |
Engineering | - | 10.8% | 4.3% | 8.6% | 13.6% | 7.7% |
Psychology | - | 6.2% | 5.3% | 1.7% | 1.8% | 1.5% |
Multidisciplinary Sciences | - | - | - | - | 2.0% | 4.0% |
Energy & Fuels | - | - | - | 0.7% | 3.7% | 0.7% |
Social Sciences | - | - | - | 1.7% | - | - |
Operations Research & Mgt. Science | - | - | - | 1.0% | - | - |
Planning & Development | - | - | - | 1.0% | - | - |
Total articles: | 14 | 65 | 207 | 407 | 272 | 401 |
Interdisciplinary Breadth: | 3 | 6 | 6 | 10 | 8 | 8 |
(A dash indicates fewer than three source articles in that discipline for that year.)
Table 1 shows the proportion of journal disciplinary affiliations over time. In 1996, interdisciplinary activity appeared mainly in three areas of study, namely, Computer Science, Business, and Management. Through 1999, the number of disciplines, or interdisciplinary breadth, expanded to 10. According to Koenig (2000), this expansion was in response to new developments in technology and to organizations seeking an advantage in an increasingly competitive market.
In 2000, a pullback in popularity occurred; the total number of articles dropped by about 30%. Proportionally, Computer Science and Business experienced a decrease while the remaining six disciplines increased. According to Abrahamson (1991, 1996), downward swings in popularity might be the direct result of shortfalls in the benefits organizations expected to realize. One study indicating that knowledge management was coming up short appeared in 1999, when Bain & Company conducted their well-known survey on management tools and techniques. Bain & Company reported that knowledge management “not only had relatively low utilization but also very low satisfaction scores relative to the average” (Rigby 2001: 145). Finally, while in 2001 the top two disciplines returned approximately to their 1996 proportions, the breadth of disciplines participating had more than doubled since 1996.
Summary
This paper provides empirical evidence that management movements generally reveal themselves as fads or fashions within approximately five years after having gained some type of momentum. When applying this general rule of thumb to the popular concept of knowledge management, it appears that knowledge management has initially survived.
It is certainly plausible to hypothesize that if knowledge management does indeed mature into a permanent new component of managerial attention, it will continue to grow and in the process undergo a tweaking phenomenon, that is, morphing into a clearer, more easily understood concept. The 2000 dip in popularity does suggest such a phenomenon.
Examining whether knowledge management has indeed survived and is on its way to becoming a significant and permanent part of management’s toolbox will require not only the passage of time but also a somewhat more sophisticated analysis. It is quite plausible that the tweaking phenomenon described above could obscure the continued growth of a movement. In other words, tracking the appearance of a title term can distinguish typical fads from longer-lasting phenomena, but a more detailed analysis, which the authors look forward to conducting, must be undertaken to determine whether knowledge management is more than an unusually broad-shouldered fad.
(Originally published: http://www.informationr.net/ir/8-1/paper145.html)
Harvard University
Dartmouth College
Five questions prompted by the articles in the American Psychologist special issue on leadership (January 2007, Vol. 62, No. 1) suggest some new directions for leadership research: (1) Not do leaders make a difference, but under what conditions does leadership matter? (2) Not what are the traits of leaders, but how do leaders’ personal attributes interact with situational properties to shape outcomes? (3) Not do there exist common dimensions on which all leaders can be arrayed, but are good and poor leadership qualitatively different phenomena? (4) Not how do leaders and followers differ, but how can leadership models be reformulated so they treat all system members as both leaders and followers? (5) Not what should be taught in leadership courses, but how can leaders be helped to learn?
For all of the research that has been conducted on the topic of leadership, the field remains curiously unformed. Leadership scholars, including those who have written for this special issue, agree that leadership is extraordinarily important both as a social phenomenon and as a subject for scholarly research and theory. Yet, as both Bennis (2007, this issue) and Vroom and Jago (2007, this issue) have pointed out, there are no generally accepted definitions of what leadership is, no dominant paradigms for studying it, and little agreement about the best strategies for developing and exercising it.
Among the many possible reasons for this gloomy state of affairs is that leadership scholars over the years may have been asking questions that have no general answers, thereby adding complexity but not clarity to our understanding. The articles that comprise this special issue summarize a great deal of informative research about leadership, to be sure. But perhaps their greatest contribution is that they raise a number of questions, the answers to which will help us develop knowledge about leadership that is interesting, useful, and cumulative. In answer to Bennis’s (2007, this issue) plea that scholars use their creativity to identify and reframe the most important questions about leadership, we pose in this concluding essay five questions that were prompted by the articles in this issue. We hope that these questions may be somewhat more informative, or at least more tractable, than some that have historically been prominent in leadership research.
Question 1: Not do leaders make a difference, but under what conditions does leadership matter?
As the authors of these articles have noted, the long-standing debate between leader-centric and structural or situational explanations of collective performance has never been resolved, and probably cannot be. The reason is that the debate is about the wrong question. The right question is to distinguish conceptually and empirically those circumstances in which leaders’ actions are highly consequential for system performance from those in which leaders’ behaviors and decisions make essentially no difference (Avolio, 2007, this issue; Chan & Brief, 2005; Hackman & Wageman, 2005; Vroom & Jago, 2007, this issue; Wasserman, Nohria, & Anand, 2001).
This question invites observers of leadership to swim upstream against strong attributional currents. Lay observers, as well as not a few leadership scholars, tend to view leaders as a dominant influence on system performance (see Bennis, 2007, this issue). But are leaders really a main, or the main, influence on what transpires in social systems? Or does our tendency to view them that way merely reflect what Meindl (1990) called the “romance” of leadership? Consider, for example, how we explain an athletic team that has winning season after winning season. “That John Wooden at UCLA!” we exclaim. “What a basketball coach he was!” Or reflect on a team that has had a few losing seasons: It is the coach who is fired. We refer to this tendency to identify the leader as the main cause of collective performance as the leader attribution error. The leader attribution error is understandable (both because of the high visibility of leaders and the relative invisibility to observers of structural or contextual factors that may be powerfully shaping outcomes), it is pervasive (it occurs for both favorable and unfavorable outcomes), and it is powerful (system members as well as observers are vulnerable to it) (Hackman, 2002, chap. 6; Hackman & Wageman, 2005).
Under some conditions, of course, leaders’ actions really do spell the difference between success and failure. In recent years, scholars have begun the conceptual and empirical work that will be needed to move beyond the old debates about how influential leaders are and to free us from the erroneous assumption that anyone in any leadership position has the opportunity to make a constructive difference. The study by Wasserman and his colleagues, for example, showed that chief executive officers of corporations have the greatest impact when organizational opportunities are scarce but slack resources are plentiful (Wasserman et al., 2001). And a conceptual analysis offered by Hackman and Wageman (2005) identified how constraints on team processes, including both those built into the team’s structure and those that reside in the broader context, can significantly constrain leaders’ autonomy and latitude to lead. Similar analyses of other social systems— ranging from dyads to nation states—would appear to be worthwhile because they could focus the attention of both scholars and practitioners on leaders’ behaviors in precisely those circumstances where what they do is most consequential for system outcomes.
Question 2: Not what are the traits of leaders, but how do leaders’ personal attributes interact with situational properties to shape outcomes?
Even though the authors of the articles in this issue differ in their reliance on traits as explanations of leader behavior (Zaccaro [2007, this issue] was the most sympathetic to trait-centric models; Sternberg [2007, this issue] emphasized the modifiability of leader traits; and Vroom and Jago [2007, this issue] gave greatest attention to situational features), they agree that neither trait nor situational attributes alone are sufficient to explain leader behavior and effectiveness. It is the interaction between traits and situations that counts.
The interactionist position is entirely sensible and acknowledges what has been found in decades of research on leadership. Still, it is a mark of the pervasiveness and power of dispositional thinking that the authors, without exception, offered readers their own lists of the leader traits that they believe to be most important. Moreover, with the exception of Vroom and Jago (2007, this issue), they offered relatively few suggestions about what the key leadership-relevant attributes of situations might be.
Although it is indisputable that any robust model of leadership must address the interaction between personal and situational attributes, how should that interaction be framed? The generally accepted strategy is to deploy a contingency model (for a review of such models, see Avolio, 2007, this issue). That is, if the direct relationship between some leader attribute X and some outcome measure Y is insubstantial, or if its size or direction changes in different settings, then a situational variable Z is posited as a moderator of the X–Y relationship. Aside from the statistical difficulties of documenting moderating effects (Lubinski & Humphreys, 1990), contingency models necessarily become quite complex as research identifies increasing numbers of potential moderators. In that inevitability lies the rub: The more complete and complex a contingency model of leadership, the less conceptually elegant and practically useful it is. Moreover, if the contingency involves the actual behavior of a leader, as is the case for many of the models discussed in these articles, a level of online processing by the leader is required that can exceed human cognitive capabilities (Gigerenzer, 1999; Simon, 1990).
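To make that contingency logic concrete, the standard specification is a moderated regression. The following is a minimal sketch in generic notation, not a model advanced by any of the articles in this issue:

\[
Y = \beta_0 + \beta_1 X + \beta_2 Z + \beta_3 (X \times Z) + \varepsilon
\]

Here a reliably nonzero \(\beta_3\) indicates that the situational variable Z moderates the relationship between the leader attribute X and the outcome Y. The complexity problem follows directly: each additional moderator adds interaction terms that must be estimated and replicated, and, if the model is to guide practice, tracked by the leader in real time.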
The systems theorists’ notion of equifinality (Katz & Kahn, 1978, p. 30) offers one possible strategy for circumventing the inherent difficulties of contingency models. Equifinality posits that there are many different ways that an open system (such as a person, a group, or an organization) can behave and still achieve the same outcome. When applied to leadership, equifinality implies that different leaders can behave in their own quite idiosyncratic ways and still get key leadership tasks accomplished. Rather than try to tailor their behaviors or styles to some set of contingent prescriptions, then, excellent leaders know how they prefer to operate, what they are able to do easily and well, and what they can accomplish only with difficulty if at all. They may never have heard of the principle of equifinality, but they behave in accord with it. This approach, perhaps, could extract psychologists from overreliance on either fixed traits or complex contingencies in leadership studies—especially if scholars take seriously the proposal by Avolio (2007, this issue) that robust leadership theories must acknowledge the reality that leader behavior is shaped by multiple factors operating at different levels of analysis. Although scholars have not yet carried out the conceptual or empirical work that would be required to explore the application of the principle of equifinality to leader behavior, the effort just might generate nontrivial advances in how we construe, study, and practice leadership.
Question 3: Not do there exist common dimensions on which all leaders can be arrayed, but are good and poor leadership qualitatively different phenomena?
As noted by the authors of the articles in this issue, leadership scholars have devoted considerable effort over the decades to identifying dimensions that reliably summarize and describe leader behavior and style. The most prominent of these, of course, are “Initiation of Structure” and “Consideration,” which emerged from the Ohio State Studies (Fleishman, 1973). Any leader can be assigned a score in the two-dimensional space defined by these dimensions, on the basis of self-reports and/or the ratings of others. A great deal of research has been conducted using leaders’ standing on these dimensions to assess both (a) the impact of leader behavior on subordinates and on unit performance and, more recently, (b) the impact of subordinate behavior and contextual conditions on leader behavior itself. The aspiration has been to identify those leadership behaviors and styles that are most appropriate and effective under various conditions.
The scores of leaders on such dimensions can range from “low” to “high” (in practice, of course, actual numerical scores are computed). But what if good and poor leadership actually were qualitatively different phenomena, if there were no single dimension on which both good and poor leaders could be meaningfully arrayed? That possibility is not as unlikely as it may seem. In fact, there are many social and psychological phenomena for which two different systems are required to distinguish one extreme from the other. Positive and negative affect, for example, appear to involve different neural systems. Rewards have qualitatively different effects on organisms than do punishments. The prospect of losing resources is qualitatively different from the prospect of a gain. And those who study human competencies compare excellent performers with average performers rather than with poor performers precisely because demonstrating competence invariably involves different processes than does behaving incompetently.
The same asymmetry may operate for leadership. Research by Ginnett (1993) on the leadership of airline captains, for example, showed that leaders who had been identified by their peers as excellent crew leaders used their authority to accomplish three generic functions (bounding the crew as a performing unit, helping the crew come to terms with its task, and establishing basic norms of conduct for the team). Leaders who had been identified as poor crew leaders, by contrast, did not merely fail to accomplish these three leadership functions; instead, they all exhibited some kind of difficulty with control issues (for example, being overcontrolling, or undercontrolling, or vacillating between the two). Poor leaders were not individuals with low scores on the same dimensions on which good leaders excelled; instead, they exhibited entirely different patterns of behavior.
As Bennis (2007, this issue) noted, there is increasing interest these days in the dynamics of “bad” leadership. What has been learned thus far is consistent with the possibility that good and bad leadership may be qualitatively different phenomena (Kellerman, 2004). That possibility is further reinforced by Sternberg’s (2007, this issue) proposal that wisdom, defined as the leader’s use of his or her intelligence, creativity, and knowledge to promote the common good, is a key ingredient of effective leadership. Unsuccessful leaders, Sternberg suggested, do not merely lack wisdom; they also fall victim to a series of cognitive fallacies that effective leaders do not. Further research on the special and separate dynamics that characterize good and poor leadership, each as contrasted with “average” leadership or with no leadership at all, may well bring to the surface insights about leadership that otherwise would remain unnoticed.
Question 4: Not how do leaders and followers differ, but how can leadership models be reframed so they treat all system members as both leaders and followers?
The authors of several of the articles in this issue made the point that leaders must have followers. Although certainly correct, that assertion also implicitly reinforces the traditional view, discussed by Avolio (2007, this issue), that leaders act and followers mainly react. The opposite is true as well, however: Leaders also are followers, and followers also exhibit leadership.
There are few, if any, organizational or political leaders who have unchecked authority. Each boss also is a subordinate—even chief executives who lead entire organizations invariably report to some higher-standing person or group. This reality means that people who hold formal leadership positions must continuously chart a course between what essentially is a covert coup (acting as if one’s own leader need not know what one is doing) and abdication (mindlessly passing on to one’s subordinates whatever is received from above). It can take a good measure of skill and personal maturity to balance between one’s simultaneous roles as leader and as follower, and the dynamics of managing that balance may deserve more research attention than they have thus far received.
Moreover, as Bennis (2007, this issue) noted, every follower is, at least potentially, also a leader. This fact was empirically illustrated in our recent study of analytic teams in the U.S. intelligence community (Hackman & O’Connor, 2004). Data about the time allocation of the teams’ leaders showed that they spent most of their time structuring the work, running external interference, and coaching individual employees. Of all the leader activities we assessed, working directly with their teams received the least attention. That fact opened up many opportunities for peer leadership among rank-and-file team members. And it turned out that the amount of peer coaching members provided one another correlated more strongly with our criterion of team effectiveness (r = .82) than did any other variable we measured. Clearly, most of the hands-on leadership these teams received was provided by members themselves—and to good effect.
To the extent that leadership and followership are inextricably bound up with one another, the distinction between leaders and followers becomes blurred and the whole idea of “shared leadership” takes on a new meaning. In this view, shared leadership is far more than just a partnership or the use of a “participative” style. Instead, it raises the possibility, first suggested decades ago by McGrath (1962), that anyone who fulfills critical system functions, or who arranges for them to be fulfilled, is exhibiting leadership. The functional approach to leadership is the one that we find most intellectually agreeable, and we have written at some length about its implications for the leading of task-performing teams (Hackman, 2002; Hackman & Wageman, 2005; Wageman & Mannix, 1998). It remains to be seen whether the functional approach also is useful in understanding the leadership of larger and more complex entities such as whole organizations or nations.
As the authors of several articles in this issue have noted, psychologists devoted considerable attention in the early decades of leadership research to identifying the attributes that distinguish leaders from nonleaders (i.e., followers). Indeed, Zaccaro (2007, this issue) argued that the same traits that differentiate leaders and followers also contribute to a person’s effectiveness in enacting the leadership role. We concur that much is known about who is likely to become a leader, but we suggest that it was not psychologists who were mainly responsible for generating this knowledge—it was, instead, our friends from one level up, the sociologists. If one wants to know who is likely to occupy a position of formal leadership, there is no better place to look than the opportunity structure of society. Or, to put it more colloquially: If you want to be king, your best bet is to be the son of a king or queen.
Although people who occupy leadership roles certainly have more latitude to lead than do followers, one does not have to be in a leadership position to be in a position to provide leadership. Indeed, among the most interesting, and occasionally inspiring, varieties of leadership we have observed is that provided by followers, especially followers who are unlikely ever to be selected for formal leadership positions.
Question 5: Not what should be taught in leadership courses, but how can leaders be helped to learn?
The articles in this issue document that all leaders have mental models that guide their actions. Because these models are abstracted gradually over time from observations, experience, and trial and error, they risk overfocusing on especially salient features of the leadership situation. Thus, the behavior of another leader one has observed, or especially vivid personal episodes, or the dispositions of a particularly difficult boss or subordinate, may become more central in a leader’s mental model than is actually warranted.
Ideally, leaders would be motivated to behave in ways that foster their own continuous learning from their experiences. Sternberg (2007, this issue) proposed that such learning is far more readily accomplished than would be suggested by leadership models that emphasize the importance of fixed traits or capabilities. Yet, as Sternberg also noted, continuous learning almost always requires that leaders overcome inherently self-limiting aspects of their existing mental models. Because such models become so well learned that they are virtually automatic, leaders may not even be aware of the degree to which their models are shaping their leadership behaviors. For this reason, Vroom and Jago (2007, this issue) suggested that leadership training must both bring to the surface trainees’ own preferred leadership strategies and then explore the conditions under which those strategies are and are not appropriate.
Any personal leadership model is certain to be flawed or incomplete in some significant way and therefore certain to spawn occasional errors or failures. Because such models are implicit, however, they are rarely recognized as having contributed to the failure, and a leader’s response is more likely to be defensive (e.g., blaming chance or others for what happened) than learning oriented (e.g., inspecting the assumptions that guided the behavior that generated the failure).
Avolio (2007, this issue) suggested that new research is needed to fully understand how leaders learn from their experiences, especially when they are coping with crises. We go further and suggest that error and failure provide far more opportunities for learning than do success and achievement, precisely because failures generate data that can be mined for insight into how one’s assumptions and models of action might be improved. Overcoming the impulse to reason defensively, however, can be a significant personal challenge. It necessarily involves asking anxiety-arousing questions (e.g., about the validity of deeply held assumptions or about personal flaws in diagnosis or execution), gathering data that can help answer those questions, and then altering one’s mental models and behavioral routines. As Argyris (1991) has shown, such activities are neither natural nor comfortable. Moreover, they are likely to be especially challenging for senior leaders, who, precisely because they have track records of leadership success, may have limited experience in learning how to learn from error and failure.
Leading well, therefore, may require a considerable degree of emotional maturity in dealing with one’s own and others’ anxieties. Emotionally mature leaders are willing and able to move toward anxiety-arousing states of affairs in the interest of learning about them, rather than moving away from them to get anxieties reduced as quickly as possible. Moreover, such leaders are able to inhibit impulses to act (e.g., to correct an emerging problem or to exploit a suddenly appearing opportunity) until more data have appeared or until system members become open to the contemplated intervention. Sometimes it is even necessary for leaders to engage in actions that temporarily raise anxieties, including their own, to lay the groundwork for subsequent interventions that seek to foster learning or change.
Unlike the cognitive and behavioral leadership challenges addressed in the articles in this issue, emotional maturity may be better viewed as a long-term developmental task than as something that can be systematically taught. Emotional learning cannot take place in the abstract or by analyzing a case of someone else’s failure. Instead, it involves working on real problems in safe environments with the explicit support of others. Only to the extent that leader development programs take on the considerable challenge of providing such settings are they likely to be helpful to leaders both in developing their own learning habits and in providing models for those they lead to pursue their own continuous learning.
(Originally published: http://nrs.harvard.edu/urn-3:HUL.InstRepos:3228648; some material presented here is based on previous work by the authors (Hackman, 2002; Hackman & Wageman, 2005)…)