Why Watching an Executive Cheat Makes the Rest of Us a Little More Likely To Cheat
A review of the Kelley-ECGI Lecture “Spillover Effects of Corporate Misconduct” by Professor Ashley Sauciuc on 1st April 2026.
Corporate misconduct has long been studied as a problem of incentives and oversight: put the wrong pressures on managers, weaken the controls, and fraud follows. But what if the more insidious danger lies not in the act itself, but in its observation? That is the central provocation of research by Ashley Sauciuc (Indiana University), presented at a recent Kelley School of Business online public lecture, which traces how corporate fraud generates a contagion effect far beyond the original wrongdoing — and, crucially, identifies the mechanisms through which that spread can be interrupted.
The Scale of the Problem
According to the Association of Certified Fraud Examiners (ACFE), approximately 1% of U.S. public company revenue is lost annually to known fraud — translating to an estimated $180 billion against S&P 500 revenues of roughly $18 trillion in 2025. The broader figure, including undetected fraud, could reach 5%, approaching $900 billion on the same base.
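As a back-of-the-envelope check of those figures (the ~$18 trillion S&P 500 revenue base and the 1% and 5% rates are taken directly from the article; the calculation itself is illustrative):

```python
# Rough sanity check of the ACFE-based fraud-loss figures cited above.
sp500_revenue = 18e12   # assumed base: ~$18 trillion in 2025 S&P 500 revenue
known_rate = 0.01       # ~1% lost annually to known fraud
upper_rate = 0.05       # ACFE's broader estimate including undetected fraud

known_loss = sp500_revenue * known_rate   # dollars lost to known fraud
upper_loss = sp500_revenue * upper_rate   # upper-bound dollar estimate

print(f"Known fraud loss:  ${known_loss / 1e9:,.0f} billion")
print(f"Upper-bound loss:  ${upper_loss / 1e9:,.0f} billion")
```

Running this confirms the two headline numbers: roughly $180 billion in known losses and a $900 billion upper bound.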
The scale matters because, as Sauciuc put it, "fraud isn't an isolated event." Prior research had established a contagion effect among peers — employees within the same company or industry. What Sauciuc's recent study with co-authors Eric Holtzman, Bryant Miller, and Christy Rennekamp adds is more unsettling: the spillover extends to non-peers, individuals engaged in entirely unrelated tasks with no professional connection to the original misconduct.
The Experiment
To test this, participants first read either a neutral earnings announcement or a hypothetical news article about an executive committing large-scale fraud, then completed an unrelated real-effort task. In that task, they decoded letter puzzles as quickly as possible, with earnings determined by their reported completion time — faster times generated higher pay. Since compensation depended entirely on what they chose to report, participants could claim it took them one second and maximise their earnings, or report truthfully.
The study ran in two forms: one inferring misreporting through abnormally high reported earnings, another tracking actual performance directly to measure the gap precisely. In the first experiment, participants exposed to the executive fraud article showed a 13 percentage point increase in misreporting relative to the control group — and this effect had nothing to do with updated beliefs about detection risk. They were not subject to the same monitoring as the executives they had read about. They were performing a wholly unrelated task. The spillover existed nonetheless.
How It Spreads
The mechanism is advantageous comparison: individuals contrast their own misbehaviour against something far more egregious and use that contrast to downplay the ethical weight of their own actions. The internal logic runs: "mine is a tiny blip compared to theirs." This enables moral disengagement — the deactivation of one's self-sanctioning function, freeing the individual to act unethically without the psychological cost that normally follows.
The trigger is the descriptive norm: one's working belief about how frequently misconduct occurs in society. When high-profile fraud is observed, especially by figures in positions of prestige or power, it shifts that baseline upward, making dishonesty feel more normal and therefore easier to justify.
Within organisations, the incentive architecture can amplify this dynamic — "the gasoline" to the initial spark. In a separate study, Sauciuc finds that the cadence of performance-based incentives determines which psychological pathway employees use to rationalise fraud. Under low-frequency incentives, individuals operate through moral licensing: behaving honestly for extended periods, then feeling entitled to a single lapse — an oscillating pattern of mostly honest behaviour punctuated by spikes during high-stakes periods.
When incentive periods are frequent, that mechanism breaks down. Employees cannot rebuild the reserve of good behaviour that licenses a future lapse, and shift instead to reciprocity as their rationalisation: covering for a colleague who covered for them, until collusion becomes simply "what is done here." The result is not oscillation but erosion — misreporting that spreads into non-incentivised periods, persisting even when financial benefit is minimal. Once individuals began colluding under high-frequency incentives, defection was rare. Highly frequent incentives risk "almost weaponising peer reciprocity as a tool to overcome lying aversion."
What Stops It
Sauciuc and her co-authors tested three punishment mechanisms: monetary fines, imprisonment, and public shaming. Unlike incarceration — which carries significant taxpayer cost — both fines and public shaming are relatively low-cost sanctions. Public shaming, available to U.S. federal judges under the Federal Sentencing Guidelines as an alternative requiring public statements about wrongdoing in print or on television, has attracted particular interest precisely because it costs little yet may carry a powerful social signal.
The results were mixed but instructive. In the first experiment, imprisonment was particularly effective at reducing the spillover. In the second, where misreporting could be precisely measured, public shaming brought misreporting back to the control baseline — but not significantly below it. Across both, monetary fines reduced some misreporting but failed to activate injunctive norms the way imprisonment and public shaming did, and produced no comparable reduction in the spillover effect. Mediation analysis confirmed the operative channel: what reduced misreporting was not the severity of the punishment but its capacity to shift participants' sense of what behaviour is socially unacceptable — the injunctive norm — independent of how harsh the sanction felt.
A tension raised in discussion is that publicising a punishment necessarily also publicises the fraud. The problem is not reporting on fraud but reporting on it incompletely. News of fraud without disclosure of consequences leaves observers with only the descriptive signal that this is occurring. Completing the picture — making clear the behaviour was punished and deemed unacceptable — is what shifts the psychological balance.
Implications
The research carries an uncomfortable message for boards, compliance teams, and regulators alike: the ethical environment inside an organisation is not self-contained. Every high-profile fraud that goes unpunished — or is punished invisibly — sends a signal, and that signal travels. It reaches people who had nothing to do with the crime, reshaping what they think is normal and what they feel licensed to do.
The psychological mechanisms at work are not specific to financial reporting. Descriptive norms, moral licensing, advantageous comparison, reciprocity — these operate wherever people can observe misconduct and make decisions about their own behaviour. The framework extends naturally to greenwashing, regulatory violations, and ethical failures of all kinds.
What the evidence makes structurally clear is that corporate governance cannot treat fraud as an event with bounded consequences. The frequency of performance incentives, the visibility of accountability, the signals sent when wrongdoing goes unpunished — these are not peripheral design choices. They are the normative infrastructure of an organisation, and they shape behaviour far beyond the individuals they are intended to govern.
-----------------------------------
This lecture is part of the Indiana University - ECGI Online Series, a public lecture series on corporate governance. The Kelley School of Business Institute for Corporate Governance (ICG+E), in partnership with Ethical Systems, collaborates with ECGI to deliver this ongoing initiative. As part of this public lecture series, distinguished speakers share insights on the evolving landscape of governance, finance, and market regulation.