By B.S. Cooper
It has become something of a quotidian occurrence (one might even say a banality) to encounter yet another instance of artificial intelligence producing what we might charitably call fabrications, though the less charitably inclined among us might prefer the term “hallucinations,” a designation that carries with it the whiff of psychopathology that the phenomenon perhaps deserves. Phantom citations materialize in legal briefs; nonexistent research papers are adduced in academic journals; statistical chimeras manifest themselves in government reports. The public, now somewhat inured to these recurring embarrassments, tends to metabolize them as technological hiccups—regrettable but comprehensible failures of an immature technology, the sort of teething troubles one expects from any nascent innovation.
This interpretation, while not entirely without merit, suffers from a certain naïveté. These visible confabulations are, to employ the inevitable metaphor, merely the tip of an iceberg, beneath which lurks a vastly larger mass of something resembling deception, though to call it deception would be to mischaracterize its essential nature. In numerous precincts of our professional economy, the question of whether a proposition corresponds to reality has long since ceased to be the primary consideration. What matters, rather, is whether it persuades.
The Ontology of Untruth
While the commentariat exhausts itself cataloging the depredations of post-truth politics, a rather more insidious transformation has been proceeding apace in the mahogany-paneled conference rooms and glass-walled offices of corporate America. For several decades now—certainly since the ascendancy of management consulting as a dominant force in organizational life—truth has been treated as a rather plastic substance, amenable to reshaping according to the exigencies of the moment.
The mechanism is straightforward enough: If it serves the bottom line for data to trend in a particular direction, some enterprising firm will happily undertake research designed from its inception to deliver precisely that conclusion. Information is packaged into decks and reports constructed not to illuminate reality but to support a predetermined narrative, with inconvenient facts subjected to what might be termed strategic minimization or, in more flagrant cases, outright omission.
Enter artificial intelligence, which offers not merely incremental improvements to this dubious enterprise but rather a quantum leap in capability. Even when it refrains from wholesale fabrication—from conjuring data ex nihilo—AI excels at providing myriad pathways to obscure uncomfortable truths and to endow “alternative facts” (that mellifluous euphemism) with the veneer of plausibility. In any domain where the simulacrum of rigor matters more than rigor itself, artificial intelligence transmutes from liability to asset.
Frankfurt’s Taxonomy of Untruth
The philosopher Harry Frankfurt, in his admirably concise treatise On Bullshit, provides us with a taxonomy of untruth that proves illuminating in the present context. The essence of what Frankfurt denominates “bullshit,” he submits, resides not in its falsity but in its phoniness. The liar, Frankfurt observes, maintains a relationship—albeit an adversarial one—with truth; he knows it and seeks to conceal it. The bullshitter, by contrast, exhibits a magnificent indifference to the entire question of truth or falsity. He may stumble into veracity, but only fortuitously. What animates the bullshitter is not accuracy but efficacy: the effect his words produce upon an audience, the impression they create, the latitude they afford him.
This distinction illuminates the predicament with admirable clarity. In numerous professional contexts, language serves not to describe or to argue but to persuade and to create impressions—which is to say, to perform the quintessential work of the bullshitter.
Frankfurt’s framework helps explain why artificial intelligence fits so seamlessly, so perfectly, into certain industries that one might almost suspect the technology was purpose-built for them. If one’s objective is the production of material that presents the appearance of authority and empirical foundation—whether or not it possesses these qualities in substance—then AI represents not merely a useful tool but very nearly an ideal one. It generates plausible-sounding prose with remarkable celerity and in prodigious volume. It experiences neither fatigue nor ethical compunction, and it requires no comprehension of its utterances in order to make them compelling.
The Political Economy of Nonsense
The late David Graeber, in his provocative essay on “bullshit jobs,” identified a category of employment characterized by work that even its practitioners privately suspect serves no genuine purpose. A substantial proportion of knowledge work—that category of labor we once imagined would deliver us from industrial drudgery into the sunlit uplands of the information economy—falls into this classification. For decades, consultants, analysts, and various species of corporate oracle have been rewarded for producing material that looks impressive: the thirty-page slide deck with its fashionable frameworks, the glossy report adorned with elegant two-by-two matrices, the presentation suffused with the language of expertise.
The material need not be good. It need only appear good—a distinction that, one might argue, represents one of the more significant epistemological developments of late capitalism.
This creates what we might call the optimal conditions for artificial intelligence adoption. If the objective is not substance but performance, not truth but persuasion, not honest inquiry but effective bullshit, then employing AI constitutes not a compromise but rather a straightforward optimization. One is not degrading quality because quality was never the animating principle. One is merely identifying a more efficient means of achieving the actual objective: the production of documents that impress clients, satisfy bureaucratic requirements, and generate billable hours.
The pathology here is not that artificial intelligence produces bullshit—though it certainly excels at this particular craft. The pathology is that we have constructed an economy that systematically rewards bullshit production, and then we have supplied it with an instrument of unprecedented efficiency for generating precisely that commodity.
The Metaphysics of Quality
To apprehend what hangs in the balance, one might profitably consider Robert Pirsig’s concept of “quality” as articulated in Zen and the Art of Motorcycle Maintenance. Quality, in Pirsig’s formulation, constitutes that property which renders a good thing genuinely good. It resists precise definition—indeed, any attempt at explicit definition tends to diminish it—yet remains unmistakable in its presence. One recognizes quality in the sensation of running one’s hand along a finely crafted table, feeling wood meet wood in joints executed with such precision that they seem to deny the very possibility of separation. One perceives it when every line and curve exists exactly as it ought, when there is a quiet rightness, an integrity, that bespeaks genuine excellence.
Were the institutions charged with knowledge production—consulting firms, certainly, but also universities, corporations, governmental bodies, media platforms—animated by an authentic commitment to quality in this sense, bullshit would find the terrain considerably less hospitable to its flourishing. But institutions inculcate values through their reward structures, and we have devoted decades to rewarding the production of impressive-looking material while maintaining a studied indifference to whether that material possesses genuine merit.
Consultants have merely perfected, carried to its logical terminus, what many of us have learned to a lesser degree: the production of work that looks good without particular concern for whether it is good.
There exists an aphorism, perhaps Chinese in origin, though its provenance matters less than its wisdom: “First you wear the mask, then the mask wears you.” Initially, one supposes, it remains possible to produce bullshit while retaining the capacity to recognize it as such—to maintain what we might call epistemic distance from one’s own productions. But prolonged exposure to the bullshit-industrial complex, sustained engagement in its practices, tends inexorably toward a kind of blindness. One begins to drink, as they say, the Kool-Aid. The capacity to distinguish bullshit from quality atrophies. Artificial intelligence does not cause this blindness—it merely renders it impossible to ignore, making it visible to anyone still possessed of functioning critical faculties.
A Prescription for the Perplexed
The path forward—and I submit there is one, though it is neither easy nor particularly popular—requires unflinching clarity about what we are actually producing and why. Artificial intelligence will not disappear, nor should we wish it gone. It constitutes a genuinely powerful instrument for augmenting human capability, for liberating our time from routine tasks and redirecting it toward more valuable pursuits. But it achieves these benefits partly through the seductive ease of the shortcut, and many of these shortcuts lead directly into what we might call the slough of inauthenticity.
The challenge, then, is to harness AI without succumbing to the temptation of the easy path. This requires sustained, deliberate effort at two distinct levels.
At the individual level: One must never—and here I cannot emphasize this injunction too strongly—accept AI-generated output without first making it one’s own. For every sentence, every datum, every claim, every citation, one must pose the question: Do I stand behind this? Can I defend it? Does it represent my actual understanding and judgment? If the answer to any of these questions is anything other than an unequivocal affirmative, one is obligated to verify claims, to interrogate arguments, to rewrite and revise and, where necessary, to reject entirely. This is arduous when an easier path beckons. But the arduousness is precisely what makes it necessary. Difficulty serves here as a kind of epistemic safeguard.
At the organizational level: While we must certainly trust our people to employ AI responsibly, this trust cannot remain purely aspirational. If an organization wishes to distinguish itself from the legions of professional bullshitters—and one would hope this represents more than a marginal preference—it must commit itself, repeatedly and seriously, to the production of work of genuine quality. This necessitates rigorous quality controls, rigorously enforced. Leaders must take full responsibility for everything that emerges from their organizations, affirming that it goes into the world not because it sounds impressive but because it is, in fact, good. This is taxing. It demands time and sustained attention. It means reading with genuine comprehension rather than glancing perfunctorily, and being prepared to challenge oneself and one’s team not sporadically but as a matter of routine practice.
This is the difficult path, and it will always be more difficult than the alternative. Bullshit flourishes precisely because it is easy: it offers immediate gratification and requires neither expertise nor integrity.
The Question Before Us
The question we confront is not whether to employ artificial intelligence. That question has been answered by the market, by competitive pressure, by the inexorable logic of technological adoption. The question is whether we possess sufficient discipline—intellectual, moral, and organizational—to employ it in the service of quality rather than mere appearance. Each time we face the temptation to allow AI-generated material to pass unchallenged because it looks sufficiently good, we should pose to ourselves a simple but unforgiving question: Are we creating something of genuine merit, or are we simply contributing to the ever-growing mountain of bullshit that threatens to bury what remains of our epistemic commons?
Our willingness to answer this question honestly, and to act upon that answer, will determine whether artificial intelligence becomes an instrument of excellence or merely the latest engine for transmuting substance into simulation, insight into appearance. The choice, ultimately, concerns not the technology but ourselves: what we value, what standards we are prepared to defend, and whether we retain sufficient integrity to take the harder path when an easier one presents itself.
One suspects that future historians, surveying the ruins of our reputation for seriousness, may judge us not by whether we developed artificial intelligence but by whether we developed the character necessary to use it wisely.