The Cost of Asking, “Is It True?”
On Fear, Evidence, and Moral Urgency
In highly charged public controversies, moral gravity can reshape the structure of discourse itself. The shift is subtle, and it does not always appear as censorship or dogma. Rather, it looks like urgency, solidarity, or moral clarity. More importantly, it feels right.
But what happens when urgency and moral certainty drift out of alignment with the facts, when the atmosphere becomes so charged that even asking “Is it true?” feels dangerous?
When that happens, standards by which claims are evaluated begin to shift. The social cost of questioning can replace the logical conditions of truth, as agreement starts to track moral alignment rather than evidence. This is the moment to ask, “What would count as enough evidence, and what would count against it?”
There is a simple idea of truth that was proposed in the 1930s by the Polish logician Alfred Tarski:
“Snow is white” is true if and only if snow is white.
It sounds almost banal, but read it again. The left-hand side mentions a sentence, named in quotation marks; the right-hand side uses that sentence to describe a state of affairs in the world. Here, the “if and only if” functions as a bridge that connects language to reality.
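The sentence about snow is one instance of a general schema. A standard rendering of Tarski’s Convention T, for any sentence φ of the language under study:

```latex
% Tarski's T-schema: the left-hand side names a sentence;
% the right-hand side uses that sentence to describe the world.
\text{``}\varphi\text{'' is true} \iff \varphi
```

Every instance of the schema ties the truth of a sentence to the way the world is, which is the discipline the rest of this essay appeals to.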
Tarski’s schema isn’t a practical way to investigate a claim or resolve moral disputes. But it matters because we have reached a point in contemporary discourse where what is real in the world is often doubted, especially when moral urgency is high.
In such situations, it would make sense to pause and ask, “Are there possible ways the world could be that would make a claim false?” If the answer is yes, the next question is, “Is it ok to even ask?”
The first question requires a logical check, and the second question requires courage. Both questions are important.
When Asking Feels Dangerous
It is possible for a claim to be logically falsifiable yet socially insulated, so that questioning it can feel lonely and often unsafe.
Consider a controversy in my hometown of Columbus, Ohio. Les Wexner’s long association with Jeffrey Epstein has received significant public attention. Epstein’s crimes were horrific, and Wexner has been subpoenaed in related proceedings. To my knowledge, he has not been criminally charged.
In 2011, Wexner made the largest single donation ever to The Ohio State University, and the university honored him by renaming the medical center and placing his name on university buildings. In light of Epstein’s crimes, however, there is growing demand that his name be removed.
That demand rests on an empirical claim: that Wexner knew of, facilitated, or bears responsibility for Epstein’s actions in a way that would justify institutional repudiation.
Under ordinary truth conditions, that claim would be true only if such knowledge, facilitation, or responsibility can be established.
The Cost of Questioning
I’ve been tempted to ask, “Isn’t removing the name without confirmation of guilt premature?” But I stop myself because I know how easily that question could be read as loyalty to power or indifference to survivors. In a climate like this, asking for evidence can look like moral failure.
When the expectation of evidence is itself treated as proof that someone does not care about what survivors have endured, we are in a condition I call normative override. It marks the point at which the demand that claims answer to the world is replaced by a demand for moral alignment, so that moral valence substitutes for the truth conditions a claim would ordinarily require.
Here’s what this shift looks like:
Is it true?
becomes
Is it wrong to question it?
Epstein’s crimes were so grave that hesitation can look like complicity. But moral seriousness cannot replace evidential evaluation. When questioning itself becomes suspect, decisions begin to reflect pressure rather than proof.
Why Silence Stabilizes
This shift from “is it true?” to “is it ok to ask?” is supported by basic human incentive structures. When publicly asking for evidentiary thresholds carries reputational cost — accusations of defending power, minimizing harm, or betraying victims — while remaining silent carries little cost, silence becomes the rational strategy. My hesitation to ask, “Is it really true about Wexner?” reflects this cost.
Under those conditions, conformity does not require enforcement. It emerges because deviation is costly.
You may remember the scene from A Beautiful Mind in which John Nash realizes that when each person’s choice depends on everyone else’s, a stable pattern can form. Once that pattern sets in, the incentive to act alone — even to improve the outcome — largely disappears.
Something similar happens in morally charged controversies. If asking for evidence carries reputational risk while silence does not, each person has reason to wait. Even if many privately believe standards should be articulated, no one wants to incur the penalty of being first. Silence becomes individually rational, and the pattern stabilizes.
That is incentive-stabilized conformity; it allows claims to solidify without being tested.
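The incentive structure behind that conformity can be made concrete with a toy two-player game. This is a sketch under simplified assumptions: the payoff numbers are illustrative, not empirical, and the function names are mine.

```python
# Toy model of incentive-stabilized conformity. Each of two observers
# either publicly asks for evidentiary standards ("ask") or stays
# silent. Everyone shares the benefit when standards are articulated,
# but asking carries a reputational cost, which falls when others ask too.

def payoff(me, other, benefit=1.0, solo_cost=3.0, shared_cost=0.5):
    """Payoff to `me`, given both players' actions (illustrative numbers)."""
    gain = benefit if "ask" in (me, other) else 0.0
    if me == "silent":
        return gain
    return gain - (shared_cost if other == "ask" else solo_cost)

def best_response(other):
    """The action that maximizes my payoff against the other's action."""
    return max(("ask", "silent"), key=lambda action: payoff(action, other))

# Silence is the best response no matter what the other person does...
print(best_response("silent"), best_response("ask"))  # silent silent

# ...even though mutual asking would leave everyone better off.
print(payoff("ask", "ask") > payoff("silent", "silent"))  # True
```

With these numbers, mutual silence is the unique equilibrium even though mutual asking Pareto-dominates it: conformity stabilizes without any enforcement, precisely because deviating alone is costly.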
Solidarity and Standards
There is a countervailing reality that explains how normative override can feel justified. Historically, survivors of sexual exploitation and abuse have often been dismissed or disbelieved. Miranda Fricker’s concept of “testimonial injustice” captures this phenomenon in which certain speakers suffer credibility deficits because of identity-based prejudice. Correcting those distortions is an epistemic requirement as much as a moral one. José Medina extends this analysis by arguing that those who experience structural harm often develop heightened sensitivity to the patterns that harm produces; thus, lived experience can sharpen perception.
I agree with Fricker and Medina: centering marginalized voices is essential, not least because it improves collective understanding. These are indispensable insights.
However, when the moral commitment to take survivors seriously is conflated with lowering evidentiary standards, truth begins to drift because claims about responsibility are accepted without specifying what evidence would confirm or disconfirm them.
Centering survivors does not require suspending truth conditions, as credibility and proof are separate categories. A survivor can be granted full moral seriousness while specific claims about responsibility are evaluated against articulated criteria. Requiring evidence entails specifying what must be established for a claim about responsibility to be true and what would count as revision; it is not about disbelieving survivors.
Standards and Legitimacy
Even when a university is not acting as a court, it is exercising judgment. But judgment requires articulated criteria and a process capable of applying them. Otherwise, action becomes symbolic rather than evaluative.
Exposure to allegations does not make us judges. Hearing a claim and determining responsibility are distinct acts, and institutional legitimacy depends on preserving that distinction. When it erodes, decisions begin to track moral intensity rather than the conditions that would make a claim true.
Tethered To The World
Tarski’s formulation is austere for a reason: it keeps claims tethered to the world. A statement is true if and only if the world is as the statement says it is. That discipline prevents moral urgency from substituting for evaluation.
Hannah Arendt warned that when people lose their grip on reality, they become easier to control. In morally charged contexts, the temptation to let seriousness stand in for evidence is powerful, but the discipline of truth is a precondition for justice. When this discipline breaks down, institutions lose credibility.
A culture capable of both solidarity and rigor does not need to choose between believing survivors and specifying evidence. It can do both.
Sources
Arendt, Hannah. The Origins of Totalitarianism. New ed. New York: Harcourt Brace, 1973.
Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press, 2007.
Medina, José. The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations. Oxford: Oxford University Press, 2013.
Nash, John. “Non-Cooperative Games.” Annals of Mathematics 54, no. 2 (1951): 286–295.
Tarski, Alfred. “The Semantic Conception of Truth and the Foundations of Semantics.” Philosophy and Phenomenological Research 4, no. 3 (1944): 341–375.

