Tom Passarelli's Blog
foundations

The Consistency Gate: Substrate-Cancellation Detection (V₁)

Editor’s note: This essay predates Running Comes First but has been revised into alignment with its process-first framing. The core claim remains the same: contradiction is not a problem for the inference engine to tolerate, but a failed input condition that should be caught before inference begins.

Why Paraconsistent Logic Has No Use Case That Classical Logic With a Consistency Gate Does Not Handle With Strictly Greater Inferential Power

Abstract

Paraconsistent logic exists to solve a problem that should never reach the inference engine.

Classical explosion — from a contradiction, anything follows — is usually treated as a defect in classical consequence. It is not. It is the correct behavior of an engine fed input that failed a precondition. The defect is architectural: standard presentations of classical logic place no validation layer between syntactic well-formedness and proof-theoretic consequence.

This paper identifies that missing layer, specifies it formally, and argues that once it is added, paraconsistent consequence relations have no remaining logical use case. Every reasoning task paraconsistency is supposed to handle can be handled by a gated classical architecture with strictly greater inferential power.

The Architecture of Evaluation

Reasoning is not one operation. It is a sequence of evaluations, each with its own domain and preconditions.

Layer 1: Syntactic evaluation. Input: raw expressions (strings of symbols). Operation: check whether the expression is generated by the grammar’s production rules. Output: well-formed formula or rejection. This layer partitions expressions into parseable and unparseable.

Layer 2: Proof-theoretic evaluation. Input: well-formed formulas, a premise set, and inference rules. Operation: determine whether a derivation exists from premises to conclusion. Output: derivable or not derivable. This layer partitions well-formed formulas into consequences and non-consequences of the premises.

Layer 3: Semantic evaluation. Input: well-formed formulas and an interpretation. Operation: compute truth value under the interpretation. Output: true or false. This layer partitions well-formed formulas into those that hold and those that do not under a given assignment.

Each layer’s value is entirely in its ability to discriminate. A layer that assigns the same output to every input has collapsed. It performs no work beyond what the previous layer already performed.

All three layers presuppose a more primitive running: the process must carry its target through strongly enough for it to remain identifiable, distinguishable, and evaluable at all. In the earlier essays this was framed as a consistency floor. In the newer framing it is better understood as a self-sustaining process whose stable appearances include identity, distinction, and binary verdict. The terminology shifts. The architectural point does not. Reasoning only works where the relevant preconditions for discrimination are intact.

Explosion, Correctly Located

The standard presentation of explosion (ex contradictione quodlibet, ECQ) is as follows. Given a premise set containing P and ¬P, for any arbitrary formula Q:

  1. P (premise)
  2. P ∨ Q (disjunction introduction on 1)
  3. ¬P (premise)
  4. Q (disjunctive syllogism on 2 and 3)

Since Q was arbitrary and no property of Q was used, the derivation works for every well-formed formula. Layer 2 assigns “derivable” to every input. It has collapsed — it maps its entire domain to a single output value. It no longer discriminates. It performs no work beyond what layer 1 already performed.
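The four-step derivation can be sketched mechanically. This is a minimal illustration, not a proof system: atoms are encoded as strings with a leading "~" for negation (an assumption of this sketch, not the paper's notation), and the two rules appear as the two steps the derivation actually uses.

```python
# Minimal sketch of the four-step explosion derivation.
# Encoding assumption: atoms are strings, negation is a leading "~".

def negate(p):
    """Syntactic negation on the string encoding."""
    return p[1:] if p.startswith("~") else "~" + p

def explode(p, not_p, q):
    """From the contradictory pair {p, ¬p}, derive an arbitrary q."""
    assert not_p == negate(p), "premises must be a contradictory pair"
    disjunction = ("or", p, q)      # step 2: disjunction introduction on p
    # step 4: disjunctive syllogism — (p ∨ q) together with ¬p yields q
    left, right = disjunction[1], disjunction[2]
    assert not_p == negate(left)
    return right

# q is arbitrary: no property of q is ever inspected by the derivation.
for q in ("rain", "~rain", "unicorns"):
    assert explode("P", "~P", q) == q
```

The loop makes the collapse visible: the same four steps succeed for every formula, so "derivable" stops discriminating.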

The standard diagnosis: the inference engine is too powerful. Disjunctive syllogism, in particular, is the rule that enables the final step. Paraconsistent logic’s response is to restrict the engine — reject or modify inference rules — so that the collapse does not propagate.

This diagnosis is wrong. The inference rules are not the locus of failure. The failure occurs before any inference rule fires.

The Missing Layer

Consider what the premise set {P, ¬P} asserts. It asserts that some proposition is both the case and not the case. This is an evaluation state that does not execute. The state it describes — something jointly required to hold and not hold — fails the preconditions under which evaluation can discriminate.

“Does not execute” is used here in the technical sense developed in Passarelli (2026): the evaluation operator is undefined on that input because no terminating evaluation output exists. The evaluation state the string describes does not resolve into a judgment. Consequence is therefore partial, with that input outside its domain. The non-executability is what justifies the inadmissibility — the gate enforces a structural property of evaluation itself, not an arbitrary policy preference.

{P, ¬P} belongs to the same family of failures as “this statement is false” — expressible as a string, but the evaluation it requests does not terminate. It belongs to the same family as “nothing exists” — the string parses, but the state it describes requires the removal of the conditions required to describe it. In all three cases, the expression is syntactically well-formed but the evaluation state it specifies is non-terminating.

{P, ¬P} is two individually well-formed formulas. Each passes syntactic evaluation independently. But their conjunction describes a state that cannot resolve to an evaluation output. Classical logic passes any set of individually well-formed formulas directly to the inference engine. There is no checkpoint between “each formula is syntactically legal” and “begin deriving consequences.” The premise set is validated formula-by-formula at layer 1 and then handed to layer 2 as a set without any validation of the set as a whole.

This is the architectural defect. Not the inference rules. Not disjunctive syllogism. The absence of a validation gate between syntactic evaluation and proof-theoretic evaluation.

Formal Specification

The fix is a new evaluation layer — layer 1.5 — inserted between syntactic evaluation and proof-theoretic evaluation. Its specification is formal.

Let WFF be the set of well-formed formulas (the output class of layer 1). Let V be a consistency validator on premise sets:

V: 𝒫(WFF) → {pass, fail(Δ)}

where 𝒫(WFF) is the powerset of WFF — all possible subsets — and, for an input premise set Γ, fail returns a witness contradiction Δ ⊆ Γ (for instance, {φ, ¬φ}).

Classical consequence ⊢cl is not a total function on arbitrary subsets of WFF. It is a partial function, defined only on validated premise sets:

Γ ⊢cl ψ is defined if and only if V(Γ) = pass.

This is the same kind of domain restriction that syntax imposes on proof theory. A grammar does not “modify” proof-theoretic evaluation by rejecting malformed strings. It specifies the domain on which proof-theoretic evaluation is defined. Consistency validation specifies the domain on which consequence is defined.

When V(Γ) = fail(Δ), the system does not return a weakened consequence relation. It does not return “some safe subset of consequences.” It returns a diagnostic object identifying Δ and removes Δ from the active reasoning context. Reasoning proceeds, classically and without modification, on the remaining validated premises Γ \ Δ (provided Γ \ Δ itself passes validation).

The system’s evaluation function is:

Eval(Γ, ψ):
  if V(Γ) = pass: return Γ ⊢cl ψ (classical derivability check)
  if V(Γ) = fail(Δ): return Diagnostic(Δ); proceed with Eval(Γ \ Δ, ψ) if needed
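The specification above can be made runnable under simplifying assumptions: premises are literals (strings, with "~" marking negation), and the classical engine is abstracted as a callable `derives(premises, query)` — both the encoding and the names here are illustrative, not part of the formal specification.

```python
# A runnable sketch of the gated Eval. Assumptions: literal premises
# encoded as strings ("~" marks negation); the classical engine is an
# opaque callable that is only ever handed validated premise sets.

def validator(premises):
    """V: P(WFF) -> ("pass", None) | ("fail", Δ), Δ a witness pair."""
    for p in premises:
        neg = p[1:] if p.startswith("~") else "~" + p
        if neg in premises:
            return ("fail", frozenset({p, neg}))
    return ("pass", None)

def eval_gate(premises, query, derives, diagnostics=None):
    """Gated evaluation: surface contradictions, then run classically."""
    diagnostics = [] if diagnostics is None else diagnostics
    verdict, delta = validator(premises)
    if verdict == "pass":
        return derives(premises, query), diagnostics
    diagnostics.append(delta)                 # diagnostic object, not silence
    return eval_gate(premises - delta, query, derives, diagnostics)

# Toy engine: a literal "follows" iff it is in the premise set.
membership = lambda gamma, psi: psi in gamma

result, diags = eval_gate({"a", "~a", "b"}, "b", membership)
assert result is True and diags == [frozenset({"a", "~a"})]
```

Note that `validator` and `eval_gate` sit entirely outside the engine: `derives` is unmodified and never sees the contradictory pair.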

This architecture has the following properties:

Full classical power on valid input. Every classical inference rule — modus ponens, disjunction introduction, disjunctive syllogism, all of them — is available for every premise set that passes validation. No inferential cost is imposed on consistent reasoning.

Explosion is structurally prevented. The precondition of explosion — a contradictory premise set reaching the engine — is never met. Explosion is not disabled. It is starved of input.

Contradictions are surfaced, not buried. The diagnostic object identifies exactly which premises conflict. The system does not silently route around the problem. It flags the problem and holds it for resolution.

The system does not halt. Premises not involved in the contradiction pass through the gate. The engine runs on everything it can trust. Only tasks depending on the contested premises are flagged as blocked.

Credulous Reasoning Without Paraconsistency

A natural objection: the gated architecture refuses to derive any conclusion that depends on an unresolved contradiction. But sometimes one wants to explore what follows from each side of a contradiction — credulous or conservative reasoning over inconsistent corpora — without first resolving the conflict. This, the objection goes, is a residual job that paraconsistency can do and the gate cannot.

The gate handles this trivially. When the validator identifies the witness contradiction Δ = {φ, ¬φ}, it has already located the pair. The system generates two consistent views:

View₁ = (Γ \ Δ) ∪ {φ}
View₂ = (Γ \ Δ) ∪ {¬φ}

Each view is consistent. Each passes validation. The classical engine runs on each with full power — modus ponens, disjunction introduction, disjunctive syllogism, everything. The system produces two complete sets of consequences: one for each side of the contradiction. Both are labeled. Both are presented. The downstream consumer sees exactly what follows from each resolution without the conflict being resolved.
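View generation is a set operation, not an inference step. A sketch, again assuming string literals with "~" as negation (an illustrative encoding):

```python
# Sketch of view generation from a witness pair {φ, ¬φ}.
# Encoding assumption: literals are strings, "~" marks negation.

def negate(p):
    return p[1:] if p.startswith("~") else "~" + p

def split_views(premises, phi):
    """Return View1 = (Γ \ Δ) ∪ {φ} and View2 = (Γ \ Δ) ∪ {¬φ}."""
    delta = {phi, negate(phi)}
    base = set(premises) - delta
    return base | {phi}, base | {negate(phi)}

v1, v2 = split_views({"phi", "~phi", "p", "q"}, "phi")
assert v1 == {"phi", "p", "q"}
assert v2 == {"~phi", "p", "q"}
```

Each returned view is consistent by construction, so each passes the gate and receives the full classical engine.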

This is not paraconsistency. It is two classical runs on two consistent views. It requires no modification to the engine. No inference rule is sacrificed. And the output is strictly better than what any paraconsistent system produces, because each branch has the full classical engine — including every rule paraconsistency gave up to avoid explosion. Paraconsistency’s credulous reasoning runs on a weakened engine. The gate’s credulous reasoning runs on the full engine, twice.

For contradictions involving more than one pair, the same approach scales: enumerate the maximal consistent subsets of the premise set, run classical inference on each, present the results. The computational cost of enumeration grows, but this is an engineering constraint, not a logical one. The engine is never weakened. The inferential power of each branch is never reduced. And the contradiction is visible — each branch is explicitly labeled as contingent on a particular resolution of the conflict.
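The enumeration step can be sketched by brute force over literal sets. This is deliberately the naive algorithm — exponential, as the paragraph above concedes — because the point is the architecture, not the search strategy; the function name and encoding are illustrative.

```python
# Brute-force enumeration of maximal consistent subsets over string
# literals ("~" marks negation). Exponential by design: the engineering
# constraint lives here, not in the logic.
from itertools import combinations

def consistent(s):
    return not any(("~" + p) in s for p in s if not p.startswith("~"))

def maximal_consistent_subsets(premises):
    found = []
    items = sorted(premises)
    for size in range(len(items), 0, -1):      # largest subsets first
        for combo in combinations(items, size):
            s = frozenset(combo)
            if consistent(s) and not any(s <= m for m in found):
                found.append(s)
    return found

# Two independent conflicts yield four branches — one per joint resolution.
mcs = maximal_consistent_subsets({"a", "~a", "b", "~b"})
assert len(mcs) == 4 and all(consistent(m) for m in mcs)
```

Each returned subset is a labeled branch: the classical engine runs on each at full power, and no branch's conclusions are mixed with another's.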

If a single set of conclusions is required rather than labeled branches, the system applies an aggregation policy over the branch-results: intersection (skeptical — only conclusions supported by every branch), union (credulous — conclusions supported by any branch), or a prioritized selection (preferred repair — conclusions from the most trusted branch). These policies are not inference. They are downstream decisions about which inferred conclusions to present. The inference has already happened — classically, at full power, on each branch. The aggregation is a selection layer.
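The three policies reduce to set operations over already-inferred branch results, which makes the claim that aggregation "is not inference" concrete — nothing below derives anything. Names here are illustrative:

```python
# Sketch of the aggregation layer as pure selection over branch results.
# No inference occurs here: inputs are conclusions already derived
# classically, one set per consistent branch.

def aggregate(branch_results, policy, preferred=0):
    """Select which branch-wise conclusions to present downstream."""
    if policy == "skeptical":                  # intersection of all branches
        return set.intersection(*map(set, branch_results))
    if policy == "credulous":                  # union across branches
        return set.union(*map(set, branch_results))
    if policy == "preferred":                  # most trusted branch only
        return set(branch_results[preferred])
    raise ValueError(f"unknown policy: {policy}")

branches = [{"q", "x"}, {"r", "x"}]            # one result set per view
assert aggregate(branches, "skeptical") == {"x"}
assert aggregate(branches, "credulous") == {"q", "r", "x"}
assert aggregate(branches, "preferred", 1) == {"r", "x"}
```

Swapping policies touches only this function; the engine and the branch structure are untouched, which is the separation the paragraph above argues for.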

Paraconsistent consequence relations bake aggregation into inference. They fuse the question “what follows from what” with the question “what to present when sources conflict” into a single modified consequence relation. The gated architecture separates them. The consequence relation handles inference. A separate layer handles aggregation. This separation is architecturally superior: different aggregation policies can be swapped without modifying the engine, the inference on each branch is fully classical, and the branch structure is preserved for inspection even after aggregation. Paraconsistency’s fusion of inference and aggregation is a design flaw, not a feature.

A more sophisticated objection: paraconsistency can sometimes derive conclusions that no single consistent view supports. Consider a premise set containing φ, ¬φ, φ → q, ¬φ → r, and (q ∧ r) → s. A paraconsistent system that blocks explosion might derive q (from φ), r (from ¬φ), then q ∧ r, then s — all in one run. The gated architecture, keeping branch labels honest, derives q only on the φ-branch and r only on the ¬φ-branch. Since q and r never co-occur in a single consistent view, s is not derived on either branch.

This looks like paraconsistency “out-infers” the gated system. It does not. The paraconsistent derivation of s requires treating q and r as jointly available premises. But q is contingent on φ being true and r is contingent on ¬φ being true. These are conclusions from incompatible resolutions of the same contradiction. Combining them drops provenance — it forgets which branch each conclusion depends on — and treats branch-contingent outputs as if they were unconditionally established. Under the framework of this paper, that is not inference. It is a provenance-forgetting aggregation smuggled into the consequence relation. It is exactly the fusion of inference and conflict-resolution policy that the gate architecture forbids.

If branch labels are maintained, s is correctly not derived — because there is no consistent state of affairs in which both q and r hold given that q depends on φ and r depends on ¬φ. If branch labels are dropped to obtain s, the system has reintroduced the core confusion: treating conclusions from incompatible premises as jointly assertible. Any “extra conclusion” a paraconsistent consequence relation produces beyond the branch-wise classical engine is either a label-dropping mix across incompatible views (aggregation disguised as inference) or a computational shortcut standing in for enumerating views (a heuristic, not a logical capability). Neither constitutes a use case for a non-classical consequence relation.

A final retreat: paraconsistency might serve as a computationally tractable one-shot approximation that avoids enumerating repairs. This concedes the logical point and retreats to engineering. Even on engineering grounds, the retreat fails. Paraconsistency as an efficiency shortcut is a lossy approximation. It trades inferential power — permanently, for every derivation — for speed. Every conclusion it draws, it draws with a weakened engine. The full architecture draws the same conclusions and more, with more power, and surfaces which branch supports what. Trading inferential power for computational convenience is an engineering tradeoff someone might choose to make. It does not constitute a use case for paraconsistent logic. It constitutes a use case for a heuristic — a fast, lossy approximation of something the correct architecture does better. A JPEG is not a use case that makes lossless image formats unnecessary. A paraconsistent shortcut is not a use case that makes classical logic with a consistency gate unnecessary.

There is no reasoning task that requires a non-classical consequence relation. Every task paraconsistency performs can be performed by running the unmodified classical engine on consistent views generated from the diagnosed contradiction. The gate does not refuse to explore consequences of unresolved conflict. It explores them with more power than paraconsistency has available.

Operational Definitions: Rejection vs. Tolerance

Two terms require operational definitions to prevent equivocation.

Tolerate inconsistency: continue deriving conclusions from a premise set that contains an identified contradiction. The contradiction remains in the premise set. Inference rules or semantic principles are modified or restricted so that the contradiction does not propagate to arbitrary conclusions. This is what paraconsistent logics do.

Reject inconsistency: refuse to derive any conclusion from any premise set that contains an identified contradiction. Return a diagnostic object. Proceed only on validated subsets from which the contradiction has been removed. This is what the consistency gate does.

Under these definitions, the gated system is inconsistency-rejecting. It does not reason in the presence of inconsistency. It ensures inconsistency is absent before reasoning begins. The fact that the system continues operating on consistent subsets of the original premise set is not tolerance. It is the same behavior as a system that receives a fresh, consistent premise set. The engine does not know or care that a contradiction was detected upstream.

The Universal Tradeoff

The relationship between inconsistent premises and classical inference is not a contingent feature of particular paraconsistent systems. It is a structural constraint.

The following three properties cannot be jointly satisfied:

  1. The premise set contains a contradiction ({φ, ¬φ} for some φ).
  2. Every classical inference rule is available.
  3. The consequence relation is non-trivial (not every formula is derivable).

Any two can be maintained. All three cannot. This is provable: the four-step explosion derivation (disjunction introduction followed by disjunctive syllogism) uses only standard classical rules and produces an arbitrary conclusion from any contradiction. The derivation is valid in any system that has both rules and admits the contradictory premises.

Every paraconsistent system resolves this trilemma by giving up property (2): some classical inference pattern, structural rule, or semantic principle is restricted or reinterpreted to prevent the explosion derivation from going through. The specific sacrifice varies by system — some reject disjunctive syllogism, some modify the structural rules governing how premises are used, some redefine the semantics of negation or disjunction so that the explosion proof no longer applies. But a sacrifice exists in every case. There is no system that admits inconsistent premises, retains all classical inference rules, and avoids triviality. That combination is not available.

The gated architecture resolves the trilemma by giving up property (1): contradictory premises do not enter the consequence relation. Properties (2) and (3) are maintained without restriction. Every classical rule is available. The consequence relation is non-trivial. No inferential sacrifice is made.

The choice between these strategies is the choice between weakening the engine and filtering the input. Paraconsistency weakens the engine — permanently, for all derivations, including the vast majority that involve no contradictions. The gate filters the input — at validation time, with zero effect on the engine’s behavior for consistent premise sets.

Decidability and the Engineering Boundary

A necessary clarification: consistency checking is computationally hard. For propositional logic, consistency is satisfiability, which is NP-complete; certifying that a set is inconsistent is correspondingly co-NP-complete. For first-order logic, consistency is undecidable in general — there is no algorithm that always terminates with the correct answer for an arbitrary premise set.

This does not invalidate the architecture. It means the architecture’s specification and its implementation are separate concerns.

The specification says: no inconsistent premise set reaches the engine. This is a logical constraint — a precondition on the domain of the consequence relation.

The implementation is approximate. Consistency can be checked incrementally at insertion time. Each new formula is checked against the existing set before admission. Subsystems can be isolated and checked independently. Contradiction detection can be bounded, probabilistic, or heuristic, flagging likely inconsistencies for review. In practice, most real-world premise sets have enough structure — typed variables, domain constraints, finite domains — that consistency checking is tractable for the relevant fragments.
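The incremental strategy can be sketched for the simplest fragment — literal stores — where each insertion is checked against what is already admitted, so the full set is never re-validated (class and method names here are illustrative assumptions):

```python
# Sketch of insertion-time gating: each new literal is checked against
# the admitted store, so consistency is maintained incrementally rather
# than re-validated wholesale. Encoding: strings, "~" marks negation.

class GatedStore:
    def __init__(self):
        self.premises = set()
        self.diagnostic_queue = []             # flagged conflicts, held for review

    def insert(self, literal):
        """Admit the literal only if it conflicts with nothing admitted."""
        neg = literal[1:] if literal.startswith("~") else "~" + literal
        if neg in self.premises:
            self.diagnostic_queue.append((literal, neg))
            return False                       # rejected and flagged, not buried
        self.premises.add(literal)
        return True

store = GatedStore()
assert store.insert("a") and store.insert("b")
assert not store.insert("~a")                  # caught at the gate
assert store.premises == {"a", "b"}
assert store.diagnostic_queue == [("~a", "a")]
```

For richer fragments the per-insertion check becomes a satisfiability call over the admitted set plus the candidate, but the shape is the same: O(check) per insertion instead of a monolithic validation pass.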

The gap between specification and implementation is an engineering problem, not a logical one. The same gap exists for syntactic evaluation: parsing is decidable for context-free grammars but undecidable for arbitrary grammars. Nobody treats this as a reason to abolish syntactic validation. They build parsers for the grammars they use.

Paraconsistency responds to computational difficulty by giving up on the precondition: since we cannot always check consistency, redesign the engine to function without it. The alternative is: since we cannot always check consistency perfectly, check it as well as we can and improve the checking. The first response permanently weakens the engine for all input. The second invests in better validation while keeping the engine at full power. These are different engineering philosophies with different tradeoff profiles. This paper advocates the second.

Connection to Evaluation Foundations

The argument rests on a claim about what evaluation requires, developed fully in Passarelli (2026).

Evaluation is the process by which a proposition is assessed and a judgment is produced. This process has preconditions: identity (the proposition must be itself and not something else), distinction (the possible outputs must be distinguishable), and consistency (the state being evaluated must be one that can in principle resolve). A state that is both P and not-P does not resolve. It is not pending. It is not indeterminate. It is an input that does not meet the preconditions of the evaluation process.

These are boundary conditions, not axioms. Whatever counts as questioning, reasoning, or explaining is already operating inside what it is trying to examine. The floor holds itself up.

Paraconsistent logic does not operate beneath these conditions. It operates on top of them. The metalanguage in which paraconsistent semantics is defined uses identity, distinction, and bivalent evaluation. The non-bivalent or contradiction-tolerant features exist at the object level. The meta-level — the level at which the system’s own behavior is certified — is classical.

The standard defense of this arrangement is: “Of course the metalanguage is classical. Every non-classical logic is studied using a classical metatheory. That is how formal investigation works.” This defense is correct as a description of methodology. It does not answer the specific challenge this paper raises.

The challenge is not “you used classical tools.” The challenge concerns what the classical meta-level reveals about the nature of the object-level constructs. True and false, following Passarelli (2026), are evaluation outputs — exit conditions of a process — not symbols in a set that can be extended by adding a third member. A formal system can define a notation with three symbols and call them truth values. But the process of certifying that the notation behaves correctly — the meta-level evaluation — still terminates in two outputs: the system works as defined, or it does not. The three-valued notation at the object level has not extended evaluation. It has introduced a bookkeeping device whose correctness is certified by the same two-valued evaluation it purports to move beyond.

This does not show that non-classical object languages are incoherent as formal systems. They are coherent notations. The claim is that they do not demonstrate what paraconsistency’s strongest philosophical proponents — specifically, dialetheists who assert that some contradictions are true — claim: that evaluation itself can tolerate contradiction. The object language tolerates contradiction-shaped strings. The evaluation that certifies the object language does not tolerate contradiction. Object-level non-explosion does not establish that evaluation has been extended. It establishes a formal bookkeeping choice whose correctness is verified by the same bivalent evaluation it claims to have moved beyond.

This paper rejects dialetheism — the thesis that some contradictions are true — on these grounds. “True” is an evaluation exit-condition. It is what evaluation returns when it succeeds. A “true contradiction” would require evaluation to return both “succeeds” and “fails” for the same input. That is not a third evaluation output. It is the non-executability identified throughout this paper: a state that does not resolve into a judgment. Truth-value gluts are not additional exits from the evaluation process. They are a notation for non-execution, mistaken for a result.

Connection to the Broader Framework

Within the larger body of work, this paper identifies one species of non-executable input: contradictory premise sets that destroy the discriminative conditions needed for consequence to do work. This is V₁.

Later work identifies a second species: cyclic evaluative requests that fail to ground out. That is V₂.

The shared idea is architectural. A formal system should not treat every syntactically legal input as executable. There must be a boundary between expression and execution.

Contradiction and cyclic anchor-failure are different failure modes, but the response is the same in form: do not romanticize the output of an ungated engine. Validate the input first.

The Category Error

The motivation for paraconsistency often begins with the observation: “contradictions appear in our information.” This is true. Contradictions appear in databases, in testimony, in scientific data, in legal codes, in multi-source intelligence with no privileged adjudicator.

But “contradictions appear in our information” does not entail “contradictions are possible states of affairs that require a logic to reason about.” Information can be wrong. Records can conflict. Sources can disagree. Jurisdictions can assert incompatible rules. None of this means reality contains true contradictions. It means representations of reality contain errors or unresolved conflicts. The correct response to errors in representation is correction. The correct response to unresolved conflicts is adjudication, harmonization, or explicit acknowledgment that resolution is pending.

The case of multi-source evidence with no current adjudicator does not help paraconsistency. Multiple sources disagreeing with no available way to determine which is correct is not a true contradiction in reality. It is an unresolved epistemic state. The diagnostic queue is designed for exactly this: the contradiction sits in the queue, flagged, until resolution is possible. That might take a long time. It might never resolve within the system’s operational lifetime. The system still does not reason on it as if both sides were true. “We do not know which source is right” is not “both sources are right and reality is contradictory.” Conflicting legal regimes are the same: two jurisdictions asserting incompatible rules is a conflict in the normative system, not a true contradiction in the structure of reality. The correct response is harmonization, adjudication, or explicit flagging — all of which are diagnostic operations, not tolerance.

Meanwhile, the credulous reasoning section of this paper shows that the gated architecture can explore the consequences of each side of an unresolved contradiction using full classical power, without tolerating the contradiction as a single premise set and without weakening the engine. The “we need to reason about the conflict” use case is handled — with more inferential power than paraconsistency provides.

The move from “our information contains contradictions” to “we need a contradiction-tolerant logic” is a category error. It confuses the properties of a representation with the properties of the thing represented. A map with two conflicting labels for the same location does not require a new geometry that tolerates contradictory positions. It requires someone to check which label is correct — or, if no one can check yet, a flag on the map that says “conflict: unresolved” and a system that can show what follows if each label is correct.

Compact Argument

Classical logic’s inference rules are sound. Explosion is not a defect in those rules. It is the correct propagation of an error state produced by input that violated a precondition.

The precondition was never enforced because classical logic’s standard formulation lacks a validation layer between syntactic evaluation (layer 1, which checks individual formulas) and proof-theoretic evaluation (layer 2, which derives consequences from premise sets). Each formula is checked individually. The set is never checked as a whole.

The fix: a consistency gate (layer 1.5) that validates premise sets for joint consistency before they reach the inference engine. Formally, consequence is a partial function defined only on consistent premise sets. Inconsistent sets are not in its domain, the same way malformed strings are not in the domain of proof-theoretic evaluation. When the gate detects a contradiction, it returns a diagnostic object, removes the contradictory pair, and forwards the remaining consistent premises to the classical engine, which runs at full power without modification. When exploration of both sides of an unresolved contradiction is needed, the gate generates two consistent views — one containing each member of the contradictory pair — and runs the full classical engine on each.

This resolves the trilemma of inconsistent premises, classical inference rules, and non-triviality. The three properties cannot be jointly satisfied. Paraconsistency gives up classical inference rules. The gate gives up inconsistent premises. Only the gate preserves the full classical engine.

Paraconsistent logic responds to the same problem by redefining the consequence relation to remain total on inconsistent input. This requires restricting some classical inference patterns, structural rules, or semantic principles — the specific sacrifice varies by system, but a sacrifice exists in every case. The restriction applies permanently and globally, weakening every derivation the system performs, including the vast majority that involve no contradictions. Paraconsistency buries the contradiction inside the reasoning process rather than surfacing it for resolution.

The gated architecture preserves every classical inference rule, imposes zero inferential cost on consistent premise sets, surfaces contradictions as diagnostic objects, continues operating on everything the contradiction does not touch, and can explore the consequences of each side of an unresolved conflict with full classical power. Any conclusion a paraconsistent system derives beyond what branch-wise classical inference produces is the result of provenance-forgetting aggregation — combining conclusions from incompatible branches as if they were jointly established — which is conflict-resolution policy disguised as inference. The gated architecture separates inference from aggregation, keeping both explicit. It is inconsistency-rejecting, not inconsistency-tolerant. It renders paraconsistent logic unnecessary.

Objections and Replies

“The gate just halts the system. That’s refusal mode, not a substitute for paraconsistency.”

The gate does not halt the system. It halts inference on the specific contradictory pair. Every premise not involved in the contradiction passes through the gate and receives the full classical engine. Downstream tasks depending on uncontested premises proceed normally. Tasks depending on the contradictory pair are flagged as blocked pending resolution. This is precision, not refusal.

“Quarantine is functionally the same as paraconsistency.”

Paraconsistency modifies the consequence relation — it changes what follows from what. The gate does not touch the consequence relation. It restricts the domain of the consequence relation. The consequence relation is classical. It still validates explosion. It still validates disjunctive syllogism. The reason explosion does not fire is that its precondition is never met. These are architecturally different: a firewall filters input while preserving the protocol; rewriting the protocol to tolerate malicious packets changes the protocol for all traffic.

“The gated system defines a non-classical consequence relation over the original inconsistent input.”

This objection formalizes the gated system as Cgate(Γ) = Ccl(Filter(Γ)), observes that Cgate(Γ) ≠ Ccl(Γ) when Γ is inconsistent, and concludes the consequence relation has been changed. The move fails because it presupposes that Γ is a legitimate argument to a consequence function. The entire point of the gate is that inconsistent Γ is not in the domain of consequence, the same way a malformed string is not in the domain of derivation. Wrapping the validation layer inside the consequence function and announcing the function has changed is a boundary-drawing error. The validation layer is outside the consequence function. It determines what reaches the consequence function. The function itself — classical, unmodified — only ever sees consistent input. One can always totalize a partial function by defining behavior on invalid inputs. The gate architecture refuses that totalization. That refusal is the thesis.
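
The refusal of totalization can be made concrete. A minimal sketch, assuming the same toy literal encoding: consequence as a partial function that raises a type error on inconsistent Γ rather than returning output on it:

```python
class InadmissibleInput(TypeError):
    """Inconsistent Γ is outside the domain of consequence."""

def consequences(premises):
    """Partial function: domain = consistent premise sets only.

    The body stands in for the classical closure; the guard is the
    boundary the objection tries to wrap inside the function.
    """
    for p in premises:
        neg = p[1:] if p.startswith("~") else "~" + p
        if neg in premises:
            raise InadmissibleInput("Γ failed the consistency precondition")
    return set(premises)  # stand-in for full classical closure

# consequences(["p", "~p"]) raises InadmissibleInput: the architecture
# declines to define C_gate(Γ) = C_cl(Filter(Γ)) on inconsistent Γ.
```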

“Your filter policy is a choice that corresponds to a known paraconsistent stance.”

Yes, it is a choice, in the same way that syntactic evaluation is a choice. The grammar’s production rules are choices. The choice to reject a malformed string is a policy. Nobody treats this as evidence that proof-theoretic evaluation has been “implicitly altered.” The grammar is a precondition. The consistency gate is a precondition. Preconditions are not modifications to the thing they gate.

“Paraconsistency can do credulous reasoning — derive what follows from each side of a contradiction. Your gate can’t.”

The gate does this with more power than paraconsistency. When the validator identifies {φ, ¬φ}, it generates two consistent views: Γ \ {¬φ} and Γ \ {φ}. The classical engine runs on each. Each run uses the full set of classical inference rules — including every rule paraconsistency sacrificed. The output is two complete, labeled sets of consequences, one per side of the contradiction. Every reasoning task paraconsistency performs on an inconsistent premise set with a weakened engine, the gate performs on consistent views with the full engine, twice.
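
Branch-wise classical inference can be sketched directly, assuming Horn-style rules and string literals (all names are illustrative):

```python
def views(premises, phi):
    """Given a contradiction {phi, ~phi}, return the two consistent views."""
    neg = "~" + phi
    return ([p for p in premises if p != neg],
            [p for p in premises if p != phi])

def close(facts, rules):
    """Forward chaining over Horn rules (body -> head), run to fixpoint.

    Stands in for the full classical engine: no rule is weakened.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

left, right = views(["phi", "~phi", "a"], "phi")
rules = [(["phi", "a"], "b"), (["~phi"], "c")]
# Two complete, labeled consequence sets, one per side of the contradiction.
results = {"keep phi": close(left, rules), "keep ~phi": close(right, rules)}
```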

“If you need a single output, you need an aggregation policy. That’s where paraconsistent consequence relations live.”

Aggregation is not inference. It is a selection layer downstream of inference. The inference has already happened — classically, at full power, on each branch. The question “which branch-results to present as a single output” is a policy decision, not a derivation. Paraconsistent consequence relations bake aggregation into the consequence relation, fusing inference with conflict-resolution policy. The gated architecture separates them: the consequence relation handles inference, a separate layer handles aggregation. This separation is cleaner, not weaker — different aggregation policies can be swapped without touching the engine, and the full branch structure is preserved for inspection.
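
The swappability claim can be illustrated with two hypothetical aggregation policies operating downstream of inference, on already-computed branch results:

```python
def skeptical(branches):
    """Accept only conclusions every branch supports."""
    return set.intersection(*(set(v) for v in branches.values()))

def credulous(branches):
    """Report every conclusion, labeled with the branches supporting it."""
    out = {}
    for label, conclusions in branches.items():
        for c in conclusions:
            out.setdefault(c, set()).add(label)
    return out

# Branch results from classical inference; aggregation never touches the engine.
branches = {"keep phi": {"a", "b"}, "keep ~phi": {"a", "c"}}
# skeptical(branches) keeps only "a"; credulous preserves provenance for b and c.
```

Either policy can be swapped for the other without changing a single inference rule, which is the separation the reply describes.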

“Paraconsistency is at least a computationally tractable one-shot approximation that avoids enumerating repairs.”

This concedes the logical point entirely and retreats to engineering. Even there, it fails. Paraconsistency as a shortcut is a lossy approximation — it trades inferential power for speed. Every conclusion it draws, it draws with a weakened engine. The full architecture draws the same conclusions and more, with more power, and surfaces which branch supports what. A JPEG is not a use case that makes lossless image formats unnecessary. A paraconsistent shortcut is not a use case that makes classical logic with a consistency gate unnecessary.

“Paraconsistency can derive conclusions no single consistent view supports — like s from (q ∧ r) → s where q and r come from opposite sides of a contradiction.”

This is the strongest-looking counterexample and it fails. The derivation of s requires treating q and r as jointly available. But q depends on φ and r depends on ¬φ. These are conclusions from incompatible resolutions. Combining them drops branch provenance and treats branch-contingent outputs as unconditionally established. That is not inference — it is provenance-forgetting aggregation smuggled into the consequence relation. If labels are maintained, s is correctly not derived. If labels are dropped to obtain s, the system has reintroduced the fusion of inference and conflict policy that the gate architecture forbids.
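
Label maintenance can be sketched by having each derived fact carry the set of branch choices it depends on; joining facts with incompatible provenance is then refused at the rule level (the encoding is illustrative):

```python
def join(fact_a, fact_b):
    """Combine two labeled facts only if their provenance is compatible.

    A label is the set of branch choices a fact depends on. Facts
    contingent on opposite sides of a contradiction must not be
    combined: their joint use is aggregation, not inference.
    """
    (a, la), (b, lb) = fact_a, fact_b
    for choice in la:
        neg = choice[1:] if choice.startswith("~") else "~" + choice
        if neg in lb:
            return None  # provenance conflict: refuse the join
    return (a, b), la | lb

q = ("q", {"phi"})    # q was derived on the phi branch
r = ("r", {"~phi"})   # r was derived on the ~phi branch
# join(q, r) is None, so (q and r) -> s never fires: s is correctly not derived.
```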

“Two conflicting records in a database both exist. That state executes.”

Two records existing is not {P, ¬P}. It is two data entries. The physical state — record A exists and record B exists — is perfectly consistent. The inconsistency arises only when the records are interpreted under a constraint. At that point, at least one of three things is wrong: record A, record B, or the constraint. All three are diagnostic conclusions leading to resolution. The data does not “execute as a contradiction.” It exists as data. The contradiction is a product of interpretation, and the correct response is to investigate which component of the interpretation is wrong.
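
One possible shape for the diagnostic response, assuming a hypothetical one-address-per-id constraint (all names here are illustrative):

```python
def diagnose(record_a, record_b, constraint):
    """Interpret two records under a constraint.

    On violation, return a diagnostic object naming the candidate
    faults, not a set of theorems: the conflict is routed to
    resolution, not to inference.
    """
    if constraint(record_a, record_b):
        return {"status": "consistent"}
    return {
        "status": "conflict",
        "suspects": ["record_a", "record_b", "constraint"],
        "note": "contradiction arises under interpretation, not in the data",
    }

# Hypothetical constraint: a customer id maps to a single address.
one_address = lambda a, b: a["id"] != b["id"] or a["addr"] == b["addr"]
report = diagnose({"id": 1, "addr": "Elm St"}, {"id": 1, "addr": "Oak St"}, one_address)
```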

“You must either halt, restrict inference, or revise. There is no fourth option.”

The gated system does not halt inference (it continues on uncontested premises), does not restrict inference (the engine runs full classical logic on its input), and is not merely a revision system (though revision of the contested premises is the expected downstream resolution). It filters input. Filtering input is what syntactic evaluation does. Nobody calls syntactic evaluation a “restriction on inference.” The consistency gate is the same operation at a different level.

“Consistency checking is undecidable. Your gate can’t always fire.”

Correct for first-order logic in full generality. The specification and the implementation are separate concerns. The specification — no inconsistent premise set reaches the engine — is a logical constraint. The implementation is approximate: incremental checking at insertion time, subsystem isolation, bounded or heuristic detection for the relevant fragment. The same gap exists for parsing: it is decidable for context-free grammars and undecidable for arbitrary grammars. Nobody abolishes syntactic validation because parsing is hard in general. They build parsers for the grammars they use.
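
An insertion-time check for the propositional-literal fragment might look like the following sketch. The class and encoding are illustrative, not a specification; the point is that the decidable fragment actually in use gets a cheap per-insert check:

```python
class PremiseStore:
    """Insertion-time consistency gate for a propositional-literal fragment.

    The general problem is undecidable; this implements the check for
    the fragment in use, validating each premise as it arrives so no
    inconsistent set ever accumulates.
    """
    def __init__(self):
        self.accepted = set()

    def insert(self, literal):
        neg = literal[1:] if literal.startswith("~") else "~" + literal
        if neg in self.accepted:
            return ("rejected", literal, neg)  # surfaced as a diagnostic
        self.accepted.add(literal)
        return ("accepted", literal)

store = PremiseStore()
store.insert("p")
store.insert("q")
# store.insert("~p") returns ("rejected", "~p", "p"); the store stays consistent.
```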

“The system is really inconsistency-tolerant, just implemented differently.”

Under the operational definitions given in this paper: to tolerate inconsistency is to continue deriving conclusions from a premise set that contains an identified contradiction. The gated system never does this. It removes the contradiction before reasoning begins. Calling this “tolerance” reverses the meaning of the architecture.

“Paraconsistency’s metalanguage being classical doesn’t undermine its object-level claims.”

The challenge is not “you used classical tools.” The challenge is that paraconsistency’s thesis — evaluation can tolerate contradiction — is certified by a meta-level evaluation that does not tolerate contradiction. The meta-level definitions either hold or they do not. The meta-level proofs are either valid or they are not. The three-valued or glut-valued notation at the object level is a bookkeeping device whose correctness is certified by the same two-valued evaluation it purports to move beyond. The object language tolerates contradiction-shaped strings. The evaluation that certifies the object language does not tolerate contradiction.

“You’ve declared inconsistent Γ inadmissible. That’s the disputed point, not a theorem.”

Yes, it is a policy choice. So is rejecting ill-formed strings. The policy is justified because the evaluation operation the consequence relation implements has preconditions — preconditions identified in Passarelli (2026) as boundary conditions, not axioms. A premise set that jointly asserts φ and ¬φ describes an evaluation state that does not terminate. “Derive consequences from it” is not an evaluation step — it is propagation of an error state. The correct output is a diagnostic object, not a set of theorems. Paraconsistent logics respond to failed validation by redefining consequence so it returns outputs anyway. The gate architecture treats failed validation as a type error and routes it to diagnosis, not inference.

“Dialetheism says some contradictions are true. Your framework just assumes otherwise.”

This paper rejects dialetheism on explicit grounds. “True” is an evaluation exit-condition — what evaluation returns when it succeeds. A “true contradiction” would require evaluation to return both “succeeds” and “fails” for the same input. That is not a third evaluation output. It is the non-executability identified throughout this paper: a state that does not resolve. Truth-value gluts are not additional exits from the evaluation process. They are a notation for non-execution, mistaken for a result. This is not an assumption. It is a consequence of what evaluation is, as developed in Passarelli (2026).

What Paraconsistency Got Right and What It Got Wrong

Paraconsistency correctly identified a real problem: classical logic’s behavior on contradictory input is useless. An inference engine that derives everything from a contradiction is doing no work. This observation is correct and important.

Paraconsistency incorrectly located the problem in the inference rules and proposed modifying the engine. The problem was always in the input. The engine was doing exactly what it should: propagating the incoherence of a premise set that should never have reached it. Explosion is not a bug. It is the correct diagnostic signal that a precondition has been violated. The correct response was always a gate, not a patch.

Author

Tom Passarelli

License

CC0. This work is in the public domain.