Stoic News

By Dave Kelly

Wednesday, March 25, 2026

The One Thing You Cannot Outsource


A Stoic reading of Enrique Dans, “AI is creating the first generation of cognitively outsourced humans,” Fast Company, March 25, 2026.


Enrique Dans opens with a familiar inventory. We outsourced memory to search engines. We outsourced navigation to GPS. We outsourced social coordination to platforms that decide what we see and when we respond. Each transfer was so gradual that it barely registered. Now, he argues, we are outsourcing something categorically different: not a peripheral cognitive function but the labor of forming a judgment before expressing one. The generation now growing up with generative AI may be the first in human history for whom thought itself — the act of examining an impression and deciding what to assent to — is routinely delegated to a tool.

The article is correct in its diagnosis. It is incomplete in its explanation of why the diagnosis is serious. Stoic philosophy, and specifically the framework developed by Grant C. Sterling, supplies what is missing.


What the Stoics Were Actually Protecting

Sterling’s framework begins with a claim about identity. A person’s true self is constituted by the rational faculty alone — the prohairesis, the capacity for deliberate choice. Everything else (the body, reputation, wealth, social outcomes, the opinions of others) is external to the self. Not unimportant in a practical sense, but external in the strict philosophical sense: not you.

From this follows the control dichotomy that governs the entire system. The only thing genuinely within your control is whether you assent to an impression or withhold assent. That act — the act of judgment — is not one function among others. It is the constitutive act of the self. Sterling puts it directly: “Choosing whether or not to assent to impressions is the only thing in our control … and yet, everything critical to leading the best possible life is contained in that one act. All our desires, all our emotions, all our actions are tied to assenting to impressions. If I get my assents right, then I have guaranteed eudaimonia. If I get one wrong, I cannot have eudaimonia.”

Notice what this means for Dans’s thesis. When he writes that we are now outsourcing “thought itself,” he is describing, in secular and empirical terms, the outsourcing of the one thing that Stoic philosophy identifies as the self. Previous cognitive outsourcing was peripheral. Outsourcing memory to Google leaves the faculty of judgment intact. Outsourcing navigation to GPS leaves the faculty of judgment intact. Outsourcing the formation of judgment to a generative AI does not leave the faculty of judgment intact. It vacates it.


The Correspondence Failure Hidden in the Enthusiasm

Dans notes a specific danger in the AI productivity narrative: the temptation to confuse frictionless output with understanding, and fluent answers with earned judgment. The research he cites confirms the pattern empirically. Higher confidence in generative AI correlates with less critical thinking. Greater AI dependence correlates with lower critical thinking. Performance gains from AI use should not be confused with learning.

Sterling’s framework names this pattern precisely. It is a Correspondence Failure — a false assent. The agent receives an impression: I have produced a fluent, confident answer; therefore I have understood the problem and exercised good judgment. The impression is false. The answer was generated by a tool. The judgment was not formed by the agent. The assent to the impression “I have reasoned well” does not correspond to reality.

The mechanism Sterling describes for how false impressions become habitual is directly relevant here. When an agent assents to a false impression, that type of impression becomes more common and more compelling over time. Character is built — or degraded — by the cumulative pattern of assent. Repeated assent to the impression “the AI’s output is my judgment” gradually replaces the real act of judgment with the appearance of it. The agent’s rational faculty does not atrophy all at once. It atrophies one delegated assent at a time.


Preferred Indifferent, Not Genuine Good

This is not an argument against using AI. Sterling’s framework handles this cleanly through the doctrine of preferred indifferents. Some externals are appropriate objects to aim at, though they are not genuinely good. A well-functioning AI tool is a preferred indifferent. It is rational to use it, prefer it, and direct effort toward using it well. What is irrational — what constitutes a false value judgment — is treating the tool’s output as a substitute for the only thing that is genuinely good: the correct exercise of the rational faculty itself.

The distinction is not merely philosophical. It has a practical shape. The agent who uses AI as a thinking instrument — who brings his own formed judgment to the tool, evaluates its output, and assents or withholds assent based on his own examination — is using a preferred indifferent correctly. The agent who presents AI output as his own judgment, or who forgoes forming a judgment in advance because the tool will produce something plausible, has misclassified the tool. He has treated a preferred indifferent as though it were the genuine good it cannot be.


The Structural Problem Dans Identifies

Dans frames his concern as a strategic mistake: treating AI as a substitute for judgment rather than a tool to sharpen it. The Stoic framework agrees with the practical conclusion but gives a more fundamental account of why it is a mistake. It is not primarily a strategic error. It is a false value judgment about what the self is and what the good consists in.

If the self just is the rational faculty, and if the rational faculty just is the capacity for deliberate assent, then outsourcing that capacity is not a productivity decision with unfortunate side effects. It is a decision about whether to exist as a rational agent at all. Dans is worried about a generation that will be less capable of critical thinking. Sterling’s framework identifies the deeper worry: a generation that has progressively abandoned the one activity that constitutes them as persons.

Epictetus states the stakes without softening them. “We will never achieve eudaimonia by holding on to the old view and making some little modifications — that will only make the chains more comfortable.” The chains, in this case, are the false impressions that AI-assisted output is equivalent to earned judgment, that frictionless production is equivalent to understanding, that delegated assent is equivalent to the genuine article.


What Correct Use Looks Like

Sterling’s prescriptive framework, applied to this situation, produces a clear action-structure. The agent who wants to use AI correctly must, at minimum, do the following.

First, form his own judgment before consulting the tool. The impression should arrive, be examined, and receive a provisional assent or withholding of assent before the AI is engaged. This preserves the act of judgment as the agent’s own act. The tool then operates on a formed position rather than substituting for a position never formed.

Second, audit the tool’s output against his own examination. The output is an impression, and it arrives carrying a value component: the temptation to assent simply because it is fluent and confident. The discipline of assent — sunkatathesis — applies to AI outputs exactly as it applies to any other impression. Does the output correspond to reality? Has the agent verified this independently? Fluency is not correspondence.

Third, apply the reserve clause. The agent acts toward preferred indifferents — including the goal of producing good work with AI assistance — with the understanding that outcomes are outside his control. This prevents the false assent that would occur if he identified his success with the tool’s performance rather than with the correctness of his own engagement.


Conclusion

Dans is right that something categorically new is happening. The earlier cognitive outsourcing left the rational faculty untouched. What is being outsourced now is the act that the Stoics identified as the self’s constitutive activity. The empirical research he cites confirms the damage in measurable cognitive terms. The Stoic framework explains why the damage is not merely a performance deficit but a failure at the level of what a human being fundamentally is.

The technology is not the problem. The false value judgment is the problem — specifically, the impression that a tool’s output is equivalent to one’s own judgment, and the repeated assent to that impression that progressively displaces the real thing. Correct use of AI, on this framework, begins with the recognition that there is exactly one thing no tool can perform on behalf of its user: the act of assent itself.

That act is not one cognitive function among many that can be rationally delegated when more efficient alternatives appear. It is the only thing, in the precise Stoic sense, that is ever genuinely ours.


Grant C. Sterling is Professor of Philosophy at Eastern Illinois University. The framework applied here rests on Sterling’s Core Stoicism and his ISF writings. A prior Claude session reduced nine excerpts from those writings to a 14-proposition logical summary. Dave Kelly then expanded that summary — through closer examination of the nine excerpts and additional Sterling ISF messages — to produce the 58 Unified Stoic Propositions. The Sterling Logic Engine (v3.1), which operationalizes the 58 Propositions as a decision instrument, is Kelly’s synthesis.

A note on method: Posts on this blog are researched, directed, and editorially governed by Dave Kelly. The philosophical framework applied in all analyses is Grant C. Sterling's, developed over decades and documented by Kelly. Claude (Anthropic) is normally used to render final prose from Kelly's direction and the supplied corpus. Analytical judgments are Kelly's. Sentences are Claude's. Both facts are true.
