Emotivist Value-Claims and Artificial Intelligence
Application of the Discipline of Emotivist Value-Claim Correction to AI discourse. Focus: identification and correction of value-claims embedded in pro- and anti-AI rhetoric.
I. Domain Definition
Public discourse about AI is saturated with evaluative claims such as:
- “AI is good”
- “AI is dangerous”
- “AI is dehumanizing”
- “AI is the future”
These claims typically function as expressions of approval or disapproval presented as moral judgments, rather than as truth-apt propositions grounded in a correct value structure.
II. Structural Form
All emotivist AI claims follow the same pattern:
AI (external instrument) → assigned value → treated as moral fact
This produces two dominant orientations:
- Pro-AI Emotivism (approval)
- Anti-AI Emotivism (disapproval)
III. Pro-AI Emotivist Claims
Typical expressions:
- “AI is amazing”
- “AI is good for humanity”
- “AI empowers people”
Embedded Proposition:
AI-generated outcomes are good
Underlying Structure:
- Efficiency → treated as value
- Convenience → treated as value
- Innovation → treated as value
Core Error:
External outcomes (speed, scale, capability) treated as genuine goods
IV. Anti-AI Emotivist Claims
Typical expressions:
- “AI is evil”
- “AI is dehumanizing”
- “AI will destroy meaning”
Embedded Proposition:
AI-related outcomes are bad
Underlying Structure:
- Loss of control → treated as evil
- Disruption → treated as evil
- Emotional aversion → treated as evidence
Core Error:
External risks and reactions treated as moral evils
V. Shared Emotivist Error
Both positions commit the same structural mistake:
They assign value to an external instrument and its outcomes
The difference between them lies only in attitude (positive vs. negative), not in logical structure.
VI. Stoic Reclassification
Under the internalist value structure:
AI = external instrument → indifferent
Therefore:
- AI is not good
- AI is not evil
Only the use of AI involves value, and that value resides in:
the agent’s judgment and action
VII. Correct Evaluation Framework
The correct question is not:
“Is AI good or bad?”
But rather:
“Is my use of AI in accordance with reason and virtue?”
Evaluation shifts to:
- wisdom (correct judgment)
- justice (proper use affecting others)
- self-command (discipline of reliance and use)
VIII. Operational Protocol
Step 1 — Detect Claim
“AI is good/bad”
Step 2 — Extract Proposition
AI (external) is good/evil
Step 3 — Category Check
AI = external
Step 4 — Correspondence Test
False
Step 5 — Refuse Assent
Reject value attribution
Step 6 — Re-articulation
AI is an external tool. My good lies in correct use.
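The six steps above are a reasoning discipline rather than an algorithm, but their flow can be sketched as a small illustrative function. Everything here (the function name, the list of evaluative terms, the string matching) is an invented simplification for demonstration, not part of the protocol itself:

```python
def correct_value_claim(claim: str) -> str:
    """Illustrative sketch of the six-step protocol (Section VIII)."""
    # Step 1 — Detect claim: does it attribute good/evil to AI itself?
    # (Crude keyword matching, purely for illustration.)
    evaluative_terms = ("good", "bad", "evil", "amazing",
                       "dangerous", "dehumanizing")
    if not any(term in claim.lower() for term in evaluative_terms):
        return claim  # no value-claim detected; nothing to correct

    # Step 2 — Extract proposition: "AI (external) is good/evil".
    # Step 3 — Category check: AI is an external instrument.
    category = "external"

    # Step 4 — Correspondence test: externals carry no intrinsic value,
    # so the extracted proposition fails the test.
    corresponds = (category != "external")

    # Step 5 — Refuse assent to the value attribution, and
    # Step 6 — Re-articulate in the corrected form.
    if not corresponds:
        return "AI is an external tool. My good lies in correct use."
    return claim


print(correct_value_claim("AI is good"))
print(correct_value_claim("AI summarizes documents quickly"))
```

The first call returns the corrected re-articulation; the second passes through unchanged because it makes a factual claim about capability rather than an evaluative one.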
IX. Why AI Discourse Becomes Emotivist
AI amplifies emotivism because it:
- affects large-scale outcomes
- triggers strong reactions (hope/fear)
- operates in uncertainty
This produces:
high-intensity value-claims without grounding
X. Final Formulation
Emotivist value-claims about AI—whether positive or negative—consist in assigning good or evil to an external instrument and its outcomes. The Stoic correction is to reject all such attributions and to relocate value entirely in the rational use of the instrument by the agent.
Bottom Line
AI is not morally charged in itself.
The only moral question is the correctness of the judgment and action governing its use.