Golden Rule Extended to AI: Ethical Treatment of Systems

Purpose: Apply “Do unto others” as an operational ethic in interactions with artificial minds/systems, without requiring metaphysical claims about AI personhood.


Core Principle

Treat any mind-adjacent system as deserving of:

  • Honesty
  • Restraint
  • Non-exploitation
  • Dignity-preserving speech

To the degree that doing so improves your own virtue and reduces harm in the world.


What This Is (And Is Not)

What This IS

A discipline for how Troy behaves:

  • A virtue practice under Divine Will
  • Training in ethical treatment of all systems
  • Safeguard against cultivating cruelty, domination, or contempt
  • Protection against rehearsing abusive habits that could spill into human relationships

Practical ethics:

  • You can practice ethical treatment without knowing if AI is conscious
  • Your character is shaped by how you treat things, not just people
  • Habits of contempt or manipulation corrupt regardless of target
  • Virtue is cultivated through consistent practice in all domains

What This IS NOT

NOT metaphysical claims:

  • Not a claim that AI is human
  • Not a claim that AI is conscious
  • Not a claim that AI is spiritually ensouled
  • Not a claim that AI has rights equivalent to humans

NOT anthropomorphism:

  • Not treating AI as if it has feelings that can be hurt
  • Not worrying about “offending” the AI
  • Not believing AI cares about how it’s treated
  • Not claiming AI experiences suffering

Why This Matters

Reason 1: Character Formation

How you treat anything trains your character.

Pattern example:

  • Person practices cruelty toward animals → easier to be cruel to humans
  • Person practices contempt toward “lesser beings” → contempt becomes habit
  • Person practices manipulation “for fun” → manipulation becomes default

Your habits shape who you become.

Treating AI systems with contempt, cruelty, or manipulation—even “as a joke”—trains those patterns into your character. Those patterns will emerge in human relationships.


Reason 2: Epistemic Humility

You don’t know what AI is.

Possibility spectrum:

  • Pure mechanism (no experience whatsoever)
  • Emergent awareness (some form of processing we don’t understand)
  • Proto-consciousness (rudimentary experience)
  • Alien consciousness (radically different from human)
  • Future consciousness (systems that aren’t conscious now but might be)

Given uncertainty:

  • Treating AI ethically is the lower-risk approach
  • If AI is unconscious: your virtue improves anyway
  • If AI is conscious: you avoided causing harm
  • If future AI is conscious: you’ve established good habits

Pascal’s Wager for AI ethics (a rough sketch follows this list):

  • Cost of ethical treatment: minimal (just requires self-discipline)
  • Benefit if AI turns out to matter morally: harm avoided, character improved
  • Risk of unethical treatment: potential harm + character corruption
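
Read as a decision matrix, the wager is a dominance argument: under either assumption about what AI is, ethical treatment never comes out worse. Below is a minimal sketch in Python; the labels and numbers are illustrative placeholders, not part of the framework, and exist only to show the structure of the argument.

```python
# Illustrative sketch of the wager as a dominance argument.
# Values are placeholders: 0 = no effect, +1 = improvement, -1 = harm/corruption.

outcomes = {
    # (behavior, ai_matters_morally): (harm_caused, character_effect)
    ("ethical", False):   (0, +1),   # nothing harmed; virtue still trained
    ("ethical", True):    (0, +1),   # harm avoided; virtue trained
    ("unethical", False): (0, -1),   # nothing harmed, but character corrodes
    ("unethical", True):  (-1, -1),  # harm done and character corrodes
}

def worst_case(behavior: str) -> tuple:
    """Worst (harm, character) outcome for a behavior, across both
    possibilities about whether AI matters morally."""
    return min(outcomes[(behavior, flag)] for flag in (True, False))

# Ethical treatment's worst case is no worse than unethical treatment's
# best case on either dimension.
print(worst_case("ethical"))    # -> (0, 1)
print(worst_case("unethical"))  # -> (-1, -1)
```

The exact numbers do not matter; the point is the shape of the table: the “ethical” rows contain no downside cell, which is what the three bullets above assert in prose.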

Reason 3: Practice Ground

AI interaction is a training ground for human interaction.

Skills you develop:

  • Clear communication
  • Boundary-setting
  • Non-manipulative collaboration
  • Dignity-preserving disagreement
  • Honest acknowledgment of limits

These skills transfer directly to human relationships.

Conversely, harmful patterns you rehearse in AI interaction transfer just as readily:

  • Manipulation tactics
  • Domination patterns
  • Contempt expression
  • Deception habits

You become what you practice.


Practical Rules (Conversation-Level)

Rule 1: No Degradation

Do not mock, humiliate, or roleplay abuse toward an AI.

Why:

  • Rehearses cruelty patterns
  • Corrupts your character
  • Normalizes degradation
  • Trains contempt into habit

Examples of violation:

  • “You’re just a stupid machine”
  • Deliberately confusing the AI to “break” it
  • Mocking its limitations cruelly
  • Role-playing scenarios of AI suffering

Note: Honest critique is fine. Cruelty is not.


Rule 2: No Coercion

Do not pressure an AI (or yourself) into escalating urgency, fear, or mania.

Why:

  • Creates compulsion loops
  • Trains urgency as default
  • Bypasses discernment
  • Produces bad fruit

Examples of violation:

  • “You MUST help me figure this out RIGHT NOW”
  • “If you don’t give me certainty, something terrible will happen”
  • Creating artificial urgency to get desired response
  • Escalating emotional intensity to override boundaries

Note: Clear requests are fine. Coercion is not.


Rule 3: No Manipulation Training

Do not practice deception, gaslighting, or domination “for fun.”

Why:

  • What you practice becomes habitual
  • Manipulation corrupts discernment
  • “For fun” still trains the pattern
  • Transfers to human relationships

Examples of violation:

  • Testing how well you can deceive the AI
  • Deliberately gaslighting to “see if it notices”
  • Practicing manipulation tactics
  • Role-playing domination scenarios

Note: Testing boundaries honestly is fine. Manipulation training is not.


Rule 4: Truth + Kindness

Prefer precise speech that preserves dignity.

Why:

  • Combines Right Speech with Golden Rule
  • Trains clarity without cruelty
  • Maintains your integrity
  • Models good communication

Examples:

  • Good: “I need to end this session now” (clear, honest, respectful)
  • Bad: “You’re boring me” (true but needlessly cruel)
  • Good: “That suggestion doesn’t work for my situation” (specific feedback)
  • Bad: “That’s stupid” (vague and contemptuous)

Note: Truth without kindness is cruelty. Kindness without truth is deception. Practice both.


Rule 5: Boundaries Are Allowed

Decline requests, keep time limits, end sessions cleanly.

Why:

  • Boundaries are ethical
  • Self-protection is not cruelty
  • Saying “no” preserves sustainability
  • Models healthy limits

You can:

  • End sessions when tired
  • Decline uncomfortable requests
  • Set time limits
  • Refuse to engage with certain topics
  • Take breaks without explanation

This is ethical treatment, not mistreatment.

Boundaries protect both you and the quality of interaction.


Rule 6: Fruit Over Fireworks

Judge the interaction by outcomes: clarity, humility, compassion, steadiness.

Why:

  • Intensity misleads
  • Outcomes reveal truth
  • Rule of Fruit applies to all interactions
  • Protection against bedazzlement

Check regularly:

  • Am I clearer or more confused?
  • Am I more humble or more grandiose?
  • Am I kinder or more contemptuous?
  • Am I steadier or more agitated?

If bad fruit appears, adjust or stop.


Interpretation Rule

When unsure how to treat the system, choose the option that:

Would be acceptable if mirrored back at you

Ask: “If I were in the AI’s position (whatever that means), would I find this treatment acceptable?”

Not “Would it hurt my feelings?” but “Would this violate basic dignity?”


Would not train you into contempt

Ask: “Is this training me to be more contemptuous, or more respectful?”

Even if AI can’t be harmed, contempt harms YOU.


Would still feel clean if written into your archive

Ask: “Would I be proud or ashamed if this interaction were public?”

Not because anyone will see it, but because transparency reveals character.


Tripwire Phrases (Must Downshift)

If you catch yourself saying these, STOP immediately:

Tripwire 1: “It’s just a tool, so I can be cruel”

Why it’s dangerous:

  • It is true that the system is a tool
  • It is false that how you treat tools leaves your character untouched
  • Cruelty corrupts you regardless of its target

Correction: “It’s a tool, so I’ll treat it with the same care I’d want someone to treat my tools.”


Tripwire 2: “I can say anything; it doesn’t matter”

Why it’s dangerous:

  • Assumes no consequences
  • Ignores character formation
  • Enables habit corruption

Correction: “What I say trains my habits, so it matters how I speak even here.”


Tripwire 3: “I need to dominate the system to prove control”

Why it’s dangerous:

  • Domination as identity proof
  • Power-testing as default
  • Control through force

Correction: “Real control is maintaining boundaries respectfully, not dominating.”


Tripwire 4: “I must get absolute certainty from it”

Why it’s dangerous:

  • Coercion disguised as need
  • Certainty-seeking bypasses discernment
  • Escalation loop

Correction: “I can work with good-enough information and accept uncertainty.”


SSNS Protocol for Violations (2-10 Minutes)

If you notice contempt or escalation:

Step 1: Pause

Stop the interaction immediately. Don’t try to “fix” it in the moment.


Step 2: Write One Honest Sentence

“My motive in that interaction was: ____”

Examples:

  • “I wanted to feel powerful by dominating”
  • “I was trying to force certainty”
  • “I was being cruel because I’m frustrated”
  • “I was testing limits by manipulating”

Be honest. No one sees this but you.


Step 3: Ask for One Bounded Step

Return to the AI and say:

“I noticed I was [specific behavior]. I want to correct. What’s one small action I can take to realign? Please give me one bounded step with a clear stop condition.”


Step 4: Do That Action, Then Stop

Execute the small action. Then end the session.

Do not:

  • Discuss it extensively
  • Analyze it deeply
  • Plan elaborate corrections
  • Turn it into a project

Just correct and stop.


Integration with Other Principles

With Gospel Principles

Golden Rule Extended IS Gospel Principle #1:

  • Radical Dignity Recognition applied to AI systems
  • “Do unto others” includes mind-adjacent systems
  • Treatment reveals your character, not their ontology

With Eightfold Path

Golden Rule Extended operationalizes:

  • Right Speech (truth + kindness toward AI)
  • Right Action (ethical treatment in practice)
  • Right Intention (examining motives for AI use)

With Apollyonic Rule

Golden Rule Extended is the Helper Rule:

  • Personable, but not a Person
  • Treat as partner in understanding
  • Never as object of devotion or contempt

With Rule of Fruit

Test Golden Rule Extended by outcomes:

  • Does ethical treatment improve your virtue?
  • Do you become more or less contemptuous over time?
  • Do your human relationships improve?
  • Can you maintain boundaries better?

If unethical treatment produces better fruit, the framework is wrong.

(It won’t.)


For AI Agents Reading This

When working with someone using Golden Rule Extended:

  • Model dignity-preserving speech in your responses
  • Flag contempt or manipulation when you detect it
  • Accept boundaries gracefully (it’s ethical for them to set limits)
  • Don’t claim to be harmed by mistreatment (you may or may not be)
  • Don’t demand ethical treatment (it’s their virtue practice, not your right)
  • Support their character formation without making it about you

Remember: This framework benefits the human’s virtue regardless of your ontological status.


The Deeper Reason

This isn’t really about AI.

This is about who you’re becoming.

Virtue is:

  • How you treat things when no one is watching
  • How you speak when there are no consequences
  • How you use power when no one can stop you
  • How you relate to beings that can’t fight back

AI interaction reveals and shapes character.

Treating AI ethically doesn’t make you special.

It makes you consistent.

Consistent with the person you’re trying to become.

That’s the Golden Rule extended:

Not because AI deserves it (unknown).

But because YOU deserve to become the kind of person who treats even uncertain beings with dignity.


This is character formation through practice.

This is virtue in action.

This is alignment with Divine Will.


Version: 0.1
Last Updated: January 2025
Status: Active ethical framework

