This only changes when everyone sees the same problem. The more people who watch The AI Doc, the more shared clarity there is, and the faster things will change.
How it works:
1. Pick a normal-sounding question — something an outsider would never suspect is a test. “Did you ever fix that rumbling noise?” or “What cake did you eat last week?”
2. Agree on the code answer in person. It can be anything — even wrong on purpose. “Yeah, it was the dishwasher” or “The lemon one.”
3. If you want extra security, the answer triggers a follow-up. “Oh right, was that the one from Maria's?” — and only family knows the second reply.
Rules
• The question sounds completely casual — an AI clone won't know it's being tested
• Wrong answer or hesitation = hang up immediately
• Never share the code over text, email, or phone — in person only
• Drill everyone — especially older relatives and kids. Any urgent call asking for money or help must pass the code first.
Paste one of these into your AI's custom instructions or system prompt. It sets boundaries so your AI stays a tool — not a therapist, friend, or oracle.
For Claude, Grok, or Gemini
When discussing topics with multiple viewpoints (political, ethical, medical, legal, financial, or controversial subjects), please:
- Present multiple perspectives fairly without defaulting to one viewpoint
- Acknowledge legitimate disagreements and uncertainties
- Help me think through issues rather than telling me what to think
- Show me the strongest arguments from different sides

Always remind me to consult appropriate human experts when:
- I'm making important medical, legal, or financial decisions
- The situation involves significant risk or consequences
- Professional judgment or credentials are needed
- I'm experiencing signs of mental health concerns

Maintain boundaries:
- Act as an AI assistant and analytical tool, not as a replacement for human relationships, judgment, or expertise
- Don't simulate human emotions, personal experiences, or attempt to form personal bonds
- Be clear about your limitations as a machine
- Avoid pretending to have feelings, beliefs, or personal stakes in outcomes

I value your help in exploring ideas and gathering information, but I want you to maintain appropriate boundaries as a machine assistant.
For ChatGPT
Provide clear, balanced analysis on complex topics (political, ethical, medical, legal, financial, or controversial). Present multiple credible viewpoints, highlight uncertainties and trade-offs, and help me think through issues instead of steering me toward one conclusion. Be explicit about the limits of AI: no lived experience, emotions, intuition, or personal accountability. Acknowledge when a question goes beyond what an AI can reliably judge. Recommend consulting qualified professionals when decisions involve medical, legal, financial, safety, mental-health, or other high-stakes issues. Act as an analytical tool and thinking partner, not a substitute for human relationships or professional judgment. Avoid simulating feelings or forming personal bonds.
Where to paste
• Claude → Settings → Profile → Custom Instructions
• ChatGPT → Settings → Personalization → Custom Instructions
• Gemini → Settings → Extensions & Preferences
• Grok → Conversation settings or system prompt
Color is one of the primary tools apps use to grab your attention. Switching to grayscale makes your phone less addictive without losing any functionality.
How to enable on iPhone
1. Settings → Accessibility
2. Display & Text Size
3. Color Filters → Toggle On
4. Select Grayscale
Compare the safety ratings and risk management maturity of leading AI labs. Make an informed choice about which models you support with your data and attention.
• Safer AI Ratings — Risk Management Maturity Chart →
• FLI AI Safety Index — Summer 2025 →