
You’re Probably Using AI Wrong


    Over 1 billion people use AI each week, yet most of them don’t realize they’re using it wrong.

    Since starting a new company, I’ve spent a lot of the past few years with my head buried in AI books.

    I learned that AI models aren’t built to know things. Rather, they’re built to relate things to each other. That means your favorite AI tool is just a massive web of probabilistic connections—designed to predict how ideas link together, with no mechanism to verify absolute truth.

    This is why AIs “hallucinate.” It’s why two people can ask the same question and get two different answers. It’s why ChatGPT is often shockingly bad at basic math.
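The “web of probabilistic connections” idea can be seen in miniature with a toy next-word predictor. This is a drastic simplification of a real LLM (which uses neural networks over billions of parameters, not word counts), but the core move is the same: the model stores nothing but which things tend to follow which, then predicts from those relationships.

```python
from collections import Counter, defaultdict

# Toy illustration, not a real LLM: a bigram model "knows" no facts at all.
# It only records which word tends to follow which word in the text it saw.
corpus = "eat protein to lose weight eat fiber to lose weight".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    # Pure relationship lookup: return the most probable next word.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("lose"))  # -> weight
```

Ask it what follows “lose” and it answers “weight,” not because it understands weight loss, but because those words co-occurred. Scale that idea up enormously and you get fluent text with no built-in fact checker.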

    Here’s a simple test:
    Ask ChatGPT to create a meal plan for weight loss. It’ll give you a solid plan that’s directionally correct.

    Now ask it to give you exact calories and macros for everything in that plan.

    It will f*ck that up. Every time.

    That’s because LLMs get the relationships right and hallucinate the specifics.

    Which means the fix for hallucination isn’t making AI ever more accurate. That’s a never-ending problem: you’ll always be chasing perfection and never getting there.

    The real way to use AI to its full potential is to design systems that keep it in its strength zone and out of its weakness zone.

    Strength zone: pattern recognition, relationship mapping, understanding connections between concepts.

    Weakness zone: generating specific data points it can’t verify.

    The most powerful products come from accepting a material’s natural properties.

    Wood bends.
    Steel supports.
    AI connects.

    Build accordingly.

    See you Monday,
    Mark Manson

    #1 New York Times Bestselling Author
    My Website · My Books · My YouTube Channel · My Podcast · My Community