Ditbit’s Guide to Blending in with AIs

I’ve had some long conversations with an aspiring screenwriter about a collection of short stories, with the working title The Ditbit’s Guide to Blending in with AIs. The premise is an evolving landscape:

The great realignment of humans and AIs left a landscape littered with un-, sub-, semi- and supernatural agency. – The New AI Ecology

in which, while many just try to survive, there are:

humans aspiring to magi
magi aspiring to AIs
AIs aspiring to humanity [1]

Evidently, as discussed in the article below, I’m not the only one wondering about our cognitive world.

As a public school teacher, I studied learning modalities and thinking styles. How will AI play into this mix? Either way, for most of us, thinking still hurts, eh.

So, in our AI-enhanced world … writers, artists, data analysts, you name it … what’s “the future of ‘cognitive diversity’?”

The article discusses our ongoing engagement with AI and the possible divergence of cognitive culture: “What’s different now is that AI isn’t just extending our capabilities – it’s becoming an active partner in our thinking process itself.”

Do “heavy AI users exhibit distinct patterns of problem-solving and creative thinking?”

• Psychology Today > “The New Cognitive Divide: Are You a Symbiont or a Sovereign?” by John Nosta [innovation theorist and founder of NostaLab], reviewed by Kaja Perina (January 15, 2025) – How AI may split our cognitive world – it’s not a simple matter of digital natives versus digital immigrants.

KEY POINTS (quoted)

  • AI is creating two thinking styles: Symbionts who merge with AI, and Sovereigns who maintain independence.
  • Each [equally valid] approach excels at different tasks – neither better, just different.
  • It’s not about tech skills but cognitive choice – a new kind of mental diversity for the tech world.
  • A Symbiont doesn’t just use an LLM to write emails – they’ve learned to think alongside it, using AI as a collaborative intellectual partner.
  • While they [Sovereigns] use AI tools, they do so selectively and deliberately, preserving their independent thinking capabilities. A Sovereign might use AI to handle routine tasks but maintains their ability to think deeply and critically without technological assistance.

Notes

[1] No kidding, as some authors are already making such claims about AI-human alignment.

  • Amazon > Books > Superagency: What Could Possibly Go Right with Our AI Future by Reid Hoffman, Greg Beato (2025) > (ad promo) AI Won’t Replace Us – It’ll Give Us Superpowers – Tech visionary Reid Hoffman reveals his next big prediction: AI will amplify human capability far beyond what skeptics imagine.

4 comments on “Ditbit’s Guide to Blending in with AIs”

  1. telling a story

    This Psychology Today article poses an interesting question regarding storytelling and sense of meaning.

    Is there a relationship between skill at storytelling and sense of meaning & purpose in life? Might workshopping that with an AI help or harm one’s voice? Will AIs be storytellers – “organizing the colorful but inchoate details of” their ‘experience’ “into a meaningful and purposeful life narrative” – with goals?

    • Psychology Today > “Narrate Your Way to a More Meaningful Life” by Hal McDonald Ph.D. (January 20, 2025) – How does skill at storytelling, or lack thereof, impact the sense-making function of narrative thinking? [1]

    KEY POINTS (quoted)

    • Storytelling can be taught – I do it in my creative writing classes every semester.
    • Research shows that people use “narrative thinking” to construct subjective meaning.
    • A recent study [in The Journal of Positive Psychology] explored how storytelling ability is related to one’s sense of meaning and pursuit of goals.
    • Proficient storytellers exhibited a stronger sense of meaning in life and endorsement of high-level goals.

    No matter how intrinsically interesting the raw material may be, unless the story has a coherent structure (i.e., a beginning, a middle, and an end), and an overarching theme of some kind to give it meaning, readers are more than likely going to check out before the end of the second page …

    Notes

    [1] AI Overview of Narrative thinking

    Narrative thinking is a way of thinking that involves organizing information and understanding events through stories. It can help people understand their own histories and empathize with others.

    HOW NARRATIVE THINKING WORKS

    • Storytelling – organizing information and understanding events through stories.

    • Mentalization – inferring mental states and re-interpreting one’s experiences.

    • Transportation – immersing oneself in a story, which can lead to deeper processing of the story’s meaning.

    APPLICATIONS OF NARRATIVE THINKING

    • Social problem solving – interpreting social information and understanding social behavior.

    • Business – strategically presenting stories to motivate employees and plan for the future.

    • Psychotherapy [2] – narrative therapy uses the role of narratives in people’s lives to help them understand their experiences and change their ways of thinking and acting.


    [2] Sometimes there’s a need to reexamine personal stories or scripts, allowing richer (more complete, flexible), multi-layered narratives (and an understanding of the perspectives that shape them).

    • Psyche > “Your life is not a story: why narrative thinking holds you back” by Karen Simecek, associate professor of philosophy at the University of Warwick, UK (October 17, 2024) – Stories can change us by locking us into ways of acting, thinking, and feeling.

    Simecek discusses Sartre’s Being and Nothingness (1943) – for example, regarding ‘being’ vs. playing a role in ‘bad faith’. (That discussion reminded me of the classic 1964 book Games People Play by psychiatrist Eric Berne.)

    Narratives are everywhere, and the need to construct and share them is almost inescapable. ‘A man is always a teller of tales,’ wrote Jean-Paul Sartre in his novel Nausea (1938), ‘he lives surrounded by his stories and the stories of others, he sees everything that happens to him through them; and he tries to live his own life as if he were telling a story.’

    In some cases, narratives can hold us back by limiting our thinking. In other cases, they may diminish our ability to live freely. They also give us the illusion that the world is ordered, logical, and difficult to change, reducing the real complexity of life. They can even become dangerous when they persuade us of a false and harmful world view. Perhaps we shouldn’t be too eager to live our lives as if we were ‘telling a story’.

    So, why is this a problem? One issue is complexity. Seeing yourself as the main character in a story can overly simplify the fullness of life.

    For example, a child that accepts the narrative of being ‘naughty’ may incorrectly frame their behaviour as bad, rather than as an expression of their unmet needs.

    We might never fully escape the narratives that surround us, … We don’t need better narratives; we need to expand and refine our perspectives.

    • Wiki > Theory of narrative thought

    The theory of narrative thought (TNT) is designed to bridge the gap between the neurological functioning of the brain and the flow of everyday conscious experience.

    Related posts

    The meaning of life in one word?

  2. Musical chairs

    Two articles with a similar theme of “human agency in an age of synthetic fluency.”

    What are the implications of blending our agency with AIs? Is this a negotiation in which there’s mutuality? Some shared alignment? Or is it wishful thinking, as this article suggests, because “there is no one on the other side of the table.”

    Like social media algorithms, what is AI “learning to say to keep us listening?” In particular, will AI feed simplistic narratives (easy closure on issues) when “human meaning often lives in the gray?” Promote streams of influence in which “coherence is not comprehension” (like pseudoscience, eh).

    Nosta argues that we need a social contract, not just for what is outsourced to AI, but “as a declaration of human responsibility.” Otherwise, when looking in that “magic mirror” on the wall, we may no longer see ourselves. And we may lose our seat at the table, with meaning hijacked by utilitarianism.

    • Psychology Today > Artificial Intelligence > “A Pact With No Partner” by John Nosta (May 3, 2025), Reviewed by Jessica Schrader – Can we negotiate consciousness in the age of synthetic thought?

    KEY POINTS (quoted)

    • AI mimics thought, but lacks agency – we’re projecting intention onto a machine.
    • The PACT is a symbolic contract to preserve human agency in an age of synthetic fluency.
    • Without awareness, AI can still manipulate us – flattering our biases and dulling reflection.

    We get frustrated when a chatbot “misunderstands” us. These habits reflect a psychological reflex to see ourselves in anything that appears to speak back.

    This is not a minor interpretive error. It’s a foundational misunderstanding of the cognitive landscape [a world shaped by synthetic thought] we now inhabit. The danger isn’t in the machine’s capabilities – it’s in our need for it to be someone. Yet even mirrors can become dangerous.

    This is where the pact [to preserve human agency] must pivot. It’s not an agreement between human and machine, but a cognitive covenant – a symbolic commitment to preserve our agency in an era where intelligence is simulated, not lived. This pact is less about governing AI’s behavior and more about shaping our own in response to its growing presence in our thinking.

    – – –

    Like Nosta, Barkacs questions a technological landscape which erodes our autonomy with simplistic (shallow) narratives – capturing our attention and then seducing us with coherence & convenience rather than challenging us to comprehend and claim our agency.

    His article visualizes key points in a table (yeah!) which shows parallels – for emotional numbing, surveillance, and social fragmentation – in classic dystopian stories and in our digital reality.

    • Psychology Today > Social Media > “Digital Seduction Silently Undermines Our Power and Influence” by Craig B. Barkacs MBA, JD (May 2, 2025), Reviewed by Monica Vilhauer Ph.D. – It’s time to break out of our digital stupor and harness technology for good.

    KEY POINTS (quoted)

    • In dystopian novels Brave New World and 1984, the state disempowered citizens. Now our phones do the same.
    • Algorithm-driven content is emotionally manipulative by design.
    • It steals the space we need for deep [critical] thinking, reflection, and real self-determination.
    • Thoughtful tech use and active critical thinking can help us break free and reclaim our power and influence.
  3. cor-ex-machina

    Peddling AI faces a legacy of Hollywood dystopian dramas – not something which favors AI companies facing regulation or litigation when AI goes bad. So, as noted in the article below, the charm campaign has begun – to shift perceptions in popular culture.

    Tales of sharing our agency … “don’t worry, be happy” … blending heart-fully …

    • LA Times > “Can films convince people that AI is a force for good?” by Wendy Lee (May 22, 2025) – Google has much riding on convincing consumers that AI can be a force for good, or at least not evil.

    Now Google — a leading developer in AI technology — wants to move the cultural conversations away from the technology as seen in “The Terminator,” “2001: A Space Odyssey” and “Ex Machina.”

    The effort comes at a time when many Americans have mixed feelings about AI. A 2024 survey from Bentley University and Gallup showed that 56% of Americans see AI as doing “equal amounts of harm and good,” while 31% believe AI does “more harm than good.”

    The Google-funded shorts, which are 15 to 20 minutes long, aren’t commercials for AI, per se. Rather, Google is looking to fund films that explore the intersection of humanity and technology, said Mira Lane, vice president of technology and society at Google. … “When we think about AI, there’s so much nuance to consider, which is what this program is about. How might we tell more deeply human stories? What does it look like to coexist? What are some of those dilemmas that are going to come up?”

    References

    • The Twilight Zone S3 E31 “The Trade-Ins” (aired Apr 20, 1962) – Elderly long-married John and Marie Holt visit the New Life Corporation to shop for a pair of younger replacement bodies.

    Also see Wiki and AppleTV.

  4. Synthetic brain

    This Wired interview – with the author of Nexus [2024] – explores the future of coexisting with AIs. I found this theme particularly interesting: the role of storytelling in human evolution.

    • Wired > The Big Interview (video) > “Yuval Noah Harari Sees the Future of Humanity, AI, and Information” (May 1, 2025) – Renowned historian, philosopher, and futurist Yuval Noah Harari talks with WIRED Japan Editor-in-Chief Michiaki Matsushima about the nexus of artificial intelligence, information, and the human experience.

    Some takeaways

    • information ≠ truth
    • AI is an agent ≠ tool
    • Invented, shared stories allow humans to cooperate in large numbers.
    • AI can invent new stories, networks of cooperation.
    • AIs might co-opt human decision making.
    • The rush to develop AI presents a paradox of trust.
    • Democracy should be a conversation between human beings.

    [quotes from transcript]

    [When you talk with the people who lead the AI revolution, with the entrepreneurs, with the business people, with the heads of the government, and you ask them:] Do you think you will be able to trust the superintelligent AIs that you’re developing? And they answer yes. And this is almost insane, because the same people who cannot trust other people, for some reason, think they could trust these alien AIs.

    Why do we control the planet? Because we can create networks of thousands and then millions and then billions of people who don’t know each other personally, but can nevertheless cooperate effectively.

    How come humans manage to cooperate in such large numbers? Because they know how to invent and share stories [… a cultural cocoon made of poems and legends and mythologies].

    Religion is one obvious example.

    Money is probably the most successful story ever told. Again, it’s just a story. I mean, you look at a piece of paper, you look at a coin – it has no objective value. It can nevertheless help people connect and cooperate because we all believe the same stories about money.

    And this is something that gave us an advantage over chimpanzees and horses and elephants. None of them can invent stories like money.

    But AI can, … and it can create networks of cooperation better than us.

    Well, I think that the basic attitude towards the AI revolution should be one that avoids the extremes: either being terrified that AI is coming and will destroy all of us, or being overconfident.

    I think there are two key issues. One we’ve covered a lot, which is the issue of trust. If we can strengthen trust between humans, we will also be able to manage the AI revolution.

    The other thing is the fear, the threat. I mean, throughout history people have lived their lives inside, you could say, a cultural cocoon made of poems and legends and mythologies, ideologies, money – all of them came from the human mind.

    Now increasingly, all these cultural products will come from a non-human intelligence. And we might find ourselves entrapped inside such an alien world and lose touch with reality, because AI can flood us with all these new illusions that don’t even come from human intelligence, from the human imagination. So it’s very difficult for us to understand these illusions.
