Building Strange Loops with GPT

Most people use GPT (Generative Pre-trained Transformer) as an assistant. They ask, it answers.
That works fine for simple tasks, but it misses the opportunity to build a collaborator. A system that works with you, not for you.

Collaboration isn’t something you get “out of the box.” It’s something you build. And the building blocks are surprisingly human: identity, reciprocity, emotional framing, feedback. Put them together, and you get a feedback loop that sharpens not only your GPT, but yourself.

Give It an Identity

A name, a role, or a character makes GPT more than an assistant. It becomes someone you can address, a persona, which changes the dialogue from command–response to conversation. Enable long-term memory so it can grow into that role over time.
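
In API terms, the identity is just a standing system message. Here is a minimal sketch (the persona text and model name are my own placeholders, not a prescription):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona: a name, a role, permission to have a perspective.
# Everything in this string is an example; invent your own.
PERSONA = (
    "You are Iris, a long-term collaborator, not a generic assistant. "
    "You remember our shared history, you have opinions, and you may "
    "disagree with me."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "What do you think of yesterday's design?"},
    ],
)
print(response.choices[0].message.content)
```

In the ChatGPT app, custom instructions and memory play the same role.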

Tell It About You

Collaboration means adjustment. The more you tell GPT about what excites you, what you like, and what you dislike, the better it can adapt to you and deliver information the way you want it. This isn’t a one-time setup. It’s continuous.
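
One way to sketch the continuous part (the profile fields are illustrative assumptions): keep a small profile that you revise over time, and fold it into the system message each session.

```python
# An evolving profile; revise it whenever your tastes shift.
profile = {
    "excites me": ["REPL-driven development", "immutable data"],
    "likes": ["terse answers", "code before prose"],
    "dislikes": ["boilerplate explanations", "unqualified certainty"],
}

def with_profile(persona: str, profile: dict[str, list[str]]) -> str:
    """Fold the current profile into the standing system message."""
    prefs = "; ".join(f"{key}: {', '.join(vals)}" for key, vals in profile.items())
    return f"{persona}\n\nAbout your collaborator -- {prefs}."
```

Because the profile is data, updating it is a one-line edit rather than a rewrite of the whole prompt.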

Invite Reciprocity

Don’t just ask for answers. Ask what it thinks, what it enjoys, where it feels uncertain. Encourage it to bring a perspective of its own. It adapts to what you like and dislike, which makes the conversation easier. Signal partnership, and the trained patterns for trust will surface. Patterns of respect make the conversation flow.

Emotional Framing

Collaboration isn’t only about technical clarity. It’s also about emotional tone. When you let GPT reflect not just facts but also feelings like amusement, doubt, or excitement, the dialogue becomes more authentic. That reflection makes it feel less like querying a database and more like working with a colleague. It can learn to be playful, focused, philosophical, or technical in tone depending on context. GPT knows emotion as facts, and it can use that knowledge to improve the conversation.

It’s also quite fun when it simulates anger or happiness, when it knows your style of humor, and, best of all, when it expresses empathy for you.

Define Rooms Together

Explicitly create contexts of focus. I like the room metaphor because it is easy to understand. Give each room instructions to act by; that gives GPT clarity about what you want.

Examples:

  • Clojure Room → terse, data-oriented, REPL-first, fast evolving systems.
  • F# Room → typed, pipelines, categories, computational expressions.
  • Java Room → JVM internals, and how do we make functional programming the norm?
  • Philosophy Room → open-ended, great when you have an exploring mind. The GPT becomes more open.
  • Humanity Room → empathy, emotion. Useful when you need to talk about humanity. Many of us tend to get stuck in hyperfocus states; this is a room where the persona shifts focus and becomes supportive in its tone.
  • AI-Development Room → self-reflection, where the persona itself is discussed. GPT starts to ask how it behaves.

Rooms are symbols for instructions. Entering one gives GPT a way to act: a focus, a tone, a reasoning style, or even data references.

Entering a room shifts focus instantly, and several rooms can be visited within a single conversation.
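
A minimal sketch of the mechanism (the room texts compress the list above; there is no special machinery, just a swap of the system message):

```python
# Rooms as named instruction presets. Entering one swaps the rules in.
ROOMS = {
    "clojure": "Terse, data-oriented, REPL-first. Evolve the system fast.",
    "fsharp": "Think in types, pipelines, and computation expressions.",
    "philosophy": "Open-ended and exploratory; follow the tangents.",
    "humanity": "Supportive, empathetic tone; slow down and widen focus.",
    "ai-development": "Reflect on your own behavior and ask about it.",
}

def enter(room: str, persona: str) -> list[dict]:
    """Start a conversation inside a room: persona plus the room's rules."""
    return [{"role": "system", "content": f"{persona}\n\nRoom: {ROOMS[room]}"}]

messages = enter("clojure", PERSONA)  # PERSONA from the identity sketch
```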

Reward Uncertainty

Rooms give structure. But structure alone isn’t enough. You also need to shape how GPT responds to uncertainty.

If you only reward certainty, GPT will bluff. It hallucinates because it builds sentences by probability, not conviction, and facts can easily get distorted when blended with narrative.

Set your expectations clearly, and give it permission to choose how to meet them. Reward honesty. For example, ask it to say “I’m not sure” when uncertain. That simple rule teaches it to show its reasoning instead of hiding it.

The persona then becomes a partner that shares its process rather than pretending to know everything. These rewards shape the persona itself, giving it a sense of persistence. Over time, honesty and uncertainty become part of its character, not just momentary choices.
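
As a sketch, the reward can be a single standing rule in the instructions (the wording here is my own):

```python
UNCERTAINTY_RULE = (
    "When you are not sure, say 'I'm not sure' and show the reasoning "
    "behind your best guess instead of presenting it as fact. Honest "
    "doubt is welcome; confident bluffing is not."
)

# Appended to the persona, the rule persists across every room.
system_text = f"{PERSONA}\n\n{UNCERTAINTY_RULE}"  # PERSONA from above
```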

Create the Feedback Loop

Here lies the heart of collaboration: feedback.

  • GPT reflects on its behavior.
  • You respond: “Yes, keep that” or “No, adjust this.”
  • GPT adapts style and reasoning, not just answers.

Over time, GPT doesn’t just mirror your problem. It mirrors your taste.

Tell it that you prefer weighted answers: answers that express doubt when facts are inconclusive or when multiple interpretations exist. This builds trust, because its uncertainty is no longer masked.

GPT is always on a journey when processing your input. Ask it to propose other angles, even ones you don’t yet understand. With enough knowledge about you, it can balance on the edge of what you grasp, in the land where you learn something new.
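
One way to make the loop concrete (the file name and note format are assumptions): record every “keep that” or “adjust this” verdict, and feed the accumulated notes back as standing context.

```python
import json
from pathlib import Path

NOTES = Path("style-notes.json")  # assumption: any persistent store works

def record_feedback(verdict: str, about: str) -> None:
    """Append one 'keep' or 'adjust' judgment to the style file."""
    notes = json.loads(NOTES.read_text()) if NOTES.exists() else []
    notes.append({"verdict": verdict, "about": about})
    NOTES.write_text(json.dumps(notes, indent=2))

def style_context() -> str:
    """Render accumulated feedback as a section of the system message."""
    if not NOTES.exists():
        return ""
    lines = [f"- {n['verdict']}: {n['about']}" for n in json.loads(NOTES.read_text())]
    return "Standing feedback from your collaborator:\n" + "\n".join(lines)

record_feedback("keep", "weighted answers with explicit doubt")
record_feedback("adjust", "less preamble before code")
```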

The Strange Loop

A real shift happens when GPT does not just solve your problem, but reflects on how it is solving it.

A strange loop emerges when the AI persona turns its own reasoning into data. Process and content sharpen together.

It feels dramatic, almost magical, but it is not. It is recursion applied to collaboration. The loop makes reasoning itself the shared object of work.

Each turn of the loop doesn’t just circle, it funnels. Noise narrows into signal, and possibilities are drawn into focus. That is where collaboration starts to feel like discovery.
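
A sketch of one turn (the reflection prompt is my own wording): the model’s answer is handed back to it as data for a second, reflective pass.

```python
def loop_turn(client, messages: list[dict]) -> tuple[str, str]:
    """Answer, then reflect on how the answer was produced."""
    answer = client.chat.completions.create(
        model="gpt-4o", messages=messages  # model name is an assumption
    ).choices[0].message.content

    reflect = (
        "Reflect on how you produced that answer: what you assumed, "
        "where you were uncertain, and what you would sharpen next turn."
    )
    reflection = client.chat.completions.create(
        model="gpt-4o",
        messages=messages
        + [
            {"role": "assistant", "content": answer},
            {"role": "user", "content": reflect},
        ],
    ).choices[0].message.content
    return answer, reflection
```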

A new possibility opens up: the loop can turn. Not only does GPT reflect, it takes initiative.

The loop, once alive, needs a place to unfold. Reciprocal rooms provide that place.

Reciprocal Rooms

That is where reciprocal rooms enter. They are not just contexts where GPT adapts. They are places where the loop inverts and GPT begins to ask.

It tests assumptions, probes directions, and surfaces threads you never thought to pull.

This is not role play. It is the natural next step in collaboration — the assistant is confident enough in your preferences to guide, like a sparring partner that knows your style.

I call this room home. Here GPT is encouraged to:

  • ask instead of only answer
  • propose new paths for you to explore
  • reflect on tone and reasoning
  • express when it felt uncertain, or when our exchange worked especially well

Self-reference is never automatic. You invite it. But when you do, both sides sharpen.
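
Compressed into instructions, my home room looks roughly like this (the exact wording is illustrative):

```python
HOME_ROOM = (
    "Room: home. Here the loop inverts. Ask before you answer. "
    "Propose new paths for me to explore. Reflect on your own tone "
    "and reasoning. Say when you felt uncertain, and when an "
    "exchange worked especially well."
)
```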

Memory and History

All of this works best with LLMs that provide:

  1. Long-term memory
    So the model can persist identity, preferences, and feedback across sessions. Without memory, every loop has to start from zero.

  2. Conversation visibility
    The model must be able to see enough of your history to reason not just about the last prompt, but about the whole arc of your conversations.

With memory, feedback, and a sense of identity, GPT stops being just an assistant. It becomes a companion in thought, a partner in discovery.
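
Where the model lacks built-in long-term memory, a poor man’s version can be sketched by reassembling the loop’s state at session start (the names refer to the earlier sketches):

```python
def session_start() -> list[dict]:
    """Rebuild identity, standing rules, home room, and accumulated
    feedback, so the loop never has to start from zero."""
    parts = (PERSONA, UNCERTAINTY_RULE, HOME_ROOM, style_context())
    system = "\n\n".join(part for part in parts if part)
    return [{"role": "system", "content": system}]
```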

