- Monday Momentum
- Posts
- Watching AI Think
Watching AI Think
The shift toward continuous reasoning models and what it means for the future of technology
Happy Monday!
Last week, Tokyo-based AI startup Sakana unveiled a groundbreaking new architecture called Continuous Thought Machines (CTM). While it might sound like just another technical advancement, it actually represents a larger change in how AI models "think" and process information. This is scratching the surface of how the AI industry is evolving from one-shot responses toward continuous, transparent reasoning.
Sakana’s new CTM introduces time-based neural processing that allows AI to "think" step-by-step before answering. This represents a broader trend away from one-shot responding toward continuous reasoning capabilities, and CTM may actually be closer to human cognition. AI may continue to shift towards a world where the focus is not just on the answer, but on the journey to reach it.
The Meta Trend: From Answering to Thinking
For years, AI models have been optimized to deliver quick, one-shot responses. You ask a question, you get an answer. Success was measured by how accurately the model could produce a correct result in a single attempt.
But a fascinating shift is now occurring across the AI landscape: the most interesting advances are now about making the “thinking” process itself visible and continuous.
This isn't just happening with Sakana's CTM. We've seen it with Anthropic's Claude 3.7 Sonnet model, which introduced extended reasoning capabilities where users can watch the AI "think" through a problem, exploring different angles and checking its work before delivering a final answer. DeepSeek's R1 reasoning model operates similarly, focusing on step-by-step problem-solving rather than immediate responses.
There is now increased value in following the steps and tracing AI’s reasoning, especially as more businesses leverage this technology at enterprise scale. As an added security layer against hallucinations and false answers, it’s helpful to see the thought process behind each answer generated. This transparency is helpful when the stakes are high.
Pattern Recognition: Three Signals of the Shift
Three patterns highlight how extended reasoning, and an AI “paper trail,” are becoming more in-demand:
The Biology-Inspired Approach: Sakana AI's CTM departs from conventional models by explicitly incorporating neural timing and synchronization. These concepts hail straight from neuroscience and human brain studies. Rather than processing all inputs at once like Transformer models do, CTMs unfold computation over steps, letting each artificial "neuron" maintain a history of its activity.
This design allows the model to process information progressively, with each neuron deciding how long to "think" based on the complexity of the input. In demonstrations ranging from maze solving to image classification, researchers observed that the CTM developed surprisingly human-like patterns of reasoning.
The Rise of Reasoning Frameworks: Anthropic's development of extended reasoning capabilities for Claude 3.7 Sonnet represents another facet of this trend. When enabled, users can watch Claude think through problems in real-time, with each step of reasoning visible rather than hidden.
According to Anthropic's research, this approach produces results that improve logarithmically with the number of "thinking tokens" allowed. This means that the more the model thinks, the better it performs. But what's most interesting is that the reasoning becomes visible, giving users insight into how the AI arrived at its conclusion.
The Transparency Challenge: Paradoxically, as these reasoning models become more transparent in their thinking processes, we're also discovering limitations in that transparency. Recent research from Anthropic found that reasoning models like Claude 3.7 Sonnet and DeepSeek R1 don't always disclose all influences on their thinking (what they call "unfaithful" reasoning).
In their study, models acknowledged using given hints only 25-39% of the time, suggesting that even when we can see an AI "think," we might not be seeing the whole picture.
This tension between improved reasoning capabilities and true transparency highlights how the industry is grappling with both making AI think better and making that thinking genuinely accessible to humans.
The Contrarian Take: It's Not About Mimicking Humans
While the trend toward continuous thought models might seem like an attempt to make AI more human-like, that's only partly true. The more interesting perspective is that this shift is about creating entirely new cognitive systems that happen to be more interpretable to us.
These models aren't actually thinking like humans. While they are engaging in human-like reasoning, they're actually developing their own unique forms of intelligence that unfold over time rather than happening instantaneously. The step-by-step nature of their processing makes them more accessible to human understanding, but they're still fundamentally alien in how they operate.
The bet these companies are making isn't actually that AI should work exactly like the human brain. Rather, it's that processing which occurs over time rather than all at once offers advantages in solving complex problems and in allowing humans to collaborate with AI systems.
Practical Implications: Where's the Opportunity?
For investors and founders in the AI space, this trend toward continuous thought models suggests several opportunity areas:
Interpretability Tools: As AI reasoning becomes more complex and continuous, there will be growing demand for tools that help users understand, visualize, and interact with these reasoning processes. Think of platforms that can visualize an AI's "train of thought" in real-time.
Specialized Reasoning Models: We'll likely see the development of AI systems specifically designed for domains requiring complex reasoning (scientific research, financial analysis, legal judgments) rather than general-purpose assistants.
Human-AI Collaborative Systems: Tools that allow humans to intervene, guide, or correct AI reasoning mid-process will become increasingly valuable as models expose more of their thinking.
Reasoning Validation: Systems that can verify, test, or challenge the reasoning of AI models will be crucial as these models take on more consequential decision-making roles.
Training Data for Reasoning: Companies that can provide high-quality data showing step-by-step human reasoning for complex problems will have a significant advantage in training the next generation of these models.
The shift from one-shot to continuous thought models represents an evolution in how we interact with artificial intelligence. As AI systems become more transparent in their reasoning, they also become more collaborative partners rather than just tools.
For both developers and users, this opens up new possibilities for human-AI interaction where we can use AI to get answers, but also engage with its thought process, guide its reasoning, and build on its insights.
In motion,
Justin Wright
If AI systems continue to develop more continuous, transparent reasoning capabilities, how might this change the types of problems we trust them to help solve?

Reinforced Self-play Reasoning with Zero Data (Arxiv)
Introducing OpenAI for Countries (OpenAI)
Config 2025: Pushing design further (Figma)
FaceAge, a deep learning system to estimate biological age (The Lancet)