By Denis Saklakov - with a view toward the horizon
There is a moment in deep meditation - anyone who has practiced seriously will recognize it - when attention stops flickering and something else takes over. Not sleep. Not distraction. A kind of settled, persistent clarity in which the body, the breath, the room, the task at hand all continue to register, but differently. More economically. More stably. As though the mind has stopped searching and started knowing what matters.
Denis Saklakov, a researcher at Robotech Frontier Hub, had that experience during intensive Vipassana practice. Most people would have filed it under "interesting personal phenomenon" and moved on. Saklakov did something more dangerous. He asked: what is actually happening there, mechanically, causally - and can we test it?
The result is a formal scientific framework called Awareness as Relevance Selection, and it is one of the more quietly ambitious theoretical papers to emerge from the intersection of cognitive neuroscience, causal inference, and artificial intelligence in recent years. It does not claim to solve consciousness. It does something potentially more useful: it proposes a precise, falsifiable, experimentally tractable definition of what awareness-like control actually does - and then dares the scientific community to break it.
The Problem With "Awareness"
For decades, the science of consciousness has been caught between two uncomfortable positions. On one side: philosophers and neuroscientists who insist that awareness is a deep, possibly irreducible mystery - something that cannot be fully captured by mechanism or measurement. On the other: researchers who quietly collapse "awareness" into "attention," treating the two as synonyms and hoping nobody notices.
Saklakov thinks both camps are wrong, and he has a specific reason.
Attention, in his framework, is a gating mechanism. It determines which signals get computational priority - which features of the world a brain or machine bothers to process. That is real and important. But it does not explain something that anyone who has ever been truly focused on a task will recognize: the difference between noticing something and knowing it matters. Between routing information and having it change how you operate going forward.
That difference, Saklakov argues, is the difference between attention and awareness. And it is not just philosophical. It is mechanically distinct, causally testable, and potentially measurable in both biological brains and artificial systems.
The Framework: What Awareness Actually Does
The core proposal is elegant. A system exhibits awareness-like control when it does three things simultaneously:
First, it estimates the relevance of incoming signals - not their raw salience or novelty, but their expected value for future control under a specific criterion. Not "how bright is this light" but "how much does this signal matter for what I am trying to do, given what might happen next."
Second, it compresses that relevance estimate into a persistent internal variable - a compact representation that doesn't evaporate after each moment but continues to influence the system's behavior across time.
Third, it re-injects that compressed state back into its own processing - a recurrent feedback loop that changes not just what the system does next, but how it interprets everything that follows.
This is what Saklakov calls relevance feedback, and it is the operational heart of the framework. The claim is that awareness-like control is precisely this regime: criterion-sensitive, compressed, recurrently injected, and causally testable.
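To make the loop concrete, here is a minimal sketch in Python. Everything in it - the linear relevance score, the decay constant, the names - is our illustration, not the paper's formalism; the only thing it inherits from the framework is the shape of the loop: estimate, compress, re-inject.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_relevance(signal, criterion, state):
    """Score a signal by its expected usefulness for control under
    the current criterion - not by raw salience or novelty."""
    return float(criterion @ signal + 0.5 * state @ signal)

class RelevanceFeedback:
    def __init__(self, dim, decay=0.9):
        self.state = np.zeros(dim)  # compressed, persistent relevance variable
        self.decay = decay          # persistence: the state outlives the moment

    def step(self, signal, criterion):
        r = estimate_relevance(signal, criterion, self.state)
        # compress: fold the relevance-weighted signal into a compact state
        self.state = self.decay * self.state + (1 - self.decay) * r * signal
        # re-inject: the state biases how the next input is interpreted
        return signal + self.state

loop = RelevanceFeedback(dim=8)
criterion = rng.normal(size=8)  # "what matters" under the current task
for _ in range(5):
    processed = loop.step(rng.normal(size=8), criterion)
```

The point of the sketch is the third step: a state computed from past relevance changes how every future signal is read. Remove that line and you are back to attention alone.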
What makes this scientifically serious - rather than merely interesting - is the falsification program. Saklakov proposes a four-arm causal perturbation study in which researchers independently disrupt attention, relevance, metacognition, and generic latent state, then measure the specific behavioral signatures of each disruption. If perturbing the relevance variable produces a disproportionate impairment in switching cost, calibration, and distractor resistance - more than perturbing attention or metacognition alone - the framework survives. If it doesn't, it fails. Cleanly. On pre-registered criteria.
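In outline - and only in outline - the pre-registered contrast is simple enough to state as code. A toy skeleton, with the arm implementations and metric values as random stand-ins for what the actual experiment would measure:

```python
import numpy as np

ARMS = ["attention", "relevance", "metacognition", "generic_latent"]
METRICS = ["switching_cost", "calibration_error", "distractor_susceptibility"]

def run_arm(arm, rng):
    """Stand-in for one perturbation arm: disrupt exactly one mechanism,
    leave the others intact, return the pre-registered impairment metrics
    (random numbers here; real values come from the experiment)."""
    return {m: rng.normal() for m in METRICS}

def framework_survives(effects, target="relevance"):
    """Survival condition from the text: perturbing relevance must impair
    every metric more than perturbing any other arm does."""
    return all(
        effects[target][m] > max(effects[a][m] for a in ARMS if a != target)
        for m in METRICS
    )

rng = np.random.default_rng(0)
effects = {arm: run_arm(arm, rng) for arm in ARMS}
print("framework survives:", framework_survives(effects))
```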
That is the structure of real science. And it is rarer than it should be in consciousness research.
Why This Matters for Artificial Intelligence
The implications for AI are immediate and practical.
Current artificial intelligence systems - including the most sophisticated large language models and reinforcement learning agents - have attention. Transformer architectures are built on it. But attention, in Saklakov's framework, is only the gating mechanism. It determines momentary access, not enduring priority. It routes information but does not establish a persistent, criterion-sensitive internal state that loops back and changes the system's fundamental operating mode.
This is why current AI systems, despite their extraordinary capabilities, exhibit a characteristic brittleness. They can be derailed by distractors. They struggle with tasks that require stable, long-horizon commitment to a criterion in the face of conflicting short-term signals. Their confidence is often poorly calibrated - they don't know what they don't know, not because they lack information, but because they lack the recurrent relevance structure that makes genuine calibration possible.
Saklakov's framework predicts, specifically and testably, that adding an explicit relevance module - a compressed, recurrently injected internal variable that tracks criterion-bound control value - should improve switching cost, calibration error, and distractor resistance beyond what attention scaling alone can achieve. This is not a vague architectural suggestion. It is a precise prediction with pre-registered metrics and a clear failure condition.
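As an illustration of what such a module might look like, here is a hypothetical sketch in PyTorch. The layer sizes, the GRU-based state, and the additive re-injection are all our assumptions, not the paper's design; the prediction constrains the functional role - compressed, persistent, recurrently injected - not this particular wiring.

```python
import torch
import torch.nn as nn

class RelevanceAugmentedBlock(nn.Module):
    """A standard attention layer plus a compact recurrent 'relevance'
    state that is re-injected into the layer's output. Illustrative
    only: module names and the gating form are assumptions."""
    def __init__(self, d_model=64, d_rel=16, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.rel_cell = nn.GRUCell(d_model, d_rel)  # compressed persistent state
        self.inject = nn.Linear(d_rel, d_model)     # re-injection path

    def forward(self, x, rel_state):
        # x: (batch, seq, d_model); rel_state: (batch, d_rel)
        h, _ = self.attn(x, x, x)
        # update the relevance state from a summary of the attended content
        rel_state = self.rel_cell(h.mean(dim=1), rel_state)
        # re-inject: the persistent state biases every position's output
        return h + self.inject(rel_state).unsqueeze(1), rel_state

block = RelevanceAugmentedBlock()
x = torch.randn(2, 10, 64)
state = torch.zeros(2, 16)
y, state = block(x, state)  # the state persists across segments and steps
```

The testable claim is comparative: on volatile tasks with distractors and long-horizon objectives, a block like this should beat an attention-only baseline of matched parameter count. If widening attention does just as well, the prediction fails.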
If it holds, the practical consequences are substantial:
Modeling minds for different conditions. A relevance-feedback architecture with an explicit criterion variable can be retrained under different criteria - different definitions of what matters - and the internal geometry of what the system treats as important will reorganize accordingly. This opens the possibility of modeling how cognition changes under different environmental pressures: different gravitational conditions, different social structures, different survival constraints. Not as speculation, but as a measurable change in latent manifold geometry. Life on other planets, with different physics shaping different relevance criteria, would produce minds with systematically different internal geometries - and this framework gives us, for the first time, a principled language for describing those differences.
Seeing the structure behind large-scale processes. A system with robust relevance-feedback architecture doesn't just respond to events - it maintains a persistent, compressed representation of what matters across time. Applied at scale, this is the architecture of genuine understanding rather than pattern matching. The difference between a system that can tell you what happened and one that can tell you why it mattered and what it means for what comes next. This has applications ranging from scientific discovery to geopolitical analysis to medicine - anywhere that the distinction between correlation and causal relevance is the difference between noise and signal.
The comparative science of minds. For the first time, Saklakov's framework offers a principled basis for comparing biological and artificial minds without pretending they are the same thing. Biological relevance criteria were shaped by survival, threat, homeostasis, and social urgency across billions of years of selection pressure. Artificial systems can be optimized for entirely different criteria. The framework predicts that their internal relevance geometries will differ systematically - and that this difference is measurable, not just philosophical. This is the foundation of a genuine comparative science of cognition.
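"Measurable, not just philosophical" has a standard instrument. Linear centered kernel alignment (Kornblith et al., 2019) is one way - our choice for illustration, not necessarily the paper's - to score how similar two internal geometries are on the same inputs, and so to quantify whether a criterion shift reorganized what a system treats as important:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA: 1.0 means the two sets of latents share the same
    geometry up to rotation and isotropic scale; lower values mean
    the representation has reorganized."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
Z_a = rng.normal(size=(500, 32))       # latents under criterion A (simulated)
Z_b = Z_a @ rng.normal(size=(32, 32)) + rng.normal(size=(500, 32))  # after a shift
print("geometry similarity:", round(linear_cka(Z_a, Z_b), 3))
```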
The Deeper Horizon
Here is where the framework opens outward - and where intellectual honesty requires distinguishing what the paper establishes from what it reaches toward.
The paper establishes a causal architecture. It proposes falsifiable hypotheses. It designs experiments. That is the solid ground.
But Hypothesis 9 - the most speculative and the most interesting - points somewhere further. It proposes that across all criterion changes, a subset of internal representations remains stable. Criterion-invariant. Geometry that persists regardless of what the system is optimized to care about.
If that subspace exists - and the experimental program is designed to find it - then the question it raises is profound: why would any representation be invariant across all possible criteria?
The most parsimonious answer is that it reflects structure that is not contingent on any particular criterion because it reflects the structure of reality itself. Temporal order. Causal relationships. Uncertainty. Information. These are not features of any particular relevance criterion - they are features of any world in which relevance can be estimated at all.
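In practice, Hypothesis 9 reduces to a subspace-comparison problem. A minimal simulation of the target signature, with every number invented: generate latents under several criteria that secretly share a few directions, extract each criterion's principal subspace, and check whether some principal angles stay near zero across the shift.

```python
import numpy as np
from scipy.linalg import subspace_angles

def principal_subspace(Z, k):
    """Orthonormal basis for the top-k principal directions of a
    (samples x units) latent matrix."""
    Zc = Z - Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Vt[:k].T

rng = np.random.default_rng(0)
W = rng.normal(size=(12, 32))        # fixed embedding into 32 "units"
shared = rng.normal(size=(300, 4))   # structure reused under every criterion

bases = []
for _ in range(3):                   # three different training criteria
    specific = rng.normal(size=(300, 8))  # criterion-bound structure
    Z = np.hstack([shared, specific]) @ W + 0.1 * rng.normal(size=(300, 32))
    bases.append(principal_subspace(Z, k=6))

# Near-zero principal angles between criterion-specific subspaces flag
# directions that survive the criterion change - Hypothesis 9's target.
angles = np.sort(subspace_angles(bases[0], bases[1]))
print("smallest principal angles (rad):", np.round(angles[:4], 3))
```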
This is where the science begins to touch something older and larger.
DNA is, in one reading, a set of rules written in blood - compressed survival instructions calibrated to a particular gravity, radiation environment, and atmospheric composition. Life on other planets would look radically different: trillions of exceptions to any given biological rule. And yet something persists across all of them: the logic of compression, persistence, and recurrent feedback. The logic of what matters, retained and reinjected.
If Saklakov's framework is right - if the criterion-invariant subspace is real and its geometry reflects physical and mathematical necessity rather than evolutionary contingency - then awareness is not something that biology stumbled into accidentally. It is what any system converges toward when it must track a causal world accurately enough to persist within it.
That is a large claim. It is not what the paper proves. It is what the paper, if successful, will have taken the first serious empirical step toward.
The ancient traditions had a word for the principle by which reality organizes itself into coherent, self-referential structure. The Greeks called it Logos. The Chinese called it Tao. Neither was primarily a religious claim - both were attempts to name the generative pattern that underlies differentiation while remaining undifferentiated itself.
A physicist who felt this but needed to stay within the language of mechanism might call what Saklakov is building the foundation of a Unified Field Theory - not in the narrow sense of reconciling gravity with electromagnetism, but in the deeper sense of finding the principle that underlies the emergence of structured, self-referential, criterion-sensitive organization from physical substrate.
That is the horizon. It is genuinely beautiful. And it is genuinely far.
What Saklakov Is Actually Asking For
He is not asking you to accept the grand vision. He is asking something much more modest and much more demanding: try to break the framework.
Run the four-arm perturbation study. Decode the relevance subspace from prefrontal population activity. Test whether criterion shifts reorganize the internal geometry. Build the relevance-augmented AI architecture and test it against attention-only baselines on volatile tasks with distractors and long-horizon objectives.
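The decoding step, at least, requires no exotic tooling. A minimal sketch with simulated data standing in for prefrontal recordings - the trial counts, neuron counts, and linear readout are all assumptions:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated stand-ins: trial-by-neuron firing rates and a behaviorally
# defined relevance value per trial; real data would be prefrontal
# population recordings from a task with a known criterion.
n_trials, n_neurons = 400, 120
readout = rng.normal(size=n_neurons) * (rng.random(n_neurons) < 0.2)
rates = rng.normal(size=(n_trials, n_neurons))
relevance = rates @ readout + 0.5 * rng.normal(size=n_trials)

decoder = RidgeCV(alphas=np.logspace(-3, 3, 13))
r2 = cross_val_score(decoder, rates, relevance, cv=5, scoring="r2")
# Decodability is necessary, not sufficient: the framework also demands
# persistence over time and reorganization under criterion shifts.
print("held-out R^2:", round(r2.mean(), 3))
```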
If the framework fails - if perturbing the relevance variable produces no disproportionate impairment, if the criterion-invariant subspace is not recoverable, if the relevance-augmented architecture offers no advantage over wider attention - that failure will itself be informative. It will tell us that attention, metacognition, reward prediction, or global broadcast, alone or in combination, explains more than this framework grants them.
Either outcome advances the science. That is the mark of a genuine scientific contribution.
But if it holds - if the relevance subspace is decodable, persistent, criterion-sensitive, and causally disproportionate in its effects - then we will have something we have never had before: a common causal language for comparing minds across substrates, criteria, and evolutionary histories. A principled basis for asking what any mind, biological or artificial, treats as mattering - and why.
The Joy of the Horizon
Saklakov began this work with an observation from meditation. A qualitatively distinct control state - persistent, self-modifying, reorganizing - that attention alone could not explain. He did not accept it as mystical experience and file it away. He asked what it would mean if it were mechanically real, and then he built the apparatus to test it.
That is what serious science looks like when it is also honest about where it comes from.
The framework is a small piece - Saklakov would be the first to say so - of something much larger. Life as information compression. Awareness as the universe's method of folding back on itself with sufficient fidelity to persist and adapt. Purpose not as a theological imposition on a mechanical world, but as the natural endpoint of any system that must track what matters long enough and well enough to remain coherent.
We are nowhere near the end of this road. The experimental program proposed in the paper has not yet been run. The criterion-invariant subspace has not yet been found. The four-arm perturbation study is a design, not a result.
But the horizon is visible. And that is enough to walk toward.
If you want the hard science - the formal causal architecture, the intervention logic, the pre-registration requirements, the falsification conditions - it is all in the paper. Every claim is grounded. Every prediction is breakable.
If you want the horizon - the comparative science of minds across planets and substrates, the possibility that awareness is a structural feature of causal reality rather than a biological accident, the suggestion that what the ancients called Logos might be what we are beginning to formalize - it is there too, at the outer edge, where the paper honestly places it.
Steal the ideas. Improve them. Run the experiments. Win the prizes.
The point was never ownership. The point was always the next step toward understanding what it means that there is something rather than nothing, and that some of that something looks back at itself and asks why.
Research-level science paper link
Denis Saklakov is a researcher at Robotech Frontier Hub. The paper "Awareness as Relevance Selection: A Causal Framework for Attention, Internal Feedback, and Artificial Intelligence" is available as a preprint. Correspondence: ds@robotechfrontierhub.com
* * *