Ultrametric AI

Token-level explainable attention via p-adic valuation trees and Spencer-Brown's Laws of Form. Interactive proof of concept.

zero learned parameters · fully auditable · browser-based

Enter a sentence. Each word is encoded as the product of its semantic primes (good=2, bad=3, not=5, very=7, but=11). The ultrametric distance between two words is the maximum, over these primes, of the absolute difference of their p-adic valuations. Attention = e^(-distance/T): purely geometric, fully auditable.
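A minimal TypeScript sketch of this pipeline, assuming a hypothetical word-to-prime decomposition and illustrative function names (encode, distance, attention); the demo's actual source may differ:

```typescript
// Semantic prime table from the description above.
const PRIMES = { good: 2, bad: 3, not: 5, very: 7, but: 11 } as const;

// Encode a word as the product of its semantic-prime components.
// The decomposition is supplied by the caller here; e.g. a hypothetical
// lexicon might map "awful" to ["very", "bad"].
function encode(components: (keyof typeof PRIMES)[]): number {
  return components.reduce((n, c) => n * PRIMES[c], 1);
}

// p-adic valuation v_p(n): the exponent of p in the factorization of n.
function vp(n: number, p: number): number {
  let v = 0;
  while (n > 1 && n % p === 0) { n /= p; v++; }
  return v;
}

// Ultrametric distance: the maximum, over the semantic primes, of the
// absolute difference of the two words' p-adic valuations.
function distance(a: number, b: number): number {
  return Math.max(...Object.values(PRIMES).map(p => Math.abs(vp(a, p) - vp(b, p))));
}

// Attention weight: e^(-distance / T), with temperature T defaulting to 1.
function attention(a: number, b: number, T = 1): number {
  return Math.exp(-distance(a, b) / T);
}

// Hypothetical decompositions for two words:
const awful = encode(["very", "bad"]);   // 7 * 3 = 21
const great = encode(["very", "good"]);  // 7 * 2 = 14
console.log(distance(awful, great));     // 1 (they differ in the 2- and 3-adic coordinates)
console.log(attention(awful, great));    // e^-1 ≈ 0.368
```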

Temperature: 1

Spencer-Brown's Laws of Form begins with "Draw a distinction." Marks (#) and enclosures ([ ]) are the primitives. Two rules — Calling (##→#) and Crossing ([[A]]→A) — are the complete engine. Below, build expressions and see reduction in action.

#          Mark        "Something is here"
[ ]        Enclosure   Creates inside/outside
##→#       Calling     Redundancy condenses
[[A]]→A    Crossing    Boundaries cancel
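As a sketch of how the Calling and Crossing rules could be mechanized over the #/[ ] notation above (the string-rewriting approach and function names are illustrative, not the demo's implementation):

```typescript
// Find the index of the "]" matching the "[" at position i, or -1 if unbalanced.
function matchBracket(expr: string, i: number): number {
  let depth = 0;
  for (let j = i; j < expr.length; j++) {
    if (expr[j] === "[") depth++;
    if (expr[j] === "]") { depth--; if (depth === 0) return j; }
  }
  return -1;
}

// Apply one rewrite if possible: Calling (## -> #), then Crossing ([[A]] -> A).
function reduceOnce(expr: string): string {
  if (expr.includes("##")) return expr.replace("##", "#"); // Calling: redundancy condenses
  for (let i = 0; i + 1 < expr.length; i++) {
    if (expr[i] === "[" && expr[i + 1] === "[") {
      const inner = matchBracket(expr, i + 1);
      if (inner !== -1 && expr[inner + 1] === "]") {
        // Crossing: strip the doubled boundary, keep the enclosed content A.
        return expr.slice(0, i) + expr.slice(i + 2, inner) + expr.slice(inner + 2);
      }
    }
  }
  return expr; // neither rule applies
}

// Reduce to a fixed point.
function reduce(expr: string): string {
  let prev = expr, next = reduceOnce(expr);
  while (next !== prev) { prev = next; next = reduceOnce(prev); }
  return next;
}

console.log(reduce("##"));      // "#"  (Calling)
console.log(reduce("[[#]]"));   // "#"  (Crossing)
console.log(reduce("[[##]]"));  // "#"  (Calling inside, then Crossing)
```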


Sentence → Distinction Encoding

The cocycle cognitive architecture hypothesis: neural representations maintain consistency via cocycle conditions on the Bruhat-Tits tree. The strong triangle inequality must hold: d(a,b) ≤ max(d(a,c), d(b,c)) for any three concepts a, b, c.

Triangle Cocycle Check


Bulk Consistency Audit
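A sketch of what such an audit could look like, reusing the distance function from the encoding sketch above; the loop over all triples is illustrative:

```typescript
// Verify the strong triangle inequality d(a,b) <= max(d(a,c), d(b,c))
// for every triple drawn from a set of encoded concepts.
// Reuses `distance` from the encoding sketch above.
function ultrametricAudit(codes: number[]): boolean {
  for (const a of codes)
    for (const b of codes)
      for (const c of codes)
        if (distance(a, b) > Math.max(distance(a, c), distance(b, c))) return false;
  return true;
}

// Example: audit the five semantic primes themselves (each word = a single prime).
console.log(ultrametricAudit(Object.values(PRIMES))); // true
```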

In the Syntactic Token Calculus, particles are stable patterns — expressions that cannot be reduced by Calling or Crossing. Each stable pattern is a particle candidate.
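Under the same string-rewriting sketch, a stability test is just a fixed-point check with reduceOnce (the helper name isStablePattern is illustrative):

```typescript
// A particle candidate: an expression that neither Calling nor Crossing can rewrite.
// Reuses reduceOnce from the reducer sketch above.
const isStablePattern = (expr: string): boolean => reduceOnce(expr) === expr;

console.log(isStablePattern("#"));     // true:  the bare mark is stable
console.log(isStablePattern("[#]"));   // true:  a single enclosure of a mark is stable
console.log(isStablePattern("[[#]]")); // false: Crossing applies
```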

Standard Model Particles as Syntactic Patterns


Semantic Primes as Patterns

These five primes (good=2, bad=3, not=5, very=7, but=11) form the basis of the ultrametric attention encoding. Each word's prime product determines its position on the Bruhat-Tits product tree. Words with identical prime assignments have zero ultrametric distance — they are syntactically identical in this encoding.
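A worked check of that last claim, reusing the functions from the encoding sketch; the word decompositions below are hypothetical, not part of the demo's lexicon:

```typescript
// Two words given the same semantic-prime decomposition encode to the same
// product, so their ultrametric distance is 0 and their attention weight is
// exp(0) = 1. Both decompositions here are hypothetical.
const terrible = encode(["very", "bad"]); // 7 * 3 = 21
const dreadful = encode(["very", "bad"]); // 7 * 3 = 21

console.log(distance(terrible, dreadful));  // 0
console.log(attention(terrible, dreadful)); // 1
```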