Testing the Framework

Predictions, experiments, and what would count as falsification

Guide
A framework that cannot be tested is not physics. What does observer-centrism predict that other frameworks do not?

Questions this page addresses

  • What testable predictions does the framework make?
  • How does holographic noise differ from other Planck-scale predictions?
  • What does the framework predict about dark matter?
  • What should we not find, if the framework is right?
  • What would count as falsification?

The preceding chapters traced a derivation chain from three axioms through spacetime, quantum mechanics, the particle spectrum, and holography. The logical coherence of that chain matters — but it is not enough. A framework that only recovers what is already known, however elegantly, is philosophy. Physics requires predictions that can be tested and potentially falsified.

Observer-Centrism makes ten predictions. Two are quantitative, with specific numbers attached. Eight are qualitative, mostly “negative” predictions — things that should not happen if the framework is correct. Several are already consistent with existing data. Others await future experiments.

The Headline: Holographic Noise

The framework’s most distinctive prediction concerns the texture of spacetime at the smallest scales.

If spacetime is ultimately a discrete network of relational invariants — as the framework derives — then the continuum approximation introduces irreducible position uncertainty at the Planck scale. This is holographic noise: a fundamental fuzziness in the positions of objects, arising not from measurement limitations but from the discrete structure of spacetime itself.

Other approaches to quantum gravity predict similar noise. What distinguishes the framework’s prediction is its angular structure. The noise is not isotropic — it has a specific directional pattern. When two interferometers are placed at the same location but rotated relative to each other by an angle β, their cross-correlated noise signal follows Γ(β) = cos β.

This angular dependence is derived, not assumed. It arises from the causal structure of the coherence dependency graph — noise correlations are stronger along shared null (light-cone) directions than along spacelike directions.

The Fermilab Holometer experiment has already tested one version of Planck-scale noise, ruling out an isotropic model proposed by Craig Hogan. The framework’s prediction survives this constraint because its noise is not isotropic — it has the specific angular structure that the Holometer’s perpendicular configuration is insensitive to.

The critical test: rotate two co-located interferometers and measure how their correlated noise varies with angle. The framework predicts a clean cosine dependence. A confirming result at the predicted amplitude would be strong evidence. A null result at sufficient sensitivity would be genuine falsification.
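A toy numerical sketch can make the geometry of this test concrete. The model below is an illustration, not the framework’s actual noise model: it treats the shared noise as a random displacement vector and each interferometer’s signal as the projection of that vector onto its arm direction. The cross-correlation of the two projections then follows cos β exactly, and vanishes in a perpendicular (β = 90°) configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_vs_angle(beta, n_samples=200_000):
    """Normalized cross-correlation of two co-located interferometers whose
    signals are projections of a shared random displacement onto arm
    directions separated by angle beta (toy model, for illustration only)."""
    noise = rng.standard_normal((n_samples, 2))   # shared isotropic displacement
    u1 = np.array([1.0, 0.0])                     # arm direction of detector 1
    u2 = np.array([np.cos(beta), np.sin(beta)])   # detector 2, rotated by beta
    s1 = noise @ u1
    s2 = noise @ u2
    return float(np.mean(s1 * s2) / np.mean(s1 * s1))

for beta_deg in (0, 45, 90):
    b = np.radians(beta_deg)
    print(f"beta = {beta_deg:>2} deg: measured {correlation_vs_angle(b):+.2f}, "
          f"predicted cos(beta) = {np.cos(b):+.2f}")
```

The β = 90° case shows why a purely perpendicular configuration is blind to this kind of correlation, consistent with the Holometer constraint discussed above.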

Dark Matter Granularity

If dark matter consists of stable observer loops — as the framework requires of any persistent structure — those loops have a characteristic closure scale. This sets a minimum granularity: a minimum dark matter halo mass below which structures do not form, determined by quantum pressure from loop closure.

The framework predicts a specific scaling for this minimum mass: M_J ∝ m_DM^(-3/2), where m_DM is the dark matter particle mass. This differs from the scaling predicted by warm dark matter models (M_min ∝ m^(-4)), providing a way to distinguish the two.
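The two scaling laws diverge quickly as the particle mass varies, which is what makes them observationally distinguishable. A minimal comparison, using an arbitrary normalization (the constant M0 below is an illustrative placeholder, not a value from the text):

```python
# Illustrative comparison of the two minimum-halo-mass scaling laws.
# M0 is an arbitrary normalization; only the m-dependence is meaningful here.
def framework_min_mass(m, M0=1.0):
    return M0 * m ** -1.5    # framework: M_J ∝ m_DM^(-3/2)

def wdm_min_mass(m, M0=1.0):
    return M0 * m ** -4.0    # warm dark matter: M_min ∝ m^(-4)

# Halving the particle mass raises the minimum halo mass by
# 2^1.5 ≈ 2.83 under the framework's scaling, versus 2^4 = 16 under WDM.
for m in (1.0, 0.5, 0.25):
    print(f"m = {m:4.2f}: framework {framework_min_mass(m):7.2f}, "
          f"WDM {wdm_min_mass(m):8.2f}")
```

Because the ratio of the two predicted minimum masses itself scales as m^(5/2), even an order-of-magnitude measurement of the smallest halos could separate the models.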

The prediction is consistent with observed suppressions of small-scale structure — the “missing satellites” problem in galaxy formation, where fewer small dark matter halos are observed than simulated. The framework predicts these are structural signatures of dark matter’s loop closure scale.

Eight Things That Should Not Happen

Beyond the two quantitative predictions, the framework makes eight qualitative predictions — all of which take the form “this will not be observed” or “this specific structure will be found.” Negative predictions are underappreciated in physics, but they are powerful: every null result that matches the prediction is evidence, and any positive result is falsification.

No fourth fermion generation. Three generations correspond to three independent rotation axes in three dimensions. A fourth would require a fourth axis, which does not exist. Current data from the LEP collider is consistent — precision measurements of the Z boson decay width constrain the number of light neutrinos to three.
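The LEP constraint amounts to back-of-envelope arithmetic. Using approximate published values — an invisible Z width near 499 MeV and a Standard Model partial width per light neutrino species near 167 MeV; both figures are assumptions drawn from standard references, not from this text:

```python
# Back-of-envelope neutrino counting from the Z boson's invisible decay width.
# Values are approximate published numbers, used here for illustration only.
gamma_invisible_mev = 499.0     # measured invisible width of the Z
gamma_per_neutrino_mev = 167.2  # SM partial width per light neutrino species

n_nu = gamma_invisible_mev / gamma_per_neutrino_mev
print(f"inferred number of light neutrino species: {n_nu:.2f}")
```

The result lands very close to three, with no room for a fourth light neutrino — exactly the pattern the framework requires.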

No supersymmetric partners. Supersymmetry requires a continuous transformation between integer and half-integer winding classes. These classes are topologically discrete — no continuous bridge exists. The LHC has found no evidence of supersymmetry up to TeV scales. The framework predicts this continues at all accessible energies.

Proton stability. Grand Unified Theories predict proton decay by embedding the Standard Model gauge group in a larger symmetry that is broken at high energy. The framework predicts no such larger group exists — the Standard Model gauge structure is the direct output of the three-dimensional coherence geometry. Proton decay will not be observed. Current experimental bounds exceed 10^34 years.

Exact unitarity. Unitary evolution — the preservation of quantum mechanical probability — is coherence conservation applied to quantum dynamics. It admits no exceptions, no corrections, no scale at which it fails. No objective collapse, no nonlinear modifications to quantum mechanics, no violation in black hole evaporation. Precision tests with matter-wave interferometry probe this directly.

The great desert. No new physics between the electroweak scale (246 GeV) and the Planck scale (10^19 GeV). No GUT scale, no intermediate symmetry breaking, no new particles in the vast energy desert between TeV and 10^16 GeV. This is a strong prediction — falsifiable by future colliders discovering new particles at intermediate scales.

No gauge coupling unification. The three Standard Model gauge couplings do not converge to a single unified coupling at any energy scale. There is no Grand Unification energy. The couplings are fixed by the division algebra structure at their respective crystallization scales. Minimal SU(5) GUT is already excluded by proton decay limits; the framework predicts all GUT models will fail.

No QCD axion. The strong CP problem (|θ| < 10^-10) is resolved by the octonionic structure of SU(3): non-associativity forces θ = 0 exactly, without dynamical relaxation. No axion particle exists. The Peccei-Quinn mechanism is unnecessary. Axion search experiments (ADMX, CASPEr) will continue to find null results.

Normal neutrino mass ordering and Majorana nature. Neutrinos are Majorana particles (their own antiparticles), following from the pseudo-real representation structure of SU(2)_L. The mass ordering is normal (m_1 < m_2 < m_3), following from the same universal winding-axis hierarchy that orders charged fermion masses. JUNO is expected to determine the ordering; next-generation 0νββ experiments (LEGEND-1000, nEXO) will test the Majorana prediction.

What Counts as Falsification

A framework should make clear what would disprove it. For Observer-Centrism:

The discovery of a fourth fermion generation would directly contradict the derivation of three generations from three spatial dimensions. The discovery of supersymmetric particles would contradict the topological argument against continuous winding-class bridges. Evidence of proton decay would contradict the absence of grand unification. A confirmed inverted neutrino mass ordering would falsify the universal winding-axis hierarchy. Any violation of unitarity would contradict coherence conservation itself — the most fundamental axiom.

The holographic noise prediction is more nuanced. A null result at sufficient sensitivity (a factor of ~10 below the predicted amplitude) would rule out the framework’s specific noise model. A detection with the wrong angular dependence would falsify the specific causal structure prediction.

What the Framework Does Not Yet Do

Honest accounting requires listing what is not accomplished.

The framework does not compute the specific masses and coupling constants of the Standard Model from first principles. It explains why the hierarchy exists, why the pattern is logarithmic, and why certain structural features (three generations, two statistics classes) take their values — but the precise numbers remain beyond current reach.

The mathematical formalization is incomplete. The coherence geometry needs a rigorous definition as a structure on causal sets. The bootstrap mechanism needs to be formalized as a dynamical system. The fixed-point equation of the self-consistent universe needs to be stated precisely enough for rigorous analysis.

Whether the fixed-point solution is unique — whether the axioms determine one universe or a landscape of possible universes — is the deepest open question. The interlocking constraints are highly restrictive, and the three-dimensions argument suggests uniqueness, but a proof requires the mathematical formalization that does not yet exist.

These are not deficiencies to be embarrassed about. They are the frontiers of an active program — the places where the framework’s structural arguments point toward specific mathematical problems that need solving. The derivation chain from axioms to predictions is complete and rigorous. The frontier is in deepening the mathematical foundations and sharpening the quantitative predictions.

On solid ground: The holographic noise prediction is quantitative, with a specific angular correlation function that can be tested. The negative predictions are all falsifiable, and several are already consistent with existing null results (no SUSY, no fourth generation, proton stability, no axion). The neutrino predictions (normal ordering, Majorana nature) are testable by JUNO and next-generation 0νββ experiments within the decade. The framework’s predictions are genuine — they were derived from the axioms, not fitted to the data.

Work in progress: The holographic noise amplitude depends on a parameter (α_H) that is bounded but not precisely computed. The dark matter granularity prediction is semi-quantitative — it gives the scaling law but not the exact dark matter particle mass. Computing Standard Model parameters from first principles and proving the uniqueness of the cosmological fixed point are the major open problems.

This guide has traced the framework from its three axioms through the derivation of spacetime, quantum mechanics, the particle spectrum, and holography, ending with testable predictions. The full derivations provide the mathematical details. The predictions dashboard tracks the experimental status. The framework is not finished — but the direction is clear, and the next steps are well-defined.