Reachability Labs
research-driven diagnostics for hidden process failure

Existence is not reachability.

A goal can still be real after every path to it has closed. Reachability Labs measures when a process loses access to viable futures — before ordinary metrics can see it.

Going backward is rarely free. In real systems it means rework, scrap, wasted compute, schedule slip, or a changed problem. Diagnostics are available now. Software is in development.

Interactive: drag through rising pressure. Toggle process strength. Watch the gap open.
Available now
Diagnostics Services

Bring a failing process. Get a receipt-backed analysis of where it loses the path, how it fails, and what changes are likely to help.

In development
Software

A diagnostic platform for boundary detection, trap analysis, and process comparison. Early-access and pilot conversations are welcome.

Why it is credible
Research

Oracle-verified trap analysis, confirmed scaling predictions, and a second combinatorial domain all support the core finding.

The path closes before the goal disappears.

Most systems don't fail because the destination vanished. They fail because the route died first.

Classical maps tell you where valid outcomes exist. They do not tell you when a real process has already committed itself out of every workable route. By the time that failure is obvious, teams have already paid in rework, scrap, wasted compute, or schedule slip.

What most people measure
Destination structure
How many valid outcomes exist? Where does the landscape itself become impossible? Those questions matter — but they start from the map, not from the traveler.
What Reachability Labs measures
Journey structure
Where does a real process stop being able to reach valid outcomes? What kind of trap did it enter? Is the process still locally mobile after the future has already collapsed?
Δ

The gap between existence and reachability is the core object.

A route can close long before the goal disappears. That loss has its own geometry, hidden states, warning signs, and costs. The diagnostics are built to find it.

Paid diagnostics for processes that fail in ways ordinary metrics miss.

By the time failure is visible, teams may already have burned compute, material, time, or schedule. The work tells you where a process loses reachability, what kind of trap dominates, and what a realistic upgrade is likely to buy.

Available now

Diagnostic studies

Bring a solver, planner, search pipeline, optimizer, or comparable constructive process. Reachability Labs instruments the run, analyzes where and how the path closes, and returns a decision-grade report.

Boundary location and regime identification
Failure fingerprint and hazard analysis
Trap geometry: shallow vs deep failure modes
Trajectory and branch-level comparisons
Variant comparison when a stronger process exists
Concrete engineering recommendations
Software in development

Constructive reachability software

The long-term product is a software platform for measuring constructive accessibility directly: boundary detection, trajectory diagnostics, trap analysis, variant comparison, and adapter-based transfer in one instrumented workflow.

Boundary and width estimation
Trajectory and trap diagnostics
Variant and ladder comparisons
Receipt-backed audit trails
Adapter-based domain support
Research-grade outputs for internal teams

Planned path: service-led development now, documented software release later, with publication-grade validation and a formal release track including JOSS.

How a diagnostic works.

Define the target. Instrument the process. Compare variants. Locate the failure mode.

Step 1
Define success
What counts as a valid outcome for this process?
Step 2
Instrument
Capture trajectories, decisions, and failure receipts
Step 3
Compare
Baseline vs stronger process, or multiple process families
Step 4
Diagnose
Identify where the path closes and what kind of trap dominates
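The four steps above can be sketched end to end on a toy instance. Everything here is illustrative: the formula, the greedy rule, and the brute-force oracle are stand-ins for exposition, not the lab's instrumentation.

```python
from itertools import product

# Step 1: define success. A tiny 2-literal-per-clause toy formula over 4
# booleans; each clause is a list of (variable index, required value) pairs.
CLAUSES = [[(0, True), (1, True)], [(0, False), (2, True)],
           [(1, False), (2, False)], [(2, True), (3, True)],
           [(1, True), (3, False)]]

def satisfied(assign):
    return all(any(assign[v] == w for v, w in c) for c in CLAUSES)

def alive(prefix):
    """Ground-truth oracle: does the committed prefix still have any
    satisfying completion? (Brute force is fine at toy scale.)"""
    tails = product([False, True], repeat=4 - len(prefix))
    return any(satisfied(list(prefix) + list(t)) for t in tails)

def greedy(prefix):
    """The process under instrumentation: a myopic rule that picks the
    value satisfying more clauses among those already decidable."""
    def score(val):
        a = list(prefix) + [val]
        return sum(any(v < len(a) and a[v] == w for v, w in c) for c in CLAUSES)
    return score(True) >= score(False)

# Steps 2-4: instrument the run, then diagnose where the path closes.
prefix = []
for step in range(4):
    prefix.append(greedy(prefix))
    print(f"step {step}: chose {prefix[-1]}, prefix alive: {alive(prefix)}")

print("valid outcomes exist:", alive([]))  # True: the landscape stays alive
```

On this toy the greedy run goes dead at step 1 while the formula stays satisfiable throughout: the route dies long before the goal does, and only the oracle can see it.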
Convergent across variants

Landscape-side · structural

Results that give the same answer regardless of which process you use are candidates for structural claims about the problem itself. These are features of the constraint geometry, not of your process.

Changes across variants

Process-side · procedural

Results that shift when you change the process are tied to the interaction between your method and the problem. The diagnostic tells you which failures are structural and which your process can actually fix.
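One crude way to operationalize this split, assuming each process variant reports an estimate of the same quantity. The variant names, values, and agreement tolerance are all illustrative:

```python
def classify(estimates_by_variant, tol=0.05):
    """If every process variant reports nearly the same value, the quantity is
    a candidate structural (landscape-side) feature; if it moves when the
    process changes, it is process-side. tol is an illustrative threshold."""
    vals = list(estimates_by_variant.values())
    return "landscape-side" if max(vals) - min(vals) <= tol else "process-side"

# Hypothetical measurements from three process variants on the same ensemble:
sat_threshold    = {"greedy": 4.26, "lookahead": 4.25, "backtracking": 4.27}
success_boundary = {"greedy": 3.10, "lookahead": 3.60, "backtracking": 4.10}

print(classify(sat_threshold))     # agrees across variants
print(classify(success_boundary))  # shifts with the process
```

The satisfiability threshold barely moves across variants, so it is a candidate fact about the landscape; the success boundary moves with the process, so it is something a stronger process can actually buy back.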

Where this applies

Different domains, same failure pattern: the target remains possible while the current process has already cut itself off from it. The same core diagnostic logic applies wherever a process builds toward valid outcomes under accumulating constraints.

SAT & constraint solving

The flagship 3-SAT work measures a sharp constructive accessibility boundary and resolves its fingerprint through bridge checks, rewind probes, and late-frontier diagnostics.

Why it matters: It shows the diagnostic object is real in a canonical benchmark with strong auditability.

Graph coloring

A second oracle-backed adapter already shows a large process-side collapse band inside an alive landscape, confirming that the process-versus-landscape distinction is not confined to SAT and that the measurement contract can transfer.

Why it matters: It validates the adapter program and transfer-calibration logic.

Compiler and EDA workflows

Register allocation, routing, proof search, and related workflows are all sequential constrained processes that can lose reachable futures before their surface diagnostics reveal why.

Why it matters: These are real engineering domains where hidden process failure has direct cost.

Scheduling and planning

Forward planners and schedulers commit to actions and resource allocations under accumulating constraints. They often fail for reasons that are visible only very late in the run.

Why it matters: The diagnostics translate naturally to process-side bottlenecks in planning systems.

Channel and frequency assignment

Conflict-constrained assignment domains, including graph-coloring-like formulations in communications, fit the same instrument logic well.

Why it matters: Whether valid assignments exist and whether a given process can reach them are often very different questions.

Emerging AI reasoning workflows

Structured decoding, planning-like inference, and sequential reasoning systems are natural future domains because they also accumulate commitments while local metrics remain incomplete.

Why it matters: The long-term product opportunity is likely larger than SAT itself.

Research that supports the offer.

The papers matter because they establish that the diagnostics are not guesswork. The current stack includes a flagship benchmark, a first validation lane, a process ladder, an internal theorem scaffold, and a second adapter showing that the core measurement logic transfers.

Flagship result

A measured constructive accessibility boundary in random 3-SAT

A sharp process-indexed boundary in a canonical benchmark, with late visible failure, exact-prefix deadness, trap depth, and strong within-instance path dependence.

See evidence ↓
Validation lane

Exact-all-local and pool-stability checks

New validation runs keep the constructive boundary in the same narrow band, strengthening the case that the flagship signal is not just a pool artifact.

See validation ↓
Process ladder

P1 shows that stronger forward processes can move the boundary

A stronger forward rung stays alive far beyond the baseline window, showing that added structure can move the accessibility boundary rather than merely smooth the curve.

Why the ladder matters ↓
Transfer calibration

Graph coloring as a second oracle-backed adapter

A second combinatorial domain now shows the same deeper process-vs-landscape split with a different surface morphology, strengthening the adapter-first program.

See domains ↓

Internal theorem scaffold: a parallel theorem program continues to formalize hidden structural state, delayed manifestation, and the distinction between landscape-side existence and process-side reachability. See the interactive benchmark view below.

The shortest useful vocabulary for what this work is measuring.

Start here if the language is new. The goal is to make the problem legible in plain terms first and technical terms second.

In the figures, the goal is the valid endpoint, the process is the method, and reachability is the live relation between them.

Opacity is symmetric. A doomed trajectory is dead but can't tell — it keeps moving, locally healthy, while its prefix has no satisfying completion. A surviving trajectory is alive but can't tell — it occupies a local state where most of its neighbors are already dead. Toggle between them: same formula, same local view, opposite truths.

Three objects measure what's happening: G (forfeiture — how much available room commitment has cost), W or C (the walking-dead interval or its surviving dual), and O (opacity — the information gap between local view and ground truth).
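A minimal numeric reading of W and O on a toy trajectory log. The log values are invented, and the definitions here are deliberate simplifications for illustration; G, the forfeiture measure, needs counts of remaining completions and is omitted.

```python
# Each step records (oracle_alive, looks_locally_healthy). In this invented
# run, the future dies at step 2 but local signals stay green until step 4.
steps = [(True, True), (True, True), (False, True), (False, True), (False, False)]

death   = next(i for i, (a, _) in enumerate(steps) if not a)  # ground truth
visible = next(i for i, (_, h) in enumerate(steps) if not h)  # what you see

W = visible - death                             # walking-dead interval
O = sum(a != h for a, h in steps) / len(steps)  # opacity: view vs truth

print(f"path closed at step {death}, failure visible at step {visible}")
print(f"W = {W} steps of dead-but-moving, O = {O:.2f}")
```

The gap between `death` and `visible` is exactly the interval in which ordinary metrics report health while no viable future remains.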

Concept 01

Map and route

A maze can still have exits even after the route you are on can no longer reach any of them. The map can stay alive while the route dies.

Why it matters: map existence and route viability are different facts.
Concept 02

Existence is not reachability

A goal can remain valid after the current process has lost the route to it. Reachability asks whether this process can still get from here to a valid completion.

Why it matters: it gives a process-indexed boundary rather than only a static threshold.
Concept 03

Hidden failure

A process can keep moving after its viable future is already gone. Local signals may look healthy even when the route is already dead.

Why it matters: local "health" is not the same as future viability.
Concept 04

Hidden survival and trap depth

The opposite can happen too: a route can look doomed while a narrow pass still exists. Some failures are shallow mistakes; others are deep structural traps.

Why it matters: local appearance is not enough; recovery budget matters.
Concept 05

The ladder

Stronger processes get more sight, more inference, more correction, more reversibility, or some combination. The ladder asks what each added capability actually buys.

Why it matters: upgrade decisions become measurable rather than anecdotal.
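What "buying" means can be made concrete by reading a process-indexed boundary off each rung's success curve. The rung names echo the A0 baseline label used elsewhere on this page, but every number here is invented for illustration:

```python
# Invented success rates per constraint pressure alpha for three ladder rungs.
curves = {
    "A0 greedy":           {3.0: 0.90, 3.4: 0.30, 3.8: 0.05, 4.2: 0.00},
    "A1 + lookahead":      {3.0: 0.99, 3.4: 0.80, 3.8: 0.30, 4.2: 0.02},
    "A2 + bounded rewind": {3.0: 1.00, 3.4: 0.95, 3.8: 0.70, 4.2: 0.20},
}

def boundary(curve, level=0.5):
    """Last pressure at which the rung still succeeds at least half the time."""
    return max((a for a, s in curve.items() if s >= level), default=None)

for rung, curve in curves.items():
    print(f"{rung}: boundary near alpha = {boundary(curve)}")
```

Each added capability shifts the boundary outward by a measurable amount, and that shift, not the raw success score, is the quantity the ladder prices.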
Concept 06

Adapters

K-SAT and graph coloring are calibrated domains, not the whole story. The point is to measure the same deeper object across different terrains.

Why it matters: the methodology is meant to travel.

Validated. Oracle-verified. Receipt-backed.

The research-facing benchmark view. This interactive figure shows the measured A0 constructive success curve, the existence-side reference, and the internal structure of the gap. Toggle system size to see how the boundary sharpens. Drag to measure the separation at any matched success rate.

Constructive Accessibility from Committed Prefixes in Random 3-SAT
Michael Richard Nothem · Reachability Labs

Nine system sizes through n = 1,000 anchor the flagship result. Oracle-verified trap analysis, bounded rewind, trajectory-space diagnostics, a confirmed scaling prediction, and a second combinatorial domain all support the core finding.

submission-ready preprint

Artifact ledger

Public artifacts, versions, and mirrors. The site is the hub — OSF, GitHub, Zenodo, and arXiv are the mirrored destinations.

Artifact | Version | Status | Primary | Mirrors
Flagship paper: Constructive Accessibility from Committed Prefixes in Random 3-SAT | v1.0 | Public | OSF | Zenodo · arXiv forthcoming
Public demo: Runnable constructive accessibility diagnostic | v0.1 | Public | GitHub | Zenodo release forthcoming
Interactive figures: Epistemic, G·W·O, and scientific benchmark figures | v4–v6 | Public | Website | GitHub
Supplement: Extended data, validation runs, coloring adapter | v1.0 | Public | OSF | Zenodo

Verify a download (SHA-256)

Pick a file you downloaded. This computes SHA-256 in-browser and checks it against the expected hash. Nothing is uploaded.
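The same check can be run locally. A sketch using Python's standard hashlib; the file path and expected hash are whatever you downloaded and whatever the ledger publishes:

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream the file in 1 MiB chunks so large artifacts never need
    to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(path, expected_hex):
    """Compare against a published hash, tolerating case and stray whitespace."""
    return sha256_of(path) == expected_hex.strip().lower()
```

Equivalent one-liners exist outside Python: `shasum -a 256 file` on macOS/Linux, or `certutil -hashfile file SHA256` on Windows.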

The lab.

Michael Richard Nothem
Founder · Process Engineer · Researcher

Reachability Labs grew out of a process engineering background and a research program that kept arriving at the same question: why do well-instrumented processes still fail in ways their own metrics cannot explain?

The answer turned out to be structural. A process can lose access to viable futures long before any local signal reveals the loss. The diagnostics, the research, and the software direction all follow from that finding — measured first in random 3-SAT with oracle-verified trap analysis, then extended to graph coloring, and now being built into a general-purpose diagnostic methodology.

The work spans computational combinatorics, constraint satisfaction, process diagnostics, and the emerging theory of constructive accessibility. The lab is currently a focused research-and-services operation. The intention is to grow it as the methodology and software mature.

Bring a process, not a perfect story.

If your process keeps moving but outcomes still collapse, the fastest path is a diagnostic engagement. If you are interested in the coming software, ask about early access or pilot partnerships.

Good fit if:

  • Your process keeps moving but fails late
  • Local metrics still look healthy while outcomes collapse
  • You are comparing stronger variants and do not know what they actually buy
  • You need more than a benchmark score

Diagnostic engagement

Bring a process, not a perfect story. The work can begin from your current implementation and current uncertainty.

  • A runnable process: solver, planner, optimizer, decoder, scheduler, or similar system
  • Your definition of success or valid completion
  • Some description of the constraint structure
  • What you have already tried and where it broke
Request diagnostics

Software and pilots

The software direction is real and in progress. Reach out if you want early access, a pilot relationship, or to discuss what deployment would need to look like for your domain.

  • Early-access and pilot conversations
  • Software roadmap questions
  • Research-backed product development
  • Selected research and grant conversations
Ask about software
Reachability Labs is a research-driven diagnostics business. Services are available now. Software is in development.