Frequently Asked Questions

What Dysrupt Labs is, what it is not, and how the operator-staked forecaster panel complements the consensus surveys institutional macro desks already use.


How does this differ from the monthly consensus surveys we already use?

Dysrupt Labs is a complement, not a replacement. Monthly consensus surveys aggregate point estimates from a roster of bank economists once per release cycle. The Dysrupt operator-staked forecaster panel produces a continuous, conviction-weighted distribution updated in real time by 900+ contributors with a 7+ year median tenure on the platform.

The two data products answer different questions. Surveys tell a desk where the sell-side consensus has settled. The Dysrupt panel tells a desk where conviction is moving and where a regime-conditional cohort is diverging from the crowd. Most macro desks that adopt the panel run it alongside their existing consensus inputs — the value sits in the divergence layer, not in re-stating the headline.


How is this different from traditional alternative data?

Traditional alt data — credit card panels, satellite imagery, web scrapes, foot traffic — measures observable economic activity. Dysrupt Labs measures something different: the microstructure of human conviction under Knightian uncertainty. The signal is generated by an operator-staked panel of expert forecasters whose individual track records and behavioural signatures are known, scored, and weighted.

It is forecaster microstructure data — a category that does not yet exist in standard vendor catalogues. The data product is delivered in formats compatible with the Neudata DDQ standard and includes MNPI policy documentation. Where traditional alt data answers what is happening in the economy, the Dysrupt panel answers what an expert cohort believes is about to happen, and how confidently.


What is the academic basis for the signal?

Two peer-reviewed publications underpin the methodology. Gruen, Mattingly et al. (2023) in eBioMedicine validated the ML forecasting architecture on a DARPA-funded NGS2 programme dataset. Bossaerts, Mattingly et al. (2024) in the Journal of Financial Markets documented price formation and dataset credibility on the Almanis platform. Both papers were independently replicated.

A third paper, sole-authored, extends the framework to regime-conditional accuracy, Kyle-lambda microstructure, and the three-inference-mode framework. It is in preparation for the Journal of Financial Markets. In March 2026, the divergence and scored-divergence signals were replicated on a structurally different public forecasting venue using a constant-product automated market maker — establishing that the signal is a property of human behaviour under uncertainty rather than an artefact of a single platform's mechanism design.
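For orientation, the constant-product automated market maker mentioned above is the standard x · y = k mechanism. The sketch below is a generic illustration of that mechanism class, not the replication venue's actual implementation; the function names and reserve values are ours:

```python
def cpmm_price(x, y):
    # Spot price of asset X in units of Y, from pool reserves (x, y).
    return y / x

def swap_in(x, y, dy):
    # Pay dy of asset Y into the pool; the invariant x * y = k
    # fixes how much of asset X comes out (price moves against the trader).
    k = x * y
    new_y = y + dy
    new_x = k / new_y
    return x - new_x, new_x, new_y  # (amount of X out, new reserves)

out, nx, ny = swap_in(100.0, 100.0, 10.0)
print(round(out, 4))  # 9.0909: less than 10 in, because of slippage
```

Because prices here emerge from a fixed algebraic invariant rather than a scored panel, replicating the divergence signals on this structurally different venue supports the claim that the signal reflects forecaster behaviour, not one platform's mechanism.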

Research →

Why hasn't this been done before?

Three preconditions had to converge. First, an operator-staked forecaster panel run continuously for long enough to accumulate a 7+ year median tenure across a meaningful roster — Almanis has been live since 2019 and the underlying programme since 2008. Second, the methodological infrastructure to identify a regime-conditional cohort within the panel and score its separation from the crowd consensus — established in the 2023 and 2024 peer-reviewed publications.

Third, a regulatory environment in which an operator-staked structure is clearly distinguishable from peer-to-peer wagering venues. Dysrupt Labs is the first vendor to bring all three preconditions together as a sellable data product in a format institutional buyers can diligence.


Is Dysrupt Labs a prediction market?

No. Dysrupt Labs operates an operator-staked forecaster panel. Participants forecast against a stake provided by the operator under a logarithmic market scoring rule; they do not wager against each other. This is an architectural and regulatory distinction, not a marketing one. The structure was chosen at inception specifically to keep the platform inside the research and information-services boundary.
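For readers unfamiliar with the mechanism: a logarithmic market scoring rule (Hanson's LMSR) prices each outcome from a convex cost function subsidised by the operator, so every forecast trades against that cost function rather than against another participant. A minimal illustrative sketch, with function names and the liquidity parameter b chosen by us rather than taken from the platform:

```python
import math

def lmsr_cost(quantities, b=100.0):
    # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)).
    # b is the operator's liquidity subsidy; larger b means deeper markets.
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    # Instantaneous prices p_i = exp(q_i / b) / sum_j exp(q_j / b).
    # They sum to 1 and read as implied probabilities.
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def forecast_cost(q_old, q_new, b=100.0):
    # A forecast is a trade against the operator-staked market maker:
    # the participant pays C(q_new) - C(q_old), never another participant.
    return lmsr_cost(q_new, b) - lmsr_cost(q_old, b)

print(lmsr_prices([0.0, 0.0]))      # fresh binary market: [0.5, 0.5]
print(lmsr_prices([50.0, 0.0])[0])  # buying "yes" lifts its price above 0.5
```

The property that matters for the regulatory point above: the counterparty to every trade is the operator's subsidised cost function, so there is no peer-to-peer wager.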

The data product sold to institutional buyers is forecaster microstructure data, not market access. Buyers receive a continuous time series of crowd consensus, cohort divergence, and scored divergence — not a brokerage relationship, not a venue.


What are the three signals?

Signal 1 — General Consensus. The headline crowd forecast aggregated across 900+ forecasters. Tracks the public consensus benchmark for each release. Peer-reviewed.

Signal 2 — Divergence. The separation between the crowd consensus and an ML-identified cohort whose accuracy advantage is regime-conditional. Lives in the microstructure. Private to Dysrupt Labs.

Signal 3 — Scored Divergence. The z-scored magnitude of Signal 2, weighted by cohort track record. When it spikes (z ≥ 1.65), the consensus typically revises toward the cohort estimate. Documented in a private whitepaper (2026).
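As an illustration of the scored-divergence idea, the sketch below z-scores a crowd-versus-cohort gap against its trailing history and flags a spike at z ≥ 1.65 (the one-sided 95% normal quantile). The helper names are hypothetical, and the production signal's track-record weighting is omitted:

```python
import statistics

def scored_divergence(crowd, cohort, gap_history):
    # Z-score today's crowd-vs-cohort gap against its trailing history.
    # gap_history holds past (crowd - cohort) gaps for the same release series.
    gap = crowd - cohort
    mu = statistics.mean(gap_history)
    sigma = statistics.stdev(gap_history)
    return (gap - mu) / sigma

def is_spike(z, threshold=1.65):
    # Threshold matches the z >= 1.65 trigger quoted for Signal 3.
    return abs(z) >= threshold

history = [0.0, 1.0, 2.0, 1.0, 0.0]       # toy gap history
z = scored_divergence(3.0, 0.0, history)  # unusually wide gap today
print(is_spike(z))                        # True: cohort has pulled away
```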

Signal detail →

How does diligence work?

Qualified institutional evaluators can request access to an NDA-gated data room. Contents include the two peer-reviewed papers, the current working paper, a Neudata-standard DDQ, MNPI policy documentation, signal methodology, backtest methodology, track record tables, and sample signal history. The standard request route is an NDA returned to karlmattingly@dysruptlabs.com; data room access follows the same day.

Get in touch

karlmattingly@dysruptlabs.com

Past performance is based on backtested data and is not indicative of future results.