
The real value in prediction markets isn't the platform. It's the market maker.

17 Feb 2026

I. Who Are We?

We are a market-making firm for prediction markets. We use AI to quote prices on event contracts across politics, economics, climate, technology, geopolitics, and culture, and we put real capital behind those prices. Y Combinator listed AI-native hedge funds as a category in its Spring 2026 Request for Startups.

Prediction markets let millions of people simultaneously price the probability of real-world events: will inflation exceed 3%, will a hurricane hit Florida, will a ceasefire hold. The price of each contract, between $0 and $1, reflects what the market collectively believes. But without a liquidity provider, these markets sit dead. A buyer shows up wanting a contract at 70¢ and nobody is selling, or someone is selling at 90¢ - a price so far from any reasonable estimate that the trade never happens.

A liquidity provider solves this by standing in the middle: buying from anyone at 69¢, selling to anyone at 71¢. The provider earns 2¢ on every round trip (the spread) as payment for being permanently available and absorbing the risk. But the provider's entire survival depends on the accuracy of its probability estimates[1]. Accuracy creates a compounding advantage: the more accurate the estimate, the tighter the spread the provider can offer, which means cheaper trading, more volume, and more revenue.
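The round-trip economics can be sketched in a few lines of Python (a simplified illustration; the function name and the fixed half-spread are assumptions, not a description of our actual quoting logic):

```python
# Hypothetical sketch of a market maker quoting around a probability
# estimate. Prices are in cents on a $1 contract.

def make_quotes(fair_value_cents: float, half_spread_cents: float):
    """Return (bid, ask) centered on the maker's probability estimate."""
    bid = fair_value_cents - half_spread_cents
    ask = fair_value_cents + half_spread_cents
    return bid, ask

# A 70% probability estimate with a 1-cent half-spread gives the
# 69/71 market described above.
bid, ask = make_quotes(70.0, 1.0)
round_trip_profit = ask - bid  # earned when a buyer and a seller both trade
print(bid, ask, round_trip_profit)  # 69.0 71.0 2.0
```

A sharper probability estimate lets the maker shrink `half_spread_cents` without increasing the risk of quoting on the wrong side of fair value, which is the compounding advantage described above.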

II. The Contrarian Claim

The most valuable company in prediction markets will not be a platform. It will be the market maker.

Right now, the money and attention are flowing into platforms. Kalshi raised $1 billion at an $11 billion valuation. ICE invested up to $2 billion in Polymarket at a $9 billion valuation. In the last six months of 2025, at least six major entrants launched competing products. Robinhood already contributes more than half of Kalshi's trading volume and has announced plans to launch its own exchange. The platform layer is getting crowded fast.

Platforms are infrastructure - they list contracts, match orders, handle custody, and collect a fee on each trade. When six well-funded companies offer functionally identical infrastructure, the only way to compete is on price. The closest precedent is traditional stock exchanges: NYSE and NASDAQ used to make most of their money from trading fees, but after years of rebate wars, that revenue shrank to a rounding error. They survived by becoming data businesses - selling proprietary market data feeds to the institutional traders and market makers who actually capture the trading economics. This is exactly the bet VCs are making on prediction market platforms today.

The billions flowing into Kalshi and Polymarket are a bet on three things:

  1. Institutional market makers will enter prediction markets and provide the liquidity that makes order books function;
  2. Probability data for real-world events will be valuable to other markets - enterprise customers (government, defense, industrials, logistics), risk managers, and AI agents - the same way ICE pipes NYSE data into Bloomberg terminals;
  3. Prediction markets will become hedging instruments for traditional finance.

All three bets depend on someone actually producing the prices - a gap we intend to fill. A market maker earns the spread on its own volume, as a principal, by taking risk and being right about probability more often than not. Unlike a platform, its margins do not compress with competition; they expand with scale.

Every trade is a data point that makes the next price quote slightly more accurate, which allows a slightly tighter spread, attracting more volume, generating more data. A market maker with ten years of pricing data and flow patterns has an advantage no amount of capital can shortcut.

III. Economics of Market Makers

Platforms collect a fee on volume they do not control. At current take rates (~1%), with six major competitors offering identical infrastructure, fees will compress sharply. Even at $1 trillion in annual volume, the best-case platform profit after fee compression is roughly $700 million.

Market-maker economics are structurally different. The benchmark is Citadel Securities: ~$10 billion in net trading revenue in 2024 at ~60% EBITDA margins, capturing roughly 3 basis points per dollar of equity volume. In prediction markets, spreads are orders of magnitude wider. This is not unusual - it is what every new asset class looks like before institutional market makers compress them. Bitcoin spreads in 2017-2018 ran 50-100+ basis points on major exchanges; today they are under 5 basis points. Prediction market spreads are at a similar stage. Even after compression at scale, blended capture should settle around 150 basis points, roughly 50x wider than Citadel's capture in equities.

The math: $1 trillion in volume × 20% market share × 1% net capture = $2 billion in annual revenue. At 55% margins, that is ~$1 billion in profit - from a single market maker. The 20% share is conservative: in the long tail, the intelligence-driven maker's share approaches 100%, because no one else can price those contracts. Citadel captures 35% of U.S. retail equity flow against dozens of competitors.

Critically, the two variables - capture rate and volume - move in opposite directions. If liquidity increases, spreads compress but volume rises to more than offset it. If the market grows slowly, spreads stay wide and per-dollar capture remains high. The revenue is the product of two variables that hedge each other.
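As a rough sketch, the revenue math and the two offsetting scenarios look like this (the base-case inputs are the assumptions stated above; the compressed-capture and wide-capture scenario numbers are illustrative only):

```python
# Back-of-envelope profit model: profit = volume x share x capture x margin.

def annual_profit(volume_usd: float, market_share: float,
                  capture_rate: float, margin: float) -> float:
    """Annual market-maker profit under the stated assumptions."""
    revenue = volume_usd * market_share * capture_rate
    return revenue * margin

# Base case from the text: $1T volume, 20% share, 1% net capture, 55% margin.
base = annual_profit(1e12, 0.20, 0.01, 0.55)
print(f"${base / 1e9:.1f}B")  # $1.1B

# The hedge: the two variables move in opposite directions.
deep_liquidity = annual_profit(3e12, 0.20, 0.004, 0.55)  # tighter capture, more volume
slow_growth = annual_profit(5e11, 0.20, 0.015, 0.55)     # wider capture, less volume
```

Under these illustrative inputs both scenarios land in the same ballpark as the base case, which is the hedging property described above: capture rate and volume partially offset each other.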

IV. Why Now?

The asset class grew ~40x in a single year in 2025. The run rate exiting January 2026 annualizes to roughly $250 billion[2]. Reaching $1 trillion requires only a 4x from that pace over three years. The two dominant exchanges have raised a combined ~$3.6 billion at valuations of $11 billion and $9 billion respectively. This level of institutional commitment - ICE, CME, every major consumer finance app - changes the risk profile of the entire category. The question is no longer whether prediction markets will exist at scale. It is who captures the economics once they do.

But roughly 85% of volume today is sports. Non-sports - politics, economics, weather, culture, everything the institutional thesis depends on - accounts for less than $1 billion of $20 billion on Kalshi alone. The entire forward case (ICE's $2 billion bet, the $8 billion-by-2030 revenue projection) requires non-sports to grow from under 10% to over 50% of volume. That growth depends on liquidity. Currently, the long tail sits with wide spreads and thin activity. The most valuable probabilities - Will CPI exceed 3%? Will the Fed cut in June? Will a ceasefire hold in Sudan? - are trapped where no one is making markets.

The few institutional market makers who have entered - SIG in April 2024, Jump in early 2026 - focus on sports and a handful of high-volume categories. No one is pricing the long tail, because it is heterogeneous in a way that equities and options are not. Pricing GDP growth is macroeconomics. Drug approval is biostatistics. A ceasefire holding is geopolitics. Each contract requires a different analytical framework. Until recently, no system could reason across dozens of unrelated domains simultaneously.

That is changing. ForecastBench tracks LLM performance against human superforecasting teams on real-world prediction questions. The best commercially available LLM currently scores within 20% of elite human superforecasters, and the gap closes steadily with each model generation. At the current rate, LLMs reach parity with the best human forecasting teams by late 2026.

V. The (True) Compounding Moat

The model compounds its own defensibility: liquidity → participants → trading signals → sharper pricing → tighter spreads → more liquidity. This is the same flywheel that made Citadel Securities nearly impossible to displace in equities - except in prediction markets the moat is deeper, because the pricing problem is harder. In equities, any well-capitalised firm with fast infrastructure can quote competitive spreads on homogeneous instruments. In prediction markets, the long tail is heterogeneous by definition.

The firm that has priced ten thousand contracts across meteorology, geopolitics, and macroeconomics has a training dataset no new entrant can replicate without years of live market participation. Every day a market maker is live, it collects something that cannot be bought later: order book snapshots, flow patterns, and the record of how prices moved in response to real-world events. You cannot go back and buy the order book from March 2026. No market maker has yet started building this flywheel. We intend to start first to build and test this intelligence with live P&Ls as prediction markets scale.

VI. Beyond Trading

Expansion Paths

The trading business alone justifies building the intelligence layer. But what we are actually constructing is a real-time probability oracle calibrated against live markets across every domain of human knowledge. That asset does not currently exist anywhere.

  • Insurance ($7T in annual premiums): every premium is a probability estimate. A system that produces continuously updated, market-calibrated probabilities on insurable events is a pricing engine every reinsurer and catastrophe modeler would pay for.
  • Financial derivatives ($600T+ notional): every option, structured product, and exotic swap is a bet on future states. A system that reasons about joint probabilities across correlated events is a pricing co-pilot for every structured products desk.
  • The agent economy: When AI agents make decisions on behalf of people and companies, they need a probability layer to reason against - the intelligence layer becomes the API every agent calls.
  • Data licensing: ICE's highest-margin business is selling market data; it invested $2 billion in Polymarket because it sees event probabilities as a new data category.

The sequencing (think DeepSeek): market-making first (self-funding, builds the data asset), then data licensing and API access once the probability oracle has a verified track record, then insurance and derivatives partnerships once domain-specific calibration is proven.

Historical Context

Historically, the firms that provided this kind of intelligence - calibrated probability estimates across domains (risk intelligence, economic consulting, strategy consulting) - employed thousands of human experts. That model capped out at roughly $100 billion in combined revenue across 10-20 service-based firms, and even then revenue per employee was low.

It was never a good enough standalone investment to build a general-purpose prediction engine from scratch, which is why the best ones were always built internally inside financial firms - proprietary tools that never left the trading floor. Prediction markets change that equation. For the first time, there is a clear commercial reason to build this intelligence layer as an independent product: it is financed and validated by the exceptional P&Ls that market making generates, and the opportunity is open to whoever moves first.

Why This Is Not a Capital-Intensive Business

Citadel and Jane Street need tens of billions in trading capital because they earn 3 basis points on trillions of dollars of volume - razor-thin margins that only work at enormous scale. Market making in prediction markets captures spreads 50-150x wider on fully collateralized contracts with no leverage and no margin calls.

The expensive input is not capital but intelligence: LLM compute that scales with cloud infrastructure, not with balance sheet size. The business is designed to self-fund from the first months - microstructure profits generate cash from day one, funding both the intelligence layer and organic expansion of trading capital.

A Note on LLM Progression

LLMs do not need to outperform quantitative traders from day one. What they do immediately is make it significantly faster to find alpha in new markets and in markets where the relationships between assets evolve quickly. Over time, they begin doing this autonomously - and eventually, they start beating human teams on raw performance. The progression is: faster research today, autonomous execution tomorrow, superhuman accuracy after that.

Notes

  1. If the market maker quotes far from the contract's fair value, only one leg of its quote gets hit. Suppose fair value is 90¢ but the maker quotes 69¢ bid / 71¢ ask: buyers lift the 71¢ offer (selling YES at 71¢ is equivalent to buying the opposite token at 29¢), while the 69¢ bid never fills because the market moves away from it. The maker has effectively paid 29¢ for a token worth 10¢ - one that resolves at 0¢ if the event occurs.
  2. Two caveats on this figure: (1) Independent studies have pointed out that public volume estimates are often double-counted, especially as new exchanges open and platforms like Robinhood divert volume to their own venues; we are unsure how much of the reported total is double-counted versus independent volume. (2) That distinction matters for the growth trajectory over the next three to four years. We plan to compute these numbers and track these indicators in-house, and will follow up with more accurate estimates.
  3. This document was drafted with the assistance of Claude Opus 4.5 (Anthropic). All ideas, claims, data points, and strategic reasoning are original and reflect work conducted internally by the Anthral Labs team. The underlying research, methodology, and datasets will be open-sourced on GitHub shortly. For early access or inquiries, please contact us at contact@anthral.com.