By Ivan Ivashchenko (CTO)
August 14, 2025 · 17 min read

This article is Part 2 of our series on advanced portfolio margining.
If you haven’t read Part 1, check out our CTO’s blog post here: Pascalprotocol.com/blog-and-news/portfolio-margin-for-adults

Clearing Possibilities: Pricing the Risk You Haven’t Taken Yet

Previously, we walked through the elegant world of portfolio margining: how Pascal Protocol evaluates your entire portfolio to compute your actual risk with a VaR-based engine that outclasses isolated/cross toys. Cross-asset, correlation-aware, volatility-weighted, gamma-constrained.

- Real VaR.

- Real math.

- Real risk management.

It prices portfolios accounting for diversification and hedges. The result is obvious: measure risk properly and you need less collateral for the same exposure. Today, we keep going, lifting the hood on how the Pascal team built a clearing layer that actually clears.
If you took only one thing from the first article, let it be this: capital efficiency without truthful risk is just leverage cosplay.

Our engine’s job isn’t to make margin low; it’s to make it correct, block after block, even when markets are loud and messy.
Clearing should be boring in good times and brutally honest in bad times. If it surprises you, it’s a bug. After the last article, you might have thought,

“OK, margin calculation is solved.”

Yeah… no. Let’s be honest-what we covered solved maybe 10% of the nightmare.

That was the easy part. It accounted for the positions you have, not the chaos you’re planning. We politely pretended limit orders didn’t exist.
Remember that tidy little margin formula from last time?
$$ M = \sqrt{\sum_{i}\left((M_i^{+})^{2}+(M_i^{-})^{2}+2\gamma_{i}M_i^{+}M_i^{-}\right)+\sum_{i}\sum_{j}2\beta_{ij}(M_i^{+}+M_i^{-})(M_j^{+}+M_j^{-})} $$
All that glorious math only works if you freeze the portfolio — it evaluates positions you already hold after orders have filled. It works in a neat world where every trade is already settled and known.
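For readers who prefer code to sigma notation, here is a minimal sketch of that frozen-portfolio aggregation. It is an illustration only: the parameter names mirror the formula (M_i^+ and M_i^- as the long-side and short-side margin components per instrument, gamma_i as the intra-instrument offset, beta_ij as the cross-instrument coefficients), the cross term is read as a sum over distinct pairs, and none of the numbers come from Pascal’s engine.

```python
from itertools import combinations
from math import sqrt

def portfolio_margin(m_plus, m_minus, gamma, beta):
    """Frozen-portfolio margin per the formula above (toy illustration only)."""
    n = len(m_plus)
    # Per-instrument terms: (M_i^+)^2 + (M_i^-)^2 + 2 * gamma_i * M_i^+ * M_i^-
    total = sum(
        m_plus[i] ** 2 + m_minus[i] ** 2 + 2 * gamma[i] * m_plus[i] * m_minus[i]
        for i in range(n)
    )
    # Cross-instrument terms: 2 * beta_ij * (M_i^+ + M_i^-) * (M_j^+ + M_j^-)
    total += sum(
        2 * beta[(i, j)] * (m_plus[i] + m_minus[i]) * (m_plus[j] + m_minus[j])
        for i, j in combinations(range(n), 2)
    )
    return sqrt(max(total, 0.0))

# Two instruments that partially hedge each other (illustrative numbers only).
print(portfolio_margin(
    m_plus=[100.0, 0.0], m_minus=[0.0, 80.0],
    gamma=[0.0, 0.0], beta={(0, 1): -0.8},
))  # 60.0, well below the 180.0 you would charge position-by-position
```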
But reality doesn’t freeze, it evolves. A portfolio comes with limit orders, and each order is a conditional threat. They may become positions, or not. They may fill partially, in the worst possible sequence, right when someone with better latency front-runs your good intentions.
Market orders are trivial for margining: they either execute now or they don’t. We evaluate the portfolio as if the order executed, require the right margin, and reject it if you can’t cover it. Limit orders live in a twilight zone. They are maybe-positions: perhaps executed eventually, perhaps never. Possibilities. Quantum exposures.

Schrödinger’s PnL — not exactly a position, but not harmless either. They carry risk almost like positions, but only in the universes where they actually fill. And a real clearing layer has to model every universe simultaneously.
Computing risk across hundreds of unfilled orders — each potentially interacting with every other — is hard. Effectively NP-hard, because every limit order introduces a branching future. So every DeFi protocol just… ignores it. Solving this means answering the ugliest question nobody wants to touch:

What is the worst-case margin requirement
if any combination of my limit orders gets filled?

Not your current static portfolio, but your portfolio in every future timeline: any subset of orders filled, partially filled, or filled in the worst possible sequence while someone front-runs you into oblivion. That “maybe” carries real risk, even if it’s the kind of risk DeFi protocols don’t like to think about.
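Stated a bit more formally (our notation, not a formula from the protocol docs): if P is the current portfolio, O the set of resting limit orders, and M(·) the frozen-portfolio margin above, the question being asked is

$$ M_{\text{req}} = \max_{S \subseteq O} M\!\left(P \oplus S\right) $$

where P ⊕ S is the portfolio you would hold if exactly the orders in S filled, and partial fills only enlarge the set of admissible S. A pre-trade check then amounts to accepting a new order only if your collateral still covers this worst case.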

Pascal Protocol treats it for what it is: conditional exposure.
We’re the clearing layer; we clear the risk that exists, not just the risk that already happened.
TradFi quants calculate this with insanely complicated solvers and keep the methods a dark secret. Sometimes it’s heuristics; sometimes it’s brute force that melts hundreds of CPUs and nudges the planet’s temperature up.
We somehow have to do it on-chain: deterministically, in a manipulation-resistant way, and within a block gas budget. That’s why DeFi has been sweeping this problem under the rug for years. But at Pascal Protocol, we don’t get to ignore hard problems. We clear them too.

Why It’s So Hard

To solve the hard problem the right way, you first have to understand why it’s hard. The fastest way is to burn down the comforting illusions about “easy” fixes. At first glance, it seems obvious:
A limit order is just a virtual position. Pretend it’s filled, compute margin, done.
That only works in a toy world where orders are independent, fills are atomic, and prices graciously move one tick at a time. Reality declines to sign that SLA.
Let’s walk through the intuitive cases and watch each “simple” rule die in the wild. It starts simple; it ends in combinatorics hell.

Case 1: Order increases position → margin goes up

You’re long, you drop a bid below market. If it fills, you’re longer, exposure goes up, margin increases. So yeah, feel free to treat your order as a virtual position. No controversy.

Case 2: Order reduces position → margin goes down (maybe)

You’re long, you post a sell to flatten. If it fills, exposure shrinks. Should required margin go up? Nope. Should required margin drop now? Also nope. Until it fills, your current position still needs its buffer. Required margin can’t magically fall just because you promise to close later.

Takeaway: A reducing order doesn’t create additional margin requirements, but it also doesn’t buy you a discount in advance. So far so good. Updated rule of thumb:
Treat increasing orders as virtual positions, and ignore reducing orders.
Right?

Case 3: Portfolio is perfectly hedged → order breaks the hedge

Now the fun begins. Suppose your portfolio is beautifully hedged: long here, short there-perfectly offset. Your portfolio variance is near zero.

Place any new order, long or short, on either leg, and if it fills, your hedge is gone. Yes, even if it was a reducing order.
Risk spikes, even if exposure in one instrument goes down, because hedging is a portfolio property.
Lesson learned: Even orders that reduce notional can increase risk if they disturb a hedge. The last rule of thumb just died.
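To make the hedge-breaking effect concrete, here is a deliberately tiny two-asset example. It uses a plain volatility-and-correlation risk measure with an assumed 0.95 correlation; the model and the numbers are illustrative assumptions, not Pascal’s engine.

```python
from math import sqrt

def toy_risk(q_a, q_b, vol_a=0.05, vol_b=0.05, corr=0.95):
    """Toy two-asset portfolio risk: sqrt(q^T * Sigma * q)."""
    variance = (
        (q_a * vol_a) ** 2
        + (q_b * vol_b) ** 2
        + 2 * corr * (q_a * vol_a) * (q_b * vol_b)
    )
    return sqrt(max(variance, 0.0))

hedged     = toy_risk(q_a=100, q_b=-100)  # long A vs short B: nearly flat
after_fill = toy_risk(q_a=50,  q_b=-100)  # the "reducing" sell on A just filled
print(round(hedged, 2), round(after_fill, 2))  # 1.58 vs 2.74
```

Notional in A halves, yet risk jumps by roughly 70 percent, because the order tore a hole in the hedge.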
You might be tempted to fall back on heuristics:
Count orders that increase portfolio risk. Ignore orders that reduce portfolio risk.
Are we finally done?

Case 4: What Breaks Everything

So far, we’ve been looking at one order at a time. Now scale to reality: real portfolios aren’t one- or two-dimensional, and the limit orders you already have don’t live in isolation.
Imagine you’re a market maker with 45 open positions across 15 indexes and 300 resting limit orders. Some increase risk, some reduce it, some hedge each other, some anti-hedge each other. And now you add one more order.
How does the new order affect your entire portfolio? It might increase risk against some positions, reduce it against others.

But what about the other orders? The way this new order interacts with every position depends on which of your other resting orders end up filling.
To know the real effect of adding a single order, you’d have to re-evaluate risk for every subset of existing orders that could fill alongside it. That’s not “a few” scenarios. It’s now:
$$ 2^{300} \approx 2 \times 10^{90} $$
That’s “good luck brute-forcing roughly 20 billion times the number of atoms in the observable universe” territory. Breaking Bitcoin’s hash function and stealing all of Satoshi Nakamoto’s coins would be a rounding error compared to that, and we haven’t even touched partial fills.
That’s why the “just treat them as virtual positions” crowd doesn’t get far. The contribution of each limit order is non-linear, path-dependent, and interacts with every other order-reshaping the entire risk surface of your portfolio in ways that explode combinatorially.
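Here is the naive approach in code, just to show where it dies: re-run a frozen-portfolio margin function over every possible fill subset. The `apply_fills` helper and the flat 10% `toy_margin` are stand-ins invented for this sketch; any margin function, including the one sketched earlier, slots in the same way.

```python
from itertools import combinations

def apply_fills(positions, fills):
    """Merge hypothetical fills (instrument, signed qty) into net positions."""
    out = dict(positions)
    for instrument, qty in fills:
        out[instrument] = out.get(instrument, 0.0) + qty
    return out

def worst_case_margin_bruteforce(positions, orders, margin_fn):
    """Naive worst case: re-price margin for every subset of resting orders.

    Calls margin_fn 2**len(orders) times: fine for ~20 orders, hopeless for 300.
    """
    worst = margin_fn(positions)  # the empty fill set: nothing fills
    for k in range(1, len(orders) + 1):
        for filled in combinations(orders, k):
            worst = max(worst, margin_fn(apply_fills(positions, filled)))
    return worst

# Toy margin rule: 10% of gross notional, purely to make the example runnable.
toy_margin = lambda pos: 0.10 * sum(abs(q) for q in pos.values())
print(worst_case_margin_bruteforce(
    positions={"ETH": 10.0},
    orders=[("ETH", 2.0), ("ETH", -3.0), ("BTC", 1.0)],
    margin_fn=toy_margin,
))  # 1.3: the worst subset is "both buys fill, the sell doesn't"
```

At 300 resting orders this loop needs the 2^300 evaluations counted above; a real engine has to reach the same worst case without ever enumerating it.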
For isolated or cross-margin systems, you can cheat: treat orders independently, keep computation local to a single contract. Simple math, simple life.

And cheating also scales well; it just scales the error. You either overcharge (killing capital efficiency) or undercharge (courting cascade liquidations). Overcharge is stealth insolvency for market makers; undercharge is loud insolvency for everyone.
Neither is clearing. Pascal Protocol made a promise: capital efficiency with no compromise. That means “be correct under stress.”

That means getting the margin just right (not too much, not too little) even when you have hundreds of open orders interacting in non-obvious ways. Traders route flow to where capital is priced sanely; spreads compress to whoever clears risk right.

And it compounds: correct pricing begets tighter quotes, which beget more fills, which beget better price discovery. Clearing quality is market structure alpha.

Pascal's Baseline

When we built the Pascal Protocol margin engine, the checklist was brutal:
  1. Truth, every block. Required margin must reflect actual risk, including the futures that haven’t occurred yet. If your limit orders could make the portfolio unsafe in any branch of the multiverse, we price it in. If your order is filled, your position is closed, or your contract is settled — the required margin adapts instantly. No matter how complicated your portfolio is.
  2. No over-collateralization. We never double-tax hedges or charge twice for the same risk. We never charge for limit orders that strictly reduce worst-case risk.
  3. No cliff effects. Margin cannot snap from “all good” at 10:59:59 to “liquidate now” at 11:00:00 the instant an order fills. If it can, the model was lying a moment before and underpricing real portfolio risk; that’s how liquidation cascades are born. Margin changes smoothly, and fills don’t cause instant death spirals. We enforce Lipschitz continuity with respect to price movement: a one-tick event cannot change the requirement beyond the tick’s worst-case delta/gamma budget.
  4. Zero heuristics, no dark magic. Deterministic, manipulation-proof, transparent. The weirdest possible portfolio construction still gets exactly the margin it deserves. No “if it looks scary, let’s just add 30%.” Same portfolio → same answer. Splitting orders, relabeling, or order-ladder tricks won’t change the truth (a toy check of this invariance follows the list).
  5. Calculated and verifiable — fully on-chain. Instant. Transparent. Fast. Auditable. No off-chain provers, no extra witnesses. No “trust us.” If you can read EVM storage, you can audit anyone’s margin in real time. Cheap enough to check hundreds of orders for a few thousand gas.
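That fourth item is a property you can phrase as a test: splitting one resting order into two halves at the same price must leave the requirement unchanged. Below is a minimal sketch of such a check; `margin_fn` is a hypothetical stand-in interface for a worst-case margin engine, and `toy_worst_case_margin` is an invented toy used only to make the check runnable.

```python
def split_invariance_holds(margin_fn, positions, orders, index, tol=1e-9):
    """Check that splitting orders[index] into two half-size orders keeps margin equal."""
    instrument, price, qty = orders[index]
    split = (
        orders[:index]
        + [(instrument, price, qty / 2), (instrument, price, qty / 2)]
        + orders[index + 1:]
    )
    return abs(margin_fn(positions, orders) - margin_fn(positions, split)) <= tol

# Invented toy engine: 10% of gross positions plus the larger one-sided order total.
def toy_worst_case_margin(positions, orders):
    buys  = sum(q for _, _, q in orders if q > 0)
    sells = sum(-q for _, _, q in orders if q < 0)
    gross = sum(abs(q) for q in positions.values())
    return 0.10 * (gross + max(buys, sells))

print(split_invariance_holds(
    toy_worst_case_margin,
    positions={"ETH": 10.0},
    orders=[("ETH", 3000.0, 2.0), ("ETH", 3400.0, -1.0)],
    index=0,
))  # True: ladder tricks don't move the number
```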

Sound impossible? Good

That means you’ve internalized the problem correctly and see why conventional systems, even CEXes, give up before they even try. But you wouldn’t be reading this if we hadn’t made it real.

Pascal Protocol is live.

It’s clearing thousands of portfolios right now.
What we’ve built isn’t just another DEX.

It’s a clearinghouse with teeth. We don’t just price positions; we price possibilities. It’s a different way to think about futures, one where clearing is a first-class primitive.
What does all of that mean for traders? You can ladder deeply without being taxed for every resting “maybe”: post a 20‑level bid ladder under spot, a 15‑level ask ladder above, plus cross‑asset hedges, and your required margin equals the worst admissible net outcome of that set, not the absurd “sum of all maybes.”

You can’t game the system with clever order geometry, and you won’t get ambushed by discontinuities the moment liquidity shows up.
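As a back-of-the-envelope illustration of “worst admissible net outcome” versus “sum of all maybes”, take a single instrument with a flat margin rate on net exposure. The 10% rate, the sizes, and the one-instrument simplification are assumptions for the example, not the engine’s actual parameters.

```python
MARGIN_RATE = 0.10          # assumed flat rate on net exposure (illustrative)
position    = 5.0           # current net long
bid_ladder  = [0.5] * 20    # 20 resting buys of 0.5 each
ask_ladder  = [-0.5] * 15   # 15 resting sells of 0.5 each

# "Sum of all maybes": charge the position plus every resting order as if all filled.
sum_of_maybes = MARGIN_RATE * (
    abs(position) + sum(abs(q) for q in bid_ladder + ask_ladder)
)

# Worst admissible net outcome: in one instrument the extremes are
# "every bid fills" vs "every ask fills"; mixed fills only pull the net back toward flat.
worst_net = max(abs(position + sum(bid_ladder)), abs(position + sum(ask_ladder)))
worst_case_margin = MARGIN_RATE * worst_net

print(sum_of_maybes, worst_case_margin)  # 2.25 vs 1.5
```

Even in this one-asset toy, pricing the worst net outcome instead of stacking every “maybe” cuts the requirement by a third; the article’s claim is that the gap widens further once cross-asset hedges enter the picture.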
What does all of that mean for integrators? A single, auditable margin function that behaves sanely under stress and doesn’t turn into a blender at scale.
And in the next article, we’ll open the box. We’ll show exactly how the engine works: the full algorithm and the math you can audit.

You’ll see how we turned exponential combinatorics into deterministic margin that’s cheap to verify on-chain and impossible to spoof.

Stay tuned!