
If you are building stablecoin contracts (or any on-chain withdrawal mechanism), you will eventually run into a big problem: the queue. At low traffic a queue can "work", but when the user base grows 10x or 100x, that "working" queue turns into a gas-burning machine: transactions fail, latency stretches, and the dev team ends up running an operator job just to clear the backlog. Technically, this is a major weakness: it is slow, easier to attack, and it tends to need frequent hotfixes.
In this article, I’ll outline a practical design for a stablecoin backed by USDC/USDT, optimized for EVM. It uses two lanes: a fast lane for better UX, and a queue lane to protect liquidity. The key difference is that the queue lane does not run as a procedural queue (array/linked list/pointer). Instead, I turn the queue into a numeric comparison problem using a mathematical accumulation model, so claim/settle stays (O(1)).
A traditional queue forces the contract to iterate through each request, handle a pile of edge cases (such as partial payouts), and often requires a separate cleanup transaction. The result is (O(n)) gas growth (the deeper a request sits, the more it costs), many state branches that are easy to get wrong, and operational complexity (you end up relying on keepers/operators). My principle: don't optimize the queue by writing more queue-management code. Change the model so the queue disappears from the hot path.
The redemption design here has two lanes. The first lane is instant redemption: immediate payout for good UX, but capped at 0.5% of total supply per day so the system can't be drained in a bank run.
You can think of this lane as a rate limit tied to the system’s scale: the larger the supply, the larger the “reasonable” fast-lane liquidity. Implementation-wise, you only need a day window by timestamp: dayId = floor(block.timestamp / 1 days), a daily cap instantCap = totalSupply * 0.005, and an accumulator instantRedeemedToday[dayId].
If a request fits within the cap and the oracle is healthy, it is paid instantly. If it exceeds the cap, it automatically goes into the queue.
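The cap logic above can be sketched in a few lines. This is a Python model for readability, not contract code; the names (`InstantLane`, `try_instant`) are my own, and it mirrors the dayId / instantCap / instantRedeemedToday bookkeeping described above.

```python
# Model of the instant-lane rate limit: a per-day counter keyed by
# dayId = floor(timestamp / 1 day), capped at 0.5% of total supply.
DAY = 86_400   # seconds per day
CAP_BPS = 50   # 0.5% expressed in basis points

class InstantLane:
    def __init__(self, total_supply: int):
        self.total_supply = total_supply
        self.instant_redeemed_today: dict[int, int] = {}

    def day_id(self, timestamp: int) -> int:
        # dayId = floor(block.timestamp / 1 days)
        return timestamp // DAY

    def try_instant(self, amount: int, timestamp: int) -> bool:
        day = self.day_id(timestamp)
        cap = self.total_supply * CAP_BPS // 10_000  # instantCap = totalSupply * 0.005
        used = self.instant_redeemed_today.get(day, 0)
        if used + amount > cap:
            return False  # over the daily cap: route to the queue lane
        self.instant_redeemed_today[day] = used + amount
        return True

lane = InstantLane(total_supply=1_000_000)   # daily cap = 5_000
assert lane.try_instant(4_000, timestamp=0)      # fits under the cap
assert not lane.try_instant(2_000, timestamp=0)  # would exceed it -> queued
assert lane.try_instant(2_000, timestamp=DAY)    # new day, counter resets
```

One design note: keying the accumulator by `dayId` means old days' counters simply become irrelevant, so nothing ever needs to be reset or cleaned up.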
The second lane is queue redemption for the portion beyond the cap. This lane protects liquidity and lets you process disbursements in batches as the vault releases liquidity. But I don’t “manage a queue”. I turn it into a comparison between two numbers. The intuition is simple: treat requests as “segments” on a cumulative number line; settlement is just checking how far capital has flowed on that line.
To do this, we define request (i) with size (r_i), and disbursement (j) (from the vault into the contract) with size (d_j). We then have two accumulation functions: cumulative requested (R(n) = \sum_{i=1}^{n} r_i) and cumulative disbursed (D(m) = \sum_{j=1}^{m} d_j).
On-chain, the minimal hot-path state is just two accumulators: totalRequested (\approx R(n)) and totalDisbursed (\approx D(current)). When creating request (n), I snapshot the cumulative total at that moment: (S_n = R(n-1)), the total requested by all earlier requests.
So request (n) occupies a "liquidity window" on the number line: the half-open interval ([S_n,\, S_n + r_n)).
The key point is that (S_n) increases with request creation order, so FIFO becomes a property of the model rather than something you "maintain" with pointers. From here, claim/settle becomes a simple comparison. When a user claims request (n), we compare totalDisbursed against the snapshot: if (totalDisbursed \ge S_n + r_n), the window is fully covered; if (S_n < totalDisbursed < S_n + r_n), it is covered only up to (\Delta = totalDisbursed - S_n); otherwise nothing is claimable yet.
In this setup, the first case is a full claim, the second case is a partial claim, and (\Delta) is the amount of liquidity that has “covered” the request window. To make it idempotent and prevent double-claiming, each request stores claimedSoFar, and the execution formula is claimable = min(covered, amount) - claimedSoFar (floored at 0). The result is no loops, no queue clearing, and no "to claim request #90 you must process 1–89 first."
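Here is a minimal Python model of that claim math, assuming the snapshot convention (S_n = R(n-1)) from above. The class and field names are illustrative, not the contract's API.

```python
# O(1) claim/settle: each request is a window [S_n, S_n + r_n) on the
# cumulative number line; claiming is a comparison, never a loop.
class QueueLane:
    def __init__(self):
        self.total_requested = 0   # R(n)
        self.total_disbursed = 0   # D(current)
        self.requests: dict[int, dict] = {}

    def create_request(self, req_id: int, amount: int):
        # Snapshot S_n = totalRequested BEFORE this request is added,
        # so request n occupies [S_n, S_n + r_n).
        self.requests[req_id] = {"snapshot": self.total_requested,
                                 "amount": amount, "claimed": 0}
        self.total_requested += amount

    def disburse(self, amount: int):
        # Only called when collateral actually arrives from the vault.
        self.total_disbursed += amount

    def claimable(self, req_id: int) -> int:
        r = self.requests[req_id]
        delta = self.total_disbursed - r["snapshot"]   # liquidity past S_n
        covered = max(0, min(delta, r["amount"]))      # clamp to the window
        return covered - r["claimed"]                  # idempotent

    def claim(self, req_id: int) -> int:
        pay = self.claimable(req_id)
        self.requests[req_id]["claimed"] += pay
        return pay

q = QueueLane()
q.create_request(1, 100)   # window [0, 100)
q.create_request(2, 50)    # window [100, 150)
q.disburse(120)
assert q.claim(1) == 100   # fully covered
assert q.claim(2) == 20    # partially covered
assert q.claim(2) == 0     # idempotent: no double-claim
q.disburse(30)
assert q.claim(2) == 30    # remainder now covered
```

Note that request #2 can claim its partial 20 without anyone touching request #1: there is no iteration and no ordering dependency in the hot path.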
Operationally, this EVM-first design makes two decisions to keep accounting clean and audit-friendly.
First, burn the stablecoin immediately when the redemption request is created. The user calls redeem(amount, preferInstant) and the contract burns right away; if the instant conditions are met, it pays collateral immediately; otherwise it creates a queued request with snapshot (S_n). Burning immediately makes supply reflect redemption demand, which matters because supply is used to compute the 0.5%/day cap.
Second, liquidity comes from the vault: totalDisbursed only increases when collateral actually arrives from the vault into the contract. The critical rule is to never increase totalDisbursed unless collateral has been received, to avoid “accounting-only disbursements” that could overpay the system’s real liquidity.
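A toy model of these two rules together, with hypothetical names (the `System` class is a stand-in for the contract, not its real interface):

```python
# Rule 1: burn on request creation. Rule 2: totalDisbursed moves only
# when collateral actually arrives from the vault.
class System:
    def __init__(self, supply: int):
        self.supply = supply        # stablecoin total supply
        self.collateral = 0         # collateral held by the contract
        self.total_requested = 0
        self.total_disbursed = 0
        self.queue = []             # (snapshot S_n, amount) per queued request

    def redeem_queued(self, amount: int):
        self.supply -= amount       # burn right away, before any payout
        self.queue.append((self.total_requested, amount))  # snapshot S_n
        self.total_requested += amount

    def vault_disburse(self, amount: int):
        self.collateral += amount       # collateral actually received...
        self.total_disbursed += amount  # ...and only then does the counter move

s = System(supply=1_000_000)
s.redeem_queued(10_000)
assert s.supply == 990_000          # supply already reflects redemption demand
assert s.total_disbursed == 0       # no accounting-only disbursement
s.vault_disburse(10_000)
assert s.total_disbursed == s.collateral == 10_000
```

The invariant worth internalizing here is that totalDisbursed is always backed 1:1 by collateral the contract has actually received, so the claim math from the previous section can never promise more than the contract holds.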
Finally, you need a defensive layer for depeg scenarios. If you’re backed by USDC/USDT, your peg risk is tied to the collateral. You can’t “beat depegs” with code, but you can slow down bank-run dynamics and protect liquidity.
Policy-wise: if the oracle price drops below 0.99, disable mint and instant redemption, while the queue still accepts requests so the system doesn't fully freeze; claim/disburse runs behind guards and fails closed when the oracle is stale or down. For the oracle, you need at minimum a staleness guard (check updatedAt) and fail-closed behavior on abnormal data.
If you want the contract to be "battle-tested", don't rely on vibe coding. Lock the invariants and write property tests/fuzzers for them: totalRequested and totalDisbursed must be monotonically increasing; snapshots must increase in request-creation order; per request, claimedSoFar must not exceed amount; total payouts must not exceed totalDisbursed; and the daily cap must never be breached. At that point, the queue is no longer a "data structure problem". It becomes a question of "how far capital has flowed": a numerical problem.
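As a taste of what such property tests look like, here is a toy random-walk fuzzer over a small re-implementation of the queue model (not the production contract), checking the invariants after every step:

```python
import random

def fuzz_invariants(steps: int = 2_000, seed: int = 42) -> bool:
    rng = random.Random(seed)
    total_requested = total_disbursed = paid_out = 0
    requests = []  # mutable [snapshot S_n, amount r_n, claimedSoFar]
    for _ in range(steps):
        op = rng.choice(["request", "disburse", "claim"])
        if op == "request":
            amt = rng.randint(1, 1_000)
            requests.append([total_requested, amt, 0])  # snapshot before adding
            total_requested += amt
        elif op == "disburse":
            total_disbursed += rng.randint(1, 1_000)
        elif requests:
            r = rng.choice(requests)
            snap, amt, claimed = r
            covered = max(0, min(total_disbursed - snap, amt))
            paid_out += covered - claimed   # claimable = covered - claimedSoFar
            r[2] = covered
        # Invariants, checked after every operation:
        assert all(0 <= c <= a for _, a, c in requests)  # claimedSoFar <= amount
        assert paid_out <= total_disbursed               # payouts never exceed inflow
    snaps = [s for s, _, _ in requests]
    assert snaps == sorted(snaps)                        # FIFO snapshots are monotonic
    return True

assert fuzz_invariants()
```

In practice you would run the same properties against the real contract with a fuzzing framework rather than a Python model, but the invariants themselves are identical, and that is the payoff of the accumulator design: the things you must prove are statements about two numbers, not about a data structure.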