Most LPs will tell you the hardest part of private markets isn't picking managers. It's figuring out how much to commit, and when.
That's the pacing problem. And if you're allocating to private equity, venture, real estate, or any other closed-end fund structure, you've dealt with it whether you called it that or not. You commit capital today, but you don't control when it gets called or when it comes back. The GP does. Your job is to make sure you're not sitting on too much cash earning nothing, or worse, scrambling for liquidity when three capital calls hit in the same quarter.
A pacing model is the tool LPs use to plan for this. It forecasts future cash flows (contributions out, distributions back in) and projects how your actual allocation to private markets evolves over time. The goal is to figure out how much you should commit each year so that your portfolio hits and holds a target allocation, say 20% to PE, without wildly overshooting or undershooting along the way.
Why it matters
In public markets, hitting a target allocation is straightforward. You want 20% in equities, you buy equities. Done. In private markets, it doesn't work like that. The capital you commit isn't deployed all at once. A GP might call 25% in year one, 30% in year two, and call down the rest in small increments over the following three to four years. Meanwhile, distributions from older funds are pulling your actual allocation down. The result: your "committed" capital and your "at work" capital are two very different numbers, and the gap between them creates real planning headaches.
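To put numbers on that gap, here's a toy Python calculation using roughly that call pace on a hypothetical $10M commitment. The year-three-through-five split is assumed, and distributions are ignored, so the real picture would be messier still.

```python
# Hypothetical $10M commitment; the year 3-5 call split is assumed.
commitment = 10_000_000
annual_calls = [0.25, 0.30, 0.20, 0.15, 0.10]  # fraction of commitment called per year

called = 0.0
for year, pct in enumerate(annual_calls, start=1):
    called += pct * commitment
    print(f"Year {year}: {called / commitment:.0%} of committed capital is at work")
```

Even in this clean scenario, committed capital doesn't fully become at-work capital until year five.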
If you overcommit, you risk being overallocated when markets turn and your liquid portfolio shrinks. If you undercommit, you miss vintages, sacrifice diversification, and underperform your target. Skipping even a single commitment year can create gaps in your program that take a decade to smooth out.
Pacing models exist to thread this needle.
The Takahashi-Alexander model
The most widely used pacing model traces back to 2001, when Dean Takahashi and Seth Alexander at the Yale Investments Office published a paper called "Illiquid Alternative Asset Fund Modeling." The approach, now commonly called the Takahashi-Alexander model or just "the Yale model," became the industry standard.
The Yale model is elegant in its simplicity. It uses six parameters to project a fund's contributions, distributions, and NAV over its lifetime: a contribution rate, a "bow" factor that shapes the distribution curve, an annual growth rate, a yield, fund life expectancy, and the initial commitment amount. Contributions are modeled as a percentage of unfunded capital, declining over time as the fund draws down. Distributions accelerate as the fund matures, shaped by that bow factor, which you can think of as a proxy for fund duration. A higher bow means the fund holds assets longer before distributing.
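To make those mechanics concrete, here's a minimal Python sketch of the model's core equations. One simplification: it applies a single flat contribution rate to unfunded capital every year, where the original paper allows a year-by-year rate schedule. All parameter values are illustrative.

```python
# Minimal sketch of the Takahashi-Alexander (Yale) model, annual periods.
# Simplification: one flat contribution rate; the paper allows a schedule.

def yale_model(commitment, rate_of_contribution, growth, yield_, life, bow):
    """Project yearly contributions, distributions, and NAV for one fund."""
    paid_in, nav = 0.0, 0.0
    rows = []
    for t in range(1, life + 1):
        # Contributions: a fraction of the remaining unfunded commitment.
        contribution = rate_of_contribution * (commitment - paid_in)
        paid_in += contribution

        # The rate of distribution rises with fund age, shaped by the bow;
        # the yield acts as a floor in the early years.
        rate_of_distribution = max(yield_, (t / life) ** bow)

        grown_nav = nav * (1 + growth)
        distribution = rate_of_distribution * grown_nav
        nav = grown_nav + contribution - distribution
        rows.append((t, contribution, distribution, nav))
    return rows

# Illustrative inputs: $10M commitment, 25% annual draw on unfunded capital,
# 12% growth, 0% yield, 12-year life, bow of 2.5.
for t, c, d, nav in yale_model(10_000_000, 0.25, 0.12, 0.0, 12, 2.5):
    print(f"Year {t:2d}: called {c:>12,.0f}  distributed {d:>12,.0f}  NAV {nav:>12,.0f}")
```

Run it and the J-curve falls out: contributions front-loaded, distributions back-loaded, NAV building through the middle years and winding down as the bow-shaped distribution rate approaches 100% at end of life.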
The beauty of the Yale model is that it's intuitive. You can set it up in a spreadsheet, tweak assumptions, and get a reasonable picture of how a portfolio might evolve. That's why, over twenty years later, allocators from pensions to family offices still rely on it.
Beyond Yale
The Yale model isn't the only game in town, though it's the most common starting point.
Rules-based pacing is a simpler approach where an investor commits a fixed percentage of their portfolio to private markets each year. For example, committing 3% of total portfolio value annually to buyout until you reach a 15% target. It's easy to implement and maintain, but it doesn't account for the actual cash flow dynamics of your existing portfolio.
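A toy version of that rule, with every figure assumed, including a deliberately crude stand-in for the cash flow dynamics a real model would capture:

```python
# Toy rules-based pacing: commit 3% of portfolio value per year until private
# NAV hits a 15% target. All figures assumed; the flat portfolio value and the
# 40% "NAV conversion" factor are crude stand-ins for real cash flow dynamics.
portfolio_value = 1_000_000_000  # assumed $1B portfolio, held flat
private_nav = 0.0
target = 0.15

for year in range(1, 16):
    commit = 0.03 * portfolio_value if private_nav / portfolio_value < target else 0.0
    private_nav += 0.40 * commit  # crude: ~40% of each commitment persists as NAV
    print(f"Year {year:2d}: committed {commit:,.0f}, private share {private_nav / portfolio_value:.1%}")
```

That hard-coded 40% factor is precisely the shortcut a cash-flow-aware model like Takahashi-Alexander replaces.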
Stochastic models introduce probability distributions into the mix. Instead of producing a single deterministic forecast, they generate a range of outcomes with associated probabilities. You get a 10th percentile scenario, a 50th, a 90th. That range is useful for stress testing and understanding the uncertainty inherent in any cash flow projection.
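Here's a sketch of the idea, reusing the yale_model() function from the snippet above and drawing the growth rate from an assumed normal distribution; everything else is held fixed for simplicity.

```python
# Stochastic wrapper around the deterministic projection: randomize the growth
# assumption, re-run the model, and read off percentile outcomes. Assumes the
# yale_model() function defined earlier; distribution parameters are assumed.
import random

def year5_nav_percentiles(n_paths=5_000):
    navs = []
    for _ in range(n_paths):
        growth = random.gauss(0.12, 0.08)  # uncertain annual growth rate
        rows = yale_model(10_000_000, 0.25, growth, 0.0, 12, 2.5)
        navs.append(rows[4][3])  # NAV at the end of year 5
    navs.sort()
    return {p: navs[int(p / 100 * (n_paths - 1))] for p in (10, 50, 90)}

print(year5_nav_percentiles())
```

Real stochastic models randomize more than one input and correlate draws across funds, but the percentile-path idea is the same.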
More recently, some firms have developed their own models that incorporate macroeconomic variables, strategy-specific assumptions, and larger historical datasets spanning thousands of funds.
Where they fall short
For all their usefulness, pacing models have real limitations that LPs should understand.
They're only as good as their assumptions. The Yale model requires you to set a growth rate, a contribution pace, a bow factor. Get those wrong and the output is misleading. As one HBS research team put it, the model's sensitivity to input assumptions can create overconfidence in forecasts, which leads to suboptimal portfolio decisions.
They produce point estimates. The standard Yale model gives you one answer. Not a range, not a probability distribution. One line on a chart. That single output obscures how wide the actual dispersion of outcomes can be, especially across strategies. A buyout fund and a venture fund of the same commitment size produce dramatically different cash flow profiles.
They don't reflect your actual managers. Most calibrated pacing models use average historical behavior for a given strategy and vintage. They don't capture the idiosyncratic tendencies of your specific GPs. If your buyout manager calls capital faster than average or your venture manager takes 14 years to fully distribute, the model won't reflect that unless you manually adjust.
They live in spreadsheets. This is maybe the most practical problem. At most family offices and smaller institutional LPs, pacing models are Excel files maintained by one or two people. They're disconnected from the actual portfolio data, from the capital call notices coming in, from the quarterly reports being processed. Every time you want to update the model, someone has to manually pull numbers from a dozen different sources and plug them back in. That's not a modeling problem. That's an infrastructure problem.
They're static. Run the model in January, and by March the assumptions may already be stale. Markets move. GPs accelerate or slow their deployment. A fund that was supposed to be fully called by year five is still sitting at 60% drawn in year seven. Traditional pacing models don't auto-correct for this drift unless someone goes in and re-runs the analysis.
What good looks like
The best pacing programs we've seen combine a solid quantitative model with tight integration into the LP's actual workflow. The model should pull from live portfolio data, not a static snapshot from last quarter. It should update as capital calls and distributions flow in. And it should sit alongside the rest of the LP's investment data, not in a standalone file that only one analyst knows how to operate.
That's part of what we're building into Sonar. Pacing isn't a separate exercise from diligence and monitoring. It's connected to everything: the commitments you're tracking, the capital calls you're processing, the quarterly NAVs you're normalizing. When those data flows are unified, pacing becomes a living model rather than a quarterly chore.
The industry has relied on the same basic framework since 2001. The math hasn't needed a radical overhaul. What's needed is better infrastructure around it.