How machines will transform private capital markets | Oliver Gottschalg | Ep 83
Drawing on more than two decades of empirical research and extensive back-testing, Professor Oliver Gottschalg explains why algorithmic decision support can outperform “normal” human allocation, particularly in fund selection and secondaries pricing, without relying on better data or privileged access.
In this episode, Oliver sets out how machine learning can be applied to private markets and why they are structurally better suited to algorithmic allocation than public markets.
The big take
- Machine learning can materially improve PE outcomes. The simplest demonstration is re-weighting existing fund portfolios – and that’s before any change to portfolio construction.
- Prediction now matters more than explanation once decision-making crosses a complexity threshold
- Secondaries are meaningfully inefficient, creating scope for systematic pricing and portfolio construction
- Data does not need to be perfect to be useful, but models must be rigorously back-tested
- Human judgement still matters, but only as a downside governor, not a source of optimism or narrative bias
Who is Oliver Gottschalg, and why does his work matter?
Oliver Gottschalg is one of the most widely cited academics in private markets. He holds degrees from Karlsruhe, Georgia State University and INSEAD, teaches private equity and buyouts at HEC Paris, and directs the HEC Private Equity Certificate. Alongside his academic work, he founded Gottschalg Analytics, a data and analytics platform used by LPs and GPs to analyse risk, return drivers and manager skill in private equity.
His work sits at the intersection of academic rigour and real-world allocation, making him unusually well placed to assess what machine learning can and cannot do in private markets.
Can algorithms really outperform human allocators in private equity?
Oliver shows that algorithmic decision support can outperform a large proportion of real-world allocation decisions, especially where humans are asked to weigh dozens of interacting variables consistently over time.
Using conservative back-tests on US public pension data, he demonstrates that modest re-weighting within existing fund commitments, guided by machine learning predictions, would have produced billions of dollars of incremental value. Crucially, this improvement does not rely on better access, new strategies or hindsight.
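To give a rough sense of what such a re-weighting exercise looks like mechanically, the sketch below tilts an equal-weighted set of existing commitments towards funds with higher model scores and compares realised multiples. The fund names, scores, returns and tilt rule are all invented for illustration; this is not Oliver’s methodology or data.

```python
# Hypothetical sketch: modest ML-guided re-weighting of an existing fund portfolio.
# Fund names, scores and returns are invented for illustration only.
import numpy as np

# Existing commitments (equal-weighted) and their realised multiples (TVPI).
funds = ["Fund A", "Fund B", "Fund C", "Fund D"]
realised_tvpi = np.array([1.4, 2.1, 1.1, 1.8])

# Model score: predicted relative attractiveness at commitment time (higher = better).
model_score = np.array([0.2, 0.9, 0.1, 0.7])

# Baseline: what the LP actually did (equal weights across the same funds).
baseline_w = np.full(len(funds), 1 / len(funds))

# Re-weighting: same fund universe, weights tilted modestly towards higher scores.
# A tilt of 0 reproduces the baseline; small positive values shift capital gradually.
tilt = 0.5
tilted_w = baseline_w * (1 + tilt * (model_score - model_score.mean()))
tilted_w = np.clip(tilted_w, 0, None)
tilted_w /= tilted_w.sum()

print(f"Baseline portfolio TVPI: {baseline_w @ realised_tvpi:.2f}")
print(f"Tilted portfolio TVPI:   {tilted_w @ realised_tvpi:.2f}")
```

The point of the exercise is that the fund universe stays the same; only the sizing changes, which is why it makes a conservative back-test.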
Why private markets suit machine learning better than public markets
A common objection to back-tested strategies is that alpha disappears once deployed. Private markets are structurally different.
Private equity transactions are discrete, opaque and slow-moving. Trades are not immediately visible, and algorithmic activity does not automatically move prices in the same way as public markets. This means predictive models are less likely to be arbitraged away quickly, particularly in areas such as secondaries.
How machine learning changes secondaries pricing
One of the most compelling parts of the discussion focuses on LP-led secondaries.
Oliver argues that the secondary market is inefficient in a technical sense: prices paid for fund stakes correlate only weakly with the returns those stakes subsequently deliver.
Instead of relying on bottom-up portfolio company valuation, his approach asks different questions:
- How conservative or aggressive is a GP’s valuation policy?
- How much future value-creation capacity does the GP realistically have?
- Are team focus, strategy drift or asset mix likely to impair outcomes?
Machine learning allows these factors to be weighted simultaneously, creating a systematic approach to pricing and portfolio construction that could materially lower the cost of liquidity in private markets.
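To make the idea concrete, here is a minimal sketch of how such factors might be scored jointly by a model and compared against a buyer’s hurdle rate. The features, synthetic data, gradient-boosting choice and hurdle rate are assumptions for illustration, not the actual model or data discussed in the episode.

```python
# Illustrative sketch of systematic secondaries scoring: a model maps GP-level
# features (echoing the questions above) to an expected forward return, which is
# then compared with the buyer's required return for a fund stake.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500  # synthetic historical fund stakes

# Feature matrix: [valuation_conservatism, remaining_value_creation, strategy_drift]
X = rng.normal(size=(n, 3))
# Synthetic 'true' forward return: conservative marks and remaining value-creation
# capacity help, strategy drift hurts, plus noise.
y = 0.10 + 0.05 * X[:, 0] + 0.08 * X[:, 1] - 0.06 * X[:, 2] + rng.normal(0, 0.05, n)

model = GradientBoostingRegressor().fit(X, y)

# Price a new stake: predicted forward return vs. the buyer's hurdle rate.
stake_features = np.array([[0.8, 1.2, -0.3]])  # hypothetical GP profile
predicted_return = model.predict(stake_features)[0]
required_return = 0.12  # assumed hurdle rate
print(f"Predicted forward return: {predicted_return:.1%}")
print("Bid" if predicted_return >= required_return else "Pass")
```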
Imperfect data is not the real problem
Private equity data is incomplete, inconsistent and often delayed. Oliver is explicit about this. His point is that perfection is not the bar. The relevant question is whether imperfect data, treated consistently, still points investors towards relative outperformance.
After thousands of back-tests across different periods and market regimes, his conclusion is pragmatic: if a model survives conservative testing and structural change, it can be trusted as a decision support tool.
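A stylised example of the out-of-sample design this implies (not his actual tests): train only on earlier vintages, predict the next, and check whether model-preferred funds beat the vintage median. All data and the selection rule below are synthetic.

```python
# Minimal walk-forward back-test sketch: calibrate on earlier vintages, predict the
# next, and record whether model-preferred funds beat that vintage's median return.
# The data and the simple 'top-quartile feature' rule are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)
vintages = range(2000, 2016)

records = []  # (vintage, model feature observed at commitment, realised net return)
for v in vintages:
    for _ in range(30):
        feature = rng.normal()
        realised = 0.10 + 0.04 * feature + rng.normal(0, 0.08)
        records.append((v, feature, realised))

hit_rates = []
for cutoff in list(vintages)[5:]:  # require at least five vintages of history
    history = [f for v, f, _ in records if v < cutoff]
    test = [(f, r) for v, f, r in records if v == cutoff]
    threshold = np.quantile(history, 0.75)  # rule calibrated on history only
    selected = [r for f, r in test if f >= threshold]
    if selected:
        hit_rates.append(np.mean(selected) > np.median([r for _, r in test]))

print(f"Vintages where model-preferred funds beat the median: {np.mean(hit_rates):.0%}")
```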
The trade-off between transparency and predictive power
As models become more complex, interpretability declines. Oliver is candid about this “black box” problem. Once a model incorporates dozens of interacting features, causal explanation becomes impractical.
Trust is therefore built not through narrative clarity, but through:
- disciplined training and validation
- conservative assumptions
- repeated evidence that predictions remain robust across cycles
This represents a cultural shift for an industry accustomed to storytelling.
The role of humans in an algorithm-assisted future
Humans are not removed from the process, however. Instead, their role changes.
Humans should be able to override models only to reduce expected returns, never to inflate them. If there is information the model cannot yet see, such as team departures or governance issues, the human can step in. What the human should not do is override the model upward based on brand, relationships or confidence.
He likens this to advanced driver assistance systems: the machine does most of the work, while the human prevents catastrophic error.
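A minimal sketch of what such an asymmetric override could look like in practice; the function and the numbers are purely illustrative, not anything specified in the episode.

```python
# Sketch of an asymmetric override rule: a human view can only pull a model's
# expected-return estimate down (e.g. on news of team departures), never push it up.
# Function and parameter names are illustrative.
def apply_human_override(model_expected_return: float,
                         human_expected_return: float | None) -> float:
    """Return the final estimate: the model's view, capped by a more pessimistic human view."""
    if human_expected_return is None:
        return model_expected_return
    # Downward overrides are honoured; upward 'conviction' overrides are ignored.
    return min(model_expected_return, human_expected_return)

print(apply_human_override(0.18, 0.10))  # team departure known to the human -> 0.10
print(apply_human_override(0.18, 0.25))  # brand-driven optimism ignored -> 0.18
```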
What this means for LPs, GPs and the wider market
For LPs, this challenges the idea that superior judgement alone is a durable edge in fund selection. Discipline, calibration and governance may matter more.
For GPs and secondary managers, it raises uncomfortable questions about fee structures and defensibility as parts of the investment process become systematised.
At a market level, Oliver outlines a plausible path towards lower-cost, more scalable private market products, potentially expanding access without relying solely on distribution or regulatory change.
Why this episode matters
This is not a discussion about hype or distant futures. It is a grounded, evidence-led examination of what machine learning is already capable of in private equity, where it fits, and where human judgement remains essential.