Framework
Execution Context Matters
The main case study describes a live algorithmic trading program rather than a backtest. That distinction is foundational because live execution introduces slippage, fees, latency, drawdowns, and operational pressure that do not show up the same way in theoretical testing.
Historical Window
The results are tied to a specific 18-month period and should be evaluated as one closed historical record rather than a permanent expected baseline.
Operating Environment
Market regime, liquidity conditions, and implementation constraints all shape real-world outcomes. That context should travel with the return figures.
Measurement
What The Published Metrics Are Trying To Show
The site uses headline returns, monthly tables, drawdown figures, and trade counts together so the record is not reduced to one top-line number. That is a healthier methodology than quoting return alone.
- Total account growth gives scale.
- Monthly breakdowns give sequence and dispersion.
- Trade count indicates sample size and operational depth, so results are less likely to rest on a handful of trades.
- Drawdown and losing months reveal stress periods.
- Fees disclose real friction against gross performance.
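Taken together, these metrics can all be derived from one monthly net-return series. A minimal sketch, using hypothetical numbers rather than the site's actual figures, shows how total growth, maximum drawdown, and losing-month count come out of the same data:

```python
# Illustrative sketch with assumed data: deriving the kinds of metrics
# the site publishes from a single monthly net-return series.
monthly_returns = [0.04, -0.02, 0.06, 0.01, -0.05, 0.08]  # hypothetical

# Total account growth: compound the monthly returns.
growth = 1.0
for r in monthly_returns:
    growth *= 1.0 + r
total_return = growth - 1.0

# Max drawdown: largest peak-to-trough decline of the equity curve.
equity, peak, max_dd = 1.0, 1.0, 0.0
for r in monthly_returns:
    equity *= 1.0 + r
    peak = max(peak, equity)
    max_dd = max(max_dd, (peak - equity) / peak)

# Losing months reveal dispersion that a top-line number hides.
losing_months = sum(1 for r in monthly_returns if r < 0)

print(f"total return: {total_return:.2%}")
print(f"max drawdown: {max_dd:.2%}")
print(f"losing months: {losing_months}")
```

The point is not the specific values but that each published metric answers a different question about the same record.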
For credibility, the site now pairs return claims with verification and risk pages so the reader can inspect the supporting logic instead of only seeing the strongest data point.
Economics
Fee Treatment And High-Water Mark Context
The case study explains that trading fees were real and that recovery periods were handled on a high-water mark basis. That is an important part of methodology because it shows that the record is being framed in economic terms, not just signal terms.
Trading Costs
The page references more than $788,000 in fees across the period, which signals that the public record is intended to reflect real net friction rather than idealized, cost-free execution.
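A small arithmetic sketch makes the stakes of fee disclosure concrete. The capital and gross-profit figures below are assumptions for illustration; only the fee order of magnitude comes from the page:

```python
# Hypothetical arithmetic: the same gross result reads very differently
# once real costs are netted out. Capital and gross PnL are assumed.
starting_capital = 5_000_000.0   # assumed figure, not from the site
gross_pnl = 2_500_000.0          # assumed gross trading profit
fees = 788_000.0                 # order of magnitude cited on the page

net_pnl = gross_pnl - fees
gross_return = gross_pnl / starting_capital
net_return = net_pnl / starting_capital

print(f"gross: {gross_return:.1%}, net: {net_return:.1%}")
```

Under these assumed numbers, roughly a third of the gross result is consumed by fees, which is exactly the friction a gross-only record would hide.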
Recovery Logic
The high-water mark note matters because it specifies how losses and subsequent recovery were treated in the operating model: performance-based charges apply only to gains above the prior equity peak, so climbing back to an old high is not counted as fresh profit.
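The mechanics can be sketched in a few lines. This is assumed, generic high-water-mark accounting with a hypothetical 20% fee rate, not the program's actual fee schedule:

```python
# Minimal sketch of high-water-mark accounting (generic mechanics with
# an assumed 20% rate, not the program's actual fee schedule).
def hwm_fee(equity_path, fee_rate=0.20):
    """Return total performance fees accrued along an equity path."""
    hwm = equity_path[0]
    total_fee = 0.0
    for equity in equity_path[1:]:
        if equity > hwm:
            total_fee += fee_rate * (equity - hwm)  # fee only on new highs
            hwm = equity
    return total_fee

# Drawdown and full recovery to the prior peak: the gain above 100 is
# charged once, and the round trip back to 120 generates no new fee.
print(hwm_fee([100.0, 120.0, 90.0, 120.0]))
```

This is why the note addresses recovery periods directly: without a high-water mark, the same ground could be charged for twice.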