Your weekly reviews might look fine — solid win rate, reasonable discipline scores — and you could still be bleeding capital from a strategy that’s been broken for two months. That’s what monthly reviews are designed to catch.
The monthly cadence operates at a different zoom level entirely. Instead of asking “did I execute well this week?”, you ask “is this strategy still working, and am I evolving as a trader?” These are fundamentally different questions, and they require a structured process — not a casual scroll through your trade history.
Weekly vs. Monthly: Two Different Jobs
The weekly trade review is an execution audit. You ask: Did I follow my rules? Did I size correctly? Did I exit at the right levels? Did I revenge trade after a loss? These are tactical questions with answers visible in single trades.
The monthly review is a strategy audit. With 30–60+ trades across four weeks, you finally have enough sample size to ask statistical questions: What is this setup’s actual expectancy? Is my edge stronger in trending markets than choppy ones? Am I improving month over month?
Prop firm evaluation periods — typically 30-day cycles — are built around this same logic. Firms judge traders on max drawdown, consistency, and risk-adjusted returns across a full month. They do this because one or two weeks is noise; a month starts to be signal.
Crucially, the monthly review is also when you make capital allocation decisions: double down on what’s working, reduce size on struggling strategies, or retire a setup entirely. Those decisions cannot — and should not — be made at the weekly level.
Section 1: Equity Curve Analysis
Start by plotting your daily P&L as a running balance for the month. Don’t rely on the final number — the shape of the curve tells you things the number hides.
Identify drawdown start and end dates. A drawdown starts at the most recent equity peak and ends when you recover that peak. Measure peak-to-trough depth as a percentage: if your account dropped from $41,200 to $39,800, that’s a -3.4% drawdown. Compare this against your stated monthly max drawdown target.
Look for regime changes in your own curve. A flat first two weeks followed by a sharp decline in week three often signals a market condition shift your strategy wasn’t built for — not a random bad run. Check whether your drawdown clusters coincide with specific dates (FOMC meetings, major earnings clusters, or index expiration weeks).
Target benchmark: if your maximum monthly drawdown exceeds 3x your average daily gain, your position sizing or strategy selection needs attention.
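The drawdown math and the benchmark check above can be sketched in a few lines. This is a minimal illustration with hypothetical daily balances (only the $41,200 peak / $39,800 trough pair comes from the example in the text), and it reads "average daily gain" as the mean of positive daily P&L, which is one reasonable interpretation of the rule of thumb:

```python
# Hypothetical daily closing balances for one month.
balances = [40000, 40350, 40600, 41200, 40700, 40100, 39800, 40200, 40900, 41250]

peak = balances[0]
max_dd_dollars = 0.0
max_dd_pct = 0.0
for bal in balances:
    peak = max(peak, bal)                 # running equity peak
    dd = peak - bal                       # current drawdown in dollars
    if dd > max_dd_dollars:
        max_dd_dollars = dd
        max_dd_pct = dd / peak * 100      # peak-to-trough depth as a percentage

# Benchmark check: does max drawdown exceed 3x the average daily gain?
# (Assumption: "average daily gain" = mean of the positive daily P&L values.)
daily_pnl = [b - a for a, b in zip(balances, balances[1:])]
gains = [g for g in daily_pnl if g > 0]
avg_daily_gain = sum(gains) / len(gains)

print(f"max drawdown: -{max_dd_pct:.1f}% (${max_dd_dollars:,.0f})")
if max_dd_dollars > 3 * avg_daily_gain:
    print("warning: drawdown exceeds 3x average daily gain")
```

With these numbers the drop from $41,200 to $39,800 produces the -3.4% drawdown from the text, and the $1,400 depth trips the 3x benchmark.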
Section 2: Best and Worst Trades — Process Score vs. Outcome Score
Pick your three best and three worst trades by outcome. Then score each one on process: did you follow your entry criteria, position sizing rules, and exit plan?
This separation is critical. A rule-following loser is not a mistake — it’s a valid data point about your edge. A rule-breaking winner is a warning sign — it teaches you the wrong lesson and incentivizes future undisciplined behavior.
Score each trade from 1–5 on process adherence. A trade where you sized correctly, entered on your criteria, and exited per your plan scores 5, even if it lost $400. A trade where you doubled your normal size because “it felt right” scores 1, even if it made $800.
Track your average process score monthly. If your process score is consistently above 4.0 but your P&L is negative, your rules need refinement. If your process score is below 3.0 but your P&L is positive, you’re relying on luck — a dangerous position. You can also analyze your trades professionally to build a deeper scoring framework.
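The monthly bookkeeping for process scores is simple enough to sketch. The trade list below is hypothetical; the two warning conditions are the ones described above:

```python
# Hypothetical month of trades: (net P&L in dollars, process score 1-5).
trades = [(-400, 5), (800, 1), (250, 4), (-120, 5), (300, 4), (-90, 3)]

avg_process = sum(score for _, score in trades) / len(trades)
net_pnl = sum(pnl for pnl, _ in trades)

# The two warning quadrants from the review framework:
if avg_process >= 4.0 and net_pnl < 0:
    verdict = "disciplined but losing: refine the rules themselves"
elif avg_process < 3.0 and net_pnl > 0:
    verdict = "profitable but undisciplined: results depend on luck"
else:
    verdict = "no structural warning this month"

print(f"avg process score {avg_process:.2f}, net P&L {net_pnl:+} USD, verdict: {verdict}")
```

Note that the first two trades are the article's own examples: the disciplined $400 loser scores 5, the oversized $800 winner scores 1.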
Section 3: Strategy-Level P&L Segmentation
This is the section most free templates skip — and it’s the most valuable one.
If you trade more than one setup, your total account P&L is a blended average that hides everything important. You need to segment performance by strategy and evaluate each independently.
Consider this scenario: a swing trader with a $40,000 account ends January with net +$600 (1.5% gain). Solid week-to-week results, 54% win rate. A monthly template reveals the truth: momentum breakouts on AAPL, NVDA, and SPY produced +$1,850 across 18 trades — 61% win rate, 1.8R average, clear positive expectancy. Meanwhile, VWAP mean-reversion day trades produced -$1,250 across 22 trades — 45% win rate, 0.6R average, negative expectancy.
Without segmentation, both strategies look like “the portfolio.” With it, the trader has hard evidence to cut or retool the VWAP setup before it destroys the breakout edge. A strategy needs roughly 30–100 trades before its win rate stabilizes into a meaningful signal — monthly cadence is often the first point where individual setups cross that threshold.
For each strategy, track: trade count, win rate, average R, expectancy (win rate × avg win - loss rate × avg loss), and total P&L contribution. Calculating these metrics correctly takes ten minutes but reveals what weeks of intuition cannot.
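The per-strategy metrics above can be computed directly from a tagged trade log. This sketch uses a hypothetical six-trade log (not the January example's actual trades) with R multiples recorded per trade, and applies the expectancy formula from the text:

```python
from collections import defaultdict

# Hypothetical trade log: (strategy tag, R multiple, net P&L in dollars).
trades = [
    ("breakout", 1.8, 450), ("breakout", -1.0, -250), ("breakout", 2.1, 520),
    ("vwap", -1.0, -180), ("vwap", 0.6, 110), ("vwap", -1.0, -200),
]

by_strategy = defaultdict(list)
for tag, r, pnl in trades:
    by_strategy[tag].append((r, pnl))

for tag, rows in by_strategy.items():
    n = len(rows)
    wins = [r for r, _ in rows if r > 0]
    losses = [-r for r, _ in rows if r <= 0]
    win_rate = len(wins) / n
    avg_win = sum(wins) / len(wins) if wins else 0.0
    avg_loss = sum(losses) / len(losses) if losses else 0.0
    # Expectancy per trade in R: win rate x avg win - loss rate x avg loss
    expectancy = win_rate * avg_win - (1 - win_rate) * avg_loss
    total = sum(pnl for _, pnl in rows)
    print(f"{tag}: n={n} win_rate={win_rate:.0%} "
          f"expectancy={expectancy:+.2f}R total P&L {total:+}")
```

Even on six trades the segmentation shows the blended account number hiding a positive-expectancy setup and a negative-expectancy one; in practice you would wait for the 30+ trade threshold before acting on the split.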
Section 4: Market Condition Audit
Label each of the month’s four or five trading weeks with a condition type: trending, choppy, or event-driven. Trending weeks have clean directional moves in SPY or your primary instruments. Choppy weeks have range-bound, noisy price action with frequent reversals. Event-driven weeks are dominated by catalysts — FOMC decisions, CPI prints, major earnings.
Once labeled, cross-reference each week’s P&L against its condition type. This reveals your true edge environment.
In the January example above, both of the trader’s losing weeks occurred during FOMC weeks. At the weekly level, those losses looked like execution errors or bad luck. At the monthly level, a clear pattern emerges: the momentum breakout strategy struggles in high-uncertainty, event-driven environments where follow-through is suppressed. The fix isn’t better execution — it’s reducing size or sitting out FOMC weeks entirely.
This market regime tracking insight is invisible unless you zoom out to the monthly level and deliberately label your conditions. Most traders never do this, which is why Barber and Odean (2000, Journal of Finance) found that retail traders who traded most actively underperformed the market by 6.5% annually — they were grinding away in unfavorable conditions without the data to know it.
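The label-and-cross-reference step amounts to grouping weekly P&L by condition type. A minimal sketch with hypothetical weekly labels and P&L:

```python
# Hypothetical weekly labels and P&L for one month.
weeks = [
    ("week 1", "trending",     520),
    ("week 2", "choppy",       -80),
    ("week 3", "event-driven", -430),   # e.g. an FOMC week
    ("week 4", "trending",     610),
]

# Sum P&L per condition label to reveal the edge environment.
pnl_by_condition = {}
for _, condition, pnl in weeks:
    pnl_by_condition[condition] = pnl_by_condition.get(condition, 0) + pnl

for condition, pnl in sorted(pnl_by_condition.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{condition:>13}: {pnl:+} USD")
```

Four data points per month is a small sample, which is why this table only becomes persuasive after you stack several months of labeled weeks.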
Section 5: Goal Tracking
Monthly goals should be concrete and measurable — not “be more disciplined.” Set targets before the month begins and score yourself at the end.
Example monthly targets for a swing trader:
- Minimum 1.5R average reward-to-risk across all closed trades
- Maximum 3% drawdown from the month’s opening balance
- Process score average of 4.0 or above
- No more than 40 total trades (overtrading guard)
At month-end, compare actuals against each target. A binary pass/fail score is useful — it forces honest self-assessment rather than rationalization. If you hit 3 of 4 targets, identify the specific reason you missed the fourth and write one corrective action for next month.
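The pass/fail comparison is mechanical, which is exactly the point: no room for rationalization. A sketch using the four example targets, with hypothetical month-end actuals:

```python
# Targets set before the month began: (direction, limit).
targets = {
    "avg_r":         ("min", 1.5),   # minimum 1.5R average
    "drawdown_pct":  ("max", 3.0),   # maximum 3% drawdown
    "process_score": ("min", 4.0),   # process score average >= 4.0
    "trade_count":   ("max", 40),    # overtrading guard
}
# Hypothetical month-end actuals.
actuals = {"avg_r": 1.7, "drawdown_pct": 3.4, "process_score": 4.2, "trade_count": 31}

results = {}
for name, (direction, limit) in targets.items():
    value = actuals[name]
    results[name] = value >= limit if direction == "min" else value <= limit

passed = sum(results.values())
missed = [name for name, ok in results.items() if not ok]
print(f"hit {passed} of {len(results)} targets; missed: {', '.join(missed) or 'none'}")
```

In this hypothetical month the trader hits 3 of 4 and the drawdown target is the miss, so the single corrective action for next month would address position sizing or strategy selection.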
This goal structure mirrors the framework prop firms use to evaluate traders across 30-day evaluation cycles: they care about drawdown, consistency, and risk management — not raw P&L. Adopting that standard in your own review makes you a better trader and, if you’re pursuing prop capital, prepares you for the prop firm challenge process.
Setting specific monthly trading goals with measurable benchmarks is what separates a review that drives improvement from one that just documents the past.
Key Takeaways
- Weekly reviews audit execution; monthly reviews validate strategy. Both are necessary, but they answer different questions at different levels of zoom.
- Strategy-level P&L segmentation — not blended account P&L — is the most important monthly metric. A $600 gain can hide a -$1,250 broken strategy dragging down a +$1,850 working one.
- Separate process scores from outcome scores. A disciplined loser is better data than a lucky winner, and your process score trend is a leading indicator of future performance.
- Label each week’s market condition (trending, choppy, event-driven) and cross-reference against your P&L to find which environments your edge actually works in.
- Set concrete monthly targets before the month starts — minimum R average, max drawdown %, process score threshold — and score yourself against them at month-end with no rationalization.
JournalPlus makes the monthly review template above actionable: it automatically segments P&L by strategy tag, plots your equity curve with drawdown annotations, and calculates per-setup expectancy without manual spreadsheet work. For traders running more than one setup, the strategy segmentation alone can pay for the $159 one-time cost in its first month of use.
People Also Ask
How is a monthly trading review different from a weekly review?
Weekly reviews are tactical — they focus on execution quality, rule adherence, and discipline. Monthly reviews are strategic — they validate whether your edge is intact, identify which strategies are working, and inform capital allocation decisions. The monthly cadence also provides enough trade volume for statistically meaningful analysis of individual setups.
How many trades do I need to evaluate a strategy’s win rate?
A minimum of 30–100 trades is needed before a win rate becomes statistically meaningful. The monthly cadence is often the earliest point where a single setup accumulates enough trades to evaluate independently.
What metrics should I track in a monthly trading review?
Key metrics include: per-strategy win rate, average R (reward-to-risk ratio), expectancy, max drawdown, peak-to-trough equity curve depth, and a process score that rates rule adherence independently of outcome.
What is a trade process score?
A process score rates each trade on whether you followed your rules — independent of outcome. A trade that followed all entry, sizing, and exit rules scores high even if it lost money. This separates skill from luck and identifies whether losses come from bad setups or bad execution.
How do I audit market conditions in my monthly review?
Label each trading week as trending, choppy, or event-driven (e.g., FOMC weeks, earnings clusters). Then cross-reference those labels against your weekly P&L to identify which market environments produce your best and worst results.