Welcome to DriveGTO

DriveGTO is a No-Limit Hold’em GTO solver built on a GPU-accelerated CFR+ engine. You define a heads-up post-flop scenario — a board, both players’ ranges, the pot, the effective stack, and an action tree — and the solver computes a Nash-equilibrium strategy: how often each combo in your range should bet, raise, call, or fold at every node in the tree, plus the EV of each action.

Once the solver finishes, you explore the result through a strategy grid (per-combo frequencies and EV across the full 13×13 starting-hand matrix), an equity chart, and a navigable game tree.

This manual is divided into two parts:

  • Part I — The Solver. The main DriveGTO application. Setting up scenarios, running solves, reading the results, advanced features like node locking. This is the bulk of the manual.
  • Part II — The GTO Trainer. A companion application that drills pre-solved spots from packs, with feedback and a progress dashboard. Most users use both: the solver to study specific spots, the trainer to drill the patterns into muscle memory.

If you’re brand new, the recommended first read is:

  1. The Main Window — what you’re looking at when the app opens.
  2. Setting Up Your First Solve — the Builder workflow end-to-end.
  3. Reading the Results — Example Walkthrough — concrete spot, concrete output, what every number means.
  4. Understanding Solver Output (in Part II) — the theory: indifference principle, EV vs. frequency, why mixed strategies exist.

Everything else can be read on demand.

Part I — The Solver

The Main Window

When you launch DriveGTO, the main window opens. This is your workspace for every solve: setup happens in dialogs launched from here, and once a solve completes, every result panel in this window populates with the output.

[ IMAGE PLACEHOLDER ] Main window — full screenshot of DriveGTOView with a completed solve loaded. Annotate the six regions: (1) Actions panel top-left with AI Solve button, (2) Overview/Equity chart middle-left, (3) Hands grid bottom-left, (4) Strategy grid right side spanning full height, (5) Game tree breadcrumb across the bottom, (6) toolbar/menu strip at the top. Filename: images/main-window-annotated.png

The window is a two-column layout. The left column (~60% of width) holds the solve results in three stacked panels. The right column (~40%) holds the strategy grid. A horizontally-scrolling game tree breadcrumb runs across the bottom.

The six regions

1. Actions panel (top-left)

Shows what actions are available at the currently-selected node in the game tree. The node selection is controlled by the breadcrumb at the bottom of the window.

For each available action, the panel shows:

  • A colored chip (red for fold, green for check/call, amber for bet, dark red for raise, purple for all-in)
  • The action label (e.g. “Bet 33%”, “Raise 50%”, “All-in”)
  • The aggregate frequency of that action across hero’s entire range at this node

The big red AI Solve button in the top-right of this panel is your primary control — it starts (or restarts) the solver against the current setup.

Below the action chips you’ll see a horizontal action distribution bar: a stacked bar showing the proportions of all actions across the range. Bet 33% might take 40% of the bar, check 50%, all-in 10%. Useful for “what is hero’s strategy mostly doing here?” at a glance.

2. Overview / Equity Chart (middle-left)

Two tabs:

Overview tab:

  • Community cards rendered as card images (3 on flop, 4 on turn, 5 on river)
  • Pot size in bb (big blinds) — labelled clearly so you don’t confuse it with chip count
  • Effective pot for the street (the pot at the start of this node’s street)
  • Pot odds % — the price you’d be getting on a call if you faced the current bet
  • Both players’ positions and remaining stacks
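The Pot odds % figure is the standard break-even price on a call. As a quick sketch (the function name is illustrative, not part of the app):

```python
def pot_odds_pct(pot_with_bet: float, call: float) -> float:
    """Equity needed to break even on a call, as a percentage.
    `pot_with_bet` is the pot including the opponent's bet."""
    return 100.0 * call / (pot_with_bet + call)

# Example: 6 bb pot, opponent bets 4.5 bb (75% pot).
# Pot with bet = 10.5 bb; calling 4.5 bb into a final 15 bb pot = 30%.
print(pot_odds_pct(10.5, 4.5))  # 30.0
```

So facing a 75% pot bet you need about 30% equity to break even on the call.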

Equity Chart tab:

A line chart showing each player’s hot-and-cold equity binned by hand strength. The X-axis is “hand strength rank” (best hand on the left, worst on the right); two lines: IP equity and OOP equity at the current node. You can click the legend to toggle either line, and zoom/pan to inspect a specific region of the range.

[ IMAGE PLACEHOLDER ] Equity chart on a flop where IP has a range advantage. Show the IP line above OOP across most of the strength axis. Filename: images/equity-chart-example.png

This view is most useful at the root of a flop solve — it tells you “who has the equity advantage and how big is it” before you start digging into specific combos. A range with a 5-percentage-point equity edge plays very differently from one at 50/50.

3. Hands grid (bottom-left)

A 4×3 grid of specific hand combinations. When you click a cell in the strategy grid (right side), this grid populates with up to 12 specific combos for that hand category, plus per-combo action breakdowns.

For example, if you click “AKs” in the strategy grid, the Hands grid shows:

  • AhKh, AdKd, AcKc, AsKs (the four AKs combos that exist on a board that doesn’t block any of them)
  • For each combo: a set of small action bars or numeric frequencies showing what GTO does with that specific suited combo

This lets you see suit-specific deviations. AhKh on a heart-heavy flop might bet much more often than AdKd because of the flush draw blocker. The strategy grid shows the average across the category; the Hands grid shows the per-combo detail.

[ IMAGE PLACEHOLDER ] Hands grid populated with AKs combos showing one combo with stronger bet frequency due to backdoor flush draw. Filename: images/hands-grid-aks.png

4. Strategy grid (right column, full height)

The 13×13 grid of starting hands — pairs on the diagonal, suited combos above (s suffix), offsuit below (o suffix). This is the dominant interface for exploring a solution.

Each cell is tinted by the dominant action for that combo at the current node:

  • Red = fold
  • Green = check / call
  • Amber = bet
  • Dark red = raise
  • Purple = all-in (rare on flop, common on river)
  • Mixed colors (gradient) = the cell mixes actions; the gradient encodes the proportions

Three tabs above the grid:

  • Strategy — color-tinted by hero’s action at the current node. Default view.
  • IP Range — what’s in the In-Position player’s starting range at this node (after board-card removal). Cells are shaded by combo weight (1.0 = full inclusion, 0.5 = partial, 0 = excluded).
  • OOP Range — same for OOP.

Click any cell to populate the Hands grid (region 3) with the per-combo detail for that category.

[ IMAGE PLACEHOLDER ] Strategy grid for a flop SRP solve with full coloring — premiums in amber/dark red, marginal hands mixed, weak hands red. Filename: images/strategy-grid-flop.png

5. Game tree breadcrumb (bottom strip)

A horizontally-scrolling row of tiles representing the path through the game tree from root to your currently-selected node. Each tile shows:

  • Acting player position (BTN, BB, etc.)
  • Street (Flop / Turn / River)
  • The action token at that node (X, B33, R50, C, etc.)
  • A 🔒 lock icon if node-locking is configured at that node (see Node Locking)
  • Pot size after this action (so you can see the geometry develop down the line)

Click any tile to make that node the current one — the strategy grid, Actions panel, and Hands grid all update to reflect the new selection.

[ IMAGE PLACEHOLDER ] Breadcrumb showing flop → turn line: BB checks → BTN bets 75% → BB calls → turn deals → BB checks → BTN bets 50% (selected). Filename: images/game-tree-breadcrumb.png

This is how you navigate “what does the strategy look like here vs. there in the tree.” Want to see the turn after a flop check-call line? Click through the breadcrumb tiles. Want to compare two turn cards? Click the turn-card node, change the card via the card selector, and the strategy grid updates.

6. Toolbar / menu (top of window)

The menu strip at the top has:

  • File — New, Open, Save, Save As, Recent files
  • Edit — Undo, Redo, Copy strategy (copies a CSV-formatted strategy table for the current node)
  • Tools — Range Editor, Node Locking, Tree Builder, Settings
  • Help — About, Manual

The most-used items have hotkeys: Ctrl+S to save, Ctrl+R to open the Range Editor, F5 to start a solve.

What’s loaded at startup

By default, DriveGTO opens with a blank slate — no scenario configured. To start solving you need to launch the Builder dialog (Tools → Tree Builder, or Ctrl+B). The Builder is where you set up a scenario from scratch or from a preset.

If you’ve worked on a scenario before and saved it, File → Recent will list your last several files; opening one restores the scenario, ranges, and (if the solve completed) the cached solution.

Setting Up Your First Solve

The setup flow has three dialogs you’ll touch in order: the Builder, the Range Editor, and the Parameters dialog. Most users launch the Builder, set the scenario via Quick Setup, click Solve, and never touch the other two.

Step 1 — Open the Builder

Tools → Tree Builder or Ctrl+B.

[ IMAGE PLACEHOLDER ] Builder window — full screenshot showing all sections: card selectors at top, Quick Setup panel, configuration grid, IP/OOP range buttons, Solve button at bottom. Filename: images/builder-full.png

The Builder is the central setup dialog. Everything you need to define a scenario is here.

Card selection (top of Builder)

Three buttons across the top — one each for the Flop, Turn, and River cards. Clicking opens a card-selector dialog where you tick the cards on the board.

For most flop solves you’ll set the three flop cards and leave Turn and River blank — the solver enumerates them from the deck during the solve. If you specifically want to study a single turn card (e.g. “what happens on the brick 2c?”), click the Turn button and lock it. The lock icon appears on the button to indicate the card is locked.

[ IMAGE PLACEHOLDER ] Card selector dialog with the spades suit highlighted, showing the deck of 52 cards. Hero hovering over Ks. Filename: images/card-selector.png

Tip: for a flop solve, just set the three flop cards. Locking a turn or river card restricts the solve to that single runout — it solves faster (see the solve-time table later in this chapter), but the result covers only that card. Lock cards only when you specifically want a turn-card or river-card study.

Quick Setup (the preset shortcut)

[ IMAGE PLACEHOLDER ] Quick Setup panel — Pot Type dropdown showing “SRP”, Initial Raiser “BTN”, IP “BTN”, OOP “BB”, Apply Ranges button highlighted. Filename: images/quick-setup-srp.png

Quick Setup populates the entire scenario in three clicks. It’s the right starting point for 95% of solves.

  • Pot Type — dropdown: SRP (single-raised), 3BP (three-bet), 4BP (four-bet). Sets pot size and effective stack to the canonical values for that pot type.
  • Initial Raiser — position of the player who raised first pre-flop.
  • IP / OOP — the two players in the heads-up matchup.
  • IP Player Type — dropdown: Tight, Standard, Loose, Custom. Drives the default range applied.
  • OOP Player Type — same, for the out-of-position player.
  • Apply Ranges — click after picking everything above. Populates both players’ ranges from baseline 6-max presets.
  • Edit Presets — customize the player-type presets if you want to tweak what “Tight” or “Standard” means for your study.

After clicking Apply Ranges, the IP and OOP range buttons (lower in the Builder) update to show the combo count and percentage of hands now in each range.

Configuration grid

Below Quick Setup is a grid of numeric inputs for finer control:

  • Pot — pot size at the start of the flop, in bb. Default varies by Pot Type: SRP = 6, 3BP = 19 (BTN-vs-SB) or 24.5 (BTN-vs-BB), 4BP ≈ 50.
  • Effective Stack — stack remaining behind at the start of the flop, in bb. Default varies: SRP = 97, 3BP ≈ 88–91, 4BP ≈ 75.
  • Accuracy Threshold — stop the solve when total exploitability drops below this. Lower = tighter solve = slower. The default 0.5% is a good starting point.
  • All-in Threshold — auto-add an all-in action only when a normal raise would commit at least this fraction of the stack. Keeps gratuitous shoves out of the tree on deep stacks. Default 95%.
  • Advanced Settings — opens the full Parameters dialog for bet menus, raise sizes, raise depth, and iteration count.
  • Lock Path — opens the Node Locking tree editor. Skip on first solve.
If Quick Setup got the pot type and player types right, you’ll usually leave this whole grid alone.
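The All-in Threshold rule can be written as a one-line check — a sketch with a hypothetical helper name, not DriveGTO's code:

```python
def should_add_allin(raise_to: float, effective_stack: float,
                     threshold: float = 0.95) -> bool:
    """Add a separate all-in action only when a normal raise would
    already commit at least `threshold` of the effective stack.
    (Illustrative helper for the All-in Threshold rule above.)"""
    return raise_to >= threshold * effective_stack

print(should_add_allin(raise_to=93.0, effective_stack=97.0))  # True  (93 >= 92.15)
print(should_add_allin(raise_to=40.0, effective_stack=97.0))  # False
```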

IP / OOP Range buttons

Two large buttons side by side (left=IP, right=OOP). Each shows:

  • The combo count (e.g. “284 combos”)
  • The percentage of starting hands (e.g. “21.4%”)

Click either button to open the Range Editor — most users skip this on first solve since Quick Setup → Apply Ranges already populated reasonable defaults.

Solve button (bottom)

The full-width button at the bottom of the Builder. Click it and the solver kicks off.

If the configuration is incomplete (missing flop cards, empty range), the button is disabled and a red label tells you what’s missing.

Step 2 — Edit ranges (if needed)

If the player-type presets don’t match the spot you want to study, click the IP or OOP Range button to open the Range Editor.

[ IMAGE PLACEHOLDER ] Range Editor — full screenshot. Left sidebar with Saved Ranges and Player Type Presets tabs visible; main 13×13 grid taking up most of the window with mixed-shading combos selected. Bottom shows Range % slider and Coverage display. Filename: images/range-editor-full.png

The Range Editor is split into a left sidebar and a main grid.

Left sidebar — two tabs

Saved Ranges tab: lists ranges you’ve created and saved. Double-click a saved range to load it into the grid. Each row has a delete button if you want to clean up.

[ IMAGE PLACEHOLDER ] Saved Ranges list with custom entries like “BTN open 2.5x”, “BB defend vs BTN open”, “SB 3-bet vs CO”. Filename: images/saved-ranges.png

Player Type Presets tab: the same player-type system Quick Setup uses, exposed manually.

  • Scenario dropdown — switches between SRP / 3BP / 4BP. Different scenarios have different opening/defending ranges.
  • Sections per player type, each with a 3×3 grid of preset buttons:

      – OPENS (vs field) — opening range from this position
      – DEFENDS — calling range against an opener
      – 3-BETS — re-raising range
      – 4-BETS — and so on

Click a preset button to load it into the main grid. You can edit it from there if you want a tweak.

Main grid — the 13×13

The standard hand-selection grid: pairs on the diagonal, suited combos above, offsuit below.

  • Click to toggle a combo in/out of the range.
  • Drag to multi-select.
  • Shading indicates weighted inclusion: full color = 100%, half-faded = ~50% weight, faded out = excluded.

Range Text box (above the grid)

Lets you enter ranges in PokerStove notation directly:

22+, A2s+, A9o+, K9s+, KTo+, Q9s+, QJo, J9s+, JTo, T9s, 98s, 87s

Hit Enter and the grid updates. This is the fastest way to import a known range from another tool.
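If you generate these strings programmatically, a minimal expander for the pair and “+” tokens used above might look like this — a sketch of a subset of the notation, not DriveGTO's own parser:

```python
RANKS = "23456789TJQKA"

def expand_token(token: str) -> list[str]:
    """Expand one PokerStove-style token into hand categories.
    Handles pairs ('22+', 'TT') and suited/offsuit '+' ranges on the
    lower card ('A9s+', 'KTo+'). Covers only this subset of the notation."""
    token = token.strip()
    plus = token.endswith("+")
    core = token[:-1] if plus else token
    if len(core) == 2 and core[0] == core[1]:            # pair token, e.g. '22'
        lo = RANKS.index(core[0])
        hi = len(RANKS) - 1 if plus else lo
        return [r + r for r in RANKS[lo:hi + 1]]
    hi_card, lo_card, suit = core[0], core[1], core[2]   # e.g. 'A9s'
    lo = RANKS.index(lo_card)
    top = RANKS.index(hi_card) - 1 if plus else lo       # '+' runs up to one below the high card
    return [hi_card + RANKS[i] + suit for i in range(lo, top + 1)]

print(expand_token("JJ+"))   # ['JJ', 'QQ', 'KK', 'AA']
print(expand_token("KTo+"))  # ['KTo', 'KJo', 'KQo']
```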

Range % slider (bottom-left)

Drags a “play this many of the strongest combos” cutoff. Useful when you know “I want about 25% of hands in this range” but don’t want to hand-pick.

The Coverage % label next to it shows the actual percentage selected (it can differ slightly from the slider position because the grid rounds to whole combos).

Confirm / Cancel

Click Confirm to save the range and return to the Builder. Click Cancel to discard.

If you’ve made changes, the Range Editor will prompt before discarding.

Step 3 — Fine-tune parameters (optional)

If you need full control over bet menus, raise sizes, or raise depth, click Advanced Settings in the Builder. The Parameters dialog opens.

[ IMAGE PLACEHOLDER ] Parameters dialog — Pot/Stack inputs at top, IP/OOP tab strip, per-street bet/raise size editors visible for the Flop tab. Filename: images/parameters-dialog.png

Top section: pot, effective stack, and all-in threshold — the same fields as in the Builder, duplicated so you can edit them in either place.

Below that: a tab strip for IP and OOP, each with a sub-tab strip for Flop / Turn / River. For each (player, street) combination:

  • All-in checkbox — force an all-in option at this node. Default off: the solver auto-adds all-in based on the all-in threshold.
  • Bet Sizes — the bet sizes (as % of pot) available when betting first on this street. Default for v1 packs is [33, 75].
  • Raise Sizes — raise multiples available when facing a bet. Default [50] (raise by 50% of pot on top of the call).
  • Donk Sizes — OOP-only, on Turn and River. A donk is OOP betting after checking the prior street. Default empty (no donks).
The wider the menu, the more accurate the solution but the bigger the tree and the slower the solve. [33, 75] for bets and [50] for raises is the v1 standard — covers small/big sizing without exploding the tree.

Click OK to save, Cancel to discard.

Step 4 — Click Solve

Back in the Builder, click the big Solve button at the bottom.

The button changes to a “Solving…” indicator with a Stop button next to it. Behind the scenes, the app:

  1. Builds the game tree from your config
  2. Streams the tree spec to the GPU-accelerated CFR+ solver (profile_driver.exe)
  3. Polls the solver’s stdout for Iter: N, Total exploitability X%, and Output size: N nodes lines
  4. Updates the in-window progress display each iteration
  5. When exploitability drops below your accuracy threshold (or the iteration cap hits), the solver dumps the strategy JSON, the app loads it, and the result panels populate

Typical solve times (RTX 4060 laptop, 100bb stacks, standard [33,75] bet menu, accuracy 0.5%):

  • Flop SRP, 250-combo ranges — 25–45 s
  • Flop 3BP (smaller tree) — 15–30 s
  • Turn-locked SRP — 5–15 s
  • River-locked SRP — 1–5 s

Wider ranges, deeper raise trees, or tighter accuracy thresholds scale up. A 30-iteration solve on a wide-range flop can take 90+ seconds.

If the solver fails (rare — roughly 0.5–2% of runs, usually transient GPU issues; see Troubleshooting), the app falls back to CPU-only solving automatically. The Stop button is always available if you want to cancel mid-solve.

Reading the Results — Example Walkthrough

This is the most important section of the manual. We’ll work through one concrete spot end-to-end: setting it up, running the solve, and reading every panel of the output.

The scenario

100bb single-raised pot, BTN opens to 2.5x, BB calls.

  • Pot at flop: 6 bb
  • Effective stack: 97 bb
  • BTN range: roughly the top 40% of hands (22+, A2s+, A9o+, K9s+, KTo+, Q9s+, QJo, J9s+, JTo+, T9s+, 98s, 87s, 76s)
  • BB range: roughly the top 50% (22-JJ, A2s-AKs, A8o-AKo, K8s+, KTo+, Q8s+, QTo+, J8s+, JTo+, T8s+, 98s+, 87s+, 76s+)
  • Flop: Kh9c2d — dry, king-high, no draws beyond gutshots and backdoors

To set this up: open Builder → Quick Setup → Pot Type SRP, Initial Raiser BTN, IP=BTN, OOP=BB, both player types Standard → Apply Ranges → set the flop cards Kh, 9c, 2d → Solve.

The solve takes ~25 seconds on a healthy GPU. When it completes, the result panels populate.

What the Equity Chart shows

[ IMAGE PLACEHOLDER ] Equity chart for the K92r flop. The IP (BTN) line sits above the OOP (BB) line across most of the hand-strength range — characteristic range advantage for the pre-flop raiser on a high-card flop. Filename: images/example-k92r-equity.png

Switch to the Equity Chart tab. You’ll see two lines:

  • IP (BTN) line — sits above the OOP line for most of the strength range. BTN has more KK, AA, AK in the range and fewer trash hands.
  • OOP (BB) line — runs below, with a slight tail-up at the very strong end (BB has KK and 99 too, just at lower frequencies).

Read: BTN has a clear range-equity edge here — characteristic of the pre-flop raiser on a king-high dry flop. This shapes the strategy: BTN can c-bet aggressively (lots of value, opponent’s range can’t keep up), and BB has to defend with a wide-but-cautious calling range (no flush draws to put BTN in tough spots).

The Strategy Grid (right side)

The grid lights up with action colors. At the root flop node (BB to act first, no bets yet), the strategy is:

[ IMAGE PLACEHOLDER ] Strategy grid at flop root, BB to act. Mostly green (check) — BB checks 100% to face BTN’s c-bet. Filename: images/example-k92r-bb-root.png

Almost the entire grid is green — BB checks 100% at the root. Why? In SRPs, BB rarely donks (leads out of position into the pre-flop raiser). The solver’s strategy here is “always check, let BTN c-bet, then react.”

Click the breadcrumb at the bottom to advance to the next node (BTN’s c-bet decision). The Actions panel updates to show BTN’s options:

  • Check
  • Bet 33%
  • Bet 75%

The action distribution bar shows roughly 30% check, 50% bet 33%, 20% bet 75% — BTN c-bets ~70% of their range, mostly small.

[ IMAGE PLACEHOLDER ] Strategy grid for BTN’s c-bet decision on K92r. Strong hands (KK, 99, AKs, AKo, KQ) in dark amber (bet 75%); medium pairs and broadways in lighter amber (bet 33%); some bluffs in light green (check); marginal trash in green (check). Filename: images/example-k92r-btn-cbet.png

The grid shows:

  • Strong value (KK, 99, 22, AKs, AKo, KQs, KQo) — mostly amber (bet 75%), some at 33%
  • Medium pairs (TT-JJ, 88, 77) — mixed: bet 33% sometimes for thin value/protection, check often
  • Top-pair-marginal (KJs, KTs, K9s) — check or small bet
  • Backdoor flush + gutter combos (QJs, QTs, JTs, T9s) — mostly check (will improve later, no need to bet now)
  • Weak high-card (A5s-A2s, suited connectors that whiffed) — split between check and bet 33% as bluffs

This is a textbook range-advantaged c-bet strategy: bet the value strong, mix in some bluffs from the right backdoor candidates, check some medium hands as range protection.

The Hands Grid (bottom-left)

Click on the AKo cell in the strategy grid. The Hands grid populates with the offsuit AK combos.

[ IMAGE PLACEHOLDER ] Hands grid showing AhKc, AhKd, AhKs, AcKd, AcKh, AcKs, AdKc, AdKh, AdKs, AsKc, AsKd, AsKh — 12 combos with their action breakdowns. Bet 75% dominant on all combos, slight variation by suit-blocker. Filename: images/example-k92r-ako-detail.png

You’ll see all 12 AKo combos. The action breakdowns are nearly identical across them — this hand pure-bets large on this board (top pair, top kicker, value-heavy spot). Suit doesn’t matter much because there are no flush draws on Kh9c2d.

Now click 76s. The Hands grid shows 7d6d, 7h6h, 7c6c, 7s6s.

[ IMAGE PLACEHOLDER ] Hands grid for 76s. 7d6d shows higher bet frequency than 7c6c due to backdoor flush + gutter. Filename: images/example-k92r-76s-detail.png

Now you’ll see suit-specific deviations:

  • 7h6h — backdoor flush draw + gutter to 8. Bet 33% ~70%, check ~30%.
  • 7d6d — backdoor + gutter. Bet 33% ~65%, check ~35%.
  • 7c6c — pure gutter, no backdoor. Bet 33% ~25%, check ~75%.
  • 7s6s — pure gutter. Bet 33% ~25%, check ~75%.

The pattern: suits with backdoor equity bluff more often than suits without. The solver knows that 7h6h has more equity to barrel turn cards (any heart turns it into a flush draw, plus the gutter remains) than 7c6c does, so it bluffs heart/diamond combos more.

This is one of the most powerful uses of the Hands grid — finding the suit-specific patterns that the strategy-grid average hides.

Reading the EV column

In the Hands grid, each combo also shows EV per action. For 7h6h on this c-bet decision:

Bet 33%   EV: +0.42 bb
Check     EV: +0.41 bb

The EVs are nearly tied — within 0.01 bb. That’s the indifference principle at equilibrium: when 7h6h mixes between two actions, both actions have (essentially) the same EV. The mixing exists to make BB’s calling/folding decisions tough — not because either action is intrinsically better.

For AKo on the same decision:

Bet 75%   EV: +1.84 bb
Bet 33%   EV: +1.42 bb
Check     EV: +1.18 bb

Here the EVs are NOT tied. Bet 75% dominates by 0.4 bb over bet 33% and 0.6 bb over check. AKo is a pure-action hand at this node — bet 75% is the highest-EV play, and the strategy frequencies reflect that (AKo bets 75% ~85% of the time).

Rule of thumb:

  • Pure-action hands (~60-80% of any range) — one action dominates by ≥ 0.3 bb. EVs are NOT tied. Strategy is mostly that one action (≥ 80%).
  • Frequency hands (~15-30%) — actions are tied to within 0.05 bb. Strategy mixes 2-3 actions at meaningful rates. The mix is calibrated to make villain indifferent.
  • Trap / edge hands (~5-10%) — board-dependent surprises. Slow-played monsters, blocker-driven bluff-catches.
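The EV-gap thresholds in the rule of thumb are easy to apply mechanically to exported per-combo EVs. A sketch — the thresholds mirror the text above, and the function name is illustrative:

```python
def classify_hand(action_evs: dict[str, float],
                  pure_gap: float = 0.3, tie_gap: float = 0.05) -> str:
    """Classify one combo by the gap between its best and second-best
    action EVs (in bb), per the rule of thumb above."""
    evs = sorted(action_evs.values(), reverse=True)
    gap = evs[0] - evs[1]
    if gap >= pure_gap:
        return "pure-action"
    if gap <= tie_gap:
        return "frequency"
    return "in-between"

print(classify_hand({"bet75": 1.84, "bet33": 1.42, "check": 1.18}))  # pure-action (AKo above)
print(classify_hand({"bet33": 0.42, "check": 0.41}))                 # frequency (7h6h above)
```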

For the deep theory on why mixed actions tie in EV at equilibrium, see Understanding Solver Output in Part II.

Navigating to the turn

Click the breadcrumb tile labeled BTN bet 75%. The current node advances. Now you can see BB’s response.

[ IMAGE PLACEHOLDER ] Strategy grid for BB facing BTN’s 75% c-bet. Mostly red (fold) with a band of green (call) for any pair / backdoor / overcards, and a thin sliver of dark red (raise) for sets and combo draws. Filename: images/example-k92r-bb-vs-cbet.png

BB’s strategy: fold most of the bottom of the range, call any pair or backdoor draw, raise rarely with sets (KK, 99, 22) and the strongest combo draws.

To explore the turn: click the BB-call subtree, then click any turn card in the breadcrumb. The strategy grid recomputes for that turn-card branch. You can rapidly compare turn cards by clicking through them.

Saving the result

File → Save (or Ctrl+S) writes the entire scenario plus the cached solution to a .gto file. Re-opening it later restores everything — board, ranges, parameters, and the cached strategy without re-solving.

File → Export Strategy → CSV dumps the per-combo strategy table for the current node to a CSV file. Useful for spreadsheet analysis or sharing with a coach.

Node Locking (Advanced)

Most solves let the solver pick equilibrium frequencies for both players. Node locking lets you override the solver at specific nodes — e.g. “force BTN to bet 75% with 100% frequency on the flop, then let the solver compute BB’s best response.”

This is for advanced study: how should BB defend against a known opponent strategy? What’s the EV cost of a deviation from GTO? How exploitative can BB get against a fixed BTN strategy?

[ IMAGE PLACEHOLDER ] Node Locking dialog — six colored action buttons across the top showing “Bet 75% — 100%, others — 0%”. Per-action sliders below. Filename: images/node-lock-dialog.png

Lock-Path (Tree-wide view)

Tools → Lock Path (or the Lock Path button in the Builder).

Opens a tree view of the entire game tree. Each node shows:

  • Acting player + position
  • Action label (the action that leads to this node from its parent)
  • Locked-action distribution bar (if locked) — visual breakdown of the forced frequencies
  • A lock icon button

Click the lock icon on any node to open the per-node locking dialog. Set frequencies, click OK, the node is now locked. The locked-action bar appears in the breadcrumb of the main window so you don’t lose track.

[ IMAGE PLACEHOLDER ] Lock-Path tree view with several nodes locked, lock icons visible, action bars showing the forced frequencies. Filename: images/lock-path-tree.png

Per-node Lock Dialog

When you click “lock” on a specific node, a dialog opens showing all available actions at that node. For each action:

  • An action button (colored by action type) with the action label and current locked frequency
  • A slider + numeric input below — drag or type to set the frequency 0-100%

The frequencies must sum to 100% across the action set. The dialog auto-balances the others as you change one.
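The auto-balance is a proportional rescale of the untouched actions. A sketch of the idea (the real dialog's rounding behavior may differ):

```python
def rebalance(freqs: dict[str, float], changed: str,
              new_value: float) -> dict[str, float]:
    """Set one action's frequency and rescale the others proportionally
    so the distribution still sums to 100%."""
    others = {a: f for a, f in freqs.items() if a != changed}
    remaining = 100.0 - new_value
    total_others = sum(others.values())
    out = {changed: new_value}
    for a, f in others.items():
        # If the other actions were all at zero, split the remainder evenly.
        out[a] = remaining * (f / total_others) if total_others else remaining / len(others)
    return out

print(rebalance({"bet75": 50.0, "bet33": 30.0, "check": 20.0}, "bet75", 80.0))
# bet75 -> 80%, bet33 -> 12%, check -> 8% (up to float rounding)
```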

Buttons:

  • Unlock — clears the lock on this node (back to solver-determined).
  • Cancel — discards changes and closes.
  • OK — saves the lock.

After locking, re-run the solve. The solver respects the locks and computes the best response for the unlocked nodes.

Common locking workflows

1. Exploitative best-response. Lock the entire opponent’s strategy (using a previously-solved equilibrium) and let your side optimize. The solver gives you the maximally-exploitative strategy against that fixed opponent.

2. Sensitivity to one node. Solve normally, then lock one node to a different frequency and re-solve. Compare your side’s EV — the difference is what you’d lose / gain by adopting that deviation.

3. Studying simplified strategies. Lock BTN’s c-bet at a specific frequency (say “bet 75% always”) to study what BB’s defense should look like if BTN never small-bets. Useful for studying the simplified strategies that mid-stakes opponents actually use.

What locking can’t do

  • Lock at a frequency that’s mathematically impossible given range constraints (e.g. forcing a hand with 0% range presence to bet 100%). The solver throws a warning.
  • Solve at meaningful accuracy if you over-lock (lock too many nodes). Each lock removes a degree of freedom; if the resulting tree is over-constrained, the solver may not converge well. Sanity-check by inspecting EVs and exploitability after the solve.

Settings (Solver)

The solver has its own settings dialog separate from the Trainer. Tools → Settings.

[ IMAGE PLACEHOLDER ] Solver Settings — sections for Solver Engine, Display, Files. Filename: images/solver-settings.png

Solver Engine

  • Default accuracy threshold — exploitability target for new solves. Default 0.5%. Lower means tighter solves but slower.
  • Default max iterations — hard cap on iterations regardless of accuracy. Default 200.
  • GPU enabled — checkbox. Off forces CPU-only solves (equivalent to setting GTO_FORCE_CPU=1). Useful when you suspect a GPU issue or need a deterministic CPU-only result.
  • Thread count — for CPU-only solves and CPU-side post-processing. Default = number of physical cores.
  • Default bet sizes / raise sizes / raise depth — applied to new scenarios where you haven’t customized via the Parameters dialog.

Display

  • Strategy grid color theme — color choices for fold/check/bet/raise/all-in tints.
  • Show EVs — checkbox. When off, the Hands grid shows only frequencies (cleaner for fast scanning); when on, EV per action is shown alongside.
  • Action label format — “33% pot” (default), “33%”, or “1/3 pot”.

Files

  • Default save folder — where Save / Save As default to.
  • Auto-save — checkbox + interval. Saves the scenario every N minutes during a long study session.
  • Recent files limit — default 10.

Troubleshooting (Solver)

“Solve takes forever / never finishes”

Check:

  • Range width. A solve with 250×250 combos is much slower than 150×150. Wide ranges aren’t free — they multiply scratch memory and per-iteration work quadratically.
  • Bet menu width. [25, 50, 75, 100] produces a much bigger tree than [33, 75]. Trim the menu unless you specifically need that resolution.
  • Raise depth. set_raise_limit 4 (the engine default) blows up trees on flop. v1 packs use 2 (bet → raise → call/fold per street). Keep it at 2 unless you know you need more.
  • Accuracy too tight. 0.1% accuracy threshold is 5× slower than 0.5% for a marginal precision gain. Default 0.5%.
  • GPU not engaging. Check Settings → Solver Engine → GPU enabled is on. Look at the iteration log for [GPU iN] record=... lines. If you only see [CPU] per-iter logs, GPU isn’t being used.

“GPU exhausted” / out-of-memory mid-solve

Check VRAM via Task Manager → Performance → GPU. If allocated >7 GB on an 8 GB card during a solve, you’re hitting the ceiling. Mitigations:

  • Trim ranges (250 combos → 200 combos).
  • Win+Ctrl+Shift+B to reset the graphics driver if VRAM appears free in monitoring tools but the solver still complains. (Windows D3D12 device state can get stuck after several hundred solves.)
  • Toggle GPU enabled off in Settings — CPU fallback is slower but doesn’t have a VRAM ceiling.

“Solver crashes with 0xC0000409 / 0xC0000005”

Known transient solver bug, ~0.5-2% rate. The app’s PackBuildRunner retries once and falls back to CPU. For interactive solves, just hit Solve again — the failure is non-deterministic.

A few specific boards (AcKd8h, Ac8d4h, KcQd3h) crash reproducibly across builds. These are likely tickling a real solver bug; not yet root-caused. Workaround: use a slightly different board.

“Strategy results look obviously wrong (AA folds 100%)”

Almost certainly a mis-configuration:

  • Wrong pot/effective stack — check it matches the scenario you mean to solve.
  • Wrong ranges — verify with the IP/OOP Range tabs in the Strategy grid.
  • Solver hit max iterations without converging. Check the iteration log: was exploitability still high (> 5%) at the end? Increase max_iterations or loosen accuracy.

If everything looks right and the result is still nonsense, it’s likely a real bug. Save the scenario file and report it.

“I want to compare two solves side-by-side”

Open the second scenario in a new window (File → New Window). Solve it. Drag windows side by side. The strategy grids are identically-shaped so visual diffing works.

For a numeric diff: Edit → Copy strategy from each, paste into a spreadsheet, subtract.

“How do I export a strategy table for use elsewhere?”

File → Export Strategy → CSV writes the per-combo strategy table for the current node. The format is:

combo,bet33_freq,bet33_ev,bet75_freq,bet75_ev,check_freq,check_ev
AhKh,0.421,1.823,0.412,1.812,0.167,1.654
AhKd,0.456,1.798,0.398,1.789,0.146,1.623
...

For a tree-wide dump (every node), use File → Export → Tree JSON — this is the same JSON the solver itself produces, suitable for re-import or processing with external tools.
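The per-combo CSV export above is easy to consume programmatically. A minimal sketch using only the Python standard library — the column layout is the one documented above, while the idea of picking the highest-EV action per row is just one illustrative use:

```python
import csv
import io

# Sample rows in the documented export format (normally you'd read the
# file produced by File -> Export Strategy -> CSV instead).
sample = """combo,bet33_freq,bet33_ev,bet75_freq,bet75_ev,check_freq,check_ev
AhKh,0.421,1.823,0.412,1.812,0.167,1.654
AhKd,0.456,1.798,0.398,1.789,0.146,1.623
"""

def best_action(row):
    """Return (action, ev) for the highest-EV action in one CSV row."""
    actions = {}
    for key, value in row.items():
        if key.endswith("_ev"):
            actions[key[:-3]] = float(value)  # strip the "_ev" suffix
    return max(actions.items(), key=lambda kv: kv[1])

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    action, ev = best_action(row)
    print(row["combo"], action, ev)
```

The same loop works for a spreadsheet-style diff of two exports: read both files, key rows by combo, and subtract EVs column by column.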

Part II — The GTO Trainer

The Trainer is a companion app for drilling pre-solved spots from packs. You install one or more packs, the Trainer hands you spots from those packs at random, you decide what to do with your hand, and you get instant feedback on how the choice compares to the solver’s solution.

The key idea: you don’t run the solver every time you want to study. The solver (Part I of this manual) is for creating solutions to specific spots; the Trainer is for drilling those solutions until the patterns are second nature. Most users use both — the solver for deep study of one position, the Trainer for fast repetition across many.

The next ten chapters cover the Trainer in detail.

Welcome to DriveGTO Trainer

[Image: Trainer home tile picker]

DriveGTO Trainer is a tool for studying No-Limit Hold’em strategy by drilling pre-solved spots from a GTO solver. You install packs of pre-solved scenarios (e.g. “100bb BTN vs BB SRP, flop”), the trainer hands you spots from those packs at random, you decide what to do with your hand, and you get instant feedback on how the choice compares to the solver’s solution.

The key idea: you don’t run a solver every time you want to study. You install one once and drill thousands of pre-solved spots fast, with feedback that tells you whether your decision lost EV against equilibrium.

What’s in the box

Every pack is built from one or more solver runs covering a specific scenario:

  • Format: 100bb cash, NLHE, flop-onwards (turn/river-only packs may come later)
  • Positions: a specific HU configuration (e.g. CO opens, BB calls — “CO vs BB SRP”)
  • Pre-flop context (implicit, defined by the spec): pot size, effective stack, both players’ ranges
  • Boards: either all 1755 canonical flops or a curated 500-board sample stratified by texture
  • Action tree: the bet sizes and raise depth the solver was told to consider

Your job during a session is to drill the flop decisions inside that pack. The trainer never asks you to play a turn or river you haven’t seen — those are out of scope for v1 packs.

How to navigate

The trainer has six main tabs (left side of the window):

Tab             What it’s for
Home            Tile picker — start a session in a specific mode, browse packs, or open the manual
Pack Browser    Install / remove packs, see what’s available
Session         The actual drill — one spot at a time with feedback
Nearest Solve   Type in a board you saw at the table → get the closest pre-solved spot from your installed packs
Progress        Dashboards: accuracy trend, weakness heatmap, study streak, recent attempts
Settings        Feedback verbosity, LLM provider, sampling preferences

If you’re new, the recommended path is:

  1. Open Pack Browser, install one of the free packs (e.g. BTN-vs-BB SRP).
  2. Go Home and click “Quick Drill” or “Pack Focus” to start a session.
  3. Make a decision, read the feedback, advance to the next spot.
  4. After 20–30 attempts, check Progress to see where you’re leaking.
  5. Read Understanding Solver Output in this manual when the EV-vs-frequency thing starts feeling weird.

A note on what this trainer is not

  • Not a hand-history reviewer. It can’t import a hand from PokerStars and tell you what GTO would have done. (Use Nearest Solve to get close, though.)
  • Not a real-time HUD. It’s an offline study tool.
  • Not a one-and-done answer machine. GTO solutions are mixed strategies — the right play depends on combo, blockers, and what your opponent is doing. The trainer surfaces what the solver thinks; you build intuition over many drills.

If anything in this manual is unclear, the Glossary has plain-English definitions of every term, and Troubleshooting & FAQ covers common gotchas.

Pack Browser

[Image: Pack Browser with Installed and Available sections, mode chips visible on each row]

The Pack Browser is your library — it lists every pack you’ve installed locally, plus packs available to download from the DriveGTO server. From here you install, remove, and inspect packs.

What’s a pack, exactly

A .drvpack file is a zipped bundle containing:

  • Manifest — pack ID, version, name, description, supported modes, node count
  • `nodes.parquet` — every decision node in the solved tree (board, action history, pot, SPR, acting player)
  • `node_actions.parquet` — the action menu at each node (check, bet 33%, raise 50%, etc.)
  • `node_strategy.parquet` — the solver’s output: per-combo frequency + EV for each action
  • `node_ranges.parquet` — hero’s and villain’s starting ranges at each node, board-card-removal applied

When you install a pack, the trainer copies all four parquet files into its DuckDB database. Drilling a spot is just a series of SQL queries against those tables.
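Conceptually, fetching the strategy for one combo at one node is a single lookup. A toy sketch using Python's stdlib sqlite3 in place of DuckDB — the node_strategy table name comes from the pack layout above, but the column names and data here are illustrative assumptions, not the real schema:

```python
import sqlite3

# In-memory stand-in for the trainer's database. The real app uses DuckDB,
# but the query shape is the same. Column names are illustrative.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE node_strategy (
    node_id TEXT, combo TEXT, action TEXT, freq REAL, ev REAL)""")
db.executemany(
    "INSERT INTO node_strategy VALUES (?, ?, ?, ?, ?)",
    [("n1", "AKo", "call", 0.065, -10.72),
     ("n1", "AKo", "raise", 0.101, -13.66),
     ("n1", "AKo", "fold", 0.835, -12.25)])

# "Drilling a spot" boils down to lookups like this one.
rows = db.execute(
    "SELECT action, freq, ev FROM node_strategy "
    "WHERE node_id = ? AND combo = ? ORDER BY freq DESC",
    ("n1", "AKo")).fetchall()
print(rows)  # fold first (highest frequency)
```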

The two sections

The Pack Browser has two side-by-side sections:

Installed Packs

Packs already in your library. Each row shows:

  • Name + version (e.g. “100bb BTN vs BB SRP — Flop Drills” v1.0.0)
  • Pack ID (the canonical identifier — what installed_packs table stores)
  • Node count (total decision nodes — bigger pack = more drill variety)
  • Description + “Applies to” mode chips (which training modes this pack supports)
  • Remove button

Clicking Remove uninstalls the pack — drops every row tagged with its pack_id from the database. Your bookmarked spots tied to that pack are also cleaned up.

Available Packs

Packs from the DriveGTO server catalog you haven’t installed yet. The catalog is fetched from https://drivehud.com/GTO-packs/packs.xml on app launch. Each row shows:

  • Same metadata as installed packs
  • Install (free pack) — streams the .drvpack to local cache, then installs
  • Purchase (paid pack) — opens the purchase URL in your default browser

Paid packs require a license check before the Install button enables. If you’ve purchased a pack on the website, the trainer should auto-detect entitlement on next launch (license validation backend is in development).

Pre-flight conflict check

When you click Install on a pack, the trainer first checks for node ID collisions with already-installed packs. Two packs can legitimately produce the same node_id (the hash is over canonical spot fields, not pack ID, in older builds — packs built with v1.1.0.025+ are salted with pack_id and won’t collide).

If collisions are detected, you’ll see a message like:

“Pack conflicts with 1 already-installed pack(s) that share spot IDs: nlhe-100bb-old-pack-v1. Uninstall it first, or wait for a pack-id-salted build.”

Uninstall the conflicting pack, then retry. Or wait for the conflicting pack to be re-released with a salted hash.
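The collision mechanics are easy to see with a sketch. Assuming the node ID is a hash over canonical spot fields — the exact field list and hash function are not documented here, so this is purely an illustration of why the pack_id salt fixes collisions:

```python
import hashlib

def node_id(board, action_history, pot, pack_id=None):
    """Hash canonical spot fields into a node ID.
    Field list and format are illustrative, not the real pack schema."""
    parts = [board, action_history, f"{pot:.2f}"]
    if pack_id is not None:          # v1.1.0.025+ behavior: salt with pack_id
        parts.append(pack_id)
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]

# Unsalted (older builds): two packs covering the same spot collide.
a = node_id("Ks8h4c", "X-B75", 24.5)
b = node_id("Ks8h4c", "X-B75", 24.5)
print(a == b)  # True -> install conflict

# Salted: same spot, different packs, distinct IDs.
c = node_id("Ks8h4c", "X-B75", 24.5, pack_id="btn-vs-bb-srp-v1")
d = node_id("Ks8h4c", "X-B75", 24.5, pack_id="co-vs-bb-srp-v1")
print(c == d)  # False
```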

Mode chips

Each pack declares which training modes it supports via the SupportedModes field in its manifest. When you start a session, modes that aren’t supported by any installed pack are greyed out.

The default v1 mode set is everything: Quick Drill, Pack Focus, Street Practice, Position Practice, Combo Focus, Texture Lab, Spot Builder, Exact Verify Drill, Timed Challenge, Free Play (Bookmarked).

A future “river-only” pack might restrict to [StreetPractice, SpotBuilder] — the chips show you that on the row.

Manual install from file

If you’ve got a .drvpack file someone sent you (not from the server catalog), use the standard installer flow — the trainer auto-detects new files dropped into %LocalAppData%\DriveGTO\trainer\packs\ on next launch. (PackManager.AutoInstallFromDirectory handles this.)

What to install

If you’re new and unsure where to start:

  • BTN-vs-BB SRP — most common single-raised pot in HU/6max play. Largest pack (covers all 1755 canonical flops).
  • BTN-vs-SB 3BP — gets you into 3-bet pot geometry without too much complexity (curated 500 boards).
  • CO-vs-BB SRP — slightly different opening range than BTN, useful for full-ring/6max comparison.

Install one or two and drill 100 spots in each before adding more. More packs = more variety but also more spots that don’t repeat — for building a habit you want repetition.

Session — Drilling Spots

The Session tab is where you actually study. The trainer hands you a spot, you make a decision, you get graded, you move on. This page explains every piece of what you see.

[Image: Annotated session view: pack header, pot/SPR readout, board cards, hero cards, action history breadcrumb, action buttons, range grid, feedback panel]

The spot card

The big oval in the middle is the table view. From top to bottom:

  • Villain seat — position label (BB / SB / BTN / etc.), remaining stack in bb. If villain has a bet in front of them, you’ll see a chip stack with the bb amount and an “ALL-IN” badge if the bet equals their entire stack.
  • Pot indicator — current pot size in bb. This is the pot as the spot starts, not after the next action.
  • Board cards — 3 cards on flop, 4 on turn, 5 on river (current packs are flop-only).
  • Hero seat — your position label, remaining stack, and dealer button if you have it.
  • Action buttons — your menu of legal actions for this combo at this node.

Above the table, the header tells you the high-level context: BTN vs BB · flop · pot 24.5 · SPR 3.6 for example. Pot and SPR are computed from the spec, not from any single combo — they’re properties of the spot, same for every combo at the node.

Action history breadcrumb

If the spot is mid-street (e.g. villain bet 75% pot and we’re now considering call/raise/fold), you’ll see a breadcrumb above the table showing what’s already happened on this street:

Flop: X-B75

Across multiple streets, segments are joined with |:

Flop: X-B75-C   |   Turn: X-B50-C

Token cheat sheet:

Token           Meaning
x / X           check
c / C           call
f / F           fold
b<n> / B<n>     bet sized at n% of the pot
r<n> / R<n>     raise sized at n% of the pot
allin / ALLIN   all-in shove (rare in current packs — only on shallow-SPR spots)

So X-B75-R50 reads as: villain checked, hero bet 75% pot, villain raised by 50% of the new pot.

The breadcrumb explains why the pot is what it is. If you see “pot 32 · flop · SPR 1.5” with breadcrumb Flop: X-B75-R50 you know two raise levels have already happened — that’s why the pot is 5× the SRP starting pot.
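The breadcrumb grammar above is simple enough to parse mechanically. A hypothetical parser sketch — the token meanings come from the cheat sheet, but the function name and output format are assumptions for illustration:

```python
import re

TOKEN_NAMES = {"x": "check", "c": "call", "f": "fold", "allin": "all-in"}

def parse_segment(segment):
    """Parse one street segment like 'Flop: X-B75-R50' into readable actions."""
    street, _, tokens = segment.partition(":")
    actions = []
    for tok in tokens.strip().split("-"):
        t = tok.lower()
        m = re.fullmatch(r"([br])(\d+)", t)
        if m:
            kind = "bet" if m.group(1) == "b" else "raise"
            actions.append(f"{kind} {m.group(2)}% pot")
        else:
            actions.append(TOKEN_NAMES[t])
    return street.strip(), actions

print(parse_segment("Flop: X-B75-R50"))
# ('Flop', ['check', 'bet 75% pot', 'raise 50% pot'])
```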

Facing-action label

To the left of (or above, depending on layout) the action buttons, you’ll see a label like:

  • “Action on you” — no one has bet; you’re free to check or open
  • “Facing bet 75% pot” — single bet from villain
  • “Facing raise 50% pot” — villain bet, someone raised, now back on you (post-flop bet → raise sequence)
  • “Facing 3-bet 50% pot” — three bet/raise levels deep — this is rare with the current bet menu (raise_limit 2 caps it)
  • “Facing check-raise 50% pot” — you check-raised (or were check-raised), street started with a check

The label classifies the depth of the betting action so you know what you’re dealing with at a glance. The pot-percent suffix is the size of the most recent bet/raise, which is what determines your pot odds.

Hero cards + range grid

Below the table view, the trainer shows your specific hand for this spot (e.g. As Kc) and to the side, a range grid showing the GTO mix of actions across hero’s entire range at this node.

┌─────────────────────────────────────────┐
│  AA  AKs AQs AJs ATs A9s A8s A7s ...    │
│  AKo KK  KQs KJs KTs K9s K8s K7s ...    │
│  AQo KQo QQ  QJs QTs Q9s Q8s ...        │
│  AJo KJo QJo JJ  JTs J9s ...            │
│  ...                                    │
└─────────────────────────────────────────┘

Cells are tinted by the dominant action for that combo (red = fold, green = check/call, amber = bet, dark red = raise). This gives you a “what’s the rest of my range doing here?” read at a glance. Your specific hand is highlighted.

[Image: Range grid showing GTO mix at a flop node, with hero combo highlighted]

Making a decision

Click one of the action buttons. The trainer immediately:

  1. Pulls the strategy row for your combo at this node — frequencies and EV per action.
  2. Identifies the highest-EV action as “GTO best.”
  3. Computes EV loss: chosen_EV − best_EV (always ≤ 0).
  4. Assigns a grade tier:

     – EXACT — your pick equals GTO best (EV loss ≈ 0)
     – CLOSE — within ~0.05 bb of best
     – OK — between 0.05 and 0.5 bb off
     – POOR — more than 0.5 bb off

  5. Writes the attempt to your user_attempts table for the Progress dashboard.
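The grading step reduces to a few lines. A sketch using the thresholds from the tier list above (~0.05 bb and 0.5 bb); the function name and dictionary shape are illustrative, not the app's internals:

```python
def grade(chosen_ev, action_evs):
    """Grade a pick against the per-action EVs for this combo.
    action_evs: {action_name: ev}. Thresholds from the tier table."""
    best_ev = max(action_evs.values())
    ev_loss = chosen_ev - best_ev        # always <= 0
    loss = -ev_loss
    if loss < 1e-9:
        tier = "EXACT"
    elif loss <= 0.05:
        tier = "CLOSE"
    elif loss <= 0.5:
        tier = "OK"
    else:
        tier = "POOR"
    return tier, ev_loss

# The AKo-facing-all-in numbers used as an example in this chapter:
evs = {"call": -10.72, "raise": -13.66, "fold": -12.25}
print(grade(evs["call"], evs))   # EXACT, EV loss 0.0
print(grade(evs["fold"], evs))   # POOR, EV loss of about -1.53 bb
```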

The result panel shows:

EXACT    EV loss: 0.00 bb · chosen EV -10.72 bb · GTO EV -10.72 bb
Call    6.5%   -10.72 bb   GTO best · Your pick
Raise   10.1%  -13.66 bb
Fold    83.5%  -12.25 bb

Wait — why is fold 83% if call is the GTO best? Read Understanding Solver Output. Short answer: in equilibrium every mixed action has the same EV; the ~1.5 bb gap you see is convergence residual. The trainer grades by EV (chip-maximal), not frequency.

Sessions and modes

Different “modes” change how spots are sampled. From the Home tile picker:

  • Quick Drill — random spots from any installed pack matching a chosen mode chip. No filters; widest variety.
  • Pack Focus — lock to a specific pack and drill until you’ve seen everything in it.
  • Street Practice — flop-only, turn-only, river-only. Useful when only certain streets are weak.
  • Position Practice — only spots where you’re in a specific position (BTN-as-hero, BB-as-hero, etc.).
  • Combo Focus — only spots that include a specific combo category (AKs, pocket pairs, suited connectors). Drill the spots where this hand class actually appears.
  • Texture Lab — only boards in a specific texture bucket (paired, monotone, ace-high dry, etc.).
  • Spot Builder — every filter visible at once. Pick exactly the slice you want.
  • Adaptive Difficulty — the trainer picks packs you score worst in (target ~55% accuracy, ≥5 attempts, 30–80% accuracy band).
  • Bookmarked — only spots you starred (☆ Bookmark button on the spot card).
  • Exact Verify Drill — runs a fresh CFR+ solve on the current spot and compares to the stored solution. Slow per spot but useful when you suspect a stored answer is wrong.
  • Timed Challenge — fixed N spots, your speed and accuracy are scored.

How spots are sampled

Within whatever filter the mode applies, the sampler picks one node by a bet-depth bucket weighting:

  • 40% of spots have no facing bet (you act first or after a check)
  • 40% of spots have a single bet to face
  • 20% of spots have a raise to face

This roughly matches real-game frequency — uniform sampling over decision nodes would over-represent raise spots ~4× their actual prevalence. If you want to drill more raise spots specifically, use Spot Builder with no bet filter.
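The 40/40/20 weighting can be sketched with stdlib random.choices. The bucket names are illustrative — in the real sampler a node is then drawn from within the chosen bucket:

```python
import random

BUCKETS = ["no_bet", "single_bet", "raise_faced"]
WEIGHTS = [0.40, 0.40, 0.20]

def sample_bucket(rng):
    """Pick a bet-depth bucket per the 40/40/20 weighting."""
    return rng.choices(BUCKETS, weights=WEIGHTS, k=1)[0]

rng = random.Random(7)
draws = [sample_bucket(rng) for _ in range(10_000)]
for b in BUCKETS:
    print(b, draws.count(b) / len(draws))  # roughly 0.40 / 0.40 / 0.20
```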

Bookmarking

The ☆ Bookmark button on the spot card stars the (node, combo) pair for later. To revisit, switch to Bookmarked mode. Bookmarks are stored locally per user; uninstalling a pack cascades and removes its bookmarks.

Line play (turn / river continuation)

If a spot’s chosen action has a known follow-up node (e.g. you bet, villain calls, the next decision is yours on the turn), the trainer can continue along the line instead of dealing a fresh random spot. The “Next Spot” button becomes “Continue Line” in that case. Useful for studying multi-street decision trees in sequence.

You’ll see breadcrumb segments accumulate across streets so you don’t lose track of how you got there.

What to do if a spot looks wrong

  • Pot/SPR weird? — re-check the pack manifest; pre-2026-04-25 packs may have stale bet-menu pots. Reinstall the v1.1.0.046+ rebuild.
  • Villain card render issue? — file a bug via Settings → Feedback.
  • Strategy clearly broken (e.g. AA folds 100%)? — likely a convergence outlier. Try Exact Verify Drill mode on that spot to re-solve and compare. If consistent, file a bug with the pack ID + node ID.

Nearest Solve

The Nearest Solve tab answers a simple question: “I just played this board at the table — what does the closest pre-solved spot in my packs look like?”

You type a position + street + board (and optional SPR), the trainer searches every node in every installed pack for the best fuzzy match, shows you the top three matches with a similarity score, and lets you click one to drill it as a fresh session spot.

[Image: Nearest Solve search panel + result list with scoring breakdown]

Why fuzzy matching

A solver pack covers either all 1755 canonical flops (the BTN-vs-BB SRP pack) or a curated 500 stratified across textures. If you played Js9d4c, the all-flops BTN-vs-BB pack is guaranteed to contain an exact board match. On the curated 500-board packs, though, your exact flop might not be in the set — and even if it is, the SPR or position might differ.

Nearest Solve handles this by scoring every node by similarity to your query and ranking. You don’t get a hard miss; you get the closest match with a transparent breakdown of how close it is.

Scoring formula

For each node in installed packs, similarity = weighted sum of:

  • 70% — rank distance. Distance between board ranks (treating cards as ranks A–2). On flops, sort both boards’ ranks and diff each position. Ks8h4c vs your Ks8h3c differs by 1 rank in slot 3.
  • 20% — suit pattern. Are both boards monotone? Two-tone? Rainbow? Suit pattern as a 3-tuple of suit equivalence classes — match means full credit, off-by-one means partial.
  • 10% — paired-ness. 5h5c2d is paired; 5h4c2d isn’t. Same paired-ness pattern means full credit.
  • +25% (tie-breaker) — SPR distance. If you specified an SPR in the query, nodes with similar SPR rank higher within the same board match group.

Top score: 100 (exact match). Anything ≥ 95 is a near-perfect hit. 80–95 is “close enough — strategies very likely portable.” Below 70 the trainer warns you the match is weak and you should interpret with caution.
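A simplified sketch of the flop-scoring idea. The component weights (70/20/10) come from the list above; the normalization constants — how a rank gap or a suit-pattern mismatch maps to partial credit — are assumptions, since the exact formula isn't spelled out here:

```python
RANKS = {r: i for i, r in enumerate("23456789TJQKA", start=2)}

def parse(board):
    """'Ks8h4c' -> (sorted ranks high-to-low, list of suits)."""
    cards = [board[i:i + 2] for i in range(0, 6, 2)]
    ranks = sorted((RANKS[c[0]] for c in cards), reverse=True)
    suits = [c[1] for c in cards]
    return ranks, suits

def suit_pattern(suits):
    """Distinct suit count: 1 = monotone, 2 = two-tone, 3 = rainbow."""
    return len(set(suits))

def similarity(query, candidate):
    """0-100 flop similarity: 70% rank distance, 20% suit pattern,
    10% paired-ness. Normalizations are illustrative."""
    qr, qs = parse(query)
    cr, cs = parse(candidate)
    rank_gap = sum(abs(a - b) for a, b in zip(qr, cr))
    rank_score = max(0.0, 1.0 - rank_gap / 12.0)   # assumed normalization
    suit_score = 1.0 if suit_pattern(qs) == suit_pattern(cs) else 0.0
    pair_score = 1.0 if len(set(qr)) == len(set(cr)) else 0.0
    return round(100 * (0.70 * rank_score
                        + 0.20 * suit_score
                        + 0.10 * pair_score))

print(similarity("Ks8h4c", "Ks8h4c"))  # 100 — exact match
print(similarity("Ks8h4c", "Ks8h3c"))  # one rank off in the low slot
print(similarity("Ks8h4c", "9h7h6h"))  # different ranks AND suit pattern
```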

Reading a result

Each top-3 row shows:

Score 92 ████████████░░░
Board: Ks8h4c (yours Ks8h3c) · SPR 3.4 (yours 3.2)
Pack: 100bb BTN vs BB SRP · BTN faces flop · pot 6.0
[Drill This Spot →]

The diff highlights what’s different between the matched board and yours. In the example above, only the lowest card differs by one rank (4c vs 3c) — that’s a near-equivalent texture for most strategic purposes. SPRs are also nearly identical.

Click “Drill This Spot” and the Session view loads the matched node as a fresh, ungraded spot. You make your decision against the matched solution.

When fuzzy matching breaks down

The score weights work well for typical SRP/3BP study, but watch for:

  • Wet vs dry boards — 9h7h6h (monotone wet) and 9d7s6c (rainbow) can score similarly under the rank-only weight even though strategies are very different. The 20% suit-pattern weight catches most of this; trust the score, but eyeball the board for big texture shifts.
  • Paired vs trips — 8h8c8s (trips) is its own category strategically; rank-only matching might pull Kh8c8s (top pair on a paired board), which plays totally differently. The paired-ness weight helps; trust scores ≥ 85 here.
  • Cross-pack matches — Nearest Solve searches every installed pack. If your query is a 3-bet pot but the only matching board lives in an SRP pack, the strategy will not transfer. Always check the pack name in the result row.

Combining with Spot Builder

For systematic study of one specific board across packs (e.g. “how does AKo play on Q94 in BTN-vs-BB SRP vs CO-vs-BB SRP vs the 3BPs?”), use Nearest Solve with no SPR filter — it’ll surface the same board across multiple installed packs. Then drill them sequentially.

A common workflow

  1. After a session, you remember a hand: BTN, called BB’s 3-bet, flop came Js9d2c, BB checked, you bet 33%, BB raised to 50%. Now what?
  2. Open Nearest Solve, enter:

     – Position: BTN
     – Street: flop
     – Board: Js9d2c
     – SPR: ~3 (3-bet pot SPR is in that range)

  3. Top match comes back: Js9d2c in the BTN-vs-BB 3BP pack, SPR 3.7 — an exact board match, with the SPR diff costing a few tie-breaker points, so the score lands in the high 90s.
  4. Click “Drill This Spot.” Session view loads this node. Now you can drill the entire range’s response to that same x-b33-r50 line, and see what the GTO call/raise/fold mix is for your AKo (or whatever combo you held).

This is the one tool that turns the trainer from “drill random spots” into “study spots that actually came up in my game.”

Progress Dashboard

The Progress tab is your study report card. Every drill attempt gets logged to the local user_attempts table, and this dashboard slices that data into KPIs, trends, and weakness maps so you can answer:

  • Am I getting better?
  • Where do I leak the most chips?
  • Have I been studying consistently?

[Image: Progress dashboard with KPIs, sparkline, heatmap, streak calendar]

KPI strip (top row)

Four cards across the top:

┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ 🎯 ACCURACY   │ │ 📊 ATTEMPTS │ │ 💧 EV LOSS  │ │ 🏆 STREAK    │
│   62.5%      │ │    1,247    │ │  2.25 bb    │ │   8 days     │
│ ↗ trending up│ │  this week  │ │   per spot  │ │ Active today │
└──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘
  • Accuracy — % of attempts graded EXACT or CLOSE. Card accent tints red below 50%, amber 50–70%, green 70%+. The mini sparkline below the number shows the rolling trend.
  • Attempts — total drill count. Bigger number = more trustworthy stats below it.
  • EV loss — average bb lost per spot across all attempts. Lower is better. This is the dollar-equivalent metric — at NL100, 1 bb of avg EV loss across 1000 spots = $1000 left on the table over those decisions.
  • Streak — consecutive days with at least one drill attempt. Shows current and longest. The 🔥 icon turns gold at 7+ days.

Accuracy trend (sparkline)

Embedded under the Accuracy KPI is a rolling-window sparkline showing accuracy over time:

Accuracy ↗
  ┃        ╭──╮  ╭─
  ┃    ╭───╯  ╰──╯
  ┃ ╭──╯
  ┃─╯
  ┗━━━━━━━━━━━━━━━━━
   start         now

Buckets are auto-sized: minimum 5 attempts per bucket, target 20 buckets total. Trend arrow (↗ / → / ↘) reflects the first-vs-last delta if it exceeds 2%. Hidden until you’ve logged ≥ 10 attempts (sparkline with too few points is misleading).
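The bucket auto-sizing rule ("minimum 5 per bucket, target 20 buckets") reduces to one line. A sketch — the function name and parameter defaults are illustrative:

```python
import math

def bucket_size(n_attempts, min_per_bucket=5, target_buckets=20):
    """Attempts per sparkline bucket: aim for ~20 buckets,
    but never fewer than 5 attempts in each."""
    return max(min_per_bucket, math.ceil(n_attempts / target_buckets))

for n in (10, 100, 500, 1000):
    print(n, bucket_size(n))
# 10 attempts -> buckets of 5 (only 2 buckets); 1000 -> buckets of 50
```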

EV-loss histogram

Distribution of how much EV you lose per spot:

  N
  ┃ ████ 380          ← 53% of attempts ≤ 0.05 bb (excellent)
  ┃ ██   145
  ┃ ██   120
  ┃ █    52
  ┃ █    38
  ┃ ▌    24
  ┗━━━━━━━━━━━━━━━━━━
    0  <.1 .1-.3 .3-1 1-2  2+
              EV loss (bb)

Buckets: 0, <0.1, 0.1–0.3, 0.3–1.0, 1–2, 2+. Bars are colored on a green→red severity ramp.

What to look for:

  • Healthy distribution — left-heavy. Most attempts ≤ 0.1 bb, long thin tail.
  • Bimodal — lots of zeros AND lots of 2+ bb errors. Suggests you’re exact on familiar spots but completely wrong on unfamiliar ones. Drill more variety.
  • Center-heavy — big chunk at 0.3–1.0 bb. You’re “kinda right” most of the time but rarely exact. Often a sign of frequency-mode vs EV-mode confusion (read the Understanding Solver Output section).

Weakness heatmap (street × position)

Grid showing accuracy by street and the position you were acting from:

            BTN     BB     SB     CO
   Flop    ▓▓░░   ▓▓▓░   ▓▓▓▓   ▓▓░░
   Turn    ▓▓▓░   ▓░░░   ▓▓▓░   ▓▓▓░
   River   ▓▓▓▓   ▓▓░░   ▓░░░   ▓▓░░

Cells colored red/amber/green by accuracy, alpha-scaled by how far from 50% they sit (so ambiguous N-low cells fade to grey instead of misleading you). Tooltip on hover shows N attempts, accuracy, average loss for that cell.

What to look for: a single dark-red cell is a leak. If “BB on the turn” is consistently red while everything else is green, that’s where to focus. Use Position Practice mode + Street Practice mode together to drill that slice.

Study streak calendar

GitHub-style 7×12 grid covering the last 12 weeks of activity:

M ░ ▒ ▓ ▓ ░ ▒ ▓ ▓ ▒ ▓ ▓ ▓
T ░ ▓ ▓ ░ ░ ▓ ▓ ▓ ▒ ▓ ▓ ▓
W ▒ ▓ ▓ ▒ ░ ▓ ▓ ▓ ▓ ▓ ▓ ▓  ← darker = more attempts that day
T ░ ▓ ▓ ░ ▒ ▓ ▒ ▓ ▓ ▓ ▓ ▓
F ▒ ▓ ▓ ▓ ░ ▓ ▒ ▓ ▒ ▓ ▓ ▓
S ░ ░ ▓ ░ ░ ▒ ░ ▒ ░ ▒ ▓ ▒
S ░ ░ ▓ ░ ░ ░ ░ ▓ ░ ▒ ▓ ▓

Green ramp by daily attempt count. Includes a current-streak counter (🔥 N-day streak) plus active-day total over the window. The legend strip below shows what each green intensity represents (0, 1–9, 10–24, 25+ attempts).

Building a streak is more about consistency than volume — 5 spots a day for 30 days beats 100 spots once a week for retention.

Recent attempts list

Below the dashboards, a scrolling list of your last ~50 attempts. Each row:

2026-04-26 14:32  ·  Q94r flop  ·  AKo  ·  YOU: Call  ·  GTO: Call  ·  EXACT  ·  0.00 bb
2026-04-26 14:31  ·  J84tt flop  ·  77   ·  YOU: Bet 75  ·  GTO: Check  ·  POOR  ·  -1.84 bb
2026-04-26 14:30  ·  T93m flop  ·  AsKs ·  YOU: Bet 33  ·  GTO: Bet 33  ·  EXACT  ·  0.00 bb

Row backgrounds tint by grade tier (red ramp for Close→Ok→Poor; neutral for Exact). EV-loss text shaded amber→orange→red up to 2 bb. Combo and board are rendered in monospace for glance-readability.

Click any row to jump back to that spot in Session view (replay mode — see your previous attempt and the GTO answer).

Mistake patterns (leaks)

When the trainer logs an attempt with non-zero EV loss, it tags the attempt with a mistake_pattern label inferred by MistakeClassifier. Common labels:

  • Over-bluff — you bet/raised when GTO mostly checks/calls
  • Under-defend — you folded when GTO mostly calls
  • Wrong size — right action class but wrong sizing
  • Slowplayed value — checked a hand that GTO bets ≥ 80%
  • Hero-call — called when GTO mostly folds
  • etc.

The “Top Leaks” panel groups your recent attempts by pattern and shows which leaks you commit most often. Drill plan: pick the top leak, switch to a mode that targets it (e.g. Combo Focus on bluff-catchers if your top leak is hero-calling), grind 50 spots in that filter, watch the leak’s count drop.

How long until the dashboard means something

Rule of thumb:

  • < 30 attempts: numbers are noisy. Accuracy can swing 20% from one good spot.
  • 30–100 attempts: trend visible but treat single cells in the heatmap with skepticism.
  • 100–500 attempts: actionable. Heatmap and histogram are reliable; streak is genuine.
  • 500+: detailed leak analysis (top leaks, pattern frequencies) becomes meaningful.

Settings

The Settings tab controls how the trainer behaves and how it talks to you. There aren’t many knobs, on purpose — every setting here matters.

[Image: Settings panel sections: feedback, LLM, sampling, data]

Feedback

Controls what shows up after each drilled spot.

  • Feedback mode

     – Off — just the grade and EV loss. Fastest, cleanest, best when you’re rapid-drilling.
     – Hardcoded — short canned explanations (e.g. “Bluff-catcher with no equity — fold is the standard play here”). No internet needed; no API costs. Pattern-based.
     – LLM — full natural-language explanation from your chosen LLM provider. Slower (1–3 seconds per spot) but more nuanced and contextual.

  • Show coach feedback — toggle the right-hand feedback panel on/off independent of mode (so you can have LLM feedback on but hide the panel for pure speed drills, then turn it back on when you want the explanation).
  • Auto-feedback for incorrect spots only — only generate explanations when your grade is OK or POOR. Saves LLM calls; you don’t need a paragraph telling you why your EXACT call was correct.

LLM provider

If feedback mode = LLM, choose your provider:

  • Anthropic Claude — claude-haiku-4-5 is a good default (fast, cheap). Set ANTHROPIC_API_KEY in env or paste it here.
  • OpenAI ChatGPT — gpt-4o-mini recommended. OPENAI_API_KEY.
  • Google Gemini — gemini-flash recommended. GEMINI_API_KEY.

Keys are stored locally (encrypted in your user profile). Switching providers mid-session is fine; the trainer doesn’t keep state across providers.

Cost expectation: at 100 drills with feedback per session, a Haiku/Flash/4o-mini call costs roughly $0.01–$0.05 total. Don’t sweat it.

Sampling

  • Spot sampling profile (planned) — currently fixed at 40/40/20 (no-bet / single-bet / raise-faced). A future release will expose these as a slider so you can drill more raise spots specifically if you want.
  • Adaptive Difficulty target accuracy — default 55%. Adaptive picks packs you score closest to this number with at least 5 attempts and accuracy in the 30–80% band.
  • Filter persistence — remember last-used filter in Session, or start blank. Default: remember.

Data

  • Database location — %LocalAppData%\DriveGTO\trainer\drivegto.duckdb. Read-only here; if you need to back up or migrate, copy the file with the trainer closed.
  • Pack cache — %LocalAppData%\DriveGTO\trainer\packs\. Holds the original .drvpack files post-install for re-extract or re-install. Safe to clear if disk-pressed; the database keeps the canonical data.
  • Export attempts — dump user_attempts to CSV. Useful if you want to chart in a spreadsheet or share with a coach.
  • Reset progress — wipes user_attempts only. Pack data remains. ⚠️ Irreversible. Confirms twice.
  • Reset everything — uninstalls all packs, wipes attempts, resets settings to defaults. The “factory new” button. ⚠️ Irreversible.

Save semantics

The Save button at the bottom commits any pending changes. Before that, edits show a yellow ● Unsaved changes indicator next to the button.

If you navigate away from Settings (click another tab, or close the window), the trainer prompts:

“You have unsaved changes. Save / Discard / Cancel?”

  • Save — commits and exits.
  • Discard — reverts and exits.
  • Cancel — stays on Settings, you keep editing.

Things you might expect to find here that aren’t

  • Theme / dark mode — coming in a later release. The trainer follows DriveHUD’s theme by default.
  • Per-pack mode toggles — pack-level “supported modes” come from the pack manifest, not settings. Edit the manifest if authoring custom packs.
  • Solver settings — the trainer doesn’t run the solver live; spots are pre-solved at pack-build time. To re-solve with different bet menus / iterations, you rebuild the pack via PackBuilder. See the Pack-Build Concepts manual section.

Understanding Solver Output

This section is the conceptual core of the trainer. If anything you’ve seen in a drill ever made you think “wait, why is GTO recommending fold 83% if the highest-EV play is call?” — read this all the way through. By the end, that screen makes sense.

What a “GTO solution” actually is

When you install a pack and drill a spot, what you’re seeing is the output of a CFR+ solver running over a defined game tree. CFR+ (Counterfactual Regret Minimization, Plus variant) is an iterative algorithm that approximates a Nash equilibrium for the no-limit hold’em subgame defined by:

  • Two players (you and your one opponent — heads-up)
  • A starting pot and effective stack
  • Two starting ranges (yours and theirs)
  • A board
  • An action tree (the menu of bet sizes and raise depths the solver was told to consider)

A Nash equilibrium is a pair of strategies — one for you, one for villain — where neither player can improve their EV by unilaterally deviating. If you change anything about your strategy while villain keeps their equilibrium strategy, your EV goes down or stays the same. Same in reverse for villain.

That’s the whole game theory premise. Everything below is consequences of it.

Strategies are mixed, not pure

The first thing that throws people: a GTO strategy isn’t “with hand X, do action Y.” It’s “with hand X, do action A with probability p, action B with probability q, action C with probability r.” The frequencies sum to 1.0.

Why mixing? Because pure strategies are exploitable. If you only ever raise with AA on a given board, villain adapts: they fold whenever you raise (so your AA earns nothing) and attack whenever you don’t (they know AA isn’t in your range). Mixing makes you unreadable.

A spot might output:

AKo on Q94r facing all-in (16bb to call, pot 40.5):
  Call 6.5%
  Raise 10.1%
  Fold  83.5%

Read: with AKo here, you should call 6.5% of the time, raise 10.1%, fold 83.5%. Across many instances of this exact decision, those are the proportions that keep villain from exploiting you.

The indifference principle

Here’s the key insight that resolves the “why does fold 83% have lower EV than call?” puzzle:

In a converged Nash equilibrium, every action that you mix into with nonzero frequency has the EXACT SAME EV.

That’s the indifference principle. The strategy isn’t “fold 83% because folding is better” — it’s “fold 83% because fold, call, and raise all have identical EV at the equilibrium, and 83% is the proportion of fold that makes villain indifferent between bluffing and value-shoving.”

Think of it from villain’s perspective. If you fold too little, villain can exploit you by never bluffing — they always have the goods, and you’re calling too often with no equity. If you fold too much, villain exploits you by bluffing more — they print money. There’s exactly one fold frequency that makes villain indifferent between bluff and value: that’s the equilibrium fold rate.

So the 83% fold is not a recommendation to fold. It’s the rate that locks villain in equilibrium. And the EVs of fold/call/raise tie at the equilibrium fold rate — that’s the mathematical definition of indifference.
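For a pure bluff, the fold frequency that achieves indifference drops out of one equation. A minimal sketch — note this computes a range-wide fold frequency, which is distinct from the per-combo 83.5% above, and the pot/bet numbers are purely illustrative:

```python
def indifference_fold_freq(pot, bet):
    """Hero's range-wide fold frequency that makes a pure bluff
    exactly zero-EV for villain:
        fold% * pot - (1 - fold%) * bet = 0  =>  fold% = bet / (pot + bet)
    """
    return bet / (pot + bet)

# Villain shoves 16bb into a 24.5bb pot (illustrative numbers):
f = indifference_fold_freq(24.5, 16)
print(round(f, 3))  # 0.395 -> hero must continue with ~60% of range

# At exactly that frequency, a pure bluff neither gains nor loses:
ev_bluff = f * 24.5 - (1 - f) * 16
print(abs(ev_bluff) < 1e-9)  # True
```

Fold any more often than that and bluffing becomes profitable for villain; fold any less and bluffing becomes a losing play — which is exactly the lock-in described above.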

Why the EVs in your trainer DON’T tie

If indifference is true, why do you see numbers like:

Call -10.72 bb
Raise -13.66 bb
Fold -12.25 bb

with a 1.5–3 bb gap between them? Three reasons:

1. Convergence residual

CFR+ approximates Nash equilibrium iteratively. Each iteration tightens the strategy a bit. With infinite iterations every mixed action’s EV would tie exactly. With finite iterations there’s a small residual gap — the solve is “close to but not at” equilibrium.

Your packs solve at 30 iterations with an exploit target of 5 bb/100. That’s a deliberate trade-off: 30 iterations is fast (so we can solve thousands of boards), 5 bb/100 is “good enough for most spots” — but it leaves residual error of typically 0.5–3 bb between mixed actions.

If you ran the same spot at 1000 iterations the gap would shrink to ≤ 0.05 bb. Same strategy frequencies, tighter EVs.

This is why the trainer also offers Exact Verify Drill mode — runs a fresh, more thoroughly converged solve on the current spot to compare. If the verified solution has the same frequencies but tighter EV ties, it’s just convergence noise. If it has different frequencies, the stored answer might genuinely be off.

2. Action discretization

Real poker has continuous bet sizes. Solvers discretize: bet 33%, bet 75%, raise 50%, etc. The “true” GTO strategy might want to bet 41% but the solver can only choose between 33 and 75. So all “size” strategies are slight approximations.

Look at your action menu in the trainer to see what sizes were available. If you ever feel a spot is “between sizes,” that’s why.

3. Range / opening simplifications

The pack defines starting ranges (e.g. “BTN opens 22+, A2s+, K9s+, …”). If villain in real life is opening a slightly different range, the equilibrium changes. Your stored solution is right for the spec’d ranges; it might be slightly off for your specific opponent.

This isn’t an error — it’s the cost of having a pre-solved library vs. running a custom solve every time.

Why the trainer grades by EV, not frequency

If equilibrium frequencies are the “real” answer, why does the trainer mark you EXACT for picking the highest-EV action?

Two practical reasons:

Frequency-based grading is probabilistic

If GTO says “fold 83%, call 6.5%, raise 10.1%,” then each of those actions is a “valid” play — just at different rates. A single fold isn’t 83% right; it’s only right if your randomizer happened to land in the fold region this time. To grade fairly under frequency-mode, you’d need to:

  1. Roll a random number against the strategy distribution.
  2. Mark you EXACT only if your action matches the rolled answer.

Two problems: (a) you’d get marked POOR for picking call when call has the same EV as fold (both equilibrium-correct), and (b) over many drills, your accuracy would converge to roughly the most-frequent action’s frequency — meaningless as a learning signal.

EV-based grading is unambiguous

Pick the action with the highest EV. That’s chip-maximal given the solver’s villain strategy. If you pick differently, you’ve left chips on the table — measurable in bb. Clean grading, instant feedback, comparable across spots.

The trade-off: in spots where actions tie or near-tie at equilibrium, picking “the wrong one” gives you 0.0001 bb EV loss, which the trainer rounds to EXACT or CLOSE anyway. So in practice, EV-grading and frequency-grading agree on “is this a reasonable play” — they only disagree on which action to mark as “the” answer.
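EV-based grading reduces to a max-and-subtract. A sketch using the EV numbers from earlier — the EXACT/CLOSE cutoffs below are illustrative assumptions, not the trainer’s actual thresholds:

```python
def grade(evs, chosen, exact_tol=0.05, close_tol=0.5):
    """Grade an action by its EV loss against the best available action.
    evs: dict action -> EV in bb (vs. the solver's equilibrium villain).
    Tolerance values are assumptions for illustration."""
    best = max(evs.values())
    loss = best - evs[chosen]      # bb left on the table, always >= 0
    if loss <= exact_tol:
        return "EXACT"
    if loss <= close_tol:
        return "CLOSE"
    return "POOR"

evs = {"call": -10.72, "raise": -13.66, "fold": -12.25}
print(grade(evs, "call"))   # EXACT (highest EV, loss = 0)
print(grade(evs, "fold"))   # POOR  (~1.53 bb loss)
```

Note that all EVs here are negative — what gets graded is the relative gap, not the sign.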

Pure hands vs frequency hands

In every spot, hero’s range divides into roughly three categories:

Pure-action hands (~60–80% of the range)

These play one action ≥ 95% of the time — almost always the best and the worst hands in the range.

  • AA on a dry board, no obvious draws — bet 99%+ for value
  • 7-2 offsuit on AKK — fold 99%+ to any bet (no equity, no fold equity)

The trainer marks you EXACT for the right pure action and POOR for anything else. These are the easy spots — they should be 95%+ of your accuracy.

Frequency hands (~15–30% of the range)

Hands that mix two or three actions at meaningful rates. Marginal value, semi-bluffs, blockers.

  • KQo on an ace-high board — sometimes call as a bluff-catcher with backdoor equity, sometimes fold
  • AKo on Q94 facing a shove (the example earlier) — mostly fold, sometimes call, sometimes raise

The 83.5%/10.1%/6.5% example is a frequency hand. The trainer marks you EXACT for the highest-EV action, but mathematically any of the three actions is equilibrium-correct. If your accuracy on frequency hands is 60–70%, you’re doing fine. Don’t beat yourself up for picking call when GTO says fold most of the time — they’re tied in EV.

Trap hands (~5–10%)

Edge cases where the right play is highly board-dependent. Slow-played monsters (set on monotone), check-jamming bluffs (no-equity hands with the right blocker), etc. These are where exact convergence matters most and where the solver’s sizing approximation hurts most.

Use the heatmap on the Progress dashboard — if you’re consistently OK/POOR on the same combo class, that’s your weakness slice.

Live play vs. solver play

Here’s where it gets interesting. The trainer’s “GTO best” is chip-maximal against the solver’s villain. In real poker, your villain is not playing solver-perfect. They’re leaking somewhere.

Two regimes for choosing your action:

Stay unexploitable (mix per equilibrium frequencies)

If you’re playing against a strong opponent who could adapt, mixing per the GTO frequencies prevents them from picking up patterns. They can’t exploit you because, on average, your distribution of actions is the equilibrium one.

In practice: most live and online players cannot adapt fast enough for unexploitability to matter much. But against high-stakes regs, it does.

Maximally exploit (pick the highest-EV action assuming villain is also at equilibrium)

This is what the trainer rewards — pick the chip-maximal action assuming your opponent is the spec’d range playing the spec’d strategy.

If your villain is over-folding, this exploits them (you bluff more). If your villain is over-calling, this also exploits them (you value-bet more). The “highest-EV action vs equilibrium villain” is usually a reasonable exploit because the equilibrium villain is roughly the average player at most stakes.

The hybrid

Most pros mix some heuristic exploit on top of GTO base. The trainer is teaching you the GTO base. Once that’s solid, you adjust based on player notes / HUD reads in actual play.

What to take away

  1. The strategy frequency tells you “what mix keeps villain indifferent.” Not “what’s the best play.”
  2. The EV tells you “what action is chip-maximal vs equilibrium villain.” That’s what the trainer grades.
  3. In a converged solution, mixed actions tie in EV. If you see a gap, it’s convergence/discretization residual — small (< 3 bb), meaningful but not catastrophic.
  4. Drill the pure-action spots to ground out fundamentals. Drill frequency-hand spots to learn which combos mix and roughly how much.
  5. Don’t over-index on individual frequency-hand grades. Trends across many attempts are the signal.
  6. In real play, the trainer’s “GTO best” is your default. Deviate when you have strong reads on villain’s leaks.

Further reading

  • “The Mathematics of Poker” — Bill Chen and Jerrod Ankenman. The theoretical foundations.
  • “Modern Poker Theory” — Michael Acevedo. GTO concepts applied to NLHE specifically. Strong intuition-builder.
  • “Play Optimal Poker” — Andrew Brokos. Plain-English version of equilibrium concepts. Good first read if math-heavy texts intimidate you.
  • The PioSolver and GTO+ blogs both have free articles explaining solver output. Treat them as supplements; the underlying math is identical to what your trainer is doing.

Pack-Build Concepts

This section is for power users who want to understand what’s in a pack and why some scenarios are easier or harder to pack-solve. You don’t need this to drill spots, but it helps when you’re choosing what to install or wondering why a given pack misses certain boards.

What a pack covers

A pack is one specific scenario, fully solved post-flop. The pack spec defines:

Field Meaning
pack_id Canonical identifier (e.g. nlhe-100bb-btn-vs-bb-srp-flop-v1)
format NLHE / NLHE-shortdeck / etc. (current packs are all NLHE)
stack_depth_bb Effective stack size at the start of the hand (100, 200, 50, etc.)
hero_position / villain_position Heads-up matchup
pot Pot size at the start of the flop, in bb
effective_stack Behind-pot stack at the start of the flop, in bb
hero_range / villain_range Starting ranges as PokerStove notation (22+, A2s+, ...)
streets Which streets the solver outputs strategies for (currently [flop])
bet_sizes Bet menu in pot-percent terms (e.g. [33, 75])
boards Which flops to enumerate (all-canonical-flops = 1755, or curated-flops-N)
solver.iterations CFR+ iteration count (30 for current packs)
solver.exploit_target Convergence stop if exploitability drops below this in bb/100
node_family Tag used by the trainer’s filters (srp_ip_cbet, 3bp_ip_cbet, etc.)

The runtime then enumerates every board, runs profile_driver.exe (the GPU-accelerated CFR+ solver) per board, parses the output JSON, and writes the results to parquet.

SRP vs. 3BP vs. 4BP geometry

The starting pot and effective stack depend on what happened pre-flop. Stack-Pot Ratio (SPR) summarizes this in one number: SPR = effective_stack / pot.

Scenario                              pot    effStack   SPR
SRP (BTN open, BB call, 100bb)        6.0    97         16.2
SB-vs-BB SRP (limp/3x or open/call)   6.0    97         16.2
CO-vs-BTN SRP                         6.0    97         16.2
BTN-vs-SB 3BP (SB 3-bet to 9bb)       19.0   91         4.8
BTN-vs-BB 3BP (BB 3-bet to 12bb)      24.5   88         3.6
4-bet pot (4-bet to ~25bb)            ~50    ~75        1.5
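The SPR column is just the one division, applied to the pot/effStack pairs above:

```python
# (pot_bb, effective_stack_bb) at the start of the flop, per scenario:
scenarios = {
    "SRP (100bb)":   (6.0, 97),
    "BTN-vs-SB 3BP": (19.0, 91),
    "BTN-vs-BB 3BP": (24.5, 88),
    "4-bet pot":     (50.0, 75),
}
sprs = {name: round(eff / pot, 1) for name, (pot, eff) in scenarios.items()}
print(sprs)
# {'SRP (100bb)': 16.2, 'BTN-vs-SB 3BP': 4.8, 'BTN-vs-BB 3BP': 3.6, '4-bet pot': 1.5}
```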

SPR shapes the entire flop tree:

  • High SPR (16+) → many betting rounds available → larger trees → more variety
  • Low SPR (1.5–5) → quickly committed → trees often terminate at a single bet
  • SPR ≈ 1 → “set-mining” geometry; flop bet often equals all-in

A pack must specify pot and effStack correctly for its scenario. The PackBuilder defaults are SRP-only (pot=6, effStack=97). 3BP and 4BP packs MUST override pot: and effective_stack: in the YAML. If you’re authoring a new pack and forget this, the entire solve is wrong — the strategies are computed against the wrong pot, the bet-percent labels are nonsensical, and the EVs are off by an order of magnitude. (This was bug #2 of the v1.1.0 series.)

Bet menu and raise depth

The bet menu controls what choices the solver has at each node. Two parameters:

Bet sizes (bet_sizes in spec)

Pot-percent sizes the solver may bet first (no prior bet on this street). E.g. [33, 75] means the solver chooses among check, bet 33% pot, bet 75% pot at the root of each street.

Wider menus = more accurate solutions but bigger trees and slower solves. [33, 75] is the standard for v1 — it covers small-cbet and big-cbet sizing without exploding the tree.

Raise sizes (hardcoded in PackBuilder)

When facing a bet, the solver chooses among call, fold, and raise. Raise sizes default to [50] in v1.1.0.046 — a single 50%-pot raise (call the bet, then raise by half of the resulting pot).

Earlier builds used [33, 60, 100] which produced unrealistic flop pots when combined with deep raise trees. The single 50% raise keeps re-raise lines from cascading.
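The sizing convention as described — call first, then raise by a percentage of the resulting pot — can be sketched as follows; the helper name and the exact convention are illustrative assumptions:

```python
def raise_to(pot_bb, bet_bb, pct=50):
    """Total raise-to amount for a pct%-pot raise: call the bet,
    then raise by pct% of the pot after the call. Sketch of the
    convention described in the text, not the solver's internals."""
    pot_after_call = pot_bb + 2 * bet_bb
    return bet_bb + (pct / 100) * pot_after_call

# Villain bets 4.5bb into a 6bb pot; a 50%-pot raise is to:
print(raise_to(6, 4.5))  # 12.0
```

With only one such size per node and raise_limit 2, pots grow arithmetically rather than cascading — which is why the single [50] menu keeps flop pots realistic.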

Raise depth (set_raise_limit in template)

How many raise levels per street. v1.1.0.046 uses 2 — meaning the action on a street can go bet → raise → call/fold. No 3-bet on the flop. This matches typical real-game frequency: 3-bets on the flop are rare even at high stakes.

If you build a custom pack, raise_limit 3 is the next reasonable step (allows occasional 3-bets); raise_limit 4 (the solver default) is generally unrealistic and bloats trees.

All-in threshold

The solver auto-adds an “all-in” action to any node where the next normal raise would commit at least a fraction N of the remaining stack. v1.1.0.046 sets N to 0.95 — all-in is only auto-added when the standard raise tree would already be nearly all-in anyway. Earlier builds used 0.67, which produced gratuitous flop shoves at SPR 5+.

In practice, with raise sizes [50] and raise_limit 2, you almost never hit the 0.95 threshold on the flop — the deepest reachable pot before commitment is well below the all-in cliff. So flop shoves are rare in current packs and only appear when the geometry genuinely requires them (very low SPR 4-bet pots, mostly).
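The threshold rule itself is a one-line comparison. A sketch — exactly which quantity the solver compares against the stack (raise-to amount vs. total chips committed) is an assumption here:

```python
def auto_allin(raise_commit_bb, remaining_stack_bb, threshold=0.95):
    """All-in auto-add rule as described: add an all-in action when
    the next standard raise would commit >= threshold of the stack
    behind. v1.1.0.046 uses threshold=0.95; earlier builds 0.67."""
    return raise_commit_bb >= threshold * remaining_stack_bb

print(auto_allin(12, 97))   # False -- typical SRP flop raise, far from the cliff
print(auto_allin(93, 97))   # True  -- 93 >= 0.95 * 97 = 92.15
```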

Boards: all canonical vs. curated

There are 22,100 distinct three-card flops in NLHE (52 × 51 × 50 / 6, counting every suit assignment separately). Many flops are strategically isomorphic under suit relabeling — Ah Kh Qh and Ad Kd Qd play identically (both monotone broadway flops). De-duplicating by suit yields 1755 canonical flops.
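The 22,100 → 1755 reduction is easy to verify by brute force — canonicalize each flop by taking the minimum over all 24 suit relabelings, so suit-isomorphic flops collapse to one representative:

```python
from itertools import combinations, permutations

DECK = [(rank, suit) for rank in range(13) for suit in range(4)]
SUIT_PERMS = list(permutations(range(4)))

def canonical(flop):
    # Minimum over all 24 suit relabelings of the sorted card tuple.
    # Invariant under suit permutation, so Ah Kh Qh and Ad Kd Qd
    # map to the same representative.
    return min(
        tuple(sorted((r, perm[s]) for r, s in flop))
        for perm in SUIT_PERMS
    )

flops = list(combinations(DECK, 3))
n_canonical = len({canonical(f) for f in flops})
print(len(flops), n_canonical)  # 22100 1755
```

The 1755 breaks down as 13 trips boards + 312 paired boards + 1430 unpaired boards (286 rank sets × 5 suit patterns: monotone, rainbow, and three two-tone arrangements).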

all-canonical-flops

Every canonical flop. Exhaustive coverage; pack file ~25 MB. Used for the BTN-vs-BB SRP pack — the most-studied scenario, worth full coverage.

curated-flops-N (e.g. curated-flops-500)

Stratified sampling: bucket the 1755 boards by texture (paired, monotone, two-tone wet, dry rainbow, etc.), then pick every Nth board within each bucket so the curated set has balanced texture representation. Used for the CO-vs-BTN, SB-vs-BB, and 3BP packs — saves solve time, covers texture variety.

500 boards is the v1 sweet spot. 100 is too sparse (some textures have just 5–10 boards represented). 1000 doubles solve time without much marginal coverage.
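The bucket-then-stride idea can be sketched in a few lines; this is illustrative only, and PackBuilder’s real bucketing logic may differ in detail:

```python
from collections import defaultdict

def curate(boards, texture_of, n):
    """Stratified board sample: bucket by texture, then take an evenly
    strided slice of each bucket so every texture keeps proportional
    representation. Sketch of the curated-flops-N idea."""
    buckets = defaultdict(list)
    for b in boards:
        buckets[texture_of(b)].append(b)
    out = []
    for tex in sorted(buckets):
        bs = buckets[tex]
        quota = max(1, round(n * len(bs) / len(boards)))
        step = max(1, len(bs) // quota)     # "every Nth board"
        out.extend(bs[::step][:quota])
    return out

# Toy demo: 1755 fake boards spread across 5 textures
boards = [(i, f"tex{i % 5}") for i in range(1755)]
sample = curate(boards, lambda b: b[1], 500)
print(len(sample))  # 500, with every texture represented
```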

Why some boards fail to solve

You’ll occasionally see a pack ship with “496/500 boards” or “1751/1755” — a few boards failed to solve and were excluded.

Transient solver crashes (~0.5–2% rate)

The solver occasionally crashes with 0xC0000409 (STACK_BUFFER_OVERRUN) or 0xC0000005 (ACCESS_VIOLATION) on specific boards. These appear non-deterministic — re-running the same board may succeed. PackBuildRunner retries once, then falls back to CPU-mode (which uses different code paths and sometimes succeeds where GPU failed). If both fail, the board is excluded from the pack.

A few boards are consistently hard: AcKd8h, Ac8d4h, KcQd3h have shown up across multiple pack builds. These are likely tickling a real solver bug, but it’s not been root-caused.

GPU VRAM exhaustion

Very wide ranges (e.g. SB ~42% open + BB ~55% defend) on a single board can exhaust 8GB VRAM mid-iteration. The solver returns “failed to create readback buffer” or “scratch buffer overflow.” PackBuildRunner falls back to CPU automatically — slow but works. If CPU also fails, the board is excluded.

Bad spec (don’t blame the solver)

If a board fails consistently on every retry on every machine, double-check:

  • Is the range syntax valid? 22+, A2s+ is fine; 22+,Ax+ is not.
  • Is pot/effective_stack reasonable? A pot of 0 or effStack < pot crashes the solver.
  • Is the bet menu coherent? bet_sizes: [200, 400] (overbet only, no normal sizes) can be hard for the solver to converge.

Pack file size

Rough sizes for v1.1.0.046 packs:

  • 3BP curated 500 boards — ~5 MB (smallest tree, low SPR)
  • SRP curated 500 boards — ~17 MB (deep SPR, more nodes)
  • SRP all 1755 boards — ~70 MB (fully exhaustive)

Storage: each board ≈ 10 nodes × 300 strategies × 4 floats per strategy. The bulk of the file is the per-combo strategy table.
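That estimate roughly reproduces the shipped file sizes; a back-of-envelope check, assuming float32 values (the 4-bytes-per-float figure is an assumption):

```python
# All-flops SRP pack, using the per-board estimate from the text:
boards, nodes_per_board, strategies_per_node, floats = 1755, 10, 300, 4
raw_bytes = boards * nodes_per_board * strategies_per_node * floats * 4

# ~80 MB uncompressed; parquet's columnar compression brings the
# shipped file down to roughly the ~70 MB quoted above.
print(f"{raw_bytes / 2**20:.0f} MB raw")  # 80 MB raw
```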

Re-extract vs. re-solve

When a bug is found:

  • Re-extract — only the extractor (post-solver) changed. Solver JSON is reusable. ~3 minutes per pack. Used for: EV scaling fix, board texture inference fix, range snapshot fix.
  • Re-solve — the spec or solver behavior changed. JSONs are stale, must re-run profile_driver. ~30 min to several hours per pack. Used for: bet menu changes, pot/effective_stack fixes, raise depth changes.

The v1.1.0.046 fixes (raise_limit 2, raise sizes [50]) require re-solve. Once that’s done, the EV scaling and action-history extraction fixes from v1.1.0.043 are already baked in via the extractor.

Verifying a custom pack

After building, run tools/DbProbe (in the dev kit) pointed at the extracted nodes.parquet:

total       = 4960
distinct_id = 4960     ← should equal total (no PK collisions)
dupes       = 0        ← should be 0
ev_min      = -91.2    ← should fall within ±stack_depth_bb
ev_max      =  88.7
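The same invariants can be checked in plain Python over the extracted rows. In this sketch the field names id and ev are assumptions standing in for the real parquet schema, and the rows are toy data:

```python
def sanity_check(rows, stack_depth_bb=100):
    """The invariants DbProbe reports: unique node IDs (no PK
    collisions) and EVs bounded by +/- stack_depth_bb.
    Field names 'id' and 'ev' are illustrative assumptions."""
    ids = [r["id"] for r in rows]
    evs = [r["ev"] for r in rows]
    assert len(ids) == len(set(ids)), "duplicate node IDs (PK collision)"
    assert all(abs(e) <= stack_depth_bb for e in evs), "EV outside +/- stack"
    return len(rows), min(evs), max(evs)

# Toy rows standing in for a real pack's nodes table:
rows = [{"id": i, "ev": (-1) ** i * (i % 90)} for i in range(4960)]
print(sanity_check(rows))  # (4960, -89, 88)
```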

Then spot-check in the trainer:

  1. Install the pack.
  2. Drill 5–10 spots, eyeball the pot/SPR/bet sizes.
  3. Pick a known-easy combo (AsAh on a dry board) and verify it bets ≥ 80%.
  4. Pick a known-fold (72o on AKK) and verify it folds ≥ 80%.

If those sanity checks pass, the pack is ready to ship.

Glossary

Plain-English definitions of every term used in this trainer and the manual. Sorted alphabetically.

Action history

The sequence of actions taken on the current street up to the current decision point. Encoded as short tokens separated by - within a street and / between streets. Example: x-b75-r50 reads “check, then bet 75% pot, then raise by 50% of the new pot.”

All-in (allin)

A bet or raise that puts a player’s entire remaining stack in the pot. Encoded as allin token. In current v1 packs, all-in actions only appear on shallow-SPR spots where the geometry forces it; you won’t see flop shoves in standard SRP packs.

Auto-feedback

Trainer setting to only generate LLM explanations when your grade is OK or POOR (skipping correct answers). Saves on LLM costs.

bb (big blind)

The unit of measure for everything in the trainer. A 100bb stack at NL200 ($1/$2) is $200. Pot of 6 bb = $12. EV loss of 1 bb = $2 left on the table for that decision. Always think in bb, not dollars.

bb/100

bb won per 100 hands. Standard winrate metric in cash games. 5 bb/100 is a strong NL winrate; 10+ bb/100 is exceptional.

Bet depth

How many bet/raise tokens are in the action history for the current street. 0 = no facing bet, 1 = facing single bet, 2 = facing raise (bet→raise), 3+ = facing 3-bet (bet→raise→re-raise). Used by the spot sampler for the 40/40/20 bucket weighting.

Big blind position (BB)

The seat that posts the big blind pre-flop. Out of position post-flop in heads-up SRPs (BB acts first on every street).

Bluff-catcher

A hand with marginal showdown value — beats villain’s bluffs, loses to villain’s value. KQ on Q-high paired board, for example. Often a frequency hand: sometimes called as a bluff-catcher, sometimes folded.

Board texture

Categorical classification of a flop based on rank distribution, suitedness, and pairing. The trainer uses buckets like “monotone-high,” “two-tone-low-paired,” “rainbow-broadway-dry,” etc. Used for stratified board sampling and for the texture-lab training mode.

Bookmark

Star a specific (node, combo) pair to revisit. Stored in bookmarked_spots table. Drill them in Bookmarked mode.

Button (BTN)

The seat with the dealer button. Acts last post-flop in any pot it’s involved in. Strongest position in NLHE.

c-bet (continuation bet)

A bet by the previous-street raiser on the next street. E.g. BTN raises pre-flop, then bets the flop = BTN’s c-bet. Both packs and the broader literature use “c-bet” interchangeably with “first-in bet” on flop in SRPs.

CFR / CFR+

Counterfactual Regret Minimization (and its Plus variant). The iterative algorithm the solver uses to approximate Nash equilibrium. CFR+ converges faster than vanilla CFR; standard for modern poker solvers.

Check-raise

Hero checks, villain bets, hero raises. The action history x-b75-r50 from hero’s POV means hero checked, villain bet 75%, hero raised by 50% — that’s a check-raise. The trainer surfaces it as “Facing check-raise” when villain is the one check-raising.

Combo

A specific two-card hand. AsKs is one combo, AsKc is another. AKs is a category of 4 combos (one per suit). 22 is a category of 6 combos. NLHE has 1326 starting combos, 169 categories.

Convergence (and exploitability)

How close the solver’s output is to true Nash equilibrium. Measured in bb/100 of exploitability — the bb/100 a perfect counter-strategy could win against this strategy. Lower is better. Our packs converge to ~5 bb/100 (good for fast solving, leaves residual EV gaps between mixed actions).

DbProbe

Internal tool (tools/DbProbe/) that opens a pack’s nodes.parquet, runs sanity SQL, prints stats. Used to verify packs before ship.

Effective stack

The smaller of the two players’ remaining stacks. Determines the geometry of the hand — you can never play more than the effective stack. In a 100bb hand, effStack starts at 100bb minus whatever’s already in the pot.

EV (expected value)

The chip outcome of an action averaged across all possible villain responses, weighted by villain’s strategy frequencies. EV of -10.72 bb means “if I take this action repeatedly in this exact spot, I lose 10.72 bb on average per attempt.” Negative EVs are common in spots where you’ve already invested chips; what matters is the relative EV between your action options.

EV loss

chosen_EV - best_EV. Always ≤ 0. The bb cost of picking a non-optimal action. The trainer’s primary grading metric.

Equilibrium

Shorthand for “Nash equilibrium.” See Understanding Solver Output for the full breakdown.

Equity

Probability of winning the pot at showdown given the cards revealed so far and the cards to come. 7s7c has roughly 90% equity vs AhAd on a J-7-2 flop — the set is far ahead, and the overpair is drawing to two outs plus backdoors. Used heavily in pre-flop / pre-action analysis.

Exact Verify Drill

Training mode that runs a fresh CFR+ solve on the current spot (deeper convergence than the stored pack solution) and shows you the diff. Slow per spot but useful for sanity-checking individual spots.

Exploit / exploitable

A strategy is “exploitable” by N bb/100 if a perfect counter-strategy could win N bb/100 against it. GTO equilibrium is by definition unexploitable (0 bb/100 exploitable). All real solver outputs are slightly exploitable due to convergence residual.

Fold equity

The probability that a bet makes villain fold. A 75%-pot bet might have 40% fold equity — meaning 40% of villain’s range folds to it. Bluffs need fold equity to be +EV.

Frequency

The probability that the GTO strategy mixes a specific action. “Call 6.5% / Raise 10.1% / Fold 83.5%” means the action probabilities for a given combo at a given node.

Frequency hand

A hand that mixes two or more actions at meaningful rates. AKo facing all-in on Q94 (mixing fold/call/raise) is a frequency hand. See Understanding Solver Output.

GTO (Game-Theory Optimal)

Strategy that approximates Nash equilibrium — unexploitable in repeated play. The whole reason the trainer exists.

Heads-up (HU)

Two-player game. All v1 packs are HU.

Hero / villain

You / your opponent. The trainer always frames spots from hero’s POV.

In position (IP) / Out of position (OOP)

IP acts last post-flop. OOP acts first. BTN is IP vs every other position; the blinds are OOP vs everyone else. (Heads-up is the exception: the SB holds the button and is IP post-flop.)

Indifference principle

In Nash equilibrium, every action with nonzero mixing frequency has identical EV. The frequencies are chosen to make the opposing player indifferent between actions, not because some actions are intrinsically better than others. See Understanding Solver Output.

Iteration (solver)

One pass of the CFR+ algorithm updating regrets and average strategy. More iterations = closer to Nash equilibrium. Current packs: 30 iterations.

Line / line play

A specific sequence of actions across streets. The trainer’s “Continue Line” feature continues drilling along the same line (e.g. you bet flop → villain calls → trainer hands you the turn decision in the same line).

Mixed strategy

A strategy that randomizes between actions at fixed frequencies. Vs a pure strategy that always picks one action.

Monotone

A flop with all three cards the same suit. Ah Kh 7h is monotone hearts. Plays very differently from rainbow boards (more flush draws, less pairing equity).

Nash equilibrium

A strategy pair (one for each player) where neither can improve EV by unilaterally deviating. The mathematical “solution” the solver is computing.

Node

A decision point in the game tree. Each node has a board, action history, acting player, pot, and SPR. The pack’s nodes.parquet lists every node; you drill one node at a time.

Node ID

64-bit hash of a node’s canonical fields (board, action_history, positions, pot, etc.). Salted with pack_id in v1.1.0.025+ to prevent cross-pack collisions on install.

Pack

A .drvpack file = a pre-solved scenario library. See Pack Browser.

Pot odds

The price you’re getting on a call. Call 16 to win 56 → 16/72 = ~22% pot odds. You need ≥ 22% equity to break even on the call.

Pre-flop

The betting round before the flop. v1 packs don’t drill pre-flop spots — pre-flop ranges are baked into the spec, post-flop is what gets solved.

Pure-action hand

A hand that plays one action ≥ 95% of the time at this node. AA on a dry low board pure-bets; 72o on AKK pure-folds.

Range

A set of starting hands a player could have. Encoded as PokerStove notation: 22+, A2s+, KTs+, ... Each pack defines hero and villain starting ranges.

Range snapshot

The trainer’s node_ranges.parquet table — for each node, hero and villain’s starting ranges with board-card removal applied (cards on the board can’t be in any range).

Re-raise / 3-bet

A raise after a raise. Pre-flop: open → re-raise = “3-bet.” Post-flop: bet → raise → re-raise. The trainer labels post-flop 3-bets as “Facing 3-bet” in the facing-action label.

Reach probability / reach-weighted

The probability that a specific node is reached in actual play. Computed by multiplying strategy frequencies along the path from root to node, weighted by hero’s range. Reach-weighted sampling (Option B in the architecture doc) is a future enhancement; current sampler uses the bet-depth bucket approximation.

Single-raised pot (SRP)

A pot where one player raised pre-flop and the other called. No 3-bet. Most common pot type. Standard 100bb SRP pot is 6 bb at the flop.

Spec (pack spec)

The YAML file defining a pack. See Pack-Build Concepts.

SPR (Stack-to-Pot Ratio)

effective_stack / pot. SPR 16 = lots of room to maneuver post-flop. SPR 1 = next bet probably commits. Determines flop strategy more than nearly any other variable.

Strategy

A mapping from “decision points” to “action probability distributions.” The solver outputs the strategy at every node for every combo in hero’s range.

Suited (s) / offsuit (o)

AKs = same-suit AK (4 combos). AKo = different-suit AK (12 combos). Notation appears throughout the trainer and pack docs.

Texture

See “Board texture.”

Three-bet pot (3BP)

A pot where someone re-raised pre-flop and the original raiser called. Smaller SPR (~3.6–4.8 at 100bb), tighter ranges, more polar strategies on the flop.

Tree (game tree)

The graph of all reachable decision points from the root of the hand. Each pack’s nodes table is the decision points in this tree.

Villain

Your opponent in the spot. The solver assigns villain a strategy at equilibrium too — the trainer just displays your side.

Troubleshooting & FAQ

If you’ve hit something weird, scan this list before filing a bug. Most things on this page have a known cause.

Pack install issues

“Pack conflicts with N already-installed pack(s) that share spot IDs”

Cause: the pack you’re installing has node IDs that collide with another already-installed pack. Older packs (pre-v1.1.0.025) compute node IDs without salting by pack_id, so two unrelated packs can produce the same ID.

Fix: uninstall the conflicting pack first (the dialog names it). If the conflict is between two packs you both want, ask for a pack-id-salted rebuild of the older one — current builds (v1.1.0.025+) are salted and won’t collide.

Install hangs or app freezes during install

Cause: large packs (> 50 MB) used to ingest on the UI thread, blocking the app for ~30 seconds. Two of the v1 ship packs were pulled from the bundle for this reason.

Fix: make sure you’re on v1.1.0.042 or newer — install moved off the UI thread there. If you’re stuck mid-install on an older build, kill the process, delete %LocalAppData%\DriveGTO\trainer\drivegto.duckdb, and reinstall.

“Solver did not emit expected JSON” when building a pack

Cause: profile_driver.exe needs --resource-path pointing at GTOSolver-cpp/resources/. Without it, the solver fails to load comparer files and silently produces no output.

Fix: if you’re running PackBuilder by hand, always pass --resource-path C:\Dev\DriveGTO\GTOSolver-cpp\resources. The default run_all_v046.bat already includes it.

Pack content looks wrong

“Pot is 60bb on a flop SRP — that can’t be right”

Cause: the pack was built before v1.1.0.046. Earlier raise menus ([33, 60, 100] raise sizes) plus raise_limit=4 (solver default, not overridden) produced unrealistic flop pots.

Fix: uninstall and reinstall a v1.1.0.046+ rebuild of the pack. Pots will cap at ~32 bb on SRP flops, ~91 bb on 3BP flops.

EVs are in the thousands (“EV: 5448 bb”)

Cause: the pack was extracted before v1.1.0.043. The native solver emits EVs in centibb (chips, where 100 chips = 1 bb). Old extractor wrote them straight to parquet without dividing by 100.

Fix: reinstall a v1.1.0.043+ rebuild. EV values should fall within ±stack_depth_bb (so ±100 bb for 100bb packs).

“Pre-flop” segment in action history reads weird

Cause: very old packs (pre-v1.1.0.043) labelled segment 0 of action history as “PF” (preflop), but PackBuilder packs solve from the flop down — preflop is implicit in the spec, not stored.

Fix: v1.1.0.043 fixed the labelling (segments now start at “Flop”). Reinstall.

Villain bet stack doesn’t render in front of villain

Cause #1: old pack with the b67-r120-r441 token format that the trainer’s VillainBetBb parser couldn’t decode (split delimiter mismatch, fixed in v1.1.0.043).

Cause #2: allin token in the action history. v1.1.0.046+ trainer recognizes allin defensively and sets villain’s bet to remaining stack.

Fix: confirm app and trainer DLL are v1.1.0.046+. Reinstall the pack.

Drilling weirdness

“GTO best is X but the highest frequency is Y — bug?”

Not a bug. This is the indifference principle and convergence residual at work. Read Understanding Solver Output for the full explanation. Short answer: in equilibrium, every mixed action has identical EV; finite iterations leave a small residual gap. The trainer grades by EV (chip-maximal); frequency tells you the equilibrium mix.
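A toy example makes the distinction concrete. The numbers below are hypothetical solver output for one combo; the point is that the trainer grades on EV, so the “GTO best” action can be the lower-frequency one when the EVs are nearly equal:

```python
# Hypothetical per-action solver output for one combo. In a converged solve,
# mixed actions have (near-)equal EV; the residual gap is convergence noise.

spot = {
    "bet 60%": {"freq": 0.35, "ev_bb": 4.72},
    "check":   {"freq": 0.65, "ev_bb": 4.70},   # highest frequency...
}

best_action = max(spot, key=lambda a: spot[a]["ev_bb"])   # ...but EV picks the bet
best_ev = spot[best_action]["ev_bb"]

def ev_loss(action: str) -> float:
    return best_ev - spot[action]["ev_bb"]

print(best_action)                  # → bet 60% (graded "GTO best" at 35% frequency)
print(round(ev_loss("check"), 3))   # → 0.02 bb: a near-zero residual, not a real mistake
```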

“I’m getting too many raise-faced spots”

Cause: before v1.1.0.046, the SpotSampler picked uniformly at random over decision nodes, which over-represented deep raise nodes at roughly 4× their real-game frequency.

Fix: upgrade to v1.1.0.046+ (40/40/20 bucket sampling — ~40% no-bet, ~40% single-bet, ~20% raise spots). If you genuinely want more raise spots for targeted study, use Spot Builder mode with no other filters and you’ll get the unmoderated mix from the new buckets.
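The bucket scheme is two draws: pick a bucket by weight, then a spot uniformly within it. This sketch uses the bucket names and 40/40/20 weights stated above; the data structures themselves are hypothetical, not the SpotSampler’s real ones:

```python
# Sketch of 40/40/20 bucket sampling. Bucket names and weights are from the
# manual; the buckets dict is a hypothetical stand-in for the real spot store.
import random

BUCKETS = ["no_bet", "single_bet", "raise"]
WEIGHTS = [0.40, 0.40, 0.20]

def sample_spot(buckets: dict) -> str:
    """Pick a bucket by weight (skipping empty ones), then a spot uniformly."""
    available = [(n, w) for n, w in zip(BUCKETS, WEIGHTS) if buckets.get(n)]
    names, weights = zip(*available)
    chosen = random.choices(names, weights=weights)[0]
    return random.choice(buckets[chosen])

buckets = {"no_bet": ["spot1"], "single_bet": ["spot2"], "raise": ["spot3"]}
print(sample_spot(buckets))   # one of spot1/spot2/spot3, weighted 40/40/20
```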

“My accuracy is stuck around 60% no matter how much I drill”

Two possibilities:

  1. Frequency hands. A lot of frequency hands (mixed actions) genuinely have only a ~50% best-action selection rate at equilibrium, and some drill modes sample more frequency hands than others. Check the EV-loss histogram on the Progress dashboard: if it’s left-heavy (mostly < 0.3 bb loss), you’re doing well even at 60% accuracy, because you’re picking near-equal-EV actions.
  2. Real leaks. Check the weakness heatmap. If one street/position cell is dark red, that’s where the 60% lives. Drill that slice in Position Practice + Street Practice.

“Spot says ‘Facing 3-bet’ but I thought current packs cap at raise depth 2”

The classification logic counts bet/raise tokens in action_history. If you’ve drilled a check-bet-raise spot with current packs, it’ll count as 2 bet/raise tokens → “Facing raise.” A 3-bet would require bet→raise→re-raise on the same street, which raise_limit=2 prevents. So you shouldn’t see “Facing 3-bet” on flops drawn from v1.1.0.046 packs.

If you do, you’ve got a pre-v1.1.0.046 pack installed. Uninstall and reinstall.
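The counting rule above can be sketched in a few lines. The token names (b60, r120, allin) follow the format mentioned earlier in this section, but the exact classifier in the trainer may differ:

```python
# Sketch of the classification rule: count bet/raise tokens in action_history.
# Token names (b60, r120, allin) are illustrative, not the exact trainer format.

def classify_facing(action_history: list[str]) -> str:
    aggressive = sum(1 for t in action_history
                     if t.startswith(("b", "r")) or t == "allin")
    if aggressive == 0:
        return "No bet"
    if aggressive == 1:
        return "Facing bet"
    if aggressive == 2:
        return "Facing raise"
    return "Facing 3-bet+"

print(classify_facing(["check", "b60", "r120"]))   # → Facing raise
# raise_limit=2 prevents a third raise, so v046 packs never produce this:
print(classify_facing(["b60", "r120", "r300"]))    # → Facing 3-bet+
```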

“Bookmark didn’t save” / “Bookmarks gone after restart”

Cause: the bookmarked_spots table lives in DuckDB. If the app crashes mid-write (rare, but it happens), the bookmark might not commit.

Fix: make sure you click Next Spot or wait for the bookmark icon to switch to “★ Bookmarked” before closing the app — that confirms commit. Bookmarks survive normal restarts; if they vanish, your drivegto.duckdb may be corrupted (back up and consider Reset Everything from Settings).

Build / dev issues (only if you’re authoring packs)

“GPU exhausted” on most boards mid-pack-build

Cause: Windows D3D12 device state can get stuck after several hundred solves, even though VRAM appears free in nvidia-smi. Each fresh profile_driver.exe process inherits the stuck state at the OS layer.

Fix: Win + Ctrl + Shift + B resets the graphics driver without rebooting. Screen blinks once, drivers reset clean. Resume the pack-build run.

Solver crashes with 0xC0000409 on specific boards

Cause: known transient solver bug, ~0.5–2% rate, non-deterministic. A handful of boards (e.g. AcKd8h, Ac8d4h, KcQd3h) crash repeatedly across builds.

Fix: PackBuildRunner retries once and falls back to CPU. If both attempts fail, the board is excluded from the pack. Acceptable: a 500-board pack with 4–8 excluded boards (492–496/500, 98%+ coverage) is fine. If the failure rate exceeds ~3% across many packs in a row, something deeper is off; revisit the solver build.
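The retry-then-exclude policy is simple enough to sketch. The solver call here is a stand-in (the real PackBuildRunner shells out to profile_driver.exe), so treat this as the shape of the logic, not the implementation:

```python
# Sketch of the retry policy: one GPU attempt, one CPU fallback, then exclude.
# run_solver is a hypothetical stand-in for invoking profile_driver.exe.

def solve_with_fallback(board: str, run_solver) -> bool:
    """Try the GPU first, then retry once on the CPU; report success."""
    for backend in ("gpu", "cpu"):
        try:
            run_solver(board, backend=backend)
            return True
        except RuntimeError:
            continue
    return False   # both attempts failed: board is excluded from the pack

def flaky(board, backend):
    """Stand-in solver: one board crashes repeatedly on every backend."""
    if board == "AcKd8h":
        raise RuntimeError("0xC0000409")

excluded = [b for b in ["AcKd8h", "Ts9s2c"] if not solve_with_fallback(b, flaky)]
print(excluded)   # → ['AcKd8h']
```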

“Hang on a single board for 30+ minutes”

Cause: profile_driver.exe occasionally spins on a board with no log advance. We’ve seen it on AcQdJd in 3BP runs.

Fix: kill the process, clear the work_dir, and restart the affected pack from the beginning (there is no resume support). Adding a per-board timeout to ProfileDriverInvoker is on the TODO list.

Performance / UX

Trainer feels slow opening packs / running queries

Cause: large packs + DuckDB queries against unindexed columns. The standard indexes (pack_id, node_family, format/positions/street) cover most filters.

Fix: if a specific filter is slow, check whether it joins on acting_player, board_texture, or tags — those don’t have dedicated indexes. Adding one is a one-line CREATE INDEX in TrainerDatabase.EnsureSchema. File a bug with the slow-query specifics.
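For reference, the fix really is one statement. This sketch uses Python’s built-in sqlite3 purely to illustrate; DuckDB’s CREATE INDEX takes the same shape, and the table/column names here are assumptions based on the description above, not the trainer’s actual schema:

```python
# Illustration of the one-line index fix, shown with the sqlite3 stdlib
# module; DuckDB's CREATE INDEX statement has the same shape. Table and
# column names are assumptions, not the trainer's real schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE spots (pack_id TEXT, acting_player TEXT, street TEXT)")

# The slow filter joins on acting_player, which has no dedicated index:
con.execute("CREATE INDEX IF NOT EXISTS idx_spots_acting_player "
            "ON spots (acting_player)")

# Verify the index now exists
rows = con.execute("PRAGMA index_list('spots')").fetchall()
print([r[1] for r in rows])   # → ['idx_spots_acting_player']
```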

“App won’t launch — splash screen stuck”

Cause: rare DuckDB lock issue if the previous session crashed mid-write.

Fix: find any lingering DriveGTO.App.exe processes in Task Manager, kill them. If the trainer still won’t launch, delete %LocalAppData%\DriveGTO\trainer\drivegto.duckdb-wal (the write-ahead log) — this loses any uncommitted writes but unblocks startup.

Reporting bugs

If your issue isn’t covered here:

  1. Note your app version (Settings tab, bottom of page).
  2. Capture the pack ID and node ID of the misbehaving spot (bookmark it, then check the database).
  3. Screenshot the spot if it’s a UI issue.
  4. File via Settings → Feedback or post in the DriveGTO Discord with the above info.

About this manual

This manual is the consolidated product reference for the entire DriveGTO suite — both the solver application and the Trainer companion. It combines newly-written solver UI documentation with the in-app Trainer help content (located in source at DriveGTO/DriveHUD.DriveGTO.App/Assets/Manual/).

Image placeholders

Throughout the manual, blocks marked [ IMAGE PLACEHOLDER ] indicate where a screenshot should be captured and inserted. Each placeholder describes the screen, annotations to include, and a suggested filename. When capturing, follow the convention in Assets/Manual/images/README.md for the Trainer images and create an analogous folder for the new solver-side images.

Versioning

This document corresponds to DriveGTO build v1.1.0.046+ for the solver and Trainer. Pack file format v1; spec language as of 2026-04-30.
