This commit addresses the massive context loss and 0% utilization issue by fixing three related bugs in the context management system:
1. **Masked Tool Retention Bug:** Removed the brittle 'in-flight' protection logic from `getProtectedNodeIds`. Previously, this logic failed to match `MASKED_TOOL` nodes, causing it to perpetually hoard `functionCall` nodes inside the protected list. This hoarding rapidly exhausted the context budget, forcing the GC backstop into an aggressive death spiral. Now, only the `recent_turn` and explicit tasks are protected, allowing standard GC to naturally prune old tool calls (and the `HistoryHardener` handles the rest).
2. **Adaptive Calculator Overhead Bug:** The `AdaptiveTokenCalculator` was calculating ratio drift by dividing the API's `actualTokens` (which includes the static overhead of the System Instruction and Tools) by the graph's `baseUnits` (which only counts history). We now inject a `getOverheadTokens` callback during initialization, allowing the calculator to subtract the overhead from the Gemini API count before dividing by the base units, achieving an accurate apples-to-apples ratio.
3. **Violent Oscillation Dampening:** Introduced strict dampening bounds (a `maxStep` limit and a hard `targetWeight` clamp) on the Adaptive Token Calculator's EMA. This acts like a learning-rate limit in machine learning, ensuring that a single massive outlier reading cannot violently thrash the `learnedWeight` multiplier in a single turn.
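A minimal sketch of the corrected ratio math and the dampened EMA update, for reviewers. The class shape, parameter names, and constants here are illustrative assumptions; only `getOverheadTokens`, `maxStep`, and `learnedWeight` mirror identifiers mentioned above.

```typescript
// Hypothetical sketch of the fixed AdaptiveTokenCalculator update path.
class AdaptiveTokenCalculator {
  private learnedWeight = 1.0;

  constructor(
    private readonly getOverheadTokens: () => number, // injected at init
    private readonly alpha = 0.2,     // EMA smoothing factor (assumed value)
    private readonly maxStep = 0.1,   // max per-turn change in learnedWeight
    private readonly minWeight = 0.5, // hard clamp on the target itself
    private readonly maxWeight = 2.0,
  ) {}

  observe(actualTokens: number, baseUnits: number): number {
    if (baseUnits <= 0) return this.learnedWeight;
    // Subtract static System Instruction / Tools overhead so the ratio
    // compares history tokens to history units (apples to apples).
    const historyTokens = Math.max(0, actualTokens - this.getOverheadTokens());
    const observedRatio = historyTokens / baseUnits;
    // Clamp the target before blending, so one outlier cannot dominate.
    const target = Math.min(
      this.maxWeight,
      Math.max(this.minWeight, observedRatio),
    );
    const blended = this.learnedWeight + this.alpha * (target - this.learnedWeight);
    // Bound the per-turn step, acting like a learning-rate limit.
    const step = Math.min(
      this.maxStep,
      Math.max(-this.maxStep, blended - this.learnedWeight),
    );
    this.learnedWeight += step;
    return this.learnedWeight;
  }
}
```

With these bounds, even a 50x outlier ratio moves the weight by at most `maxStep` per turn instead of thrashing it in one update.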
Includes new test coverage verifying the maxStep logic and overhead subtraction. Also removed an unsafe `as unknown` cast in the `snapshotGenerator` that failed `eslint`.
When PR cdd482c2e reverted structural snapshot rehydration, it left
toGraph mapping all non-tool parts as NodeType.USER_PROMPT. This caused
findLatestSnapshotBaseline to fail on resumed sessions, starting the Master
State from an empty baseline and leading to catastrophic data loss.
This fix detects the snapshot signature during graph construction to
properly assign NodeType.SNAPSHOT.
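The detection step can be sketched as follows. The signature string, the `NodeType` variants, and the `classifyPart` helper are illustrative stand-ins for the project's actual identifiers.

```typescript
// Hypothetical sketch of the signature check added during graph construction.
enum NodeType {
  USER_PROMPT = "USER_PROMPT",
  SNAPSHOT = "SNAPSHOT",
}

// Assumed marker prefix; the real signature lives in the project.
const SNAPSHOT_SIGNATURE = "[CONTEXT_SNAPSHOT]";

function classifyPart(text: string): NodeType {
  // Previously every non-tool part fell through to USER_PROMPT, so
  // findLatestSnapshotBaseline never found a SNAPSHOT node on resume.
  return text.startsWith(SNAPSHOT_SIGNATURE)
    ? NodeType.SNAPSHOT
    : NodeType.USER_PROMPT;
}
```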
This commit implements an opaque state export/import pattern for the
ContextManager to ensure expensive LLM-derived snapshots are properly
rehydrated upon session resume.
The ContextManager now exposes `exportState` and `restoreState` methods,
delegating structural validation to the `SnapshotStateHelper`.
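The opaque export/import surface looks roughly like the sketch below. The field names and the inline guard are assumptions; in the real code the structural validation is delegated to the `SnapshotStateHelper`.

```typescript
// Illustrative shape of the opaque state export/import pattern; the wire
// format shown here is an assumption, not the project's actual schema.
interface OpaqueContextState {
  version: number;
  snapshot: string; // serialized LLM-derived snapshot
}

class ContextManager {
  private snapshot = "";

  exportState(): OpaqueContextState {
    return { version: 1, snapshot: this.snapshot };
  }

  restoreState(state: OpaqueContextState): void {
    // Stand-in for the SnapshotStateHelper's structural validation.
    if (state.version !== 1 || typeof state.snapshot !== "string") {
      throw new Error("invalid context state");
    }
    this.snapshot = state.snapshot;
  }
}
```

Keeping the state opaque means callers persist and replay it verbatim without depending on its internals.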
During active chat, the GeminiClient routinely passes the finalized
context state down to the ChatRecordingService, which seamlessly
embeds it into the existing JSONL metadata payload. Upon resume, the
saved snapshot is re-published as a draft to the LiveInbox, allowing
the synchronous pipeline to automatically and deterministically splice
it back into the raw graph without an additional LLM call.
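The resume path described above can be sketched as below. `LiveInbox`, its draft API, and `resumeSession` are simplified stand-ins for the real components; only the deterministic "re-publish as draft, no LLM call" flow is the point.

```typescript
// Hedged sketch of the resume splice; names are illustrative assumptions.
type Draft = { content: string };

class LiveInbox {
  private drafts: Draft[] = [];
  publishDraft(d: Draft): void {
    this.drafts.push(d);
  }
  // The synchronous pipeline would splice pending drafts into the raw
  // graph here; this stub just hands them over.
  drain(): Draft[] {
    const out = this.drafts;
    this.drafts = [];
    return out;
  }
}

function resumeSession(savedSnapshot: string, inbox: LiveInbox): void {
  // Re-publish the saved snapshot as a draft so the pipeline splices it
  // back deterministically, with no additional LLM call.
  inbox.publishDraft({ content: savedSnapshot });
}
```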