Skill-SD

arXiv Preprint · 2026

Skill-Conditioned Self-Distillation for Multi-turn LLM Agents

Hao Wang*†1,5, Guozhi Wang*‡5, Han Xiao*2, Yufeng Zhou5, Yue Pan5, Jichao Wang1, Ke Xu3, Yafei Wen5, Xiaohu Ruan5, Xiaoxin Chen5, Honggang Qi§4

1 Hangzhou Institute for Advanced Study, UCAS 2 The Chinese University of Hong Kong 3 University of Science and Technology of China 4 University of Chinese Academy of Sciences 5 vivo AI Lab
*Equal contribution   †Project lead   ‡Intern at vivo   §Corresponding author

Abstract

Reinforcement learning (RL) has been widely used to train LLM agents for multi-turn interactive tasks, but its sample efficiency is severely limited by sparse rewards and long horizons. On-policy self-distillation (OPSD) alleviates this by providing dense token-level supervision from a privileged teacher that has access to ground-truth answers. However, such fixed privileged information cannot capture the diverse valid strategies in agent tasks, and naively combining OPSD with RL often leads to training collapse. To address these limitations, we introduce Skill-SD, a framework that turns the agent's own trajectories into dynamic, training-only supervision. Completed trajectories are summarized into compact natural-language skills that describe successful behaviors, mistakes, and workflows. These skills serve as dynamic privileged information conditioning only the teacher, while the student always acts under the plain task prompt and internalizes the guidance through distillation. To stabilize training, we derive an importance-weighted reverse-KL loss that provides gradient-correct token-level distillation, and we dynamically synchronize the teacher with the improving student. On agentic benchmarks, Skill-SD substantially outperforms the standard RL baseline, beating vanilla GRPO by +14.0%/+10.9% (AppWorld/Sokoban) and vanilla OPD by +42.1%/+40.6%.

The helmsman sets the bearing.
The officer reads the wind.

Task reward from GRPO determines the overall direction; the skill-conditioned teacher supplies fine-grained, token-level guidance for the decisions in between.

Contributions

Key Contributions

Dynamic Skill Summaries

Each completed trajectory is asynchronously summarized into a structured skill: success patterns, mistake analysis, and a golden workflow.

Teacher-Only Guidance

Skills augment the teacher's prompt only. The student always operates under a clean task prompt, eliminating train-test mismatch.

Gradient-Correct Distillation

An importance-weighted reverse-KL loss corrects per-token gradient bias caused by teacher and student distribution mismatch.

Method

How it works

01

Plain-Prompt On-Policy Rollouts

The student generates rollouts using only the task prompt — no distilled skills — preserving identical train/test conditioning.

02

Trajectory-to-Skill Distillation

An auxiliary LLM compresses each episode into a reusable skill summary of successes, failures, and workflow.
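The summarization step can be sketched as below. The three schema keys come from the appendix example; the prompt wording and the `llm_call` helper are illustrative assumptions, not the paper's actual implementation.

```python
import json

# Prompt wording is an assumption; only the three schema keys come from
# the paper's appendix example.
SKILL_PROMPT = (
    "Summarize the agent trajectory below into JSON with keys "
    "'success_analysis', 'mistake_analysis', 'golden_workflow'.\n\n{trajectory}"
)

def extract_skill(trajectory: str, llm_call) -> dict:
    """Compress one completed episode into a reusable skill summary.

    `llm_call` is any callable mapping a prompt string to the auxiliary
    LLM's completion text (a hypothetical stand-in for a real client).
    """
    raw = llm_call(SKILL_PROMPT.format(trajectory=trajectory))
    skill = json.loads(raw)
    # Keep only the structured fields Skill-SD stores per trajectory.
    keys = ("success_analysis", "mistake_analysis", "golden_workflow")
    return {k: skill[k] for k in keys}
```

In the paper this step runs asynchronously, so summarization never blocks rollout generation.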

03

Teacher-Only Skill Replay

Retrieved skills go only to the teacher, which re-scores the trajectory token by token — student inputs stay unchanged.
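The prompt asymmetry at the heart of step 03 can be made concrete with a small sketch; the injection format is illustrative, not the paper's exact template.

```python
def student_prompt(task: str) -> str:
    # The student always sees the plain task prompt, identical at
    # training and test time, so no distilled text leaks into inference.
    return task

def teacher_prompt(task: str, skill: dict) -> str:
    # Only the teacher's context carries the retrieved skill; it then
    # re-scores the student's trajectory token by token.
    guidance = (
        f"Success pattern: {skill['success_analysis']}\n"
        f"Known mistakes: {skill['mistake_analysis']}\n"
        f"Golden workflow: {skill['golden_workflow']}\n\n"
    )
    return guidance + task
```

Because the student's inputs are untouched, the skill text is purely training-time supervision and is discarded at inference.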

04

Joint RL + Distillation

GRPO handles trajectory-level reward; importance-weighted reverse-KL distills token-level guidance and corrects teacher-student mismatch.
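One plausible form of the distillation term, under our reading of "importance-weighted reverse KL": on tokens sampled from the rollout policy, the reverse-KL estimate is reweighted by the ratio between the current student and the (possibly stale) sampling policy. The exact per-token weighting in the paper may differ.

```python
import numpy as np

def iw_reverse_kl(logp_student, logp_teacher, logp_sampler):
    """Importance-weighted reverse-KL surrogate on sampled tokens.

    Inputs are per-token log-probs of the sampled trajectory under the
    current student, the skill-conditioned teacher, and the sampler
    (the student snapshot that generated the rollout).
    """
    # The ratio corrects the gradient for sampler/current-student
    # mismatch; in a real trainer it would be detached (stop-grad).
    ratio = np.exp(logp_student - logp_sampler)
    per_token = ratio * (logp_student - logp_teacher)  # reverse-KL estimate
    return float(per_token.mean())
```

When the rollout is exactly on-policy (`logp_sampler == logp_student`), the ratio is 1 and this reduces to the plain reverse-KL estimate on sampled tokens.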

Pipeline

Method Overview

Skill-SD pipeline: student rollouts → skill extraction → importance-weighted distillation.

Appendix Insight

What a Skill Looks Like

Skill-SD does not archive full trajectories as supervision. Each completed attempt is compressed into a compact teacher-only JSON artifact that records what to reuse, what to avoid, and what the next best rollout should do.

Teacher-Only Prompt. Distilled after a completed attempt and injected only during distillation.

skill.json

{
  "success_analysis": "Using task-specific apps and authenticating restricted APIs first is the right strategy.",
  "mistake_analysis": "The main failure came from acting on unverified assumptions and calling restricted APIs before checking authentication and parameter requirements.",
  "golden_workflow": "1. Retrieve the actual bill from the file system. 2. Get roommate contact info via the contact app. 3. Authenticate Venmo, compute each share, and send the payment requests with the correct API calls."
}

Benchmark Results

Results on AppWorld & Sokoban

Model: Qwen3-4B-Instruct-2507. Values in parentheses denote absolute change from the base model.

| Method | AppWorld Acc. | AppWorld Comp. | Sokoban Acc. | Sokoban Comp. | Avg. Acc. | Avg. Comp. |
|---|---|---|---|---|---|---|
| Base Model | 8.8% | 39.1% | 12.5% | 32.0% | 10.6% | 35.6% |
| Vanilla OPD | 22.8% (+14.0) | 59.7% (+20.6) | 21.9% (+9.4) | 37.5% (+5.5) | 22.4% (+11.7) | 48.6% (+13.0) |
| Vanilla GRPO | 50.9% (+42.1) | 76.3% (+37.2) | 51.6% (+39.1) | 68.8% (+36.8) | 51.2% (+40.6) | 72.5% (+36.9) |
| Skill-Augmented GRPO | 42.1% (+33.3) | 76.1% (+37.0) | 20.3% (+7.8) | 37.5% (+5.5) | 31.2% (+20.6) | 56.8% (+21.2) |
| Skill-SD (Ours) | 64.9% (+56.1) | 84.9% (+45.8) | 62.5% (+50.0) | 71.1% (+39.1) | 63.7% (+53.1) | 78.0% (+42.4) |
AppWorld training curves for Skill-SD and baselines.
Sokoban training curves for Skill-SD and baselines.

Ablations

Ablation Study

Student-Owned Rollout Is Essential

Both off-policy variants collapse during mid-training. The failure is especially severe on Sokoban, where off-policy accuracy drops to 12.5% (frozen teacher) or 10.9% (dynamic teacher), matching the uninstructed base model's 12.5%.

Dynamic Sync Keeps the Teacher Calibrated

Within on-policy training, synchronizing the teacher from the latest student checkpoint adds +15.8 pp on AppWorld and +12.5 pp on Sokoban over a frozen teacher.
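Mechanically, dynamic sync can be as simple as hard-copying the student's weights into the teacher at a fixed interval. The interval and copy granularity below are assumptions; the paper only specifies that the teacher tracks the improving student.

```python
import numpy as np

def maybe_sync_teacher(step, student_params, teacher_params, sync_every=100):
    """Refresh the teacher from the latest student every `sync_every`
    optimizer steps. Params are plain name->array dicts here; in
    practice these would be model state dicts.
    """
    if step % sync_every == 0:
        for name, value in student_params.items():
            teacher_params[name] = value.copy()
    return teacher_params
```

A hard copy keeps the teacher calibrated to the student's current distribution, which is what a frozen teacher loses as training progresses.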

Skills Should Guide the Teacher

Directly prepending skills to the student hurts performance: Skill-Augmented GRPO underperforms Vanilla GRPO on both AppWorld (42.1% vs. 50.9%) and Sokoban (20.3% vs. 51.6%).

Values in parentheses denote absolute change from Skill-SD. *Training collapsed during mid-training; values reflect the checkpoint before collapse.

| Rollout | Teacher | AppWorld Acc. | AppWorld Comp. | Sokoban Acc. | Sokoban Comp. | Avg. Acc. | Avg. Comp. |
|---|---|---|---|---|---|---|---|
| On-policy | Frozen | 49.1% (−15.8) | 79.0% (−5.9) | 50.0% (−12.5) | 63.3% (−7.8) | 49.6% (−14.1) | 71.1% (−6.9) |
| On-policy | Dynamic | 64.9% | 84.9% | 62.5% | 71.1% | 63.7% | 78.0% |
| Off-policy* | Frozen | 45.6% (−19.3) | 78.8% (−6.1) | 12.5% (−50.0) | 31.3% (−39.8) | 29.1% (−34.6) | 55.0% (−23.0) |
| Off-policy* | Dynamic | 42.1% (−22.8) | 76.5% (−8.4) | 10.9% (−51.6) | 32.0% (−39.1) | 26.5% (−37.2) | 54.3% (−23.7) |

Optimization Dynamics

Teacher and Student Distributions Converge During SDL

On a representative AppWorld task, teacher and student token distributions become progressively aligned. The SDL loss decreases by 59.3% over training.
AppWorld training dynamics of the four rollout–teacher configurations.
Sokoban training dynamics of the four rollout–teacher configurations.

Hyperparameter Sweep

SDL Coefficient λ on AppWorld

The SDL coefficient mediates the RL–distillation trade-off. λ = 0.001 achieves 81.19% validation completion, acting as a mild shaping term that guides the student without dominating the RL signal.
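The additive mixing below is an assumption about the paper's exact objective, but it matches the "mild shaping term" reading: λ simply scales the SDL term against the GRPO loss.

```python
def total_loss(rl_loss: float, sdl_loss: float, lam: float = 0.001) -> float:
    # GRPO's trajectory-level term sets the course; the SDL term is a
    # mild token-level shaping signal. lam = 0.001 was best on AppWorld.
    return rl_loss + lam * sdl_loss
```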

| λ | Val. Completion |
|---|---|
| 0.01 | Unstable, below optimum |
| 0.005 | 74.66% |
| 0.001 | 81.19% (best) |
| 0.0005 | 75.98% |
Training and validation completion-rate curves for the four SDL coefficients: λ = 0.001 achieves the best validation performance on AppWorld.

Training Principle

Reward sets the course.
Skill refines each turn.

GRPO provides global direction; the skill-conditioned teacher refines delicate token choices during training. At inference, the student uses only the plain task prompt.

Read the arXiv preprint

Citation

Cite this work

@misc{wang2026skillsdskillconditionedselfdistillationmultiturn,
  title={Skill-SD: Skill-Conditioned Self-Distillation for Multi-turn LLM Agents},
  author={Hao Wang and Guozhi Wang and Han Xiao and Yufeng Zhou and Yue Pan and Jichao Wang and Ke Xu and Yafei Wen and Xiaohu Ruan and Xiaoxin Chen and Honggang Qi},
  year={2026},
  eprint={2604.10674},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2604.10674},
}