Learning Rate Finder
Find the optimal learning rate for PyTorch models. Interactive LR range test visualization. Understand learning rate schedules, warm-up, and decay strategies.
Built by Michael Lip
Frequently Asked Questions
What is a learning rate finder?
An LR range test trains for one epoch while exponentially increasing the LR from ~1e-7 to ~10. Plot loss vs LR and pick the value where loss decreases fastest — typically one order of magnitude before the minimum.
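The sweep described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not a polished implementation: the function name `lr_range_test` and the tiny synthetic model are made up for the example, and real use would add smoothing and early stopping when the loss diverges.

```python
import torch
import torch.nn as nn

def lr_range_test(model, loss_fn, opt, loader, lr_min=1e-7, lr_max=10.0, steps=100):
    """Exponentially sweep the LR from lr_min to lr_max, recording (lr, loss)."""
    mult = (lr_max / lr_min) ** (1 / (steps - 1))  # per-step LR multiplier
    lr = lr_min
    history = []
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            break
        for group in opt.param_groups:
            group["lr"] = lr  # override the optimizer's LR for this step
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        history.append((lr, loss.item()))
        lr *= mult
    return history

# Tiny synthetic run: a linear model on random data
torch.manual_seed(0)
model = nn.Linear(4, 1)
data = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(100)]
hist = lr_range_test(model, nn.MSELoss(),
                     torch.optim.SGD(model.parameters(), lr=1e-7), data)
```

Plotting `hist` on a log-x axis reproduces the familiar range-test curve; the LR to keep is on the steep downward slope, not at the minimum itself.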
Good learning rate for Adam?
A starting point of 3e-4 works well for many models (note that PyTorch's built-in default for Adam is 1e-3). Fine-tuning: 1e-5 to 5e-5. Training from scratch: 1e-3 to 3e-4. Always validate with an LR range test.
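In code, the from-scratch and fine-tuning cases above differ only in how the optimizer is constructed. A small sketch (the `backbone`/`head` split is a hypothetical example, standing in for a pretrained body and a freshly initialized classifier):

```python
import torch
import torch.nn as nn

backbone = nn.Linear(16, 8)  # stand-in for a pretrained body
head = nn.Linear(8, 2)       # stand-in for a new task head

# Training from scratch: one group at 3e-4
opt = torch.optim.Adam(
    list(backbone.parameters()) + list(head.parameters()), lr=3e-4)

# Fine-tuning: a low LR for pretrained weights, a higher one for the new head
opt_ft = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
```

Per-parameter-group LRs like this are also what an LR range test result feeds into when different parts of the network want different rates.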
What is learning rate warm-up?
Warm-up gradually increases LR from 0 to the target over the first N steps. Prevents early instability with large batches or Adam. Typical: 1-5% of total steps. Essential for Transformers.
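One common way to express warm-up is as a multiplicative factor on the base LR, which plugs directly into PyTorch's `LambdaLR`. The sketch below assumes linear warm-up followed by linear decay; the function name and step counts are illustrative:

```python
def warmup_factor(step, warmup_steps=500, total_steps=10000):
    """LR multiplier: linear ramp 0 -> 1 over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return step / warmup_steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Used as `torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=warmup_factor)`, the optimizer's base LR is scaled by this factor at every `scheduler.step()`. With 10,000 total steps, 500 warm-up steps sits at the 5% end of the typical 1-5% range.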
PyTorch LR schedulers?
StepLR, CosineAnnealingLR, ReduceLROnPlateau, OneCycleLR (best general-purpose with warm-up), CosineAnnealingWarmRestarts, and ExponentialLR.
Is this tool free?
Yes. All HeyTensor tools are free, run in your browser, and require no signup.
About This Tool
Part of HeyTensor. All calculations run in your browser. Source code on GitHub.
Contact
Built by Michael Lip. Email [email protected].