Pretrain, fine-tune, and deploy AI models on multiple GPUs and TPUs with zero code changes.
created at March 31, 2019, 12:45 a.m.
Make PyTorch models Lightning fast! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once.
created at March 18, 2024, 3:30 p.m.
The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
created at March 31, 2019, 12:45 a.m.
Toolbox of models, callbacks, and datasets for AI/ML researchers.
created at March 25, 2020, 4:03 p.m.
Your PyTorch AI Factory - Flash enables you to easily configure and run complex AI recipes for over 15 tasks across 7 data domains.
created at Jan. 28, 2021, 6:47 p.m.
Learn to serve Stable Diffusion models on cloud infrastructure at scale. This Lightning App shows load balancing, orchestration, pre-provisioning, dynamic batching, GPU inference, and microservices working together via the Lightning Apps framework.
created at Aug. 23, 2022, 6:52 a.m.
Implementation of the Falcon, StableLM, Pythia, and INCITE language models, based on nanoGPT. Supports flash attention, LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
created at May 4, 2023, 5:46 p.m.