Finetune Qwen3, Llama 4, TTS, DeepSeek-R1 & Gemma 3 LLMs 2x faster with 70% less memory! 🦥
Efficient Triton Kernels for LLM Training
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/
Interact with your SQL database, Natural Language to SQL using LLMs
A PyTorch Library for Meta-learning Research
Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
Easiest and laziest way to build multi-agent LLM applications.
🎯 Task-oriented embedding tuning for BERT, CLIP, etc.
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
Web UI for using XTTS and for fine-tuning it
Fine-tuning large language models for GDScript generation.
Guide: Fine-tune GPT2-XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
[ACL 2024] Official resources of "ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models".
A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision, llama-3.2-vision, qwen-vl, qwen2-vl, phi3-v etc.
TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle
Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text)
🔥 Korean GPT-2, KoGPT2 fine-tuning (cased). Trained on Korean lyrics data 🔥