diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md deleted file mode 100644 index c2035cea..00000000 --- a/.github/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,128 +0,0 @@ -# Contributor Covenant Code of Conduct - -## Our Pledge - -We as members, contributors, and leaders pledge to make participation in our -community a harassment-free experience for everyone, regardless of age, body -size, visible or invisible disability, ethnicity, sex characteristics, gender -identity and expression, level of experience, education, socio-economic status, -nationality, personal appearance, race, religion, or sexual identity -and orientation. - -We pledge to act and interact in ways that contribute to an open, welcoming, -diverse, inclusive, and healthy community. - -## Our Standards - -Examples of behavior that contributes to a positive environment for our -community include: - -* Demonstrating empathy and kindness toward other people -* Being respectful of differing opinions, viewpoints, and experiences -* Giving and gracefully accepting constructive feedback -* Accepting responsibility and apologizing to those affected by our mistakes, - and learning from the experience -* Focusing on what is best not just for us as individuals, but for the - overall community - -Examples of unacceptable behavior include: - -* The use of sexualized language or imagery, and sexual attention or - advances of any kind -* Trolling, insulting or derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or email - address, without their explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Enforcement Responsibilities - -Community leaders are responsible for clarifying and enforcing our standards of -acceptable behavior and will take appropriate and fair corrective action in -response to any behavior that they deem inappropriate, threatening, offensive, -or harmful. - -Community leaders have the right and responsibility to remove, edit, or reject -comments, commits, code, wiki edits, issues, and other contributions that are -not aligned to this Code of Conduct, and will communicate reasons for moderation -decisions when appropriate. - -## Scope - -This Code of Conduct applies within all community spaces, and also applies when -an individual is officially representing the community in public spaces. -Examples of representing our community include using an official e-mail address, -posting via an official social media account, or acting as an appointed -representative at an online or offline event. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported to the community leaders responsible for enforcement at -`hoshihiyouga AT gmail DOT com`. -All complaints will be reviewed and investigated promptly and fairly. - -All community leaders are obligated to respect the privacy and security of the -reporter of any incident. - -## Enforcement Guidelines - -Community leaders will follow these Community Impact Guidelines in determining -the consequences for any action they deem in violation of this Code of Conduct: - -### 1. Correction - -**Community Impact**: Use of inappropriate language or other behavior deemed -unprofessional or unwelcome in the community. 
- -**Consequence**: A private, written warning from community leaders, providing -clarity around the nature of the violation and an explanation of why the -behavior was inappropriate. A public apology may be requested. - -### 2. Warning - -**Community Impact**: A violation through a single incident or series -of actions. - -**Consequence**: A warning with consequences for continued behavior. No -interaction with the people involved, including unsolicited interaction with -those enforcing the Code of Conduct, for a specified period of time. This -includes avoiding interactions in community spaces as well as external channels -like social media. Violating these terms may lead to a temporary or -permanent ban. - -### 3. Temporary Ban - -**Community Impact**: A serious violation of community standards, including -sustained inappropriate behavior. - -**Consequence**: A temporary ban from any sort of interaction or public -communication with the community for a specified period of time. No public or -private interaction with the people involved, including unsolicited interaction -with those enforcing the Code of Conduct, is allowed during this period. -Violating these terms may lead to a permanent ban. - -### 4. Permanent Ban - -**Community Impact**: Demonstrating a pattern of violation of community -standards, including sustained inappropriate behavior, harassment of an -individual, or aggression toward or disparagement of classes of individuals. - -**Consequence**: A permanent ban from any sort of public interaction within -the community. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], -version 2.0, available at -https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. - -Community Impact Guidelines were inspired by [Mozilla's code of conduct -enforcement ladder](https://github.com/mozilla/diversity). - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see the FAQ at -https://www.contributor-covenant.org/faq. Translations are available at -https://www.contributor-covenant.org/translations. diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md deleted file mode 100644 index 507d666a..00000000 --- a/.github/CONTRIBUTING.md +++ /dev/null @@ -1,67 +0,0 @@ -# Contributing to LLaMA Factory - -Everyone is welcome to contribute, and we value everybody's contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable. - -It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you. - -However you choose to contribute, please be mindful and respect our [code of conduct](CODE_OF_CONDUCT.md). - -**This guide was heavily inspired by [transformers guide to contributing](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md).** - -## Ways to contribute - -There are several ways you can contribute to LLaMA Factory: - -* Fix outstanding issues with the existing code. -* Submit issues related to bugs or desired new features. -* Contribute to the examples or to the documentation. - -### Style guide - -LLaMA Factory follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html), check it for details. - -### Create a Pull Request - -1. 
Fork the [repository](https://github.com/hiyouga/LLaMA-Factory) by clicking on the [Fork](https://github.com/hiyouga/LLaMA-Factory/fork) button on the repository's page. This creates a copy of the code under your GitHub user account. - -2. Clone your fork to your local disk, and add the base repository as a remote: - -```bash -git clone git@github.com:[username]/LLaMA-Factory.git -cd LLaMA-Factory -git remote add upstream https://github.com/hiyouga/LLaMA-Factory.git -``` - -3. Create a new branch to hold your development changes: - -```bash -git checkout -b dev_your_branch -``` - -4. Set up a development environment by running the following command in a virtual environment: - -```bash -pip install -e ".[dev]" -``` - -If LLaMA Factory was already installed in the virtual environment, remove it with `pip uninstall llamafactory` before reinstalling it in editable mode with the -e flag. - -5. Check code before commit: - -```bash -make commit -make style && make quality -make test -``` - -6. Submit changes: - -```bash -git add . -git commit -m "commit message" -git fetch upstream -git rebase upstream/main -git push -u origin dev_your_branch -``` - -7. Create a merge request from your branch `dev_your_branch` at [origin repo](https://github.com/hiyouga/LLaMA-Factory). diff --git a/.github/ISSUE_TEMPLATE/bug-report.yml b/.github/ISSUE_TEMPLATE/bug-report.yml deleted file mode 100644 index 58561329..00000000 --- a/.github/ISSUE_TEMPLATE/bug-report.yml +++ /dev/null @@ -1,66 +0,0 @@ -name: "\U0001F41B Bug / Help" -description: Create a report to help us improve the LLaMA Factory -body: - - type: markdown - attributes: - value: | - Issues included in **FAQs** or those with **insufficient** information may be closed without a response. - 包含在**常见问题**内或提供信息**不完整**的 issues 可能不会被回复。 - - - type: checkboxes - id: reminder - attributes: - label: Reminder - description: | - Please ensure you have read the README carefully and searched the existing issues (including FAQs). - 请确保您已经认真阅读了 README 并且搜索过现有的 issues(包括常见问题)。 - - options: - - label: I have read the README and searched the existing issues. - required: true - - - type: textarea - id: system-info - validations: - required: true - attributes: - label: System Info - description: | - Please share your system info with us. You can run the command **llamafactory-cli env** and copy-paste its output below. - 请提供您的系统信息。您可以在命令行运行 **llamafactory-cli env** 并将其输出复制到该文本框中。 - - placeholder: llamafactory version, platform, python version, ... - - - type: textarea - id: reproduction - validations: - required: true - attributes: - label: Reproduction - description: | - Please provide code snippets, error messages and stack traces that reproduces the problem. - 请提供运行参数,错误信息以及异常堆栈以便于我们复现该问题。 - Remember to use Markdown tags to correctly format your code. - 请合理使用 Markdown 标签来格式化您的文本。 - - placeholder: | - ```bash - llamafactory-cli train ... - ``` - - - type: textarea - id: expected-behavior - validations: - required: false - attributes: - label: Expected behavior - description: | - Please provide a clear and concise description of what you would expect to happen. - 请提供您原本的目的,即这段代码的期望行为。 - - - type: textarea - id: others - validations: - required: false - attributes: - label: Others diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md deleted file mode 100644 index d23d6be3..00000000 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ /dev/null @@ -1,8 +0,0 @@ -# What does this PR do? 
- -Fixes # (issue) - -## Before submitting - -- [ ] Did you read the [contributor guideline](https://github.com/hiyouga/LLaMA-Factory/blob/main/.github/CONTRIBUTING.md)? -- [ ] Did you write any new necessary tests? diff --git a/.github/SECURITY.md b/.github/SECURITY.md deleted file mode 100644 index d34728eb..00000000 --- a/.github/SECURITY.md +++ /dev/null @@ -1,7 +0,0 @@ -# Reporting Security Issues - -To report a security issue, please use the GitHub Security Advisory ["Report a Vulnerability"](https://github.com/hiyouga/LLaMA-Factory/security/advisories/new) tab. - -We will send a response indicating the next steps in handling your report. After the initial reply to your report, the security team will keep you informed of the progress towards a fix and full announcement, and may ask for additional information or guidance. - -Report security bugs in third-party modules to the person or team maintaining the module. diff --git a/.github/workflows/label_issue.yml b/.github/workflows/label_issue.yml deleted file mode 100644 index ce7359ab..00000000 --- a/.github/workflows/label_issue.yml +++ /dev/null @@ -1,30 +0,0 @@ -name: label_issue - -on: - issues: - types: - - opened - -jobs: - label_issue: - runs-on: ubuntu-latest - - permissions: - issues: write - - steps: - - env: - GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} - ISSUE_URL: ${{ github.event.issue.html_url }} - ISSUE_TITLE: ${{ github.event.issue.title }} - run: | - LABEL=pending - NPU_KEYWORDS=(npu huawei ascend 华为 昇腾) - ISSUE_TITLE_LOWER=$(echo $ISSUE_TITLE | tr '[:upper:]' '[:lower:]') - for KEYWORD in ${NPU_KEYWORDS[@]}; do - if [[ $ISSUE_TITLE_LOWER == *$KEYWORD* ]] && [[ $ISSUE_TITLE_LOWER != *input* ]]; then - LABEL=pending,npu - break - fi - done - gh issue edit $ISSUE_URL --add-label $LABEL diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml deleted file mode 100644 index 7c5d317f..00000000 --- a/.github/workflows/publish.yml +++ /dev/null @@ -1,40 +0,0 @@ -name: publish - -on: - release: - types: - - published - -jobs: - publish: - name: Upload release to PyPI - - runs-on: ubuntu-latest - - environment: - name: release - url: https://pypi.org/p/llamafactory - - permissions: - id-token: write - - steps: - - name: Checkout - uses: actions/checkout@v4 - - - name: Set up Python - uses: actions/setup-python@v5 - with: - python-version: "3.8" - - - name: Install dependencies - run: | - python -m pip install --upgrade pip - python -m pip install build - - - name: Build package - run: | - python -m build - - - name: Publish package - uses: pypa/gh-action-pypi-publish@release/v1 diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml deleted file mode 100644 index 5c5d8de3..00000000 --- a/.github/workflows/tests.yml +++ /dev/null @@ -1,65 +0,0 @@ -name: tests - -on: - push: - branches: - - "main" - paths: - - "**.py" - - "requirements.txt" - - ".github/workflows/*.yml" - pull_request: - branches: - - "main" - paths: - - "**.py" - - "requirements.txt" - - ".github/workflows/*.yml" - -jobs: - tests: - strategy: - fail-fast: false - matrix: - python-version: - - "3.8" # TODO: remove py38 in next transformers release - - "3.9" - - "3.10" - - "3.11" - os: - - "ubuntu-latest" - - "windows-latest" - - "macos-13" - - runs-on: ${{ matrix.os }} - - environment: - name: tests - - env: - HF_TOKEN: ${{ secrets.HF_TOKEN }} - OS_NAME: ${{ matrix.os }} - - steps: - - name: Checkout - uses: actions/checkout@v4 - - - name: Set up Python - uses: actions/setup-python@v5 - with: - python-version: ${{ matrix.python-version 
}} - cache: "pip" - cache-dependency-path: "setup.py" - - - name: Install dependencies - run: | - python -m pip install --upgrade pip - python -m pip install ".[torch,dev]" - - - name: Check quality - run: | - make style && make quality - - - name: Test with pytest - run: | - make test diff --git a/CITATION.cff b/CITATION.cff deleted file mode 100644 index 01b4c9fd..00000000 --- a/CITATION.cff +++ /dev/null @@ -1,44 +0,0 @@ -cff-version: 1.2.0 -date-released: 2024-03 -message: "If you use this software, please cite it as below." -authors: -- family-names: "Zheng" - given-names: "Yaowei" -- family-names: "Zhang" - given-names: "Richong" -- family-names: "Zhang" - given-names: "Junhao" -- family-names: "Ye" - given-names: "Yanhan" -- family-names: "Luo" - given-names: "Zheyan" -- family-names: "Feng" - given-names: "Zhangchi" -- family-names: "Ma" - given-names: "Yongqiang" -title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models" -url: "https://arxiv.org/abs/2403.13372" -preferred-citation: - type: conference-paper - conference: - name: "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)" - authors: - - family-names: "Zheng" - given-names: "Yaowei" - - family-names: "Zhang" - given-names: "Richong" - - family-names: "Zhang" - given-names: "Junhao" - - family-names: "Ye" - given-names: "Yanhan" - - family-names: "Luo" - given-names: "Zheyan" - - family-names: "Feng" - given-names: "Zhangchi" - - family-names: "Ma" - given-names: "Yongqiang" - title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models" - url: "https://arxiv.org/abs/2403.13372" - year: 2024 - publisher: "Association for Computational Linguistics" - address: "Bangkok, Thailand" diff --git a/LICENSE b/LICENSE index b09cd785..989e2c59 100644 --- a/LICENSE +++ b/LICENSE @@ -198,4 +198,4 @@ Apache License distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and - limitations under the License. + limitations under the License. 
\ No newline at end of file diff --git a/README.md b/README.md index d0a94dc6..b1790553 100644 --- a/README.md +++ b/README.md @@ -1,49 +1,6 @@ -![# LLaMA Factory](assets/logo.png) - -[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers) -[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE) -[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main) -[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/) -[![Citation](https://img.shields.io/badge/citation-93-green)](#projects-using-llama-factory) -[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls) -[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK) -[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai) -[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing) -[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) -[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board) -[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board) -[![SageMaker](https://img.shields.io/badge/SageMaker-Open%20in%20AWS-blue)](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) - -[![GitHub Tread](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535) - -👋 Join our [WeChat](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg). - -\[ English | [中文](README_zh.md) \] - -**Fine-tuning a large language model can be easy as...** - -https://github.com/user-attachments/assets/7c96b465-9df7-45f4-8053-bf03e58386d3 - -Choose your path: - -- **Documentation (WIP)**: https://llamafactory.readthedocs.io/zh-cn/latest/ -- **Colab**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing -- **Local machine**: Please refer to [usage](#getting-started) -- **PAI-DSW**: [Llama3 Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl) -- **Amazon SageMaker**: [Blog](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) - -Recent activities: - -- **2024/10/18-2024/11/30**: Build a personal tour guide bot using PAI+LLaMA Factory. [[website]](https://developer.aliyun.com/topic/llamafactory2) - -> [!NOTE] -> Except for the above links, all other websites are unauthorized third-party websites. Please carefully use them. - ## Table of Contents - [Features](#features) -- [Benchmark](#benchmark) -- [Changelog](#changelog) - [Supported Models](#supported-models) - [Supported Training Approaches](#supported-training-approaches) - [Provided Datasets](#provided-datasets) @@ -64,152 +21,43 @@ Recent activities: - **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc. 
- **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.
-## Benchmark
-
-Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging the 4-bit quantization technique, LLaMA Factory's QLoRA further improves efficiency in terms of GPU memory usage.
-
-![benchmark](assets/benchmark.svg)
-
-
Definitions - -- **Training Speed**: the number of training samples processed per second during the training. (bs=4, cutoff_len=1024) -- **Rouge Score**: Rouge-2 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024) -- **GPU Memory**: Peak GPU memory usage in 4-bit quantized training. (bs=1, cutoff_len=1024) -- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA tuning. - -
- -## Changelog - -[24/11/27] We supported fine-tuning the **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** model and the **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** dataset. - -[24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#download-from-modelers-hub) for usage. - -[24/09/19] We supported fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models. - -[24/08/30] We supported fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thank [@simonJJJ](https://github.com/simonJJJ)'s PR. - -
Full Changelog - -[24/08/27] We supported **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Try `enable_liger_kernel: true` for efficient training. - -[24/08/09] We supported **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README.md) for usage. Thank [@relic-yuexi](https://github.com/relic-yuexi)'s PR. - -[24/07/04] We supported [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thank [@chuan298](https://github.com/chuan298)'s PR. - -[24/06/16] We supported **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage. - -[24/06/07] We supported fine-tuning the **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** and **[GLM-4](https://github.com/THUDM/GLM-4)** models. - -[24/05/26] We supported **[SimPO](https://arxiv.org/abs/2405.14734)** algorithm for preference learning. See [examples](examples/README.md) for usage. - -[24/05/20] We supported fine-tuning the **PaliGemma** series models. Note that the PaliGemma models are pre-trained models, you need to fine-tune them with `paligemma` template for chat completion. - -[24/05/18] We supported **[KTO](https://arxiv.org/abs/2402.01306)** algorithm for preference learning. See [examples](examples/README.md) for usage. - -[24/05/14] We supported training and inference on the Ascend NPU devices. Check [installation](#installation) section for details. - -[24/04/26] We supported fine-tuning the **LLaVA-1.5** multimodal LLMs. See [examples](examples/README.md) for usage. - -[24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. Two Llama-3-derived models fine-tuned using LLaMA Factory are available at Hugging Face, check [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details. - -[24/04/21] We supported **[Mixture-of-Depths](https://arxiv.org/abs/2404.02258)** according to [AstraMindAI's implementation](https://github.com/astramind-ai/Mixture-of-depths). See [examples](examples/README.md) for usage. - -[24/04/16] We supported **[BAdam](https://arxiv.org/abs/2404.02827)** optimizer. See [examples](examples/README.md) for usage. - -[24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k within 24GB). It achieves **117%** speed and **50%** memory compared with FlashAttention-2, more benchmarks can be found in [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison). - -[24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See [examples](examples/README.md) for usage. - -[24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is available at arXiv! - -[24/03/20] We supported **FSDP+QLoRA** that fine-tunes a 70B model on 2x24GB GPUs. See [examples](examples/README.md) for usage. - -[24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See [examples](examples/README.md) for usage. - -[24/03/07] We supported **[GaLore](https://arxiv.org/abs/2403.03507)** optimizer. See [examples](examples/README.md) for usage. - -[24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster and concurrent inference. 
Try `infer_backend: vllm` to enjoy **270%** inference speed. - -[24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). Try `use_dora: true` to activate DoRA training. - -[24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See [examples](examples/README.md) for usage. - -[24/02/05] Qwen1.5 (Qwen2 beta version) series models are supported in LLaMA-Factory. Check this [blog post](https://qwenlm.github.io/blog/qwen1.5/) for details. - -[24/01/18] We supported **agent tuning** for most models, equipping model with tool using abilities by fine-tuning with `dataset: glaive_toolcall_en`. - -[23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try `use_unsloth: true` argument to activate unsloth patch. It achieves **170%** speed in our benchmark, check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details. - -[23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)** in our framework. See hardware requirement [here](#hardware-requirement). - -[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)**. See [this tutorial](#download-from-modelscope-hub) for usage. - -[23/10/21] We supported **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try `neftune_noise_alpha: 5` argument to activate NEFTune. - -[23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try `shift_attn: true` argument to enable shift short attention. - -[23/09/23] We integrated MMLU, C-Eval and CMMLU benchmarks in this repo. See [examples](examples/README.md) for usage. - -[23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Try `flash_attn: fa2` argument to enable FlashAttention-2 if you are using RTX4090, A100 or H100 GPUs. - -[23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try `rope_scaling: linear` argument in training and `rope_scaling: dynamic` argument at inference to extrapolate the position embeddings. - -[23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [examples](examples/README.md) for usage. - -[23/07/31] We supported **dataset streaming**. Try `streaming: true` and `max_steps: 10000` arguments to load your dataset in streaming mode. - -[23/07/29] We released two instruction-tuned 13B models at Hugging Face. See these Hugging Face Repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details. - -[23/07/18] We developed an **all-in-one Web UI** for training, evaluation and inference. Try `train_web.py` to fine-tune models in your Web browser. Thank [@KanadeSiina](https://github.com/KanadeSiina) and [@codemayq](https://github.com/codemayq) for their efforts in the development. - -[23/07/09] We released **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹, an easy-to-use package for editing the factual knowledge of large language models efficiently. Please follow [FastEdit](https://github.com/hiyouga/FastEdit) if you are interested. 
-
-[23/06/29] We provided a **reproducible example** of training a chat model using instruction-following datasets; see [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft) for details.
-
-[23/06/22] We aligned the [demo API](src/api_demo.py) with [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format so that you can plug the fine-tuned model into **arbitrary ChatGPT-based applications**.
-
-[23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). See [examples](examples/README.md) for usage.
-
-
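Most of the switches mentioned in the changelog above are single keys in a training YAML rather than separate tools. The sketch below is illustrative only: the keys quoted in the changelog (`flash_attn`, `enable_liger_kernel`, `use_dora`, `neftune_noise_alpha`, `rope_scaling`, `dataset: glaive_toolcall_en`, `report_to`) come from the entries above, while the remaining keys and all model, template and path values are assumptions based on the project's example configs.

```yaml
# Sketch of a LoRA SFT config combining several changelog features.
# Model, dataset, template and output values are placeholders.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 32
dataset: glaive_toolcall_en   # agent-tuning dataset named in the changelog
template: llama3
cutoff_len: 1024
output_dir: saves/llama3-8b/lora/sft

flash_attn: fa2               # FlashAttention-2
enable_liger_kernel: true     # Liger Kernel
use_dora: true                # weight-decomposed LoRA (DoRA)
neftune_noise_alpha: 5        # NEFTune
rope_scaling: linear          # RoPE scaling at training time
report_to: wandb              # see "Use W&B Logger" below
```

Each switch is independent and can be dropped without affecting the others; the corresponding changelog entry points to [examples](examples/README.md) for the exact usage.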
- ## Supported Models -| Model | Model size | Template | -| ----------------------------------------------------------------- | -------------------------------- | ---------------- | -| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 | -| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - | -| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 | -| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere | -| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek | -| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon | -| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma | -| [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 | -| [Index](https://huggingface.co/IndexTeam) | 1.9B | index | -| [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 | -| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - | -| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 | -| [Llama 3-3.2](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 | -| [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama | -| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava | -| [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next | -| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video | -| [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 | -| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral | -| [OLMo](https://huggingface.co/allenai) | 1B/7B | - | -| [PaliGemma](https://huggingface.co/google) | 3B | paligemma | -| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - | -| [Phi-3](https://huggingface.co/microsoft) | 4B/14B | phi | -| [Phi-3-small](https://huggingface.co/microsoft) | 7B | phi_small | -| [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral | -| [Qwen/QwQ (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen | -| [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B/72B | qwen2_vl | -| [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 | -| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - | -| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse | -| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi | -| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl | -| [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan | +| Model | Model size | Template | +| --------------------------------------------------------------- | -------------------------------- | ---------------- | +| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 | +| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - | +| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 | +| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere | +| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek | +| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon | +| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma | +| [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 | +| [Index](https://huggingface.co/IndexTeam) | 1.9B | index | +| [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 | +| 
[Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - | +| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 | +| [Llama 3-3.2](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 | +| [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama | +| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava | +| [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next | +| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video | +| [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 | +| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral | +| [OLMo](https://huggingface.co/allenai) | 1B/7B | - | +| [PaliGemma](https://huggingface.co/google) | 3B | paligemma | +| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - | +| [Phi-3](https://huggingface.co/microsoft) | 4B/14B | phi | +| [Phi-3-small](https://huggingface.co/microsoft) | 7B | phi_small | +| [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral | +| [Qwen/QwQ (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen | +| [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B/72B | qwen2_vl | +| [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 | +| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - | +| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse | +| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi | +| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl | +| [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan | > [!NOTE] > For the "base" models, the `template` argument can be chosen from `default`, `alpaca`, `vicuna` etc. But make sure to use the **corresponding template** for the "instruct/chat" models. @@ -222,7 +70,7 @@ You also can add a custom chat template to [template.py](src/llamafactory/data/t ## Supported Training Approaches -| Approach | Full-tuning | Freeze-tuning | LoRA | QLoRA | +| Approach | Full-tuning | Freeze-tuning | LoRA | QLoRA | | ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ | | Pre-Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | Supervised Fine-Tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | @@ -236,95 +84,6 @@ You also can add a custom chat template to [template.py](src/llamafactory/data/t > [!TIP] > The implementation details of PPO can be found in [this blog](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html). -## Provided Datasets - -
Pre-training datasets - -- [Wiki Demo (en)](data/wiki_demo.txt) -- [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) -- [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2) -- [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220) -- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered) -- [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile) -- [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B) -- [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb) -- [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) -- [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack) -- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata) - -
- -
Supervised fine-tuning datasets - -- [Identity (en&zh)](data/identity.json) -- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca) -- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3) -- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) -- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) -- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima) -- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) -- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN) -- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN) -- [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN) -- [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M) -- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M) -- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) -- [UltraChat (en)](https://github.com/thunlp/UltraChat) -- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) -- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) -- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) -- [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca) -- [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca) -- [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) -- [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) -- [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa) -- [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa) -- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn) -- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar) -- [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data) -- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen) -- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k) -- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4) -- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) -- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct) -- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) -- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) -- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) -- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction) -- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo) -- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2) -- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered) -- [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1) -- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub) -- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT) -- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k) -- [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions) -- [Open Assistant 
(de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de) -- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de) -- [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de) -- [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de) -- [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de) -- [Dolphin (de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de) -- [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de) -- [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de) -- [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de) - -
- -
Preference datasets - -- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k) -- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) -- [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset) -- [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback) -- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs) -- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf) -- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar) -- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de) -- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k) - -
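The preference datasets above feed the DPO, KTO and reward-modeling rows of the Supported Training Approaches table. A minimal sketch of a DPO run is shown below, assuming the usual `llamafactory-cli` YAML schema; the dataset name, model and output path are illustrative placeholders rather than recommendations.

```yaml
# Sketch of a DPO run over a preference dataset; the dataset name,
# model and output path are illustrative placeholders.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: dpo
do_train: true
finetuning_type: lora
dataset: dpo_en_demo          # swap in a preference dataset registered in data/dataset_info.json
template: llama3
pref_beta: 0.1
pref_loss: sigmoid            # sigmoid = standard DPO; ORPO and SimPO are related variants
output_dir: saves/llama3-8b/lora/dpo
```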
- Some datasets require confirmation before using them, so we recommend logging in with your Hugging Face account using these commands. ```bash @@ -354,109 +113,20 @@ huggingface-cli login ### Hardware Requirement -\* *estimated* +\* _estimated_ -| Method | Bits | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B | +| Method | Bits | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B | | ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ | | Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB | -| Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB | -| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB | -| LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB | -| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB | -| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB | -| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB | +| Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB | +| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB | +| LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB | +| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB | +| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB | +| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB | ## Getting Started -### Installation - -> [!IMPORTANT] -> Installation is mandatory. - -```bash -git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git -cd LLaMA-Factory -pip install -e ".[torch,metrics]" -``` - -Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, awq, aqlm, vllm, galore, badam, adam-mini, qwen, modelscope, openmind, quality - -> [!TIP] -> Use `pip install --no-deps -e .` to resolve package conflicts. - -
For Windows users
-
-If you want to enable the quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library, which supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.
-
-```bash
-pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
-```
-
-To enable FlashAttention-2 on the Windows platform, you need to install the precompiled `flash-attn` library, which supports CUDA 12.1 to 12.2. Please download the corresponding version from [flash-attention](https://github.com/bdashore3/flash-attention/releases) based on your requirements.
-
-
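Once a working `bitsandbytes` build is installed, QLoRA is switched on from the training config rather than the command line. The sketch below assumes the standard YAML schema used elsewhere in this README; the `quantization_bit` and `quantization_method` keys are the relevant part, while the model, dataset and output values are placeholders to adapt to your own setup.

```yaml
# Sketch of a 4-bit QLoRA config; quantization_bit is what the
# bitsandbytes wheel above unlocks. Other values are placeholders.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
quantization_bit: 4           # 4-bit QLoRA, matching the hardware requirement table above
quantization_method: bitsandbytes
template: llama3
dataset: identity
output_dir: saves/llama3-8b/qlora/sft
```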
- -
For Ascend NPU users - -To install LLaMA Factory on Ascend NPU devices, please specify extra dependencies: `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. Please follow the [installation tutorial](https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/softwareinstall/instg/atlasdeploy_03_0031.html) or use the following commands: - -```bash -# replace the url according to your CANN version and devices -# install CANN Toolkit -wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run -bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install - -# install CANN Kernels -wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run -bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install - -# set env variables -source /usr/local/Ascend/ascend-toolkit/set_env.sh -``` - -| Requirement | Minimum | Recommend | -| ------------ | ------- | ----------- | -| CANN | 8.0.RC1 | 8.0.RC1 | -| torch | 2.1.0 | 2.1.0 | -| torch-npu | 2.1.0 | 2.1.0.post3 | -| deepspeed | 0.13.2 | 0.13.2 | - -Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use. - -If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations. - -Download the pre-built Docker images: [32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html) - -
- -### Data Preparation - -Please refer to [data/README.md](data/README.md) for checking the details about the format of dataset files. You can either use datasets on HuggingFace / ModelScope / Modelers hub or load the dataset in local disk. - -> [!NOTE] -> Please update `data/dataset_info.json` to use your custom dataset. - -### Quickstart - -Use the following 3 commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively. - -```bash -llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml -llamafactory-cli chat examples/inference/llama3_lora_sft.yaml -llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml -``` - -See [examples/README.md](examples/README.md) for advanced usage (including distributed training). - -> [!TIP] -> Use `llamafactory-cli help` to show help information. - -### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio)) - -```bash -llamafactory-cli webui -``` - ### Build Docker For CUDA users: @@ -467,294 +137,34 @@ docker compose up -d docker compose exec llamafactory bash ``` -For Ascend NPU users: +### Installation + +> [!IMPORTANT] +> Installation is mandatory. ```bash -cd docker/docker-npu/ -docker compose up -d -docker compose exec llamafactory bash +git clone --depth 1 http://172.16.10.175:2230/kyy/llm_trainer.git +cd llm_trainer +pip install -e ".[torch,metrics]" ``` -For AMD ROCm users: +Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, awq, aqlm, vllm, galore, badam, adam-mini, qwen, modelscope, openmind, quality + +### Data Preparation + +You can either use datasets on HuggingFace / ModelScope / Modelers hub or load the dataset in local disk. + +> [!NOTE] +> Please update `data/dataset_info.json` to use your custom dataset. + +### SFT Start ```bash -cd docker/docker-rocm/ -docker compose up -d -docker compose exec llamafactory bash +sh run_train/run_sft.sh ``` -
Build without Docker Compose - -For CUDA users: +### PT Start ```bash -docker build -f ./docker/docker-cuda/Dockerfile \ - --build-arg INSTALL_BNB=false \ - --build-arg INSTALL_VLLM=false \ - --build-arg INSTALL_DEEPSPEED=false \ - --build-arg INSTALL_FLASHATTN=false \ - --build-arg PIP_INDEX=https://pypi.org/simple \ - -t llamafactory:latest . - -docker run -dit --gpus=all \ - -v ./hf_cache:/root/.cache/huggingface \ - -v ./ms_cache:/root/.cache/modelscope \ - -v ./om_cache:/root/.cache/openmind \ - -v ./data:/app/data \ - -v ./output:/app/output \ - -p 7860:7860 \ - -p 8000:8000 \ - --shm-size 16G \ - --name llamafactory \ - llamafactory:latest - -docker exec -it llamafactory bash +sh run_train/run_pt.sh ``` - -For Ascend NPU users: - -```bash -# Choose docker image upon your environment -docker build -f ./docker/docker-npu/Dockerfile \ - --build-arg INSTALL_DEEPSPEED=false \ - --build-arg PIP_INDEX=https://pypi.org/simple \ - -t llamafactory:latest . - -# Change `device` upon your resources -docker run -dit \ - -v ./hf_cache:/root/.cache/huggingface \ - -v ./ms_cache:/root/.cache/modelscope \ - -v ./om_cache:/root/.cache/openmind \ - -v ./data:/app/data \ - -v ./output:/app/output \ - -v /usr/local/dcmi:/usr/local/dcmi \ - -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \ - -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \ - -v /etc/ascend_install.info:/etc/ascend_install.info \ - -p 7860:7860 \ - -p 8000:8000 \ - --device /dev/davinci0 \ - --device /dev/davinci_manager \ - --device /dev/devmm_svm \ - --device /dev/hisi_hdc \ - --shm-size 16G \ - --name llamafactory \ - llamafactory:latest - -docker exec -it llamafactory bash -``` - -For AMD ROCm users: - -```bash -docker build -f ./docker/docker-rocm/Dockerfile \ - --build-arg INSTALL_BNB=false \ - --build-arg INSTALL_VLLM=false \ - --build-arg INSTALL_DEEPSPEED=false \ - --build-arg INSTALL_FLASHATTN=false \ - --build-arg PIP_INDEX=https://pypi.org/simple \ - -t llamafactory:latest . - -docker run -dit \ - -v ./hf_cache:/root/.cache/huggingface \ - -v ./ms_cache:/root/.cache/modelscope \ - -v ./om_cache:/root/.cache/openmind \ - -v ./data:/app/data \ - -v ./output:/app/output \ - -v ./saves:/app/saves \ - -p 7860:7860 \ - -p 8000:8000 \ - --device /dev/kfd \ - --device /dev/dri \ - --shm-size 16G \ - --name llamafactory \ - llamafactory:latest - -docker exec -it llamafactory bash -``` - -
- -
Details about volume - -- `hf_cache`: Utilize Hugging Face cache on the host machine. Reassignable if a cache already exists in a different directory. -- `ms_cache`: Similar to Hugging Face cache but for ModelScope users. -- `om_cache`: Similar to Hugging Face cache but for Modelers users. -- `data`: Place datasets on this dir of the host machine so that they can be selected on LLaMA Board GUI. -- `output`: Set export dir to this location so that the merged result can be accessed directly on the host machine. - -
- -### Deploy with OpenAI-style API and vLLM - -```bash -API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml -``` - -> [!TIP] -> Visit [this page](https://platform.openai.com/docs/api-reference/chat/create) for API document. -> -> Examples: [Image understanding](scripts/api_example/test_image.py) | [Function calling](scripts/api_example/test_toolcall.py) - -### Download from ModelScope Hub - -If you have trouble with downloading models and datasets from Hugging Face, you can use ModelScope. - -```bash -export USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows -``` - -Train the model by specifying a model ID of the ModelScope Hub as the `model_name_or_path`. You can find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`. - -### Download from Modelers Hub - -You can also use Modelers Hub to download models and datasets. - -```bash -export USE_OPENMIND_HUB=1 # `set USE_OPENMIND_HUB=1` for Windows -``` - -Train the model by specifying a model ID of the Modelers Hub as the `model_name_or_path`. You can find a full list of model IDs at [Modelers Hub](https://modelers.cn/models), e.g., `TeleAI/TeleChat-7B-pt`. - -### Use W&B Logger - -To use [Weights & Biases](https://wandb.ai) for logging experimental results, you need to add the following arguments to yaml files. - -```yaml -report_to: wandb -run_name: test_run # optional -``` - -Set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching training tasks to log in with your W&B account. - -## Projects using LLaMA Factory - -If you have a project that should be incorporated, please contact via email or create a pull request. - -
Click to show - -1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223) -1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092) -1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526) -1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816) -1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710) -1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319) -1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286) -1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904) -1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625) -1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176) -1. Yang et al. LaCo: Large Language Model Pruning via Layer Collaps. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187) -1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746) -1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801) -1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809) -1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819) -1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204) -1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714) -1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043) -1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333) -1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419) -1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228) -1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073) -1. Zhang et al. 
EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541) -1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246) -1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008) -1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443) -1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604) -1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827) -1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167) -1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316) -1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084) -1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836) -1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581) -1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215) -1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621) -1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140) -1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585) -1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760) -1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378) -1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055) -1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739) -1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816) -1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215) -1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30) -1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380) -1. Wang and Song. 
MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106) -1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136) -1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496) -1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688) -1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955) -1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973) -1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115) -1. Zhu et al. Are Large Language Models Good Statisticians?. 2024. [[arxiv]](https://arxiv.org/abs/2406.07815) -1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099) -1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173) -1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074) -1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408) -1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546) -1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695) -1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233) -1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069) -1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25) -1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949) -1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365) -1. Lin et al. DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470) -1. Bako et al. Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129) -1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044) -1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756) -1. Inouye et al. 
Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/) -1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561) -1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637) -1. Sun et al. LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535) -1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705) -1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137) -1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf) -1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11) -1. Wang et al. Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23) -1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693) -1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168) -1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/) -1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072) -1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611) -1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for Astronomy, based on ChatGLM2-6B and Qwen-14B. -1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in Chinese legal domain, based on Baichuan-13B, is capable of retrieving and reasoning on legal knowledge. -1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B. -1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B. -1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods. -1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generate metadata for stable diffusion. [[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt) -1. 
**[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A multimodal large language model specialized in the Chinese medical domain, based on LLaVA-1.5-7B. -1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models. -1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PCs with NVIDIA RTX GPUs. -1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way to build multi-agent LLM applications, with support for model fine-tuning via LLaMA Factory. -1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full pipeline for RAG retrieval model fine-tuning, inference, and distillation. [[blog]](https://zhuanlan.zhihu.com/p/987727357) - -
- -## License - -This repository is licensed under the [Apache-2.0 License](LICENSE). - -Please follow the model licenses to use the corresponding model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan) - -## Citation - -If this work is helpful, please kindly cite as: - -```bibtex -@inproceedings{zheng2024llamafactory, - title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models}, - author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma}, - booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)}, - address={Bangkok, Thailand}, - publisher={Association for Computational Linguistics}, - year={2024}, - url={http://arxiv.org/abs/2403.13372} -} -``` - -## Acknowledgement - -This repo benefits from [PEFT](https://github.com/huggingface/peft), [TRL](https://github.com/huggingface/trl), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful works. 
- -## Star History - -![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Factory&type=Date) diff --git a/README_zh.md b/README_zh.md deleted file mode 100644 index 7e5b914b..00000000 --- a/README_zh.md +++ /dev/null @@ -1,760 +0,0 @@ -![# LLaMA Factory](assets/logo.png) - -[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers) -[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE) -[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main) -[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/) -[![Citation](https://img.shields.io/badge/citation-93-green)](#使用了-llama-factory-的项目) -[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls) -[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK) -[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai) -[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing) -[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) -[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board) -[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board) -[![SageMaker](https://img.shields.io/badge/SageMaker-Open%20in%20AWS-blue)](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) - -[![GitHub Tread](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535) - -👋 加入我们的[微信群](assets/wechat.jpg)或 [NPU 用户群](assets/wechat_npu.jpg)。 - -\[ [English](README.md) | 中文 \] - -**微调大模型可以像这样轻松…** - -https://github.com/user-attachments/assets/e6ce34b0-52d5-4f3e-a830-592106c4c272 - -选择你的打开方式: - -- **入门教程**:https://zhuanlan.zhihu.com/p/695287607 -- **框架文档**:https://llamafactory.readthedocs.io/zh-cn/latest/ -- **Colab**:https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing -- **本地机器**:请见[如何使用](#如何使用) -- **PAI-DSW**:[Llama3 案例](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL 案例](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl) -- **Amazon SageMaker**:[博客](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) - -近期活动: - -- **2024/10/18-2024/11/30**:使用 PAI+LLaMA Factory 构建个性化导游机器人。[[活动页面]](https://developer.aliyun.com/topic/llamafactory2) - -> [!NOTE] -> 除上述链接以外的其他网站均为未经许可的第三方网站,请小心甄别。 - -## 目录 - -- [项目特色](#项目特色) -- [性能指标](#性能指标) -- [更新日志](#更新日志) -- [模型](#模型) -- [训练方法](#训练方法) -- [数据集](#数据集) -- [软硬件依赖](#软硬件依赖) -- [如何使用](#如何使用) -- [使用了 LLaMA Factory 的项目](#使用了-llama-factory-的项目) -- [协议](#协议) -- [引用](#引用) -- [致谢](#致谢) - -## 项目特色 - -- **多种模型**:LLaMA、LLaVA、Mistral、Mixtral-MoE、Qwen、Qwen2-VL、Yi、Gemma、Baichuan、ChatGLM、Phi 等等。 -- **集成方法**:(增量)预训练、(多模态)指令监督微调、奖励模型训练、PPO 训练、DPO 训练、KTO 训练、ORPO 训练等等。 -- **多种精度**:16 比特全参数微调、冻结微调、LoRA 微调和基于 
AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ 的 2/3/4/5/6/8 比特 QLoRA 微调。 -- **先进算法**:[GaLore](https://github.com/jiaweizzhao/GaLore)、[BAdam](https://github.com/Ledzy/BAdam)、[Adam-mini](https://github.com/zyushun/Adam-mini)、DoRA、LongLoRA、LLaMA Pro、Mixture-of-Depths、LoRA+、LoftQ、PiSSA 和 Agent 微调。 -- **实用技巧**:[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)、[Unsloth](https://github.com/unslothai/unsloth)、[Liger Kernel](https://github.com/linkedin/Liger-Kernel)、RoPE scaling、NEFTune 和 rsLoRA。 -- **实验监控**:LlamaBoard、TensorBoard、Wandb、MLflow 等等。 -- **极速推理**:基于 vLLM 的 OpenAI 风格 API、浏览器界面和命令行接口。 - -## 性能指标 - -与 ChatGLM 官方的 [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning) 微调相比,LLaMA Factory 的 LoRA 微调提供了 **3.7 倍**的加速比,同时在广告文案生成任务上取得了更高的 Rouge 分数。结合 4 比特量化技术,LLaMA Factory 的 QLoRA 微调进一步降低了 GPU 显存消耗。 - -![benchmark](assets/benchmark.svg) - -
变量定义 - -- **Training Speed**: 训练阶段每秒处理的样本数量。(批处理大小=4,截断长度=1024) -- **Rouge Score**: [广告文案生成](https://aclanthology.org/D19-1321.pdf)任务验证集上的 Rouge-2 分数。(批处理大小=4,截断长度=1024) -- **GPU Memory**: 4 比特量化训练的 GPU 显存峰值。(批处理大小=1,截断长度=1024) -- 我们在 ChatGLM 的 P-Tuning 中采用 `pre_seq_len=128`,在 LLaMA Factory 的 LoRA 微调中采用 `lora_rank=32`。 - -
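
按上述定义,测试时的关键训练参数可以近似写成如下配置片段(仅为示意:字段取值对应上文说明,模型、数据集等其余字段从略,完整参数请以 examples 目录中的实际配置为准):

```yaml
# 示意:与上文性能测试设置近似对应的 LoRA 微调参数(非官方基准脚本)
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 32                   # LLaMA Factory LoRA 微调采用 lora_rank=32
cutoff_len: 1024                # 截断长度=1024
per_device_train_batch_size: 4  # 批处理大小=4(测量显存峰值时为 1)
quantization_bit: 4             # 仅在测量 QLoRA 显存峰值时启用
```
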
- -## 更新日志 - -[24/11/27] 我们支持了 **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** 模型的微调和 **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** 数据集。 - -[24/10/09] 我们支持了从 **[魔乐社区](https://modelers.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#从魔乐社区下载)。 - -[24/09/19] 我们支持了 **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** 模型的微调。 - -[24/08/30] 我们支持了 **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** 模型的微调。感谢 [@simonJJJ](https://github.com/simonJJJ) 的 PR。 - -
展开日志 - -[24/08/27] 我们支持了 **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**。请使用 `enable_liger_kernel: true` 来加速训练。 - -[24/08/09] 我们支持了 **[Adam-mini](https://github.com/zyushun/Adam-mini)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。感谢 [@relic-yuexi](https://github.com/relic-yuexi) 的 PR。 - -[24/07/04] 我们支持了[无污染打包训练](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing)。请使用 `neat_packing: true` 参数。感谢 [@chuan298](https://github.com/chuan298) 的 PR。 - -[24/06/16] 我们支持了 **[PiSSA](https://arxiv.org/abs/2404.02948)** 算法。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/06/07] 我们支持了 **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** 和 **[GLM-4](https://github.com/THUDM/GLM-4)** 模型的微调。 - -[24/05/26] 我们支持了 **[SimPO](https://arxiv.org/abs/2405.14734)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/05/20] 我们支持了 **PaliGemma** 系列模型的微调。注意 PaliGemma 是预训练模型,你需要使用 `paligemma` 模板进行微调使其获得对话能力。 - -[24/05/18] 我们支持了 **[KTO](https://arxiv.org/abs/2402.01306)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/05/14] 我们支持了昇腾 NPU 设备的训练和推理。详情请查阅[安装](#安装-llama-factory)部分。 - -[24/04/26] 我们支持了多模态模型 **LLaVA-1.5** 的微调。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/04/22] 我们提供了在免费 T4 GPU 上微调 Llama-3 模型的 **[Colab 笔记本](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)**。Hugging Face 社区公开了两个利用 LLaMA Factory 微调的 Llama-3 模型,详情请见 [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) 和 [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese)。 - -[24/04/21] 我们基于 [AstraMindAI 的仓库](https://github.com/astramind-ai/Mixture-of-depths)支持了 **[混合深度训练](https://arxiv.org/abs/2404.02258)**。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/04/16] 我们支持了 **[BAdam](https://arxiv.org/abs/2404.02827)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/04/16] 我们支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的长序列训练(24GB 可训练 Llama-2-7B-56k)。该方法相比 FlashAttention-2 提供了 **117%** 的训练速度和 **50%** 的显存节约。更多数据请见[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。 - -[24/03/31] 我们支持了 **[ORPO](https://arxiv.org/abs/2403.07691)**。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/03/21] 我们的论文 "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" 可在 arXiv 上查看! 
- -[24/03/20] 我们支持了能在 2x24GB GPU 上微调 70B 模型的 **FSDP+QLoRA**。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/03/13] 我们支持了 **[LoRA+](https://arxiv.org/abs/2402.12354)**。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/03/07] 我们支持了 **[GaLore](https://arxiv.org/abs/2403.03507)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/03/07] 我们集成了 **[vLLM](https://github.com/vllm-project/vllm)** 以实现极速并发推理。请使用 `infer_backend: vllm` 来获得 **270%** 的推理速度。 - -[24/02/28] 我们支持了 **[DoRA](https://arxiv.org/abs/2402.09353)** 微调。请使用 `use_dora: true` 参数进行 DoRA 微调。 - -[24/02/15] 我们支持了 [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro) 提出的**块扩展**方法。详细用法请参照 [examples](examples/README_zh.md)。 - -[24/02/05] Qwen1.5(Qwen2 测试版)系列模型已在 LLaMA-Factory 中实现微调支持。详情请查阅该[博客页面](https://qwenlm.github.io/zh/blog/qwen1.5/)。 - -[24/01/18] 我们针对绝大多数模型实现了 **Agent 微调**,微调时指定 `dataset: glaive_toolcall_zh` 即可使模型获得工具调用能力。 - -[23/12/23] 我们针对 LLaMA, Mistral 和 Yi 模型支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的 LoRA 训练加速。请使用 `use_unsloth: true` 参数启用 unsloth 优化。该方法可提供 **170%** 的训练速度,详情请查阅[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。 - -[23/12/12] 我们支持了微调最新的混合专家模型 **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)**。硬件需求请查阅[此处](#硬件依赖)。 - -[23/12/01] 我们支持了从 **[魔搭社区](https://modelscope.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#从魔搭社区下载)。 - -[23/10/21] 我们支持了 **[NEFTune](https://arxiv.org/abs/2310.05914)** 训练技巧。请使用 `neftune_noise_alpha: 5` 参数启用 NEFTune。 - -[23/09/27] 我们针对 LLaMA 模型支持了 [LongLoRA](https://github.com/dvlab-research/LongLoRA) 提出的 **$S^2$-Attn**。请使用 `shift_attn: true` 参数以启用该功能。 - -[23/09/23] 我们在项目中集成了 MMLU、C-Eval 和 CMMLU 评估集。详细用法请参照 [examples](examples/README_zh.md)。 - -[23/09/10] 我们支持了 **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**。如果您使用的是 RTX4090、A100 或 H100 GPU,请使用 `flash_attn: fa2` 参数以启用 FlashAttention-2。 - -[23/08/12] 我们支持了 **RoPE 插值**来扩展 LLaMA 模型的上下文长度。请使用 `rope_scaling: linear` 参数训练模型或使用 `rope_scaling: dynamic` 参数评估模型。 - -[23/08/11] 我们支持了指令模型的 **[DPO 训练](https://arxiv.org/abs/2305.18290)**。详细用法请参照 [examples](examples/README_zh.md)。 - -[23/07/31] 我们支持了**数据流式加载**。请使用 `streaming: true` 和 `max_steps: 10000` 参数来流式加载数据集。 - -[23/07/29] 我们在 Hugging Face 发布了两个 13B 指令微调模型。详细内容请查阅我们的 Hugging Face 项目([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft))。 - -[23/07/18] 我们开发了支持训练和测试的**浏览器一体化界面**。请使用 `train_web.py` 在您的浏览器中微调模型。感谢 [@KanadeSiina](https://github.com/KanadeSiina) 和 [@codemayq](https://github.com/codemayq) 在该功能开发中付出的努力。 - -[23/07/09] 我们开源了 **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹,一个简单易用的、能迅速编辑大模型事实记忆的工具包。如果您感兴趣请关注我们的 [FastEdit](https://github.com/hiyouga/FastEdit) 项目。 - -[23/06/29] 我们提供了一个**可复现的**指令模型微调示例,详细内容请查阅 [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft)。 - -[23/06/22] 我们对齐了[示例 API](src/api_demo.py) 与 [OpenAI API](https://platform.openai.com/docs/api-reference/chat) 的格式,您可以将微调模型接入**任意基于 ChatGPT 的应用**中。 - -[23/06/03] 我们实现了 4 比特的 LoRA 训练(也称 **[QLoRA](https://github.com/artidoro/qlora)**)。详细用法请参照 [examples](examples/README_zh.md)。 - -
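
上述日志中提到的多数训练技巧都通过 yaml 参数开启。下面把其中几个开关汇总成一个示意片段,方便对照查找(仅说明参数写法,并非建议同时启用;参数名均取自上文日志原文,其余训练字段从略):

```yaml
# 示意:更新日志中出现过的部分可选开关(请按需单独启用)
flash_attn: fa2              # FlashAttention-2
enable_liger_kernel: true    # Liger Kernel 加速
use_unsloth: true            # unsloth LoRA 训练加速
use_dora: true               # DoRA 微调
neat_packing: true           # 无污染打包训练
shift_attn: true             # LongLoRA 的 S^2-Attn
rope_scaling: linear         # RoPE 插值扩展上下文
neftune_noise_alpha: 5       # NEFTune
```
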
- -## 模型 - -| 模型名 | 模型大小 | Template | -| ----------------------------------------------------------------- | -------------------------------- | ---------------- | -| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 | -| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - | -| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 | -| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere | -| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek | -| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon | -| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma | -| [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 | -| [Index](https://huggingface.co/IndexTeam) | 1.9B | index | -| [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 | -| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - | -| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 | -| [Llama 3-3.2](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 | -| [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama | -| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava | -| [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next | -| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video | -| [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 | -| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral | -| [OLMo](https://huggingface.co/allenai) | 1B/7B | - | -| [PaliGemma](https://huggingface.co/google) | 3B | paligemma | -| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - | -| [Phi-3](https://huggingface.co/microsoft) | 4B/7B/14B | phi | -| [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral | -| [Qwen/QwQ (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen | -| [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B/72B | qwen2_vl | -| [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 | -| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - | -| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse | -| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi | -| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl | -| [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan | - -> [!NOTE] -> 对于所有“基座”(Base)模型,`template` 参数可以是 `default`, `alpaca`, `vicuna` 等任意值。但“对话”(Instruct/Chat)模型请务必使用**对应的模板**。 -> -> 请务必在训练和推理时采用**完全一致**的模板。 - -项目所支持模型的完整列表请参阅 [constants.py](src/llamafactory/extras/constants.py)。 - -您也可以在 [template.py](src/llamafactory/data/template.py) 中添加自己的对话模板。 - -## 训练方法 - -| 方法 | 全参数训练 | 部分参数训练 | LoRA | QLoRA | -| ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ | -| 预训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| 指令监督微调 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| 奖励模型训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| PPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| DPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| KTO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| 
ORPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| SimPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | - -> [!TIP] -> 有关 PPO 的实现细节,请参考[此博客](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html)。 - -## 数据集 - -
预训练数据集 - -- [Wiki Demo (en)](data/wiki_demo.txt) -- [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) -- [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2) -- [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220) -- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered) -- [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile) -- [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B) -- [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb) -- [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) -- [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack) -- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata) - -
- -
指令微调数据集 - -- [Identity (en&zh)](data/identity.json) -- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca) -- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3) -- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) -- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) -- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima) -- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) -- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN) -- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN) -- [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN) -- [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M) -- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M) -- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) -- [UltraChat (en)](https://github.com/thunlp/UltraChat) -- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) -- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) -- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) -- [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca) -- [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca) -- [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) -- [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) -- [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa) -- [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa) -- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn) -- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar) -- [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data) -- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen) -- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k) -- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4) -- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) -- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct) -- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) -- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) -- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) -- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction) -- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo) -- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2) -- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered) -- [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1) -- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub) -- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT) -- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k) -- [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions) -- [Open Assistant 
(de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de) -- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de) -- [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de) -- [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de) -- [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de) -- [Dolphin (de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de) -- [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de) -- [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de) -- [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de) - -
- -
偏好数据集 - -- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k) -- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) -- [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset) -- [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback) -- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs) -- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf) -- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar) -- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de) -- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k) - -
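
偏好数据集通常配合 DPO、KTO 等对齐阶段使用。下面是一个示意性的 DPO 训练片段(仅演示 `stage` 与偏好数据的搭配方式;数据集注册名需以 data/dataset_info.json 为准,此处的数据集与模型名称均为假设,完整示例见 examples 目录):

```yaml
# 示意:使用偏好数据集进行 DPO 训练(字段组合为假设)
stage: dpo
do_train: true
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # 假设的模型
template: llama3
finetuning_type: lora
dataset: dpo_en_demo   # 假设:偏好数据在 dataset_info.json 中的注册名
```
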
- -部分数据集的使用需要确认,我们推荐使用下述命令登录您的 Hugging Face 账户。 - -```bash -pip install --upgrade huggingface_hub -huggingface-cli login -``` - -## 软硬件依赖 - -| 必需项 | 至少 | 推荐 | -| ------------ | ------- | --------- | -| python | 3.8 | 3.11 | -| torch | 1.13.1 | 2.4.0 | -| transformers | 4.41.2 | 4.43.4 | -| datasets | 2.16.0 | 2.20.0 | -| accelerate | 0.30.1 | 0.32.0 | -| peft | 0.11.1 | 0.12.0 | -| trl | 0.8.6 | 0.9.6 | - -| 可选项 | 至少 | 推荐 | -| ------------ | ------- | --------- | -| CUDA | 11.6 | 12.2 | -| deepspeed | 0.10.0 | 0.14.0 | -| bitsandbytes | 0.39.0 | 0.43.1 | -| vllm | 0.4.3 | 0.5.0 | -| flash-attn | 2.3.0 | 2.6.3 | - -### 硬件依赖 - -\* *估算值* - -| 方法 | 精度 | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B | -| ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ | -| Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB | -| Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB | -| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB | -| LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB | -| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB | -| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB | -| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB | - -## 如何使用 - -### 安装 LLaMA Factory - -> [!IMPORTANT] -> 此步骤为必需。 - -```bash -git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git -cd LLaMA-Factory -pip install -e ".[torch,metrics]" -``` - -可选的额外依赖项:torch、torch-npu、metrics、deepspeed、liger-kernel、bitsandbytes、hqq、eetq、gptq、awq、aqlm、vllm、galore、badam、adam-mini、qwen、modelscope、openmind、quality - -> [!TIP] -> 遇到包冲突时,可使用 `pip install --no-deps -e .` 解决。 - -
Windows 用户指南 - -如果要在 Windows 平台上开启量化 LoRA(QLoRA),需要安装预编译的 `bitsandbytes` 库,支持 CUDA 11.1 到 12.2,请根据您的 CUDA 版本情况选择适合的[发布版本](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels)。 - -```bash -pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl -``` - -如果要在 Windows 平台上开启 FlashAttention-2,需要安装预编译的 `flash-attn` 库,支持 CUDA 12.1 到 12.2,请根据需求到 [flash-attention](https://github.com/bdashore3/flash-attention/releases) 下载对应版本安装。 - -
- -
昇腾 NPU 用户指南 - -在昇腾 NPU 设备上安装 LLaMA Factory 时,需要指定额外依赖项,使用 `pip install -e ".[torch-npu,metrics]"` 命令安装。此外,还需要安装 **[Ascend CANN Toolkit 与 Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**,安装方法请参考[安装教程](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/80RC2alpha002/quickstart/quickstart/quickstart_18_0004.html)或使用以下命令: - -```bash -# 请替换 URL 为 CANN 版本和设备型号对应的 URL -# 安装 CANN Toolkit -wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run -bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install - -# 安装 CANN Kernels -wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run -bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install - -# 设置环境变量 -source /usr/local/Ascend/ascend-toolkit/set_env.sh -``` - -| 依赖项 | 至少 | 推荐 | -| ------------ | ------- | ----------- | -| CANN | 8.0.RC1 | 8.0.RC1 | -| torch | 2.1.0 | 2.1.0 | -| torch-npu | 2.1.0 | 2.1.0.post3 | -| deepspeed | 0.13.2 | 0.13.2 | - -请使用 `ASCEND_RT_VISIBLE_DEVICES` 而非 `CUDA_VISIBLE_DEVICES` 来指定运算设备。 - -如果遇到无法正常推理的情况,请尝试设置 `do_sample: false`。 - -下载预构建 Docker 镜像:[32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html) - -
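
上文提到的 `do_sample: false` 属于生成参数,可直接写入推理所用的 yaml,例如(仅为示意,模型与模板为假设,其余字段从略):

```yaml
# 示意:NPU 推理异常时关闭采样(字段组合为假设)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # 假设的模型
template: llama3
infer_backend: huggingface
do_sample: false   # 按上文建议关闭采样
```
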
- -### 数据准备 - -关于数据集文件的格式,请参考 [data/README_zh.md](data/README_zh.md) 的内容。你可以使用 HuggingFace / ModelScope / Modelers 上的数据集或加载本地数据集。 - -> [!NOTE] -> 使用自定义数据集时,请更新 `data/dataset_info.json` 文件。 - -### 快速开始 - -下面三行命令分别对 Llama3-8B-Instruct 模型进行 LoRA **微调**、**推理**和**合并**。 - -```bash -llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml -llamafactory-cli chat examples/inference/llama3_lora_sft.yaml -llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml -``` - -高级用法请参考 [examples/README_zh.md](examples/README_zh.md)(包括多 GPU 微调)。 - -> [!TIP] -> 使用 `llamafactory-cli help` 显示帮助信息。 - -### LLaMA Board 可视化微调(由 [Gradio](https://github.com/gradio-app/gradio) 驱动) - -```bash -llamafactory-cli webui -``` - -### 构建 Docker - -CUDA 用户: - -```bash -cd docker/docker-cuda/ -docker compose up -d -docker compose exec llamafactory bash -``` - -昇腾 NPU 用户: - -```bash -cd docker/docker-npu/ -docker compose up -d -docker compose exec llamafactory bash -``` - -AMD ROCm 用户: - -```bash -cd docker/docker-rocm/ -docker compose up -d -docker compose exec llamafactory bash -``` - -
不使用 Docker Compose 构建 - -CUDA 用户: - -```bash -docker build -f ./docker/docker-cuda/Dockerfile \ - --build-arg INSTALL_BNB=false \ - --build-arg INSTALL_VLLM=false \ - --build-arg INSTALL_DEEPSPEED=false \ - --build-arg INSTALL_FLASHATTN=false \ - --build-arg PIP_INDEX=https://pypi.org/simple \ - -t llamafactory:latest . - -docker run -dit --gpus=all \ - -v ./hf_cache:/root/.cache/huggingface \ - -v ./ms_cache:/root/.cache/modelscope \ - -v ./om_cache:/root/.cache/openmind \ - -v ./data:/app/data \ - -v ./output:/app/output \ - -p 7860:7860 \ - -p 8000:8000 \ - --shm-size 16G \ - --name llamafactory \ - llamafactory:latest - -docker exec -it llamafactory bash -``` - -昇腾 NPU 用户: - -```bash -# 根据您的环境选择镜像 -docker build -f ./docker/docker-npu/Dockerfile \ - --build-arg INSTALL_DEEPSPEED=false \ - --build-arg PIP_INDEX=https://pypi.org/simple \ - -t llamafactory:latest . - -# 根据您的资源更改 `device` -docker run -dit \ - -v ./hf_cache:/root/.cache/huggingface \ - -v ./ms_cache:/root/.cache/modelscope \ - -v ./om_cache:/root/.cache/openmind \ - -v ./data:/app/data \ - -v ./output:/app/output \ - -v /usr/local/dcmi:/usr/local/dcmi \ - -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \ - -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \ - -v /etc/ascend_install.info:/etc/ascend_install.info \ - -p 7860:7860 \ - -p 8000:8000 \ - --device /dev/davinci0 \ - --device /dev/davinci_manager \ - --device /dev/devmm_svm \ - --device /dev/hisi_hdc \ - --shm-size 16G \ - --name llamafactory \ - llamafactory:latest - -docker exec -it llamafactory bash -``` - -AMD ROCm 用户: - -```bash -docker build -f ./docker/docker-rocm/Dockerfile \ - --build-arg INSTALL_BNB=false \ - --build-arg INSTALL_VLLM=false \ - --build-arg INSTALL_DEEPSPEED=false \ - --build-arg INSTALL_FLASHATTN=false \ - --build-arg PIP_INDEX=https://pypi.org/simple \ - -t llamafactory:latest . - -docker run -dit \ - -v ./hf_cache:/root/.cache/huggingface \ - -v ./ms_cache:/root/.cache/modelscope \ - -v ./om_cache:/root/.cache/openmind \ - -v ./data:/app/data \ - -v ./output:/app/output \ - -v ./saves:/app/saves \ - -p 7860:7860 \ - -p 8000:8000 \ - --device /dev/kfd \ - --device /dev/dri \ - --shm-size 16G \ - --name llamafactory \ - llamafactory:latest - -docker exec -it llamafactory bash -``` - -
- -
数据卷详情 - -- `hf_cache`:使用宿主机的 Hugging Face 缓存文件夹,允许更改为新的目录。 -- `ms_cache`:类似 Hugging Face 缓存文件夹,为 ModelScope 用户提供。 -- `om_cache`:类似 Hugging Face 缓存文件夹,为 Modelers 用户提供。 -- `data`:宿主机中存放数据集的文件夹路径。 -- `output`:将导出目录设置为该路径后,即可在宿主机中访问导出后的模型。 - -
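
这些目录映射在上文 `docker run` 命令中通过 `-v` 传入;若改用 Compose 方式,对应的卷配置大致如下(仅为示意,并非仓库内 docker-compose.yml 的原文):

```yaml
# 示意:与上述数据卷对应的 Compose 卷映射(路径沿用上文 docker run 示例)
services:
  llamafactory:
    volumes:
      - ./hf_cache:/root/.cache/huggingface
      - ./ms_cache:/root/.cache/modelscope
      - ./om_cache:/root/.cache/openmind
      - ./data:/app/data
      - ./output:/app/output
```
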
- -### 利用 vLLM 部署 OpenAI API - -```bash -API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml -``` - -> [!TIP] -> API 文档请查阅[这里](https://platform.openai.com/docs/api-reference/chat/create)。 -> -> 示例:[图像理解](scripts/api_example/test_image.py) | [工具调用](scripts/api_example/test_toolcall.py) - -### 从魔搭社区下载 - -如果您在 Hugging Face 模型和数据集的下载中遇到了问题,可以通过下述方法使用魔搭社区。 - -```bash -export USE_MODELSCOPE_HUB=1 # Windows 使用 `set USE_MODELSCOPE_HUB=1` -``` - -将 `model_name_or_path` 设置为模型 ID 来加载对应的模型。在[魔搭社区](https://modelscope.cn/models)查看所有可用的模型,例如 `LLM-Research/Meta-Llama-3-8B-Instruct`。 - -### 从魔乐社区下载 - -您也可以通过下述方法,使用魔乐社区下载数据集和模型。 - -```bash -export USE_OPENMIND_HUB=1 # Windows 使用 `set USE_OPENMIND_HUB=1` -``` - -将 `model_name_or_path` 设置为模型 ID 来加载对应的模型。在[魔乐社区](https://modelers.cn/models)查看所有可用的模型,例如 `TeleAI/TeleChat-7B-pt`。 - -### 使用 W&B 面板 - -若要使用 [Weights & Biases](https://wandb.ai) 记录实验数据,请在 yaml 文件中添加下面的参数。 - -```yaml -report_to: wandb -run_name: test_run # 可选 -``` - -在启动训练任务时,将 `WANDB_API_KEY` 设置为[密钥](https://wandb.ai/authorize)来登录 W&B 账户。 - -## 使用了 LLaMA Factory 的项目 - -如果您有项目希望添加至下述列表,请通过邮件联系或者创建一个 PR。 - -
点击显示 - -1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223) -1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092) -1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526) -1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816) -1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710) -1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319) -1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286) -1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904) -1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625) -1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176) -1. Yang et al. LaCo: Large Language Model Pruning via Layer Collaps. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187) -1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746) -1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801) -1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809) -1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819) -1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204) -1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714) -1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043) -1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333) -1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419) -1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228) -1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073) -1. Zhang et al. 
EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541) -1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246) -1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008) -1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443) -1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604) -1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827) -1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167) -1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316) -1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084) -1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836) -1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581) -1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215) -1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621) -1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140) -1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585) -1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760) -1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378) -1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055) -1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739) -1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816) -1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215) -1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30) -1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380) -1. Wang and Song. 
MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106) -1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136) -1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496) -1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688) -1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955) -1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973) -1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115) -1. Zhu et al. Are Large Language Models Good Statisticians?. 2024. [[arxiv]](https://arxiv.org/abs/2406.07815) -1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099) -1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173) -1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074) -1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408) -1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546) -1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695) -1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233) -1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069) -1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25) -1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949) -1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365) -1. Lin et al. DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470) -1. Bako et al. Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129) -1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044) -1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756) -1. Inouye et al. 
Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/) -1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561) -1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637) -1. Sun et al. LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535) -1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705) -1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137) -1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf) -1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11) -1. Wang et al. Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23) -1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693) -1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168) -1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/) -1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072) -1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611) -1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: 天文大模型 StarWhisper,基于 ChatGLM2-6B 和 Qwen-14B 在天文数据上微调而得。 -1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: 中文法律领域大模型 DISC-LawLLM,基于 Baichuan-13B 微调而得,具有法律推理和知识检索能力。 -1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: 孙思邈中文医疗大模型 Sumsimiao,基于 Baichuan-7B 和 ChatGLM-6B 在中文医疗数据上微调而得。 -1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: 医疗大模型项目 CareGPT,基于 LLaMA2-7B 和 Baichuan-13B 在中文医疗数据上微调而得。 -1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**:MBTI性格大模型项目,根据数据集与训练方式让任意 LLM 拥有 16 个不同的性格类型。 -1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**:一个用于生成 Stable Diffusion 提示词的大型语言模型。[[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt) -1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**:中文多模态医学大模型,基于 LLaVA-1.5-7B 在中文多模态医疗数据上微调而得。 -1. **[AutoRE](https://github.com/THUDM/AutoRE)**:基于大语言模型的文档级关系抽取系统。 -1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**:在 Windows 主机上利用英伟达 RTX 设备进行大型语言模型微调的开发包。 -1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**:一个低代码构建多 Agent 大模型应用的开发工具,支持基于 LLaMA Factory 的模型微调. -1. 
**[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**:一个全链路 RAG 检索模型微调、推理和蒸馏代码库。[[blog]](https://zhuanlan.zhihu.com/p/987727357) - -
- -## 协议 - -本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源。 - -使用模型权重时,请遵循对应的模型协议:[Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan) - -## 引用 - -如果您觉得此项目有帮助,请考虑以下列格式引用 - -```bibtex -@inproceedings{zheng2024llamafactory, - title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models}, - author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma}, - booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)}, - address={Bangkok, Thailand}, - publisher={Association for Computational Linguistics}, - year={2024}, - url={http://arxiv.org/abs/2403.13372} -} -``` - -## 致谢 - -本项目受益于 [PEFT](https://github.com/huggingface/peft)、[TRL](https://github.com/huggingface/trl)、[QLoRA](https://github.com/artidoro/qlora) 和 [FastChat](https://github.com/lm-sys/FastChat),感谢以上诸位作者的付出。 - -## Star History - -![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Factory&type=Date) diff --git a/assets/benchmark.svg b/assets/benchmark.svg deleted file mode 100644 index e2b1db48..00000000 --- a/assets/benchmark.svg +++ /dev/null @@ -1,1216 +0,0 @@ - - - - - - - - 2023-11-18T11:28:03.028228 - image/svg+xml - - - Matplotlib v3.7.1, https://matplotlib.org/ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
diff --git a/assets/logo.png b/assets/logo.png deleted file mode 100644 index 5fb3dd56..00000000 Binary files a/assets/logo.png and /dev/null differ diff --git a/assets/wechat.jpg b/assets/wechat.jpg deleted file mode 100644 index 79cdc21a..00000000 Binary files a/assets/wechat.jpg and /dev/null differ diff --git a/assets/wechat_npu.jpg b/assets/wechat_npu.jpg deleted file mode 100644 index 5104d61c..00000000 Binary files a/assets/wechat_npu.jpg and /dev/null differ diff --git a/configs/ds2_config_bf.json b/configs/ds2_config_bf.json new file mode 100644 index 00000000..1bfc7279 --- /dev/null +++ b/configs/ds2_config_bf.json @@ -0,0 +1,27 @@ +{ + "train_batch_size": "auto", + "train_micro_batch_size_per_gpu": "auto", + "gradient_accumulation_steps": "auto", + "gradient_clipping": "auto", + "zero_allow_untested_optimizer": true, + "fp16": { + "enabled": "auto", + "loss_scale": 0, + "initial_scale_power": 16, + "loss_scale_window": 1000, + "hysteresis": 2, + "min_loss_scale": 1 + }, + "zero_optimization": { + "stage": 2, + "allgather_partitions": true, + "allgather_bucket_size": 5e8, + "reduce_scatter": true, + "reduce_bucket_size": 5e8, + "overlap_comm": false, + "contiguous_gradients": true + }, + "bf16": { + "enabled": "auto" + } + } \ No newline at end of file diff --git a/configs/ds3_bf_stage_v2.json b/configs/ds3_bf_stage_v2.json new file mode 100644 index 00000000..54f512aa --- /dev/null +++ b/configs/ds3_bf_stage_v2.json @@ -0,0 +1,27 @@ +{ + "train_batch_size": "auto", + "train_micro_batch_size_per_gpu": "auto", + "gradient_accumulation_steps": "auto", + "gradient_clipping": "auto", + "zero_allow_untested_optimizer": true, + "bf16": { + "enabled": "auto", + "loss_scale": 0, + "loss_scale_window": 1000, + "initial_scale_power": 16, + "hysteresis": 2, + "min_loss_scale": 1 + }, + "zero_optimization": { + "stage": 3, + "overlap_comm": true, + "contiguous_gradients": true, + "allgather_bucket_size": 5e8, + "reduce_bucket_size": 5e8, + "stage3_prefetch_bucket_size": "auto", + "stage3_param_persistence_threshold": "auto", + "stage3_max_live_parameters": 1e9, + "stage3_max_reuse_distance": 1e9, + "stage3_gather_16bit_weights_on_model_save": true + } +} \ No newline at end of file diff --git a/configs/ds3_fp_stage_v2.json b/configs/ds3_fp_stage_v2.json new file mode 100644 index 00000000..2adb8871 --- /dev/null +++ b/configs/ds3_fp_stage_v2.json @@ -0,0 +1,27 @@ +{ + "train_batch_size": "auto", + "train_micro_batch_size_per_gpu": "auto", + "gradient_accumulation_steps": "auto", + "gradient_clipping": "auto", + "zero_allow_untested_optimizer": true, + "fp16": { + "enabled": "auto", + "loss_scale": 0, + "loss_scale_window": 1000, + "initial_scale_power": 16, + "hysteresis": 2, + "min_loss_scale": 1 + }, + "zero_optimization": { + "stage": 3, + "overlap_comm": true, + "contiguous_gradients": true, + "allgather_bucket_size": 5e8, + "reduce_bucket_size": 5e8, + "stage3_prefetch_bucket_size": "auto", + "stage3_param_persistence_threshold": "auto", + "stage3_max_live_parameters": 1e9, + "stage3_max_reuse_distance": 1e9, + "stage3_gather_16bit_weights_on_model_save": true + } +} \ No newline at end of file diff --git a/configs/ds_zero3.json b/configs/ds_zero3.json new file mode 100644 index 00000000..dcc78a2f --- /dev/null +++ b/configs/ds_zero3.json @@ -0,0 +1,31 @@ +{ + "train_batch_size": "auto", + "train_micro_batch_size_per_gpu": "auto", + "gradient_accumulation_steps": "auto", + "gradient_clipping": "auto", +
"zero_allow_untested_optimizer": true, + "fp16": { + "enabled": "auto", + "loss_scale": 0, + "loss_scale_window": 1000, + "initial_scale_power": 16, + "hysteresis": 2, + "min_loss_scale": 1 + }, + "zero_optimization": { + "stage": 3, + "offload_optimizer": { + "device": "cpu", + "pin_memory": true + }, + "overlap_comm": true, + "contiguous_gradients": true, + "allgather_bucket_size": 1e9, + "reduce_bucket_size": 1e9, + "stage3_prefetch_bucket_size": "auto", + "stage3_param_persistence_threshold": "auto", + "stage3_max_live_parameters": 2e9, + "stage3_max_reuse_distance": 2e9, + "stage3_gather_16bit_weights_on_model_save": true + } +} \ No newline at end of file diff --git a/configs/fp_ds3_stage_v2.json b/configs/fp_ds3_stage_v2.json new file mode 100644 index 00000000..54f512aa --- /dev/null +++ b/configs/fp_ds3_stage_v2.json @@ -0,0 +1,27 @@ +{ + "train_batch_size": "auto", + "train_micro_batch_size_per_gpu": "auto", + "gradient_accumulation_steps": "auto", + "gradient_clipping": "auto", + "zero_allow_untested_optimizer": true, + "bf16": { + "enabled": "auto", + "loss_scale": 0, + "loss_scale_window": 1000, + "initial_scale_power": 16, + "hysteresis": 2, + "min_loss_scale": 1 + }, + "zero_optimization": { + "stage": 3, + "overlap_comm": true, + "contiguous_gradients": true, + "allgather_bucket_size": 5e8, + "reduce_bucket_size": 5e8, + "stage3_prefetch_bucket_size": "auto", + "stage3_param_persistence_threshold": "auto", + "stage3_max_live_parameters": 1e9, + "stage3_max_reuse_distance": 1e9, + "stage3_gather_16bit_weights_on_model_save": true + } +} \ No newline at end of file diff --git a/data/README.md b/data/README.md deleted file mode 100644 index 1786804f..00000000 --- a/data/README.md +++ /dev/null @@ -1,419 +0,0 @@ -The [dataset_info.json](dataset_info.json) contains all available datasets. If you are using a custom dataset, please **make sure** to add a *dataset description* in `dataset_info.json` and specify `dataset: dataset_name` before training to use it. - -Currently we support datasets in **alpaca** and **sharegpt** format. - -```json -"dataset_name": { - "hf_hub_url": "the name of the dataset repository on the Hugging Face hub. (if specified, ignore script_url and file_name)", - "ms_hub_url": "the name of the dataset repository on the Model Scope hub. (if specified, ignore script_url and file_name)", - "script_url": "the name of the directory containing a dataset loading script. (if specified, ignore file_name)", - "file_name": "the name of the dataset folder or dataset file in this directory. (required if above are not specified)", - "formatting": "the format of the dataset. (optional, default: alpaca, can be chosen from {alpaca, sharegpt})", - "ranking": "whether the dataset is a preference dataset or not. (default: False)", - "subset": "the name of the subset. (optional, default: None)", - "split": "the name of dataset split to be used. (optional, default: train)", - "folder": "the name of the folder of the dataset repository on the Hugging Face hub. (optional, default: None)", - "num_samples": "the number of samples in the dataset to be used. (optional, default: None)", - "columns (optional)": { - "prompt": "the column name in the dataset containing the prompts. (default: instruction)", - "query": "the column name in the dataset containing the queries. (default: input)", - "response": "the column name in the dataset containing the responses. (default: output)", - "history": "the column name in the dataset containing the histories. 
(default: None)", - "messages": "the column name in the dataset containing the messages. (default: conversations)", - "system": "the column name in the dataset containing the system prompts. (default: None)", - "tools": "the column name in the dataset containing the tool description. (default: None)", - "images": "the column name in the dataset containing the image inputs. (default: None)", - "videos": "the column name in the dataset containing the video inputs. (default: None)", - "chosen": "the column name in the dataset containing the chosen answers. (default: None)", - "rejected": "the column name in the dataset containing the rejected answers. (default: None)", - "kto_tag": "the column name in the dataset containing the kto tags. (default: None)" - }, - "tags (optional, used for the sharegpt format)": { - "role_tag": "the key in the message that represents the speaker's identity. (default: from)", - "content_tag": "the key in the message that represents the message content. (default: value)", - "user_tag": "the value of the role_tag that represents the user. (default: human)", - "assistant_tag": "the value of the role_tag that represents the assistant. (default: gpt)", - "observation_tag": "the value of the role_tag that represents the tool results. (default: observation)", - "function_tag": "the value of the role_tag that represents the function call. (default: function_call)", - "system_tag": "the value of the role_tag that represents the system prompt. (default: system, can override the system column)" - } -} -``` - -## Alpaca Format - -### Supervised Fine-Tuning Dataset - -- [Example dataset](alpaca_en_demo.json) - -In supervised fine-tuning, the `instruction` column will be concatenated with the `input` column and used as the human prompt, i.e. the human prompt will be `instruction\ninput`. The `output` column represents the model response. - -The `system` column will be used as the system prompt if specified. - -The `history` column is a list of string pairs representing the prompt-response turns of the previous rounds. Note that the responses in the history **will also be learned by the model** in supervised fine-tuning. - -```json -[ - { - "instruction": "human instruction (required)", - "input": "human input (optional)", - "output": "model response (required)", - "system": "system prompt (optional)", - "history": [ - ["human instruction in the first round (optional)", "model response in the first round (optional)"], - ["human instruction in the second round (optional)", "model response in the second round (optional)"] - ] - } -] -``` - -For the above dataset, the *dataset description* in `dataset_info.json` should be: - -```json -"dataset_name": { - "file_name": "data.json", - "columns": { - "prompt": "instruction", - "query": "input", - "response": "output", - "system": "system", - "history": "history" - } -} -``` - -### Pre-training Dataset - -- [Example dataset](c4_demo.json) - -In pre-training, only the `text` column will be used for model learning. - -```json -[ - {"text": "document"}, - {"text": "document"} -] -``` - -For the above dataset, the *dataset description* in `dataset_info.json` should be: - -```json -"dataset_name": { - "file_name": "data.json", - "columns": { - "prompt": "text" - } -} -``` - -### Preference Dataset - -Preference datasets are used for reward modeling, DPO training, ORPO training and SimPO training. - -A preference dataset requires a better response in the `chosen` column and a worse response in the `rejected` column.
- -```json -[ - { - "instruction": "human instruction (required)", - "input": "human input (optional)", - "chosen": "chosen answer (required)", - "rejected": "rejected answer (required)" - } -] -``` - -For the above dataset, the *dataset description* in `dataset_info.json` should be: - -```json -"dataset_name": { - "file_name": "data.json", - "ranking": true, - "columns": { - "prompt": "instruction", - "query": "input", - "chosen": "chosen", - "rejected": "rejected" - } -} -``` - -### KTO Dataset - -An additional column `kto_tag` is required. Please refer to the [sharegpt](#sharegpt-format) format for details. - -### Multimodal Image Dataset - -An additional column `images` is required. Please refer to the [sharegpt](#sharegpt-format) format for details. - -### Multimodal Video Dataset - -An additional column `videos` is required. Please refer to the [sharegpt](#sharegpt-format) format for details. - -## Sharegpt Format - -### Supervised Fine-Tuning Dataset - -- [Example dataset](glaive_toolcall_en_demo.json) - -Compared to the alpaca format, the sharegpt format allows the dataset to have **more roles**, such as human, gpt, observation and function. The messages are presented as a list of objects in the `conversations` column. - -Note that the human and observation messages should appear in odd positions (1st, 3rd, ...), while the gpt and function messages should appear in even positions (2nd, 4th, ...). - -```json -[ - { - "conversations": [ - { - "from": "human", - "value": "human instruction" - }, - { - "from": "function_call", - "value": "tool arguments" - }, - { - "from": "observation", - "value": "tool result" - }, - { - "from": "gpt", - "value": "model response" - } - ], - "system": "system prompt (optional)", - "tools": "tool description (optional)" - } -] -``` - -For the above dataset, the *dataset description* in `dataset_info.json` should be: - -```json -"dataset_name": { - "file_name": "data.json", - "formatting": "sharegpt", - "columns": { - "messages": "conversations", - "system": "system", - "tools": "tools" - } -} -``` - -### Pre-training Dataset - -Not yet supported; please use the [alpaca](#alpaca-format) format. - -### Preference Dataset - -- [Example dataset](dpo_en_demo.json) - -Preference datasets in sharegpt format also require a better message in the `chosen` column and a worse message in the `rejected` column. - -```json -[ - { - "conversations": [ - { - "from": "human", - "value": "human instruction" - }, - { - "from": "gpt", - "value": "model response" - }, - { - "from": "human", - "value": "human instruction" - } - ], - "chosen": { - "from": "gpt", - "value": "chosen answer (required)" - }, - "rejected": { - "from": "gpt", - "value": "rejected answer (required)" - } - } -] -``` - -For the above dataset, the *dataset description* in `dataset_info.json` should be: - -```json -"dataset_name": { - "file_name": "data.json", - "formatting": "sharegpt", - "ranking": true, - "columns": { - "messages": "conversations", - "chosen": "chosen", - "rejected": "rejected" - } -} -``` - -### KTO Dataset - -- [Example dataset](kto_en_demo.json) - -KTO datasets require an extra `kto_tag` column containing the boolean human feedback.
- -```json -[ - { - "conversations": [ - { - "from": "human", - "value": "human instruction" - }, - { - "from": "gpt", - "value": "model response" - } - ], - "kto_tag": "human feedback [true/false] (required)" - } -] -``` - -For the above dataset, the *dataset description* in `dataset_info.json` should be: - -```json -"dataset_name": { - "file_name": "data.json", - "formatting": "sharegpt", - "columns": { - "messages": "conversations", - "kto_tag": "kto_tag" - } -} -``` - -### Multimodal Image Dataset - -- [Example dataset](mllm_demo.json) - -Multimodal image datasets require an `images` column containing the paths to the input images. - -The number of images should be identical to the `<image>` tokens in the conversations. - -```json -[ - { - "conversations": [ - { - "from": "human", - "value": "human instruction" - }, - { - "from": "gpt", - "value": "model response" - } - ], - "images": [ - "image path (required)" - ] - } -] -``` - -For the above dataset, the *dataset description* in `dataset_info.json` should be: - -```json -"dataset_name": { - "file_name": "data.json", - "formatting": "sharegpt", - "columns": { - "messages": "conversations", - "images": "images" - } -} -``` - -### Multimodal Video Dataset - -- [Example dataset](mllm_video_demo.json) - -Multimodal video datasets require a `videos` column containing the paths to the input videos. - -The number of videos should be identical to the `