{"id":4012,"date":"2024-06-03T17:08:25","date_gmt":"2024-06-03T09:08:25","guid":{"rendered":"https:\/\/www.aqwu.net\/wp\/?p=4012"},"modified":"2024-06-03T17:42:08","modified_gmt":"2024-06-03T09:42:08","slug":"%e5%a6%82%e4%bd%95%e7%94%a8-hugging-face-%e5%be%ae%e8%b0%83-llm","status":"publish","type":"post","link":"https:\/\/www.aqwu.net\/wp\/?p=4012","title":{"rendered":"\u5982\u4f55\u7528 Hugging Face \u5fae\u8c03 LLM"},"content":{"rendered":"\n<p>\u5927\u578b\u8bed\u8a00\u6a21\u578b\uff08LLM\uff09\u5728\u53bb\u5e74\u53d6\u5f97\u4e86\u5f88\u5927\u8fdb\u5c55\u3002\u6211\u4eec\u4ece\u6ca1\u6709 ChatGPT \u7684\u7ade\u4e89\u5bf9\u624b\u53d8\u6210\u4e86\u4e00\u6574\u5957 LLM \u52a8\u7269\u56ed\uff0c\u5305\u62ec Meta AI \u7684&nbsp;<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/huggingface.co\/meta-llama\/Llama-2-70b-chat-hf\">Llama 2<\/a>\u3001Mistrals&nbsp;<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/huggingface.co\/mistralai\/Mistral-7B-Instruct-v0.2\">Mistral<\/a>&nbsp;\u548c&nbsp;<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/huggingface.co\/mistralai\/Mixtral-8x7B-Instruct-v0.1\">Mixtral<\/a>&nbsp;\u6a21\u578b\u3001TII&nbsp;<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/huggingface.co\/tiiuae\/falcon-40b\">Falcon<\/a>&nbsp;\u7b49\u7b49\u3002 \u8fd9\u4e9b LLM 
\u53ef\u7528\u4e8e\u5404\u79cd\u4efb\u52a1\uff0c\u5305\u62ec\u804a\u5929\u673a\u5668\u4eba\u3001\u95ee\u7b54\u3001\u603b\u7ed3\uff0c\u65e0\u9700\u4efb\u4f55\u989d\u5916\u57f9\u8bad\u3002\u4f46\u662f\uff0c\u5982\u679c\u8981\u4e3a\u5e94\u7528\u7a0b\u5e8f\u81ea\u5b9a\u4e49\u6a21\u578b\u3002\u60a8\u53ef\u80fd\u9700\u8981\u5bf9\u6570\u636e\u8fdb\u884c\u5fae\u8c03\uff0c\u4ee5\u83b7\u5f97\u66f4\u9ad8\u8d28\u91cf\u7684\u7ed3\u679c\uff0c\u800c\u4e0d\u662f\u901a\u8fc7\u8bad\u7ec3\u66f4\u5c0f\u7684\u6a21\u578b\u6765\u63d0\u793a\u6216\u8282\u7701\u6210\u672c\uff0c\u66f4\u9ad8\u6548\u7684\u6a21\u578b\u3002<\/p>\n\n\n\n<p>\u8fd9\u7bc7\u535a\u6587\u5c06\u5f15\u5bfc\u60a8\u4e86\u89e3\u5982\u4f55\u5728 2024 \u5e74\u4f7f\u7528 Hugging Face&nbsp;<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/huggingface.co\/docs\/trl\/index\">TRL<\/a>\u3001<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/huggingface.co\/docs\/transformers\/index\">Transformer<\/a>&nbsp;\u548c<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/huggingface.co\/docs\/datasets\/index\">\u6570\u636e\u96c6<\/a>\u5fae\u8c03\u5f00\u653e LLM\u3002\u5728\u535a\u5ba2\u4e2d\uff0c\u6211\u4eec\u5c06\uff1a<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#1-define-our-use-case\">\u5b9a\u4e49\u6211\u4eec\u7684\u7528\u4f8b<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#2-setup-development-environment\">\u8bbe\u7f6e\u5f00\u53d1\u73af\u5883<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#3-create-and-prepare-the-dataset\">\u521b\u5efa\u548c\u51c6\u5907\u6570\u636e\u96c6<\/a><\/li>\n\n\n\n<li><a 
href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#4-fine-tune-llm-using-trl-and-the-sfttrainer\">\u4f7f\u7528&nbsp;<code>trl<\/code>&nbsp;\u548c&nbsp;<code>SFTTrainer<\/code>&nbsp;\u5fae\u8c03 LLM<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#5-test-and-evaluate-the-llm\">\u6d4b\u8bd5\u548c\u8bc4\u4f30\u6cd5\u5b66\u7855\u58eb<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#6-deploy-the-llm-for-production\">\u4e3a\u751f\u4ea7\u90e8\u7f72 LLM<\/a><\/li>\n<\/ol>\n\n\n\n<p><em>\u6ce8\u610f\uff1a\u672c\u535a\u5ba2\u65e8\u5728\u5728\u6d88\u8d39\u7ea7 GPU \uff0824GB\uff09 \u4e0a\u8fd0\u884c\uff0c\u4f8b\u5982 NVIDIA A10G \u6216 RTX 4090\/3090\uff0c\u4f46\u53ef\u4ee5\u8f7b\u677e\u9002\u5e94\u5728\u66f4\u5927\u7684 GPU \u4e0a\u8fd0\u884c\u3002<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"1-define-our-use-case\"><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#1-define-our-use-case\"><\/a>1. 
Define our use case<\/h2>\n\n\n\n<p>When fine-tuning LLMs, it is important to understand your use case and the task you want to solve. This will help you choose the right model, or help you create a dataset to fine-tune your model. If you haven't defined your use case yet, you might want to go back to the drawing board. Not every use case requires fine-tuning: it is always recommended to evaluate and try out already fine-tuned models or API-based models before fine-tuning your own.<\/p>\n\n\n\n<p>As an example, we are going to use the following use case:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>We want to fine-tune a model that can generate SQL queries based on natural language instructions, which can then be integrated into our BI tool. The goal is to reduce the time it takes to create a SQL query and make it easier for non-technical users to create SQL queries.<\/p>\n<\/blockquote>\n\n\n\n<p>Text-to-SQL can be a good use case for fine-tuning LLMs, as it is a complex task that requires a lot of (internal) knowledge about the data and the SQL language.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"2-setup-development-environment\"><a 
href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#2-setup-development-environment\"><\/a>2. Set up the development environment<\/h2>\n\n\n\n<p>Our first step is to install the Hugging Face libraries and PyTorch, including trl, transformers, and datasets. If you haven't heard of trl yet, don't worry. It is a new library built on top of transformers and datasets that makes it easier to fine-tune, apply RLHF to, and align open LLMs.<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \"># Install Pytorch &amp; other libraries\n!pip install \"torch==2.1.2\" tensorboard\n \n# Install Hugging Face libraries\n!pip install  --upgrade \\\n  \"transformers==4.36.2\" \\\n  \"datasets==2.16.1\" \\\n  \"accelerate==0.26.1\" \\\n  \"evaluate==0.4.1\" \\\n  \"bitsandbytes==0.42.0\" \\\n  # \"trl==0.7.10\" # \\\n  # \"peft==0.7.1\" \\\n \n# install peft &amp; trl from github\n!pip install git+https:\/\/github.com\/huggingface\/trl@a3c5b7178ac4f65569975efadc97db2f3749c65e --upgrade\n!pip install git+https:\/\/github.com\/huggingface\/peft@4a1559582281fc3c9283892caea8ccef1d6f5a4f --upgrade<\/pre><\/div>\n\n\n\n<p>If you are using a GPU with the Ampere architecture (e.g. NVIDIA A10G or RTX 4090\/3090) or newer, you can use Flash Attention. Flash Attention 
is a method that reorders the attention computation and leverages classical techniques (tiling, recomputation) to significantly speed it up and to reduce memory usage from quadratic to linear in the sequence length. The TL;DR: it accelerates training by up to 3x. To learn more, visit <a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/github.com\/Dao-AILab\/flash-attention\/tree\/main\">FlashAttention<\/a>.<\/p>\n\n\n\n<p><em>Note: If your machine has less than 96GB of RAM and many CPU cores, reduce the number of <code>MAX_JOBS<\/code>. On the&nbsp;<code>g5.2xlarge<\/code>&nbsp;we used&nbsp;<code>4<\/code>.<\/em><\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">import torch; assert torch.cuda.get_device_capability()[0] &gt;= 8, 'Hardware not supported for Flash Attention'\n# install flash-attn\n!pip install ninja packaging\n!MAX_JOBS=4 pip install flash-attn --no-build-isolation<\/pre><\/div>\n\n\n\n<p><em>Installing flash-attn can take quite a while (10-45 minutes).<\/em><\/p>\n\n\n\n<p>We will use the&nbsp;<a href=\"https:\/\/huggingface.co\/models\" target=\"_blank\" rel=\"noreferrer noopener\">Hugging Face Hub<\/a>&nbsp;as a remote model versioning service. This means we will automatically push our model, logs, and information to the Hub during training. For this, you must register on&nbsp;<a 
href=\"https:\/\/huggingface.co\/join\" target=\"_blank\" rel=\"noreferrer noopener\">Hugging Face<\/a>&nbsp;\u4e0a\u6ce8\u518c\u3002\u5728\u60a8\u62e5\u6709\u5e10\u6237\u540e\uff0c\u6211\u4eec\u5c06\u4f7f\u7528<code>huggingface_hub<\/code>\u8f6f\u4ef6\u5305\u4e2d\u767b\u5f55\u6211\u4eec\u7684\u5e10\u6237\u5e76\u5c06\u6211\u4eec\u7684\u4ee4\u724c\uff08\u8bbf\u95ee\u5bc6\u94a5\uff09\u5b58\u50a8\u5728\u78c1\u76d8\u4e0a\u3002<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">from huggingface_hub import login\n \nlogin(\n  token=\"\", # ADD YOUR TOKEN HERE\n  add_to_git_credential=True\n)\n <\/pre><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"3-create-and-prepare-the-dataset\"><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#3-create-and-prepare-the-dataset\"><\/a>3. \u521b\u5efa\u5e76\u51c6\u5907\u6570\u636e\u96c6<\/h2>\n\n\n\n<p>\u4e00\u65e6\u786e\u5b9a\u5fae\u8c03\u662f\u6b63\u786e\u7684\u89e3\u51b3\u65b9\u6848\uff0c\u6211\u4eec\u5c31\u9700\u8981\u521b\u5efa\u4e00\u4e2a\u6570\u636e\u96c6\u6765\u5fae\u8c03\u6211\u4eec\u7684\u6a21\u578b\u3002\u6570\u636e\u96c6\u5e94\u8be5\u662f\u8981\u89e3\u51b3\u7684\u4efb\u52a1\u7684\u4e00\u7ec4\u591a\u6837\u5316\u7684\u6f14\u793a\u3002\u6709\u51e0\u79cd\u65b9\u6cd5\u53ef\u4ee5\u521b\u5efa\u6b64\u7c7b\u6570\u636e\u96c6\uff0c\u5305\u62ec\uff1a<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>\u4f7f\u7528\u73b0\u6709\u7684\u5f00\u6e90\u6570\u636e\u96c6\uff0c\u4f8b\u5982&nbsp;<a href=\"https:\/\/huggingface.co\/datasets\/spider\" target=\"_blank\" rel=\"noreferrer noopener\">Spider<\/a><\/li>\n\n\n\n<li>\u4f7f\u7528 LLM \u521b\u5efa\u7efc\u5408\u6570\u636e\u96c6\uff0c\u4f8b\u5982&nbsp;<a href=\"https:\/\/huggingface.co\/datasets\/tatsu-lab\/alpaca\" target=\"_blank\" rel=\"noreferrer noopener\">Alpaca<\/a><\/li>\n\n\n\n<li>\u4f7f\u7528\u4eba\u7c7b\u521b\u5efa\u6570\u636e\u96c6\uff0c\u4f8b\u5982&nbsp;<a 
href=\"https:\/\/huggingface.co\/datasets\/databricks\/databricks-dolly-15k\" target=\"_blank\" rel=\"noreferrer noopener\">Dolly<\/a>\u3002<\/li>\n\n\n\n<li>\u4f7f\u7528\u4e0a\u8ff0\u65b9\u6cd5\u7684\u7ec4\u5408\uff0c\u4f8b\u5982<a href=\"https:\/\/huggingface.co\/datasets\/Open-Orca\/OpenOrca\" target=\"_blank\" rel=\"noreferrer noopener\">\uff0c<a href=\"https:\/\/huggingface.co\/datasets\/Open-Orca\/OpenOrca\" target=\"_blank\" rel=\"noreferrer noopener\">Orca<\/a><\/a><\/li>\n<\/ul>\n\n\n\n<p>\u6bcf\u79cd\u65b9\u6cd5\u90fd\u6709\u81ea\u5df1\u7684\u4f18\u70b9\u548c\u7f3a\u70b9\uff0c\u53d6\u51b3\u4e8e\u9884\u7b97\u3001\u65f6\u95f4\u548c\u8d28\u91cf\u8981\u6c42\u3002\u4f8b\u5982\uff0c\u4f7f\u7528\u73b0\u6709\u6570\u636e\u96c6\u662f\u6700\u7b80\u5355\u7684\uff0c\u4f46\u53ef\u80fd\u65e0\u6cd5\u6839\u636e\u60a8\u7684\u7279\u5b9a\u7528\u4f8b\u8fdb\u884c\u5b9a\u5236\uff0c\u800c\u4f7f\u7528\u4eba\u5de5\u53ef\u80fd\u662f\u6700\u51c6\u786e\u7684\uff0c\u4f46\u53ef\u80fd\u65e2\u8017\u65f6\u53c8\u6602\u8d35\u3002\u4e5f\u53ef\u4ee5\u7ed3\u5408\u51e0\u79cd\u65b9\u6cd5\u6765\u521b\u5efa\u6307\u4ee4\u6570\u636e\u96c6\uff0c\u5982&nbsp;<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/arxiv.org\/abs\/2306.02707\">Orca\uff1a Progressive Learning from Complex Explanation Traces of GPT-4<\/a>&nbsp;\u4e2d\u6240\u793a\u3002<\/p>\n\n\n\n<p>\u5728\u6211\u4eec\u7684\u793a\u4f8b\u4e2d\uff0c\u6211\u4eec\u5c06\u4f7f\u7528\u4e00\u4e2a\u540d\u4e3a&nbsp;<a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/huggingface.co\/datasets\/b-mc2\/sql-create-context\">sql-create-context<\/a>&nbsp;\u7684\u73b0\u6709\u6570\u636e\u96c6\uff0c\u5176\u4e2d\u5305\u542b\u81ea\u7136\u8bed\u8a00\u6307\u4ee4\u3001\u67b6\u6784\u5b9a\u4e49\u548c\u76f8\u5e94 SQL 
queries.<\/p>\n\n\n\n<p>With the latest release of <code>trl<\/code>, we now support popular instruction and conversation dataset formats. This means we only need to convert our dataset into one of the supported formats, and <code>trl<\/code> will take care of the rest. Those formats include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>conversational format<\/li>\n<\/ul>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}<\/pre><\/div>\n\n\n\n<ul class=\"wp-block-list\">\n<li>instruction format<\/li>\n<\/ul>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">{\"prompt\": \"&lt;prompt text&gt;\", \"completion\": \"&lt;ideal generated text&gt;\"}\n{\"prompt\": \"&lt;prompt text&gt;\", \"completion\": \"&lt;ideal generated text&gt;\"}\n{\"prompt\": \"&lt;prompt text&gt;\", \"completion\": \"&lt;ideal generated text&gt;\"}<\/pre><\/div>\n\n\n\n<p>In our example, we are going to load our open-source dataset using the \ud83e\udd17 Datasets 
library and then convert it into a conversational format, where we include the schema definition in the system message for our assistant. We then save the dataset as a jsonl file, which we can use to fine-tune our model. We randomly downsample the dataset to only 10,000 samples.<\/p>\n\n\n\n<p><em>Note: This step can differ for your use case. For example, if you already have a dataset, e.g. from working with OpenAI, you can skip this step and go directly to the fine-tuning step.<\/em><\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">from datasets import load_dataset\n \n# Convert dataset to OAI messages\nsystem_message = \"\"\"You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.\nSCHEMA:\n{schema}\"\"\"\n \ndef create_conversation(sample):\n  return {\n    \"messages\": [\n      {\"role\": \"system\", \"content\": system_message.format(schema=sample[\"context\"])},\n      {\"role\": \"user\", \"content\": sample[\"question\"]},\n      {\"role\": \"assistant\", \"content\": sample[\"answer\"]}\n    ]\n  }\n \n# Load dataset from the hub\ndataset = load_dataset(\"b-mc2\/sql-create-context\", split=\"train\")\ndataset = dataset.shuffle().select(range(12500))\n \n# Convert dataset to OAI messages\ndataset = dataset.map(create_conversation, remove_columns=dataset.features, batched=False)\n# split dataset into 10,000 training samples and 2,500 test samples\ndataset = dataset.train_test_split(test_size=2500\/12500)\n \nprint(dataset[\"train\"][345][\"messages\"])\n \n# save datasets to disk\ndataset[\"train\"].to_json(\"train_dataset.json\", orient=\"records\")\ndataset[\"test\"].to_json(\"test_dataset.json\", orient=\"records\")<\/pre><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"4-fine-tune-llm-using-trl-and-the-sfttrainer\">4. 
Fine-tune the LLM using <code>trl<\/code> and the <code>SFTTrainer<\/code><\/h2>\n\n\n\n<p>We are now ready to fine-tune our model. We will use the <a href=\"https:\/\/huggingface.co\/docs\/trl\/sft_trainer\" target=\"_blank\" rel=\"noreferrer noopener\">SFTTrainer<\/a>\u00a0from <code>trl<\/code> to fine-tune our model. The <code>SFTTrainer<\/code> makes supervised fine-tuning of open LLMs straightforward. The <code>SFTTrainer<\/code> is a subclass of the <code>Trainer<\/code> from <code>transformers<\/code> and supports all the same features, including logging, evaluation, and checkpointing, but adds additional quality-of-life features, including:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Dataset formatting, including conversational and instruction formats<\/li>\n\n\n\n<li>Training on completions only, ignoring prompts<\/li>\n\n\n\n<li>Packing datasets for more efficient training<\/li>\n\n\n\n<li>PEFT (parameter-efficient fine-tuning) support, including Q-LoRA<\/li>\n\n\n\n<li>Preparing the model and tokenizer for conversational fine-tuning (e.g. adding special tokens)<\/li>\n<\/ul>\n\n\n\n<p>In our example, we will use the dataset formatting, packing, and PEFT features. As the PEFT method, we will use&nbsp;<a target=\"_blank\" rel=\"noreferrer noopener\" 
href=\"https:\/\/arxiv.org\/abs\/2305.14314\">QLoRA<\/a>&nbsp;to reduce the memory footprint of the large language model during fine-tuning, without sacrificing performance, by using quantization. If you want to learn more about QLoRA and how it works, check out the <a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/huggingface.co\/blog\/4bit-transformers-bitsandbytes\">Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA<\/a> blog post.<\/p>\n\n\n\n<p>Now, let's get started! \ud83d\ude80 First, let's load our json dataset from disk.<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">from datasets import load_dataset\n \n# Load jsonl data from disk\ndataset = load_dataset(\"json\", data_files=\"train_dataset.json\", split=\"train\")<\/pre><\/div>\n\n\n\n<p>Next, we will load our LLM. For our use case, we will use CodeLlama 7B. CodeLlama is a Llama model trained for general code synthesis and understanding. But we can easily swap out the model for another model, e.g. the\u00a0<a href=\"https:\/\/huggingface.co\/mistralai\/Mistral-7B-Instruct-v0.2\" target=\"_blank\" rel=\"noreferrer noopener\">Mistral<\/a>\u00a0or\u00a0<a href=\"https:\/\/huggingface.co\/mistralai\/Mixtral-8x7B-Instruct-v0.1\" target=\"_blank\" rel=\"noreferrer noopener\">Mixtral<\/a>\u00a0models, TII\u00a0<a 
href=\"https:\/\/huggingface.co\/tiiuae\/falcon-40b\" target=\"_blank\" rel=\"noreferrer noopener\">Falcon<\/a>\u00a0or any other LLM, by changing the <code>model_id<\/code> variable. We will use bitsandbytes to quantize our model to 4-bit.<\/p>\n\n\n\n<p><em>Note: Be aware that the bigger the model, the more memory it will require. In our example, we will use the 7B version, which can be tuned on a 24GB GPU. If you have a smaller GPU, consider using a smaller model.<\/em><\/p>\n\n\n\n<p>Correctly preparing the model and tokenizer for training chat\/conversational models is crucial. We need to add new special tokens to the tokenizer and model to teach them the different roles in a conversation. In <code>trl<\/code> we have a convenient <a href=\"https:\/\/huggingface.co\/docs\/trl\/main\/en\/sft_trainer#add-special-tokens-for-chat-format\" target=\"_blank\" rel=\"noreferrer noopener\">setup_chat_format<\/a> method, which:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Adds special tokens to the tokenizer, e.g. <code>&lt;|im_start|><\/code> and <code>&lt;|im_end|><\/code>, to indicate the start and end of a conversation turn.<\/li>\n\n\n\n<li>Resizes the model's embedding layer to accommodate the new tokens.<\/li>\n\n\n\n<li>Sets the <code>chat_template<\/code> of the tokenizer, which is used to format the input data into a chat-like format. The default is <code>chatml<\/code> from OpenAI.<\/li>\n<\/ul>\n\n\n\n<div 
class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">import torch\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\nfrom trl import setup_chat_format\n \n# Hugging Face model id\nmodel_id = \"codellama\/CodeLlama-7b-hf\" # or `mistralai\/Mistral-7B-v0.1`\n \n# BitsAndBytesConfig int-4 config\nbnb_config = BitsAndBytesConfig(\n    load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type=\"nf4\", bnb_4bit_compute_dtype=torch.bfloat16\n)\n \n# Load model and tokenizer\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_id,\n    device_map=\"auto\",\n    attn_implementation=\"flash_attention_2\",\n    torch_dtype=torch.bfloat16,\n    quantization_config=bnb_config\n)\ntokenizer = AutoTokenizer.from_pretrained(model_id)\ntokenizer.padding_side = 'right' # to prevent warnings\n \n# # set chat template to OAI chatML, remove if you start from a fine-tuned model\nmodel, tokenizer = setup_chat_format(model, tokenizer)<\/pre><\/div>\n\n\n\n<p><code>SFTTrainer<\/code>\u652f\u6301\u4e0e<code>peft<\/code>\u7684\u672c\u673a\u96c6\u6210\uff0c\u8fd9\u4f7f\u5f97\u4f7f\u7528 QLoRA \u7b49 LLM \u7684\u9ad8\u6548\u8c03\u6574\u53d8\u5f97\u975e\u5e38\u5bb9\u6613\u3002\u6211\u4eec<code>LoraConfig<\/code>\u53ea\u9700\u8981\u521b\u5efa\u6211\u4eec\u7684\u5e76\u5c06\u5176\u63d0\u4f9b\u7ed9\u57f9\u8bad\u5e08\u3002\u6211\u4eec<code>LoraConfig<\/code>\u7684\u53c2\u6570\u662f\u6839\u636e\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2305.14314.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">qlora \u8bba\u6587<\/a>\u548c sebastian\u00a0<a href=\"https:\/\/magazine.sebastianraschka.com\/p\/practical-tips-for-finetuning-llms\" target=\"_blank\" rel=\"noreferrer noopener\">\u7684\u535a\u5ba2\u6587\u7ae0<\/a>\u5b9a\u4e49\u7684\u3002<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">from peft import LoraConfig\n \n# LoRA config based on QLoRA 
paper &amp; Sebastian Raschka experiment\npeft_config = LoraConfig(\n        lora_alpha=128,\n        lora_dropout=0.05,\n        r=256,\n        bias=\"none\",\n        target_modules=\"all-linear\",\n        task_type=\"CAUSAL_LM\",\n)<\/pre><\/div>\n\n\n\n<p>\u5728\u5f00\u59cb\u8bad\u7ec3\u4e4b\u524d\uff0c\u6211\u4eec\u9700\u8981\u5b9a\u4e49\u8981\u4f7f\u7528\u7684\u8d85\u53c2\u6570\uff08<code>TrainingArguments<\/code>\uff09\u3002<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">from transformers import TrainingArguments\n \nargs = TrainingArguments(\n    output_dir=\"code-llama-7b-text-to-sql\", # directory to save and repository id\n    num_train_epochs=3,                     # number of training epochs\n    per_device_train_batch_size=3,          # batch size per device during training\n    gradient_accumulation_steps=2,          # number of steps before performing a backward\/update pass\n    gradient_checkpointing=True,            # use gradient checkpointing to save memory\n    optim=\"adamw_torch_fused\",              # use fused adamw optimizer\n    logging_steps=10,                       # log every 10 steps\n    save_strategy=\"epoch\",                  # save checkpoint every epoch\n    learning_rate=2e-4,                     # learning rate, based on QLoRA paper\n    bf16=True,                              # use bfloat16 precision\n    tf32=True,                              # use tf32 precision\n    max_grad_norm=0.3,                      # max gradient norm based on QLoRA paper\n    warmup_ratio=0.03,                      # warmup ratio based on QLoRA paper\n    lr_scheduler_type=\"constant\",           # use constant learning rate scheduler\n    push_to_hub=True,                       # push model to hub\n    report_to=\"tensorboard\",                # report metrics to tensorboard\n)<\/pre><\/div>\n\n\n\n<p>\u73b0\u5728\uff0c\u6211\u4eec\u62e5\u6709\u4e86\u521b\u5efa 
<code>SFTTrainer<\/code> and start training our model.<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">from trl import SFTTrainer\n \nmax_seq_length = 3072 # max sequence length for model and packing of the dataset\n \ntrainer = SFTTrainer(\n    model=model,\n    args=args,\n    train_dataset=dataset,\n    peft_config=peft_config,\n    max_seq_length=max_seq_length,\n    tokenizer=tokenizer,\n    packing=True,\n    dataset_kwargs={\n        \"add_special_tokens\": False,  # We template with special tokens\n        \"append_concat_token\": False, # No need to add additional separator token\n    }\n)<\/pre><\/div>\n\n\n\n<p>We can start training our model by calling the <code>train()<\/code> method on our <code>Trainer<\/code> instance. This will start the training loop and train our model for 3 epochs. Since we are using a PEFT method, we will only save the adapted model weights and not the full model.<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \"># start training, the model will be automatically saved to the hub and the output directory\ntrainer.train()\n \n# save model\ntrainer.save_model()<\/pre><\/div>\n\n\n\n<p>The training with Flash Attention for 3 epochs on a dataset of 10k samples on a <code>g5.2xlarge<\/code> instance, which costs <code>1.212$\/h<\/code>, brings our total cost to only <code>1.8$<\/code>.<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \"># free the memory 
again\ndel model\ndel trainer\ntorch.cuda.empty_cache()<\/pre><\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"optional-merge-lora-adapter-in-to-the-original-model\"><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#optional-merge-lora-adapter-in-to-the-original-model\"><\/a><em>Optional: Merge the LoRA adapter into the original model<\/em><\/h3>\n\n\n\n<p>When using QLoRA, we only train adapters and not the full model. This means that when saving the model during training, we only save the adapter weights and not the full model. If you want to save the full model, which makes it easier to use with serving stacks such as Text Generation Inference, you can merge the adapter weights into the model weights using the <code>merge_and_unload<\/code> method and then save the model with the <code>save_pretrained<\/code> method. This saves a default model that can be used for inference.<\/p>\n\n\n\n<p><em>Note: You might need &gt; 30GB of CPU memory.<\/em><\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \"> \n#### COMMENT IN TO MERGE PEFT AND BASE MODEL ####\n# from peft import AutoPeftModelForCausalLM\n \n# # Load PEFT model on CPU\n# model = AutoPeftModelForCausalLM.from_pretrained(\n#     args.output_dir,\n#     torch_dtype=torch.float16,\n#     low_cpu_mem_usage=True,\n# )\n# # Merge LoRA and base model and save\n# merged_model = model.merge_and_unload()\n# merged_model.save_pretrained(args.output_dir,safe_serialization=True, max_shard_size=\"2GB\")<\/pre><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"5-test-and-evaluate-the-llm\"><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#5-test-and-evaluate-the-llm\"><\/a>5. Test and evaluate the LLM<\/h2>\n\n\n\n<p>After the training is done, we want to evaluate and test our model. We will load different samples from the original dataset and evaluate the model on those samples, using a simple loop and accuracy as our metric.<\/p>\n\n\n\n<p><em>Note: Evaluating generative AI models is not trivial, since one input can have multiple correct outputs. If you want to learn more about evaluating generative models, check out the <a href=\"https:\/\/www.philschmid.de\/evaluate-llm\" target=\"_blank\" rel=\"noreferrer noopener\">Evaluate LLMs and RAG: a practical example using Langchain and Hugging Face<\/a>&nbsp;blog post.<\/em><\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">import torch\nfrom peft import AutoPeftModelForCausalLM\nfrom transformers import AutoTokenizer, pipeline\n \npeft_model_id = \".\/code-llama-7b-text-to-sql\"\n# peft_model_id = args.output_dir\n \n# Load Model with PEFT adapter\nmodel = AutoPeftModelForCausalLM.from_pretrained(\n  peft_model_id,\n  device_map=\"auto\",\n  torch_dtype=torch.float16\n)\ntokenizer = AutoTokenizer.from_pretrained(peft_model_id)\n# load into pipeline\npipe = pipeline(\"text-generation\", model=model, 
tokenizer=tokenizer)<\/pre><\/div>\n\n\n\n<p>Let's load our test dataset and try to generate an instruction.<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">from datasets import load_dataset\nfrom random import randint\n \n \n# Load our test dataset\neval_dataset = load_dataset(\"json\", data_files=\"test_dataset.json\", split=\"train\")\n# randint is inclusive on both ends, so subtract 1 to stay in range\nrand_idx = randint(0, len(eval_dataset) - 1)\n \n# Test on sample\nprompt = pipe.tokenizer.apply_chat_template(eval_dataset[rand_idx][\"messages\"][:2], tokenize=False, add_generation_prompt=True)\noutputs = pipe(prompt, max_new_tokens=256, do_sample=False, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)\n \nprint(f\"Query:\\n{eval_dataset[rand_idx]['messages'][1]['content']}\")\nprint(f\"Original Answer:\\n{eval_dataset[rand_idx]['messages'][2]['content']}\")\nprint(f\"Generated Answer:\\n{outputs[0]['generated_text'][len(prompt):].strip()}\")<\/pre><\/div>\n\n\n\n<p>Nice! Our model is able to generate SQL queries from natural language instructions. Let's evaluate it on the full 2,500 samples of our test dataset. <em>Note: as mentioned above, evaluating generative models is not trivial. In our example we use the accuracy of the generated SQL against the ground-truth SQL query as our metric. An alternative would be to automatically execute the generated SQL query and compare the result with the ground truth. That would be a more accurate metric but requires more work to set up.<\/em><\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">from tqdm import tqdm\n \n \ndef evaluate(sample):\n    prompt = pipe.tokenizer.apply_chat_template(sample[\"messages\"][:2], tokenize=False, add_generation_prompt=True)\n    outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)\n    predicted_answer = outputs[0]['generated_text'][len(prompt):].strip()\n    if predicted_answer == sample[\"messages\"][2][\"content\"]:\n        return 1\n    else:\n        return 0\n \nsuccess_rate = []\nnumber_of_eval_samples = 1000\n# iterate over eval dataset and predict\nfor s in tqdm(eval_dataset.shuffle().select(range(number_of_eval_samples))):\n    success_rate.append(evaluate(s))\n \n# compute accuracy\naccuracy = sum(success_rate)\/len(success_rate)\n \nprint(f\"Accuracy: {accuracy*100:.2f}%\")\n <\/pre><\/div>\n\n\n\n<p>We evaluated our model on 1,000 samples from the evaluation dataset and got an accuracy of <code>79.50%<\/code>, which took ~25 minutes.<\/p>\n\n\n\n<p>That's good, but you should take this metric with a grain of salt. It would be better if we could evaluate the model by running the generated queries against a real database and comparing the results, since the same instruction can have several different \u201ccorrect\u201d SQL queries. There are also several ways to improve performance, e.g. few-shot learning, or using RAG and self-healing to generate the SQL query.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"6-deploy-the-llm-for-production\"><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#6-deploy-the-llm-for-production\"><\/a>6. Deploy the LLM for production<\/h2>\n\n\n\n<p>We can now deploy our model to production. To deploy open LLMs into production we recommend using <a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/github.com\/huggingface\/text-generation-inference\">Text Generation Inference (TGI)<\/a>. TGI is a purpose-built solution for deploying and serving large language models (LLMs). It enables high-performance text generation for the most popular open LLMs, including Llama, Mistral, Mixtral, StarCoder, T5 and more, using tensor parallelism and continuous batching. Text Generation Inference is used by companies such as IBM, Grammarly, Uber, and Deutsche Telekom. There are several ways to deploy your model, including:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/huggingface.co\/blog\/inference-endpoints-llm\" target=\"_blank\" rel=\"noreferrer noopener\">Deploy LLMs with Hugging Face Inference Endpoints<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/huggingface.co\/blog\/sagemaker-huggingface-llm\" target=\"_blank\" rel=\"noreferrer noopener\">Hugging Face LLM Inference Container for Amazon SageMaker<\/a><\/li>\n\n\n\n<li>Doing it yourself<\/li>\n<\/ul>\n\n\n\n<p>If you have Docker installed, you can start the inference server with the following command.<\/p>\n\n\n\n<p><em>Note: make sure you have enough GPU memory to run the container. Restart the kernel to free all GPU memory allocated in the notebook.<\/em><\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">%%bash\n# model=$PWD\/{args.output_dir} # path to model\nmodel=$(pwd)\/code-llama-7b-text-to-sql # path to model\nnum_shard=1             # number of shards\nmax_input_length=1024   # max input length\nmax_total_tokens=2048   # max total tokens\n \ndocker run -d --name tgi --gpus all -ti -p 8080:80 \\\n  -e MODEL_ID=\/workspace \\\n  -e NUM_SHARD=$num_shard \\\n  -e MAX_INPUT_LENGTH=$max_input_length \\\n  -e MAX_TOTAL_TOKENS=$max_total_tokens \\\n  -v $model:\/workspace \\\n  ghcr.io\/huggingface\/text-generation-inference:latest<\/pre><\/div>\n\n\n\n<p>Once the container is running, you can send requests.<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">import requests as r\nfrom transformers import 
AutoTokenizer\nfrom datasets import load_dataset\nfrom random import randint\n \n# Load our test dataset and Tokenizer again\ntokenizer = AutoTokenizer.from_pretrained(\"code-llama-7b-text-to-sql\")\neval_dataset = load_dataset(\"json\", data_files=\"test_dataset.json\", split=\"train\")\n# randint is inclusive on both ends, so subtract 1 to stay in range\nrand_idx = randint(0, len(eval_dataset) - 1)\n \n# generate the same prompt as for the first local test\nprompt = tokenizer.apply_chat_template(eval_dataset[rand_idx][\"messages\"][:2], tokenize=False, add_generation_prompt=True)\nrequest = {\"inputs\":prompt,\"parameters\":{\"temperature\":0.2, \"top_p\": 0.95, \"max_new_tokens\": 256}}\n \n# send request to inference server\nresp = r.post(\"http:\/\/127.0.0.1:8080\/generate\", json=request)\n \noutput = resp.json()[\"generated_text\"].strip()\ntime_per_token = resp.headers.get(\"x-time-per-token\")\ntime_prompt_tokens = resp.headers.get(\"x-prompt-tokens\")\n \n# Print results\nprint(f\"Query:\\n{eval_dataset[rand_idx]['messages'][1]['content']}\")\nprint(f\"Original Answer:\\n{eval_dataset[rand_idx]['messages'][2]['content']}\")\nprint(f\"Generated Answer:\\n{output}\")\nprint(f\"Latency per token: {time_per_token}ms\")\nprint(f\"Latency prompt encoding: {time_prompt_tokens}ms\")<\/pre><\/div>\n\n\n\n<p>Awesome! Once you are done, don't forget to stop the container.<\/p>\n\n\n\n<div class=\"wp-block-urvanov-syntax-highlighter-code-block\"><pre class=\"lang:python decode:true \">!docker stop tgi<\/pre><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"conclusion\"><a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst#conclusion\"><\/a>Conclusion<\/h2>\n\n\n\n<p>The availability of large language models and tools like TRL makes this an ideal time for companies to invest in open LLM technology. Fine-tuning open LLMs for specific tasks can significantly improve efficiency and open up new opportunities for innovation and better services. With growing accessibility and cost-efficiency, there has never been a better time to start using open LLMs.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>Thanks for reading! If you have any questions, feel free to reach out to me on <a href=\"https:\/\/twitter.com\/_philschmid\" target=\"_blank\" rel=\"noreferrer noopener\">Twitter<\/a> or <a href=\"https:\/\/www.linkedin.com\/in\/philipp-schmid-a6a2bb196\/\" target=\"_blank\" rel=\"noreferrer noopener\">LinkedIn<\/a>.<\/p>\n\n\n\n<p>Reference: <a href=\"https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst\">https:\/\/www.philschmid.de\/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst<\/a><\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Large language models (LLMs) have come a long way in the past year. We went from no competitor to ChatGPT to an entire zoo of LLMs 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[444,443,442],"tags":[296,474,404],"class_list":["post-4012","post","type-post","status-publish","format-standard","hentry","category-ai","category-llm","category-llms","tag-ai","tag-fine-tune","tag-llm"],"views":2539,"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=\/wp\/v2\/posts\/4012","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4012"}],"version-history":[{"count":27,"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=\/wp\/v2\/posts\/4012\/revisions"}],"predecessor-version":[{"id":4040,"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=\/wp\/v2\/posts\/4012\/revision
s\/4040"}],"wp:attachment":[{"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4012"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4012"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aqwu.net\/wp\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4012"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}