
Commit 04fc8ea

Merge pull request #2 from ShawnSiao/translation_for_#713

fix path and translate resources.md

2 parents 0d302d2 + 21c5629 commit 04fc8ea

File tree

3 files changed: +41 -5 lines changed

18-fine-tuning/translations/cn/README.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@

 ## Illustrated Guide

-Want a big-picture overview of what this lesson covers before you dive in? Check out this illustrated guide, which maps the learning journey for the lesson: from understanding the core concepts and motivation for fine-tuning, to understanding the process and best practices of a fine-tuning task. This is a fascinating topic to explore further, so don't forget to check out the [Resources page](./RESOURCES.md?WT.mc_id=academic-105485-koreyst) for more links to support your self-study journey!
+Want a big-picture overview of what this lesson covers before you dive in? Check out this illustrated guide, which maps the learning journey for the lesson: from understanding the core concepts and motivation for fine-tuning, to understanding the process and best practices of a fine-tuning task. This is a fascinating topic to explore further, so don't forget to check out the [Resources page](../../RESOURCES.md?WT.mc_id=academic-105485-koreyst) for more links to support your self-study journey!

 ![Illustrated guide to fine-tuning language models](../../img/18-fine-tuning-sketchnote.png?WT.mc_id=academic-105485-koreyst)

Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
+# Self-Study Resources
+
+This lesson was built on core resources from OpenAI and Azure OpenAI for its terminology and tutorials. Below is a non-exhaustive list of resources for your own self-guided exploration.
+
+## 1. Core Resources
+
+| Title/Link | Description |
+| :--- | :--- |
+| [Fine-tuning with OpenAI Models](https://platform.openai.com/docs/guides/fine-tuning?WT.mc_id=academic-105485-koreyst) | Fine-tuning improves on few-shot learning by training on more examples than can fit in a prompt, saving costs, improving response quality, and enabling lower-latency requests. **Get an overview of fine-tuning from OpenAI.** |
+| [What is Fine-Tuning with Azure OpenAI?](https://learn.microsoft.com/azure/ai-services/openai/concepts/fine-tuning-considerations#what-is-fine-tuning-with-azure-openai?WT.mc_id=academic-105485-koreyst) | Understand **the concept of fine-tuning**, when it applies (the motivating problem), how to select training data, and how to evaluate quality. |
+| [Customize a model with fine-tuning](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython&pivots=programming-language-studio#continuous-fine-tuning?WT.mc_id=academic-105485-koreyst) | Customize a model with fine-tuning in the Azure OpenAI Service. Learn the concrete workflow for **fine-tuning a model via Azure AI Studio, the Python SDK, or the REST API**. |
+| [Recommendations for LLM fine-tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning-recommend?WT.mc_id=academic-105485-koreyst) | **When should you consider fine-tuning** as the solution, for instance when an LLM underperforms on a specific domain, task, or dataset, or produces inaccurate or misleading output? |
+| [Continuous Fine Tuning](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython&pivots=programming-language-studio#continuous-fine-tuning?WT.mc_id=academic-105485-koreyst) | Continuous fine-tuning is the iterative process of selecting an already fine-tuned model as the base model and **continuing to fine-tune it on new sets of training examples**. |
+| [Fine-tuning and function calling](https://learn.microsoft.com/azure/ai-services/openai/how-to/fine-tuning-functions?WT.mc_id=academic-105485-koreyst) | **Fine-tuning a model with function-calling examples** can improve output accuracy and keep response formats consistent while reducing cost. |
+| [Fine-tuning Models: Azure OpenAI Guidance](https://learn.microsoft.com/azure/ai-services/openai/concepts/models#fine-tuning-models?WT.mc_id=academic-105485-koreyst) | Consult this table for **the models that can be fine-tuned in Azure OpenAI** and their regional availability, along with token limits and training-data expiration. |
+| [To Fine Tune or Not To Fine Tune? That is the Question](https://learn.microsoft.com/shows/ai-show/to-fine-tune-or-not-fine-tune-that-is-the-question?WT.mc_id=academic-105485-koreyst) | This 30-minute **October 2023** episode of the AI Show explores the pros, cons, and practical insights of fine-tuning. |
+| [Getting Started With LLM Fine-Tuning](https://learn.microsoft.com/ai/playbook/technology-guidance/generative-ai/working-with-llms/fine-tuning-recommend?WT.mc_id=academic-105485-koreyst) | This **AI Playbook** resource walks through data requirements, formatting, hyperparameter tuning, and the challenges and limitations to watch for. |
+| **Tutorial**: [Azure OpenAI GPT3.5 Turbo Fine-Tuning](https://learn.microsoft.com/azure/ai-services/openai/tutorials/fine-tune?tabs=python%2Ccommand-line?WT.mc_id=academic-105485-koreyst) | Learn to create a sample fine-tuning dataset, prepare for fine-tuning, create a fine-tuning job, and deploy the fine-tuned model on Azure. |
+| **Tutorial**: [Fine-tune a Llama 2 model in Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/how-to/fine-tune-model-llama?WT.mc_id=academic-105485-koreyst) | Customize large language models through Azure AI Studio's **low-code-friendly UI workflow**. |
+| **Tutorial**: [Fine-tune Hugging Face models for a single GPU on Azure](https://learn.microsoft.com/azure/databricks/machine-learning/train-model/huggingface/fine-tune-model?WT.mc_id=academic-105485-koreyst) | A complete guide to fine-tuning models with the Hugging Face transformers library on a single GPU on Azure Databricks. |
+| **Training**: [Fine-tune a foundation model with Azure Machine Learning](https://learn.microsoft.com/training/modules/finetune-foundation-model-with-azure-machine-learning/?WT.mc_id=academic-105485-koreyst) | The Azure Machine Learning model catalog offers many open-source models you can fine-tune; this module is part of the [AzureML Generative AI learning path](https://learn.microsoft.com/training/paths/work-with-generative-models-azure-machine-learning/?WT.mc_id=academic-105485-koreyst). |
+| **Tutorial**: [Azure OpenAI Fine-Tuning](https://docs.wandb.ai/guides/integrations/azure-openai-fine-tuning?WT.mc_id=academic-105485-koreyst) | Fine-tuning GPT-3.5/GPT-4 models on Microsoft Azure with W&B enables detailed performance tracking and analysis. |
+
+
+## 2. Supplementary Resources
+
+This section lists additional resources that are worth exploring but were not covered in the lesson. They may appear in future lessons or as take-home exercises; for now, use them to build up your own expertise.
+
+| Title/Link | Description |
+| :--- | :--- |
+| **OpenAI Cookbook**: [Data preparation and analysis for chat model fine-tuning](https://cookbook.openai.com/examples/chat_finetuning_data_prep?WT.mc_id=academic-105485-koreyst) | This notebook provides tools for preprocessing and analyzing a chat-model fine-tuning dataset: it checks for format errors, reports basic statistics, and estimates token counts for fine-tuning cost. See: [fine-tuning methods for GPT-3.5 Turbo](https://platform.openai.com/docs/guides/fine-tuning?WT.mc_id=academic-105485-koreyst). |
+| **OpenAI Cookbook**: [Fine-Tuning for Retrieval Augmented Generation (RAG) with Qdrant](https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant?WT.mc_id=academic-105485-koreyst) | This notebook is a complete walkthrough of fine-tuning an OpenAI model for retrieval-augmented generation (RAG), combining the Qdrant vector database with few-shot learning to improve model performance and reduce hallucinations. |
+| **OpenAI Cookbook**: [Fine-tuning GPT with Weights & Biases](https://cookbook.openai.com/examples/third_party/gpt_finetuning_with_wandb?WT.mc_id=academic-105485-koreyst) | Weights & Biases (W&B) is an AI developer platform that spans model training, fine-tuning, and building on foundation models. Read its [OpenAI fine-tuning guide](https://docs.wandb.ai/guides/integrations/openai-fine-tuning/?WT.mc_id=academic-105485-koreyst) first, then try this cookbook. |
+| **Community tutorial**: [Phinetuning 2.0](https://huggingface.co/blog/g-ronimo/phinetuning?WT.mc_id=academic-105485-koreyst) - fine-tuning a small language model | Meet [Phi-2](https://www.microsoft.com/research/blog/phi-2-the-surprising-power-of-small-language-models/?WT.mc_id=academic-105485-koreyst), Microsoft's small language model; this tutorial walks you through building a custom dataset and fine-tuning the model with QLoRA. |
+| **Hugging Face tutorial**: [Fine-tune LLMs in 2024 with Hugging Face](https://www.philschmid.de/fine-tune-llms-in-2024-with-trl?WT.mc_id=academic-105485-koreyst) | This blog post is a step-by-step guide to fine-tuning open LLMs in 2024 with the Hugging Face TRL, Transformers, and datasets libraries, covering the full workflow from defining the use case, setting up the development environment, and preparing the dataset, through fine-tuning, testing, and evaluation, to production deployment. |
+| **Hugging Face**: [AutoTrain Advanced](https://github.com/huggingface/autotrain-advanced?WT.mc_id=academic-105485-koreyst) | Offers fast training and deployment of [state-of-the-art machine learning models](https://twitter.com/abhi1thakur/status/1755167674894557291?WT.mc_id=academic-105485-koreyst), with Colab-friendly tutorials and YouTube video guidance. **Reflects the latest [local-first](https://twitter.com/abhi1thakur/status/1750828141805777057?WT.mc_id=academic-105485-koreyst) updates**; see the [AutoTrain docs](https://huggingface.co/autotrain?WT.mc_id=academic-105485-koreyst). |
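The data-preparation resources above all assume training data in OpenAI's chat fine-tuning format: a JSONL file in which each line holds a `messages` list. As a minimal sketch of the kind of format checks the Cookbook data-prep notebook performs (the helper name and exact problem strings here are illustrative, not taken from the notebook):

```python
import json

def check_chat_example(line: str) -> list[str]:
    """Return a list of format problems found in one JSONL training line."""
    problems = []
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return ["not valid JSON"]
    messages = record.get("messages")
    if not isinstance(messages, list) or not messages:
        return ["missing 'messages' list"]
    for msg in messages:
        # Chat fine-tuning examples use system/user/assistant roles.
        if msg.get("role") not in ("system", "user", "assistant"):
            problems.append(f"unrecognized role: {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str):
            problems.append("message missing string 'content'")
    # Without an assistant turn there is nothing for the model to learn.
    if not any(m.get("role") == "assistant" for m in messages):
        problems.append("no assistant message to learn from")
    return problems

good = '{"messages": [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}]}'
bad = '{"messages": [{"role": "user", "content": "Hi"}]}'
print(check_chat_example(good))  # []
print(check_chat_example(bad))   # ['no assistant message to learn from']
```

Running a pass like this over every line before uploading a training file catches the cheap mistakes early; the Cookbook notebook additionally estimates token counts and cost, which this sketch omits.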

19-slm/translations/cn/README.md

Lines changed: 4 additions & 4 deletions
@@ -154,7 +154,7 @@ Phi-3.5-MoE contains 16 expert modules of 3.8B parameters each; with only 6.6B active parameters it achieves
 - Standardized API support
 - Enterprise-grade security controls

-- Demo: [Calling the Phi-3.5-Vision API with NVIDIA NIM](./python/Phi-3-Vision-Nividia-NIM.ipynb?WT.mc_id=academic-105485-koreyst)
+- Demo: [Calling the Phi-3.5-Vision API with NVIDIA NIM](../../python/Phi-3-Vision-Nividia-NIM.ipynb?WT.mc_id=academic-105485-koreyst)

 ### Local inference

@@ -169,9 +169,9 @@ Phi-3.5-MoE contains 16 expert modules of 3.8B parameters each; with only 6.6B active parameters it achieves
 Note: the vision and MoE scenarios require GPU acceleration; unquantized CPU inference is inefficient.

 - Demos:
-  - [Calling Phi-3.5-Instruct](./python/phi35-instruct-demo.ipynb?WT.mc_id=academic-105485-koreyst)
-  - [Calling Phi-3.5-Vision](./python/phi35-vision-demo.ipynb?WT.mc_id=academic-105485-koreyst)
-  - [Calling Phi-3.5-MoE](./python/phi35_moe_demo.ipynb?WT.mc_id=academic-105485-koreyst)
+  - [Calling Phi-3.5-Instruct](../../python/phi35-instruct-demo.ipynb?WT.mc_id=academic-105485-koreyst)
+  - [Calling Phi-3.5-Vision](../../python/phi35-vision-demo.ipynb?WT.mc_id=academic-105485-koreyst)
+  - [Calling Phi-3.5-MoE](../../python/phi35_moe_demo.ipynb?WT.mc_id=academic-105485-koreyst)

 **Ollama**
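The local-inference demos above call the models from Python notebooks, while the **Ollama** context line in the hunk points at another local path: Ollama serves pulled models over a local REST API. A hedged sketch follows; the model tag `phi3.5` and the default `localhost:11434` endpoint are assumptions about a standard Ollama install, and the request is only constructed here, since actually sending it requires a running daemon with the model pulled.

```python
import json
import urllib.request

# Build a generate request for a local Ollama daemon.
# Assumptions: default endpoint, and `ollama pull phi3.5` already done.
payload = {
    "model": "phi3.5",
    "prompt": "Explain fine-tuning in one sentence.",
    "stream": False,  # ask for one JSON response instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

print(req.full_url)
# With a running daemon, the reply would be read like this:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["response"])
```

Because the vision and MoE variants need GPU acceleration (per the note in the diff), this CPU-friendly request pattern is most practical with the instruct model.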
