8 changes: 4 additions & 4 deletions 10-building-low-code-ai-applications/README.md
@@ -8,7 +8,7 @@

Now that we've learned how to build image generating applications, let's talk about low code. Generative AI can be used for a variety of different areas including low code, but what is low code and how can we add AI to it?

Building apps and solutions has become more easier for traditional developers and non-developers through the use of Low Code Development Platforms. Low Code Development Platforms enable you to build apps and solutions with little to no code. This is achieved by providing a visual development environment that enables you to drag and drop components to build apps and solutions. This enables you to build apps and solutions faster and with less resources. In this lesson, we dive deep into how to use Low Code and how to enhance low code development with AI using Power Platform.
Building apps and solutions has become easier for traditional developers and non-developers through the use of Low Code Development Platforms. Low Code Development Platforms enable you to build apps and solutions with little to no code. This is achieved by providing a visual development environment that enables you to drag and drop components to build apps and solutions. This enables you to build apps and solutions faster and with fewer resources. In this lesson, we dive deep into how to use Low Code and how to enhance low code development with AI using Power Platform.

The Power Platform provides organizations with the opportunity to empower their teams to build their own solutions through an intuitive low-code or no-code environment. This environment helps simplify the process of building solutions. With Power Platform, solutions can be built in days or weeks instead of months or years. Power Platform consists of five key products: Power Apps, Power Automate, Power BI, Power Pages and Copilot Studio.

@@ -67,7 +67,7 @@ As part of the Power Platform, Power Automate lets users create automated workfl

The copilot AI assistant feature in Power Automate enables you to describe what kind of flow you need and what actions you want your flow to perform. Copilot then generates a flow based on your description. You can then customize the flow to meet your needs. The AI Copilot also generates and suggests the actions you need to perform the task you want to automate. We will look at what flows are and how you can use them in Power Automate later in this lesson. You can then customize the actions to meet your needs using the AI Copilot assistant feature through conversational steps. This feature is readily available from the Power Automate home screen.

## Assignment: manage student assignments and invoices for our startup, using Copilot
## Assignment: Manage student assignments and invoices for our startup, using Copilot

Our startup provides online courses to students. The startup has grown rapidly and is now struggling to keep up with the demand for its courses. The startup has hired you as a Power Platform developer to help them build a low code solution to help them manage their student assignments and invoices. Their solution should be able to help them track and manage student assignments through an app and automate the invoice processing process through a workflow. You have been asked to use Generative AI to develop the solution.

@@ -119,7 +119,7 @@ The finance team of our startup has been struggling to keep track of invoices. T

The Power Platform has an underlying data platform called Dataverse that enables you to store the data for your apps and solutions. Dataverse provides a low-code data platform for storing the app's data. It is a fully managed service that securely stores data in the Microsoft Cloud and is provisioned within your Power Platform environment. It comes with built-in data governance capabilities, such as data classification, data lineage, fine-grained access control, and more. You can learn more [about Dataverse here](https://docs.microsoft.com/powerapps/maker/data-platform/data-platform-intro?WT.mc_id=academic-109639-somelezediko).

Why should we use Dataverse for our startup? The standard and custom tables within Dataverse provide a secure and cloud-based storage option for your data. Tables let you store different types of data, similar to how you might use multiple worksheets in a single Excel workbook. You can use tables to store data that is specific to your organization or business need. Some of the benefits our startup will get from using Dataverse include but are not limited to:
Why should we use Dataverse for our startup? The standard and custom tables within Dataverse provide a secure and cloud-based storage option for your data. Tables let you store different types of data, similar to how you might use multiple worksheets in a single Excel workbook. You can use tables to store data that is specific to your organization or business needs. Some of the benefits our startup will get from using Dataverse include but are not limited to:

- **Easy to manage**: Both the metadata and data are stored in the cloud, so you don't have to worry about the details of how they are stored or managed. You can focus on building your apps and solutions.

@@ -182,7 +182,7 @@ With Custom AI Models you can bring your own model into AI Builder so that it ca

The finance team has been struggling to process invoices. They have been using a spreadsheet to track the invoices but this has become difficult to manage as the number of invoices has increased. They have asked you to build a workflow that will help them process invoices using AI. The workflow should enable them to extract information from invoices and store the information in a Dataverse table. The workflow should also enable them to send an email to the finance team with the extracted information.

Now that you know what AI Builder is and why you should use it, let's look at how you can use the Invoice Processing AI Model in AI Builder, that we covered earlier on, to build a workflow that will help the finance team process invoices.
Now that you know what AI Builder is and why you should use it, let's look at how you can use the Invoice Processing AI Model in AI Builder, which we covered earlier on, to build a workflow that will help the finance team process invoices.

To build a workflow that will help the finance team process invoices using the Invoice Processing AI Model in AI Builder, follow the steps below:

14 changes: 7 additions & 7 deletions 11-integrating-with-function-calling/README.md
@@ -10,7 +10,7 @@ The above mentioned problems are what this chapter is looking to address.

This lesson will cover:

- Explain what is function calling and its use cases.
- Explain what function calling is and its use cases.
- Creating a function call using Azure OpenAI.
- How to integrate a function call into an application.

@@ -22,7 +22,7 @@ By the end of this lesson, you will be able to:
- Set up function calling using the Azure OpenAI Service.
- Design effective function calls for your application's use case.

## Scenario: improving our chatbot with functions
## Scenario: Improving our chatbot with functions

For this lesson, we want to build a feature for our education startup that allows users to use a chatbot to find technical courses. We will recommend courses that fit their skill level, current role and technology of interest.

@@ -38,7 +38,7 @@ To get started, let's look at why we would want to use function calling in the f

Before function calling, responses from an LLM were unstructured and inconsistent. Developers were required to write complex validation code to make sure they were able to handle each variation of a response. Users could not get answers to questions like "What is the current weather in Stockholm?". This is because models were limited to the data they were trained on.

Function Calling is a feature of the Azure OpenAI Service to overcome to the following limitations:
Function Calling is a feature of the Azure OpenAI Service to overcome the following limitations:

- **Consistent response format**. If we can better control the response format we can more easily integrate the response downstream to other systems.
- **External data**. Ability to use data from other sources of an application in a chat context.
@@ -189,7 +189,7 @@ There are many different use cases where function calls can improve your app lik
The process of creating a function call includes 3 main steps:

1. **Calling** the Chat Completions API with a list of your functions and a user message.
2. **Reading** the model's response to perform an action ie execute a function or API Call.
2. **Reading** the model's response to perform an action i.e. execute a function or API Call.
3. **Making** another call to Chat Completions API with the response from your function to use that information to create a response to the user.

![LLM Flow](./images/LLM-Flow.png?WT.mc_id=academic-105485-koreyst)
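
To make these three steps concrete, here is a minimal end-to-end sketch. It assumes the classic `functions`/`function_call` style of the OpenAI Python SDK (newer versions expose `tools`/`tool_calls` instead), and `search_courses` is a hypothetical local helper standing in for whatever function your application provides; the lesson describes the function definition fields in detail further down.

```python
import json
from openai import AzureOpenAI

# Assumes AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY and OPENAI_API_VERSION are set
# in the environment; the model name below is a placeholder deployment name.
client = AzureOpenAI()

# Hypothetical local function the model can ask us to call.
def search_courses(role="student", product="Azure", level="beginner"):
    return json.dumps([{"title": f"{product} fundamentals for a {level} {role}"}])

# A minimal, illustrative function definition the model can choose to call.
functions = [
    {
        "name": "search_courses",
        "description": "Retrieves courses from a catalog based on the learner's role, product and level",
        "parameters": {
            "type": "object",
            "properties": {
                "role": {"type": "string", "description": "The learner's role, e.g. developer or student"},
                "product": {"type": "string", "description": "The product the course covers, e.g. Azure"},
                "level": {"type": "string", "description": "beginner, intermediate or advanced"},
            },
            "required": ["role"],
        },
    }
]

messages = [{"role": "user", "content": "Find me a beginner course about Azure"}]

# Step 1: call Chat Completions with the user message and the function definitions.
response = client.chat.completions.create(model="gpt-4", messages=messages, functions=functions)
response_message = response.choices[0].message

# Step 2: read the response and, if the model asked for a function, execute it locally.
if response_message.function_call:
    args = json.loads(response_message.function_call.arguments)
    function_result = search_courses(**args)

    # Step 3: send the function result back so the model can phrase a final answer for the user.
    messages.append(response_message)
    messages.append({"role": "function", "name": response_message.function_call.name, "content": function_result})
    second_response = client.chat.completions.create(model="gpt-4", messages=messages)
    print(second_response.choices[0].message.content)
```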
@@ -247,7 +247,7 @@ Let's describe each function instance more in detail below:

- `name` - The name of the function that we want to have called.
- `description` - This is the description of how the function works. Here it's important to be specific and clear.
- `parameters` - A list of values and format that you want the model to produce in its response. The parameters array consists of items where item have the following properties:
- `parameters` - A list of values and format that you want the model to produce in its response. The parameters array consists of items where the items have the following properties:
1. `type` - The data type that the property values will be stored in.
1. `properties` - List of the specific values that the model will use for its response
1. `name` - The key is the name of the property that the model will use in its formatted response, for example, `product`.
@@ -305,7 +305,7 @@ After we have tested the formatted response from the LLM, now we can integrate t

To integrate this into our application, let's take the following steps:

1. First, let's make the call to the Open AI services and store the message in a variable called `response_message`.
1. First, let's make the call to the OpenAI services and store the message in a variable called `response_message`.

```python
response_message = response.choices[0].message
@@ -455,4 +455,4 @@ Hint: Follow the [Learn API reference documentation](https://learn.microsoft.com

After completing this lesson, check out our [Generative AI Learning collection](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) to continue leveling up your Generative AI knowledge!

Head over to Lesson 12 where we will look at how to [design UX for AI applications](../12-designing-ux-for-ai-applications/README.md?WT.mc_id=academic-105485-koreyst)!
Head over to Lesson 12, where we will look at how to [design UX for AI applications](../12-designing-ux-for-ai-applications/README.md?WT.mc_id=academic-105485-koreyst)!
6 changes: 3 additions & 3 deletions 18-fine-tuning/README.md
@@ -54,9 +54,9 @@ So, before you learn "how" to fine-tune language models, you need to know "why"
- Compute - for running fine-tuning jobs, and deploying fine-tuned model
- Data - access to sufficient quality examples for fine-tuning impact
- **Benefits**: Have you confirmed the benefits for fine-tuning?
- Quality - did fine-tuned model outperform baseline?
- Quality - did fine-tuned model outperform the baseline?
- Cost - does it reduce token usage by simplifying prompts?
- Extensibility - can you repurpose base model for new domains?
- Extensibility - can you repurpose the base model for new domains?

By answering these questions, you should be able to decide if fine-tuning is the right approach for your use case. Ideally, the approach is valid only if the benefits outweigh the costs. Once you decide to proceed, it's time to think about _how_ you can fine tune the pre-trained model.
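
As a rough illustration of what proceeding can look like in code, here is a minimal sketch using the OpenAI Python SDK. This is an assumption for illustration only: the lesson's own walkthrough may target Azure OpenAI or another provider, and the file name and base model below are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training examples live in a JSONL file, one chat-formatted example per line, e.g.
# {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off the fine-tuning job against a base model your account can fine-tune.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder base model
)
print(job.id, job.status)
```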

@@ -94,4 +94,4 @@ After completing this lesson, check out our [Generative AI Learning collection](

Congratulations!! You have completed the final lesson from the v2 series for this course! Don't stop learning and building. Check out the [RESOURCES](RESOURCES.md?WT.mc_id=academic-105485-koreyst) page for a list of additional suggestions for just this topic.

Our v1 series of lessons have also been updated with more assignments and concepts. So take a minute to refresh your knowledge - and please [share your questions and feedback](https://github.com/microsoft/generative-ai-for-beginners/issues?WT.mc_id=academic-105485-koreyst) to help us improve these lessons for the community.
Our v1 series of lessons has also been updated with more assignments and concepts. So take a minute to refresh your knowledge - and please [share your questions and feedback](https://github.com/microsoft/generative-ai-for-beginners/issues?WT.mc_id=academic-105485-koreyst) to help us improve these lessons for the community.
10 changes: 5 additions & 5 deletions 19-slm/README.md
@@ -99,7 +99,7 @@
We can think of it as an upgrade to Phi-3-mini. While the parameters remain unchanged, it improves multilingual support (it now supports 20+ languages: Arabic, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Hebrew, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish, and Ukrainian) and adds stronger support for long context.

Phi-3.5-mini with 3.8B parameters outperforms language models of the same size and on par with models twice its size.
Phi-3.5-mini with 3.8B parameters outperforms language models of the same size and is on par with models twice its size.

### Phi-3 / 3.5 Vision

@@ -120,7 +120,7 @@

### Phi-3.5-MoE

***Mixture of Experts(MoE)*** enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
***Mixture of Experts(MoE)*** enables models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.

Phi-3.5-MoE comprises 16x3.8B expert modules. Phi-3.5-MoE with only 6.6B active parameters achieves a similar level of reasoning, language understanding, and math as much larger models.
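
To see why the "active" parameter count can be so much smaller than the total, here is a toy PyTorch sketch of a routed expert layer that runs only the top-k experts per token. This is an illustration only, not Phi-3.5-MoE's actual implementation.

```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    def __init__(self, dim=64, num_experts=16, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        self.top_k = top_k

    def forward(self, x):                                   # x: (tokens, dim)
        scores = self.router(x)                             # (tokens, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)   # pick top-k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                      # only the chosen experts run
            for e in range(len(self.experts)):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

layer = ToyMoELayer()
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```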

@@ -148,12 +148,12 @@

**Azure AI Studio**

Or, if you want to use the vision and MoE models, you can use Azure AI Studio to complete the call. If you are interested, you can read the Phi-3 Cookbook to learn how to call Phi-3/3.5 Instruct, Vision, and MoE through Azure AI Studio: [Click this link](https://github.com/microsoft/Phi-3CookBook/blob/main/md/02.QuickStart/AzureAIStudio_QuickStart.md?WT.mc_id=academic-105485-koreyst).

**NVIDIA NIM**

In addition to the cloud-based Model Catalog solutions provided by Azure and GitHub, you can also use [Nivida NIM](https://developer.nvidia.com/nim?WT.mc_id=academic-105485-koreyst) to complete related calls. You can visit NIVIDA NIM to complete the API calls of Phi-3/3.5 Family. NVIDIA NIM (NVIDIA Inference Microservices) is a set of accelerated inference microservices designed to help developers deploy AI models efficiently across various environments, including clouds, data centers, and workstations.
In addition to the cloud-based Model Catalog solutions provided by Azure and GitHub, you can also use [NVIDIA NIM](https://developer.nvidia.com/nim?WT.mc_id=academic-105485-koreyst) to complete related calls. You can visit NVIDIA NIM to complete the API calls of the Phi-3/3.5 Family. NVIDIA NIM (NVIDIA Inference Microservices) is a set of accelerated inference microservices designed to help developers deploy AI models efficiently across various environments, including clouds, data centers, and workstations.
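
Because NIM endpoints expose an OpenAI-compatible API, a call can look roughly like the sketch below. The base URL and model id are assumptions; check the NVIDIA API catalog for the exact values for the Phi model you want.

```python
from openai import OpenAI

# Assumed NIM catalog endpoint and model id - verify both in the NVIDIA API catalog.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="YOUR_NVIDIA_API_KEY",
)

completion = client.chat.completions.create(
    model="microsoft/phi-3.5-mini-instruct",  # assumed model id
    messages=[{"role": "user", "content": "Explain what a small language model is in one paragraph."}],
    temperature=0.2,
)
print(completion.choices[0].message.content)
```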

Here are some key features of NVIDIA NIM:

@@ -171,7 +171,7 @@
### Inference Phi-3/3.5 in local env
Inference in relation to Phi-3, or any language model like GPT-3, refers to the process of generating responses or predictions based on the input it receives. When you provide a prompt or question to Phi-3, it uses its trained neural network to infer the most likely and relevant response by analyzing patterns and relationships in the data it was trained on.

**Hugging face Transformer**
**Hugging Face Transformer**
Hugging Face Transformers is a powerful library designed for natural language processing (NLP) and other machine learning tasks. Here are some key points about it:

1. **Pretrained Models**: It provides thousands of pretrained models that can be used for various tasks such as text classification, named entity recognition, question answering, summarization, translation, and text generation.
@@ -223,7 +223,7 @@
- **Performance Optimization:** It includes optimizations for different hardware accelerators like NVIDIA GPUs, AMD GPUs, and more.
- **Ease of Use:** It provides APIs for easy integration into applications, allowing you to generate text, images, and other content with minimal code.
- Users can call a high-level generate() method, or run each iteration of the model in a loop, generating one token at a time, and optionally updating generation parameters inside the loop.
- ONNX runtume also has support for greedy/beam search and TopP, TopK sampling to generate token sequences and built-in logits processing like repetition penalties. You can also easily add custom scoring.
- ONNX runtime also has support for greedy/beam search and TopP, TopK sampling to generate token sequences and built-in logits processing like repetition penalties. You can also easily add custom scoring.
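
As a rough sketch of that token-by-token loop, the snippet below follows the onnxruntime-genai Python examples for Phi-3; the exact API surface and model path may differ between versions, so treat it as illustrative.

```python
import onnxruntime_genai as og

# Path to a Phi-3 model exported for ONNX Runtime GenAI (placeholder path).
model = og.Model("path/to/phi-3-mini-onnx")
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

params = og.GeneratorParams(model)
params.set_search_options(max_length=200, top_p=0.9, top_k=50)
params.input_ids = tokenizer.encode("<|user|>\nTell me a joke about Python<|end|>\n<|assistant|>")

# Generate one token at a time and stream it to the console.
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    new_token = generator.get_next_tokens()[0]
    print(tokenizer_stream.decode(new_token), end="", flush=True)
```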

## Getting Started
To get started with ONNX Runtime for GENAI, you can follow these steps:
8 changes: 4 additions & 4 deletions 20-mistral/README.md
@@ -12,13 +12,13 @@ This lesson will cover:
In this lesson, we will explore 3 different Mistral models:
**Mistral Large**, **Mistral Small** and **Mistral Nemo**.

Each of these models are available free on the Github Model marketplace. The code in this notebook will be using this models to run the code. Here are more details on using Github Models to [prototype with AI models](https://docs.github.com/en/github-models/prototyping-with-ai-models?WT.mc_id=academic-105485-koreyst).
Each of these models is available for free on the GitHub Models marketplace. The code in this notebook will use these models to run the examples. Here are more details on using GitHub Models to [prototype with AI models](https://docs.github.com/en/github-models/prototyping-with-ai-models?WT.mc_id=academic-105485-koreyst).
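
For reference, a minimal call through GitHub Models can look like the sketch below, using the azure-ai-inference package and a GitHub personal access token. The endpoint follows the GitHub Models samples; the exact model id for each Mistral variant is an assumption worth verifying in the marketplace.

```python
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# GitHub Models serves hosted models behind a single inference endpoint,
# authenticated with a personal access token stored in GITHUB_TOKEN.
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

response = client.complete(
    model="Mistral-large-2407",  # assumed marketplace id; swap in Mistral-small or Mistral-Nemo
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what Mistral Large 2 is good at."),
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)
```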


## Mistral Large 2 (2407)
Mistral Large 2 is currently the flagship model from Mistral and is designed for enterprise use.

The model is an upgrade to the original Mistral Large by offering
The model is an upgrade to the original Mistral Large by offering
- Larger Context Window - 128k vs 32k
- Better performance on Math and Coding Tasks - 76.9% average accuracy vs 60.4%
- Increased multilingual performance - languages include: English, French, German, Spanish, Italian, Portuguese, Dutch, Russian, Chinese, Japanese, Korean, Arabic, and Hindi.
@@ -212,7 +212,7 @@ Compared to the other two models discussed in this lesson, Mistral NeMo is the o

It is viewed as an upgrade to the earlier open source LLM from Mistral, Mistral 7B.

Some other feature of the NeMo model are:
Some other features of the NeMo model are:

- *More efficient tokenization:* This model uses the Tekken tokenizer instead of the more commonly used tiktoken. This allows for better performance across more languages and code.

@@ -345,4 +345,4 @@ print(len(tokens))

## Learning does not stop here, continue the Journey

After completing this lesson, check out our [Generative AI Learning collection](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) to continue leveling up your Generative AI knowledge!
After completing this lesson, check out our [Generative AI Learning collection](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) to continue leveling up your Generative AI knowledge!