Commit f5c0c95

AI agents added
1 parent f93fbff commit f5c0c95

File tree

4 files changed: +194 -78 lines changed


generative_ai/LLM.md

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@

## LLMs From Scratch Series

1. Andrej Karpathy - [10 Videos](https://youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ&si=EuGApF9EXdu1_an5)
   - The spelled-out intro to neural networks and backpropagation: building micrograd
   - The spelled-out intro to language modeling: building makemore
   - Building makemore Part 2: MLP
   - Building makemore Part 3: Activations & Gradients, BatchNorm
   - Building makemore Part 4: Becoming a Backprop Ninja
   - Building makemore Part 5: Building a WaveNet
   - Let's build GPT: from scratch, in code, spelled out.
   - State of GPT
   - Let's build the GPT Tokenizer
   - Let's reproduce GPT-2 (124M)

2. StatQuest with Josh Starmer - [15 Videos](https://youtube.com/playlist?list=PLblh5JKOoLUIxGDQs4LFFD--41Vzf-ME1&si=DfkDMWz58VJgBrsD)
   - Recurrent Neural Networks (RNNs), Clearly Explained!!!
   - Long Short-Term Memory (LSTM), Clearly Explained
   - Word Embedding and Word2Vec, Clearly Explained!!!
   - Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks, Clearly Explained!!!
   - Attention for Neural Networks, Clearly Explained!!!
   - Transformer Neural Networks, ChatGPT's foundation, Clearly Explained!!!
   - Decoder-Only Transformers, ChatGPT's specific Transformer, Clearly Explained!!!
   - Tensors for Neural Networks, Clearly Explained!!!
   - Essential Matrix Algebra for Neural Networks, Clearly Explained!!!
   - The matrix math behind transformer neural networks, one step at a time!!!
   - The StatQuest Introduction to PyTorch
   - Introduction to Coding Neural Networks with PyTorch and Lightning
   - Long Short-Term Memory with PyTorch + Lightning
   - Word Embedding in PyTorch + Lightning
   - Coding a ChatGPT Like Transformer From Scratch in PyTorch

3. Sebastian Raschka - [5 Videos](https://youtube.com/playlist?list=PLTKMiZHVd_2Licpov-ZK24j6oUnbhiPkm&si=9oqXgWnDumkgA176)
   - Developing an LLM: Building, Training, Finetuning
   - Understanding PyTorch Buffers
   - Finetuning Open-Source LLMs
   - Insights from Finetuning LLMs with Low-Rank Adaptation
   - Building LLMs from the Ground Up: A 3-hour Coding Workshop

4. CodeEmporium - [12 Videos](https://youtube.com/playlist?list=PLTl9hO2Oobd97qfWC40gOSU8C0iu0m2l4&si=_kt_U8h_i2QtJyPj)
   - Self Attention in Transformer Neural Networks (with Code!)
   - Multi Head Attention in Transformer Neural Networks with Code!
   - Positional Encoding in Transformer Neural Networks Explained
   - Layer Normalization - EXPLAINED (in Transformer Neural Networks)
   - Blowing up the Transformer Encoder!
   - Transformer Encoder in 100 lines of code!
   - Blowing up Transformer Decoder architecture
   - Transformer Decoder coded from scratch
   - Sentence Tokenization in Transformer Code from scratch!
   - The complete guide to Transformer neural Networks!
   - The complete Transformer Neural Network Code in 300 lines!
   - Building a Translator with Transformers

5. Jay Alammar
   - Coming soon

6. Luis Serrano
   - Coming soon

### Module 1 - Introduction to Generative AI

| Topic | References |
| ----- |:---------- |
| Introduction to Generative AI, Importance and Applications | [Intro to Generative AI - Google Cloud Tech ▶️](https://www.youtube.com/watch?v=G2fqAlgmoPo&pp=ygUdaW50cm9kdWN0aW9uIHRvIGdlbmVyYXRpdmUgYWk%3D) |
| Autoencoders and Variational Autoencoders (VAEs) | [Variational Autoencoders - ArxivInsights ▶️](https://www.youtube.com/watch?v=9zKuYvjFFS8&t=346s&pp=ygUXdmFyaWF0aW9uYWwgYXV0b2VuY29kZXI%3D)<br>[Autoencoders Explained Easily ▶️](https://youtu.be/SSXDkfiPs7c?si=3KD2T44sQQSMFjdG)<br>[Autoencoders - Jeremy Jordan 🧾](https://www.jeremyjordan.me/autoencoders/) |
| Generative Adversarial Networks (GANs) | [A Friendly Introduction to Generative Adversarial Networks (GANs) - Serrano.Academy ▶️](https://www.youtube.com/watch?v=8L11aMN5KY8&t=1076s&pp=ygUER2Fucw%3D%3D)<br>[6 GAN Architectures You Really Should Know - neptune.ai 🧾](https://neptune.ai/blog/6-gan-architectures) |
| Autoregressive Models and RBMs | [Guide to Autoregressive Models - Turing 🧾](https://www.turing.com/kb/guide-to-autoregressive-models)<br>[Autoregressive Diffusion Models - Yannic Kilcher ▶️](https://www.youtube.com/watch?v=2h4tRsQzipQ)<br>[Restricted Boltzmann Machines (RBM) - Serrano.Academy ▶️](https://www.youtube.com/watch?v=Fkw0_aAtwIw) |
| Text Generation and Language Modeling | [Text Generation - Hugging Face 🧾](https://huggingface.co/tasks/text-generation) |

### Module 2 - Deep Learning Based Natural Language Processing

| Topic | References |
| ----- |:---------- |
| Word Embedding | [Word Embedding and Word2Vec - StatQuest ▶️](https://www.youtube.com/watch?v=viZrOnJclY0)<br>[Word2Vec, GloVe, FastText - CodeEmporium ▶️](https://www.youtube.com/watch?v=9S0-OC4LFNo&t=386s&pp=ygUQV29yZCBFbWJlZGRpbmdzIA%3D%3D) |
| Representation Learning | [Representation Learning Complete Guide - AIM 🧾](https://analyticsindiamag.com/a-comprehensive-guide-to-representation-learning-for-beginners/#:~:text=Representation%20learning%20is%20a%20class,them%20to%20a%20given%20activity.) |
| Sequence-to-Sequence Models, Encoder-Decoder Architectures | [Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks - StatQuest ▶️](https://www.youtube.com/watch?v=L8HKweZIOmg&pp=ygUbU2VxdWVuY2UtdG8tU2VxdWVuY2UgTW9kZWxz)<br>[Encoder-Decoder Seq2Seq Models - Kriz Moses 🧾](https://medium.com/analytics-vidhya/encoder-decoder-seq2seq-models-clearly-explained-c34186fbf49b) |
| seq2seq with Attention | [Sequence to Sequence (seq2seq) and Attention - Lena Voita 🧾](https://lena-voita.github.io/nlp_course/seq2seq_and_attention.html)<br>[Attention for Neural Networks - StatQuest ▶️](https://www.youtube.com/watch?v=PSs6nxngL6k&pp=ygUsU2VxdWVuY2UgdG8gU2VxdWVuY2UgKHNlcTJzZXEpIGFuZCBBdHRlbnRpb24%3D) |
| Self Attention, Transformers | [Introduction to Transformers - Andrej Karpathy ▶️](https://www.youtube.com/watch?v=XfpMkf4rD6E)<br>[Attention for Neural Networks - StatQuest ▶️](https://www.youtube.com/watch?v=PSs6nxngL6k)<br>[Self attention - H2O.ai 🧾](https://h2o.ai/wiki/self-attention/)<br>[What are Transformer Models and how do they work? - Serrano.Academy ▶️](https://www.youtube.com/watch?v=qaWMOYf4ri8) |
| Self-Supervised Learning | [Self-Supervised Learning: The Dark Matter of Intelligence - Yannic Kilcher ▶️](https://www.youtube.com/watch?v=Ag1bw8MfHGQ)<br>[Self-Supervised Learning and Its Applications - neptune.ai 🧾](https://neptune.ai/blog/self-supervised-learning) |
| Advanced NLP | [Stanford CS224N: NLP with Deep Learning ▶️](https://www.youtube.com/watch?v=rmVRLeJRkl4&list=PLoROMvodv4rMFqRtEuo6SGjY4XbRIVRd4&pp=iAQB)<br>[Natural Language Processing: Advance Techniques 🧾](https://medium.com/analytics-vidhya/natural-language-processing-advance-techniques-in-depth-analysis-b67bca5db432) |

generative_ai/README.md

Lines changed: 0 additions & 78 deletions
@@ -1,80 +1,2 @@

## Generative AI

This is a comprehensive guide to understanding and navigating the realm of Generative AI. Generative AI has gained significant traction in recent years due to its wide range of applications across various domains. From generating realistic images to aiding in natural language processing tasks, Generative AI has revolutionized how we interact with and create content.

[78 deleted lines omitted: the "LLMs From Scratch Series" and module tables, moved verbatim into generative_ai/LLM.md above.]

generative_ai/agents.md

Lines changed: 74 additions & 0 deletions
@@ -0,0 +1,74 @@

## AI Agents Roadmap

📌 Level 1: Learning the Basics of GenAI and RAG

1. GenAI Introduction
   - Key Concepts to Learn:
     a. Overview of Generative AI and its applications.
     b. Differences between generative and traditional AI.

2. Basics of LLMs
   - Key Concepts to Learn:
     a. Transformer architecture and attention mechanisms.
     b. Tokenization and embeddings.
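
To make (b) concrete, here is a minimal sketch of tokenization and a toy embedding lookup, assuming the `tiktoken` and `numpy` packages are installed; the 8-dimensional random table is an illustration, not a real model's weights:

```python
import numpy as np
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # a BPE tokenizer used by several OpenAI models

ids = enc.encode("Generative AI turns text into tokens.")
print(ids)                                   # a short list of integer token ids
print(enc.decode(ids))                       # round-trips back to the original string

# An embedding layer is just a learned lookup table: one row of floats per token id.
vocab_size, dim = enc.n_vocab, 8             # dim=8 is a toy size; real models use thousands
table = np.random.default_rng(0).normal(size=(vocab_size, dim))
vectors = table[ids]                         # shape: (num_tokens, dim)
print(vectors.shape)
```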

3. Basics of Prompt Engineering
   - Key Concepts to Learn:
     a. Using zero-shot, few-shot, and chain-of-thought prompting.
     b. Techniques like temperature control for refining output.
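
A hedged sketch of few-shot prompting with temperature control using the `openai` Python SDK (v1+); the model name is only an example, and an `OPENAI_API_KEY` environment variable is assumed:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot prompting: show the model worked examples before the real query.
messages = [
    {"role": "system", "content": "Classify the sentiment as positive or negative."},
    {"role": "user", "content": "The movie was a delight."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "I want those two hours back."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "A flawed but charming debut."},  # the real query
]

resp = client.chat.completions.create(
    model="gpt-4o-mini",   # example model name; any chat model works
    messages=messages,
    temperature=0.0,       # low temperature -> more deterministic, focused output
)
print(resp.choices[0].message.content)
```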

4. Data Handling and Processing
   - Key Concepts to Learn:
     a. Cleaning and structuring data for training and inference.
     b. Preprocessing techniques like tokenization and normalization.
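
A small self-contained example of the normalization step; the exact cleaning pipeline is task-dependent:

```python
import re
import unicodedata

def normalize(text: str) -> str:
    text = unicodedata.normalize("NFKC", text)  # fold Unicode variants (e.g. non-breaking spaces)
    text = text.lower()                         # case-fold for case-insensitive tasks
    return re.sub(r"\s+", " ", text).strip()    # collapse runs of whitespace

print(normalize("  Generative\u00a0AI\n is  FUN  "))  # -> "generative ai is fun"
```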

5. Introduction to API Wrappers
   - Key Concepts to Learn:
     a. Automating tasks using API calls.
     b. Basics of REST and GraphQL APIs.
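
A sketch of a thin REST wrapper built on the `requests` library; the endpoint URL and response shape are hypothetical placeholders:

```python
import requests

class WeatherClient:
    """Thin wrapper that hides auth and URL details behind plain methods."""
    BASE_URL = "https://api.example.com/v1"  # placeholder endpoint

    def __init__(self, api_key: str):
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_key}"

    def current(self, city: str) -> dict:
        resp = self.session.get(f"{self.BASE_URL}/weather", params={"city": city})
        resp.raise_for_status()              # surface HTTP errors early
        return resp.json()

# client = WeatherClient("my-key")
# print(client.current("Pune"))
```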

6. RAG Essentials
   - Key Concepts to Learn:
     a. Basics of Retrieval-Augmented Generation (RAG).
     b. Embedding-based search with vector databases like ChromaDB and Milvus.
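
A minimal embedding-search sketch with ChromaDB (`pip install chromadb`); when no embedding function is specified, Chroma falls back to its default embedding model:

```python
import chromadb

client = chromadb.Client()                   # in-memory instance
docs = client.create_collection(name="docs")

docs.add(
    ids=["1", "2", "3"],
    documents=[
        "RAG retrieves documents to ground LLM answers.",
        "Vector databases index embeddings for similarity search.",
        "Transformers apply attention over token embeddings.",
    ],
)

hits = docs.query(query_texts=["how does retrieval help an LLM?"], n_results=2)
print(hits["documents"][0])                  # the two nearest documents for the query
```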

📌 Level 2: AI Agent-Focused Learning

1. Introduction to AI Agents
   - Key Concepts to Learn:
     a. Agent-environment interaction.

2. Learn Agentic Frameworks
   - Key Concepts to Learn:
     a. Agent workflows with frameworks like LangChain.
     b. Exploring low-code builders like Langflow.

3. Building a Simple AI Agent
   - Key Concepts to Learn:
     a. Creating an agent with a framework.
     b. LLM API keys and integration.
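
A framework-free sketch of the loop that frameworks like LangChain wrap for you: the model chooses a tool, the runtime executes it, and the observation is fed back. `call_llm` is a placeholder for a real LLM API call (this is where your API key would come in):

```python
import json

# Tools the agent is allowed to call.
TOOLS = {
    "add": lambda a, b: a + b,
    "word_count": lambda text: len(text.split()),
}

def call_llm(messages):
    """Placeholder: a real model would read the conversation and emit the next action as JSON."""
    return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}, "final": True})

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = json.loads(call_llm(messages))             # model picks the next tool
        result = TOOLS[action["tool"]](**action["args"])    # runtime executes it
        messages.append({"role": "tool", "content": str(result)})  # observation fed back
        if action.get("final"):                             # model signals it is done
            return result
    return None

print(run_agent("What is 2 + 3?"))  # -> 5
```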

4. Basics of Agentic Workflow
   - Key Concepts to Learn:
     a. Breaking tasks into logical steps and optimizing orchestration for seamless agent collaboration.
     b. Implementing robust error-recovery mechanisms.

5. Learning About Agentic Memory
   - Key Concepts to Learn:
     a. Short-term vs. long-term vs. episodic memory.
     b. Storage and retrieval mechanisms (vector, key-value, knowledge graph).
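
A toy illustration of the short-term/long-term split; a production system would back long-term memory with embeddings or a knowledge graph rather than the naive word overlap used here:

```python
from collections import deque

class AgentMemory:
    def __init__(self, window: int = 4):
        self.short_term = deque(maxlen=window)  # recent turns only; old ones fall off
        self.long_term: dict[str, str] = {}     # persistent key-value facts

    def remember_turn(self, turn: str):
        self.short_term.append(turn)

    def store_fact(self, key: str, fact: str):
        self.long_term[key] = fact

    def recall(self, query: str) -> list[str]:
        words = set(query.lower().split())       # naive retrieval by word overlap
        return [fact for key, fact in self.long_term.items()
                if words & set(key.lower().split())]

memory = AgentMemory()
memory.store_fact("user city", "The user lives in Pune.")
memory.remember_turn("user: what's the weather like?")
print(memory.recall("which city is the user in"))  # -> ["The user lives in Pune."]
```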

6. Basics of Agentic Evaluation
   - Key Concepts to Learn:
     a. Measuring success metrics like accuracy and response time.
     b. Evaluating agent decision-making and context retention.
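
A minimal evaluation harness for accuracy and response time; `agent_fn` stands in for any agent under test:

```python
import time

def evaluate(agent_fn, cases: list[tuple[str, str]]) -> dict:
    correct, latencies = 0, []
    for prompt, expected in cases:
        start = time.perf_counter()
        answer = agent_fn(prompt)                        # run the agent on one case
        latencies.append(time.perf_counter() - start)    # response time for this case
        correct += int(answer.strip().lower() == expected.lower())
    return {
        "accuracy": correct / len(cases),
        "avg_latency_s": sum(latencies) / len(latencies),
    }

cases = [("2+3?", "5"), ("capital of France?", "Paris")]
print(evaluate(lambda p: "5" if "2+3" in p else "Paris", cases))
```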

7. Basics of Multi-Agent Collaboration
   - Key Concepts to Learn:
     a. Collaboration strategies and agent dependencies.
     b. Agent communication protocols.

8. Learning Agentic RAG
   - Key Concepts to Learn:
     a. Context handling and memory.
     b. Building agentic pipelines (see the sketch below).
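
Putting the pieces together, a compact sketch of an agentic RAG pipeline in which the agent decides whether retrieval is needed before answering; `retrieve` and `generate` are placeholders for a vector-store query and an LLM call:

```python
def retrieve(query: str) -> list[str]:
    # Placeholder corpus; a real pipeline would query a vector database here.
    corpus = {"pricing": "The pro plan costs $20/month.",
              "refunds": "Refunds are issued within 14 days."}
    return [v for k, v in corpus.items() if k in query.lower()]

def generate(prompt: str) -> str:
    return f"(LLM answer grounded in) {prompt}"   # stand-in for a model call

def agentic_rag(query: str) -> str:
    context = retrieve(query)                     # step 1: agent gathers context
    if not context:                               # step 2: fall back to a direct answer
        return generate(query)
    return generate(f"context: {' '.join(context)} question: {query}")

print(agentic_rag("What is your refunds policy?"))
```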

generative_ai/anthropic.md

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@

1. Prompting guide
   https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview

2. Claude Code: Best practices for agentic coding
   https://www.anthropic.com/engineering/claude-code-best-practices

3. Building effective agents
   https://www.anthropic.com/engineering/building-effective-agents

4. AI Fluency: The AI Fluency Framework
   https://www.anthropic.com/ai-fluency/overview

5. Build with Claude
   https://docs.anthropic.com/en/home

6. Claude for Work
   https://www.anthropic.com/learn/claude-for-work

7. Anthropic API fundamentals
   https://github.com/anthropics/courses/blob/master/anthropic_api_fundamentals/README.md#anthropic-api-fundamentals

8. Real world prompting
   https://github.com/anthropics/courses/blob/master/real_world_prompting/README.md

9. Prompt evaluations
   https://github.com/anthropics/courses/blob/master/prompt_evaluations/README.md

10. Claude Customer Support Agent
    https://github.com/anthropics/anthropic-quickstarts/tree/main/customer-support-agent

# Anthropic courses

Welcome to Anthropic's educational courses. This repository currently contains five courses. We suggest completing the courses in the following order:

1. [Anthropic API fundamentals](./anthropic_api_fundamentals/README.md) - teaches the essentials of working with the Claude SDK: getting an API key, working with model parameters, writing multimodal prompts, streaming responses, etc. (a minimal example follows this list)
2. [Prompt engineering interactive tutorial](./prompt_engineering_interactive_tutorial/README.md) - a comprehensive step-by-step guide to key prompting techniques. [[AWS Workshop version](https://catalog.us-east-1.prod.workshops.aws/workshops/0644c9e9-5b82-45f2-8835-3b5aa30b1848/en-US)]
3. [Real world prompting](./real_world_prompting/README.md) - learn how to incorporate prompting techniques into complex, real world prompts. [[Google Vertex version](https://github.com/anthropics/courses/tree/vertex/real_world_prompting)]
4. [Prompt evaluations](./prompt_evaluations/README.md) - learn how to write production prompt evaluations to measure the quality of your prompts.
5. [Tool use](./tool_use/README.md) - teaches everything you need to know to implement tool use successfully in your workflows with Claude.
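
As a taste of what course 1 covers, a minimal Messages API call with the `anthropic` Python SDK; the model name is an example (check the docs for current ones) and an `ANTHROPIC_API_KEY` environment variable must be set:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model name
    max_tokens=256,
    messages=[{"role": "user", "content": "In one sentence, what is tool use?"}],
)
print(message.content[0].text)         # the first (text) content block of the reply
```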
