|
## Generative AI

This is a comprehensive guide to understanding and navigating the realm of Generative AI. Generative AI has gained significant traction in recent years due to its wide range of applications across various domains. From generating realistic images to aiding in natural language processing tasks, Generative AI has revolutionized how we interact with and create content.

### LLMs From Scratch Series

1. Andrej Karpathy - [10 Videos](https://youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ&si=EuGApF9EXdu1_an5)
    - The spelled-out intro to neural networks and backpropagation: building micrograd
    - The spelled-out intro to language modeling: building makemore (a toy bigram sketch follows this list)
    - Building makemore Part 2: MLP
    - Building makemore Part 3: Activations & Gradients, BatchNorm
    - Building makemore Part 4: Becoming a Backprop Ninja
    - Building makemore Part 5: Building a WaveNet
    - Let's build GPT: from scratch, in code, spelled out.
    - State of GPT
    - Let's build the GPT Tokenizer
    - Let's reproduce GPT-2 (124M)

2. StatQuest with Josh Starmer - [15 Videos](https://youtube.com/playlist?list=PLblh5JKOoLUIxGDQs4LFFD--41Vzf-ME1&si=DfkDMWz58VJgBrsD)
    - Recurrent Neural Networks (RNNs), Clearly Explained!!!
    - Long Short-Term Memory (LSTM), Clearly Explained
    - Word Embedding and Word2Vec, Clearly Explained!!!
    - Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks, Clearly Explained!!!
    - Attention for Neural Networks, Clearly Explained!!!
    - Transformer Neural Networks, ChatGPT's foundation, Clearly Explained!!!
    - Decoder-Only Transformers, ChatGPTs specific Transformer, Clearly Explained!!!
    - Tensors for Neural Networks, Clearly Explained!!!
    - Essential Matrix Algebra for Neural Networks, Clearly Explained!!!
    - The matrix math behind transformer neural networks, one step at a time!!!
    - The StatQuest Introduction to PyTorch
    - Introduction to Coding Neural Networks with PyTorch and Lightning
    - Long Short-Term Memory with PyTorch + Lightning
    - Word Embedding in PyTorch + Lightning
    - Coding a ChatGPT Like Transformer From Scratch in PyTorch
3. Sebastian Raschka - [5 Videos](https://youtube.com/playlist?list=PLTKMiZHVd_2Licpov-ZK24j6oUnbhiPkm&si=9oqXgWnDumkgA176)
    - Developing an LLM: Building, Training, Finetuning
    - Understanding PyTorch Buffers
    - Finetuning Open-Source LLMs
    - Insights from Finetuning LLMs with Low-Rank Adaptation
    - Building LLMs from the Ground Up: A 3-hour Coding Workshop
4. CodeEmporium - [12 Videos](https://youtube.com/playlist?list=PLTl9hO2Oobd97qfWC40gOSU8C0iu0m2l4&si=_kt_U8h_i2QtJyPj)
    - Self Attention in Transformer Neural Networks (with Code!)
    - Multi Head Attention in Transformer Neural Networks with Code!
    - Positional Encoding in Transformer Neural Networks Explained
    - Layer Normalization - EXPLAINED (in Transformer Neural Networks)
    - Blowing up the Transformer Encoder!
    - Transformer Encoder in 100 lines of code!
    - Blowing up Transformer Decoder architecture
    - Transformer Decoder coded from scratch
    - Sentence Tokenization in Transformer Code from scratch!
    - The complete guide to Transformer neural Networks!
    - The complete Transformer Neural Network Code in 300 lines!
    - Building a Translator with Transformers

5. Jay Alammar
    - Coming soon

6. Luis Serrano
    - Coming soon

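Before the modules, a taste of how small these systems can start: below is a toy character-level bigram language model in the spirit of Karpathy's makemore videos. The word list and the sampling loop are illustrative stand-ins, not code from the series.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Tiny stand-in corpus; makemore trains on a large list of real names.
words = ["emma", "olivia", "ava", "isabella", "sophia", "mia", "amelia"]

# Count how often each character follows each other character.
# "." is a special token marking both the start and the end of a name.
counts = defaultdict(Counter)
for w in words:
    chars = ["."] + list(w) + ["."]
    for a, b in zip(chars, chars[1:]):
        counts[a][b] += 1

def sample():
    """Generate one name by walking the bigram transition table."""
    out, ch = [], "."
    while True:
        nxt = counts[ch]
        ch = random.choices(list(nxt), weights=list(nxt.values()))[0]
        if ch == ".":                # end token: the name is finished
            return "".join(out)
        out.append(ch)

print([sample() for _ in range(5)])
```

Replacing the count table with learned logits trained by gradient descent gives the neural version of this model, which is the direction the makemore series takes next.
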
### Module 1 - Introduction to Generative AI

| Topic | References |
| ----- |:---------- |
| Introduction to Generative AI, Importance and Applications | [Intro to Generative AI - Google Cloud Tech▶️](https://www.youtube.com/watch?v=G2fqAlgmoPo&pp=ygUdaW50cm9kdWN0aW9uIHRvIGdlbmVyYXRpdmUgYWk%3D) |
| Autoencoders and Variational Autoencoders (VAEs) (sketch after this table) | [Variational Autoencoders - ArxivInsights▶️](https://www.youtube.com/watch?v=9zKuYvjFFS8&t=346s&pp=ygUXdmFyaWF0aW9uYWwgYXV0b2VuY29kZXI%3D)<br>[Autoencoders Explained Easily▶️](https://youtu.be/SSXDkfiPs7c?si=3KD2T44sQQSMFjdG)<br>[Autoencoders - Jeremy Jordan🧾](https://www.jeremyjordan.me/autoencoders/) |
| Generative Adversarial Networks (GANs) (training-step sketch after this table) | [A Friendly Introduction to Generative Adversarial Networks (GANs) - Serrano.Academy▶️](https://www.youtube.com/watch?v=8L11aMN5KY8&t=1076s&pp=ygUER2Fucw%3D%3D)<br>[6 GAN Architectures You Really Should Know - neptune.ai🧾](https://neptune.ai/blog/6-gan-architectures) |
| Autoregressive Models and RBMs | [Guide to Autoregressive Models - Turing🧾](https://www.turing.com/kb/guide-to-autoregressive-models)<br>[Autoregressive Diffusion Models - Yannic Kilcher▶️](https://www.youtube.com/watch?v=2h4tRsQzipQ)<br>[Restricted Boltzmann Machines (RBM) - Serrano.Academy▶️](https://www.youtube.com/watch?v=Fkw0_aAtwIw) |
| Text Generation and Language Modeling | [Text Generation - HuggingFace🧾](https://huggingface.co/tasks/text-generation) |
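
As a companion to the autoencoder row above, here is a minimal dense autoencoder sketch in PyTorch. The 784-dimensional input (a flattened 28x28 image), the layer sizes, and the random stand-in batch are illustrative assumptions, not code from the linked resources.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        # Encoder squeezes the input down to a small latent code.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder reconstructs the input from that code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim), nn.Sigmoid(),  # outputs in [0, 1] like pixels
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)                     # random batch standing in for real images
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
loss.backward()
opt.step()
```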
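
For the GANs row, a bare-bones adversarial training step, again only a sketch: the toy 2-D "real" distribution, the network sizes, and the hyperparameters are assumptions chosen to keep the example self-contained.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 2) * 0.5 + 2.0   # toy "real" 2-D distribution
    fake = G(torch.randn(64, 8))            # generator maps noise to samples

    # Discriminator step: push real toward 1, fake toward 0.
    # detach() stops this loss from updating the generator.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The `detach()` call is the key design choice: it keeps the discriminator's loss from backpropagating into the generator, so each network optimizes only its own objective.
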
### Module 2 - Deep Learning Based Natural Language Processing

| Topic | References |
| ----- |:---------- |
| Word Embedding | [Word Embedding and Word2Vec - StatQuest▶️](https://www.youtube.com/watch?v=viZrOnJclY0)<br>[Word2Vec, GloVe, FastText - CodeEmporium▶️](https://www.youtube.com/watch?v=9S0-OC4LFNo&t=386s&pp=ygUQV29yZCBFbWJlZGRpbmdzIA%3D%3D) |
| Representation Learning | [Representation Learning Complete Guide - AIM🧾](https://analyticsindiamag.com/a-comprehensive-guide-to-representation-learning-for-beginners/#:~:text=Representation%20learning%20is%20a%20class,them%20to%20a%20given%20activity.) |
| Sequence-to-Sequence Models, Encoder-Decoder Architectures | [Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks - StatQuest▶️](https://www.youtube.com/watch?v=L8HKweZIOmg&pp=ygUbU2VxdWVuY2UtdG8tU2VxdWVuY2UgTW9kZWxz)<br>[Encoder-Decoder Seq2Seq Models - Kriz Moses🧾](https://medium.com/analytics-vidhya/encoder-decoder-seq2seq-models-clearly-explained-c34186fbf49b) |
| seq2seq with Attention | [Sequence to Sequence (seq2seq) and Attention - Lena Voita🧾](https://lena-voita.github.io/nlp_course/seq2seq_and_attention.html)<br>[Attention for Neural Networks - StatQuest▶️](https://www.youtube.com/watch?v=PSs6nxngL6k&pp=ygUsU2VxdWVuY2UgdG8gU2VxdWVuY2UgKHNlcTJzZXEpIGFuZCBBdHRlbnRpb24%3D) |
| Self Attention, Transformers (sketch after this table) | [Introduction to Transformers - Andrej Karpathy▶️](https://www.youtube.com/watch?v=XfpMkf4rD6E)<br>[Attention for Neural Networks - StatQuest▶️](https://www.youtube.com/watch?v=PSs6nxngL6k)<br>[Self attention - H2O.ai🧾](https://h2o.ai/wiki/self-attention/)<br>[What are Transformer Models and how do they work? - Serrano.Academy▶️](https://www.youtube.com/watch?v=qaWMOYf4ri8) |
| Self-Supervised Learning | [Self-Supervised Learning: The Dark Matter of Intelligence - Yannic Kilcher▶️](https://www.youtube.com/watch?v=Ag1bw8MfHGQ)<br>[Self-Supervised Learning and Its Applications - neptune.ai🧾](https://neptune.ai/blog/self-supervised-learning) |
| Advanced NLP | [Stanford CS224N: NLP with Deep Learning▶️](https://www.youtube.com/watch?v=rmVRLeJRkl4&list=PLoROMvodv4rMFqRtEuo6SGjY4XbRIVRd4&pp=iAQB)<br>[Natural Language Processing: Advance Techniques🧾](https://medium.com/analytics-vidhya/natural-language-processing-advance-techniques-in-depth-analysis-b67bca5db432) |
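
To ground the "Self Attention, Transformers" row, here is a from-scratch sketch of single-head scaled dot-product self-attention, the core operation of the Transformer; the model dimension and batch shapes are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        # Learned projections for queries, keys, and values.
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):                    # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Similarity of every token to every other token, scaled by sqrt(d).
        scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
        weights = torch.softmax(scores, dim=-1)  # each row sums to 1
        return weights @ v                       # weighted mix of the values

x = torch.randn(2, 10, 64)          # batch of 2 sequences, 10 tokens each
print(SelfAttention()(x).shape)     # torch.Size([2, 10, 64])
```

A full Transformer block wraps this in multiple heads, residual connections, layer normalization, and a feed-forward network, which is what the videos above build out step by step.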