Foundations
- A grand unified theory of the AI hype cycle
- Foundation model
- Model context protocol
- Model selection
- Quantization in LLMs
Prompt engineering
- Adversarial prompting
- Easy prompt engineering for business use and mitigating risks in LLMs
- How to talk to ChatGPT effectively
- Journey of thought prompting
Model training & fine-tuning
- Exploring machine learning approaches for fine-tuning Llama models
- RLHF with Open Assistant
- Reinforcement learning
- Reward model
- Proximal policy optimization
- Q learning
Retrieval & caching
- Caching with RAG systems
- Chunking strategies to overcome context limitations in LLMs
- Dealing with long-term memory in AI chatbots
- Hybrid search
- RAPTOR LLM retrieval
- Re-ranking in RAG
- LLM query caching
- Selecting a vector database for LLMs
- Multimodal RAG
- Working around OpenAI's token limit with LangChain
- Working with LangChain document loaders
Evaluation & metrics
- Evaluation guidelines for LLM applications
- LLM as a judge
- Feedback mechanism
- Logs pillar
- Metrics pillar
- Observability in AI platforms
- Traces pillar
- Thumbs up and thumbs down pattern
AI agents & workflows
- Building agent supervisors to generate insights
- Multi-agent collaboration for task completion
- ReAct (Reason + Act) in LLMs
- ReWOO in LLMs
- Function calling
- Supervisor AI agents