Open-Source Models: Llama, Mistral & Running Locally and on the Cloud
Self-hosting LLMs on local machines and on EC2/GKE with Ollama — when open-source beats API services (Day 15)
What vectors are, why semantic search matters, and how Pinecone/Weaviate/pgvector fit in (Day 8)
Why AI lies confidently and how to build guardrails as an infrastructure problem (Day 7)
How to think about context limits, pricing models, and request optimization (Day 6)
Comparing model APIs like you compare managed services (Day 5)
System prompts, few-shot examples, chain-of-thought — the new "config files" of AI (Day 4)