Category: Article
-
Inefficiencies in Markets and Evolution
Reading Time: 6 minutes
Q: Are there any ideas common between the Efficient Market Hypothesis and the Red Queen Hypothesis? A: Let me explore the connections between the Efficient Market Hypothesis (EMH) and the Red Queen Hypothesis by analyzing their core principles. The Efficient Market Hypothesis, primarily from economic theory, suggests that financial markets are informationally efficient, meaning stock prices reflect…
-
LLM-as-a-Judge for AI Systems
Reading Time: 10 minutes
Contents: Introduction · Common Patterns of LLM-as-a-Judge Method · Basic Evaluating Judge Model · Improving Judge Performance · Scaling Judgments · Closing · References
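Not from the post itself, but a minimal sketch of the basic judge pattern the contents point to: a judge model scores a candidate answer against a rubric. The model name, rubric, and use of the openai client are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical rubric; a real judge prompt would be tuned to the task.
JUDGE_PROMPT = """You are an impartial judge. Rate the answer below
on a 1-5 scale for factual accuracy and helpfulness.
Question: {question}
Answer: {answer}
Reply with only the integer score."""

def judge(question: str, answer: str) -> int:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model, not from the post
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0,  # deterministic scoring
    )
    # Assumes the model complies with "integer only"; parsing may need guards.
    return int(resp.choices[0].message.content.strip())
```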
-
Piecewise Linear Curves in PyTorch
Reading Time: 8 minutes
In this blog, I will train a simple piecewise linear curve on dummy data using PyTorch. But first, why a piecewise linear curve? PWL curves are sets of linear segments joined at common points. They can mimic any nonlinear curve, and their simplicity makes the predictions easy to explain. Moreover, they can be…
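For a concrete picture, here is one common PWL parameterization, sketched under the assumption that the post fits something similar: a learnable base slope plus ReLU hinges at a set of knots, trained with a standard PyTorch loop on dummy data.

```python
import torch
import torch.nn as nn

class PiecewiseLinear(nn.Module):
    # y = bias + w0*x + sum_k w_k * relu(x - c_k): linear between knots c_k
    def __init__(self, n_knots: int = 8):
        super().__init__()
        self.knots = nn.Parameter(torch.linspace(-1.0, 1.0, n_knots))  # breakpoints
        self.slopes = nn.Parameter(torch.zeros(n_knots))               # slope change at each knot
        self.w0 = nn.Parameter(torch.zeros(1))                         # initial slope
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hinge = torch.relu(x - self.knots)                  # (batch, n_knots)
        return self.bias + self.w0 * x + hinge @ self.slopes.unsqueeze(-1)

# Fit on dummy data (an assumed nonlinear target, for illustration)
model = PiecewiseLinear()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.linspace(-1, 1, 256).unsqueeze(-1)
y = torch.sin(3 * x)
for _ in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```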
-
On Preference Optimization and DPO
Reading Time: 6 minutes
Introduction Training with preference data has allowed large language models (LLMs) to be optimized for specific qualities such as trust, safety, and harmlessness. Preference optimization is the process of using this data to enhance LLMs. This method is particularly useful for tuning a model to emphasize certain features or for training scenarios where relative feedback…
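As a reference point, the DPO objective itself is compact. Below is a sketch over precomputed log-probabilities; the tensor names are illustrative, and a real setup would compute them from the policy and a frozen reference model over chosen/rejected responses.

```python
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Implicit reward margins relative to the reference model
    chosen_rewards = beta * (logp_chosen - ref_logp_chosen)
    rejected_rewards = beta * (logp_rejected - ref_logp_rejected)
    # Maximize the probability that the chosen response beats the rejected one
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```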
-
Keeping Up with RAGs: Recent Developments and Optimization Techniques
Reading Time: 10 minutes
[medium discussion] RAG Basics. [Diagram: the RAG pipeline in two stages. Indexing: Documents → Chunking → Chunks → Embeddings → write to Vector DB. Inference: Query → Embedding → nn scan of Vector DB → Retrieval → Prompt + Passages → LLM → Generation → Response.] Chunking · Embedding Model Fine-tuning · Embedding…
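The two stages in the diagram reduce to a few lines. Here is a bare-bones sketch, assuming sentence-transformers for embeddings and a brute-force nearest-neighbour scan in place of a real vector DB; the model name and chunk size are illustrative.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def index(documents, chunk_size=200):
    # Indexing stage: documents -> chunks -> embeddings
    chunks = [d[i:i + chunk_size] for d in documents
              for i in range(0, len(d), chunk_size)]
    return chunks, embedder.encode(chunks, normalize_embeddings=True)

def retrieve(query, chunks, vectors, k=3):
    # Inference stage: embed the query, nn scan, return top-k passages
    q = embedder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(vectors @ q)[::-1][:k]  # cosine similarity via dot product
    return [chunks[i] for i in top]

def build_prompt(query, passages):
    context = "\n".join(passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```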
-
External Knowledge in LLMs
Reading Time: 10 minutes
[medium and substack discussion] LLMs are trained on a finite set of data. While they can answer a wide variety of questions across multiple domains, they often fail on questions that are highly domain-specific and outside their training context. Additionally, training LLMs from scratch for any new information is not feasible the way it is for traditional models with…
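The alternative the post points toward, injecting external knowledge at inference time instead of retraining, looks roughly like this; the client, model name, and facts are illustrative assumptions, not the post's code.

```python
from openai import OpenAI

client = OpenAI()

def answer(question, external_facts=None):
    # Prepend domain-specific facts so the model can ground its answer
    prompt = question
    if external_facts:
        prompt = "Use these facts:\n" + "\n".join(external_facts) + "\n\n" + question
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```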
-
Configuring WordPress for Technical Blog
Reading Time: 5 minutes
After a lot of thinking and weighing pros and cons, I have decided to use WordPress for my personal blog and website solution. I have explored other solutions like Notion, Hugo+PaperMod, and Obsidian+Jekyll. Andrej Karpathy posted about what an ideal blogging solution might look like. While I agree with these requirements, I would also like a…
-
Writing Better Prompts
Reading Time: 12 minutes
In a world where everyone can be a programmer through natural language, the art of effective communication with Large Language Models (LLMs) becomes crucial. While these models comprehend plain English, crafting prompts tailored to a model’s interpretative abilities is a nuanced skill. This blog explores the emerging field of “Prompt Engineering,” delving into key methods for designing…
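As a taste of one standard method such posts cover, here is an illustrative few-shot prompt; the task and examples are assumed for illustration, not taken from the post.

```python
# Few-shot prompting: show the model the input->output pattern before the query
FEW_SHOT_PROMPT = """Classify the sentiment as positive or negative.

Review: "The battery lasts all day." -> positive
Review: "Screen cracked within a week." -> negative
Review: "{review}" ->"""

print(FEW_SHOT_PROMPT.format(review="Fast shipping and great quality."))
```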