Category: Article

  • Inefficiencies in Markets and Evolution

Q: Are there any ideas common to the Efficient Market Hypothesis and the Red Queen Hypothesis? A: Let me explore the connections between the Efficient Market Hypothesis (EMH) and the Red Queen Hypothesis by analyzing their core principles. The Efficient Market Hypothesis, primarily from economic theory, suggests that financial markets are informationally efficient, meaning stock prices reflect…

  • LLM-as-a-Judge for AI Systems

Introduction · Common Patterns of LLM-as-a-Judge Method · Basic · Evaluating Judge Model · Improving Judge Performance · Scaling Judgments · Closing · References

  • Piecewise Linear Curves in PyTorch

In this blog, I will train a simple piecewise linear curve on dummy data using PyTorch. But first, why a piecewise linear curve? PWL curves are a set of linear equations joined at common points. They allow you to mimic any non-linear curve, and their simplicity helps you explain the predictions. Moreover, they can be…

  • Evolution of Information Consumption on the Internet

    I have been wondering how consumption on the internet has changed. The internet started in 1991, and since then, it has gone through multiple revolutions. From static pages -> dynamic pages, authoritative content -> user-generated content, text -> visual, desktop -> mobile, browser -> apps. Each of these revolutions has impacted and increased the adoption…

  • On Preference Optimization and DPO

Introduction Training with preference data has allowed large language models (LLMs) to be optimized for specific qualities such as trust, safety, and harmlessness. Preference optimization is the process of using this data to enhance LLMs. This method is particularly useful for tuning a model to emphasize certain features, or for training scenarios where relative feedback…

  • Creating a Tiny Vector Database, Part 1

    [medium discussion] Introduction Vector databases allow you to search for approximate nearest neighbors from a large set of vectors. They provide an alternative to brute force nearest neighbor searches at the cost of accuracy and additional memory. Previously, many advancements in NLP and deep learning were limited to small scales due to the lack of…

  • Keeping Up with RAGs: Recent Developments and Optimization Techniques

[medium discussion] RAG Basics [pipeline diagram: Indexing (Documents → Chunking → Chunks → Embeddings → Vector DB) and Inference (Query → Embedding → nn scan → Retrieval → Prompt + Passages → LLM → Generation → Response)] Chunking · Embedding Model Fine-tuning · Embedding…

  • External Knowledge in LLMs

[medium and substack discussion] LLMs are trained on a finite set of data. While they can answer a wide variety of questions across multiple domains, they often fail to answer questions that are highly domain-specific and outside their training context. Additionally, training LLMs from scratch for any new information is not possible, unlike traditional models with…

  • Configuring WordPress for Technical Blog

After a lot of thinking and weighing pros and cons, I have decided to use WordPress for my personal blog and website solution. I explored other solutions like Notion, Hugo+PaperMod, and Obsidian+Jekyll. Andrej Karpathy posted about what an ideal blogging solution might look like. While I agree with these requirements, I would also like a…

  • Writing Better Prompts

In a world where everyone can be a programmer through natural language, the art of effective communication with Large Language Models (LLMs) becomes crucial. While machines comprehend plain English, there are nuances to crafting prompts tailored to a model’s interpretative abilities. This blog explores the emerging field of “Prompt Engineering,” delving into key methods for designing…