From Words to Vectors: A Dive into Embedding Model Taxonomy

Embedding models are foundational in modern NLP, turning raw text into numerical vectors that preserve semantic meaning. These representations power everything from semantic search to Retrieval-Augmented Generation and prompt engineering for LLM agents. With growing demand for domain-specific applications, understanding which embedding model best fits your system is more important than ever. In modern NLP, a text embedding is a vector that represents a piece of text in a mathematical space. The magic of embeddings is that they encode semantic meaning: texts with similar meanings end up with vectors that are close together. For example, an embedding model might place “How to change a tire” near “Steps to fix a flat tire” in its vector space, even though the wording is different. This property makes embedding models incredibly useful for tasks like search, clustering, or recommendation, where we care about semantic similarity rather than exact keyword matches. By converting text into vectors, embedding models allow computers to measure meaning and relevance via distances in vector space. ...
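
As a rough illustration of that “closeness”, here is a minimal sketch that embeds a few sentences and compares them with cosine similarity. The library (sentence-transformers) and model name (all-MiniLM-L6-v2) are illustrative assumptions, not choices prescribed by the post:

```python
# Minimal sketch: embed a few sentences and compare them by cosine similarity.
# Library and model choice are illustrative assumptions, not from the post.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "How to change a tire",
    "Steps to fix a flat tire",
    "Best restaurants in Barcelona",
]

# Encode each sentence into a dense, unit-length vector.
embeddings = model.encode(sentences, normalize_embeddings=True)

# Cosine similarity: semantically related sentences score higher.
similarities = util.cos_sim(embeddings, embeddings)
print(similarities[0, 1])  # the two tire questions: relatively high
print(similarities[0, 2])  # unrelated topics: noticeably lower
```

The same distance-based comparison underlies semantic search and retrieval: documents are embedded once, and queries are matched against them by vector proximity rather than keyword overlap.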

Date: October 25, 2025 · Estimated Reading Time: 18 min · Author: Oriol Alàs Cercós

The Generative Trilemma: A quick overview

Generative models are a class of machine learning models that learn a representation of the data they are trained on and model that data distribution itself. Ideally, generative models should satisfy the following key requirements in a real environment. High-quality samples are those that capture the underlying patterns and structures present in the data, making them indistinguishable from real data to human observers. Fast sampling concerns the efficiency of image generation and the computational overhead that generative models can incur. Mode coverage/diversity describes how well the model generates the full range of modes and diverse patterns present in the training data. Fig. 1: The Generative Learning Trilemma. ...

Date: July 10, 2025 · Estimated Reading Time: 15 min · Author: Oriol Alàs Cercós

Introduction to Attention Mechanism and Transformers

Transformers have demonstrated excellent capabilities and overcome challenges in domains such as NLP, text-to-image generation, and image completion, given large datasets, large model sizes, and enough compute. Talking about transformers nowadays is as casual as talking about CNNs, MLPs, or linear regression. Why not take a glance through this state-of-the-art architecture? In this post, we’ll introduce the Sequence-to-Sequence (Seq2Seq) paradigm, explore the attention mechanism, and provide a detailed, step-by-step explanation of the components that make up transformer architectures. ...
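
As a preview of the attention mechanism covered in the post, here is a minimal sketch of scaled dot-product attention. The NumPy implementation and toy shapes are assumptions for illustration, not code from the post itself:

```python
# Minimal sketch of scaled dot-product attention:
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# Toy inputs and shapes below are illustrative assumptions.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    # Similarity of each query with each key, scaled for numerical stability.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Toy example: 3 tokens, model dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```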

Date: February 17, 2025 · Estimated Reading Time: 10 min · Author: Oriol Alàs Cercós

Computer Vision

Contains posts related to traditional computer vision techniques

Date: October 25, 2024 · Estimated Reading Time: 0 min · Author: Oriol Alàs Cercós