Loss functions and their final-layer activations

When taking our first steps in deep learning, we grasp the idea of using a neural network to learn a function that maps data to other data. We are often told that neural networks are a powerful tool in machine learning because of their non-linearity and their ability to learn complex functions from data by minimizing some loss function. In this post, we will explore how the final-layer activation depends on the loss function of our problem. ...
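As a minimal sketch of the pairing the post explores (the logits and label below are made-up toy values): for multi-class classification, a softmax final activation pairs naturally with the cross-entropy loss, and the gradient of that loss with respect to the logits reduces to softmax(z) minus the one-hot label.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(z, y):
    """Cross-entropy of logits z against integer class label y."""
    return -np.log(softmax(z)[y])

z = np.array([2.0, -1.0, 0.5])      # toy raw final-layer outputs (logits)
y = 0                               # toy true class index

p = softmax(z)
grad = p - np.eye(3)[y]             # gradient of the loss w.r.t. the logits

print(round(cross_entropy(z, y), 4))
```

Note that the gradient components sum to zero, since the softmax probabilities and the one-hot label each sum to one; this clean gradient is one reason the softmax/cross-entropy pairing is the default for classification.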

Date: March 20, 2026 · Estimated Reading Time: 11 min · Author: Oriol Alàs Cercós

Variational AutoEncoders (VAE) for Tabular Data

Today's post is going to be a bit different. We have already talked about Variational Autoencoders (VAE) in the past, but today we are going to see how to implement one from scratch, train it on a dataset, and see how it behaves with tabular data. Yes, VAEs can be used for tabular data as well. To do so, we will use the CRISP-DM framework to guide us through the process. ...

Date: December 21, 2025 · Estimated Reading Time: 20 min · Author: Oriol Alàs Cercós

From Words to Vectors: A Dive into Embedding Model Taxonomy

Embedding models are foundational in modern NLP, turning raw text into numerical vectors that preserve semantic meaning. These representations power everything from semantic search to Retrieval-Augmented Generation and Prompt Engineering for LLM Agents. With growing demand for domain-specific applications, understanding which embedding model best fits your system is more important than ever. Introduction In modern NLP, a text embedding is a vector that represents a piece of text in a mathematical space. The magic of embeddings is that they encode semantic meaning: texts with similar meaning end up with vectors that are close together. For example, an embedding model might place “How to change a tire” near “Steps to fix a flat tire” in its vector space, even though the wording is different. This property makes embedding models incredibly useful for tasks like search, clustering or recommendation, where we care about semantic similarity rather than exact keyword matches. By converting text into vectors, embedding models allow computers to measure meaning and relevance via distances in vector space. ...

Date: October 25, 2025 · Estimated Reading Time: 18 min · Author: Oriol Alàs Cercós

The Generative Trilemma: A quick overview

Generative models are a class of machine learning that learn a representation of the data trained on and they model the data itself. Ideally, generative models should satisfy the following key requirements in a real environment: High quality samples refers to those samples that captures the underlying patterns and structures present in the data making them indistinguishable from human observers. Fast Sampling is about the efficiency of image generation and the computational overhead that can cause generative models. Mode Coverage/Diversity points out how the model is able to generate a full range of mods and diverse patterns present in the training data Fig. 1. The Generative Learning Trilemma ...

Date: July 10, 2025 · Estimated Reading Time: 15 min · Author: Oriol Alàs Cercós

Introduction to Attention Mechanism and Transformers

Transformers have demonstrated excellent capabilities and they overcome challenges such NLP, Text-To-Image Generation or Image Completion with large datasets, great model size and enough compute. Talking about transformers nowadays is as casual as talking about CNNs, MLPs or Linear Regressions. Why not take a glance through this state-of-the-art architecture? In this post, we’ll introduce the Sequence-to-Sequence (Seq2Seq) paradigm, explore the attention mechanism, and provide a detailed, step-by-step explanation of the components that make up transformer architectures. ...

Date: February 17, 2025 · Estimated Reading Time: 10 min · Author: Oriol Alàs Cercós

Introduction to ML & DL

Overview Introductory workshop on Machine Learning and Deep Learning fundamentals designed for hackathon participants. The session covers core concepts, practical algorithms, and hands-on implementation using popular frameworks. Key Topics Supervised vs unsupervised learning Neural network fundamentals Common ML/DL architectures Practical tools and frameworks (scikit-learn, PyTorch, TensorFlow) Tips for rapid prototyping in hackathons Event HackEPS is the largest hackathon in Catalonia, organized by students at the University of Lleida, bringing together hundreds of participants to build innovative projects in 24 hours.

Date: November 1, 2022 · Estimated Reading Time: 1 min · Author: Oriol Alàs Cercós