Posts

How to Use MLflow for Machine Learning Experiments

MLflow is an open-source platform designed to manage the entire machine learning lifecycle. It helps researchers and engineers track experiments, package models, manage versions, and deploy models efficiently. If you train multiple models, perform hyperparameter tuning, or compare architectures, MLflow becomes extremely valuable because it lets you log and visualize experiments automatically.

1. What MLflow Is Used For

MLflow contains four major components:

| Component | Purpose |
| --- | --- |
| Tracking | Log experiments, metrics, parameters, and artifacts |
| Projects | Package ML code for reproducible execution |
| Models | Standard format to store and share ML models |
| Model Registry | Manage model versions and deployment stages |

Most users start with MLflow Tracking because it is the easiest way to monitor experiments.

2. Installing MLflow

```
pip install mlflow
```

To start the MLflow user interface:

```
mlflow ui
```

...

SQL Foundations, Database Design & Relational Algebra

📚 Lesson 01 — Databases & Data Foundations
From Relational Algebra to SQL Mastery

Understand the theory first. Then every SQL query will make perfect sense.

📋 What we cover

Part 1 — Relational Algebra: the theory behind SQL
- Selection (σ), Projection (π), Rename (ρ)
- Set operations: Union, Intersection, Difference, Cartesian Product
- Join (⋈) and the RA → SQL mapping

Part 2 — SQL, Step by Step
- Step 1 — SELECT & FROM: choosing columns
- Step 2 — WHERE: filtering rows
- Step 3 — ORDER BY & LIMIT: sorting and paging
- Step 4 — JOINs: combining tables
- Step 5 — Aggregation functions: COUNT, SUM, AVG, MIN, MAX
- Step 6 — GROUP BY & HAVING

- Step 7 — Nested Queries (Subqueries & CTEs)

Part 3 — Database Design & Normalization

🧪 Practice Exercises on...
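The SQL steps above can be tried end to end with nothing but Python's built-in `sqlite3` module. This is a self-contained sketch; the `students` and `courses` tables are hypothetical examples, not part of the lesson's own dataset.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy schema: each student is enrolled in one course
cur.executescript("""
    CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, course_id INTEGER);
    CREATE TABLE courses  (id INTEGER PRIMARY KEY, title TEXT);
    INSERT INTO courses  VALUES (1, 'Databases'), (2, 'Algorithms');
    INSERT INTO students VALUES (1, 'Ada', 1), (2, 'Alan', 1), (3, 'Grace', 2);
""")

# Steps 1-4: SELECT, WHERE, ORDER BY, JOIN
cur.execute("""
    SELECT s.name, c.title
    FROM students s
    JOIN courses c ON c.id = s.course_id
    WHERE c.title = 'Databases'
    ORDER BY s.name
""")
print(cur.fetchall())   # → [('Ada', 'Databases'), ('Alan', 'Databases')]

# Steps 5-6: aggregation with GROUP BY and HAVING
cur.execute("""
    SELECT c.title, COUNT(*) AS n
    FROM students s
    JOIN courses c ON c.id = s.course_id
    GROUP BY c.title
    HAVING COUNT(*) > 1
""")
print(cur.fetchall())   # → [('Databases', 2)]
```

The JOIN query is the SQL counterpart of the relational-algebra expression π(name, title)(σ(title='Databases')(students ⋈ courses)).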

Recommender Systems · Item-Item Models · Conceptual Analysis

SLIM · GL-SLIM · EASE — Conceptual Comparison

SLIM, GL-SLIM & EASE
A conceptual comparison of three item-item collaborative filtering models — how they think about the same problem differently

Abstract

All three models answer the same question: given a user's interaction history, which items should we recommend? They all do it by learning an item-item weight matrix W such that a user's predicted preference vector is X·W. Yet they arrive at radically different solutions — one iterates with gradient descent over thousands of steps, one solves a single linear system in seconds, and one sits between both worlds by adding group-aware local models on top. Understanding why they differ is more useful than memorising their equations.

§1 The Shared Foundation

Every model in this family makes the same fundamental assumption:...
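To make the "solves a single linear system in seconds" claim concrete, here is a minimal NumPy sketch of EASE's closed-form solve. The tiny binary interaction matrix X (users × items) and the regularisation strength `lam` are illustrative values, not from the article; the formula itself is the standard EASE solution W = −P/diag(P) with a zero diagonal, where P is the inverse of the regularised Gram matrix.

```python
import numpy as np

# Toy users-by-items interaction matrix (illustrative)
X = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
lam = 10.0  # L2 regularisation strength (illustrative)

G = X.T @ X + lam * np.eye(X.shape[1])  # regularised item-item Gram matrix
P = np.linalg.inv(G)
W = -P / np.diag(P)                     # W[i, j] = -P[i, j] / P[j, j]
np.fill_diagonal(W, 0.0)                # zero diagonal: no item predicts itself

scores = X @ W                          # predicted preference vectors X·W
```

No gradient descent, no iterations: one matrix inversion yields the item-item weights, which is exactly the contrast with SLIM's thousands of optimisation steps.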