Large Language Models
My latest articles
Fine-tuning LLMs
Fine-tuning lets you adapt generalist models into specialists, boosting their effectiveness for your unique needs. But is it always the best approach?
LLM quantization
Quantization is a model compression technique that reduces the size and computational requirements of LLMs. This guide explains how quantization works, its advantages and disadvantages, and practical tips for software developers.
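To make the idea concrete, here is a minimal sketch of symmetric int8 quantization: weights are mapped to small integers plus a scale factor, and approximately reconstructed at inference time. This is illustrative only; production LLM quantizers (GPTQ, AWQ, bitsandbytes, etc.) are far more sophisticated.

```python
def quantize_int8(weights):
    """Map float weights to int8 values plus one scale factor per tensor."""
    scale = max(abs(w) for w in weights) / 127  # 127 = int8 max magnitude
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximately reconstruct the original floats."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.03, 0.9]
q, scale = quantize_int8(weights)   # ints fit in 1 byte instead of 4
restored = dequantize(q, scale)     # close to the original weights
```

The storage win is the point: each weight shrinks from 32 bits to 8, at the cost of a small rounding error that grows with the dynamic range of the tensor.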
Understand LLM benchmarks
A practical guide to finally understanding the most popular LLM benchmarks
Base and instruction-tuned models
What is the difference between base and instruction-tuned models?
Understand parameters in LLMs
Parameters are a key concept in LLMs. This article explains the difference between total and activated parameters.
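The total/activated distinction is easiest to see with arithmetic. The toy numbers below are invented for illustration: a Mixture-of-Experts layer stores every expert's weights (total parameters) but routes each token through only a few of them (activated parameters).

```python
# Hypothetical Mixture-of-Experts sizing (all numbers made up for the example).
n_experts = 8                # experts stored in the model
experts_per_token = 2        # experts the router picks for each token
params_per_expert = 1_000_000
shared_params = 500_000      # attention, embeddings, router, etc.

total_params = shared_params + n_experts * params_per_expert
activated_params = shared_params + experts_per_token * params_per_expert

# Memory must hold all 8.5M parameters, but each token's forward pass
# only computes with 2.5M of them.
```

This is why an MoE model can advertise a large total parameter count while running with the compute cost of a much smaller dense model.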
Web development news
Frimousse: Lightweight, Composable Emoji Picker for React
A new lightweight, unstyled, and accessible React component for picking emojis, designed to only display emojis supported on the user's device.
Understanding the JavaScript Float16Array Type
Exploration and explanation of the 16-bit floating point array type (Float16Array) available in JavaScript environments.
pnpm 10.9 Released with JSR Package Support
Version 10.9 of the efficient package manager pnpm has been released, notably adding support for installing packages from the JSR (JavaScript Registry).
Microsoft Edge Seeks Feedback on New console.context() Method
The Microsoft Edge team is proposing a new console.context() method to improve contextual logging within browser developer tools and is seeking developer feedback.
ECMAScript Records and Tuples Proposal Withdrawn
The proposal for adding deeply immutable Records and Tuples data structures to JavaScript has been withdrawn due to lack of consensus in TC39.