Kamil Józwik

All articles

Fine-tuning LLMs

Fine-tuning lets you adapt generalist models into specialists, but is it always the best approach?

LLM quantization

Quantization is a model compression technique that reduces the size and computational requirements of LLMs.

Understand LLM benchmarks

A practical guide to finally understanding the most popular LLM benchmarks.

Base and instruction-tuned models

What is the difference between base and instruction-tuned models?

Understand parameters in LLMs

Parameters are a key concept in LLMs. This article explains the difference between total and activated parameters.

AI agents

What are AI agents, how do they work, and how do they differ from LLMs and workflows?

Embeddings and vector stores

Two long-established but very powerful concepts that form the foundation of modern AI applications.

Model Context Protocol (MCP)

Unlocking AI's potential with the Model Context Protocol (MCP).

Large Language Models 101

A (not only) software developer's guide to Large Language Models (LLMs).

What is Hugging Face?

An overview of Hugging Face 🤗, its ecosystem (libraries, tools, etc.), and how to leverage it for AI development.

Model distillation

Distillation is a technique for creating smaller, faster, and more efficient AI models that inherit the wisdom of their larger counterparts. Why and how to use it?

What are llms.txt files?

AI and LLMs are changing the way we interact with the web. The llms.txt file is a new proposed standard that helps LLMs understand website content more effectively.

GitHub AI Tools

AI is everywhere. Since the majority of companies are already using (and paying for) GitHub, it is worth knowing what AI tools GitHub offers to developers.