Kamil Józwik

Hi, my name is Kamil.

Welcome to my website

Large Language Models

My latest articles

Fine-tuning LLMs

Fine-tuning lets you adapt generalist models into specialists, boosting their effectiveness for your unique needs. But is it always the best approach?

LLM quantization

Quantization is a model compression technique that reduces the size and computational requirements of LLMs. This guide explains how quantization works, covers its advantages and disadvantages, and offers practical tips for software developers.

Understand LLM benchmarks

A practical guide to finally understanding the most popular LLM benchmarks

Base and instruction-tuned models

What is the difference between base and instruction-tuned models?

Understand parameters in LLMs

Parameters are a key concept in LLMs. This article explains the difference between total and activated parameters.

See all articles

Web development news