AI Chronicles

80 years of the AI revolution

From the first artificial neuron to autonomous AI agents: an interactive story of the breakthroughs, winters, and booms that shaped our era.

80+ years of AI history (1943–2026)
8 eras of development, from birth to AI-native
70+ breakthroughs: discoveries, models, events
1943–1969

Origins

The birth of artificial intelligence as an idea — from the first model of a neuron to a program impersonating a psychotherapist. Dreamers, mathematicians, and visionaries lay the foundations.

1970–1979

The First AI Winter

Disillusionment follows broken promises. Governments pull funding, scientists abandon the field. AI enters hibernation.

1980–1987

Expert Systems

AI returns in the form of expert systems — rule-based "if-then" programs. Corporations invest billions; Japan launches a mega-project.

1988–1992

The Second AI Winter

Expert systems prove brittle and expensive to maintain. Japan abandons its mega-project. But in the quiet, convolutional networks are being born...

1993–2010

Machine Learning

A quiet renaissance: statistics replaces symbolism. Deep Blue defeats Kasparov at chess. The internet supplies the data. Hinton thaws deep learning back to life.

2011–2019

Deep Learning

Deep neural networks explode onto the scene. AlexNet, AlphaGo, the Transformer — AI beats humans in one domain after another. GPUs plus big data equal revolution.

2020–2024

The LLM Era

Large language models change the world. ChatGPT reaches 100 million users. Claude, Gemini, Llama — the race accelerates. AI stops being an academic curiosity.

2025–2026

AI-Native

AI stops being a tool and becomes a foundation. Multi-agent systems, Claude Code, vibe coding. Those who do not adapt to the AI-native way of working get left behind.

Model race

Who led the pack? 2020–2026

Five players, dozens of models, a fight for the top spot in AI. Stars mark the models that were the best in the world at the moment of release.

[Chart: model releases by year, 2020–2026; series: Anthropic, OpenAI, Google, xAI, China; ★ = leader at release]
IQ race

How has AI intelligence grown? 2020–2026

Estimated IQ equivalents for AI models based on benchmark results (MMLU, reasoning, coding, mathematics). AI does not possess IQ in the psychometric sense — these values are indicative only and serve purely to illustrate the pace of progress. Source: academic benchmarks, Arena, MMLU, HumanEval.

[Chart: estimated IQ equivalent (80–140) by quarter, Q2 2020 – Q2 2026; reference lines at IQ 85 (below average), IQ 100 (human average), IQ 115 (above average), IQ 130 (exceptional, top 2%); series: Anthropic, OpenAI, Google, xAI, China; ★ = quarterly leader]

Sources: trackingai.org | arena.ai. Estimates based on benchmarks, February 2026.
Current ranking

Top 15 AI models in the world

Ranking based on Arena (arena.ai) — March 2026. Arena measures user preferences in blind side-by-side comparisons, drawing on more than 6 million votes. Elo scores shift daily. The top 3 models sit within the margin of error.

 #  Model                        Company     Arena Elo
 1  Claude Opus 4.7 (thinking)   Anthropic   1512
 2  Claude Opus 4.7              Anthropic   1510
 3  Gemini 3.1 Pro Preview       Google      1502
 4  Claude Opus 4.6 (thinking)   Anthropic   1500
 5  Grok-4.20 beta               xAI         1493
 6  Claude Opus 4.6              Anthropic   1492
 7  GPT-5.4 Pro                  OpenAI      1490
 8  Gemini 3 Pro                 Google      1485
 9  GPT-5.4 (thinking)           OpenAI      1483
10  GLM-5                        Zhipu AI    1478
11  Grok-4.1 (thinking)          xAI         1473
12  Gemini 3 Flash               Google      1473
13  Dola Seed 2.0                ByteDance   1470
14  GPT-5.4                      OpenAI      1468
15  Claude Sonnet 4.6            Anthropic   1460

Source: arena.ai/leaderboard | trackingai.org. Data from March 2026 (approximate).
Polish AI

Bielik — the Polish eagle among AI models

The first Polish large language model, built by volunteers. A story that proves you don't need billions of dollars to build AI.

From zero to 32 languages

2022: The SpeakLeash Foundation (Spichlerz) emerges as a grassroots initiative. A group of volunteers — developers, researchers, and students — sets out to build the first genuine Polish LLM. They work evenings and weekends.

April 2024: The release of Bielik 7B v0.1 — based on Mistral-7B, trained on 70 billion Polish tokens with its own APT4 tokenizer. The name comes from the white-tailed eagle (bielik) — a symbol of Polish nature.

2024–2025: Evolution to 11B parameters (depth up-scaling), iterations from v2.0 to v2.6 with successive improvements. Bielik 11B v2.3 Instruct outperforms GPT-3.5 by 21.2% on Polish-language tasks.

July 2025: Bielik 11B v3 — a multilingual model supporting 32 European languages. The smaller variants (1.5B and 4.5B), based on Qwen2.5, compete with models 2-3× their size.

Bielik by the numbers

Parameters: 1.5B → 11B
Training tokens: 400B
Cyfronet GPUs: 450 cards
Languages (v3): 32 European
vs GPT-3.5 (Polish): +21.2%
Cost: volunteer work

Bielik vs PLLuM

Bielik = community. Built by SpeakLeash volunteers with Cyfronet AGH. Open source, no state budget.

PLLuM = state. A government model (14.5M PLN), six research institutions, 12B-70B models based on Llama. Deployed in the mObywatel app and public administration.

Together they form a complementary ecosystem for Polish AI.

Fun facts

🧠

GPT does not "understand" language — it predicts the next token in a sequence.
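A real LLM does this with a neural network over billions of parameters, but the core idea, predicting the most likely next token from what came before, can be sketched with toy bigram counts. The corpus and function names below are purely illustrative, not how GPT is actually built:

```python
# Toy sketch (not GPT): a bigram "language model" that, like an LLM,
# only predicts a likely next token given the preceding one.
from collections import Counter, defaultdict

corpus = "the eagle flies high the eagle hunts the fish".split()

# Count which token follows which (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in the corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "eagle" — it followed "the" twice, "fish" once
```

A real model replaces these counts with learned probabilities conditioned on thousands of preceding tokens, but the output is still just a prediction, not comprehension.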

💰

Training GPT-4 cost an estimated $100 million, with roughly 80% of that going to energy and GPUs.

🔢

The human brain has roughly 100–600 trillion synapses. The largest AI models have a few trillion parameters, still 20–100× fewer than the brain.

The NVIDIA H100 is the "new oil" — it dominates the GPU market for training AI.

Want to be part of this story?

MKM Labs builds with AI as the foundation. Join the companies that do not wait for the future — they create it.

Start an AI project

Ready for AI transformation?

Every conversation is free and without obligation. Tell us about your project — we'll respond within hours.

Start project →