80 years of the
AI revolution
From the first artificial neuron to autonomous AI agents. An interactive story of the breakthroughs, winters and explosions that shaped our era.
Origins
The birth of artificial intelligence as an idea — from the first model of a neuron to a program impersonating a psychotherapist. Dreamers, mathematicians, and visionaries lay the foundations.
The First AI Winter
Disillusionment follows broken promises. Governments pull funding, scientists abandon the field. AI enters hibernation.
Expert Systems
AI returns in the form of expert systems — rule-based "if-then" programs. Corporations invest billions; Japan launches a mega-project.
The Second AI Winter
Expert systems prove brittle and expensive to maintain. Japan abandons its mega-project. But in the quiet, convolutional networks are being born...
Machine Learning
A quiet renaissance: statistics replaces symbolism. Deep Blue defeats Kasparov at chess. The internet supplies the data. Hinton thaws deep learning back to life.
Deep Learning
Deep neural networks explode onto the scene. AlexNet, AlphaGo, the Transformer — AI beats humans in one domain after another. GPUs plus big data equal revolution.
The LLM Era
Large language models change the world. ChatGPT reaches 100 million users. Claude, Gemini, Llama — the race accelerates. AI stops being an academic curiosity.
AI-Native
AI stops being a tool — it becomes a foundation. Multi-agent systems, Claude Code, vibecoding. Those who do not adapt to the AI-native shift get left behind.
Who led the pack? 2020–2026
Five players, dozens of models, a fight for the top spot in AI. Stars mark the models that were the best in the world at the moment of release.
How has AI intelligence grown? 2020–2026
Estimated IQ equivalents for AI models based on benchmark results (MMLU, reasoning, coding, mathematics). AI does not possess IQ in the psychometric sense — these values are indicative only and serve purely to illustrate the pace of progress. Source: academic benchmarks, Arena, MMLU, HumanEval.
Top 15 AI models in the world
Ranking based on Arena (arena.ai) — March 2026. Arena measures user preferences in blind side-by-side comparisons, drawing on more than 6 million votes. Elo scores shift daily. The top 3 models sit within the margin of error.
| # | Model | Company | Arena Elo |
|---|---|---|---|
| 1 | Claude Opus 4.7 (thinking) | Anthropic | 1512 |
| 2 | Claude Opus 4.7 | Anthropic | 1510 |
| 3 | Gemini 3.1 Pro Preview | Google | 1502 |
| 4 | Claude Opus 4.6 (thinking) | Anthropic | 1500 |
| 5 | Grok-4.20 beta | xAI | 1493 |
| 6 | Claude Opus 4.6 | Anthropic | 1492 |
| 7 | GPT-5.4 Pro | OpenAI | 1490 |
| 8 | Gemini 3 Pro | Google | 1485 |
| 9 | GPT-5.4 (thinking) | OpenAI | 1483 |
| 10 | GLM-5 | Zhipu AI | 1478 |
| 11 | Grok-4.1 (thinking) | xAI | 1473 |
| 12 | Gemini 3 Flash | Google | 1473 |
| 13 | Dola Seed 2.0 | ByteDance | 1470 |
| 14 | GPT-5.4 | OpenAI | 1468 |
| 15 | Claude Sonnet 4.6 | Anthropic | 1460 |
Bielik — the Polish eagle among AI models
The first Polish large language model, built by volunteers. A story that proves you don't need billions of dollars to build AI.
From zero to 32 languages
2022: The SpeakLeash Foundation (Spichlerz) emerges as a grassroots initiative. A group of volunteers — developers, researchers, and students — sets out to build the first genuine Polish LLM. They work evenings and weekends.
April 2024: The release of Bielik 7B v0.1 — based on Mistral-7B, trained on 70 billion Polish tokens with its own APT4 tokenizer. The name comes from the white-tailed eagle (bielik) — a symbol of Polish nature.
2024–2025: Evolution to 11B parameters (depth up-scaling), iterations from v2.0 to v2.6 with successive improvements. Bielik 11B v2.3 Instruct beats GPT-3.5 by 21% on Polish-language tasks.
July 2025: Bielik 11B v3 — a multilingual model supporting 32 European languages. The smaller variants (1.5B and 4.5B), based on Qwen2.5, compete with models 2-3× their size.
Bielik by the numbers
Bielik vs PLLuM
Bielik = community. Built by SpeakLeash volunteers with Cyfronet AGH. Open source, no state budget.
PLLuM = state. A government model (14.5M PLN), six research institutions, 12B-70B models based on Llama. Deployed in the mObywatel app and public administration.
Together they form a complementary ecosystem for Polish AI.
Fun facts
GPT does not "understand" language — it predicts the next token in a sequence.
Training GPT-4 cost an estimated ~$100 million. 80% of that cost was energy and GPUs.
The human brain has roughly 100-600 trillion synapses. The largest AI models have a few trillion parameters — still 20-100× fewer than the brain.
The NVIDIA H100 is the "new oil" — it dominates the GPU market for training AI.
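The first fun fact above can be made concrete with a toy sketch: a bigram "language model" that predicts the next token purely from counts. This is only an illustration of the next-token idea, not of GPT's actual architecture — real LLMs use a neural network over long contexts, but the objective is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows which in a
# tiny corpus, then predict by picking the most frequent follower.
corpus = "the cat sat on the mat the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # e.g. follows["the"] == {"cat": 2, "mat": 1}

def predict_next(token):
    # Return the token most often seen after `token` in the corpus.
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" — it followed "the" twice, "mat" once
```

No "understanding" is involved anywhere: the model only tracks which token tends to come next, which is exactly the point of the fun fact.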
Want to be part of this story?
MKM Labs builds with AI as the foundation. Join the companies that do not wait for the future — they create it.
Start an AI project