Journal

all →
May 9, 2026 ESSAY
When Algorithms Carve: Classification at Machine Speed
A sequel on classification, this time at industrial scale: the labels of supervised learning, the impossibility theorems of algorithmic fairness, the latent geometries of foundation models, the… read → 14 min
#AI #classification #machine-learning #fairness #interpretability
May 8, 2026 ESSAY
Carving at the Joints: On Segmentation, Its Limits, Its Use
A tour through the literature on classification, from Plato and Mill to Hacking, Foucault, Bowker and Star, on why we cannot think without categories and why the line drawn is never innocent. In… read → 14 min
#AI #classification #epistemology #philosophy #social-theory
Andler on Problem and Situation
Reading notes on chapters 1.4, 6.5, 7.4, and 8.1-8.2 of Daniel Andler's Intelligence artificielle, intelligence humaine. I have been making my way through Daniel Andler's Intelligence artificielle,… read → 16 min LONG READ
— Daniel Andler, *Intelligence artificielle, intelligence humaine* (2023), chapters 1.4, 6.5, 7.4, 8.1-8.2 #AI #philosophy #cognitive-science #epistemology #verification #books

Deep Dives

all →
Long-form essays and multi-part series: strategy, finance, technology.
Current series
Reading Nvidia: On strategy, defensibility, and capital in the AI era Moats, tokens-per-watt, and capex—read through Nvidia and the AI build-out. · Series started Apr 2026 · 4 of 7 sections published
Part 0 · Series overview 6 min Apr 2026
Part 2 · The Moat That Cannot Be Coded 24 min LONG READ May 2026
Part 3 · Pareto Frontiers and Lock-In 25 min LONG READ May 2026
Part 4 · The Return of the Real World upcoming
Part 5 · The Job and the Task upcoming
Part 6 · A Note on China upcoming
Part 7 · Drawing the AI Stack upcoming
Series
AI × Energy: Powering the AI Revolution
Grids, GPUs, and the physics of running intelligence at planetary scale.
Watts, wires, and who controls the stack—not the headline model. · Series started Mar 2026 · 4 of 5 sections published
Part 0 · Introduction 7 min Mar 2026
Part 1 · Energy and Civilization (Smil) 17 min LONG READ Mar 2026
Part 2 · Power Density (Smil) 18 min LONG READ Apr 2026
Part 3 · The 84% Gap (Subran) 24 min LONG READ Apr 2026
Part 4 · The New Map (Yergin) upcoming
Part 5 · The Synthesis upcoming
Recent standalones
AI Moats Are Different Oct 2025 Why traditional moats fail in AI, and a four-level framework for building defensible positions: technical foundation, product depth, market position, sustainability. · 16 min LONG READ
Market Sizing for Founders Oct 2025 A practical framework for sizing markets when you're building something new. · 23 min LONG READ
AI as Normal Technology Sep 2025 General-purpose technologies diffuse slowly and unevenly. AI will follow similar patterns. Value creation depends on when bottlenecks collapse, not benchmark scores. · 21 min LONG READ

Library

all →
Recently read
Aliénation et accélération Hartmut Rosa #philosophy ★★★★★ my notes
Development as Freedom Amartya Sen #philosophy #economics #development ★★★★★ my notes
Why Nations Fail: The Origins of Power, Prosperity, and Poverty Daron Acemoglu & James A. Robinson #economics #institutions #strategy ★★★★★ my notes
Steak Machine Geoffrey Le Guilcher #animals #food #journalism ★★★★ my notes
Changement de quart Bruno Colmant, Laurent Hublet & Marie Vancutsem #economics #geopolitics #AI ★★★★ my notes
Énergie et inégalités : une histoire politique Lucas Chancel #energy #economics #inequality ★★★★ my notes
Innovations : une économie pour les temps à venir Dominique Foray #economics #innovation #policy ★★★★★ my notes
Multinationales : une histoire du monde contemporain Olivier Petitjean & Ivan du Roy #economics #history #corporate-power ★★★★★ my notes
La planification écologique Mathilde Viennot #economics #ecology #France #policy ★★★★★ my notes
What Is Life?: Evolution as Computation Blaise Agüera y Arcas #AI #tech #biology #philosophy ★★★★★ my notes
The New Map: Energy, Climate, and the Clash of Nations Daniel Yergin #energy #geopolitics ★★★★★
Energy and Civilization: A History Vaclav Smil #energy #history ★★★★★ my notes
Technology and the Rise of Great Powers Jeffrey Ding #geopolitics #tech #strategy ★★★★★
Strategy: A History Lawrence Freedman #strategy #history ★★★★★ my notes
These Strange New Minds Christopher Summerfield #AI #philosophy ★★★★★ my notes
How To Think Like Socrates Donald Robertson #philosophy ★★★★★
Reading list
Atlas of AI Kate Crawford #AI #geopolitics
Chip War Chris Miller #tech #geopolitics #history
L'Évangile selon Pilate Éric-Emmanuel Schmitt #fiction #philosophy
L'Individu, fin de parcours ? Julien Gobin #philosophy #economics #France my notes
La mort et le printemps Mercè Rodoreda #fiction
La Peau de chagrin Honoré de Balzac #fiction #philosophy #France my notes
La Pharmacie de Platon Jacques Derrida #philosophy #France my notes
Phenomenology of Spirit G. W. F. Hegel #philosophy my notes
Selfpressionnisme : Et si l'IA nous rendait plus humains ? Marie Dollé #AI #philosophy #France #humanism my notes
Tentative d'épuisement d'un lieu parisien Georges Perec #literature #France my notes
The Investment Philosophers Erik Everett #finance #philosophy
Water: The Epic Struggle for Wealth, Power, and Civilization Steven Solomon #history #geopolitics
Top reads
AI Moats Are Different deep dive Why traditional moats fail in AI, and a four-level framework for building defensible positions: technical foundation, product depth, market position, sustainability. · 16 min LONG READ
The Moat That Cannot Be Coded deep dive Tacit knowledge, CUDA as institutional memory, and who captures AI value when models commoditise. · 24 min LONG READ
On Systemic Errors journal A frequent critique of GenAI is that because it is probabilistic, it will inevitably make… · 8 min
Foundation Models in Biology Hit the Noise Floor journal A new benchmark of 600+ model configurations reveals that biological foundation models do… · 12 min
Market Sizing for Founders deep dive A practical framework for sizing markets when you're building something new. · 23 min LONG READ
Food for Thought

all →
The $1 to $10 rule that breaks every AI business case
AI Adopters Club · 12 min · 30-04-2026 article
Brynjolfsson’s J-curve ratio: for every dollar on the model, enterprises spend up to ten on process, data, and change—why pilots stall and budgets lie.
Deep Dive: Where Value Accrues in the AI Stack
Chamath (Substack) · 25 min · 01-05-2026 article
A six-layer map of AI (infrastructure through application), fulcrum assets per layer, and how the stack forks into software AI vs physical AI.
The Three Forms of Scientific Intelligence: A Conversation with DeepMind's Pushmeet Kohli
Decoding Bio · 35 min · 01-05-2026 interview
How DeepMind picks a handful of transformative science bets per year, scaling laws in biology, and three kinds of intelligence reshaping research.
AI Coding Works. That's the Problem
SimonDev (YouTube) · 20 min · 02-05-2026 video
A developer take on why stronger AI coding tools can worsen maintenance, debt, and the gap between what ships and what stays understandable.
Claude Code Is Not Making Your Product Better
Ethan Ding · 10 min · 21-04-2026 essay
Competitive Strategy in the Age of AI
Tomasz Tunguz · 1 min · 24-04-2026 article
A strategic lens on AI distribution: commoditize complements to reinforce the model core. The piece compares Google’s historical playbook with Anthropic’s recent product moves, and argues startups can still win through focused specialization where giants cannot sustain depth.
The "Cognitive Offloading" Paradox
Dr Phil's Newsletter · 12 min · 16-04-2026 article
A synthesis of new findings on AI-supported learning: shallow, scattered AI use can hurt outcomes, while strategic delegation of substantive tasks can free cognitive capacity for higher-order thinking. The key variable is not AI use itself, but instructional design that pairs offloading with verification, retrieval, and unassisted assessment.
The illusion of thinking
The Signal (Alex Banks) · 8 min · 17-04-2026 article
What AI's biggest weakness reveals about our own minds.
When AIs act emotional
Anthropic · 5 min · 02-04-2026 video
We look inside the model's "brain", the giant neural network that powers it, and by seeing which neurons "light up" in different situations, and how they're connected, we can start to understand how models think. This showed us that the activation of these patterns could actually drive Claude's behavior. And what our experiments suggest is that this Claude character has what we're calling "functional emotions," regardless of whether they're anything like human feelings.
The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI
Dario Amodei · 95 min · 01-01-2026 essay
There is a scene in the movie version of Carl Sagan's book Contact where the main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity's representative to meet the aliens. "How did you evolve, how did you survive this technological adolescence without destroying yourself?" When I think about where humanity is now with AI, about what we're on the cusp of, my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens' answer to guide us. In my essa...
Anthropic chief Dario Amodei: 'I don't want AI turned on our own people'
Financial Times · 18 min · 17-04-2026 interview
The tech entrepreneur on Claude Mythos, repercussions from the Pentagon dispute, and his message for the super-rich
Machines of Loving Grace: How AI Could Transform the World for the Better
Dario Amodei · 65 min · 01-10-2024 essay
Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we're fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Yet despite all of the concerns above, I really do think it's important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. Because of this, people sometimes draw the conclusion that I'm a p...
Scaling Laws for Neural Language Models
Kaplan et al. (arXiv) · 45 min · 23-01-2020 paper
We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
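The power-law relationships the abstract describes can be sketched compactly. The functional forms below follow the paper's notation (loss L as a function of parameter count N, dataset size D, and compute C); the quoted exponents are the approximate fitted values reported by Kaplan et al., stated here from memory rather than re-derived:

```latex
% Loss as a power law in each resource, holding the others unconstrained:
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C_{\min}) \approx \left(\frac{C_c}{C_{\min}}\right)^{\alpha_C}
% with fitted exponents roughly
% \alpha_N \approx 0.076, \quad \alpha_D \approx 0.095, \quad \alpha_C \approx 0.05
```

The small exponents are what makes the trends span "more than seven orders of magnitude": each tenfold increase in a resource buys only a modest, but remarkably predictable, drop in cross-entropy loss.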
The Bitter Lesson
Rich Sutton · 5 min · 13-03-2019 essay
AI Assistance Reduces Persistence and Hurts Independent Performance
arXiv · 35 min · 06-04-2026 paper
People often optimize for long-term goals in collaboration: A mentor or companion doesn't just answer questions, but also scaffolds learning, tracks progress, and prioritizes the other person's growth over immediate results. In contrast, current AI systems are fundamentally short-sighted collaborators, optimized for providing instant and complete responses, without ever saying no (unless for safety reasons). What are the consequences of this dynamic? Here, through a series of randomized controlled trials on human-AI interactions (N = 1,222), we provide causal evidence for two key consequences of AI assistance: reduced persistence and impairment of unassisted performance. Across a variety of tasks, including mathematical reasoning and reading comprehension, we find that although AI assistance improves performance in the short-term, people perform significantly worse without AI and are more likely to give up. Notably, these effects emerge after only brief interactions with AI (approximately 10 minutes). These findings are particularly concerning because persistence is foundational to skill acquisition and is one of the strongest predictors of long-term learning. We posit that persistence is reduced because AI conditions people to expect immediate answers, thereby denying them the experience of working through challenges on their own. These results suggest the need for AI model development to prioritize scaffolding long-term competence alongside immediate task completion.