AI Effectiveness
Journal

A record of evolving understanding. Each entry starts with studying how AI learns — and how it helps us learn. The ideas get applied, tested in prototypes, and eventually shaped into something that might work in practice.

Along the way, thinking has evolved: about how AI accelerates its own growth, how it can help individuals and teams evolve their own reasoning, and whether any of this can genuinely transform how organizations operate. This journal is that thinking, made visible.

Scale of Impact

Individual: How AI helps a person reason, learn, and grow.
Unstructured Data & RAG

Teaching AI to Forget (Part 2/3: The Art of Forgetting)

AI systems that remember everything eventually fail. New architectures like Titans and MIRAS show how surprise-driven memory and adaptive forgetting create AI that learns continuously.

February 20, 2026

Unstructured Data & RAG

Forgetting Makes You Smarter (Part 1/3: The Art of Forgetting)

The brain doesn't forget because it fails — it forgets on purpose. Research shows active forgetting optimizes decision-making, enables flexibility, and is actually a form of learning.

February 18, 2026

Agents & Emergence

Is Density Dead? The Rise of STEM (Part 3/3: The Sparsity Revolution)

Fine-grained sparsity is not a compromise — it is an upgrade. STEM architecture shows how 70B parameter models can outperform 175B giants across diverse domains.

January 19, 2026

Agents & Emergence

Is AI Smarter Than We Think, or Just Luckier?

When AI suddenly solves a complex physics problem, is it reasoning or pattern matching? The grokking phenomenon suggests the answer is stranger than either explanation.

January 10, 2026

Agents & Emergence

Your AI Needs a Map: How Sequential Monte Carlo Changes Reasoning

What if your AI is driving without a map — only knowing the next turn? Sequential Monte Carlo might give AI the ability to plan ahead, explore multiple paths, and choose the one most likely to reach the destination.

December 16, 2025

Context Engineering

Your AI Has Amnesia, Not Hallucinations (Part 2/3: Context Engineering)

What if your AI agent isn't hallucinating — but has amnesia? Context collapse appears to occur when a model is overwhelmed with more context than it can retain, and performance drops sharply past a critical threshold.

December 13, 2025

Context Engineering

Optimizing Prompts for the Wrong Audience (Part 1/3: Context Engineering)

What if we're optimizing AI prompts for the wrong audience? The reader is a machine that processes context in fundamentally different ways. This mismatch — brevity bias — might be silently degrading AI output quality.

December 10, 2025

Team: How groups turn shared knowledge into understanding.

Organization: How enterprises build systems that sense, decide, and evolve.

Ecosystem: How the AI tide might raise all boats.