The Effectiveness Thesis
What if we're starting from the wrong end?
Peter Drucker ranked seven sources of innovation from most reliable to least. Technology came dead last. So why does every AI conversation start with the technology — and what might we see if we started somewhere else?
Everything Begins with a Question
Einstein once said that the formulation of a problem is often more essential than its solution. Elie Wiesel put it differently: people are united by questions; it is the answers that divide them.
Watch a two-year-old learn to talk. They don't study linguistics. They babble, observe what happens, adjust, and try again. "Mama" gets a smile. "MAMA" at 3am gets a very different reaction. Same word, different context, wildly different outcome. They're running hundreds of tiny experiments a day, each one a hypothesis about how the world works. They're not embarrassed by wrong answers because wrong answers are data.
That might be closer to what effective AI use looks like than any strategic framework. Not grand theories about transformation, but a willingness to ask: what happens if I point this tool at that problem? And then — the harder part — actually sitting with what comes back.
There's a form of knowledge in what we don't know. Acknowledging the shape of our ignorance — surfacing our assumptions, noticing what we feel but haven't examined — is often where the real learning starts. Questioning well requires something uncomfortable: humility, persistence, and the willingness to say "I don't actually know."
The Core Question
Most organizations start their AI journey with the technology: which model, which vendor, which benchmark score. Drucker would probably ask why. Technology was the source he found least reliable — the one with the longest lead time, the highest failure rate, and the most uncertainty.
What if the more useful question isn't "what can AI do?" but "what should AI be pointed at?" — and what if Drucker's six higher-ranked sources are a good place to start looking?
AI Effectiveness = Technology Capability × Problem Selection × Organizational Fit
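The formula's multiplicative structure has a concrete consequence: a near-zero score on any one factor collapses the whole product, no matter how strong the others are. A minimal sketch (the function name and the 0-to-1 scores are invented for illustration):

```python
def ai_effectiveness(technology: float, problem_selection: float,
                     organizational_fit: float) -> float:
    """Multiplicative model: a zero on any factor zeroes the total."""
    return technology * problem_selection * organizational_fit

# A state-of-the-art model (0.9) pointed at the wrong problem (0.2)
# in an organization with lukewarm adoption (0.5):
print(round(ai_effectiveness(0.9, 0.2, 0.5), 3))  # 0.09

# A weaker model (0.6) aimed at the right problem with real buy-in:
print(round(ai_effectiveness(0.6, 0.8, 0.9), 3))  # 0.432
```

The second configuration scores almost five times higher with a worse model, which is the thesis in arithmetic form.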
The Seven Sources as Questions
Drucker ordered his sources by reliability. The first four are internal to an industry — visible to insiders who pay attention. The last three are external — shifts in the wider world. What if we treated each one not as a category to fill, but as a question to ask?
In each case, the most interesting opportunity might be the one hiding in plain sight — the thing everyone walks past because they're too busy talking about the technology.
The Unexpected
Unexpected successes, failures, and outside events
Are we rejecting a success because it doesn't fit our expectations?
The percentage of students admitting to academic dishonesty held steady at 60-70% both before and after ChatGPT. AI didn't create a cheating crisis — it created anxiety about one. Meanwhile, only 7% of students use AI to generate entire papers. Most use it to understand difficult concepts. What if the unexpected success isn't AI-as-ghostwriter but AI-as-tutor? And if ChatGPT can pass our exam, what is the exam actually measuring?
Incongruities
Gaps between reality and assumptions
Where does what we assume differ from what the data shows?
85% of employers claim to use skills-based hiring. But the Burning Glass Institute found that in 2023, fewer than 1 in 700 hires was actually affected by the removal of degree requirements. Meanwhile, 77% of graduates say they learned more in six months on the job than during their entire degree. The alternative credentials market is growing at 18% annually. The market senses the gap — the institutions haven't caught up.
Process Need
A weak link everyone works around
What process was designed for a world that no longer exists?
Americans spend roughly 7.9 billion hours and $546 billion annually on tax compliance — about 2.5% of GDP — almost invisible because it's distributed across every household and business. The weak link probably isn't the analysis; it's the documentation. Transfer pricing compliance involves hundreds of hours of manual data mapping. Early AI automation compresses weeks into days. And the IRS grew from 54 to 129 AI use cases in a single year. What happens to companies on annual cycles when enforcement moves to continuous risk scoring?
Market Structure
Shifts that catch incumbents off guard
What stories, products, or services were impossible yesterday that are possible today?
Netflix confirmed the first generative AI footage in a major series — El Eternauta (2024). A collapsing-building sequence produced 10x faster than traditional VFX. The key: it wouldn't have been financially feasible otherwise. That's not a cost-savings story. It's a question about which stories become economically viable when below-the-line costs shift. Meanwhile, SAG-AFTRA is trying to make synthetic performers cost at least as much as humans — a deliberate price floor. Whether it holds is probably one of the industry's defining questions.
Demographics
Changes in population, education, and composition
Which systems were built for a population that no longer exists?
The FCC received 21.7 million comments on net neutrality in 2017. Of those, 94% were duplicates. The participation infrastructure was designed when writing to your congressman required a stamp. A toddler wouldn't try to process 21 million comments — a toddler would ask: why are there 21 million? What were people trying to do? The deeper question might be: what does public participation look like when it's designed for conversation rather than forms?
Changes in Perception
Same facts, different meaning
Where has the meaning changed even though the facts haven't?
Compliance used to mean avoiding penalties — a cost to minimize. KPMG found 40% of C-suite leaders now plan to invest $10M+ in tax departments as a strategic investment, not a cost. When real-time compliance monitoring gives you visibility into working capital that competitors lack, compliance shifts from burden to advantage. Most compliance departments probably haven't noticed the shift yet — which, if Drucker was right, is exactly what makes it an opportunity.
New Knowledge (Technology itself)
Scientific and technological breakthroughs
Are we competing on something we don't control?
This is where most AI conversations start and end — the model, the benchmark, the capability score. A company that announces "we're built on the latest model" has differentiation that lasts until the next model release. The technology matters enormously — but it matters the way a telescope matters. For what it lets you see, not for what it is. The Labs section tracks how fast this capability is growing.
Where the Questions Compound
Something I keep noticing: the sources don't stay separate. The most interesting opportunities seem to sit at the intersection of several sources at once.
Education
The unexpected success of AI-as-tutor (#1) exposes the incongruity between what universities test and what employers need (#2), against a demographic shift in how a generation raised on conversational AI expects information to work (#5). Three sources, probably one opportunity — if we can figure out the right question to ask about it.
Government
The demographic shift in participation volume (#5) creates an incongruity between citizen expectations and government infrastructure (#2), which reveals a process bottleneck in how agencies consider public input (#3). The Dodd-Frank Act required a rule on executive compensation by May 2011. As of 2025, it remains unfinalized. The gap between legislation and implementation might be one of the most overlooked process needs in governance.
Taxes & Compliance
The $546 billion process need (#3) sits within a perception shift where compliance moves from cost to competitive advantage (#6), while the IRS's rapid AI adoption creates a market structure change (#4) that companies on annual compliance cycles may not have noticed yet.
Film & Entertainment
The structural shift in what's economically viable to produce (#4) meets a perception change about transparency and AI disclosure (#6). And upstream of all of it, the most unglamorous process need (#3): six to eight hours of manual script breakdown per production that AI can compress into minutes. Not dramatic, but it sits upstream of every dollar spent.
Measuring What Matters
If effectiveness means pointing AI at the right problems, how might you know if it's working? Probably not with benchmark scores. These five dimensions are an attempt at something better — still evolving, but more useful:
Learning Velocity
How quickly does the system acquire new capabilities in your domain? Not training speed — the rate at which it moves from naive to useful for your specific problems.
Skill Depth
Can it handle the edge cases, ambiguity, and nuance that define real-world expertise? Surface-level pattern matching isn't depth — it's a party trick.
Domain Breadth
Does it transfer what it learned in one context to another? A system trained on healthcare processes should develop intuitions about supply chains — the patterns rhyme.
Organizational Adoption
A 99% accurate model that nobody trusts or uses is 0% effective. Real effectiveness probably requires human-AI collaboration, governance, and earned trust.
Evolution Rate
Does it improve through feedback loops? Static models decay in dynamic environments. Effective AI might need a metabolism — getting better because it's being used.
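One way to make the five dimensions operational is a composite score where any near-zero dimension drags the total toward zero — consistent with the point above that an accurate model nobody uses is 0% effective. This is a sketch, not a validated metric; the class name, field names, and geometric-mean aggregation are all assumptions layered on the five dimensions:

```python
from dataclasses import dataclass

@dataclass
class EffectivenessProfile:
    # Each dimension scored 0.0-1.0, following the five dimensions above.
    learning_velocity: float
    skill_depth: float
    domain_breadth: float
    organizational_adoption: float
    evolution_rate: float

    def score(self) -> float:
        """Geometric mean: a near-zero dimension (e.g. a model nobody
        trusts or uses) pulls the whole composite toward zero."""
        dims = [self.learning_velocity, self.skill_depth, self.domain_breadth,
                self.organizational_adoption, self.evolution_rate]
        product = 1.0
        for d in dims:
            product *= d
        return product ** (1 / len(dims))

# A 99%-accurate model with zero adoption scores zero overall:
print(EffectivenessProfile(0.9, 0.99, 0.8, 0.0, 0.7).score())  # 0.0
```

A simple average would hide the failure mode: averaging the same numbers gives a respectable 0.678, which is exactly the kind of benchmark-flattering answer the section argues against.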
The Sacred Cows Question
Are We Exploiting or Rejecting?
Drucker pointed out that more wealth has been lost by rejecting unexpected successes than by any other mistake. IBM became IBM because they noticed a library — not a bank — ordering their accounting machine. The German chemist who synthesized Novocaine tried to stop dentists from using it because he'd designed it for "serious" surgery. The question isn't whether an unexpected success exists. It's whether we're willing to see it.
What Are Our Hidden Assumptions?
Every organization runs on assumptions — most of them implicit, rarely examined. Blockbuster couldn't take Netflix seriously, possibly because it was hard to let go of the extremely profitable late-fee revenue model. Surfacing those assumptions requires something most organizations find uncomfortable: a culture where people feel safe asking questions even when the questions are inconvenient.
Explain It, Apply It, Question It
Feynman said knowing the name of something tells you practically nothing about it. Socrates built an entire method around the idea that wisdom begins with acknowledging what you don't know. Drucker argued that innovation isn't about brilliant ideas — it's about systematically looking for opportunities.
This site is an attempt to do all three. The journal tries to explain the ideas — sometimes getting it right, sometimes learning from what doesn't hold up. The labs try to apply them — interactive tools for exploring the data and forming your own view. And the whole thing is an ongoing question: does this framework actually help us find the opportunities hiding in plain sight?
If the thesis holds up, it should survive the tests. If it doesn't, that's worth knowing too. A toddler doesn't have a strategy for learning language. They have a method: try something, see what happens, adjust, try again. Maybe that's enough.
The telescope is remarkable. The question is where to point it.