A Vision for Machine-Amplified Learning
Extracting every ounce of learning from your actions is critical to solving hard problems. Every time you poke at the world is an opportunity to discover something new about the dynamics of a problem space.
Even for a small team, maximizing learning is hard. It requires discipline to routinely loop back to your previous endeavors to analyze what worked and what didn’t. It’s much easier to leave the past behind and blast forward to the next enticing plan.
For small teams, at least, it’s easier for contributors to remember the outcomes of their previous attempts and share insights. A startup’s ability to learn enables nimble pivots en route to the promised land.
Multi-team companies face larger obstacles in the learning process:
- It takes energy for contributors to communicate their learnings widely.
- Knowledge doesn’t easily translate across departments or cross-functional roles.
- Knowledge walks out the door with attrition.
- For new employees, the learning process starts from scratch.
Today, maximizing learning at scale is almost impossible, which erodes advantages that bigger companies should have:
- Companies with longer histories should have accumulated more learning to make solving future problems easier.
- Companies with more people should be able to create a network effect of knowledge transfer across teams.
The Double-Loop master plan is to remove the obstacles that prevent learning at scale. Here’s how we’ll do it.
The foundation of learning at scale is recording launches and results. Much of the data already exists in project management, deployment, code versioning, and analytics tools. Humans must add context such as strategies, goals, hypotheses, pictures, and results summaries.
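As a sketch, a launch record might combine machine-captured deployment data with human-added context. The schema and field names below are hypothetical, not a description of any existing tool:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LaunchRecord:
    """One entry in a company-wide history of launches and results."""
    # Captured automatically from deployment and code-versioning tools
    launch_id: str
    deployed_at: datetime
    commits: list[str] = field(default_factory=list)
    # Added by humans for context
    strategy: str = ""
    hypothesis: str = ""
    results_summary: str = ""

# Example: the automated fields arrive first; humans fill in the rest later.
record = LaunchRecord(
    launch_id="checkout-redesign",
    deployed_at=datetime(2024, 3, 1),
    commits=["a1b2c3"],
    hypothesis="A shorter checkout flow will lift conversion.",
)
```

Splitting the record this way keeps the automated and human-authored halves distinct, so the system can flag launches whose context is still missing.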
Everyone in the company should be able to create, access, and search the history of launches and results.
In realtime, every contributor should be able to follow the actions of other contributors that relate to their own work.
At the bare minimum, this can be accomplished by autogenerating high-level summaries, distributed by Slack or email, based on the record of launches and results.
But true learning at scale requires granular notifications. Teams should be able to subscribe to targeted facets of the launch record. For example, for a particular product change, customer support might need to know the details of the UI while the sales team is more interested in the impact on the overall value proposition. Similarly, an engineer working on SEO should be able to see what other teams have done in the domain, what's worked, and what hasn't.
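One minimal way to model facet subscriptions: tag each launch with facets and route it to the teams whose subscriptions overlap. The facet and team names here are illustrative assumptions:

```python
# Each launch is tagged with facets; teams subscribe to the facets they care about.
launch = {
    "title": "Checkout redesign",
    "facets": {"ui-change", "value-proposition"},
}

subscriptions = {
    "customer-support": {"ui-change"},
    "sales": {"value-proposition"},
    "data-science": {"metrics"},
}

def notify_teams(launch, subscriptions):
    """Return the teams whose subscribed facets overlap the launch's facets."""
    return sorted(
        team for team, facets in subscriptions.items()
        if facets & launch["facets"]
    )

notify_teams(launch, subscriptions)  # → ["customer-support", "sales"]
```

Customer support and sales each receive this launch because one of their facets matches; data science does not, so the feed stays targeted rather than noisy.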
While record keeping and communication provide the building blocks of learning at scale, there is potential for software to play a new role in amplifying memory and learning. Here are a few ideas.
- Software can automatically generate a timeline of product launches based on deployments and project management software. Given the trend toward high-frequency, small deployments, tools are needed to separate the signal from the noise.
- Building on the above, algorithms can guide team members to communicate the most important launches and retrospectively analyze their results. Imagine a feed of product changes, consumed by data scientists, ranked by their propensity to impact business metrics.
- Software can learn which launches across a big company each contributor cares about, creating real-time feeds of knowledge cross-pollination.
- Based on defined key results, software could (A) automatically classify the success level of product changes based on app analytics and (B) train a system to predict success likelihood in advance of engineering commitment based on the structure of plans in project management tools.
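Idea (A) above could start as something very simple: compare the lift observed in analytics against the key result's target. The thresholds and labels below are assumptions for the sake of the sketch:

```python
def classify_success(key_result_target, observed_lift):
    """Classify a launch against its defined key result.

    Illustrative rule: meeting the target is a success, a partial
    lift is mixed, and zero or negative lift is a miss.
    """
    if observed_lift >= key_result_target:
        return "success"
    if observed_lift > 0:
        return "mixed"
    return "miss"

# A key result of +5% conversion, evaluated against observed lifts:
classify_success(key_result_target=0.05, observed_lift=0.07)  # → "success"
classify_success(key_result_target=0.05, observed_lift=0.02)  # → "mixed"
```

Even a crude classifier like this, run over the whole launch record, would produce the labeled history that idea (B)'s predictive system needs as training data.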
I believe we’ve only scratched the surface of systematically cultivating learning in the innovation process.