How can we enable more science fiction to become reality?

If you want to do something, it usually pays to study those who have done that thing successfully in the past. Asking ‘what is this outlier’s production function?’
can provide a starting point.

DARPA is an outlier organization in the world of turning science fiction into reality. Since 1958, it has been a driving force in the creation of weather satellites, GPS, personal computers, modern robotics, the
Internet, autonomous cars, and voice interfaces, to name a few. However, it is primarily limited to the domain of defense technology – there are DARPA-style ideas that are out of its scope.  Which emulatable attributes contributed to
DARPA’s outlier results?
What does a domain-independent “ARPA Model” look like? Is it possible to
build other organizations that generate equally huge results in other domains by riffing on that model?

Gallons of ink have been spilled describing how DARPA works¹, but in a nutshell: around 100 program managers (PMs) with ~5-year appointments create and run programs to pursue high-level visions like “actualize the idea of man-computer symbiosis.” Within these programs they fund researchers at universities and at both big and small companies to do research projects of various sizes. Collectively, the groups working on projects are called performers. Top-level
authority lies with a Director who ultimately reports to the Secretary of Defense.

DARPA has an incredibly powerful model for innovation in defense research, and I believe an abstract ‘ARPA Model’ could yield similar results in other domains. In this piece I’ll explain in detail why DARPA works. I’ll use that description to feel out and describe, to the best of my ability, a platonic ARPA Model. I’ll also distill some of that model’s implications for potential riffs on it. Incidentally, I’m working on just such an imitator, and in future essays I’ll explain why this model could be incredibly powerful when executed in a privately-funded context.

How to use this document

This document acts more like a collection of atomic notes than a tight essay – a DARPA-themed tapas if you will. The order of the sections is more of a guideline than a law so feel free to skip around. Throughout you will come across internal links that look like this.
These links are an attempt to illustrate the interconnectedness of the ARPA Model.

There are two stand-alone pieces to accommodate your time and interest: a
distillation, and the full work. The
distillation is meant to capture and compress the main points of the full work. Each section of the distillation internally links to the corresponding section one level deeper so if you want more info and nuance you can get it.

I would rather this be read by a few people motivated to take action than by a broad audience who will find it merely interesting. In that vein, if you find yourself wanting to share this on Twitter or Hacker News, consider instead sharing it with one or two friends who will take action on it. Thank you for indulging me!

At the end of the day the ARPA Model depends on badass program managers. Why is this the case? PMs need to
think for themselves and go up and down the ladder of abstraction in an unstructured environment. On top of that they need to be effective communicators and coordinators because so much of their jobs is building networks. There’s a
pattern that the abstract qualities that make “great talent” in different high-variance industries boil down to the ability to successfully make things happen under a lot of uncertainty. Given that pattern, the people who would
make good DARPA PMs would also make good hedge fund analysts, first employees at startups, etc. so digging into
people’s motivations for becoming a PM is important. Pinning down more precisely what makes a PM good also prevents you from going after the exact same people as every other high-variance industry. When ‘talent’ isn’t code for ‘specialized
training’ it means the role or industry has not been systematized. Therefore, despite all the talk here and elsewhere about ‘the ARPA Model’ we must keep in mind that we may be attributing more structure to the process than
actually exists.

DARPA program managers pull control and risk away from both researchers and directors. PMs pull control away
from directors by having only one official checkpoint before launching programs and pull control away from performers through their ability to move money around quickly. PMs design programs to be high-risk aggregations of lower-risk projects.
Only 5–10 out of every 100 programs successfully produce transformative research, while only 10% of projects are terminated early. Shifting the risk from the performers to the program managers enables DARPA to tackle
systemic problems where other models cannot.

The best program managers notice systemic biases and attack them. For example, a PM might notice that all of the finite element modeling literature assumes a locally static situation and ask ‘what if it were dynamic?’ Put another way, “The best program managers can get into the trees and still see the forest.” This quality is rather fuzzy, but it leads to two precise questions:

  1. How do you find people who can uncover systemic biases in a discipline?
  2. How could you systematize finding systemic biases in a discipline?

The first question suggests that you should seek out heretics and people with expertise who are not experts. The second question suggests building structured frameworks for mapping a discipline and its assumptions.

A large part of a DARPA program manager’s job is focused network building. DARPA PMs network in the literal
sense of creating networks, not just plugging into them. PMs meet disparate people working on ideas adjacent to the area in which they want to have an impact and bring them together in small workshops to dig into which possibilities are not
impossible and what it would take to make them possible. The PMs host
performer days — small private conferences for all the people working on different pieces of the program where
performers can exchange ideas on what is working, what isn’t working, and build connections that don’t depend on the PM.
J.C.R. Licklider² is a paragon here. He brought together all the crazy people interested in human-focused computing and helped create the first computer science lab groups. PMs also build networks of people in different classes of organizations –
government, academia, startups, and large companies. These connections smooth the path for technologies to go from the lab to the shelf.

DARPA PMs need to think for themselves, be curious, and have low ego. Why does this matter? When you are
surrounded by smart, opinionated people the easy option is to either 100% accept what they’re saying because it’s eloquent and well-thought through or reject it outright because it sounds crazy or goes against your priors. Thinking
for yourself allows you to avoid these traps. PMs need to be curious because building a complete picture of a discipline requires genuine curiosity to ask questions nobody else is asking. A large ego would lead to a program manager imposing
their will on every piece of the program, killing curiosity and the benefits of top-down problems and bottom-up solutions.

DARPA is incredibly flexible with who it hires to be program managers. There are legal provisions in place that
let DARPA bypass normal government hiring rules and procedures. Hiring flexibility is important because PMs are the sort of people who are in high demand, so they may be unwilling to jump through hoops. Bureaucracies ensure consistency through
rules – minimal bureaucracy means there are no safeguards against hiring a terrible program manager so the principle that ‘A players hire A players and B players hire C players’ is incredibly important.  

DARPA Program managers have a tenure of four to five years. This transience is important for many reasons.
Transience can inoculate PMs against the temptation to play it safe or play power games because there’s only one clear objective – make the program work. You’re out regardless of success or failure. Explicitly temporary roles can
incentivize people with many options to join because they can have a huge impact, and then do something else. There’s no implicit tension between the knowledge that most people will leave eventually and the uncertainty about when that
will be. Regular program manager turnover means that there is also turnover in ideas.

Why do people become DARPA Program managers? From a career and money standpoint, being a program manager seems pretty rough. There are unique benefits though. It offers an outlet for people frustrated with the conservative nature of academia. The prospect of getting to control a lot of money without a ton of oversight appeals to some people.
Patriotism is definitely a factor, and hard to replicate outside of a government. Being a PM can gain you the respect of a small, elite group of peers who will know what you did. Finally, a PM may have a particular technological vision they want to see out in the world, and DARPA gives them the agency to pursue it in unique ways.

Incentives and Structure

Opacity is important to DARPA’s outlier success. Congress and the DoD have little default visibility into how a PM is spending money and running a program.
Opacity removes incentives to go for easy wins or to avoid being criticized by external forces. Of course, opacity can also be abused in too many ways to list,
so it’s important to ask: How does DARPA incentivize people not to abuse opacity? DARPA’s small size and flat structure enable peer pressure to work in positive ways. Finite tenures either make people want to utterly crush it or not
care at all. The former happens when you empower driven program managers.

The government’s influence on DARPA is buffered by opacity and the director. Opacity is especially important for DARPA because a source of
money like Congress that has many opinions exacerbates the incentive to ‘play it safe.’ Committees lead to median results and ensuring that any action you take is acceptable to a committee is almost as bad as the committee
determining those actions.  An opinionated director is the lynchpin keeping DARPA focused on long-term ideas that nobody is asking for (yet.) At several points in its history, the director went to bat to keep DARPA from being dissolved or
absorbed into the military as a more “normal” R&D organization.

DARPA does multiple levels of top-down problem generation and bottom-up solution generation. In some ways PMs act
as ‘bottom-up’ idea generation – often creating the idea for the program and soliciting project ideas from the community. In other ways PMs act as a ‘top down’ force by pushing researchers to work on a well-defined
problem. DARPA completely sidesteps the always-important question “is top-down or bottom-up better?” by doing both.

The transient nature of most people at DARPA enables ideas to be revisited. Obviously institutional memory
lasts longer than any individual, but memory of any specific program dies out quickly. There is only one PM running any given program, and people pay much more attention to their own work than to other people’s. This transient memory about
specific ideas allows them to be judged on their own merits instead of dealing with baggage from the past. The ability to revisit ideas enables DARPA to explore ‘unexploited Leonardo Ideas’ – good ideas that are before their
time.

DARPA is relatively tiny and flat. As of April 2020, DARPA has 124 staff and three layers of management. That
number is right around Dunbar’s Number – just small enough for one person to know everyone in the organization. The small organization means that there are no “group projects” so every PM is directly responsible for the
program they’re working on and you can’t have slackers. Small size also helps keep PM quality high by removing pressure to hire a lot of people.

DARPA employees aren’t paid very much compared to what they could be. In 2017, a program manager’s estimated salary was ~$90k, compared to the $170k+ an experienced technical PhD could expect even outside of software engineering. There is a positive and a negative argument for low pay. The positive argument is that it weeds out people
who are just in it for the money and treat the role as ‘just a job.’ On the other hand, the low pay may weed out people who have some lower bound on what they think they’re worth.

DARPA’s aversion to people with a web presence may be how they avoid asymmetric career risk. According
to a former PM,
DARPA avoids hiring people with a significant web presence. In the 21st century, that’s remarkable and specific enough to be worth digging into. People with a strong web presence tend to be focused on playing status games, or at least are in a world where they realize that their career depends on public perception of their output. If you know it’s easy to shut something down, it’s easier to start, so less Internet presence means fewer grand announcements and fewer expectations. Of course, the explanation could also be that DARPA just doesn’t like people who are too loud because they like to keep things hush-hush.

DARPA doesn’t do any research in house. Some subtle advantages of externalizing research:

  • It enables DARPA to be relatively tiny and flat.
  • Actual cutting edge research may require rare equipment or knowledge. There are many pieces of equipment or tacit knowledge that only exist in one or two places in the world and it’s easier to access them
    through finite projects than purchasing or hiring them.
  • It enables strong accountability because for any program there is exactly one responsible person.
  • It enables program managers to have multiple teams working on the same objective without internal politics.
  • You don’t have to lay anyone off or find new work for people when you change direction.
  • You can get people to work on projects who might not want to work for the org full time.

Process

DARPA has many tight feedback loops. Feedback loops start on day one – ‘onboarding’ at DARPA is an informal process that consists mostly of oral tradition and shadowing. Informal onboarding is normally a best-practice no-no, but that is because most organizations don’t limit themselves to Dunbar’s number. Program managers get significant informal feedback from other PMs as they hone their program design. PMs have informal
conversations with researchers, invite them to small workshops, and fund small fast projects to test assumptions. These activities set up feedback loops not just between researchers and the PM but between the researchers themselves, sometimes
spawning new research communities.

The initial exploratory tranche of a DARPA program is approximately $1.5M. Most of this money goes towards small seedling projects. These seedling projects are small grants or contracts designed to help establish that an idea is both not impossible and in fact achievable within scope. Intriguingly, this is in the same ballpark as a large seed round for a startup or
the amount of money you need to set up an academic lab.

DARPA PMs use seedling projects to ‘acid test’ the riskiest pieces of a program idea. Seedlings are 3–9 month projects that cost $0.05M–$1M and are designed to “move concepts from ‘disbelief’ to ‘mere doubt’” during program design. There is little oversight on money spent on seedling projects as long as the budget is less than ~$500k. Seedlings are not about finding a solution to a problem; they’re designed to verify or disprove a hypothesis.

Every program at DARPA is intensely technically scrutinized by the Tech Council. The Tech Council is composed of people with technical expertise in the proposed program’s area and adjacent areas. The Tech Council pitch meeting is meant to be high level, but council members can ask deep technical questions on anything. The Tech Council doesn’t have any power besides advising the director on the program’s technical soundness. A purely advisory Tech Council seems like a good idea because it both avoids decision by committee and keeps all responsibility squarely on the director and PM.

The ‘framework’ DARPA PMs use to create a program is to model the Tech Council. According to PMs I’ve grilled on this question, the closest thing to a framework that PMs use to guide program design is “be able to explain precisely why this idea will work to a group of really smart experts both in the area of the program and adjacent to it.” While the framework doesn’t have much structure, it does give PMs a very fixed end goal.

DARPA facilitates cross-pollination among both PMs and performers. Everybody at DARPA has deep experience in some technical area, and there is a culture of people dropping by and asking each other about the subjects of their expertise. This cultural artifact is special because it would be easy for everybody to mind their own business, as everybody is working on their own programs. Performers who get DARPA funding are required to share research with each other at various closed-door workshops. This sharing is valuable because they are all working on different aspects of the same problem, and academia often incentivizes sharing only after work is published.

DARPA provides a derisking role for people in other organizations. DARPA derisks working on breakthrough ideas for three groups: researchers, companies, and other funders. Researchers face less uncertainty around grant proposals, and PMs derisk working off the beaten path for researchers more subtly by building a community around a technological vision. DARPA derisks working off the beaten path for small companies by giving them more confidence that there will be customers for their product – either large companies or the government. For large companies, off-balance-sheet work showing that an idea is feasible and has evidence of demand derisks the idea to the point that they’re willing to start spending their own R&D dollars on it. Previous DARPA funding can derisk an idea to the point that more risk-averse grant-giving organizations like the NSF are willing to start funding it.

The DARPA execution framework boils down to showing that a vision is not impossible, showing that it is possible, and then making it exist. Executing on this framework is different for every program but it’s still worth going a few rungs down The Ladder of Abstraction. At the end of the first step, a PM should have a concrete vision of where
the world could go, evidence that ‘on paper’ it doesn’t violate known physics, and a precise list of experiments they would want to do before subjecting it to the scrutiny of the tech council – a group of technical experts in
the program’s area and adjacent areas. At the end of the second step, a PM should have demonstration-based evidence that creating the vision is possible and a roadmap for how to get there. PMs get informal feedback throughout the program
design process on whether the program is likely to be approved by the council.

Funding

DARPA is ~0.5% of the Department of Defense budget, and only ~12% of that is basic research. A lot of DARPA’s budget is spent on actually assembling weapons and vehicle systems, so the math works out to only ~$400M actually being spent on ‘basic research.’ These raw numbers raise the question – is there a ‘minimum effective budget’ to generate outlier results? If so, what is it?
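As a rough sanity check, the arithmetic behind that basic-research figure can be sketched as follows. The ~$700B total DoD budget is my assumption (roughly right for the FY2020 era); the 0.5% and 12% shares come from the text above.

```python
# Back-of-the-envelope arithmetic for DARPA's basic-research spending.
# ASSUMPTION: total DoD budget of ~$700B (approximate FY2020 figure).
dod_budget = 700e9            # assumed total DoD budget, in dollars
darpa_share = 0.005           # DARPA is ~0.5% of the DoD budget
basic_research_share = 0.12   # ~12% of DARPA's budget goes to basic research

darpa_budget = dod_budget * darpa_share
basic_research = darpa_budget * basic_research_share

print(f"DARPA budget: ${darpa_budget / 1e9:.1f}B")      # DARPA budget: $3.5B
print(f"Basic research: ${basic_research / 1e6:.0f}M")  # Basic research: $420M
```

The result lands in the ~$400M ballpark the text cites, and it is robust: shifting the assumed DoD top line by $100B in either direction moves the basic-research figure by only about $60M.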

DARPA is more ideas-limited than money-limited. This excerpt is gold: “I never really felt constrained by money,” (former DARPA director) Tether says. “I was more constrained by ideas.” In fact, aerospace engineer Verne (Larry) Lynn,
DARPA’s director from 1995 to 1998, says he successfully lobbied Congress to shrink his budget after the Clinton administration had boosted it to “dangerous levels” to finance a short-lived technology reinvestment program.
“When an organization becomes bigger, it becomes more bureaucratic,” Lynn told an interviewer in 2006.
Why is DARPA ideas-limited? Among the possibilities: a limited number of ideas fall into the sweet spot of high-risk, high-reward military applications; pouring more money on ideas that aren’t ready to scale doesn’t help; and DARPA is not in the business of dispersion. DARPA only works on programs with a clear defense application, so while its idea-limitation does not spell doom for attempts to replicate its success, it does suggest that the ARPA Model has a ceiling on its effective scale.

DARPA funds wacky things that go nowhere. DARPA programs have a 5–10% success rate and have included things
like jetpacks, earthworm robots, creating fusion with sound waves, spider-man wall climbing, and bomb detecting bees.
You can’t cut off just one tail of a distribution.

It is relatively easy for DARPA PMs to re-deploy funding. DARPA PMs have the ability to pull funding built
into contracts with performers, which means that they can quickly move money away from an approach that isn’t working and into an approach that is working. Easily re-deployed funds lower the overhead to starting something risky and
differentiate DARPA from other funding agencies, philanthropies, and venture capital.

Program Managers have the ability to deploy money without much overhead. For seedling projects, as long as
it’s below roughly $500k program managers can just write a check. In the past, there was almost no oversight over PM spending after the director authorized the money. This is how J.C.R. Licklider was able to “Johnny Appleseed”
computing groups all over the country in only a year. Modern program managers need to get approval from their director to write larger checks but that is still fast and low-overhead compared to other funding agencies like the NSF.  The
ability to deploy money without much overhead is important for many reasons including making it worthwhile to write smaller checks, opening the door to more collaborators, and just plain moving fast. Small amounts of friction can have big
effects.

Riffing on the ARPA Model

There are surprisingly few structural riffs on the ARPA Model. Often when people compare an organization to DARPA, they just mean that the organization hopes to enable high-risk, high-reward, use-inspired research. Only a few of these organizations attempt to riff on the ARPA Model’s structure. The US government has built two DARPA imitators: IARPA (the intelligence community’s ARPA) and ARPA-E (the Department of Energy’s ARPA.) IARPA might be living up to the name – funding both Nobel-prize-winning quantum research and controversial forecasting research. ARPA-E has had less impressive results. The British government has floated building an ARPA, but nothing concrete has emerged. In the private sector, Google ATAP is DARPA-esque enough to offer lessons about what not to do, and Wellcome Leap appears to be a full-on health-focused implementation of the ARPA Model.

It’s easy to set out to build a DARPA but end up building a Skunkworks. The dominant paradigm when
starting a research organization is to do internalized R&D unless you’re a charitable foundation. In reality, R&D orgs lie on a spectrum between externalized and internalized, with DARPA on one end and Lockheed Skunkworks on the
other. There’s nothing inherently wrong with building a Skunkworks – it just means that there are different tradeoffs and the statement “DARPA for X” is misleading.

Most DARPA clones start too big or with heavy process. DARPA started off small and informal. For years it was fewer than ten people. Starting big often invites heavy process – people want assurances in place to make sure their money is spent well. Starting large, with large expectations and scrutiny, makes it tough to execute on things that seem stupid. Regardless of size, directly copying all of DARPA’s processes leads nowhere good. Many of the processes were built up over years to fit DARPA’s exact situation.

History

DARPA has changed over time. Most of DARPA’s outlier output happened before ARPA became DARPA in 1972, so it’s tempting to throw out anything introduced since 1972 unless it was a codification of a previously implicit rule. However, the world has changed since 1972³, so it’s worth considering whether some adjustments were made to enable DARPA to operate more effectively in the modern world.

ARPA became DARPA in 1972 because of increased scrutiny on military spending both inside the government and outside of it. The Mansfield Amendment expressly limited ARPA funding to direct military applications and gave Congress more oversight over its spending; it was part of a broader shift in attitudes towards the military. Inside the government there was increasing discomfort with how ARPA program managers could spend money on basically anything they thought was worthwhile. Unfortunately, you can’t cut off just one tail of a distribution, so these constraints reduced the variance of DARPA’s results, both positive and negative. One might think that the relevant technologies are so broadly applicable that smart program managers could work on anything under the umbrella of “defense,” but from talking to former program managers, there are definitely DARPA-style ideas that DARPA doesn’t pursue because they are not sufficiently defense related.

Was the shift from ARPA to DARPA a focus change or a process change? The process argument is that after 1972, increased oversight and friction killed DARPA’s ability to create outlier results. The focus argument is that we just don’t see or appreciate as many of the things that come out of DARPA because they are specifically for military use and have less broad applicability. The focus vs. process question is important because it determines whether there is anything to be learned from how DARPA works today. The visceral excitement of (recent) former program managers talking about their time at DARPA suggests that studying the current organization is more than just cargo culting.


Upshot: Pay attention to DARPA’s informal process and scrutinize formal process. For the most part there have been few changes in the weird things about how DARPA’s program managers work compared to other organizations, the incentives and structure of the organization, the funding, and the general shape of the process. So it is not worthless to study the modern organization. Regardless of whether or not the changes in formal process are the reason that most of DARPA’s outlier output happened before 1972, they certainly didn’t help. Therefore, if you want to replicate DARPA’s outlier success, it makes sense to pay attention to DARPA’s informal process and for the most part ignore formal process. Formal process is put into place to increase oversight and decrease reliance on trust: it lets people outside the organization trust the process instead of the people. Ignoring formal process makes the success of an organization depend more on trust in people. This trust-dependence means that an organization seeking to replicate DARPA’s success must start with trust from funders and collaborators outside the organization. That creates a chicken-and-egg situation – by definition, new models and organizations do not have a track record, and the people most likely to create a new game are the ones who haven’t won at other games.


This document is not meant to be a primer on how the ARPA Model works or a history of DARPA, although it draws on both. People have written thousands of words on both⁴. This document is not meant to be an unbiased presentation of facts or to argue for a general thesis about the world or institutions (though it hints at some.) The goal of this analysis is to acknowledge that DARPA is a massive outlier, figure out which emulatable attributes contributed to its outlier results, and synthesize them with the explicit intention of creating more organizations that can enable more outlier results. More than anything else, this analysis is meant to guide my own action, not to be tossed into the sea of the Internet for some unknown policymaker to act on (though that would be nifty.)

The nature of outliers means that you can’t do a data-based analysis. Perhaps controversially, I think it’s foolish to try to find patterns among outliers because outliers are outliers in different ways except in the broadest strokes. Instead, I dig into the characteristics that seem distinctive and ask why they might lead to outlier results. At the end of the day, this is storytelling. But stories are powerful. Of course, it’s easy to take this approach too far and create a story about how anything weird about an outlier contributed to its success: “Steve Jobs only wore one outfit, which is important because it allowed him to have EXTREME FOCUS.” I try to avoid the lure of story first by focusing on why the pieces lead to outlier performance in combination. Many people who try to replicate outliers fail by picking and choosing the pieces they want to replicate. Second, the goal here is to figure out how to make something that works, not to create a cute theory. I hope that the desire to enable action has incentivized me to be intellectually honest.

Is it foolish to try to replicate outliers? Every outlier is an outlier in a different way, so on its surface it seems like the answer is yes. Concretely, many people have set out to build the next Apple by emphasizing design and perfectionism and fallen flat on their faces. People have even tried to replicate DARPA: IARPA, ARPA-E, and others are explicitly modeled on it.

There are three mistakes you could make when trying to replicate outliers:

  1. Cherry picking characteristics without understanding why they’re significant or the magnitude of their effects
  2. Copying at such an abstract level as to be useless
  3. Trying to clone an organization whole cloth

Instead of trying to copy an outlier exactly or only in broad strokes, the goal should be to deeply understand how it works and then riff on that understanding for your own situation. If it helps, imagine a Jenga tower made up of the outlier’s attributes. While not all the attributes are important, many are, in an interconnected way. We need to poke each block and see which ones are structural. While few people have become outliers by copying other outliers, plenty have become outliers by apprenticing to other outliers (Ben Graham and Warren Buffett, Edison and Ford, etc.) You could think of an outlier analysis of an organization as a type of organizational apprenticeship.

The evidence also suggests that while it’s hard, replicating outlier success is not impossible. IARPA has arguably been reasonably successful – David Wineland’s Nobel-prize-winning quantum computing work was sponsored by IARPA, and I suspect that IARPA is positioned to be to quantum computing what ARPA was to personal computing. Since the ARPA execution framework boils down to showing that a thing is not impossible, showing that it is possible, and then making it exist, the IARPA proof point that riffing on the ARPA Model is not impossible is a good first step.

I lean heavily on internal linking within this writeup. Partially the links are an attempt to illustrate how interconnected the different components are, in order to avoid the cherry-picking failure case. To avoid cargo-culting abstraction, I try to explain why each component matters, erring on the side of speculation rather than on the side of “just because.” These deep, interlinked explanations will hopefully enable me and others to see where components could be changed for different applications, without copying the whole thing and expecting it to work.

If you trust me to do the analysis and want to skip straight to the conclusions here you go. I heavily linked them as
well, so you can just dig into the ones that make you go 🤔

I play fast and loose with the term “risk” – I use it mostly as shorthand for “chance of not achieving goals.”

Economist Tyler Cowen has a habit of asking guests on his podcast about their “production function” – how do they do the amazing things they do? This is an attempt to distill the DARPA production function.

Every single description of the ARPA Model agrees that it’s all about the program managers.

At the end of the day the ARPA Model depends on badass program managers, which mirrors the obsession with “talent” in other disciplines

When talking to people about how DARPA works you repeatedly hear “DARPA depends on amazing program managers.” Assuming this isn’t just a suitcase-handle word (and I suspect it’s not), why is this the case?

PMs don’t get much structure when they’re designing and executing programs. The closest thing to a ‘program design framework’ I could tease out of former DARPA PMs is that they roughly model the tech council in their heads and organize their activities around answering the questions that ‘model tech council’ would throw at them. As we already noted, the best DARPA program managers are the ones who can look at an entire literature in an area and notice a systemic bias, which involves a very strong ability to think for yourself and go up and down the ladder of abstraction in an unstructured environment. To throw some more suitcase-handle words into the mix, PMs need to be good communicators and coordinators because so much of their job is building networks.

There’s a pattern that the abstract qualities that make “great talent” in different high-variance industries boil down to the ability to successfully make things happen under a lot of uncertainty. DARPA PMs strongly pattern match against that description. Two conclusions from that observation:

  1. The people who would make good DARPA PMs would also make good hedge fund analysts, first employees at startups, etc., so digging into people’s motivations for becoming a PM is as important as pinning down the precise details of what makes a PM good – otherwise you’re just trying to work with the exact same people as every
    other high-variance industry.
  2. When ‘talent’ isn’t code for ‘specialized training’ it means the role or industry has not been systematized. Therefore, despite all the talk here and elsewhere about ‘the
    ARPA Model’ we must keep in mind that we may be attributing more structure to the process than actually exists.

DARPA Program managers pull control and risk away from both researchers and directors

Control

PMs pull control away from directors by having only one official checkpoint before launching programs and pull control away from performers through their ability to move money around quickly. Control over direction and goals lies on a spectrum between performers – the people actually executing on the research and building with their own two hands – and directors – the people who run the innovation organization, be it a VC firm or DARPA.

On the performer control end of the spectrum are organizations like Howard Hughes Medical Institute (HHMI) that in essence say “ok performers – here’s a bunch of money – go for it!”

On the other end of the spectrum, director control looks like NASA’s manned space program, where the President has direct input into its direction. (Which is why NASA comes up with a 10-year plan every 8 years.)

Both ends of the control spectrum can lead to great results in the right situation. Performer control is effective for true exploratory research, where you can’t do better than someone with good intuition playing around. Director control is effective in scenarios like the Apollo Program or Manhattan Project, where you need a massive amount of resources and alignment from wildly different organizations like Congress, academia, the military, and companies.

In a way DARPA PMs add a second axis to the control spectrum, pulling control away from both the performers and the directors. PMs pull control away from directors by having only one official checkpoint before launching the programs and pull control away from performers through their ability to move money around quickly. It is relatively easy for DARPA PMs to re-deploy funding while it’s hard for someone to pull funding from the program manager. The ARPA Model doesn’t fall neatly into a bucket of “top-down” or “bottom-up”; instead, the PMs enable DARPA to do multiple levels of top-down problem generation and bottom-up solution generation.

Risk

In addition to concentrating control, program managers also concentrate risk. The program manager ideally creates a program where the risk lies in the synthesis of relatively low risk projects. This approach is different from either a normal funding agency or VC portfolio approach, where the performers bear the most risk and those risks are aggregated into a portfolio. DARPA PMs use seedling projects to ‘acid test’ the riskiest pieces of a program idea, so going into the main program, they should be pretty confident that what they’re paying performers to do is possible. Science policy researcher Anna Goldstein5 points out that the organization accepts the risk – and the PM takes it on – when the director approves a program on the advice of the tech council, not when the program manager gives out a grant.

How well does assuming risk at the level of the program instead of the performer work? Know When to Fold ‘Em asserts that only 10% of individual ARPA-E projects are terminated early. That number seems quite low given the technically aggressive nature of the programs, and while there are no official numbers on program failure rates, the program failure rate is undoubtedly much higher than 10%. There are no great ways to compare to grants, but an NSF program designed to support ‘high risk’ research had a ~10% success rate (where success was defined as producing transformative research). High project success rates and low program success rates suggest that program managers are able to shift risk to the program level. Why does this matter? Shifting the risk from the performers to the program enables DARPA to tackle systemic problems where other models cannot.

The best DARPA program managers notice systemic biases

“The best program managers can get into the trees and still see the forest.”

According to former PMs, DARPA’s outlier success depends on program managers who have the ability to look at an entire literature of a discipline and notice systemic biases. This essential but vague attribute of program managers is one of the reasons that at the end of the day, the ARPA Model depends on having ‘really good’ program managers.

Luckily this admonition lets us dig into two more precise questions:

  1. How do you find people who can uncover systemic biases in a discipline?
  2. How could you systematize finding systemic biases in a discipline?

Finding the sort of person who can find systemic biases in a discipline:

  • You want someone who has demonstrated expertise in a field, but isn’t acknowledged as an expert in that field. If you divide experts into three categories – ones with deep knowledge, ones with a
    platform, and ones with power – you want people in the first category who do not overlap with people in the second or third.
  • Alternatively, you could seek out people who are acknowledged experts in a domain but are fringe members or outsiders to the institution of that domain. Find the heretics.
  • You could try to find someone who has a track record of noticing systemic discipline biases. This one feels a little dangerous because in idiosyncratic areas, someone who points out
    problems with everything could rack up a few hits by chance.

All of these attributes mean that PMs may look a bit ‘off’ on paper, which emphasizes why it is important that DARPA is incredibly flexible with who it hires to be program managers.

Possible systems for finding systemic biases in an area:

  • Ask seemingly dumb “why” questions about everything.
  • Make assumptions explicit. Making assumptions explicit is hard because constraints and assumptions are not directly observable in the literature.

It might be possible for an organization hoping to riff on ARPA to build tools that make doing these things easier. This idea is not unprecedented – DARPA itself has a program to enhance program design (a metaprogram?) that resulted in the Polyplexus project. It’s directionally correct but leaves a lot of room for improvement.

A large part of a DARPA program manager’s job is focused network building

Rethinking the role of the state in technology development: DARPA and the case for embedded network governance points out that DARPA program managers act as ‘brokers’ and ‘boundary spanners’ – terms of art for people who connect relatively unconnected clusters of people. DARPA PMs network in the literal sense of creating networks, not just plugging into them.

The idea of focused networking is important. DARPA program managers have a clear purpose for building the network – first to generate a clear target in an area, then to solicit paths towards the target, and then to make sure that the people who are executing on those paths know about each other so small adjustments to the plan can happen as frictionlessly as possible. It is easy to network in an indeterminate way, hoping that the connections you make will be useful one day. Many people go about networking in this way.

“Networking” is just jargon for ‘building relationships with people who aren’t shoved in front of you by life,’ and PMs need to do this at every step of creating and running a program. What does this look like tactically?

In the first step – showing that a thing is not impossible – PMs need to find and become friends with (ideally) everybody who is working on ideas adjacent to the area where they want to have an impact, to get a sense of what’s possible. Realistically, PMs are able to get this access because they bring the possibility of funding. PMs also bring these people together in small groups to dig into which possibilities are not impossible and what it would take to make them possible. If the PM is doing their job well, the people at the workshops won’t already know each other and will continue to poke at ideas together on their own. The second step of the process – showing that a thing is in fact possible – leverages the network built during the first step to quickly do seedling projects.

During the third step of the process, PMs depend heavily on the network to send unique solutions their way. The PMs host performer days – small private conferences for all the people working on different pieces of the program. These conferences force people to talk about the work they’re doing while they’re doing it, which, while uncomfortable, enables people to share solutions to tricky problems and form more connections. You could imagine this leading to problematic correlation between results, because everybody would want to copy whatever the shiniest group is doing, but at this point all the performers are locked into whatever approach they’re trying.

You could think of DARPA PMs as playing the role of a manual, designed serendipity system. In this role, they both connect the right people at the right time and give them an incentive to help each other out.

In The Dream Machine, M. Mitchell Waldrop talks about how J.C.R. Licklider6 was always flying around to different university groups. In addition to bringing together all the people who thought they were crazy to be interested in human-focused computing, he also helped create several new lab groups.
Today, a new professor needs about $1.5m to get going. Interestingly, I haven’t seen any organizations devoted to seeding labs in a structured way beyond funding new professors who would have worked on those ideas anyway.

PMs also build networks of people in different classes of organizations – government, academia, startups, and large companies. Connections between different verticals are one way that DARPA has changed over time – in the 1960’s, large companies had much more academic R&D arms and startups were barely a thing. By bringing academics, startups, and big companies together, modern DARPA PMs are agents of Safi Bahcall’s admonition to “Manage the Transfer not the Technology.” That is, it’s more important to manage the transfer of the technology between different groups and organizations than it is to make sure that it is created in the first place. This shift in the PM role is important. Two obstacles to sci-fi tinged technology that I’ve seen over and over are, first: technology that needs actual manufacturing or high capital investment isn’t able to jump the gap from a lab to a startup prototype or from a startup prototype to a real scalably manufactured product; and second: many pieces of technology are amazing but don’t warrant an entire VC-backed startup on their own. It seems like building connections between different organization classes while the technology is being built could address both problems.

In a way, the program manager acts like a product manager in a tech company, talking to the customer and modeling their needs. For DARPA, the military is the customer, but the difference is that the ‘product’ isn’t something that can be purchased (TRL 9) but something that is proved out enough that other branches of the military will scale it up.

DARPA PMs need to think for themselves, be curious, and have low ego

Yes, on its surface this sounds like a platitude, but because so much of the ARPA Model revolves around program managers, it’s worthwhile to dig into why the personality traits actually matter.

PMs need to think for themselves because at the end of the day, everybody they talk to has only a piece of the puzzle, so the PM needs to both put the pieces together and precisely argue for the feasibility of the final picture. When you are surrounded by smart, opinionated people, the easy options are to either accept what they’re saying wholesale because it’s eloquent and well-thought-through, or reject it outright because it sounds crazy or goes against your priors. Thinking for yourself allows you to avoid both traps.

PMs need to be curious because building a complete picture of a discipline requires genuine curiosity to ask questions nobody else is asking. Additionally, the ‘serendipity hatch’ during program execution – there is always an open call for ideas – means that a program manager needs to be open to random outside solutions. Many people falsely claim they are curious, but in my direct experience DARPA PMs will have an earnest discussion with people who approach them about the craziest ideas.

Scientists working on DARPA programs usually describe PMs as funding ‘their idea,’ in a proud way. At the same time, people looking at a DARPA program from the outside describe the program as clearly the PM’s idea. A high ego would lead to a program manager imposing their will on every piece of the program, killing curiosity and the benefits of top-down problems and bottom-up solutions.

DARPA is incredibly flexible with who it hires to be program managers

Unlike most government roles, there are no hard requirements on the sort of people who can be hired to be program managers, to the extent that there are legal provisions in place that let DARPA bypass normal government hiring rules and procedures. This is important because PMs are the sort of people who are in high demand, so you need the ability to get people right when they’re available or work around their constraints. Additionally, the PM’s ability to notice and call out systemic biases in a discipline is an attribute that is likely to make them clash with established structures and thus not be as heavily credentialed.

Hiring flexibility is also important because the bar for program managers is very high and the compensation is pretty low for their level of skill, so it’s just straight up hard to find people who would be both good at and willing to do the job. The only flexibility you have in that situation is on credentials and profile.

Bureaucracies ensure consistency through rules, so in bypassing those rules DARPA has no safeguards against hiring a terrible program manager. The lack of guard rails makes DARPA extremely dependent on the principle that ‘A players hire A players and B players hire C players.’ It’s a double-edged sword. One reason hiring rules exist is to fight people’s tendency to hire people like themselves, a tendency that leads to organizational inbreeding – which DARPA has been subject to in the past.

DARPA Program managers have a tenure of four to five years

Explicitly temporary tours of duty may also be one way that DARPA is able to get amazing PMs where they wouldn’t have been able to otherwise. If you have many options in life, it’s more palatable to go
into a position that is explicitly designed around the expectation that you can have maximum impact in a few years and then go do something else.

The transient nature of program managers also makes them more immune to most of the effects of asymmetric career risk because there’s only one clear objective – make the program work. They are still subject to the bias of giving grants disproportionately to people they trust – the familiarity bias that pushes granting committees to give grants to well-established, known researchers. However, my hunch is that biases towards trust and exposure are more solvable problems than the pressure to show that money was ‘well spent.’

Abstractly, the explicitly temporary nature of program managers allows DARPA to maintain alignment with them, because alignment between people playing different games can happen on finite time scales but rarely on infinite ones. Unlike many organizations, there’s no implicit tension between the knowledge that most people will leave eventually and the uncertainty about when that will be.

People and organizations are all playing some game that has different ways of gaining status and power. Maybe there’s something about PMs’ transient nature that doesn’t allow them to play long-term games or figure out how to game them. There’s something to the fact that program managers are not just transient, but transiently changing games. An academic who becomes a program manager isn’t going to worry about publishing papers. A military officer who becomes a program manager isn’t going to worry about impressing their direct superiors. Playing a completely different game enables them to focus on the job at hand.

Relatively frequent program manager turnover means that there is also turnover in ideas.

The transient nature of program managers was only codified in the ’90s because program managers were sticking around for much longer than five years. This codification suggests that the transient nature of
program managers is more than just a historical artifact.

Speculatively, there’s something about a culture that knows people have an explicitly short tenure that might actually maintain quality. You see this in lab groups (possibly military units?) and fraternities that maintain the same culture for decades. Intergenerational culture is like a standing wave.

Why do people become DARPA Program managers?

From a career and money standpoint, being a program manager seems pretty rough. There’s no promotion, no career stability, you could make more money elsewhere, you need to move to Washington DC, and you often don’t get to show off what you did after you’re out.

Possible Reasons

  • People get frustrated with the incremental/conservative nature of academia. Ben Mann, a former PM, cited this as his reason for joining.
  • The prospect of getting to control a lot of money without a ton of oversight appeals to some people. This might suggest that the profile of someone who would be a good PM may be someone who is highly technical
    but finds VC appealing. Of course, you would need to filter people for whom controlling a lot of money is the
    only reason they’d be interested. 
  • Patriotism – many PMs see the role as a way to serve their country. Obviously, this option isn’t available for implementations of the ARPA Model that aren’t associated with a single government, but
    it raises the possibility that you could find people who want to serve some cause in the same way that others serve their country.
  • Some people may like the explicitly temporary nature –it can serve as an exciting, high impact break or pivot.
  • There is a particular technological vision they want to see out in the world and DARPA gives them the agency to make it happen in unique ways.
  • Being a PM can gain you the respect of a small group of peers who will know what you did. Many people are motivated more by the respect of people they respect than by recognition from many people they don’t
    know.

The government’s influence on DARPA is buffered by opacity and the director

Like almost all government agencies, DARPA answers to the executive branch and receives funding from Congress.

There are many parts of the US government that are heavily influenced by term-level timescales. For example, NASA comes up with a 10-year plan every 8 years for some weird reason. Several DARPA programs operated from at least 2012 or 2013 to 2020, which means that they successfully crossed administrative boundaries. However, DARPA’s immunity to term-scale incentives may have begun to break down – starting in 2001, DARPA directors began to sync up with presidential administrations.

Because DARPA Program managers pull control away from both researchers and directors, the Department of Defense itself
doesn’t have direct control over which programs DARPA is running. However, the Department of Defense does put pressure on the DARPA director to work on areas that are currently relevant to the military, like counter-terrorism and
insurgent warfare in the 2000s or jungle warfare in the early 1970s.

An opinionated director is the linchpin keeping DARPA focused on long-term things that nobody is asking for (yet). At several points in its history, the director went to bat to keep DARPA from being dissolved or absorbed into the military as a more “normal” R&D organization. This secret meta-dependence on the director makes the increasing politicization of the role potentially detrimental to DARPA, and there is evidence that DARPA has shifted away from long-term disruptive work in conjunction with the probable politicization of the director.

Originally Congress funded ARPA through a lump sum, but over time it has shifted to requiring a budget for each program, including what the program plans to accomplish for the year7. You know this is bullshit because the plans have statements like “Advance development of design tools for the optimization of collaborative problem solving performance in human-machine systems and systems-of-systems.” In 2003, Congress investigated and effectively defunded the Total Information Awareness Program – an expensive surveillance program riding on the heels of September 11th.

Abstractly, opacity seems important because if your source of money is constantly looking over your shoulder and judging what you’re doing, you’re going to take actions that look good. A source of money like Congress that has many opinions exacerbates the incentive to ‘play it safe’ because making sure any action you take is acceptable to a committee is almost as bad as the committee determining those actions. Decisions made by committees lead to median results. Coincident with Congress digging more into program specifics, more and more DARPA programs are becoming classified. It is complete speculation, but this hints at a tension between DARPA’s desire to keep things opaque and Congress’ desire to have oversight, thanks to a trust breakdown possibly tied to increasing politicization.

Over time, DARPA has become less opaque and the director has become more coupled to the current administration. If you buy the argument that it’s important to pay attention to DARPA’s informal process and ignore its formal process, this leads to the conclusion that opacity is important to DARPA’s outlier success and DARPA’s director is important.

Opacity is important to DARPA’s outlier success

Opacity removes incentives to go for easy wins or to avoid being criticized by external forces. Reporting requirements also add friction around everything from hiring to changing direction to trying crazy things to
moving quickly and more.

Of course, opacity can also be abused: getting nothing done, dumping money into stupid projects or unnecessary expenses, giving contracts to performers with whom you have a special relationship, or just straight up
stealing. Opacity can be abused in too many ways to list and prevent, which means that a better strategy is to incentivize people not to abuse it.

How does DARPA incentivize people not to abuse the opacity?

DARPA is relatively tiny and flat, so it’s actually possible for everybody to know everybody. This means that everybody knows what everybody else is up to, and the mechanism of peer pressure can come into play to make sure that opacity is not abused.

DARPA Program managers have a tenure of four to five years, so they’re out regardless of their performance. In my experience finite tenures either make people want to utterly crush it or not care at all. One way to push for the desire to crush it is simply finding self-motivated people with a lot of integrity who really care about what they’re doing. Of course, this strategy piles even more weight onto needing awesome program managers. Another way to motivate people to make the best of a temporary assignment is to enable them to be as effective as possible. The criticality of effective PMs emphasizes how important it is to minimize even small frictions – process can kill effectiveness through a thousand cuts. This observation suggests that overregulation could lead to a self-reinforcing loop: PMs feel less effective, which pushes them to the ‘not caring’ end of the behavior spectrum for temporary positions, which in turn increases the perceived need for regulation. Obviously that’s an extreme scenario, but it emphasizes the need for opacity. In a way it’s like a prisoner’s dilemma.

Nominally venture capitalists are incentivized not to abuse their opacity because of carry. Program managers don’t have that incentive.

So put yourself in the situation of a Program Manager: you can’t get promoted, you’re out in five years, nobody will know what you’ve done whether you succeed or fail, and you’re surrounded by
people who are complete ballers working on amazing things. You can either sit on your butt and do literally nothing, or go all out and try to make something amazing. There doesn’t seem to be any incentive to either hedge bets or make it
look like you’re working when you’re not.

The big question then becomes: how do you incentivize people to get into that situation in the first place? Bluntly,
the incentives to do amazing deeds once you’re part of the organization do not make a compelling sales pitch to join.

DARPA does multiple levels of top-down problem generation and bottom-up solution generation

DARPA completely sidesteps the always-important question “is top-down or bottom-up better?” by doing both.

In some ways, the program manager is the ‘bottom-up’ component. DARPA does an initial pass of top-down direction when it hires someone: the director or departing PM is usually looking for someone to go after a vague area or problem, rather than hiring someone just to fill a slot or because they’re great and they want them on staff. The program manager then proposes a more explicit solution to the high-level problem area and gets it signed off by the director.

In other ways the program manager is the ‘top-down’ component of top-down problems and bottom-up solutions. When they are doing initial program design (i.e. steps one and two of ‘show it’s not impossible, show it’s possible, do it’) they talk to many researchers about possible solutions and solicit project ideas. During the actual execution phase, while they have a plan and an idea of who they want to deliver on it, they still put out a broad call that anybody can respond to, to at least nominally keep the door open to other bottom-up solutions.

The transient nature of most people at DARPA enables ideas to be revisited

DARPA Program managers have a tenure of four to five years. Similarly, before 2001, directors did not stick around much longer. Obviously institutional memory lasts longer than any individual, but memory of any specific program dies out quickly: DARPA is relatively tiny and flat, so there is only one PM running any program, and people pay much more attention to their own work than other people’s. This transient memory about failed programs means less friction around creating a program that ‘has been tried before’ and makes it more likely that DARPA can explore ‘unexploited Leonardo Ideas’ or ‘gems in plain sight’ – ideas that are good but before their time.

A former program manager described how there have been multiple instances of program managers starting successful programs that they discovered later were almost the same as failed programs from the past. If there were enough people around who had seen the failure, the program manager would have had to not just argue for the validity of their program on its own grounds, but also address the shadow cast by the failed program. Of course it’s rational to try something that failed in the past as long as you explicitly call out why you will succeed where it failed, but small amounts of friction can have large effects. Essentially, the transient memory about specific ideas allows them to be judged on their own merits instead of carrying baggage from the past.

It’s entirely possible that short institutional memory about specific programs could lead to people trying terrible ideas over and over again. You see that with certain forms of government, for example. But if there are high epistemological standards (enforced by the tech council, informal feedback from other PMs, and the ability to do seedling projects) and there are no externalities to the failure, trying things over and over again has capped downsides. Additionally, the institutional memory is short, but not that short – you only get a complete turnover once every ~five years, which is enough time for the constraints in the world to change.

In a way, short institutional memory about programs is the temporal version of how federated systems like Renaissance Europe prevent disruptive changes from being completely damped out, as opposed to centralized systems like Imperial China.8

DARPA is relatively tiny and flat

As of April 2020, there are 124 staff and three layers. That number is right around Dunbar’s Number – just small enough for one person to know everyone in the organization. It’s not explicitly designed that way, but I suspect that it emerged as the right size for the Director to be able to vaguely pay attention to everything that is going on and for everybody in the organization to have vague context on what everybody else is working on. There are three layers – the Director, office directors, and program managers (with a few deputy directors thrown in). Culturally and physically DARPA is set up so everybody can talk and seek advice, so there is a lot of room for idea cross-pollination.

DARPA’s size may help minimize internal politics. The returns to politics are higher at large organizations because there is less direct accountability and any one person doesn’t have a lot of agency to change outcomes; low potential impact makes it more worthwhile to get people to like you than to produce results. On the flipside, at DARPA any program – and therefore any program manager – can potentially create an industry. The small organization means that there are no “group projects”: every PM is directly responsible for the program they’re working on, so you can’t have slackers.

I suspect the small size also helps keep PM quality high. Quality tends to slip when there is both flexibility in who you can hire and there is pressure to hire a lot of people. Since the flexibility piece is essential, it becomes important to not have pressure to hire more.

The downside of the tiny flat organization is that there don’t seem to be any checks on the Director’s power, so if they don’t like a program or a person they’re out. This is why politicization
is dangerous. Politicization would lead to the director prioritizing aligning their goals with factors outside of the organization and PMs aligning with the director more than with the organization’s nominal goal.

DARPA employees aren’t paid very much compared to what they could be

In 2017, a DARPA director’s salary was estimated at $76–130k per year and program managers’ at around $90k. While DARPA does have leeway over salaries compared to the rest of the government, it still can’t offer the same compensation as large tech companies.

The people with the right qualifications could be making much more. Experienced technical PhDs, even outside of software engineering, can reasonably expect $170k+. Unfortunately, the things that make “great talent” in high-variance industries boil down to the ability to successfully make things happen under a lot of uncertainty.

This raises the question – was the gap smaller in the ’60s? According to the Bulletin of the United States Bureau of Labor Statistics, in 1960 a PhD mathematician could make ~$11,000 in industry and $8,955 in government. That’s roughly $96,000 and $78,000 today. So yes, the gap appears to have been much smaller.
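
The conversion above can be sanity-checked with a quick sketch. The ~8.7x multiplier is an assumed round figure for CPI inflation between 1960 and today, chosen to match the essay’s numbers; the exact factor depends on the index and end year used.

```python
# Rough sanity check of the 1960 -> today salary conversion.
# CPI_MULTIPLIER is an assumed illustrative figure, not an official statistic.
CPI_MULTIPLIER = 8.7

salaries_1960 = {
    "PhD mathematician, industry": 11_000,
    "PhD mathematician, government": 8_955,
}

for role, salary in salaries_1960.items():
    adjusted = salary * CPI_MULTIPLIER
    print(f"{role}: ${salary:,} in 1960 ~ ${adjusted:,.0f} today")
```

This reproduces the ~$96,000 and ~$78,000 figures quoted above to within rounding.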

There is a positive and a negative argument for low pay. The positive argument is that it weeds out people who are just in it for the money and would treat the role as ‘just a job.’ The negative argument is that low pay weeds out people who have some lower bound on what they think they’re worth. I can only assume that the widening gap since the ’60s has exacerbated the latter.

The low pay makes the question “Why do people become DARPA Program managers?” even more important.

DARPA’s aversion to people with a web presence may be how they avoid asymmetric career risk

If you try to find current and even former DARPA PMs or directors on the Internet you’ll have a hard time. DARPA avoids hiring people with a significant web presence. In the 21st century, that’s remarkable
and specific enough that it is worth digging into.

Asymmetric career risk happens when you know that the downsides of doing the ’safe’ thing are capped while the downsides of doing the ‘risky’ things are uncapped. Therefore it makes sense that
you would be more willing to make risky moves in an institutional context if you know you aren’t going to be judged on your actions either way after you leave.

People with a web presence tend to be focused on playing status games, or at least live in a world where they realize their career depends on public perception of their output. Internet people are often playing a game to maximize engagement. So being an Internet person could be taken as a signal that a potential PM would have “what will other people think about this?” in the back of their mind.

People are more likely to judge a crazy act positively if they know the reasons why it happened. So if the people you care about are a small group of peers rather than the whole Internet, it increases the incentive to just go for it.

Additionally, less Internet presence means fewer grand announcements and fewer expectations. If you know it’s easy to shut something down, it’s easier to start. Contrast this with something like Peter Diamandis and the X-Prize, where there is a huge announcement and massive expectations, which can quickly lead to institutional distrust when expectations aren’t met. My impression is that most people don’t take the X-Prize seriously anymore because the ratio of smoke to fire keeps increasing.

Of course, the explanation could be simpler: DARPA just doesn’t like people who are too loud because it likes to keep things hush-hush, and opacity is important to DARPA’s outlier success. But then maybe there is something to the hush-hush itself – if you keep things quiet, you don’t have to deal with lock-in from publicly announcing you’re going to do something.

DARPA doesn’t do any research in house

There is a ton of literature on how DARPA externalizes research, so I want to dig into some subtle advantages of externalizing
research:

  • It enables DARPA to be relatively tiny and flat.
  • Actual cutting edge research may require rare equipment or knowledge. There are many pieces of equipment or tacit knowledge that only exist in one or two places in the world and it’s easier to access them
    through finite projects than purchasing or hiring them.
  • It enables strong accountability because for any program there is exactly one responsible person.
  • It enables program managers to have multiple teams working on the same objective without internal politics.
  • You don’t have to lay anyone off or find new work for people when you change direction.
  • You can get people to work on projects who might not want to work for the org full time.

Of course there are also disadvantages:

  • There are higher transaction costs in making sure deliverables are created, finding the right people in the first place, paying administrative overhead at the other org, and convincing them to work on projects.
    Coasian tradeoffs everywhere.
  • DARPA doesn’t have any control over the ‘intellectual exhaust’ created during projects, which means it can neither do direct technology dissemination nor capture the value that exhaust creates.

It’s easy to conflate ARPA with Bell Labs, Lockheed Skunkworks, or Xerox PARC but they’re fundamentally different because R&D orgs lie on a
spectrum between externalized and internalized and DARPA is all the way on the externalized end, while those other orgs are much closer to the internalized end. A common trap for people claiming to replicate DARPA is internalizing the majority
of their work.

DARPA has many tight feedback loops

From day one there is a feedback loop between a new program manager and the other program managers. ‘Onboarding’ at DARPA is an informal process that consists mostly of oral tradition and shadowing. Informal onboarding is normally a no-no when it comes to best practices, but most organizations don’t limit themselves to Dunbar’s number. Program managers get significant informal feedback from other PMs as they hone their program designs. This feedback includes whether something is ‘in scope’ for the organization and whether the design is precise enough. High-quality informal feedback depends on other PMs actually caring, so this is another reason DARPA PMs need to think for themselves, be curious, and have low ego.

There are multiple levels of feedback loop between the PM and the research community during program design. The tightest loop is informal conversations with individual researchers and groups about where they see
blockers and possibilities in the area that the PM can then incorporate into their program design. The PM then uses workshops to set up feedback loops between those possibilities – an idea might be good but an enthusiastic proponent could gloss
over a major hole that needs to be filled. Finally, small projects test the riskiest parts of those possibilities against the world before the PM incorporates them into the program design.  The workshops also set up feedback loops between
previously unconnected groups of researchers focused on the program area that hopefully create new ideas and (ideally) new research communities.

PMs also set up feedback loops between different parts of an industry – academia, startups, and big companies. Setting up these feedback loops with researchers across a discipline is why a large part of a DARPA program manager’s job is focused on network building.

Feedback loops enable a PM to adjust, kill, or start projects as necessary during the execution of the program. Frequent contact with performers working on a project enables both the performer and the PM to incorporate new information into decisions on whether they should make small adjustments to the project’s goals and how those adjustments propagate into the rest of the program. PMs also use this information to decide whether to kill the project completely or start a new one. Failed projects are also part of a feedback loop: “The failure (of a performer) triggers a discussion in which the first question is whether the goals were correctly specified and how they might be redefined in the light of the research that has already taken place.”

The ARPA Model also uses long-term feedback loops to maintain high quality PMs and research collaborations. A players hire A players and B players hire C players so high quality program managers make other high
quality program managers want to join. Low friction interactions with DARPA and funding that enables performers to do things they wouldn’t be able to do otherwise keeps ideas coming from the community.

Even PM hiring is part of a feedback loop between the organization and the outside world. PMs are hired to explore a specific area that DARPA wants to explore. Even though it works on technology that nobody knows they need yet, DARPA needs to be responsive to changes in the world – whether new possibilities (technology push) or new needs (technology pull). This buffering between the outside world and hiring is one of the ways DARPA’s director is important.

A lot of DARPA’s budget is spent on assembling weapons and vehicle systems, so the math works out that only ~$400m actually goes to ‘basic research.’

These raw numbers raise two questions:

  1. Is it essential that a research organization is a small fraction of the money factory’s budget? That is, can an outlier producing organization only exist as a small attachment to a bigger organization? You see
    this pattern repeated over and over.
  2. Is there a ‘minimum effective budget’ to generate outlier results? If so, what is it?

DARPA is more ideas limited than money limited

From “What makes DARPA tick”:

“I never really felt constrained by money,” (former DARPA director) Tether says. “I was more constrained by ideas.” In fact, aerospace engineer Verne (Larry) Lynn, DARPA’s director from
1995 to 1998, says he successfully lobbied Congress to shrink his budget after the Clinton administration had boosted it to “dangerous levels” to finance a short-lived technology reinvestment program. “When an organization
becomes bigger, it becomes more bureaucratic,” Lynn told an interviewer in 2006.
 

(Let’s just take a moment to stand in awe of this 😮)

Both the explicit desire to avoid bureaucracy and the low marginal benefit of money and people are further reasons why DARPA’s small size is important.

Why is DARPA more ideas limited than money limited?

  • At any time there are only so many ideas in the world that fall into the sweet spot of just hitting an inflection point on an s-curve – having enough potential that a concerted effort will get them to a take-off trajectory but not so obvious that everyone is piling resources into it.
  • Almost by definition, the more of a Big H hard problem it is (i.e. one that needs a breakthrough instead of just more engineering effort), the less throwing money at it will help.
  • Unlike startups, DARPA is not in the business of dispersion. When you have something that scales, more money means more results by definition; DARPA’s work, by contrast, mostly isn’t at that stage.
  • Something about the marketplace for ideas – once people realize that something is a good idea, DARPA money is less useful.

VC firms experience a similar effect –they tend to have lower percent returns as funds get bigger even though they *are* in the business of scaling things.

Presumably DARPA also has a lower bound on effective funding. What might that be?

DARPA funds wacky things that go nowhere

Many DARPA programs that don’t go anywhere and sometimes sound stupid in retrospect. This list is meant to show how wacky some of the programs sound, especially in “Explain it like I’m Five
English.” Because you can’t cut off just one tail of a distribution, funding these wacky things is essential to getting outlier results.

  • Sonofusion – creating fusion through sound waves (so wacky that DARPA was actually investigated for wasting money)
  • Trying to solve cancer by having a computer read a bunch of papers and propose mechanisms for attacking the cancer’s metabolic pathways
  • Delivering gene modifying viruses to plants via insects
  • Quantum effects in biological environments
  • Remote controlled insects
  • Jetpacks
  • Earthworm robots
  • Taking over/Recycling satellites
  • Turning plant matter into jet fuel
  • Chemical interventions to reduce stress
  • Encoding information in photons
  • Literally trying to build a full memex (remember, the memex is the philosopher’s stone of computer science)
  • Central nervous system interfaces
  • Technology to climb vertical walls like Spiderman
  • Bomb detecting bees

Program Managers have the ability to deploy money without much overhead

For seedling projects, as long as the budget is below roughly $500k, program managers can just write a check.

Modern program managers need to get approval from their director to write larger grants and they need to work through official government vehicles like open grant calls and government websites. 😭 However, the
PM is the only person reading the grant proposals and they don’t need to check with anybody before deploying the money once it’s been allocated. This process is still fast compared to other funding agencies like the NSF where there
is literally a committee that deliberates over grant applications. Even large companies or universities require you to submit a request before spending a large chunk of money.

In the past there was almost no oversight over PM spending after the director authorized the money. This is how J.C.R. Licklider was able to “Johnny Appleseed” computing groups all over the country in only
a year. The transition from ARPA to DARPA was coupled to more oversight on military spending, which inevitably introduced more overhead.

The ability to deploy money without much overhead is important for many reasons including making it worthwhile to write smaller checks, opening the door to more collaborators, and just plain moving fast. If it is
an equal amount of pain for the PM to write a check regardless of size and the same amount of pain for a performer to apply for any amount of money, there will be some threshold below which the money is not worth the effort. Small amounts of
friction can have large effects! High overhead means only larger projects will happen. Larger projects take more time and are more serious, so if all projects were large it would kill two critical pieces for getting to breakthroughs:
feedback loops and play. High-overhead, formal grant applications weed out many potentially useful collaborators, such as people who don’t understand how the grant process works or who are terrible at writing grants (both of which are unrelated to their ability to do good work).
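
The threshold argument above can be sketched as a toy model. The $30k overhead figure is an illustrative assumption, not a real number; the point is only that any fixed application cost creates a floor below which grants stop happening.

```python
# Toy model: a fixed application/overhead cost sets a floor on viable
# grant size. The overhead figure is an assumption for illustration.
OVERHEAD_COST = 30_000  # assumed fixed cost (time, admin) per grant

def worth_applying(grant_size: int, overhead: int = OVERHEAD_COST) -> bool:
    """A grant is only worth pursuing if it exceeds the fixed overhead."""
    return grant_size > overhead

# Under high overhead, small checks vanish; only large projects survive.
for size in (10_000, 50_000, 500_000):
    print(f"${size:,}: worth applying = {worth_applying(size)}")
```

Lowering the overhead term is exactly what re-enables the small, fast, playful projects the paragraph argues are critical.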

The ability to move quickly is important because often the money is going towards keeping the lights on or a grad student in the lab, so if an organization can’t get the money quickly they are going to work on a different project and become unavailable even if the money is available later. Fast money allows a PM to quickly act on new information and adjust the trajectory of the program, which can make it more likely to succeed. There is also something intangible about the feeling of ‘momentum’ that you can’t get if you have to constantly go over road bumps.

Of course, the ability to deploy money quickly requires high trust in both the PM’s integrity and their judgement. Restrictions on spending money happen when you reach trust limits. Overhead exists for a rational reason – money can be embezzled, and whoever is providing the money wants to know that it is being well spent. Yet another reason it circles back to “you need to have really awesome PMs.”

It is relatively easy for DARPA PMs to re-deploy funding

DARPA PMs have the ability to pull funding built into contracts with performers, which means that they can quickly move money away from an approach that isn’t working and into an approach that is. You would expect people to be hesitant to work on something risky if they know that the funding could be pulled quickly. Anna Goldstein pointed out that the individual pieces of a program are actually less risky, because within a project the goals can shift; instead, the PM takes on the risk of the entire program failing.

It’s not clear whether easy fund-pulling was always part of DARPA or whether it was introduced as part of the formal process around deliverables. However, it seems worth considering it part of the ARPA Model and replicating it, for two reasons.

First, it makes sense that easily re-deployed funds would increase willingness for funding wacky things that might go nowhere because if you know it’s easy to shut something down it’s easier to start.

Second, it increases the contrast between DARPA and other institutions that give out money, like pure grant-giving orgs and venture capital. Organizations that give out long-term grants, like the National Science Foundation or Howard Hughes Medical Institute, need to either consider proposals very carefully, which leads to death by committee, or lean on trust in a researcher’s experience.
Either way, it slices out a large amount of idea-space. Venture capitalists write checks that are meant to fund a company for 18 months or more. This timescale means that they can end up with a lot of sunk cost in a company that is clearly not
doing well. That possibility rationally leads to more risk aversion and when it inevitably happens it can burn time and resources trying to help the struggling company.

Mechanically, DARPA sets up easily-redeployed funding by using contracts instead of grants for most research and putting goals in the contracts that are almost impossible to hit. If the performer doesn’t hit the goal, it is at the PM’s discretion whether to cancel the contract.

Executing on this framework is different for every case, which is why PMs need to be extremely competent, able to think for themselves, and trustworthy. However, it’s still worth going a few rungs down The Ladder of Abstraction into what this means.

1. Show that the thing is not impossible

At the end of this step, a PM should have a concrete vision of where the world could go, evidence that ‘on paper’ it doesn’t violate known physics, and a precise list of experiments they would want to do before putting it in front of the tech council and subjecting it to the Heilmeier catechism. J.C.R. Licklider’s “Man-Computer Symbiosis” is the classic example of a concrete vision. Tactically, PMs get to this point by first talking to researchers in a domain about what they’re working on and starting to synthesize possibilities into a vision of where the domain could go. They also get small groups of researchers together to hone ideas against each other and figure out where the key constraints lie. Massive constraints imposed by the laws of physics can kill the program right there, but if there is uncertainty that can be tested and resolved for a few hundred $K, the PM will do that.

2. Show that the thing is possible

At the end of this step, a PM should have demonstration-based evidence that creating the vision is possible and a roadmap for how to get there. The evidence and roadmap need to hold up to scrutiny by well-intentioned experts. Tactically, PMs get to this point by bringing together small groups of researchers to map out precisely what the pieces of the puzzle look like, which pieces are the most risky, where the unknowns are, and what experiments could resolve the biggest risks. They figure out where you can’t get more precision without experiment and do those experiments as part of seedling projects. The PM lays out where the biggest blockers are along the roadmap and figures out the different approaches that could remove those blockers.

3. Make the thing possible

This is where the PM spends the majority of the time and money in the program, and this step can potentially last through the tenure of multiple program managers. During this step PMs fund different groups to work on different pieces and approaches to the problem. The PM makes sure that the groups working on different pieces are communicating through both formal and informal channels, to minimize getting stuck and maximize new ideas. PMs adjust frequently, killing approaches that aren’t working and rerouting funds to new approaches that come up.

The initial exploratory tranche of a DARPA program is approximately $1.5m

Most of this money goes towards small seedling projects. These seedling projects are small grants or contracts designed to help establish both that an idea is not impossible and that it is in fact possible within the program’s scope. Intriguingly, this is in the same ballpark as a large seed round for a startup or the amount of money you need to set up a lab.

The ‘framework’ DARPA PMs use to create a program is to mentally model the tech council

According to PMs I’ve grilled on this question, the closest thing to a framework that PMs use to guide program design is “be able to explain precisely why this idea will work to a group of really smart
experts both in the area of the program and adjacent to it.” In short, PMs design a program against a mental model of the
Tech Council.  

The amount of risk the council is willing to give a thumbs-up on depends heavily on the office and its role. The Defense Science Office is much more risk tolerant than the Tactical Technology Office for example.
 The Tactical Technology Office actually builds prototypes and passes them off to the military, while the Defense Science Office works on more speculative projects.

PMs get informal feedback throughout the program design process on whether the program is likely to be approved by the council. It’s key that this feedback comes from people who are good models of the council as opposed to schmucks with an opinion, as is the case with advice in many other disciplines. I suspect this zero-consequence but tight feedback loop during the design process is important.

While the framework doesn’t have much structure, it does give PMs a very fixed end goal – answering The Heilmeier Catechism. The Catechism’s usefulness, despite the fact that it wasn’t created until after DARPA had many of its early wins, makes me suspect that it is a formalization of previous informal processes and is worth paying attention to.

The unstructured framework depends on DARPA PMs being smart and self-motivated because there is almost no explicit guidance and the feedback loop depends on self-motivation. Yet another reason why ‘awesome PMs’ are the bedrock of the whole system.

Every program at DARPA is intensely technically scrutinized by the tech council

The Tech Council is composed of people with technical expertise in the proposed program’s area and adjacent areas. The Tech Council Pitch Meeting is meant to be very high level but council members can ask deep
technical questions on anything.  The tech council doesn’t have any power besides advising the director on the program’s technical soundness. A purely advisory tech council seems like a good idea because it both avoids decision
by committee and keeps all responsibility squarely on the director and PM.

PMs should have ‘derisked’ the idea before going into the meeting by using seedling projects, workshops, and precise roadmapping. The way the tech council meeting was described to me, it’s roughly like a university seminar in a room full of people who you cannot bullshit and who have enough technical experience to dig into anything about the program. It’s important that the meeting be egoless and clearly focused on making the program as good as possible, because this sort of thing can easily go down a rabbit hole if people feel the need to show how smart they are or just destroy the presenter. The need for the tech council to ‘get it’ suggests that it should be composed of other PMs.

DARPA PMs use seedling projects to ‘acid test’ the riskiest pieces of a program idea

Seedlings are 3–9 month projects that cost $0.05M to $1M and are designed to “Move concepts from ‘disbelief’ to ‘mere doubt.’” This means that in an initial exploratory tranche that costs approximately $1.5m, you expect to run roughly a dozen seedlings.
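
As a rough consistency check on those numbers (arithmetic only, no new data):

```python
# If an exploratory tranche is ~$1.5M and a typical seedling costs
# between $0.05M and $1M, a dozen seedlings implies an average budget
# of $125k each -- comfortably inside the stated per-seedling range.
TRANCHE = 1_500_000
SEEDLING_MIN, SEEDLING_MAX = 50_000, 1_000_000
N_SEEDLINGS = 12

avg_seedling = TRANCHE / N_SEEDLINGS
print(f"average seedling budget: ${avg_seedling:,.0f}")
assert SEEDLING_MIN <= avg_seedling <= SEEDLING_MAX
```

The average also sits well below the ~$500k no-oversight threshold discussed below, which is consistent with PMs being able to write these checks directly.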

There is little oversight on the money spent on seedling projects as long as the budget is less than ~$500k. PMs don’t need to get the money pre-approved and unlike larger and later projects, DARPA PMs
don’t have to use open solicitations for seedlings. In essence the PM can go to whoever they want and say “I need this done.” Zero oversight enables them to move extremely fast. Restrictions on spending money happen when you
reach trust limits, so this low-oversight spending is another reason why DARPA depends on high trust in badass PMs.

Seedlings are not about finding a solution to a problem; they’re designed to verify or disprove a hypothesis. Ideas for seedlings can come either from the program manager or from a performer. Either way, the ideas are part of a feedback loop between the PM and the research community during program design.

DARPA facilitates cross-pollination both between PMs and between performers

Everybody at DARPA has deep experience in some technical area, and there is a culture of people dropping by and asking each other about the subject of their expertise. The atmosphere a former PM described was weirdly similar to the one attributed to Bell Labs at its peak. The culture also encourages PMs to discuss their programs with other PMs. This cultural artifact is special because it would be easy for everybody to mind their own business, as everybody is working on their own programs. It would also be easy for PMs to feel competitive. DARPA’s small size probably helps walk the fine line between PMs not caring about what other people are working on and caring too much. I’ve read some accounts that disagree with this description of DARPA culture, instead portraying a situation with little interaction between PMs. I attribute the discrepancy to the culture being different in different offices or at different times, because I put higher weight on the descriptions I heard directly from former PMs.

Performers who get DARPA funding are required to share research with each other at various closed-door workshops. Since they are all presumably working on different aspects of the same problem, this seems valuable. The sharing has some of the aspects of open science but among a much more focused group of people. Ideally, the closed-door workshops bring together the people who might benefit from the knowledge and keep out the tourists and hangers-on that can destroy nascent communities. The forced early sharing stands in contrast to normal academia, where everything is a first-to-the-post system that incentivizes people to pipette out information to their peers because they are (rationally) worried about getting scooped.

DARPA provides a derisking role for people in other organizations

DARPA derisks working on breakthrough ideas for three big groups: researchers, companies, and other funders.

DARPA obviously derisks whether researchers will be able to get funding to work on an idea. Researchers don’t have as much uncertainty around grant proposals because program managers have the ability to deploy money without much overhead. Instead of the common situation where a funder says “that sounds great! But I need to check with my boss / our budget doesn’t allow it,” if a PM says “that sounds great” it probably means you’ll get funding. The ability to plan is a big deal. PMs also derisk working off the beaten path for researchers more subtly by building a community around a technological vision. The community is secretly critical because the peer review and citation system incentivizes people to work on things that other people think are interesting. So even if an academic could get funding to work on crazy shit, most people wouldn’t work on it because they wouldn’t be able to get the results published and cited. By bootstrapping a community, the PM gives researchers peers.

DARPA derisks working off the beaten path for small companies by giving them more confidence that there will be customers for their product –either large companies or the government. In the best case, the promise
of future procurement functions like a prize to incentivize startups to spend their own money on top of the money from DARPA. Startups building ‘frontier tech’ often falter at the stage where they need to scale up production.
By encouraging small companies to work with large companies earlier in the process, DARPA PMs reduce scale-up risk as well. In this way DARPA PMs act in a
similar way to biotech VCs, brokering relationships between small companies with specialized skills and large companies with production capacity. Additionally, other customers often see DARPA funding as third-party validation of a
startup’s technology. The DARPA validation can be critical for breaking the chicken-and-egg that nobody will buy your product until someone has bought your product.

From a founder: “DARPA funding has the added benefit of communicating to a third party a validation of the technology.”

For large companies, the off-the-balance sheet work to show that an idea is feasible and clear evidence of demand derisks the idea to the point that they’re willing to start spending their own R&D dollars on
it. Companies want R&D to be off of their balance sheets as much as possible. For example, DARPA worked with IBM and Intel to develop nanophotonics and afterward the companies took them on as R&D programs.

“So the DARPA piece, while large, was the validation for IBM to spend their own money.” He continues, “The same way for the Intel piece. You know, Intel certainly looked at that project, and then
Intel ended up funding it internally, but the fact that DARPA went back to them three and four times and said, this is an important thing, this is an important thing, you know, it got to the board of directors, and it got high enough that they
set up a division to do this.”

Most grant-giving organizations try to derisk their grants as much as possible.9 Previous DARPA funding can give research groups what they need to derisk a key part of the technology enough to get follow-on funding from more risk-averse grant-giving organizations.

From a professor: “Once you’ve gotten funding from DARPA, you have an issue resolved, and so on, then you go right ahead and submit an NSF proposal. By which time your ideas are
known out there, people know you, you’ve published a paper or two. And then guys at NSF say, yeah, yeah, this is a good thing.” He continues, distinguishing DARPA’s place within the broader U.S. government system, “NSF
funding usually comes in a second wave. DARPA provides initial funding.” As a consequence, he concludes, “DARPA plays a huge role in selecting key ideas” (from among the broader set of ideas present in the research
community).

DARPA analyses can end up looking like hagiographies – this document is no exception. Clearly DARPA has downsides. Instead of caveating every conclusion, it seems productive to group problems thematically and
use those groupings to suggest places where an organization using the ARPA Model may be able to improve on the original.

DARPA’s primary purpose is to develop military technology

The D in DARPA stands for “Defense” – ultimately DARPA’s explicit purpose is to support the DoD. Even before the addition of the “D”, ARPA was mission-oriented around the military. At the same time as J.C.R. Licklider was sowing the seeds of personal computing and the Internet, William Godel was managing ARPA programs to build silent planes and boats, Agent Orange, and AR-15 rifles. Even J.C.R. Licklider’s computer work was started in response to an incident where the US almost started a nuclear exchange because a computer couldn’t tell the difference between the Moon and a fleet of incoming bombers, combined with JFK’s desire to have a better command-and-control overview of the US military. For the most part, the non-military technology that came out of DARPA was a happy byproduct.

While we can abstract the ARPA Model away from DARPA’s military focus, it is important to keep that focus in mind because in the long run incentives win: at the end of the day, innovation organizations that are misaligned with their funding source are either neutered or destroyed. Aspects of the parent organization leak into any innovation organization’s focus regardless of any commitment to paradigm-shifting long-term work.

The military milieu of the day has always driven DARPA’s focus: Vietnam, the Cold War, concerns about offshoring eliminating US ability to build its own military technology, and terrorism. DARPA is the source of
many technologies that the show Black Mirror has used to paint dystopian futures – killer drones, surveillance AI, walking robots, and more.

You can see this focus leak in ARPA-E, the Department of Energy (DOE)’s ARPA riff. The DOE doesn’t really have one straightforward mission – it used to be “ensure nuclear superiority”
(historical sidenote – all the DOE national labs were originally established to optimize and advance different parts of the nuclear weapons process.) ARPA-E is similarly a bit confused – is it trying to lower the cost of energy? Shift us to
renewables faster? Help the energy industry or upend it?

Two upshots. First, DARPA’s military focus means that there are ideas that are out of scope. A former PM explicitly confirmed that while DARPA is ideas-limited, there are ideas well-suited for the ARPA Model that DARPA doesn’t pursue. Second, the source of authority and funding matters for anybody attempting to riff on the ARPA Model.

The ARPA Model is high-variance

The same mechanisms that enable an organization to move quickly and work on weird, paradigm-shifting programs can also enable massive waste and fraud. The parallel stories of J.C.R. Licklider and William Godel illustrate both edges of the sword. Both men were ARPA program managers in the ’60s. Both had precise, grand visions of technology that could change the world. Both had unmonitored access to funds once the director gave the ok. But while Licklider helped midwife modern computing and computer science as a field, Godel helped unleash ineffective chemical atrocity on Vietnam, laid the groundwork for the security state, and was arrested by the FBI and convicted of embezzlement.

The experience of working with ARPA is also high-variance. Both first- and second-hand experience confirms that sometimes DARPA is an amazing partner and sometimes it is hair-tearingly frustrating. Sometimes you come in expecting someone like Licklider and you get something closer to a standard government bureaucracy. Perhaps here is a place where you could actually improve on the model with the modern ‘innovation’ of customer obsession – high-quality programs depend on continued relationships with high-quality performers, so being a pleasure to work with should be paramount.

Uncomfortably, the ARPA Model’s high variance is a case where you can’t cut off just one tail of a distribution. Any guard rails would slow down the process or constrain PMs, reducing both possible downsides and potential massive upsides. It is unsatisfying, but the only solution that doesn’t kill the goose that lays the golden eggs is “culture and good people.” Acknowledging this fact directly enables everybody in an organization riffing on the ARPA Model to look the potential downsides straight in the eye. “With great power comes great responsibility” is an apt adage for program managers.

DARPA falls into many classic government organization traps

DARPA doesn’t always stick to the ‘ideal’ ARPA framework that others and I normally describe. DARPA programs can also look much more like a normal government R&D program – the military needs something, so they tell the director to start a program around the idea. The director finds a program manager to execute on the idea. The program manager ends up looking much more like a project manager, with very little input over the direction of the program at all. This way of doing things seems relatively rare and happens more in the military-systems, prototype-focused offices (TTO and DSO) than elsewhere.

While DARPA has been granted some exceptions to broad government rules around hiring and how it can spend money, DARPA is still subject to many ‘stupid government rules.’ They aren’t allowed to use Google Docs or Zoom regardless of how non-sensitive the information is. Modern DARPA still requires grants to go through the official government grant application website. The grant application page is almost tear-inducing. Cold contacting a PM requires filling out a web form that sends a note to the PM’s secretary, who may or may not set up a time for you to talk to the PM a month out. I can only imagine this friction reduces the number of serendipitous ideas PMs receive. IARPA, the intelligence community’s ARPA organization, puts email addresses directly on its website, so ‘it’s to prevent too many emails’ is not a valid excuse. From personal experience, at least some DARPA employees treat the weekend as sacred no-work-email time regardless of urgency – a behavior I associate with large bureaucracies where people have little stake in outcomes.

And at the end of the day, politics still sneaks in. The biggest example of this was the set of changes in 1972, thanks to shifting views on the military as a consequence of Vietnam. Throughout DARPA’s history, new presidents have occasionally replaced the DARPA director, and starting in 2001, DARPA directors began to sync up with presidential administrations, which suggests increasing politicization over time.

The ARPA Model can lead to inbreeding

The path of least resistance when you’re hiring is to hire people you know, and people tend to know people like themselves. This human tendency, combined with hiring flexibility, means that DARPA tends to bring people on as program managers who are already in the DARPA sphere – researchers who work on government projects, military personnel, and generally “Washington DC people.”

One consequence of insularity is that, from personal experience10, there is a significant cultural gap between DARPA and
Silicon Valley. This gap includes legal organizational structures, the expectations of funders, the importance of storytelling, communication style, and more. Since venture capital and startups are an increasingly important technology
dispersion mechanism, the culture gap impairs DARPA’s ability to effectively transfer technology. To be explicit: my point isn’t to criticize DARPA for not having more of a ‘startup culture’, but to illustrate that
cultural insularity can impede technology dispersion.

Cultural inbreeding can also impede DARPA’s ability to work with the best possible performers. While ideally program managers can cast a dragnet that puts their program on the radar of everybody who might be
able to contribute, in reality many people who might be able to contribute are totally unaware. The announcements are still mostly picked up by people in the DARPA “sphere.” I have personal experience with this gap – I was working
on technology for manipulating an uncooperative satellite at literally the same time that DARPA was running a program to capture and repurpose satellites and didn’t hear about it until after the grant calls had closed.

There is also a fine line between ‘working with people you trust for the sake of expediency’ and ‘giving money to your buddies even if they aren’t the best ones for the job.’ Although PMs have definitely gone over this line – the Total Information Awareness program poured money into a consulting firm run by the PM’s former colleagues – DARPA has managed to remain shockingly scandal-free.

A riff on the ARPA Model could possibly address the potential inbreeding problem by explicitly putting in effort to seek out weirdos and people from ‘different worlds.’

DARPA has a mixed record on transitioning

The ARPA Model has the explicit intention that technologies get out of the lab and into the real world. Clearly DARPA has successfully transitioned paradigm-changing technology to the government, large companies, and startups, so the model’s design isn’t worthless. However, many DARPA technologies still fall into ‘The Valley of Death’11, and even successful transitions often are not smooth.

DARPA’s history is pockmarked with stories that go “and then at the end of the program, the military or industry said ‘that’s cool, but we like the way we do it now.’” Or perhaps worse, they say ‘that looks great’ and take ownership of the technology but then completely stop working on scaling it up. In some of these cases the outside organizations come around: DARPA funded the development of UAVs in the ’80s, the Navy took on and then killed the program, and DARPA continued development until the military paradigm shifted in the ’90s. The story is similar for optoelectronics. While counterfactuals are hard, these common ‘near death’ stories suggest that there are many programs that vaporized on impact with existing paradigms.

Each transition failure, like unhappy families, is unique. Some of the failure modes do rhyme:

  • The technology is great, but it doesn’t slot well into a product line at a big company, and investors can’t see how it could lead to a massively valued company in less than a decade.
  • The people who worked on the technology are great researchers but terrible entrepreneurs. The skills that make you good at seeking and executing on government projects can be quite disjoint from the skills that make you good at building and selling a product.
  • The pieces of the program don’t come together in one place. Each project in a program is a piece of a puzzle, and most of the time they’re far more valuable together than on their own – a mouse isn’t much use without a GUI. For maximum impact, those pieces eventually need a single home after the program ends. Personal computing was such a success story in part because as ARPA took its foot off of the gas pedal, Robert Taylor was able to bring the pieces in-house at PARC. Speculatively, bringing the pieces together may have become harder because the Bayh-Dole Act enabled universities to extract licensing fees on research they created with government money, potentially bleeding dry anybody who wants to license from multiple universities who each want their cut.
  • DARPA often works on ideas that nobody thinks they want. A program can fail to address the core reason why there’s no demand. In a way, DARPA programs are like cave dives – you want to get somewhere that
    requires you to decouple from safety/the market but you need to get somewhere with oxygen/demand in a finite amount of time. You can miscalculate where you need to end up or the target can move.
  • DARPA works on programs that go against the established paradigm. Changing paradigms might endanger someone’s job or just be more work than any advocate is willing to do.

Many of these failure modes illustrate the tension between ‘building something people want’ and ‘building something capable of shifting paradigms.’

Transitions are an area where an organization riffing on the ARPA Model may be able to make big improvements over DARPA. It is important to first acknowledge that DARPA doesn’t exist in a vacuum. The mechanisms for technology dispersion at its disposal are subject to the constraints of the world around it. In the 1970s that meant large corporations like Xerox, defense contractors, or government labs. The constraints on those organizations have shifted, and VC-backed startups have become an important mechanism as well. DARPA has worked to adjust to this new environment – going so far as to create a commercialization team in 2017. But from personal experience the adjustment has been clunky. I don’t have a great answer yet, but it’s important to ask “what would an organization built from the ground up with transitions into today’s (or tomorrow’s) environment in mind look like?”

When talking about riffs on the ARPA Model, it’s important to distinguish between organizations that spiritually riff on the ARPA Model and those that structurally riff on it. Spiritual riffs (or just spiritual rhymes) are organizations that have enabled, or strived to enable, high-risk, possibly paradigm-changing, goal-focused research. People often conflate DARPA with great R&D organizations of the past – Bell Labs, Xerox PARC, Lockheed Skunkworks, and others. It doesn’t help that DARPA funded and helped organize research at all of these places. While I am all for being spiritually inspired by DARPA, we’re digging into the hypothesis that DARPA’s structure is worth riffing on, so I’m going to ignore organizations that fall into this category.

To my knowledge, there weren’t any attempts to structurally riff on DARPA before the latter half of the ’00s. This timing is intriguing in itself; pure speculation, but it may be the consequence of a dawning realization that something wasn’t working in the research pipeline. That is another story for another time. Evaluating these ARPA riffs as successes or failures is difficult, given that they are all younger than 13 years old, ARPA-style programs usually take five or more years to produce a result (and often much longer to have an impact), and the expected success rate is 5–10 programs out of every 100. I will touch on whether they ‘feel’ success-y, but primarily we should focus on the deviations they have made from the standard ARPA Model, whether those deviations make sense in context, and whether we would expect them to lead to more success or failure over time.

There are surprisingly few attempts to riff on the ARPA Model. The US government has built two DARPA imitators: IARPA and ARPA-E. The British government has floated building an ARPA. In the private sector, Google ATAP
is DARPA-esque enough to talk about and Wellcome Leap appears to be a full-on health-focused implementation of the ARPA Model.

IARPA: Intelligence ARPA

I(ntelligence)ARPA started operation in 2007 and is run by the Office of the Director of National Intelligence. It has 17 program managers as of 2020, which makes it roughly the size of one of DARPA’s six offices. At first glance, it has all the pieces of the ARPA Model: empowered program managers coordinating high-risk, high-reward external research.

Differences from DARPA

IARPA explicitly does zero commercialization – all technology transfers are to intelligence agencies. This stands in contrast to DARPA which transfers to large
companies and the VC/Startup Ecosystem as well as DoD agencies.

IARPA spends around a quarter of its budget on testing and validation of the technology that comes out of its programs. Given IARPA’s focus on computer security and computation in general, this spending both
makes sense and is intriguing. What would a focus on testing and validation look like for other technologies? One failure mode for technology trying to leave the lab is extreme fragility (only working in ideal situations, etc.) and a validation
mechanism might cost more in the short term but lead to more robust technology in the long term.

IARPA uses contests and tournaments as a first-class tool for generating research. DARPA has run a few tournaments, which arguably were significant successes – the Grand Challenge and the Urban Challenge catalyzed a phase change in autonomous driving from a research novelty to a potential industry. The DARPA Robotics Challenge was disappointing, but ⅔ is great for high-risk programs. Despite the outlier results, DARPA tournaments are the exception, not the rule. The reasons for this difference aren’t clear – it may be that computing-based tournaments are easier to coordinate and standardize, or it may simply come down to culture. Several economists have pointed out that prizes are underused as a research funding mechanism, and when I’ve directly asked DARPA PMs why they don’t use prize mechanisms more often, the answer was “I don’t know.” IARPA’s regular use of tournaments, DARPA’s infrequent but high-impact tournaments, and the absence of evidence to the contrary suggest that prizes may be a useful place for an organization riffing on the ARPA Model to explore.

Surprisingly, IARPA is much less sensitive about its research than DARPA. Most of IARPA’s programs are unclassified, in contrast to the ~1/3 of DARPA programs that are classified. IARPA also funds organizations outside the US. It seems useful to tap into as many resources as possible, so my hunch is that this is a useful change.

IARPA has a policy of making sure there is a potential intelligence agency ‘customer’ before undertaking a program. While this seems well-intentioned, it means that IARPA will never be able to change a
paradigm unless an intelligence agency already feels like it should be changed.

IARPA’s Results

IARPA has 37 ongoing programs and 26 completed programs as of mid-2020. At a 10% success rate, there’s roughly a 94% chance that at least one of the completed programs would have succeeded.
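
The back-of-the-envelope math here treats programs as independent draws: with a per-program success rate p, the chance of at least one success in n programs is 1 − (1 − p)^n. A minimal sketch (the ~10% rate is the essay’s own assumption about ARPA-style programs, not a measured figure):

```python
# Probability that at least one of n independent programs succeeds,
# given a per-program success rate p: 1 - (1 - p)^n.
def p_at_least_one(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

# IARPA's 26 completed programs at the assumed ~10% hit rate:
print(f"{p_at_least_one(26, 0.10):.3f}")  # → 0.935
```

Note how fast this saturates: even with a 10% hit rate, a couple dozen programs make at least one success very likely, which is why the absence of an obvious outlier result is informative.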

IARPA funded a significant chunk of quantum computing research in the US in the 2007–2010 period, including David Wineland’s research that went on to win the Nobel Prize in Physics in 2012. It’s not completely roses though – IARPA cut off funding to NIST researchers, including Wineland, because it didn’t want to fund other government organizations. It’s not clear whether IARPA resumed that funding or is claiming credit for Nobel Prize research that it funded and then cut off …

“As of 2009, IARPA was said to provide a large portion of quantum computing funding resources in the United States.”

Has IARPA started any industries yet? No, but if quantum computing becomes as big a deal as people think it could be, I suspect IARPA will play a large role in the stories about it.

On a less serious note: look at this list of IARPA program names and rejoice, ye LoTR and mythology nerds: Ithildin, MAEGLIN, SILMARILS, Amon-Hen, Odin, ATHENA, ICArUS, Mercury.

ARPA-E: ARPA-Energy

ARPA-E(nergy) started operation in 2009 and is run by the Department of Energy. It has 20 program managers and a budget of ~$180M as of 2020, which makes it approximately the size of one DARPA office. Like IARPA, it has all the pieces of the ARPA Model on paper: empowered program managers coordinating high-risk, high-reward external research.

Differences from DARPA

ARPA-E is on the opposite end of the transfer spectrum from IARPA: while IARPA only hands off technology to its funding agencies, ARPA-E primarily targets private companies to adopt and scale up its programs’ output (DARPA does both.) Transfer via commercialization makes sense given that the Department of Energy doesn’t actually build or deploy energy technology. ARPA-E adapted the ARPA Model to target commercialization in several ways: it has a whole commercialization team; on-staff lawyers; and it explicitly considers how the technology is going to be implemented by the energy industry before embarking on a program. While these adaptations are logical, they may be shooting the ARPA Model in the foot. The additional apparatus around commercialization makes ARPA-E less flat than DARPA – there are more non-PM staff than PMs, which cuts into the benefits of being tiny and flat.

While DARPA is funded by Congress as part of the DoD budget, the DOE funds ARPA-E programs directly. Direct program funding means that the DOE has much more say over which programs ARPA-E runs and how they spend money. This subtle difference in funding sources makes ARPA-E less independent from the DOE and may invalidate the buffer that opacity and the director provide between DARPA and the government political machine. Since ARPA-E is targeting transfer to non-DOE entities, the increased DOE oversight may lead to one of those lovely situations where an entity with no skin in the game has a large say over risky activities.

ARPA-E is explicitly metrics-driven. While this approach certainly jibes with modern sensibilities, my hunch is that metrics can hamstring embryonic technology. Metrics are great when you know what you’re optimizing, but they tend to cause the streetlight effect – you optimize for the things that can be measured. Do you know what you’re optimizing when you’re still figuring out how a technology works and what it is good for? What would Licklider’s metrics have been for a personal computing program? It is possible that energy has so few relevant metrics that a metrics-driven approach doesn’t cut down on trying weird things, but I’m skeptical.

ARPA-E is not process-light. I have heard first-hand accounts of inflexibility around hiring (“despite most of the work being in different parts of the country, you must be full time and in Washington or you will not be invited to all the meetings”) and tools (“you must use the government-issued laptop for everything”). This inflexibility goes against the conclusions that you should pay attention to DARPA’s informal process and ignore formal process, and that DARPA is incredibly flexible about who it hires to be program managers.

Results

Most reports about ARPA-E’s impact stress the number of projects they have funded and concrete outcomes are conspicuously absent. It feels like a dog-that-didn’t-bark type situation. The closest to an
outlier result was funding the research that led to a solar-cell company named 1366 hitting 19.6% efficiency.

ARPA-E is also operating in a brutal area: the energy industry is notoriously conservative, partially because it is beholden to many stakeholders: various kinds of investors, governments, customers, and non-customer residents in the areas where the energy company operates.

Others

Many countries have a military R&D organization that reporters occasionally refer to as “The DARPA of X” (for example, Israel’s DDR&D: Directorate of Defense Research & Development, or Singapore’s Defence Science and Technology Agency (DSTA).) These organizations do work on advanced military technology, but they run their own labs and have the military as a more direct customer, which makes them closer to Lockheed Skunkworks than DARPA.

Google ATAP

Regina Dugan, a former DARPA Director, explicitly organized Google Advanced Technology And Projects (ATAP) to riff on DARPA’s structure. ATAP does high-risk high-reward program-focused research that
isn’t afraid to explore new phenomena but leaves out empowered program managers and externalized research. These two missing components make it hard to consider it a true riff on DARPA. It’s much more akin to a skunkworks.

Wellcome Leap

The Wellcome Trust – a British foundation dedicated to funding health research – announced a new division called Leap in May 2020. From the scant information available online, Wellcome Leap seems to be organized around empowered program managers coordinating high-risk, high-reward external research. From what I can tell, this is the closest a private entity has come to the platonic ARPA Model, at least on paper. There are reasons for both optimism and skepticism about its potential outcomes. Its CEO, Regina Dugan, is a former DARPA director (who, incidentally, started Google ATAP.) Leap is nominally acknowledging that its programs will take time to come to fruition and have a high chance of failure. Hopefully, the things it has going for it will not be impeded by starting too big: it has a large ($300M) pool of money from day one.

British ARPA

As of mid-2020 there has been a lot of talk about a British ARPA but not much apparent action. Dominic Cummings, Chief Advisor to the British Prime Minister, has written about a British ARPA many times, and the Queen’s speech at the end of 2019 featured the idea. Many people have weighed in online, and the government is soliciting information about how it might work. That may be a red flag for an organization based on the idea of individual empowerment and non-consensus approaches.

It’s easy to set out to build a DARPA but end up building a Skunkworks

The dominant paradigm of starting an org is to do internalized R&D unless you’re a government or charitable foundation. In reality, R&D orgs lie on a spectrum between externalized and internalized, with
DARPA on one end and Lockheed Skunkworks on the other. Externalized research entails working through grants and contracts while internalized research is doing everything in house. There’s nothing inherently wrong with building a
Skunkworks – it just means that there are different tradeoffs and the statement “DARPA for X” is misleading.

Most DARPA clones start too big or with heavy process

DARPA started off small and informal. For years it was less than ten people.

Starting too big often causes heavy process. People always spend money for a purpose, so the more money there is up front, the more expectations are attached to it. One failure mode is that the funding entity doesn’t trust a low-track-record organization and requires heavy process to create assurance that the money will be well-spent.

Starting large makes opacity hard, and opacity is important to DARPA’s outlier success. Starting large, with large expectations and scrutiny, makes it tough to execute on things that seem stupid. A spotlight also encourages organizations to work on things that seem sexy, which makes it hard to generate true outlier output. Remember: if it’s easy to shut something down, it’s easier to start, and the opposite is true as well.

Culture and trust take time to build. A large organization needs either culture and trust or process to keep everybody aligned, so starting big without heavy process will just lead to a shitshow. As far as I can tell, this point and the previous one are roughly what happened with Google ATAP.

Regardless of size, directly copying all of DARPA’s processes leads nowhere good. Many of the processes were built up over years to fit DARPA’s exact situation, and as noted earlier, you should
pay attention to DARPA’s informal process and ignore formal processes.

“There is not and should not be a singular answer on ‘what is DARPA’—and if someone tells you that [there is], they don’t understand DARPA” 

—Richard Van Atta

It’s important to lay the changes out and discuss whether or not they contributed to the change in DARPA’s outlier output. Interestingly, the majority of DARPA’s changes over time have involved adding more explicit process. The biggest change happened when ARPA became DARPA in 1972, because of the increased scrutiny on military spending both inside the government and outside of it. Which raises the question: was the shift from ARPA to DARPA a focus change or a process change?

It’s tempting to throw out anything introduced since 1972 unless it was a codification of a previously implicit rule. However, the world has changed since 197212 so it’s worth considering whether some adjustments were made to enable DARPA to more effectively operate in the world.

Most of DARPA’s outlier output happened before 1972

The first elephant in the room is DARPA’s history. Just looking at the DARPA “best hits” timeline of wildly impactful technologies, the difference before and after 1972 is pretty stark.

Before ARPA became DARPA in 1972:

  • Weather satellites
  • Phased array radar
  • Precursor to GPS
  • Computer mouse
  • Arecibo observatory
  • Shakey the robot (basically kicked off robotics as a field)
  • Mother of all demos (basically kicked off personal computing)
  • ARPAnet (basically kicked off the Internet)
  • Gallium arsenide (which enabled semiconductor lasers)
  • TCP/IP (vaguely important for the Internet)

After 1972:

  • PAL (Siri precursor)
  • Big Dog robot
  • Urban Challenge (basically kicked off autonomous cars)
  • Miniaturized GPS receivers

This raises the question of whether there is anything to be learned from the modern organization, or whether we need to go completely off of historical accounts. Some of the post-1972 creations are still pretty significant – both autonomous cars and voice technology arguably haven’t reached their potential yet. Additionally, we can attribute a chunk of the perceived outlier dropoff to DARPA’s Congressional mandate to focus on things with clear military applications. While DARPA has changed over time, most of the changes were changes in official process, so if we pay more attention to informal process than formal process, there are still valuable things to be learned from post-1972 DARPA.

ARPA became DARPA in 1972 because of the increased scrutiny on military spending both in the government and outside of it

The Mansfield Amendment expressly limited ARPA funding to direct military applications and gave Congress more oversight into ARPA’s spending. The amendment was part of broader attitude changes both inside the government and outside of it. Inside the government, there was increasing discomfort with how ARPA program managers could spend money on basically anything they thought was worthwhile. Unfortunately, you can’t cut off just one tail of a distribution, so these constraints definitely reduced the variance of DARPA results, both positive and negative. One might think that the relevant technologies are so broadly applicable that smart program managers could work on anything under the umbrella of ‘defense’, but from talking to former program managers, there are definitely DARPA-style ideas that DARPA doesn’t pursue because they are not sufficiently defense-related.

The economy also tanked in 1973, which made Congress more concerned about what it was getting for its money. They didn’t want program managers like J.C.R. Licklider prancing around spending money without a clear payoff. This historical note emphasizes the point that to pull off long-term projects, innovation orgs need to be aligned with their money factory so that they’re not dependent on fair-weather funding – sources of money that flow only when the economy is good or when they’re working on something popular.

Outside the government, popular opinion turned against the military in the early 1970s, which both put pressure on elected officials and changed attitudes towards working with ARPA. The change in opinion made university researchers more hesitant to take military money. It also may have made fewer high-quality people want to join the org, an effect that arguably persists now. The stigma around working for the military puts additional weight on the question of why people become DARPA program managers.

A list of changes over time

  • Some time before 1993, DARPA shifted to a formal three-phase development system. Verdict: let’s ignore this as a formalized process change.

  • The Heilmeier Catechism13 was introduced in 1975, so it did not contribute to any of the output in the ’60s or early ’70s. In fact, Dominic Cummings attributes the decline in output to its structure and an implied focus on results. There are arguments both ways. Many explicit DARPA rules, like transient program managers, are just codifications of previously implicit rules, so the catechism may have been an informal rule-of-thumb test. Additionally, Heilmeier himself was an engineer who discovered phenomena that enabled liquid crystal displays, so he presumably understood the process of true invention and discovery. On the other hand, he worked on planning for McNamara, so he very much bought into a top-down mentality. Verdict: I’m on the fence about whether to pay attention to this one – it seems like it could be useful as a thinking framework as long as it isn’t used as a bludgeon.
  • Currently, DARPA managers do go/no-go checks on performers every few months; program managers used to exert less explicit oversight. Verdict: I suspect that the ability to move money quickly away from things that aren’t working is important, though.
  • In the 1960s DARPA program managers didn’t do open solicitations for grants through an official government system – they just gave money to whom they wanted. Verdict: The shift to open solicitations is pure formal oversight.
  • It wasn’t official policy in the 1960s that DARPA program managers have a tenure of four to five years; the official policy was just a codification of the informal rule once program managers started hanging around too long. Verdict: While technically an added process, this is just a codification of an informal rule.
  • Currently, every program at DARPA is intensely scrutinized technically by the Tech Council, which then advises the director on whether to approve it. In the 1960s the process was a quick meeting in which the program manager laid out for the director what they wanted to do. Verdict: This one is tricky – from conversations with program managers, the Tech Council sounds useful because it gives PMs a clear goal without restricting what they do.
  • In 2001 DARPA changed who can be a prime contractor from anybody (including research labs and startups) to only large companies. Verdict: This one is also tricky – Rethinking the role of the state in technology development: DARPA and the case for embedded network governance makes the argument that this is an adjustment to the decline of corporate R&D and the difficulty startups have getting products into production, so one of DARPA’s new roles is almost that of a systems manager. This change may be worth paying attention to as a way that DARPA has adjusted to a changing world in a smart way.
  • Starting in 2001, DARPA directors began to sync up with presidential administrations, which suggests a shift in the politicization of the organization. Verdict: This seems clearly bad.

Together these raise the question …

Was the shift from ARPA to DARPA and changes over time more of a focus change or a process change?

The transition from ARPA to DARPA involved both a focus change to explicitly military projects and more formal process and oversight. Since most of DARPA’s outlier output happened before 1972, the shift raises the question of whether the output drop-off was more a function of the change in DARPA’s process or its focus. There are arguments for both ends of the spectrum.

The process argument is that after 1972, increased oversight and friction killed DARPA’s ability to create outlier results. Both the increased demand that programs have a waiting customer and the increased spending scrutiny embodied by things like the Heilmeier Catechism and the Tech Council hamstring program managers. This argument has merit because small amounts of friction can have large effects and you can’t cut off just one tail of a distribution.

The focus argument is that we just don’t see or appreciate as many of the things that come out of DARPA because they are specifically for military use and
have less broad applicability.

The focus vs. process question is important because it determines whether there is anything to be learned from how DARPA works today. If the changes in process that occurred after 1972 are the direct cause of the
drop-off in outlier output, then analyzing the DARPA process today is worthless. If the focus shift caused the drop-off in outlier output, lessons from today’s DARPA remain valuable. In reality, it is probably a mix of both.

The visceral excitement that (recent) former program managers show when I’ve talked to them about their time at DARPA suggests that there are many things to be learned from the current organization that are not just cargo culting. At the same time, the majority of changes to DARPA over time have involved adding more explicit process. How can we resolve these two ideas? I would suggest that the solution is to pay attention to DARPA’s informal process and ignore its formal process.

One non-focus, non-process reason for the change in outlier output may be that DARPA can’t attract as many amazing people because they have better options or don’t want to work on explicitly military things. If this were the case, it would severely affect output because the ARPA Model is so heavily dependent on great PMs. However, there are still extremely high-quality people in DARPA, so this cannot explain all the changes.

Pay attention to DARPA’s informal process and ignore formal process

For the most part there have been few changes in the weird ways DARPA’s program managers work compared to other organizations, the incentives and structure of the organization, the funding, and the general shape of the process. So it is not worthless to study the modern organization. Regardless of whether the changes in formal process are the reason that most of DARPA’s outlier output happened before 1972, they certainly didn’t help. Therefore, if you want to replicate DARPA’s outlier success, it makes sense to pay attention to DARPA’s informal process and ignore its formal process.

Usually, formal process is put into place to increase oversight and decrease reliance on trust. Formal process lets people outside the organization trust in
the process instead of the people. Ignoring the formal process makes the success of an organization depend more on trust in people.

This trust-dependence means that it is essential for an organization that seeks to replicate DARPA’s success to start with trust from funders and
collaborators outside the organization. The trust-dependence creates a chicken-and-egg situation because by definition new models and organizations do not have a track record and the people who are most likely to create a new game are ones who
haven’t won at other games.

This analysis opens many questions – especially about the different knobs and what happens when you turn them. Here are some of the most pressing that come to mind, though there are many others.

Where is the Pareto front between externalized and internalized research?

DARPA doesn’t do any research in house, which has several advantages. However even in the domain of mission-focused research examples like Lockheed Skunkworks, Bell Labs, and others have demonstrated that internalized research can produce excellent results and has clear advantages. Internalized research lowers some set of transaction costs, enables closer collaboration between disciplines, smooths transition to manufacturing, and makes capturing value easier. The fact that internalized vs. externalized work is not a binary distinction but a spectrum complicates the situation. My hunch is that there is some class of work where the advantages of externalized research dominate and another class of work where the advantages of internalized research dominate. I don’t have a good answer to where that line is, but it is an important question to address.

Is it essential that a research org is a small fraction of the money factory’s budget?

DARPA is a small fraction of the Department of Defense Budget. Similarly, other storied research organizations tend to be attached to large ‘money factories’ and represent a small fraction of their budget—at its height, Bell Labs’ budget was only 5% of AT&T’s revenue. By contrast, my sense is that even well-capitalized independent research-focused organizations feel constant pressure that can impede long-term research efforts. Now, plenty of corporate labs produce nothing of note, so being attached to a larger money-making organization is certainly not sufficient for producing long term results. It’s important to understand whether that money factory is necessary.

Why doesn’t ARPA use prizes instead of grants outside of a few challenges?

Prizes seem like they could be a natural fit with the ARPA Model, yet DARPA doesn’t seem to use prizes outside of a few examples like the DARPA Grand Challenges and Robotics Challenges. Specifically, prizes seem like just one more path along the theme of top-down problems and bottom-up solutions, beyond open solicitations and parallel projects. IARPA’s heavy use of competitions suggests that the question is worth asking. One prize-dissuading possibility could be that most DARPA projects are more capital-heavy than IARPA’s mostly computer-based projects, and capital requirements would make people hesitant to work towards an uncertain payout. Another possibility is simply tradition and government rules. It’s worth knowing whether it’s the former, the latter, or something else if you’re going to riff on the ARPA Model.

Is there a minimum effective budget for the ARPA Model to work?

Successful big things usually start small and many attempts to replicate DARPA’s results start too big. So it makes sense to ask: can you start with a tenth of DARPA’s budget? A hundredth? A thousandth? There are two lower budget bounds that you’ll run into: the number of parallel high-risk programs you need to run to get a success and the budget per program to have a viable shot at that success.

A back-of-the-envelope calculation suggests that if each program generously has a 10% chance of success, then you need to run at least seven programs to give yourself a 50% chance of at least one program being successful.
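That seven-program figure falls out of simple independence math: if each program succeeds with probability p, the chance of at least one success across n independent programs is 1 − (1 − p)^n. A minimal sketch, just to check the arithmetic (the function name is mine, not anything from DARPA):

```python
import math

def programs_needed(p_success: float, target: float) -> int:
    """Smallest n with P(at least one success) = 1 - (1 - p)^n >= target,
    assuming programs succeed or fail independently."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_success))

# With a generous 10% success rate per program and a 50% target:
n = programs_needed(0.10, 0.50)
print(n)             # 7
print(1 - 0.9 ** n)  # ~0.52 - just over a coin flip
```

Six programs only get you to about a 47% chance of a hit, which is why seven is the floor.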

The question of minimum program budget is trickier. Let’s look at some comparison numbers. In the years 2018–2020 among DARPA programs not focused on assembly and production the minimum budget was $2m, the maximum budget was $31.4m, and the average budget was $12m. The ARPA IPTO directorate that midwifed the personal computer started with a budget of $47m in 2020 dollars. ARPA-E vacillates between ~$200–300m/year and has about 50 programs running at any time, which comes out to roughly $4–6m per program. IARPA’s budget is classified. Could a program go below a couple million dollars a year and still be effective? There are arguments both ways. On one hand you could argue that surely bureaucratic inefficiency is pushing those budgets higher than they need to be. On the other hand, it might be that the sort of work most valuable for an organization riffing on the ARPA Model to undertake needs a relatively large budget, otherwise it would be picked up by other funding mechanisms.
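Combining these rough figures gives a crude floor on total annual budget. This is only a sketch under the assumptions already stated – roughly seven parallel programs for a coin-flip chance of one success, and ARPA-E-style per-program budgets of $4–6m/year – not a precise estimate:

```python
# Back-of-the-envelope floor on annual budget for an ARPA-style org.
# All figures are rough assumptions drawn from the comparisons above.
PROGRAMS = 7                  # parallel programs for ~50% odds of a hit
PER_PROGRAM_LOW = 4_000_000   # ARPA-E-style low end, $/year
PER_PROGRAM_HIGH = 6_000_000  # ARPA-E-style high end, $/year

floor_low = PROGRAMS * PER_PROGRAM_LOW    # $28m/year
floor_high = PROGRAMS * PER_PROGRAM_HIGH  # $42m/year
print(f"${floor_low / 1e6:.0f}m-${floor_high / 1e6:.0f}m per year")
```

Under these assumptions the floor lands in the tens of millions per year – an order of magnitude below DARPA’s budget, but well above what a small fund typically deploys.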

How could the ARPA Model take advantage of the Internet?

A large part of a DARPA program manager’s job is focused network building, and the Internet is one of the greatest network-building tools of all time. However, DARPA doesn’t have a strong web presence, and the idea that DARPA’s aversion to people with a web presence may be how it avoids asymmetric career risk suggests there are good reasons for the low Internet utilization. That said, the Polyplexus project, a DARPA effort to foster an idea-generating online community, suggests that DARPA may realize it’s no longer at the Pareto front of this tradeoff and might be able to use the Internet more effectively to execute on its model. The Internet could conceivably help PMs find people working on the edge of a space, foster communication between performers, and find gaps in the state of the art. What would a riff on the ARPA Model built to take advantage of the Internet look like?

The Advanced Research Projects Agency model is of an organization set up to maximize the agency and effectiveness of world-class program managers (PMs) who coordinate external research to midwife technology that wouldn’t otherwise happen (programs).

The model has changed over time, but it has still produced outlier results, so it is worth paying attention to modern DARPA, with more focus on informal process than formal process. PMs need specific characteristics to succeed: thinking for themselves, curiosity, low ego, vision, and the ability to act under uncertainty. PMs also need to be trustworthy because the model depends on their ability to deploy funds quickly and redeploy them as needed. These PMs have temporary tenures, which enables idea turnover, aligns incentives, and lets DARPA hire people it wouldn’t otherwise be able to. It’s worth thinking deeply about PM motivations because they are so core to the model.

Organizationally, DARPA is tiny, flat, and opaque. It is set up to combine bottom-up *and* top-down approaches through different-scale feedback loops. It is more ideas-limited than money-limited. DARPA’s project design and execution framework boils down to first showing that a precise technological vision is not impossible, then showing that it is possible, and finally making it possible. On top of many tacit tools, PMs execute on these steps by building focused networks and using seedling projects to derisk assumptions during a <$1.5m exploratory tranche before presenting a program design to the Tech Council, an advisory group to the director.

DARPA provides a critical ‘in-between’ role in the world. It facilitates cross-pollination and derisks wacky ideas for private, academic, and government organizations alike.

The number of interlocking pieces is huge, but this breakdown should give you hope – none of these pieces is magical or depends on irreplaceable configurations of people in time. Riffing on DARPA will be incredibly hard. ‘DARPA hard,’ perhaps?

As I noted at the beginning, the entire point of this document is to inform action. This is not the
place for a deep dive on what those actions could and should be but I wanted to leave you with some sketchy questions and considerations for replicating DARPA’s outlier success that I plan to tackle.

The first question, of course: “is it worthwhile to riff on the ARPA Model?” Yes. It’s notable that in conversations with former PMs, they explicitly noted that many program ideas are shelved (and often forgotten) because they fall outside of DARPA’s military-focused scope. If anything, we need such an organization now more than ever.

Does an ARPA need to be a government organization? Is it possible to create a private ARPA? Nothing suggests that it’s impossible, but there are several pieces that need to be figured out first.

Some uncomfortable truths:

  • Anybody attempting to riff on the ARPA Model needs to be willing to sit with a lot of discomfort for a long time – the best ARPA programs often take a decade or more to come to fruition.
  • You can’t be in it for short-term glory. Opacity is important to DARPA’s outlier success, so the industry-standard flashy announcements about what you’re going to do are off the table.
  • You will need to work on things before they’re sexy. The whole point of the organization is to make unsexy things sexy, not hop on the latest hype
    train.
  • Attempting to replicate DARPA-style successes may involve doing several things that go against Silicon Valley dogma:
  • Value capture might be really hard or impossible. Externalized research does seem structural to the model, and value capture primarily happens on work done inside an organization. The role of DARPA as an in-between thing feels weirdly important – again, this is awkward for value capture. This also suggests that riffs on ARPA probably will not look like a normal fund or startup.
  • Planning is important and might suggest resurrecting the dark art of systems management.
  • There is the awkward tradeoff between the need to start small and the fact that there may be a minimum size to pull it off successfully. You could imagine a situation where you start small and just do program
    design for a few programs and once that is done, you use those designed programs to bring in the money to execute on them. However, that would put the organization in a position to be judged on unsexy and weird programs. It would hamstring
    the ability to quickly begin executing. One of the reasons companies and researchers are willing to engage with PMs seriously so early in the process is that they know the PMs are ‘good for it.’  Maybe there’s a
    tranched solution.

A big consideration for anybody hoping to emulate DARPA’s success outside of a government agency is the question of ‘how will money work?’ The ideal situation would be for the organization to be
its own money factory, but it might be impossible to accomplish that while preserving the structurally important pieces of the ARPA Model. Another huge consideration is how to set up
incentives both to work with high quality PMs and for potential performers in big companies, small companies, and academia to take you seriously. These considerations suggest that replicating ARPA’s success outside of a government might
require truly new organizational structures.

There are also exciting places where it may be possible to riff on the model. Building more structured roadmapping frameworks could make program design even more effective. You might be able to build trust during the
program design period by having potential PMs do a ‘residency’ instead of making an upfront binary decision. It’s even possible that program design and execution could be decoupled. You could seed even more awesome work if another organization is willing to execute on a program after it’s been derisked by showing that it is both not impossible and in fact possible. These are just a few of the possibilities!
If you want to work with me on the second step, reach out or just stay in the loop.

So many thanks to the people who gave well-thought-out feedback and encouragement on this: Andy Matuschak, Sam Arbesman, Martin Permin, Rebecca Xavier, Jason Crawford, Luke Constable, Mark McGranaghan, and Cheryl Reinhardt. Thanks also to Mark Micire, Anna Goldstein, Erica Fuchs, Michael Goldblatt, and others for letting me ply them with many annoying questions.

The styling and interface of this website is mostly derived from Andy Matuschak and Michael Nielsen, “How can we develop transformative tools for thought?”, https://numinous.productions/ttft, San Francisco (2019)


Footnotes

13. The Heilmeier Catechism (https://www.darpa.mil/work-with-us/heilmeier-catechism)

Changelog

  1. June 18 2020: expanded on open questions beyond just the questions

License

This work is licensed under a Creative Commons Attribution 4.0 International License. This means you’re free to copy, share, and build on the work, provided you attribute it appropriately. Please click on the following license link for details: Creative Commons License
