9 Comments
Cavcdr66

Do we believe the government has enough people with the requisite passion, intellect, and energy to make this happen at scale? All the success stories here and elsewhere seem to assume a ‘workforce in waiting’ with the skills and attributes to make this happen — but never ask “if that’s so, why hasn’t it happened yet?”

Matt MacGregor

If done right, the government doesn't need to be an expert in every domain. It just needs to understand the problem, have a vision, and be able to judge progress through operational milestones.

Cavcdr66

I don't disagree, but the government must have individuals who understand the operational problem(s), individuals who understand enough of the technical aspects to validate a solution's feasibility in the required time frame, and individuals who understand the difference between a prototype and serial production. Additionally, from a bureaucratic standpoint, a single individual needs to be held accountable -- not to "take the fall" when there's a failure (and there will be), but to assess whether that failure is significant enough to terminate the effort based on operational needs, and nothing else. Sometimes the answer really is "cut slingload," then carry whatever's been learned into the next project, because the operational value window has closed.

David Sharp

I don’t think that we need a cast of thousands. There are not THAT many situations where this approach is optimal. We need to empower the best Contracting and Agreements Officers to try this approach on really hard problems.

Cavcdr66

Two things I would offer:

This requires much more than Contracting and Agreements Officers. It means ensuring everyone on the effort is fully committed and competent.

While not every program needs this, every ACAT I / MDAP probably does. The cost and risk of failure are too high.

David Sharp

Good point about the team… this strategy is best suited for teams with strong, hands-on leaders, like a DRPM.

Maria Edlenborg Mortensen

This is a really important shift.

But it also creates a new kind of risk: when responsibility is distributed across vendors, it becomes harder to see where decisions actually live — and who carries their consequences over time.

Continuous competition and frequent evaluation improve performance. But they don't address what happens after decisions are made: how they are tracked, how they are challenged, and how learning actually changes the system without destabilizing it.

That layer still seems structurally missing.

And without it, we risk building highly adaptive systems that are difficult to hold accountable in practice.

Matt MacGregor

Good points, for sure, but I do think the regular test and experimentation events help drive performance and accelerate the learning process. That layer can be included if it's executed right.

Maria Edlenborg Mortensen

That’s a fair point — and I agree that regular testing and experimentation can significantly improve performance and learning.

I think the difficulty is that those mechanisms mainly operate within the system's active phase.

What seems harder to capture is what happens across time: how decisions persist, how they're revisited, and how their consequences are carried once they move beyond those test cycles.

That’s where it feels less like a performance problem, and more like a structural one.