In a recent video, Nate B. Jones argued that as AI makes software and content “infinite,” value is shifting toward 5 specific “safe places.” After watching the video a few times, I believe these 5 pillars offer a roadmap for how we, as public servants, should think about digital transformation and regulation.
Note: The following represents my personal interpretation of these concepts, as a taxpayer and citizen, and does not reflect the official views or policies of my employer.
1. Trust: The new security guard
Nate’s insight: When AI can mimic anyone, “Trust” becomes the scarcest resource. People will gravitate toward platforms that can prove they are safe, real, and secure.
The public service lens: I see this as the government’s core mandate in the digital age. The government’s role isn’t just to use AI, but to provide the verification and certification infrastructure. If citizens can’t tell a government service from an AI scam, the social contract breaks. The focus should be on building “trust layers”: official registries and identity signals that are AI-proof.
2. Context: Protecting the “knowledge base”
Nate’s insight: AI is a generic engine; it only becomes powerful when it has “context” (your files, history, and private data).
The public service lens: For me, this is about data stewardship. Governments hold the ultimate context – health records, tax data, and legal history. Our priority should be creating rules for “context portability.” We must ensure citizens own their data and can safely give and revoke permission for AI agents to use it, preventing private “data lock-in.”
3. Distribution: Curation over quantity
Nate’s insight: When AI creates a million apps, the bottleneck is discovery. The value moves to the “gatekeepers” who decide what gets seen.
The public service lens: In a public sector context, I believe this means ensuring equitable access. We need to ensure that dominant platforms (like app stores or search engines) don’t bury public-interest services under a mountain of AI-generated noise. Public policy should focus on making essential services “discoverable” in an automated world. I also think it relates to how we manage news sources and keep them as bias-resistant as possible. (Freedom of the press!)
4. Taste: The human in the loop
Nate’s insight: AI can execute, but it cannot “judge.” “Taste” is the human ability to decide what is good, ethical, and appropriate.
The public service lens: This suggests a shift in our workforce needs. We don’t just need more coders; we need expert orchestrators. I think we should be investing in “human-in-the-loop” design by training public servants to supervise AI workflows so they reflect community values and public ethics, which an algorithm cannot replicate. This also needs to be a focus in our education systems and in public education more broadly.
5. Liability: Who is accountable?
Nate’s insight: Innovation stops when no one knows who is responsible for a mistake. Companies that accept risk and take on liability will win.
The public service lens: This is the most urgent area for legal frameworks. In sectors like health or law, we need to clarify where the buck stops. My view is that the government’s role is to create “legal certainty” by defining whether the developer, the user, or the agency is responsible when an AI agent makes a high-stakes error.
What this really means to me
Nate’s framework suggests that “building with AI” isn’t about the tech. It’s about the five durable layers surrounding it.
In my opinion, the public sector’s path forward is clear: stop worrying about the “magic” of AI and focus on building the infrastructure for trust, context, distribution, taste, and liability. That is how we ensure a resilient and competitive digital future for the public interest.
Watch the video and let me know what you think over on LinkedIn.