Think No More: What Happens When We Stop Treating AI Technology as Neutral

  • claudiotancawk
  • Jan 19
  • 4 min read

Last week, I walked into Hallie J. Stern's Think No More event at Arizona State University's McCain Institute in Washington, DC, expecting a conversation about artificial intelligence, power, and systems. I walked out with something far more unsettling and far more useful: a clearer sense of responsibility.

Think No More was a reckoning with the present. Over two days, technologists, historians, security experts, and critical thinkers dismantled a comforting myth many of us still carry: that systems are neutral, that progress is inevitable, and that technology "happens" to us. What emerged instead was a sobering truth: every system reflects human choices, values, and blind spots. And those choices have consequences.


That framing matters deeply to how I see leadership.


What I Encountered

The two days were deliberately unsettling. We saw how prediction markets now allow insiders to profit from classified military operations. One presenter documented how the market-implied probability of a covert strike spiked from 8.5% to 87% in the 24 hours before it happened; someone with inside information had turned national security into a betting opportunity.


Dan Manning's session on AI for decision-makers drove home a critical point: when patterns break, or an AI faces novelty, it hallucinates to please. A broken pattern yields a made-up answer, dressed in confident language that masks fundamental incomprehension.


Greg Lindsay's threatcasting workshop immersed us in a scenario where AI recommends military strike targets. The exercise exposed automation bias at its most dangerous: operators deferring to machine confidence even when human judgment screams otherwise. Technology becomes the excuse.


We explored how America's critical infrastructure depends on supply chains so fragmented that a hurricane in rural North Carolina can halt drone manufacturing nationwide. Chinese sensors now control the majority of US water treatment systems, and four payment processing companies manage 95% of gas station transactions, with single points of failure hiding in plain sight.


We watched AI try – and spectacularly fail – to understand human humor, exposing the gulf between pattern recognition and actual comprehension.

Each session deconstructed a different myth about control, security, or inevitability. And each left the same question hanging: who benefits from our belief that these systems are too complex to change?


Prediction Is Power—And Power Is Never Abstract

One of the most striking threads throughout the event was how prediction has become a form of power. Not prediction in the cinematic sense of foresight or prophecy, but the quieter, more insidious version embedded in algorithms, risk models, and automated decision-making.


Sean Anthony Guillory, Ph.D., and Daniel Zimmermann presented compelling evidence of how predictive systems increasingly shape who gets access to resources, who is flagged as a risk, who is deemed credible, and who is quietly excluded. These systems don't just reflect reality—they actively construct it. When prediction becomes policy, and models become authority, accountability can disappear unless leaders insist on it.


What stayed with me: the more complex a system becomes, the easier it is for responsibility to evaporate. That's not a technical failure. It's a leadership one.


The Myth of Impenetrable Systems

Another powerful theme was the danger of believing systems are too complex to challenge. "The system decided" has become a convenient phrase, one that absolves humans of agency.


But systems are built. Maintained. Funded. Protected. Someone benefits from their opacity.


Experts reminded us that complexity is often used as a shield. When we uncritically accept that shield, we stop asking who designed the system, whose interests it serves, and who bears the cost when it fails. History shows us that the most harmful systems are rarely the ones no one understands—they are the ones no one feels empowered to question.


Real change has never come from deferring to complexity. It comes from insisting on clarity, even when clarity is uncomfortable.


The Think No More discussions reinforced this principle: complexity is often weaponized to avoid accountability. When we say "the algorithm decided" or "the market determines" or "donors won't support that," we're really saying "I don't want responsibility for this choice."


Authentic leadership means owning the choice and redesigning the system if necessary.


What This Means for Leadership Today

For me, Think No More reinforced a belief I carry into every role: leadership is not about mastery over systems—it's about stewardship within them.

Good leaders don't outsource ethics to technology. They don't confuse efficiency with justice. They don't hide behind process when people are harmed.


Instead, they ask harder questions:

  • Who designed this system, and why?

  • Who is missing from the room where decisions are made?

  • What incentives are shaping behavior beneath the surface?

  • And crucially: what happens to people when this goes wrong?


These are not abstract questions. They are operational ones. Cultural ones. Moral ones.


Why This Conversation Matters Now

We're at an inflection point. AI is embedding automated decisions into infrastructure faster than governance can adapt. Philanthropy and development financing are contracting while needs expand. Corporations are rethinking stakeholder capitalism.

In each domain, the same dynamic: systems that worked (or seemed to work) are breaking down, and leaders are discovering they don't actually understand how those systems function—or who benefits when they fail.


Although the Think No More conversations centered on AI, security, and infrastructure, the implications reach far beyond technology. The same dynamics show up in global health, public policy, philanthropy, and large institutions of every kind. I've seen firsthand how well-intentioned systems can drift away from the communities they were meant to serve. Metrics replace meaning. Speed replaces trust. And eventually, impact becomes something we talk about rather than something people feel.


Choosing to Think (and Act) Differently

The event's title is intentionally provocative. "Think No More" doesn't mean stop thinking. It means stop thinking automatically. Stop accepting defaults. Stop assuming someone else has done the ethical work for us.

The challenge isn't just to be thoughtful; it's to be accountable. To build systems that serve people, not the reverse. To remember that every technical choice is a moral choice, whether we acknowledge it or not.

Think No More didn't offer easy answers. It offered something better: a shared refusal to be passive in the face of complexity. And in a world increasingly governed by systems we're told not to question, that refusal may be the most critical leadership skill of all.

Which systems in your organization would you question if you had the authority? What complexity are you accepting as inevitable that might actually be by design? I'd welcome your thoughts.

