AI and the Discipline of Thinking: A Personal Journey, and How Artificial Intelligence Is Reshaping Human Judgement and Decision Making


Written by: Nuno Dimas


To understand what artificial intelligence represents, it is useful to step away from the urgency and noise of the present and look backwards, not nostalgically, but structurally. I belong to a generation that did not simply adopt technology but crossed into it. We moved from a world defined by paper, memory, handwriting, black-and-white television, mechanical scales, telex, and mental calculation, into one defined by computers, spreadsheets, real-time market systems, mobile connectivity, algorithmic execution, neural networks, and now artificial intelligence. That transition is not merely historical detail, it is an analytical advantage. We remember what it means to think without machines, and we also understand what machines have made possible. That dual perspective is increasingly rare, and it is precisely what is missing from much of the current debate around AI. It offers a direct understanding of how technology changes human thinking.

The question is often framed in simplistic terms, whether we should use artificial intelligence to write, think, analyse, or create, as if the issue were one of permission. But every meaningful technological shift I have experienced has been accompanied by the same concern, expressed in different languages but driven by the same intuition. Digital calculators would make us mentally lazy. Computers would weaken memory. GPS would destroy our sense of orientation. Market systems would replace judgment. Internet trading would eliminate the human element. Now artificial intelligence, we are told, will end original thinking. The fear is not new. Only the tool is. Every technological shift has followed the same path, resistance, adoption, dependency and transformation. The real question is not whether we use it, but how it reshapes the structure of decision making.

As a child, technology barely entered my daily life. We built things ourselves, spent long hours outdoors, and solved problems directly. In local shops, calculations were often performed mentally or written on paper, weights were measured using physical scales and counterweights. In banks, processes were manual, communication relied on telephones and telex, and decisions were shaped by relationships and context as much as by data. Nothing about that world felt deficient. It was slower, certainly, less efficient and less scalable, but it demanded something that is increasingly underdeveloped today, the consistent exercise of internal capability. At school, that expectation was reinforced, we memorised multiplication tables, geography, grammar, everything was written by hand, everything had to be retained. I am not romanticising that system, many aspects were unnecessarily rigid, but the underlying principle was clear. Thinking was not outsourced. The brain was the primary instrument.

The first disruption to that model in my life came with the arrival of digital calculators. They improved speed and accuracy, and immediately triggered a concern that has repeated itself ever since, if the machine does the work, will we stop thinking? Looking back, what changed was not thinking itself, but its allocation. The machine removed repetition and reduced friction, allowing attention to move from execution to structure. The same pattern repeated when I encountered early computers. Machines such as the ZX Spectrum were limited by today’s standards, but that was not the point. They expanded the range of what could be attempted. The constraint was no longer computation, it was imagination and discipline. Around the same time, the environment itself was changing. CNN introduced continuous news, MTV reshaped the relationship between media and time, and information ceased to be periodic and became constant. Science fiction offered me a conceptual bridge into that future, series like Star Trek expanded the horizon of possibility, while movies like Blade Runner introduced ambiguity and tension into the idea of technological progress. Technology was no longer just advancement, it was acceleration, complexity, and risk.

At the Naval Academy, that ambiguity was addressed with discipline. Even as new tools emerged, we were still required to calculate position manually using sextants and tables. This was not an exercise in nostalgia, it was a structural safeguard. The principle was simple and enduring, a tool may assist judgment, but it must not replace the ability to form it. Dependency without understanding is a latent vulnerability. That lesson proved durable, becoming more relevant, not less, as technology advanced, and it bears directly on the impact of AI today.

As I moved to university in the UK, studying finance and engineering, and later into financial markets, the pace of change accelerated again. Spreadsheets, modelling tools, and communication systems transformed entire categories of work; tasks that once required hours could be executed in minutes. In many respects, this was unambiguously positive. Productivity increased, information became more accessible, and the capacity to model complex scenarios expanded. But something less visible changed at the same time. In financial markets, particularly in environments such as London, where I spent a significant part of my career, technology did not simply improve execution, it altered the structure of decision making itself. Systems such as Bloomberg and Reuters created continuous information flows, electronic dealing platforms reduced latency, and communication became instantaneous. The market became more efficient, more interconnected, and more dependent on its own infrastructure.

At first, the benefits dominated perception, but over time a different dynamic emerged. As speed increased, the space available for judgment narrowed: there was less time to question assumptions, less opportunity to consider second-order effects, and less tolerance for delay, because delay itself became a disadvantage. Decisions were made faster, but they became increasingly dependent on the validity of the system’s internal logic. The market continued to function, but it became more exposed. This was not theoretical. It became visible during periods of stress, most notably in the collapse of Long-Term Capital Management, where extraordinary intellectual capital, represented by two Nobel laureates, advanced models, and sophisticated technology, did not prevent failure. When assumptions broke down, leverage and interconnection amplified the consequences. The lesson was not that models are useless, that would be simplistic, but that intelligence and technology do not compensate for structural fragility.

Subsequent developments reinforced the same point. High-frequency trading improved efficiency under normal conditions but introduced new pathways for instability. Events such as the 2010 Flash Crash demonstrated how quickly disruption could propagate through an interconnected system that had become too fast and too complex to be fully understood in real time. What changed with technology was not only capability, but the architecture through which consequences travelled. That same structural shift is now occurring with artificial intelligence, at a broader scale and a faster pace, and across all domains, from business strategy to governance.

The current debate often focuses on whether AI should be used, particularly in areas such as writing or analysis. This framing is incomplete. Technology has never been optional in any meaningful sense. Artificial intelligence will be adopted because it expands capability, and therefore the real question is what it does to thinking. AI removes friction at a scale we have not previously experienced. It can generate, summarise, structure, and synthesise, allowing individuals to operate across domains with unprecedented efficiency. However, removing friction does not guarantee better outcomes. It changes the distribution of effort, and that shift introduces a critical distinction: technology can be used to extend cognition, or it can be used to replace effort. Used well, AI allows a disciplined thinker to explore more possibilities, test more ideas, and refine outputs more effectively. Used poorly, it creates fluency without depth, enabling individuals to produce outputs they do not fully understand.

This distinction is not new. Throughout my career, I have delegated tasks to focus on higher level thinking, dictating letters, relying on research prepared by others, debating ideas before execution. Even in the arts, figures such as Jeff Koons rely on collaborators. Delegation is not the issue, unthinking delegation is. The same principle applies to AI, the difference lies in whether the tool enhances thinking or replaces it. Research in education and knowledge work increasingly reflects this tension, suggesting that passive reliance can reduce cognitive engagement while structured use can enhance capability. The underlying principle is consistent with what we have seen before, tools amplify behaviour, they do not replace it.

Arthur C. Clarke observed that sufficiently advanced technology becomes indistinguishable from magic. AI often produces that reaction. It generates outputs with a speed and fluency that appear almost effortless. But describing it as magic removes responsibility. AI is not magic, it is leverage, and leverage always depends on the structure that uses it. Isaac Asimov, in the “Foundation” trilogy, approached the issue from another perspective, exploring how complex systems can become fragile not because of a lack of intelligence, but because of over-reliance on their own internal structures. Sophistication does not guarantee resilience. That insight is directly relevant today.

The risk with AI is not that it will write a memo or produce an article, the risk is that individuals and institutions may become satisfied with fluency and stop demanding understanding. In boardrooms, this is not an abstract concern, AI will influence strategy, governance, capital allocation, and risk management. The question is not whether it will be used, it already is, but whether those using it understand how it changes the nature of judgment. Governance frameworks are emerging, and they are necessary, but they are not sufficient. Governance can structure the use of technology, it cannot substitute for thinking.

Looking back across the different phases of my life, the pattern is remarkably consistent. Each technological shift has created the same divide. Those who integrate the tool with discipline become more capable. Those who rely on it passively become more exposed. Artificial intelligence will accelerate that divide, it will not eliminate it. The future will not belong to those who reject AI, nor to those who surrender to it, but to those who can integrate it without losing the discipline that makes thinking possible. That requires preserving clarity, scepticism, memory, and the willingness to verify, it requires understanding that speed is not intelligence, and it requires accepting that no tool, however powerful, removes the responsibility to think.

Artificial intelligence is not the end of thinking. It is, so far, the most demanding test of it.
