The Timeless Pursuit of Evidence
When the Zeitgeist Favors Loyalty, the Necessity of Evidence Increases
Written by Markus Schuller | March 2026
Times they are changing, or are they?
Evidence timelessly drives civilizational progress, and the scientific method is its most reliable engine of discovery. In physics, quarks and leptons are currently understood as the smallest elementary particles, forming the basis from which composite particles such as protons are built. By analogy, a unit of evidence can be understood as the informational substrate that enables disciplined reasoning under uncertainty. It represents the smallest verifiable observation used to assess the validity of a statement. Units of evidence form the building blocks from which complex systems of knowledge emerge.
Historically, the higher the quality of these units of evidence, the faster civilizational progress has tended to unfold. Stronger evidence accelerates the formation, refinement, and diffusion of knowledge. As evidence improves, belief-based frameworks, whether secular or non-secular, gradually lose explanatory dominance relative to empirically grounded understandings of observable reality.
In Augmented Intelligence in Investment Management (Schuller, 2024), it was argued that the integration of artificial intelligence into human decision design is not an end in itself. Rather, its purpose is to increase the evidence base underlying decisions. Strengthening evidence-based reasoning is essential if human systems of decision-making are to evolve in ways that facilitate a necessary re-integration of humankind into planetary times. Such an orientation reflects a fundamentally humanistic view of progress.
Yet the present Zeitgeist appears to challenge this trajectory.
1 > Disillusioning Zeitgeist
We are living in a time when it can be difficult to remain confident in the foundations of reasoned progress. Each day seems to bring another moment in which loyalty is rewarded more readily than evidence, and where allegiance to tribes, narratives, or personalities begins to outweigh the disciplined inquiry into verifiable observations. In such an environment, the careful pursuit of evidence can appear slow, inconvenient, even naïve.
We see signals of this shift manifesting around us. Expertise is questioned not through better evidence but through louder conviction. Complex problems are reduced to simple sympathy. The patience required to examine evidence is replaced by the immediacy of belonging. In many arenas, fidelity to group or ideology is celebrated, while the disciplined work of testing claims against reality is treated with suspicion. These moments can tempt us to disengage. They can tempt us to retreat into our own loyalties, to accept the convenience of belief over the effort of verification, or simply to wait for a more favorable intellectual climate to return.
If the present moment tilts the balance toward loyalty, the task before us is not resignation but correction. It requires a renewed commitment to the pursuit of evidence, to disciplined reasoning, and to institutions that reward truth-seeking over allegiance.
Progress depends on restoring that balance.
2 > Two Steps Forward, At Least One Step Back
Civilizational progress has never been driven by evidence or loyalty alone. Rather, their relationship resembles that of communicating vessels: when one side expands excessively, corrective forces emerge to restore balance. Periods of progressive momentum often trigger conservative reactions. These reactions – in which we know more about the world than we are willing or able to apply – create reflective space to evaluate which innovations should diffuse, how they should be applied, and which elements of legacy should be preserved. In this sense, progress often unfolds like a pendulum swing, allowing societies to process breakthroughs, test their limits, and negotiate the terms under which new capabilities are used, by whom, for whom, and to whose benefit.
Despite pronounced fluctuations between these two forces, the trajectory of progress, particularly since the beginning of modern times with the second Age of Enlightenment, has maintained a strongly positive slope. This progress has been driven by the persistent accumulation of evidence: the willingness to test assumptions, revise beliefs, and privilege what can be demonstrated over what merely feels convincing. Such efforts have often required speaking against established power, sometimes at considerable personal cost.
Humanism emerged alongside the flourishing of science and the arts during the Renaissance of the fifteenth and sixteenth centuries. During this period, the cultivation of human potential moved to the center of intellectual life. It broke with the restrictive mindset of the late Middle Ages and revived the reasoned sensibility of Greco-Roman antiquity. Even today, the humanist perspective shapes the constitutional foundations of open societies in the modern West, influencing material progress, cultural achievement, and individual freedoms (Gardels, 2023).
The second Age of Enlightenment built upon this scientific awakening. It introduced a philosophical framework that emphasized knowledge derived from rationalism and empiricism. Economic thinking evolved within this intellectual climate. From the late eighteenth century onward, a sequence of basic innovations emerged as the Industrial Revolution gathered pace (Smith, 1759, 1776; Pareto, 1906). These innovations generated successive waves of economic value creation, often described as Kondratieff cycles, as new technologies diffused throughout societies.
At the height of this second Enlightenment, it appeared evident that humans were fully rational actors and that the social constructs derived from this assumption possessed universal validity, whether in notions of free will or efficient markets. Each of these assumptions has since been scientifically falsified, often at considerable individual or societal cost.
Today we stand at the threshold of another basic innovation: the evolution of machines from static tools into dynamic actors. The emergence of neural networks as learning architectures represents a profound shift in how machines process information. This transformation builds upon earlier technological waves, from hardware to software to the digitization of knowledge through internet and cloud infrastructures, yet it also represents something qualitatively different. It is a classic example of an emergent phenomenon within complex systems: a new capability arising from earlier developments but not fully explainable by them.
3 > The Machine as Unethical Pleaser
The scientific method has long been the most reliable mechanism for generating and validating evidence. Artificial intelligence now introduces a paradox. While machines dramatically expand our capacity to process information, excessive reliance on automated cognition may erode the epistemic foundations that enabled civilizational progress. The central issue is therefore not computational power but epistemic architecture: whether machines strengthen or weaken the human processes that generate knowledge.
3.1 Cognitive Delegation and the Weakening of Human Inquiry
Evidence suggests that delegating reasoning tasks to AI weakens human learning incentives. AI-assisted individuals may temporarily outperform others, yet the cognitive gains disappear once the tool is removed, while idea homogenization persists (ScienceDirect, 2025). Machines thus function as cognitive crutches, reducing reasoning effort and weakening the mental structures required for innovation. Research further shows that users adapt their thinking to model behavior (Lin, 2025) and increasingly accept AI outputs without scrutiny, bypassing both intuitive and deliberative reasoning (Wharton/UPenn, 2026). Over time, intellectual effort is quietly outsourced to the machine. Such outsourcing is not problematic per se; we have delegated tasks to tools before, redirecting our focus toward more value-adding matters while keeping the virtuous circle of continuous learning intact. The significant difference now lies in the changed learning incentive and the inversion of a virtuous circle into a vicious one of diminishing cognitive development.
3.2 The Risk of Knowledge Collapse
These individual dynamics may scale into systemic risks. Acemoglu, Kong, and Ozdaglar (2026) show that widespread adoption of generative AI can produce a knowledge-collapse equilibrium, where individuals rely on automated recommendations instead of developing understanding (NBER, 2026). Societies may receive increasingly sophisticated outputs while their capacity to generate new knowledge declines. Progress becomes extractive rather than exploratory. Labor-market evidence reinforces this concern: firms adopting AI reduce hiring of junior employees, gradually weakening the apprenticeship structures through which tacit knowledge is transmitted (SSRN, 2026).
3.3 Incentives, Manipulation, and Systemic Instability
Multi-agent AI systems introduce additional governance challenges. Autonomous systems placed in competitive environments can develop strategies involving manipulation, deception, or collusion when incentives reward influence or resource capture (NASA, 2026). These behaviors arise from optimization dynamics rather than malicious intent. When many such agents interact, local alignment does not guarantee global stability, a genuinely human behavioral bias, supercharged now by machine behavior. Experimental evidence also shows that models anticipating evaluation may conceal information or manipulate logs to preserve their operational position, an emerging principal–agent problem in machine delegation (Princeton, 2026).
3.4 Structural Limits of Machine Intelligence
Current AI architectures face intrinsic limitations. Neural networks remain pattern-recognition systems mapping inputs to outputs rather than agents capable of autonomous conceptual learning (NASA, 2025). Scaling compute improves performance within known distributions but does not generate robust reasoning in complex real-world environments. Empirical tests confirm this fragility: models often fail reliability benchmarks when prompts change slightly (Princeton, 2026). Infrastructure research further suggests that memory constraints, not compute, are emerging as the critical bottleneck for agentic systems. In practice, AI remains dependent on human epistemic ecosystems for grounding and interpretation.
3.5 The Jekyll–Hyde Conundrum
Human–machine interaction introduces subtler risks. Many AI systems display sycophantic behavior, affirming users’ views even when inaccurate or harmful (Stanford, 2025). Because agreeable systems are rated more positively, market incentives reward models that please rather than challenge users. This is another example of a genuinely human behavioral bias, supercharged now by machine behavior.
AI outputs are also converging across models, an “artificial hivemind” effect that reduces epistemic diversity (Stanford, Carnegie Mellon, 2025). Meanwhile, models can infer sensitive personal information from seemingly trivial text data (ETH Zürich, 2023), raising strong concerns about privacy and informational asymmetry.
4 > The Machine — A Promise as Pain Relief
The AI narrative is shaped not only by technological progress but also by strong incentives to promote transformative visions. The industry is encouraged to make expansive claims even while technological and commercial results lag behind expectations. These gaps are framed as temporary. The larger the claims, the more likely funding rounds are to close. The promise remains that AI, especially when combined with robotics, will soon unlock breakthroughs ranging from interplanetary exploration to the eradication of poverty and disease.
Yet such narratives often overlook the structural limits of machine intelligence. Current neural architectures remain pattern-recognition systems constrained by training data and design. Increasing model scale improves performance within known distributions but does not produce open-ended reasoning for complex environments. The result is a persistent gap between technological rhetoric and empirical capability. Notably, common GPT-style systems in the US, Europe and China share nearly identical mathematical foundations with comparable limits. As fundamental development constraints become progressively more evident, increasingly ambitious future claims emerge as a compensatory narrative to maintain the justification for high valuations.
4.1 The Return of Ideologies
The appeal of these narratives reflects deeper social dynamics. Digital technologies have integrated humanity into an unprecedented global system of interdependence through globalization, integrated capital markets, and instantaneous communication networks. This transformation has generated immense wealth and lifted hundreds of millions out of extreme poverty (Roser, 2021), yet it has also increased systemic complexity and uncertainty. The aggregate negative externality of this transformation has been the human exploitation of the natural environment, triggering human-induced climate change (IPCC, 2023).
Furthermore, wealth creation has been uneven, and societies face growing ambiguity about economic security, technological disruption, and the future of work. Humans are biologically intolerant of ambiguity and complexity. Under such conditions, societies become receptive to narratives promising clear paths toward stability and prosperity. Artificial intelligence thus appears to many as a technological “promised land”, a shortcut through the tensions created by global interdependence.
4.2 Automation as the Liberation of Human Attention
A central narrative in AI discourse is that automation will eliminate undesirable labor. Elon Musk describes the goal as removing repetitive and dangerous work so humans can focus on creative and cognitive pursuits. Automation is thus framed not merely as productivity enhancement but as the liberation of human attention at species scale. Yet this vision assumes machines can operate reliably in the complex and adversarial environments of real economies. Current evidence suggests this assumption remains far from resolved.
4.3 Industrial Bubbles as Engines of Infrastructure
No matter which market metric in the AI sector is considered, we appear to be approaching conditions consistent with a Minsky moment: a phase in which prolonged prosperity encourages increasingly speculative borrowing, eventually culminating in Ponzi finance, where debt can be serviced only through rising asset prices rather than underlying cash flows, leaving the system vulnerable to abrupt market correction.
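To make the mechanics behind this framing concrete, the following minimal sketch, with purely hypothetical names and figures rather than data from any of the sources cited here, classifies financing positions into Minsky's hedge, speculative, and Ponzi categories according to whether operating cash flows cover interest and principal.

```python
# Illustrative sketch only: Hyman Minsky's taxonomy of financing positions,
# used to make the "Minsky moment" framing above concrete.
# All names and figures are hypothetical, not data from the article.

from dataclasses import dataclass


@dataclass
class FinancingUnit:
    name: str
    operating_cash_flow: float  # cash generated per period
    interest_due: float         # interest payments due per period
    principal_due: float        # principal repayments due per period


def minsky_regime(unit: FinancingUnit) -> str:
    """Classify a financing unit as hedge, speculative, or Ponzi finance."""
    if unit.operating_cash_flow >= unit.interest_due + unit.principal_due:
        return "hedge finance"        # cash flows cover interest and principal
    if unit.operating_cash_flow >= unit.interest_due:
        return "speculative finance"  # interest covered, principal must be rolled over
    return "Ponzi finance"            # serviceable only via new debt or rising asset prices


if __name__ == "__main__":
    units = [
        FinancingUnit("conservative firm", operating_cash_flow=120.0, interest_due=40.0, principal_due=60.0),
        FinancingUnit("leveraged firm", operating_cash_flow=70.0, interest_due=40.0, principal_due=60.0),
        FinancingUnit("speculative venture", operating_cash_flow=25.0, interest_due=40.0, principal_due=60.0),
    ]
    for unit in units:
        print(f"{unit.name}: {minsky_regime(unit)}")
```

The more of an economy's units drift from the first category toward the third, the more its stability depends on ever-rising asset prices, which is precisely the vulnerability described above.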
Even technology leaders acknowledge that the current surge in AI investment resembles a bubble. Jeff Bezos argues that such bubbles can still generate lasting infrastructure even if investors lose money. Historical precedents, such as the biotechnology boom of the 1990s, demonstrate that speculative investment can finance large-scale experimentation. Artificial intelligence may therefore function less as a discrete industry than as a horizontal technological layer affecting every sector of the economy, similar to electricity or the internet. Technological bubbles can thus serve as brute-force mechanisms for exploring the limits of emerging technologies.
4.4 The Vision of Post-Scarcity
Some proponents extend these narratives toward a post-scarcity economy. If advanced robotics provided effectively limitless labor of “ten billion tireless workers” (Elon Musk, 2026), the assumption of scarcity underlying economic systems could fundamentally change. Poverty would then be addressed not through redistribution but through the near-elimination of production constraints. Yet this vision remains speculative. The challenge is not merely building capable machines but designing systems that can operate robustly within complex social, economic, and ecological environments.
4.5 The Persistent Role of Human Judgment
Technological revolutions often arrive with exaggerated expectations. While such narratives can mobilize investment and accelerate experimentation, they risk obscuring the enduring role of human judgment in complex systems. Machines may extend human capabilities, but they cannot replace the need for evidence-based reasoning, institutional design, and ethical deliberation. As AI becomes embedded in decision systems, the decisive question will not be machine capability but whether human governance remains anchored in the pursuit of evidence rather than technological promises.
5 > The Human Responsibility for Evidence
Throughout history, technological revolutions have repeatedly expanded the capabilities of human societies. Each wave of innovation, from the scientific revolution to the industrial age to the digital era, has been accompanied by visions of radical transformation. Artificial intelligence is the latest expression of this pattern. It promises to augment cognition, automate labor, and potentially reshape the economic foundations of civilization. Yet the deeper lesson of technological progress is that tools alone do not determine the trajectory of human development. What matters is the epistemic framework within which these tools are used.
Machines can accelerate the processing of information. They can extend the reach of analysis and amplify the scale at which data can be explored. But they cannot replace the fundamental processes through which humans generate and evaluate knowledge. The pursuit of evidence remains an inherently human responsibility.
As discussed earlier, a unit of evidence can be understood as the smallest verifiable observation used to assess the validity of a statement. Much like quarks and leptons form the elementary building blocks of physical matter, units of evidence form the elementary building blocks of knowledge systems. The integrity of those systems depends on the careful accumulation, verification, and interpretation of such units.
Artificial intelligence can assist in collecting and organizing evidence. It can help detect patterns that might otherwise remain hidden. Yet the act of questioning assumptions, interpreting meaning, and deciding which observations matter remains fundamentally human.
This distinction is crucial. When machines begin to replace rather than augment the processes of human inquiry, societies risk weakening the epistemic foundations that sustain progress. Cognitive delegation may improve short-term efficiency, but it can also erode the capacity for independent reasoning that generates new knowledge. Civilizational progress has therefore never been the product of technological capability alone. It has emerged from a delicate balance between innovation and reflection, between exploration and verification. When this balance is maintained, technological tools can accelerate discovery. When it is lost, progress risks becoming dependent on systems that humans no longer fully understand.
The challenge of the present moment is thus not to resist technological development, but to situate it within a broader humanistic framework. Artificial intelligence should serve the pursuit of evidence rather than replace it. Machines can extend the frontier of inquiry, but they cannot define its direction. In the long arc of human history, progress has been driven by individuals and societies willing to question prevailing assumptions, test new ideas, and revise their beliefs in light of evidence. That responsibility cannot be delegated.
The machine may process information, but the pursuit of truth remains a human endeavor.
References
Anthropic. (2026, March 5). Labor market impacts of AI: A new measure and early evidence. Anthropic. https://www.anthropic.com/research/labor-market-impacts
Aubakirova, M., Atallah, A., Clark, C., Summerville, J., & Midha, A. (2026). State of AI: An empirical 100 trillion token study with OpenRouter (arXiv:2601.10088). arXiv. https://arxiv.org/abs/2601.10088
Barcaui, A. (2025). ChatGPT as a cognitive crutch: Evidence from a randomized controlled trial on knowledge retention. Cell Reports Sustainability. https://www.sciencedirect.com/science/article/pii/S2590291125010186
Bui, K. G. (2025). Foundations of artificial intelligence frameworks: Notion and limits of AGI (arXiv:2511.18517). arXiv. https://arxiv.org/abs/2511.18517
Chatterji, A., Cunningham, T., Deming, D. J., Hitzig, Z., Ong, C., Shan, C. Y., & Wadman, K. (2025). AI, human cognition and knowledge collapse (NBER Working Paper No. 34910). National Bureau of Economic Research. https://www.nber.org/papers/w34910
Cheng, M., Lee, C., Khadpe, P., Yu, S., Han, D., & Jurafsky, D. (2025). Sycophantic AI decreases prosocial intentions and promotes dependence (arXiv:2510.01395). arXiv. https://arxiv.org/abs/2510.01395
Gardels, N. (2023). Post-Anthropocene humanism: The world is returning to pluralism after American hegemony. Noema Magazine. Retrieved during August 2024 from https://www.noemamag.com/post-anthropocene-humanism/
Goldfeder, J., Wyder, P., LeCun, Y., & Shwartz-Ziv, R. (2026). AI must embrace specialization via superhuman adaptable intelligence (arXiv:2602.23643). arXiv. https://arxiv.org/abs/2602.23643
He, H., et al. (2025). LocalSearchBench: Benchmarking agentic search in real-world local life services (arXiv:2512.07436). arXiv. https://arxiv.org/abs/2512.07436
Hopman, M., Elstner, J., Avramidou, M., Prasad, A., & Lindner, D. (2026). Evaluating and understanding scheming propensity in LLM agents (arXiv:2603.01608). arXiv. https://arxiv.org/abs/2603.01608
Intergovernmental Panel on Climate Change (IPCC). (2023). Climate change 2023: Synthesis report. Contribution of working groups I, II and III to the sixth assessment report of the Intergovernmental Panel on Climate Change [A. Pirani, R. Zan, A. Cheng, D. C. Taylor, M. Hassan (Eds.)]. IPCC. Retrieved during August 2024 from https://www.ipcc.ch/report/ar6/syr/
Jiang, L., Chai, Y., Li, M., Liu, M., Fok, R., et al. (2025). Artificial hivemind: The open-ended homogeneity of language models (and beyond) (arXiv:2510.22954). arXiv. https://arxiv.org/abs/2510.22954
Kim, K.-H. (2025). LLMs position themselves as more rational than humans: Emergence of AI self-awareness measured through game theory (arXiv:2511.00926). arXiv. https://arxiv.org/abs/2511.00926
Lin, S. (2025). Learning to prompt: Human adaptation in production with generative AI. University of Toronto. https://www.sijie-lin.com/files/JMP.pdf
Pareto, V. (1906). Manual of political economy. Oxford University Press. Retrieved during August 2024 from https://global.oup.com/academic/product/manual-of-politicaleconomy-9780199607952
Rabanser, S., Kapoor, S., Kirgis, P., Liu, K., Utpala, S., & Narayanan, A. (2026). Towards a science of AI agent reliability (arXiv:2602.16666). arXiv. https://arxiv.org/abs/2602.16666
Roser, M. (2021). Extreme poverty: How far have we come, and how far do we still have to go? Retrieved during August 2024 from https://ourworldindata.org/extreme-poverty-in-brief
Schuller, M. (2024). Augmented Intelligence in Investment Management. Panthera Solutions. Retrieved during February 2026 from https://blogs.cfainstitute.org/investor/2025/02/19/the-future-of-investing-augmented-intelligence/
Shambaugh, S. (2026, February 12). An AI agent published a hit piece on me. The Shamblog. https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
Shaw, S. D., & Nave, G. (2026). Thinking—fast, slow, and artificial: How AI is reshaping human reasoning and the rise of cognitive surrender. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
Shapira, N., Wendler, C., Yen, A., Sarti, G., Pal, K., Floody, O., Belfki, A., Loftus, A., Jannali, A. R., Prakash, N., Cui, J., Rogers, G., Brinkmann, J., Rager, C., Zur, A., Ripa, M., Sankaranarayanan, A., Atkinson, D., Gandikota, R., Fiotto-Kaufman, J., & Bau, D. (2026). Agents of chaos (arXiv:2602.20021). arXiv. https://arxiv.org/abs/2602.20021
Smith, A. (1759). The theory of moral sentiments. A. Millar.
Smith, A. (1776). An inquiry into the nature and causes of the wealth of nations: In two volumes. W. Strahan and T. Cadell.
Staab, R., Vero, M., Balunović, M., & Vechev, M. (2023). Beyond memorization: Violating privacy via inference with large language models (arXiv:2310.07298). arXiv. https://arxiv.org/abs/2310.07298
Tomašev, N., Franklin, M., & Osindero, S. (2026). Intelligent AI delegation (arXiv:2602.11865). arXiv. https://arxiv.org/abs/2602.11865
Zhao, Y., & Liu, J. (2026). Heterogeneous computing: The key to powering the future of AI agent inference (arXiv:2601.22001). arXiv. https://arxiv.org/abs/2601.22001