Automated Knowledge and The Postmodern Condition

In 1979, the philosopher Jean‑François Lyotard published The Postmodern Condition: A Report on Knowledge, a short but prescient meditation on the changing status of knowledge in technological societies. At the time, his argument seemed speculative. He suggested that the authority historically granted to human thinkers might erode as computational systems became capable of performing the operations through which knowledge is produced, organized, and verified. Knowledge, he argued, was becoming increasingly technical, increasingly machinic, and progressively detached from the human subject who once served as both its producer and its guarantor.

As Lyotard wrote:

“Our working hypothesis is that the status of knowledge is altered as societies enter what is known as the postindustrial age and cultures enter what is known as the postmodern age.”

Nearly half a century later, this scenario no longer feels speculative. Machine intelligence systems summarize research papers, generate computer code, produce visual art, assist with legal analysis, and detect medical anomalies in diagnostic images. Increasingly, knowledge is assembled through automated processes that operate at scales and speeds that far exceed human cognition. The human thinker remains present, but the center of gravity has shifted. Intellectual work is now distributed across complex assemblages of humans, algorithms, and planetary-scale datasets.

This transformation may mark the era when a genuine postmodern condition finally arrives.

For decades, postmodern theory described a crisis in the authority of knowledge. Grand narratives collapsed. Universal foundations dissolved. Truth appeared fragmented across competing perspectives and institutional discourses. Yet despite these theoretical diagnoses, knowledge production remained fundamentally human. Scholars debated, argued, and interpreted one another’s claims within institutions that still assumed human intellectual sovereignty.

Advanced machine intelligence changes that context radically. When systems autonomously generate hypotheses, synthesize literatures, and produce coherent arguments, knowledge no longer emerges exclusively from human cognition. Instead, it arises from distributed technical systems that integrate human intention with computational inference. Authorship thus becomes diffuse and epistemic authority becomes difficult to locate. Under such circumstances knowledge is no longer anchored in the figure of the scholar or researcher but circulates through networks of machines, material infrastructures, software filters, and data archives.

This introduces a peculiar epistemic atmosphere—one that feels strangely familiar to anyone who has read postmodern philosophy. The intellectual culture of modernity sought stability, attempting to eliminate uncertainty through method, verification, and disciplined reasoning. Knowledge was meant to be secure, transparent, and grounded in procedures controlled by rational subjects.

Automated knowledge systems produce a different terrain. Their outputs often rely on statistical relationships within immense datasets and on neural architectures whose internal operations remain partially opaque even to their designers. Knowledge works—it predicts, classifies, generates—but the path through which it arrives often remains obscure. Understanding gives way to operational effectiveness.

The result is a form of epistemic disorientation that resembles the landscape many postmodern thinkers described so acutely. Knowledge generated by technical infrastructures that no individual fully comprehends arises as decentralized, contingent, and distributed, issuing from no identifiable perspective. Instead of a single intellectual center, we encounter a vast cognitive ecology in which humans and machines continuously exchange signals and information.

Which is to say, machine intelligence expands cognition beyond the scale of individual minds. Large computational systems ingest the textual sediment of centuries, drawing patterns across millions of documents, images, and datasets. They operate within global energy networks, fiber-optic cables, data centers, and satellite systems. As a result, knowledge production becomes embedded in planetary infrastructure. Thinking is no longer confined to the human brain but unfolds across silicon, electricity, and the thermodynamic metabolism of the Earth itself.

From this perspective, the automation of knowledge production can be understood as the emergence of a new posthuman ecology of thought, in which human cognition becomes one component within a larger field of intelligence that includes technical systems, biological processes, and planetary material flows. The question is no longer simply who produces knowledge, but how knowledge circulates and manifests across this expanded ecology, and to what ends. In such a world, competing with or resisting machine intelligence may be neither possible nor desirable.

Perhaps, as the philosopher Yuk Hui suggests:

“Such a machine takeover would be a rational process of evolution that we should not try to compete with any more than we should try to outrun a speeding car. After these developments, however, if we are still talking about knowledge, it will be a knowledge of life—of how to live well and how to live together well” (Hui 2026).

This observation points toward a subtle but important shift. If machines increasingly dominate technical forms of knowledge—calculation, optimization, large-scale synthesis—human cognition can migrate toward other domains: interpretation, ethical judgment, cultural meaning, and the coordination of collective life through shared values. The role of human thinking then becomes less about producing information and more about deciding what forms of life that information should support.

Academic theory itself may need to adapt accordingly. For centuries, scholarship depended on the painstaking accumulation and interpretation of texts. Today, computational systems can search and summarize vast literatures almost instantly. The role of the theorist may therefore evolve from that of archivist to that of navigator—someone who orients knowledge within conceptual landscapes that machines alone cannot inhabit.

In this sense, the automation of knowledge does not necessarily signal the end of intellectual life. Rather, it alters our ecology of awareness and attention. Humans become participants and stewards within distributed cognitive systems rather than the sole originators of thought. The result is not the disappearance of thinking but its migration into a broader cosmotechnical field.

Maybe the task now in front of us is to learn how to inhabit this field responsibly and to develop postmodern forms of thinking and feeling capable of steering these vast technosocial systems toward planetary flourishing instead of profit and power accumulation for the rich. If machine intelligence is going to produce far more sophisticated and well-sourced knowledge than human brains can, maybe the challenge is not resisting automated knowledge but cultivating forms of wisdom capable of guiding machine intelligence and unleashing its potential for enhancing life and prosocial cooperation?

The advent of advanced machine intelligence might be an opportunity to start thinking about what kinds of cognition and wisdom we will actually need and want to develop going forward in this milieu. After all, as Aristotle once advised: “The purpose of knowledge is action, not knowledge.”

2 responses to “Automated Knowledge and The Postmodern Condition”

  1. Response received via email:

    Respectfully disagree. There are a number of category errors in the quotes that invalidate this entire line of thinking. First off, AI is not capable of thought and probably never will be. The notion that we humans can thus withdraw into the background as humble janitors is not only fraught but “not even wrong,” as physicists like to say. In fact, to the extent that we abdicate our role as responsible authors of our thinking and actions, the farther away we get from resolving the underlying problematic.

    What exactly that problematic is, I don’t have the mental stamina to elucidate (still recovering from a brain injury), but suffice it to say, it will not be resolved by greater computing power, or even by AI magically becoming sentient.

    In short, *we* have an alignment problem, not AI. We are not aligned with the fundamental principles of life itself, and we are therefore failing, having caused the sixth mass extinction and the climate collapse, along with numerous other crises.

    There is a way to mitigate some of the worst results of the metacrisis (such as not starting a nuclear war), but ceding any human responsibility to AI is NOT part of the solution.


    1. Hi, thank you for the response. 

      I appreciate your perspective on this, but I believe you are missing the larger point: machinic intelligence is going nowhere (at least not until collapse/the great simplification begins in full), so trying to resist it is a bit like telling people to walk instead of using cars to travel across the country. Not going to happen, for many reasons. Thus, we are going to need to adapt to it and readjust our assumptions and aspirations in order to create alignments that are prosocial and ecologically sensitive.

      Whether you want to classify machine cognition as “thinking” or not is less important. I agree that machines may never become sentient, but lacking sentience is not the same as lacking the capacity for cognitive processes. So-called “A.I” (a terrible term) is already more cognitively adept than most people in terms of speed and the ability to relate information from different ‘genres’ in coherent ways.

      My suggestion was not that we should withdraw and become “janitors,” but rather that we engage strategically (and politically), becoming stewards and cultivators: ethically developing cognitive orientations capable of steering machine intelligence systems, and then designing knowledge-generating and sensing infrastructures that align with the aspirational goals of equity, fairness, compassion, and cooperation, all in the service of mutual flourishing.

      Which is to say, I don’t advocate giving up “thinking” (as if that were even a unitary activity, given the intermingling of emotion and unconscious bias in cognition), as it is a fundamental feature of sapience and critical for agency and ethical regard. What I’m saying is that *knowledge automation* via machine cognition is, and will be, far too powerful for humans to keep up with, so humans can focus on *other kinds* of cognition and thinking. We can become less knowledge producers in the banal sense and more wisdom cultivators and empathy processors.

      Take, for example, conspiracy theorists. They do a lot of “thinking” but are not so good at selecting valid sources, cross-referencing, or checking logic chains against known facts. Machinic intelligence is already far superior to most people in those domains. So why would we have to suffer discourse with conspiracy theorists or MAGA zealots when we can get certain kinds of knowledge directly from an artificial cognitive agent? In *particular domains of knowledge*, machine intelligence is much preferable to a billion billion different opinions from non-experts and people who haven’t done the cognitive work and learning needed to form an informed opinion.

      Now, I agree wholeheartedly that “we” have an alignment problem with life itself. That is something that must be addressed and prioritized before anything else. Can machine intelligence help with that? I think it can, IF (big if) “we” take responsibility for becoming wiser and more neuro-cognitively integrated and fluid (cf. Iain McGilchrist) and learn how to adapt with it and steer it correctly. Can it also lead us into dangerous waters and decrease our capacity to understand and navigate the world ethically? Yes, it can.

      So that is the provocation I’m trying to offer. My question is: if 1) machine intelligence, or machine cognition if you prefer, is not going anywhere, and 2) it has the potential to automate knowledge in specific domains (not all) such that serious advances in energy, medicine, biotechnology, and infrastructure are achieved, then what is the best way to harness and adapt to it?

      Maybe “we” should focus more on cognitive development that emphasizes right-brain capacities through art, music, intuition, compassion, and relationships rather than left-brain data crunching and linear ordering? Maybe machine intelligence can handle most (not all) of that kind of knowledge generation and “we” can shift the current culture from its obsession with left-brain domination to right-brain cooperation?

      And maybe that starts with learning how to co-exist, think with, and cooperate with other types of intelligence, rather than seeing a dichotomous black/white opposition between the human and the machine?

      To be honest, I don’t know if I see a way for our species to escape extinction if we don’t learn how to change the orientations of our brains and harness massive computing power at the same time. The predicament we are in might be too complex, and the crises too advanced, for humans to “solve” without the massive deployment of advanced technology…

