Triple Illusion: How Social Media and Tokenization Distort the Fabric of Human Relations
How Generative AI Appears to Restore a Lost Reality While Eroding Human Relations
Social media is widely regarded as a threat to the integrity of society, impacting not only public discourse but also the psychological well-being of its users, particularly minors. Before addressing the deeper problem, I want to clarify two critical points up front.
To start, while some defend social media platforms on free speech grounds, it is essential to separate the protection of free expression from the regulation of algorithms that shape and amplify online interactions. Unlike individual speech, algorithms are not a fundamental right and must therefore be subject to oversight. However, freedom of expression still carries responsibilities and should face limits when it infringes on others’ rights, promotes harm, or threatens public safety and social cohesion.
With these distinctions in mind, concerns about misinformation and malicious content, while important, only touch the surface of the issue. This analysis delves into a deeper concern: the illusion of reality loss that data, social media, and generative AI together create. Drawing on the new realist school of thought, particularly the work of Markus Gabriel, this essay explores how the digital sphere fosters a progressive detachment from genuine human connection and an erosion of shared reality.
To build this argument, the analysis will first define an ontology of the social and clarify the distinction between sociality (rooted in genuine human relations) and society (an abstract system of coordination). This distinction also explains why a “world society” is impossible, underscoring the challenges facing softer forms of global governance. With this framework established, I will examine how social media, though outwardly “social,” generates a double distortion that undermines true sociality: it abstracts the depth of human interactions into surface-level tokens and eradicates the shared truth that usually helps to calibrate misperception and disagreement. Finally, I will analyze how generative AI compounds these distortions, merging them into a pervasive synthetic layer that fundamentally alters our perception of reality and the role of human relationships.
In sum, this “triple illusion” emerges from three levels of distortion: (1) data stripped of spirit (Geist), capturing only our outer state for monetization; (2) a virtual space that mimics genuine social relations but lacks shared truths; and (3) generative AI’s production of synthetic perspectives that imitate objective reality while detaching users from authentic, embodied connections. This is what makes generative AI so profound: it simulates the objective reality that was previously lost, thereby accelerating societal atomization, dehumanization, and a loss of diversity in social relations.
Defining an Ontology of the Social
To understand social media’s impact, we must first grasp the foundation of human relations. In Gabriel’s ontology, a field of sense (Sinnfeld) refers to a specific domain where context-dependent meaning emerges. Each field of sense provides a framework for interpreting reality, establishing boundaries within which certain objects, facts, and truths hold meaning. These fields are inherently plural and account for the diversity of human experience, without an overarching, all-encompassing structure. This lack of a single structure preserves plurality and freedom, while the "field" still determines the boundaries of reality and truth.
Within this conceptual framework, sociality emerges as a distinct form of interaction. Sociality is embodied, real, and particular, created through the alignment of diverse perspectives within shared environments. Unlike other forms of meaning within a field of sense, sociality relies on a shared object or truth that serves as an anchor for triangulation—a reference point enabling individuals with differing views to align toward a common understanding. Since we can be mistaken, the shared object enables us to correct our opinions within a given field of sense. Sociality possesses Geist (spirit), which is an embodied awareness, an immediate perception of “nowness” that connects individuals in a shared moment.
In everyday social existence, we engage with a multitude of such fields of sense, each grounding social interactions in specific contexts while preserving diversity. Without overarching structures, true social interaction depends on managing dissent—coordinating individual viewpoints around shared anchors to foster productive diversity. This process presumes a basic human agreement; as humans, we owe each other mutual respect, balanced by morality to mediate our obligations and conflicts.
The Distinction of Society and the Impossibility of a World Society
While sociality is grounded in embodied, spirit-filled interactions within specific contexts, society represents a broader, more abstract organizational system—akin to Ferdinand Tönnies’ (1887) distinction between “community” (Gemeinschaft) and “society” (Gesellschaft). Society encompasses the entirety of social relations, including socioeconomic transactions, laws, and institutions, yet it lacks the embodied Geist that defines genuine sociality. As a result, society operates primarily as a framework for functional coordination rather than a true domain of meaningful, present-moment connections. While laws and institutions provide structure, they do not serve as shared objects in the ontological sense that anchors genuine social interaction. Society is thus structured around collective transactions and rules, but remains ontologically distinct from the immediacy and ethical depth that sociality requires.
To tackle global issues, such as the governance of AI, there are frequent calls for global collaboration. Understanding the challenges of such collaboration, however, requires grasping why a global society is impossible. Gabriel’s distinction between fields of sense, sociality, and society clarifies why a world society, or global sociality, is fundamentally unattainable. While society can expand in political and economic terms, forming vast networks of transactional relationships, these structures inherently lack the qualities essential to true sociality: embodiment, shared ethical values, and a diversity of perspectives managed within concrete, context-bound interactions. Nor can society itself scale globally without losing cohesion, as it is bound by cultural, legal, and ethical particularities that resist universal application. Thus, while society can coordinate relations on a grand scale, it remains ontologically distinct from the embodied, ethical interaction that defines genuine social connection. (This lack of a global social ontology may partly explain why realist theories, such as John Mearsheimer’s offensive realism, perceive international relations as inherently competitive and power-driven rather than cooperative. In contrast, Yan Xuetong’s Struggle for Dominance introduces a moral dimension, aligning it more closely with New Realism. I will discuss this further in one of the next Stacks.)
Social Media: Illusion within the Illusion and the Loss of Reality
Like society, social media functions as a coordinating framework that is “social” to an extent. It organizes users’ behaviors, facilitating intersections of perspectives represented through data generated by real users (though complicated by bots, trolls, and malicious actors). Social media platforms, much like society, operate as large-scale structures for coordinating interactions: networks of socio-digital transactions governed by algorithms and profit motives. However, social media lacks the historical, cultural, and normative foundations that give depth to societal institutions. Instead, it introduces two interconnected layers of reality distortion, resulting in what Markus Gabriel describes as an “illusion of reality loss” (Illusion des Wirklichkeitsverlusts). To grasp this double distortion, and why it is serious even though it “only” creates an illusion of reality loss, I will examine each layer step by step.
1. First Layer of Distortion: Data Lacks Spirit, yet Spirit is Extracted
The first layer of distortion in social media lies in how data abstracts and distorts the essence of human interaction. Data, in this context, captures only the observable, external aspects of our actions—our words, reactions, and gestures—while lacking the spirit that gives depth to genuine sociality. This outer form of human experience, stripped of its inner qualities like ethical awareness, presence, and emotional depth, is merely raw information, devoid of meaning on its own.
Although data lacks this inner spirit, social networks tap into our spirit by extracting and commercializing these outward expressions. Social platforms monetize this external layer of our experience—both what we explicitly share and what algorithms infer—turning our thoughts, feelings, and perspectives into commodities. In this way, social media captures and repurposes only a shallow reflection of human interaction, leaving the deeper, embodied experience of social connection outside the reach of its data-driven model.
2. Second Layer of Distortion: Sociality Without Shared Reality
The second layer of distortion arises not in the building of social networks—an endeavor involving genuine sociality between technology and people—but in the way these platforms are used, which distorts social interactions by removing the shared, verifiable objects that ground human connection. While the development of social networks is based on true sociality, their usage transforms interactions into a mimicry of social connection, detached from a shared reality.
Social media platforms organize data into a pseudo-social environment, where interactions might appear rich in expression but lack the grounding in common truths or ethical depth that defines true sociality. Users contribute personal content—updates, images, ideas—that originates from real experiences but becomes mediated by algorithms designed to amplify engagement rather than foster genuine, multi-dimensional understanding. This curated digital self becomes abstracted into representations that reinforce personal views, creating a space where users express freely without alignment around a common, verifiable object.
In the absence of shared reference points, algorithms further intensify this distortion by filtering content to reinforce individual beliefs, creating echo chambers that restrict exposure to diverse perspectives. This isolated, self-reinforcing environment mirrors a kind of digital postmodernism, where personal expression is untethered from a shared reality, leading users to feel a profound reality loss. Without the possibility of grounding in shared truths, users find themselves in a fragmented space where sociality becomes relativist, and the concept of an objective reality seems diluted.
Through these processes, social media creates a distorted sense of sociality that feels immediate but lacks the cohesion and substance of real human engagement. Rather than fostering connection, the platform’s structure fuels individual perspectives disconnected from a shared understanding, leading to an environment where the concept of a grounded, objective reality becomes obscured and fragmented. In such an environment, misinformation and hate speech can easily flourish.
Generative AI: Restoring Reality Loss yet Eroding the Foundation of Human Relations
With the integration of generative AI, digital platforms introduce a new layer of distortion that reshapes reality perception. As highlighted above, data itself captures only the outer form of human interaction, recording observable actions but lacking the inner state of spirit and shared meaning. Social media platforms compound this distortion by providing a space to share and promote individual perspectives, yet these interactions remain disconnected from a common, verifiable reality. Now, generative AI intensifies this process by not only circulating user-generated perspectives but actively generating content that mimics the objectivity lost on traditional social media platforms. AI reproduces patterns that appear contextually accurate, yet this objectivity is purely mathematical. It rests not on genuine causality but on arithmetic over real numbers: the results of multiplying inputs by learned weights and summing them, layer after layer, within a predefined function tuned across billions of parameters. Generative AI thus fuses the initial distortions into a single, all-encompassing layer, creating an immersive simulation of reality that seems to recover what was previously lost. Yet this recovery is synthetic, still lacking any true shared foundation. Correcting human misperceptions and managing disagreement, a core aspect of sociality, is now carried out by the machine rather than through social interaction and genuinely shared objects. This is what makes generative AI so powerful, with all its consequences.
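To make the notion of “synthetic objectivity” concrete, the following minimal sketch (purely illustrative, with arbitrary made-up weights, not drawn from any real model) shows that a neural network’s output is nothing more than weighted sums of real numbers passed through fixed functions, layer after layer:

```python
import math

def layer(x, weights, biases):
    """One dense layer: each output is a weighted sum of the inputs plus a
    bias, passed through a nonlinearity. Only arithmetic on real numbers."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

# A toy two-layer "model" with fixed, arbitrary weights (illustrative only;
# real systems repeat this pattern across billions of learned parameters).
W1 = [[0.5, -0.2], [0.1, 0.8]]
b1 = [0.0, 0.1]
W2 = [[0.3, -0.7]]
b2 = [0.05]

x = [1.0, 2.0]        # an "input token," already reduced to numbers
h = layer(x, W1, b1)  # hidden representation: weighted sums of the input
y = layer(h, W2, b2)  # the "prediction": another weighted sum of sums
```

However contextually plausible the output may look at scale, nothing in this computation refers to a shared object in the world; it is deterministic arithmetic over whatever numbers the training data and weights happen to supply.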
As generative AI becomes pervasive, users increasingly trust this synthetic objectivity and begin to reinternalize it as truth, relying less on physical human relationships to establish and verify reality. This new trust subtly shifts the role of human relations, rendering them less necessary for grounding shared understanding. Instead, AI-synthesized “truths” become self-sustaining, trusted independently of physical interaction or direct social bonds. Over time, this synthetic framework for truth erodes the ontological foundation of human relations, depriving society of the core relational structures that once fostered mutual understanding and genuine connection.
The consequences are profound: as users rely on synthetic objects as sources of truth, they disengage from the physical, embodied human connections that have historically anchored society. This shift not only reshapes social expectations towards superficial, transactional interactions but also reinforces a fragmented form of sociality, where the diversity of perspectives and ethical depth inherent in true human interaction are steadily diminished. Society risks evolving into a state of fragmented uniformity, where AI-generated replicas create a veneer of connection while undermining the complex, ontological core of human relations.
In this way, generative AI does not merely offer a convenient layer of synthesized truth but fundamentally redefines the nature of reality perception and human connection. By displacing verifiable, shared objects with statistical approximations and simulated triangulation, it risks restructuring sociality itself, rendering authentic, embodied human interactions increasingly obsolete in favor of a trust in synthetic representations that fulfill the function of truth without the depth of genuine social engagement. Yes, hate speech, disinformation, and other malicious forms of communication are serious issues, as are the risks of homogenizing and normalizing public opinion. But relying on tokens that redefine our inner state and shared spirit will impact society even more fundamentally. However, as long as we don’t know what consciousness truly is or where it resides, at least our inner state remains ours—for now. “Welcome to the desert of the real.”


