The Perfect Trap
Artificial Intelligence, Digital Surveillance, and the Architecture of Behavior
“If you’re not paying for the product, you are the product.”
The phrase, attributed to Andrew Lewis, circulated for years as a cynical meme about the supposed “free” nature of social networks. Today it feels almost naïve. Because in the meantime, the product is no longer just our attention: it is our digitized interiority.
From Social Networks to Behavioral Profiling
For years, the laboratory of data extraction was visible: social media platforms. We posted political opinions, emotional states, private images, impulsive reactions—inside environments carefully designed to feel engaging and intimate.
The Cambridge Analytica scandal made visible what was already in place: profiling was not merely about selling products, but about shaping perceptions, steering collective decisions, and modulating consent. It was not simple advertising. It was the architecture of behavior.
Generative Artificial Intelligence: From Publishing to Confession
With the arrival of generative artificial intelligence—from OpenAI’s models to those embedded in Google and Microsoft ecosystems—the relationship between individuals and technology has changed radically.
We are no longer publishing content for an indistinct audience. We are speaking to a machine that answers back.
Social networks pushed us to perform.
AI invites us to confess.
We ask it for professional advice, medical clarifications, revisions of delicate communications, psychological interpretations of our doubts. We share projects, fears, ambitions. This is not a public bulletin board; it is an interlocutor perceived as neutral, competent, and always available.
The real transformation lies here: artificial intelligence reduces the distance between thought and articulation. And when it lowers our defenses, it exponentially expands the scope of data collection.
The perfect trap is not forcing you to speak. It is making it feel natural to do so.
AI, Smart Devices, and Biometric Data Collection
Artificial intelligence has not remained confined to the screen. It has entered smartphones, voice-activated earbuds, smart glasses, and wearable devices that measure heart rate, temperature, and sleep quality.
Each device is presented as an extension of human capability: improving memory, health, performance. Yet every extension is also a sensor. And every sensor is a data collection point.
It is no longer just about knowing what we do.
It is about knowing when we do it, with what intensity, and in what physiological state.
Data is no longer an isolated piece of information. It is a variable within a predictive system.
CBDCs, Digital Payments, and Financial Traceability
To this technological landscape we must add the financial dimension. The debate around CBDCs (Central Bank Digital Currencies) introduces a new level of traceability.
Fully monitorable digital payments.
Detailed transaction histories.
The possibility of cross-referencing purchases with location, time, and income.
If financial data is correlated with biometric and conversational data, the implications become profound.
Knowing what we buy is useful to the market.
Knowing why we buy it is power.
China and the West: Two Models of Technological Control
China is often cited as an example of explicit technological control. Integrated ecosystems such as WeChat unify communication, payments, and public services. Facial recognition replaces cards and tickets. The social credit system formalizes the link between behavior and access to services.
It is a visible, declared model, tied to a recognizable authority.
The Western model is different. Not less pervasive—more fragmented and less transparent.
There is no single center of control. There is a constellation of platforms that collect, cross-reference, and optimize data. As Noam Chomsky observed, in modern democracies power does not primarily manifest through explicit censorship, but through defining the boundaries of discourse and perceived possibilities.
There is no need to forbid.
It is enough to structure the environment.
AI integrated into everyday life contributes precisely to creating that environment.
From Digital Avant-Garde to the Center of Power
Early digital cultures—from cyberpunk to hacktivism—emerged as forms of counter-power. Communities around Unix or FreeBSD represented technical and cultural alternatives to corporate giants.
Today we are witnessing a historical inversion.
Large-scale artificial intelligence is backed by governments, sovereign funds, and technology conglomerates. Investments in chips, data centers, and energy infrastructure are not merely economic bets. They are geopolitical strategies.
The technological frontier is no longer peripheral to power. It is its center.
Surveillance as a Service
The question is no longer simply what AI can do.
The question is: why are we investing so heavily in making it omnipresent?
Perhaps because it enables something no authoritarian system in the past ever achieved at this scale: voluntary collaboration in one’s own mapping.
Not surveillance imposed by force, but surveillance accepted in exchange for efficiency.
Not repression, but assistance.
Not prohibition, but optimization.
The Perfect Trap: Control or Voluntary Choice?
As we grow accustomed to speaking with machines, paying with our faces, delegating memory and decision-making to predictive systems, the line between service and infrastructure of control becomes increasingly blurred.
We are no longer merely the product.
We are a continuous stream of data—updated, analyzed, refined.
The perfect trap is not locking us inside.
It is convincing us that we chose it ourselves.