Trending:
AI & Machine Learning

Bruce Schneier: enterprises confuse AI trust with human trust, enabling corporate betrayal

Security expert Bruce Schneier warns enterprises treat AI systems like friends instead of services, creating exploitable trust gaps. The category error lets corporations embed surveillance and bias while users anthropomorphize profit-driven tools. Government regulation of AI controllers, not AI itself, is the fix.

Bruce Schneier, Harvard Kennedy School lecturer and longtime security authority, argues enterprises are making a dangerous category error: confusing interpersonal trust (human intentions) with social trust (system reliability) when deploying AI.

The distinction matters. We trust strangers daily through structured systems: airline safety protocols, food supply chains, traffic laws. These scale because they're predictable, not because we know individual actors. AI should fall into this category. It doesn't.

"We will think of AIs as friends when they're really just services," Schneier said in his 2025 RSA Conference talk. Users anthropomorphize ChatGPT and enterprise agents as helpful assistants while ignoring the corporate interests controlling them. OpenAI, Google, Meta, and Anthropic operate closed ecosystems with undisclosed training data and embedded biases.

The trust confusion creates exploitable gaps. Enterprises deploying AI agents in IoT environments and supply chains treat them as reliable without demanding the transparency required for social trust: what data trained the model, what values shape outputs, what instructions guide behavior. Meanwhile, the surveillance capitalism model means user interactions feed corporate optimization, not user benefit.

Schneier's framework maps to emerging AI security concerns. Data poisoning attacks exploit training pipelines. Model poisoning and backdoor attacks compromise inference. Prompt injection bypasses safeguards. All thrive in opacity.

The fix isn't regulating AI technology itself. "It is the role of government to create trust in society," Schneier argues. That means regulating organizations controlling AI systems, similar to how health codes govern restaurant safety regardless of individual cooks.

The EU AI Act mandates training data transparency. The U.S. lags despite Senate Majority Leader Chuck Schumer's prodding. Schneier's position: social trust requires accountability structures, not faith in corporate benevolence.

Notably, Schneier delivered this framework before the current wave of production AI deployments. CTOs implementing agentic systems should ask: are we treating this like a person or a process? The answer determines what safeguards you need.