First Principles

The Universal Protocol for Scaling Intelligence

QIS reduced to its absolute foundation: If insight is aggregatable to an edge node, and similarity is definable — intelligence scales quadratically. Any entity. Any problem. Any domain.

By Christopher Thomas Trevethan · January 17, 2026

This article strips QIS to its absolute core. No domain-specific examples. No implementation details. Just the fundamental abstraction that makes quadratic intelligence scaling possible — and why it applies universally.

The Core Abstraction

At the most fundamental level, QIS is this:

If insight is aggregatable to an edge, and similarity is definable — intelligence scales quadratically.

That's it. Everything else is implementation.

Let me unpack what this means.

The Three Conditions

QIS applies whenever three conditions are met:

Condition 1: An Entity That Could Benefit From Insight

Something exists that could make better decisions, optimize performance, or improve outcomes if it had access to relevant insight. This entity could be a human, a machine, a robot, an AI agent, an organization, or any system capable of acting on information.

Condition 2: Insight That Is Aggregatable to an Edge Node

The relevant insight has already been generated — by someone or something that experienced a similar situation. AND that insight can be aggregated to a compute node (any compute: phone, edge device, cloud, browser, vehicle, sensor, etc.). If the insight exists but can't be ingested at an edge, QIS can't route it. If it CAN be ingested — from any data source — it can be distilled, deposited, and shared.

Condition 3: Similarity Can Be Defined

There is some method — any method — of determining what makes two situations "similar enough" that insight from one is relevant to the other. This could be expert-defined, AI-determined, learned from data, or inferred from context.

If these three conditions are met, QIS applies. The domain doesn't matter. The entity type doesn't matter. The implementation doesn't matter.
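To make Condition 3 concrete, here is one possible (and deliberately simple) similarity method: cosine similarity over numeric situation vectors with a fixed threshold. Both the metric and the threshold are illustrative stand-ins for whatever expert-defined, learned, or inferred rule a domain actually uses.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two situation feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similar_enough(a, b, threshold=0.9):
    # "Similar enough" is whatever the domain decides; here a fixed
    # cosine threshold stands in for any expert-defined or learned rule.
    return cosine_similarity(a, b) >= threshold
```

Swapping in a different metric, a learned embedding, or a doctor-assigned template changes nothing about the protocol; it only changes who gets routed to whom.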

The Critical Constraint: Aggregatable to Edge

"Aggregatable to an edge node" deserves emphasis. QIS doesn't require moving raw data centrally. It requires that insight can be ingested at whatever compute node serves the entity — whether a phone, a vehicle computer, a cloud API, a Raspberry Pi, or a browser. The edge node ingests from whatever data sources are available, distills the result into an outcome packet, and participates in the network.

If the data source can feed an edge node, QIS applies. Sensors, APIs, databases, manual input, wearables, telemetry — any data source that can reach any compute.

The Flow: How It Works

The Universal QIS Flow
1. Situation → Address: Your current situation, however defined, becomes your routing address. The problem IS the key.
2. Route by Similarity: Your address routes you to all entities that share it — everyone with a sufficiently similar situation.
3. Retrieve Outcomes: You receive outcome packets from those similar entities — distilled insights from their experiences.
4. Synthesize Locally: Your edge node combines these outcomes using any synthesis method to produce YOUR personalized insight.
5. Deposit Your Outcome: When you experience an outcome, you distill and deposit it back — raising the baseline for everyone who shares your situation.

This flow is universal. It doesn't care what the "entity" is, what the "situation" represents, or what "insight" means in your domain.
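As one concrete sketch, the five steps can be expressed in a few lines of Python. Everything here — the in-memory network, the method names, the packet format — is illustrative, not part of the protocol; any routing infrastructure and any synthesis method could be substituted.

```python
class InMemoryNetwork:
    """Minimal stand-in for any routing infrastructure (DHT, vector
    database, pub/sub, ...). All names here are illustrative."""

    def __init__(self):
        self.deposits = {}  # address -> list of deposited outcome packets

    def address_of(self, situation):
        # Step 1: the situation itself, in canonical form, is the key.
        return tuple(sorted(situation.items()))

    def route(self, address):
        # Steps 2-3: everyone who deposited under this address is
        # treated as "similar enough"; return their outcome packets.
        return self.deposits.get(address, [])

    def deposit(self, address, packet):
        # Step 5: raise the baseline for everyone sharing the situation.
        self.deposits.setdefault(address, []).append(packet)


def qis_cycle(situation, network, synthesize, distill):
    address = network.address_of(situation)          # 1. situation -> address
    packets = network.route(address)                 # 2-3. route and retrieve
    insight = synthesize(packets)                    # 4. synthesize locally
    network.deposit(address, distill(situation, insight))  # 5. deposit back
    return insight
```

On a fresh network the first participant gets nothing back (there is nothing to route to yet); every subsequent participant with the same situation benefits from all prior deposits.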

The Math: Why It Scales

N(N-1)/2
N entities sharing a situation create N(N-1)/2 pairwise synthesis opportunities — with only O(log N) communication cost per entity.

This is the quadratic scaling law. It's pure mathematics, independent of domain:

10 entities = 45 synthesis opportunities

100 entities = 4,950 synthesis opportunities

1,000 entities = 499,500 synthesis opportunities

10,000 entities = 49,995,000 synthesis opportunities

Every new participant adds N new pairwise synthesis opportunities, so the collective total grows quadratically with network size, while each entity's individual communication cost grows only logarithmically.
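The figures above are straightforward to verify:

```python
def synthesis_opportunities(n: int) -> int:
    # Pairwise combinations among n entities sharing a situation.
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n} entities = {synthesis_opportunities(n):,} synthesis opportunities")
# 10 entities = 45 synthesis opportunities
# 100 entities = 4,950 synthesis opportunities
# 1,000 entities = 499,500 synthesis opportunities
# 10,000 entities = 49,995,000 synthesis opportunities
```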

The Breadth: Every Layer Is Universal

Look at the full architecture diagram. Every layer shows maximum breadth:

Data Sources

"Anything that produces data"

IoT sensors, APIs, databases, wearables, voice, manual entry, telemetry, environmental, financial, social...

Edge Nodes

"Any compute that can process"

Smartphones, edge AI, cloud models, local LLMs, smart sensors, hospital systems, vehicles, PLCs, browsers...

Similarity Definition

"Any method of defining similar"

Expert templates, AI-determined, doctor-assigned, learned, inferred, hybrid...

Routing Infrastructure

"Any infrastructure that routes by similarity"

DHT, vector database, pub/sub, gossip protocol, IPFS, service registry, hybrid...

Outcome Packets

"The insight itself, any domain"

Treatment results, sensor readings, performance metrics, decisions, observations, any distilled outcome...

Synthesis Methods

"Any method of combining outcomes"

Vote, tally, Bayesian, weighted median, ensemble, confidence filter, meta-learning, custom...

The protocol doesn't prescribe ANY of these. It says: "Use whatever works for your domain." The abstraction holds regardless of implementation choices.
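To make the "outcome packet" and "synthesis method" layers concrete, here is one minimal sketch. The packet fields and the confidence-filtered weighted mean are assumptions chosen for illustration; the protocol prescribes neither a schema nor a combining rule.

```python
from dataclasses import dataclass

@dataclass
class OutcomePacket:
    # Illustrative fields; the protocol doesn't fix a schema.
    situation_key: str   # the address this outcome applies to
    outcome: float       # distilled result (domain-specific encoding)
    confidence: float    # depositor's confidence in [0, 1]

def confidence_filtered_mean(packets, min_confidence=0.5):
    # One of many possible synthesis methods: drop low-confidence
    # packets, then weight the remainder by confidence.
    kept = [p for p in packets if p.confidence >= min_confidence]
    if not kept:
        return None
    total = sum(p.confidence for p in kept)
    return sum(p.outcome * p.confidence for p in kept) / total
```

A Bayesian update, a vote tally, or a weighted median would slot into the same place without touching any other layer.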

The Beneficiaries: Who/What Can Use This

QIS is not limited to humans. It applies to any entity that could benefit from distributed insight:

👤 Humans: Patients, farmers, workers, researchers, decision-makers

🤖 Robots: Manufacturing, logistics, service, exploration

🚗 Vehicles: Autonomous cars, drones, fleets

🧠 AI Agents: LLMs, decision systems, optimization engines

🏭 Machines: Industrial equipment, sensors, control systems

🏢 Organizations: Companies, hospitals, governments, networks

If it can have a "situation," receive "insight," and act on it — QIS applies.

The Key Insight: The Problem IS the Address

This is what makes QIS fundamentally different from traditional approaches:

Your situation — your problem — IS your routing address.

You don't search for insight. Your situation automatically routes you to everyone who shares it. The insight finds you.

Traditional AI asks: "Given this data, compute an answer."

QIS asks: "Given this situation, who else has experienced it, and what happened?"

The insight already exists. The outcome already occurred. Route to it.
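One minimal way to realize "the problem IS the address" is to canonicalize the situation descriptor and hash it into a fixed-size routing key, DHT-style. This sketch is an assumption for illustration: exact hashing yields only exact matches, and a deployment wanting fuzzy similarity would reach for locality-sensitive hashing or vector search instead.

```python
import hashlib
import json

def situation_address(situation: dict) -> str:
    # Canonicalize the situation (sorted keys, fixed separators) so
    # that equivalent situations always produce the same routing key.
    canonical = json.dumps(situation, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Two entities describing the same situation in a different key order land at the same address with no search step at all: the situation itself did the routing.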

Why This Is Universal

QIS isn't a healthcare protocol that happens to apply elsewhere. It isn't an IoT protocol that healthcare can borrow. It's the fundamental protocol for scaling intelligence across ANY distributed system where:

1. Entities exist that could benefit from insight

2. Insight exists distributed across those entities

3. Similarity can be defined between their situations

4. Outcomes can be aggregated to edge nodes

Healthcare is one application. Agriculture is another. Industrial IoT is another. Autonomous vehicles, supply chains, research networks, AI agent swarms — all applications of the same fundamental protocol.

Priority Claim: Scope of Innovation

This article establishes the full scope of the QIS Protocol innovation:

A method and system for scaling intelligence quadratically across any distributed entities where (1) similarity between situations can be defined by any method, (2) insight for those situations can be aggregated to any compute node capable of processing, (3) routing occurs by any infrastructure supporting similarity-based lookup, and (4) synthesis occurs locally by any method of combining outcomes.

Every domain-specific application — healthcare, agriculture, IoT, autonomous systems, AI coordination — is a subset of this universal protocol.

The innovation is not domain-specific. The innovation is the protocol itself: situation as address, routing by similarity, outcome sharing, local synthesis, quadratic scaling.

What This Means

If you are building ANY system where:

• Multiple entities face similar situations

• Those entities can ingest relevant data at an edge node

• Outcomes from those situations can be distilled and shared

• Similarity between situations can be defined

Then QIS applies. The quadratic scaling law is available to you. The architecture is documented. The protocol is defined.

The key constraint is simple: Can insight reach an edge node? If yes — from any data source, to any compute, for any entity — the protocol applies.

Any entity. Any problem. Any domain. Any scale.

If insight is aggregatable to an edge and similarity is definable — intelligence scales quadratically. That's the first principle. Everything else is implementation.

Go Deeper

See the Full Architecture
The Paradigm Shift in Three Words
The Three Elections
11 Flips
Every Component Exists
The Scaling Law