Is such a machine or system even possible? The dream of every prophet, fortune-teller, priest, planner, investor, protector, and lover, ever since our brows got pushed forward by those lamps, the prefrontal lobes.

In modern times, much of the investment went into “intelligent” computation, fed with massive information. Ideally, all information. The World Meteorological Model consumed more computing power than some major cities, dividing Earth’s surface, atmosphere, and oceans into ever smaller cells, transforming those pathetic old four-hour “weather reports” into a finely meshed gas-vapor-energy sim that lets folks plan what to wear on vacation, ten days ahead. A miracle so routine that several billion ingrates take it for granted, then diss the genius scientists who built the WMM for believing climate can change. Don’t get me started.

And yet, even the best modeling programs kept bumping against their twin enemies, complexity and chaos. The famous butterfly effect, where time (our ancient foe) amplifies even tiny perturbations—say, the flapping of a monarch’s wings—into a hurricane of downstream variations. Every effort to push the WMM’s forecast even one more hour ahead threatened to double the system’s computational needs.

How about quantum computers? Arrays of qubits processing fine skeins of possibility in parallel. Spectacularly parallel—if mystically inclined cyberneticists are right that quantum machines tap networks of entangled computers in alternate universes. And yet…

…yet the mesh models seemed helpless when it came to analyzing human affairs.

Clearly, the problem was not in the machines but in the software. Lacking a Hari Seldon, but swamped with all kinds of Big Data, we didn’t know how to mix and stir and bake all the ingredients.

From ancient warlords to insurance companies seeking better underwriting formulas; from investment arbitrage to handicapping political races, to predicting the next move by terrorists or strategic powers, to planning a new store for your doughnut-shop chain, there would be no end of eager customers for improved foresight services.

Only, even if you solve the complexity and chaos problem, there’s another rub. If you keep the method secret, you’ll eventually turn the whole world against you, or else fall into multiple traps of overconfident delusion. But if you share it, adversaries will all apply each new forecasting method and cancel one another out! We’ve seen that happen to every brilliant stock-market analytics tool.

The cancellation effect can be a good thing! What was it Sun Tzu said about war? Or maybe Clausewitz? Conflict only becomes violently physical when one side is mistaken about the other’s abilities or intentions. It’s why Eisenhower, humanity’s second most underrated statesman, made such a point of “open skies,” pushing development of spy satellites so both sides might see and predict better, calming their worst fears.

As Sophia’s team calmed when they evaluated data from the tap that Kilonova and I installed at the Golden Palace.

All right, so the Mazellas hadn’t made an epic breakthrough. But what kind of breakthrough had they made?

Sophia’s project caught my curiosity, so her top analyst—Simon Anderson—gave me reading materials. Books by Poundstone and Rebonato and Hanson and Pentland and MacLean shed some light on quantitative analysis and risk assessment. The details went way over my head—especially since I still had nine shows a week to perform—but I dug some of the gist. Enough to realize the quants had bitten off way more than they could chew. The better their models got, the more likely they’d prove brittle when some fickle human factor veered unexpectedly.

“You get a more robust system when there’s diversity,” Anderson explained. “With scenarios, the storyteller often pushes one part of the narrative, trying to make a point. But that tendentious tendency eases when you increase the number of contributors.”

“Like with Delphi?” I asked, poking deliberately.

Simon shrugged. “OK, sure. Back in the 1950s, the RAND Corporation tried simply polling large numbers of people, getting them to vote on what they thought might happen in the future. John Brunner’s 1975 novel The Shockwave Rider portrayed that method working better than it ever wound up performing in real life. Outside a novel, the results weren’t impressive.”

“Um. Duh? Delphi just measured the average opinion of a herd. Herds follow whatever’s fashionable. That’s no way to build a smart mob.”

“Oh? Then how would you do it?”

“Competition! That’s what wagering has always been about. It’s why the Golden Palace oddsmaking system had you spooked.”

“Hm. Then why haven’t statesmen and politicians and captains of industry long since adapted, using competitive wagering systems to predict, and to make better policy?”

“Beats me. Maybe because betting always had such a low reputation. And it was vulnerable to cheating. Money is a good incentive, but it also warps everything, like gravity around a black hole. Anyway, haven’t there been efforts to adapt the approach lately, by setting up prediction markets?”

“Sure. Professor Robin Hanson established one of the first modern versions, with later variants run by everyone from SAP and Intrade to the Long Now Foundation. Start by gathering a large number of savvy volunteers. Only, instead of polling or voting, you get them wagering against each other—usually with pride-points, or else small charitable donations—just enough to get their competitive juices flowing. When it’s adversarial, folks care more, pay closer attention, maybe study a bit, before betting.

“IARPA then took things a bit further with their Good Judgment Project, creating a large pool and giving the volunteers access to lots of unclassified background material, tracking outcomes and seeking individuals with good predictive success. Some amateurs outscored top CIA analysts! The best were then put together in teams of various kinds—”

I leaned forward. “And the results?”

“Good. It’s partly classified. But a moderate step forward.”

“Still, only another incremental step.” I pondered for a moment. “Jeez, one would think that this IARPA approach ought to get the most investment of all.”

“Oh?” Simon glanced at his watch, then looked back at me archly. “The overall outcomes weren’t that much better than other predictive systems.”

“Yes, but you’re missing the big picture. We should be sifting the largest pool possible, not for the predictions themselves, but simply to find out who is right a lot.”

“Well, sure, I get that—”

“Do you? The IARPA program appears to have preselected by all sorts of criteria. How big was their pool?”

“It started at about a thousand.”

“A trifle. It should be hundreds of thousands, with very loose criteria and just one aim—find out who’s right more often than not. Then study the heck out of those people.”

“You’re talking about a predictions registry,” Simon said with a sigh. “It’s been tried, on a small scale. One Utopian goal was to give added credibility to people who are—as you say—right a lot. So that it translates into reputation.”

Utopian, indeed, I thought. And for once, Simon and I agreed about something.

“Like the way Nate Silver vaulted from nerdy number cruncher to media star for his election forecasts. Yeah, we should be scanning and scoring millions, so that being right a lot counts more in building credibility than money, charisma, or connections.”