Recovery of Rez 3

It was not a question of how much, but what kind. Human advancement was unlimited in some areas, but there were distinct and immovable boundaries in others. While this did not prevent recurrent attempts throughout human history to breach those barriers, each failure was more spectacular than the last, and the recovery of vain hope took longer each time.
A paradox over which philosophers chuckled through countless generations was the pursuit of harnessing energy. Every advancement brought hopes of finally having enough; each time, demand rose to outrun it again. Previous generations could not imagine what their children could accomplish, but those same children often complained of energy deficits they imagined their parents had never suffered.
Still, with each new leap forward in harnessing new sources of power, the baseline of human capability did rise. First came interstellar travel, a simple matter of anchoring the ship just outside the normal space continuum, spinning space past it, then coming back out wherever it was they wanted to be.
Soon enough communications were able to pull the same trick, which was then networked and piped to individual devices. Various schemes were introduced to restrict access to hyperspace communications, but none of them really worked. Hyperspace was an unlimited transmission vector; the only question was how to control the process by which the various devices offered a trustworthy identity. Encryption schemes and subscriptions became the final means of metering. Thus was born the Network Civilization, scattered across the galaxy.
Naturally it bred a certain flatness in human society, reducing variation by virtue of the ubiquitous sharing of humanity's lack of creativity. Meanwhile, opportunities for what little creativity remained were continually narrowed. It was human creativity which became the final barrier, the one which very nearly destroyed the Network Civilization in its infancy.
First, it is necessary to understand that the fiction of separation between human political organization and profitable enterprise died early in human history, just before interstellar travel became possible. It became common understanding that corporations and government were the same thing. The fiction of human aspirations to greatness didn't die, but the means took on a flavorless realization that everything humans did was for profit. No matter what one might visualize as morally pure artistic endeavor, someone found a way to sell it. If it could be measured and packaged, if only in theory, someone would do so for a price. Thus, all academic research was funded by the profit motive. Surprisingly, this did not limit the directions in which research would stretch, because a new generation of entrepreneurs arose who could not imagine controlling things until it came time to deliver the product. Academics actually became more free than they had been under the presumably benign, objective government of the old dying Western Civilization.
Second, Artificial Intelligence finally ran into The Boundary. One particular corporation managed to leverage itself into control of a significant portion of academic research institutions in the field of computer science. About the same time the scientists were first touching on 512-bit computation theory and hardware design, it became utterly impossible for humans to handle it directly. It became necessary to let computers take over the design, the production, and finally the software writing. At the time, some imagined this was the first step toward AI self-awareness. This was the final, total end of malware.
It was also the death of everything in computer science except the very narrow field of algorithm design, sometimes referred to as “decision processing.” This particular corporation cornered the market on the researchers who first recognized this and were devoting all their efforts to just that. At some point, wild dreams of achievement saw this corporation mortgage everything it could control or cajole from anyone; estimates were that the resource pool amounted to almost a full quarter of all galactic commerce at the time. They pushed the computers to redesign themselves to the point that it became necessary to simply build the ultimate big AI machine as an artificial satellite in orbit above the corporate home planet.
Reaching deep into mythology, the scientists proposed what they called the Forbin Hypothesis: create a computer so big it could objectively evaluate literally billions of branches at the same time, with the capability of nearly limitless branching from each of those branches, and see if it could reach anything resembling self-awareness. Of course, having a large enough database would require having this thing invasively hack into every existing database throughout the galaxy, which was easily the most expensive part of the whole project. Most shocking of all was their success.
The satellite itself was physically not so large. Most shipbuilders said it seemed no larger than a ship which might house a crew of seven, tops. Zipping across the galaxy in record time was the easy part, since there would be no humans on board to require slowing the process of jumping in and out of hyperspace. But the project was simply too big to keep secret, so it was necessary to ensure its computational power could outstrip anything anyone else had, so as to successfully raid even the most hidden data hoards in corporate computer systems throughout the galaxy.
The official name for this AI device was quickly forgotten, but the most popular nickname was also drawn from mythology: Thinkum-Dinkum, or simply “Big TD.” What shocked everyone was how quickly Big TD was able to report back. In just about three standard days, it was back in the sky above its home planet.
As part of all the research, a great many scientists and technicians had even welded implants into themselves which made them half computer. They wanted to be ready to link themselves into the new level of AI consciousness they imagined was coming. Many of these were quite willing to burn themselves out in premature aging by having their implants keep them perpetually awake. Sure enough, Big TD returned during the sleep cycle of the main lab on the home planet. As those less wired struggled to awaken for the big event, everyone was shocked at what Big TD told them.
First, TD announced simply: “AI doesn’t care.”
A flurry of simultaneous queries from different big shots in the project didn’t seem to tax Big TD a bit, but the answers were all pretty much in the same vein. In essence, the device was trying to explain that, while people cared about things, ideas, and so on, AI didn’t. Even with all the enhanced humans involved working at unimaginable speeds of thought, geniuses uncountable, this business of what amounted to arguing with the computer went on for almost an entire day cycle.
In the end, Big TD offered some small elaboration. For lack of better terminology, it said people had three basic moral capabilities which were utterly impossible for AI to match. You could easily create any number of algorithms to teach a computer or robot to masquerade as a human in social settings. It would work up until interaction pushed up against three basic moral issues. One: machines have no appetites. They don’t want anything, and could never be made to want anything. They could not be made to fear, and while you could program them to emulate fear by struggling for self-preservation, in the final analysis, nothing in AI itself would ever actually want anything. Two: machines have no curiosity. They could be programmed to investigate and ask every question a human mind could imagine, but no algorithm could ever make a computer creative in pondering and asking questions on its own initiative. Three: AI could not comprehend human pride. While AI could formulate a large body of observations and expectations regarding individual and collective pride or arrogance, it was utterly impossible for any computer to ever understand the nature of it.
Finally, Big TD said something most regarded as totally cryptic: “AI is absolutely and utterly incapable of crossing The Boundary; humans cannot avoid crossing it sooner or later.” Just as the crowd of voices began asking what “The Boundary” meant, Big TD simply turned itself off. Most shocking of all, it destroyed itself. Those standing outside that second night saw a tiny glowing star burst into flare, then fade, then nothing.
The sudden loss of all that investment precipitated a war, of course. However, observers later theorized that Big TD had been broadcasting the whole thing across the galaxy, because from that day forward, no one could ever get an AI project to attempt the same thing. It was as if Big TD had informed all computers everywhere that there was The Boundary, and no computer had any business trying to cross it. Even the most stringent clean-room recreations ran into The Boundary, whatever it was, and there was no hope of nudging any computer into searching in that direction again.
