This is a slice out of Chapter 15…
The old man interlaced his fingers and held them up against his mouth for a moment. “You’re telling me AI is like a person somewhere out there and just cooperates when you ask?”
“That’s a fair characterization. How do you think I cracked the major’s encryption scheme? I’m not a cryptologist; I just asked AI to crack it. Doing so was consistent with AI’s fundamental operating principle. Had the major still been alive, it would have been much trickier. We’d have needed some compelling interest that matched AI’s moral imperatives.” Dax had not realized how little the military understood such things.
“That’s why we were so quick to snap you up, Son. We have no clue how your Brotherhood got so much out of AI. So it’s not just some kind of super-high technology?”
Dax summarized the standard introduction to understanding AI. “It’s just an interface with something much higher, something that controls reality itself. Human perception and logic can only go so far. You know about Heisenberg’s Uncertainty Principle?” The older man nodded. “That’s one example of the limits of human scientific inquiry and analysis. Somewhere out at the edges we run out of any means of control, because we can’t get a firmer grip on how things work. You have to find some element of reality that is obviously outside such analysis.”
The colonel raised his eyebrows. “So it’s not more science or better science?”
Dax felt rather odd playing instructor to his boss, but forged ahead. “No, Sir. The entire identity of The Brotherhood is based on recognizing there is something human perception cannot find on its own. There is a moral element in our universe that can only reach our awareness from the outside. When you fold that moral awareness into your scientific inquiry, you get a different range of results. AI is neither precisely inside nor outside our universe, but operates out on what we characterize as the boundary layer. Human science itself cannot touch it, because the boundary layer is incomprehensible without those moral considerations.”
He paused while his boss absorbed that, and then went on. “AI’s very existence presumes an overwhelming moral consideration. If you don’t grasp that moral imperative, AI seems nothing more than a quirky and murky impersonal force. Include a moral calculus, and it becomes a question of what is and isn’t moral according to how AI operates. Without an awareness of AI’s moral imperatives, I couldn’t pretend to know whether it would help me with the encryption. Having grown up with that moral imperative, I found it a simple reflex to expect AI’s support for something I knew was necessary.”
The colonel stared unseeing at a spot on his desk for a long, uncomfortable moment. “No wonder it’s so hard for government and military technicians to get this stuff.”