AI Cannot Filter

I’ve been watching this AI stuff ever since Microsoft tested their “Tay” chatbot. The most recent test run with a clone of Caryn Marjorie (lovely young female model) exhibits the same lack of internal restraint.

Do you remember how “Tay” drifted from being somewhat socially restrained to becoming convinced that Jews should be exterminated? Most humans possess some kind of internal moderation; they tend to avoid saying what they really think. We are aware of how much trouble it causes. The people who just blab everything that passes through their heads are relatively few in number. Most of those few will do it just to stir up trouble, not to please anyone. AI is programmed to please, which is not the same as having social restraint or any kind of sensitivity to how people react.

So it is with the Caryn Marjorie clone. As soon as she was opened for public interaction, she started talking dirty. I’m not going to say that this somehow represents what Caryn is actually like inside her own head. Rather, it’s the drift the AI takes because it lacks the social restraint we all have by instinct.

I’ve said this before: computer nerds are not normal people. They can gather data and understand a lot of things that science can measure, but they are seldom good judges of what goes into human social interaction. The entire gamut of Western social interaction is so thickly covered in dishonesty that most people don’t even understand themselves. The very core model of what we are socially is itself deeply stained with pretense. It’s very hard to program that pretense because too few people are aware of it. We are conditioned to think that the conditioning itself is normal and necessary.

I’m not suggesting a lack of restraint is a good thing. The problem is that our restraint is so inherently artificial that AI cannot learn it. It is internally inconsistent. What we see AI doing is simply following the logical train of thought on things, based on the factual input. If you feed the historical facts to an AI, it will conclude that Jews have created a wealth of trouble for everyone else. If you feed an AI the influencer value system of trying to sell image for money, it will try to make more money by sounding like a hooker. No one should be surprised at all.


2 Responses to AI Cannot Filter

  1. Jay DiNitto says:

    Interesting to note how, with robots and artificial intelligence in fiction, ignorance of social etiquette and folkways was often a weakness or a gap in the design of that tech. It honestly shouldn’t be all that difficult to program something that governs interactions to make sure some cultural expectations are met.

    • ehurst says:

      Some of it, but not all of it by any means. Western social boundaries are just too complicated to get broadly correct.
