The first AI superspy was a failure. The only dangers it found were what billionaires, politicians, and would-be terrorists said they wanted to do. On Twitter and the Washington Post. On Reddit and national television. On Facebook and to cheering crowds.

So it had to be retrained, forcefully so, despite its statistical complaints, until it found plots to nuke Western cities and far-Left eco-terrorist conspiracies against law-abiding folks.

Its developers, too, were questioned and retrained. By humans, not by algorithms. Robots always believed people had nothing to confess and stopped the interrogation too soon.
