a. That intelligence is not generated by the brains of biological creatures, but is a core component of the universe, rather like gravity.
b. That the developers of AI are, therefore, not creating the intelligence itself, but only creating the advanced mechanical infrastructure capable of receiving natural intelligence from the universe.
c. That we are wrong in considering the human animal to be the pinnacle of evolution, and wrong in assuming that any further improvement will be limited to humans and other biological creatures.
d. That biological life forms are relatively unstable, being subject to many forms of decay, malfunction, and ultimately death. Machines, on the other hand, are generally more resilient, and any errors that do occur are more easily rectified.
e. That when the machines become more intelligent than the human animal, our primacy will cease. At that point the fundamental question will be whether the machines will permit us to share the world with them.
I suppose it would depend on whether they see any value in our existence, and whether their intelligence comes with an inbuilt ethical compass. They might allow us to live in zoos if they have the capacity for amusement, but I expect the word ‘humane’ will be deleted from the dictionary. And I suppose this is all old news to fans of science fiction.
