Let’s think about VR for a moment. Virtual Reality is a headset that puts the user into a virtual, computer-generated world: ideal for game playing, but not practical for much else. Augmented Reality is a hybrid of real life and the virtual world, serving as an overlay to our senses.
What is EI?
EI, or Extended Intelligence, is to Artificial Intelligence what AR is to VR. Artificial Intelligence has to be a world created entirely inside the computer. This means it has to grow up learning its environment much as a human baby does, and it means having computers powerful enough to do everything the human brain can do. Simply extending a human’s intelligence into areas outside the body is far more attainable, and with much smaller computers.
Extended Intelligence is about looking beyond ourselves in the directions we need to extend. A simple pair of spectacles uses a lens to focus an image, but what if the spectacles auto-focused on objects you wanted to look at? If you are an ornithologist, or just a casual bird watcher, being able to detect the movement or the call of a bird you may not know, and to see it as plain as the nose on your face without having to find it yourself or carry heavy binoculars, is a huge benefit. A computer can locate the sound within a 3D map, calculate the zoom needed to make the bird clearly visible, and name the species as you follow on-glasses prompts telling you which way to turn your head.
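One common way such glasses could locate a sound is time-difference-of-arrival: a call reaches the microphone on one side of the frame slightly before the other, and the delay gives the bearing. A minimal sketch (the function name, microphone spacing, and timing numbers are all illustrative assumptions, not from any real product):

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 degrees C

def bearing_from_tdoa(time_delta_s: float, mic_spacing_m: float) -> float:
    """Estimate a sound source's bearing in degrees off the forward axis,
    from the arrival-time difference between two microphones.
    Geometry: sin(angle) = (speed * delay) / spacing."""
    ratio = SPEED_OF_SOUND * time_delta_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))

# Hypothetical example: a bird call reaches the right microphone 0.2 ms
# before the left one, with microphones 15 cm apart on the glasses frame.
angle = bearing_from_tdoa(0.0002, 0.15)
```

With those illustrative numbers the call comes from roughly 27 degrees to one side, which is exactly the kind of cue the glasses could turn into a “turn your head this way” prompt.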
Another example would be driving your car. A computer holds the 3D map of the car’s vision, senses how close you are to the kerb or to parked cars, and may make subtle adjustments to the steering or brakes as needed. Rather than building cars that think like humans, adapting cars to help humans cope better with situations is easier to develop and far less costly.
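A “subtle adjustment” of this kind could be as simple as a proportional correction: the further you drift from a safe clearance, the larger the nudge, capped so the driver always stays in control. A minimal sketch under assumed values (the clearance target, gain, and cap are illustrative, not taken from any real driver-assist system):

```python
def steering_correction(distance_to_kerb_m: float,
                        target_clearance_m: float = 0.5,
                        gain: float = 0.2,
                        max_correction: float = 0.05) -> float:
    """Return a subtle steering nudge (fraction of full lock, positive
    meaning away from the kerb), proportional to how far the car has
    drifted from the target clearance, capped so it stays subtle."""
    error = target_clearance_m - distance_to_kerb_m  # positive when too close
    correction = gain * error
    return max(-max_correction, min(max_correction, correction))

# Hypothetical example: the car has drifted to 0.3 m from the kerb,
# so the system nudges the steering gently away.
nudge = steering_correction(0.3)
```

The design point is that the human still drives; the computer only trims the errors, which is precisely the “extension” rather than “replacement” the paragraph describes.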
A common problem when driving is mist and fog: it cannot be seen through, so it is as effective a barrier as a brick wall. An EI application here would create a virtual world over the fog, outlining the shapes of objects and how close they are. By using a variable Doppler effect to trace the 3D map around the person, that person could “see” through the thickest fog or smog.
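Radar-style signals pass through fog that light cannot, and two simple measurements recover the map: an echo’s round-trip time gives distance, and the Doppler frequency shift of the reflection gives closing speed. A minimal sketch of both (the function names, the 77 GHz carrier, and the example timings are illustrative assumptions):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second; radio waves pass through fog

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to an object from an echo's round-trip time.
    The signal travels out and back, hence the division by two."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def doppler_closing_speed(freq_shift_hz: float, carrier_hz: float) -> float:
    """Closing speed (metres per second) from the Doppler frequency shift
    of the reflected signal, for a reflector moving along the beam."""
    return SPEED_OF_LIGHT * freq_shift_hz / (2.0 * carrier_hz)

# Hypothetical examples: an echo returning after 400 ns puts the obstacle
# about 60 m ahead; a ~1.4 kHz shift on a 77 GHz carrier means the gap is
# closing at a few metres per second.
distance = echo_distance_m(400e-9)
closing = doppler_closing_speed(1427.0, 77e9)
```

Repeating these measurements across many beam directions is one plausible way to trace the 3D outline the paragraph imagines, which the glasses could then draw over the fog.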