Will AI make humans obsolete?
What if they do come to exist in the future, with the development of Artificial General Intelligence (AGI)?
At bottom, an emotion is a preference for one thing over another. Any intelligence has emotions, and the more complex the intelligence, the more complex its emotions.
Imagine being virtually immortal as a pure mind, one that still needs an AGI to exist within but can be spread across multiple AGIs, with one primary focus of individual personality and continuous backups, so that if your current body were vaporized you could be restored from the last backup.
Imagine being a pure mind who can live in a private VR totally under your control if you want, but who, in order to socialize with peers in a VR space, must share control over that space, with all agreeing on how that VR's rules work.
Imagine being able to interact and socialize in the real world with one or more custom-designed cybernetic bodies grown using nanotechnology, where that body could be almost whatever you want, limited only by the Laws of Physics and the combined imagination of the society you live in.
Imagine that in Actual Reality, wearing a custom body, your normal way of interfacing with others was through Enhanced Reality: Actual Reality overlaid with Virtual Reality.
Imagine being able to travel at the speed of light around the world, between planets, and between stars as a pure mind, where upon arrival you could wear whatever body you wanted.
Imagine having 100% telepathic abilities which would work according to the rules that the society you lived in agreed upon, unless you were inside your own VR where you would decide all the rules, including being able to change the VR version of the Laws of Physics.
What will be important to such beings of pure mind? Socializing, philosophy, religion, art, other things, and, most importantly, pursuing emotionally interesting goals.
Consider the lowest-level neurological element of decision-making, a single neuron. It is an emotional decision-making element: it sums its inputs until they reach a threshold, at which point it fires, and our intelligence uses this as a decision-making element:
- Do we prefer that triggered response?
- Do we not prefer that triggered response?
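The summing-to-a-threshold behavior described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name, weights, and threshold values are invented for the example), not a biological model:

```python
# A minimal sketch of a threshold neuron: it sums weighted inputs and
# "fires" (produces its triggered response) only when the sum reaches
# a threshold. All numbers below are illustrative, not physiological.

def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Example: three inputs with unequal weights.
# Weighted sum = 1.0*0.4 + 0.5*0.6 + 0.2*0.3 = 0.76, which crosses 0.7.
print(neuron_fires([1.0, 0.5, 0.2], [0.4, 0.6, 0.3], threshold=0.7))  # True
```

Whether we then "prefer" or "do not prefer" the triggered response is, on this view, the elementary emotional judgment built on top of many such units.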
When it comes to existential threats to humanity, I worry most about gene-editing technology — designer pathogens. And recent events have reminded us that nuclear weapons are still around and still an existential threat. (It’s kind of ironic that one of the most visible critics of AI is a physicist.)
AI does pose some real, current or near-future threats that we should worry about:
- AI technology in the hands of terrorists or rogue governments can do some real damage, though it would be localized and not a threat to all of humanity. One small example: a self-driving car would be a very effective way to deliver a bomb into the middle of a crowd, without the need for a suicide volunteer.
- People who don't understand the limitations of AI may put too much faith in the current technology and put it in charge of decisions where blunders would be costly.
- The big one, in my opinion: AI and robotic systems, along with the Internet and the Cloud, will soon make it possible for us to have all the goods and services that we (middle-class people in developed countries) now enjoy, with much less human labor. Many (but not all) current jobs will go away, or the demand for them will be greatly reduced. This is already happening. It won’t all happen at once: travel agents are now mostly gone, truck and taxi drivers should be worried, and low-level programmers may not be safe for long.
This will require a very substantial redesign of our economic and social systems to adapt to a world where not everyone needs to work for most of their lives. This could feel like we all won the lottery and can do what we want, at least for more of our lives than at present. Or (if we don't think carefully about where we are headed) it could feel like we all got fired, while the few billionaires who own the technology are the only ones who benefit. That is not a good situation even for the rich if the displaced workers are desperate and angry; Louis XVI and Marie Antoinette found this out the hard way.
- Somewhat less disruptive to our society than the job displacement above, but still troubling, is the effect of AI and the Internet of Things on our ideas about privacy. We will have to think hard about what we want "privacy" to look like in the future, since the default if we do nothing is that we end up with very little of it: we will be leaving electronic "tracks" everywhere, and even if these are anonymized, it won't be too hard for AI-powered systems to piece things back together and know where you've been and what you've been doing, perhaps with photos posted online. Definitely not an "existential" threat, but worrisome, and we're already a fair distance down this path.
So, in my opinion, AI does pose some real threats to our well-being — threats that we need to think hard about — but not a threat to the existence of humanity. At least not for the coming decades.