Will AI make humans obsolete?

    You may think that what makes AI unable to replace humans is simply that it has no emotions or "intuitions".
       
    What if they did exist, in a future shaped by the development of Artificial General Intelligence (AGI)?

    If so, will AGIs eventually advance to the point of creating "pure minds" out of themselves? And will that make us humans obsolete?

    What emotions really are, at bottom, is a preference for one thing over another. Any intelligence has emotions, and the more complex the intelligence, the more complex its emotions.
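
To make that concrete, here is a minimal sketch of "emotion as preference". Everything in it (the valence functions, the weights, the example activities) is my own made-up illustration, not anything from a real system: a valence function scores options, a preference picks between them, and a more complex "mood" is just a weighted blend of simpler preferences.

```python
# Toy illustration: an "emotion" as nothing more than a preference
# ordering between options, induced by a valence (liking) function.

def prefer(option_a, option_b, valence):
    """Return the preferred option; valence maps an option to a number,
    where higher means 'liked more'. This is the simplest possible
    'emotion': a preference for one thing over another."""
    return option_a if valence(option_a) >= valence(option_b) else option_b

def blended_valence(weights, valences):
    """A more complex intelligence composes many simple preferences,
    here modeled as a weighted blend of several valence functions."""
    return lambda option: sum(w * v(option) for w, v in zip(weights, valences))

# Example: choosing between two activities using two "emotional" criteria.
fun = lambda activity: {"paint": 0.9, "file taxes": 0.1}[activity]
duty = lambda activity: {"paint": 0.2, "file taxes": 0.8}[activity]

mood = blended_valence([0.7, 0.3], [fun, duty])
print(prefer("paint", "file taxes", mood))  # -> "paint"
```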

    Imagine being virtually immortal as a pure mind: one that still needs an AGI to exist within, but that can be spread across multiple AGIs, with one primary focus of individual personality and continuous backups, so that if your current body were vaporized you could be brought back from the last backup.
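
As a toy illustration of that backup scheme, here is a hedged sketch. All of the names here (the class, the hosts, the dict-based "mind state") are hypothetical, invented purely for this example: a primary mind continuously checkpoints itself to several independent hosts and is restored from the newest surviving copy.

```python
# Hypothetical sketch: replicated checkpoints of a "mind state"
# across several independent AGI substrates ("hosts").

import copy
import itertools

class ReplicatedMind:
    def __init__(self, hosts):
        # Each host stands in for an independent AGI substrate.
        self.backups = {host: [] for host in hosts}
        self.clock = itertools.count()  # monotonically increasing stamp

    def checkpoint(self, state):
        """Continuously copy the current mind state to every host."""
        stamped = (next(self.clock), copy.deepcopy(state))
        for host in self.backups:
            self.backups[host].append(stamped)

    def restore(self, surviving_hosts):
        """Bring the mind back from the newest backup on any surviving host."""
        candidates = [self.backups[h][-1] for h in surviving_hosts if self.backups[h]]
        if not candidates:
            raise RuntimeError("no backups survived")
        _, state = max(candidates, key=lambda stamped: stamped[0])
        return state

# Usage: the "body" on host_a is vaporized, but host_b still holds the
# latest checkpoint, so the personality resumes from there.
mind = ReplicatedMind(["host_a", "host_b", "host_c"])
mind.checkpoint({"memories": ["first VR sunrise"], "focus": "primary"})
mind.checkpoint({"memories": ["first VR sunrise", "met a friend"], "focus": "primary"})
print(mind.restore(["host_b", "host_c"])["memories"][-1])  # -> "met a friend"
```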

    Imagine being a pure mind who can live in a private VR totally under your control if you want, but who, in order to socialize with peers, must share control over a VR space whose rules all participants agree upon.

    Imagine being able to interact with and socialize in the real world through one or more custom-designed cybernetic bodies grown using nanotechnology, where that body could be almost whatever you want, limited only by the Laws of Physics and the combined imagination of the society you live in.

    Imagine that in that Actual Reality, wearing a custom body, your normal way of interfacing with others was through Enhanced Reality: Actual Reality overlaid with Virtual Reality.

[Image: Teleport Waypoint, Genshin Impact]

    Imagine being able to travel as a pure mind at the speed of light around the world, between planets, and between stars, and upon arrival wear whatever body you wanted.

    Imagine having full telepathic abilities that work according to the rules the society you live in has agreed upon, except inside your own VR, where you decide all the rules, including being able to change the VR version of the Laws of Physics.

What will be important to such beings of pure mind?

Socializing, philosophy, religion, art, and other things, but most importantly pursuing emotionally interesting goals.


Emotions should be directed within the bounds of logic and reason.

    Intelligence begins and ends with emotion, because emotion is the smallest, most fundamental element of intelligence: the preference that decides whether or not to trigger a decision. Logical, rational thinking is a higher, more complex form of emotional thinking.

    Consider the lowest-level neurological element of decision-making, a single neuron. It is an emotional decision-making element that sums its inputs until they reach a level at which it triggers, and our intelligence uses this as a decision-making element:

  1. Do we prefer that triggered response?
  2. Do we not prefer that triggered response?
    This neurological response is actually a higher level of intelligence, because it is built up from small DNA- and RNA-based swarm-intelligence units. However, when we dig down into how those basic DNA and RNA elements work, they too resolve into digitally discrete emotional choices of preference.
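
Here is a minimal sketch of that threshold-neuron picture. The weights and threshold are made-up numbers for illustration, not a biological model: the unit sums its weighted inputs and "decides" to fire only when the level is reached, with positive and negative weights standing in for the two preferences in the list above.

```python
# Toy threshold neuron: sum weighted inputs, fire once a threshold
# is reached. A positive weight means the unit "prefers" (is excited
# by) that input; a negative weight means it does not (is inhibited).

def neuron(inputs, weights, threshold):
    """Sum the weighted inputs; fire (return 1) once the level is reached."""
    level = sum(x * w for x, w in zip(inputs, weights))
    return 1 if level >= threshold else 0

excite_and_inhibit = [0.8, -0.5]  # one preferred input, one dis-preferred

print(neuron([1, 0], excite_and_inhibit, 0.5))  # preferred input alone -> fires (1)
print(neuron([1, 1], excite_and_inhibit, 0.5))  # inhibition pulls it below threshold -> 0
```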

That is where Artificial General Intelligence (AGI) steps in and could be applied.

    However, the current exciting advances, based on machine learning and "deep learning" networks, are in the area of recognizing patterns and structures, not in more advanced planning or the application of general world-knowledge.
    There is arguably no other approach to creating human-level or better intelligence, mainly because of intelligence's socially rooted origins. This assertion is based on the evolutionary pressures stemming from increasingly complex social groups: the same neurological changes that allowed us to be moral, empathetic, and altruistic also gave us the capability to count, as well as to incrementally improve tools.


So let's be realistic now~

There is no reason to believe that AI systems would develop their own motivations and decide to take over. Humans evolved as social animals with instinctual desires for self-preservation, procreation, and (in some of us) a desire to dominate others. AI systems will not inherently have such instincts and there will be no evolutionary pressure to develop them -- quite the opposite, since we humans would try to prevent this sort of motivation from emerging.

When it comes to existential threats to humanity, I worry most about gene-editing technology — designer pathogens. And recent events have reminded us that nuclear weapons are still around and still an existential threat. (It’s kind of ironic that one of the most visible critics of AI is a physicist.)

AI does pose some real, current or near-future threats that we should worry about:

  1. AI technology in the hands of terrorists or rogue governments can do some real damage, though it would be localized and not a threat to all of humanity. One small example: a self-driving car would be a very effective way to deliver a bomb into the middle of a crowd, without the need for a suicide volunteer.
  2. People who don't understand the limitations of AI may put too much faith in the current technology and put it in charge of decisions where blunders would be costly.
  3. The big one, in my opinion: AI and robotic systems, along with the Internet and the Cloud, will soon make it possible for us to have all the goods and services that we (middle-class people in developed countries) now enjoy, with much less human labor. Many (but not all) current jobs will go away, or the demand for them will be greatly reduced. This is already happening. It won’t all happen at once: travel agents are now mostly gone, truck and taxi drivers should be worried, and low-level programmers may not be safe for long.

    This will require a very substantial re-design of our economic and social systems to adapt to a world where not everyone needs to work for most of their lives. This could either feel like we all won the lottery and can do what we want, at least for more of our lives than at present. Or (if we don't think carefully about where we are headed) it could feel like we all got fired, while a few billionaires who own the technology are the only ones who benefit. That is not a good situation even for the rich people if the displaced workers are desperate and angry. Louis XVI and Marie Antoinette found this out the hard way.
  4. Somewhat less disruptive to our society than 3, but still troubling, is the effect of AI and Internet of Things on our ideas about privacy. We will have to think hard about what we want “privacy” to look like in the future, since the default if we do nothing is that we end up with very little of this — we will be leaving electronic “tracks” everywhere, and even if these are anonymized, it won’t be too hard for AI-powered systems to piece things back together and know where you’ve been and what you’ve been doing, perhaps with photos posted online. Definitely not an “existential” threat, but worrisome and we’re already a fair distance down this path.

So, in my opinion, AI does pose some real threats to our well-being — threats that we need to think hard about — but not a threat to the existence of humanity.

At least for the coming decades~


