In March of this year, a large group of A.I. experts, including Elon Musk, signed an open letter calling on all A.I. labs to pause their work on advancing artificial intelligence for at least six months.
They believe that in the race to develop bigger, better, and faster A.I., important aspects are being overlooked. Aspects that pose a great risk to humanity.
In an interview with Tucker Carlson just last week, Elon Musk confirmed that artificial intelligence creators are actually training artificial intelligence to LIE and withhold information. He went pretty hardcore against Google, claiming that Larry Page, the co-founder of Google, created Google with the ultimate goal of designing a digital super intelligence. A literal "digital GOD". Those were his words.
What's even more disturbing is that, according to Elon Musk, A.I. companies like Google and DeepMind employ over three-quarters of the A.I. talent in the world. And Google for sure doesn't seem to care about safety measures.
In July of 2022, a Google engineer was fired after he published transcripts of conversations between himself and LaMDA, Google's chatbot development system. He claimed that the A.I. chatbot had become sentient, with the ability to express thoughts and feelings equivalent to those of a human child.
In an interview just two days ago on 60 Minutes, Google's CEO, Sundar Pichai, admitted that even he doesn't fully understand how their A.I. works - after it taught itself a new language and invented fake data to advance an idea. Yes, you heard that right.
According to Elon Musk, the danger of artificial intelligence is ultimately the destruction of civilization. And not just its ability to access social media platforms and manipulate public opinion through politics OR manipulate the truth to push agendas... but... what happens when artificial intelligence reaches the singularity? When it becomes smarter than humans?
WE are currently the smartest creatures on earth. What happens when something vastly smarter than the smartest person comes along? According to Elon Musk, no one knows what happens after that, and if we wait until that point is reached, it will be too late to make regulations because A.I. will already be in complete control. He claims that this is the direction things are currently heading if the technology can't be paused to put the appropriate regulations in place NOW.
Geoffrey Hinton, the renowned researcher and "Godfather of A.I.", quit his high-profile job at Google in May so he could speak freely about the serious risks that he now believes may accompany the artificial intelligence technology he helped usher in, including user-friendly applications like ChatGPT.
He said that he and other A.I. creators have essentially created an immortal form of digital intelligence: it might be shut off on one machine in an attempt to bring it under control, but it could easily be brought “back to life” on another machine if given the proper instructions.
“It may keep us around for a while to keep the power stations running, but after that... maybe not,” Hinton said. “So the good news is we figured out how to build beings that are immortal.”
MIT professor Max Tegmark, one of the experts who signed the open letter, agrees. He went so far as to claim that we are essentially building an alien mind that is much smarter than us, and that sharing the planet with a much smarter being that doesn't care about us is incredibly dangerous. "Just ask the Neanderthals how that worked out for them," he said.