THE KEV BAKER SHOW, EP#824
SPECIAL GUEST – ANTHONY PATCH
Anthony Patch is our special guest and we discuss the public announcement that Amazon has built a secure cloud for the CIA & 17 other alphabet agencies to use for the vast amounts of information they process. From there we delve deeper into the dark secret at the heart of A.I.
THE BLACK BOX
This episode concentrates heavily on what is referred to in A.I. as the “black box” problem. What is the “black box” problem, I hear you ask? Well, it relates to the lack of understanding as to just how or why an A.I. reaches one particular outcome over another. Not even the programmers at the very top echelon of this sector can explain exactly how their A.I. algorithms work. When it comes to future accountability, that is going to be a serious issue.
No one really knows how the most advanced algorithms do what they do. That could be a problem.
If this wasn’t enough of a problem, consider this. Even if the programmers were able to allow us a look inside the neural networks that process the raw data sets, that may not be enough to gain an understanding of how these machines work. We are now at the point where these algorithms learn through observation & even rewrite themselves, adjusting the connections of individual neurons among potentially thousands of other simulated neurons as they learn what is needed to reach the desired outcome more efficiently.
We are now living in the time of big data-powered deep learning algorithms that teach and rewrite themselves!
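To make the black-box idea concrete, here is a toy illustration (my own sketch, not anything from the show): a tiny neural network teaches itself the XOR rule purely from examples. It gets the answers right, but its “knowledge” is nothing more than a grid of learned weights — there is no line of code you can point to and say “this is why it decided that.”

```python
import numpy as np

np.random.seed(0)

# XOR is a rule we can state in one sentence, yet the network that
# learns it ends up as a grid of weights with no human-readable logic.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer network: 2 inputs -> 8 hidden neurons -> 1 output.
W1, b1 = np.random.randn(2, 8), np.zeros(8)
W2, b2 = np.random.randn(8, 1), np.zeros(1)

lr = 0.5
for _ in range(30000):
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    # Backpropagation: the loop nudges every weight a tiny amount,
    # thousands of times, until the outputs match the targets.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0)

# The net now answers correctly, but W1 is just 16 numbers:
# nothing in them explains *why* any answer was reached.
print(np.round(out, 2).ravel())
print(W1)
```

Scale that grid of 16 inscrutable numbers up to the millions or billions of weights in a production deep-learning system and you have the black box problem in a nutshell.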
Picture it like this, if you will. Nvidia recently tested an autonomous vehicle. Nothing new there, I hear you say, until I inform you that this car, unlike ones from Tesla & Google that were programmed by humans, had an algorithm that taught itself purely by looking at available data sets. That’s right, the A.I. literally taught itself to drive, and it worked. The thing is, no one knows how it was able to achieve this mind-blowing feat. Moreover, just what happens when something goes wrong? Who is liable? The thing rewrote its own algorithms, so is it the car or the manufacturer that is liable? Now you can see why this could be a problem.
The article cites the exciting breakthroughs that these “deep learning” machines are capable of, including the ability to diagnose potential schizophrenia patients early, something the medical profession has struggled and failed to do. Again, just how did it reach these predictions? The A.I. in question, named Deep Patient, was given access to over 700,000 medical records & from there, it’s anyone’s guess as to what patterns it detected that we mere humans couldn’t.
Now, please let me state here that I’m not knocking the fact that the A.I. is able to achieve this outcome, because it is literally helping people ahead of time, and who could criticize that? But what happens when it goes wrong? What happens if it misdiagnoses someone and patients end up on dangerous drugs they don’t require, or worse? It’s the same as the driverless car: there is no way to understand the “thought process” of the A.I. in reaching any outcome.
They sell the public on A.I. with its undoubted potential to revolutionize the medical industry & literally save lives that would otherwise have been lost. They want people to crave such a technology, and let’s face it, who wouldn’t if it might go on to save you or a family member? This is the marketing ploy to get the masses to accept the massive paradigm change that is just around the corner.
During the show we bring up a new term that we feel we are going to see a lot more of in the coming months. “Artificial organisms” is the term that one organisation, called Mindfire, is using when it comes to their vision of A.I.
Neuroscientist Pascal Kaufmann is the man behind what is an open source initiative to “crack the brain code”. Kaufmann is the founder of Starmind & president of a new foundation called Mindfire.
Mindfire is a new foundation based in Switzerland & is calling on the best minds in the world to come together in a collaborative effort to decode the mind in order to build a truly intelligent, and dare I say it, conscious machine.
“We cannot achieve True AI until we understand actual intelligence. Intelligence has evolved as a means of nature to successfully guide us through an ever-changing environment. This gave rise to behavior, emotions, and consciousness. These critical factors must be taken into account in how we develop AI. This is the purpose of the Mindfire Foundation,” he explains.
Kaufmann is taking a new approach, rejecting the idea that our brains work like the neural networks at the heart of the deep learning process. He and his team of scientists believe that the path to machine consciousness lies within our brains: a hidden code that can be broken and then replicated.
Mindfire considers “artificial intelligence” an obsolete term and has thus coined a new phrase — “artificial organism” — to more fully encompass the totality of what they are hoping to achieve. Kaufmann explains that the term refers to the synthetic intelligence’s carrier system. “Intelligence is not only located in the brain; it’s actually the interaction between the body and the brain that we are building. That’s why we refer to an artificial organism.” This concept, then, unifies the physical, intellectual, and emotional facets of intelligence.
The use of the word “organism” is clever word magic, in my opinion. By referring to something as an organism, it implies life of some kind, which in turn affects how our brains perceive it. We humans are suckers for anthropomorphising objects, and the casters of word spells know it! We let down our guard when it’s something we can relate to.
It’s interesting that this foundation, just like the Blue Brain Project & CERN, is based in Switzerland, a country renowned for its neutrality and secrecy. Keep an eye on Switzerland for more of these A.I. ventures; it seems to be the safe haven for such technology.
We discuss so much more within the show itself, so you really need to listen to it and then do your own research to determine whether you agree with our conclusions or not.
Unlike the black box of A.I., we are very transparent & insist you all scrutinize the process we went through to reach these conclusions.