The singularity is near, writes Valentina Doorly, and now is the time to start talking about it.
The date often cited in futurist circles as the point of no return is 11 May 1997. On that day, an artificial intelligence called Deep Blue defeated, for the first time, the best human brain available in the discipline of chess.
White queen to c4, black king trapped, and that was it: Mr Kasparov, the world champion, a lifetime spent sharpening and refining his consummate art, was toast. The algorithm's calculations had proved more comprehensive, far-sighted and accurate than the best skill set available on the planet.
That was just the beginning. To anyone involved at the frontiers of technological progress, it has by now become resoundingly clear that artificial intelligence will soon surpass the cognitive ability of our still-powerful brain. In fact, it already has.
Long gone are the reassuring notions that automation will mostly affect the lower ranks of logistics and manufacturing. They have been replaced by the shiny Sawyer robot, which can now learn directly from the arm movements of a worker, programming itself to replicate them, at a one-off cost to the company of around $29,000 (with a return on investment within one year).
Financial robo-advisors deliver well-informed suggestions on investment options, having cross-referenced thousands of data points from data banks and run mathematical simulations of possible scenarios, ranking the outcomes by probability.
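The kind of scenario simulation described above can be pictured as a toy Monte Carlo run: simulate many possible growth paths for a portfolio, then rank the outcomes in probability bands. The return and volatility figures below are illustrative assumptions, not any particular robo-advisor's model.

```python
import random

def simulate_portfolio(initial, mean_return, volatility, years, n_scenarios=10_000):
    """Monte Carlo sketch: simulate many possible growth paths and
    rank the outcomes into probability bands (percentiles)."""
    outcomes = []
    for _ in range(n_scenarios):
        value = initial
        for _ in range(years):
            # Draw one year's return from a normal distribution (a common
            # simplifying assumption, not a market guarantee).
            value *= 1 + random.gauss(mean_return, volatility)
        outcomes.append(value)
    outcomes.sort()
    # Report pessimistic / median / optimistic bands -- the "ranks of probability".
    return {p: outcomes[int(len(outcomes) * p / 100)] for p in (10, 50, 90)}

# Hypothetical parameters: EUR 10,000 invested for 10 years at 5% mean
# annual return with 15% volatility.
bands = simulate_portfolio(initial=10_000, mean_return=0.05, volatility=0.15, years=10)
```

Ten thousand simulated paths are enough for the percentile bands to stabilise; a real advisor would layer asset-allocation rules and fees on top of this bare skeleton.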
According to recent research by US fund giant Legg Mason, 76% of European millennials would be happy to use a robo-advisor instead of a human. Amelia, a piece of software, can now replace call centre employees, having analysed thousands of queries and memorised the appropriate replies.
It takes an average of three weeks for this ‘cognitive agent’ to complete a process of ‘deep learning’ involving semantic analysis of the communications exchanged.
Spookily installed on the operator’s computer, Amelia listens, memorises and creates protocols of typified answers, after which she can pick up any enquiry arriving on the line and process it competently. She becomes smarter with time, and she speaks 20 languages.
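A crude way to picture those ‘protocols of typified answers’ is a bag-of-words matcher over memorised query–answer pairs: new enquiries are routed to the stored reply whose wording they most resemble. The queries, answers and similarity scheme below are hypothetical illustrations, not Amelia’s actual (undisclosed) method.

```python
import math
from collections import Counter

# Hypothetical memorised protocol: past queries mapped to typified answers.
PROTOCOLS = {
    "how do i reset my password": "Visit the account page and choose 'Reset password'.",
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "how can i cancel my subscription": "You can cancel any time under 'Billing'.",
}

def vectorise(text):
    """Bag-of-words: count each word in the lower-cased text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(query):
    """Route a new enquiry to the most similar memorised query's reply."""
    q = vectorise(query)
    best = max(PROTOCOLS, key=lambda known: cosine(q, vectorise(known)))
    return PROTOCOLS[best]
```

For example, `answer("I need to reset my password")` lands on the password-reset reply because of the word overlap. A production system would add semantic analysis well beyond raw word counts, which is precisely the ‘deep learning’ step the article describes.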
Medical research can be supported by sophisticated software that, cross-referencing vast amounts of data across countless studies, comes up with the most promising hypotheses for new research and treatment avenues. It is estimated that it would take a doctor an average of 100 days of reading a month just to keep up with the relevant literature published in their particular field over the same period.
Clearly, that is humanly impossible. But it is not impossible for IBM’s Watson AI, currently used by oncologists at Memorial Sloan Kettering Cancer Center in New York, which draws on 600,000 medical reports covering over 1.5 million patients, plus around two million pages of medical journals and literature, to support diagnosis and treatment.
SPEED OF CHANGE
If you are an interpreter, you should be getting antsy: voice recognition is progressing so fast that the first products delivering simultaneous language-to-language translation are entering the market as we speak. Launching this autumn at a retail price of $129, an ear device called Pilot, by New York’s Waverly Labs, claims it can listen to and simultaneously translate multilingual conversations.
This is a liquid tide that will melt barriers between millions of people, making contact and communication easier, faster and more multicultural, perhaps marking the end of the dominance of English as the preferred language of international communication.
KNOW IT ALL
The principle of the singularity – the moment when machines become capable of reprogramming their own software, creating ever-improved versions of themselves – is at once the holy grail invoked by many techies and scientists and the scary tipping point of no return feared by the opposing side.
A rule of thumb among futurists is that the impact of innovations is overestimated in the short term and underestimated in the long term. Yet we are approaching question time very fast, so we need to start the debate on this crucially important issue.
Artificial intelligence knows better, knows more, knows faster. When will artificial intelligence stop aiding us, and when will we start aiding it? If we no longer need to ‘know’ anything, because an artificial intelligence is always on hand with all-encompassing knowledge, and we no longer need to remember anything, because the Internet of Things (IoT) is always around us with bountiful solutions, are we heading towards a dangerous stripping down of our cognitive skills?
The most noble of our organs – mysterious and mystical – the brain needs to be stimulated and fed to be kept sharp and in good shape.
The well-known phenomenon of neuroplasticity tells us that synapses form and wire the system according to the stimuli supplied.
In other words, the quantity and quality of stimuli (information, imagery, emotions and so on) supplied to the brain actually create pathways inside it.
We rewire our brain every time we learn something and push the boundaries of our experience.
But what if, beside us, around us, and even on our wrist – in the form of a smart watch – an artificial entity always knew better, more, faster than us?
We just need to ask it, right? What competence, knowledge and skills are we to retain if the net has it all?
Shall we start ‘competing’ with the superhuman capabilities of the machines we are creating, hybridising our neurological system with a digital layer?
Shall we impart accelerated new knowledge through new technologies that implant memories in our amygdalae, in a Matrix Reloaded scenario? It’s called ‘neural lacing’; nanotechnologists are working on it, and Elon Musk, CEO of Tesla Motors, recently flagged this route in a public debate as a viable way to prevent humans becoming artificial intelligence’s lovely pets.
END OF THE LINE
But while on one side transhumanists like Ray Kurzweil champion the creation of empowered post-humans, manipulating our brain to overcome its limits, on the other Stephen Hawking publicly warned, in a BBC interview two years ago: “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, redesigning itself at an ever faster pace. Humans, who are limited by slow biological evolution, would be superseded.”
These topics will soon need to leave the inner circles and hotbeds of futurists and super-techies to become a matter of public debate. Because, you see, at the end of the day, Jaron Lanier, the father of virtual reality, hit the nail on the head when he asked, raw and simple: “What makes a man?” Lanier, not a luddite or a conservative old whig by any standard, came to a halt and would like us to do the same.
What makes a man? Well, certainly not a smart watch. Certainly not a microchip, not an algorithm, not a screen.
What makes a man, and a woman, is our quest. Indeed, the quest and the capability of formulating that question is the answer itself. And if we stop querying, because we believe the system knows better, then we may as well be turned into a hologram. And we would certainly deserve that.