Artificial Intelligence: Synergy or Sinnery?
Exploring the philosophical implications of our imminent clash with artificial intelligence
Photo by Sandro Katalina on Unsplash
The handful of questions below are intended to be mostly philosophical in nature and, though some may seem nonsensical, they ought to be approached with an especially open mind, one unwoven from general conceptions of social structure, political governance, and human purpose in its barest and most fundamental forms.
I’ll once again rehash a previous mission statement of sorts: technology’s rapid progression means that we will soon be on the doorstep of numerous crossroads that will undoubtedly shake the roots of humankind. It is better to fuel the discussion preemptively and begin to develop commonly and collectively agreed-upon expectations for what ought to be allowed and what we should be especially wary of.
Will the advent of A.I. allow us to embark upon a complete overhaul of traditional labor structures?
This question comes up less frequently than others, and its answer depends entirely on whether we take an optimistic or a pessimistic view.
To phrase it another way: A.I. can be seen as the harbinger of an age in which humankind can, for the most part, finally unshackle itself from the toils of labor. Conversely, it can also be regarded, and often is, as an enormous threat to employment, set to disrupt almost every industry and cause job loss on a massive scale.
Assuming an optimistic perspective, it’s certainly an exciting proposition, one that would have to be supplemented with some measure of universal basic income or some entirely new way for members of society to accumulate resources. A world in which humans need no longer work (again, for the most part) and are set free to pursue their individual endeavors may seem wholly unfeasible, but it is nonetheless a tantalizing prospect.
Preparing for a world without work means grappling with the roles work plays in society, and finding potential substitutes. First and foremost, we rely on work to distribute purchasing power: to give us the dough to buy our bread. Eventually, in our distant Star Trek future, we might get rid of money and prices altogether, as soaring productivity allows society to provide people with all they need at near-zero cost. — Ryan Avent, The Guardian
Supposing one were to take a pessimistic perspective, the threat of soaring unemployment rates is all too real. We’ve already seen jobs lost to automation in the workforce, and A.I. poses the most menacing danger of all. The darkest estimates project the loss of half of all current jobs to automation and A.I.; even if that figure is exaggerated, it is drastic enough to make us consider alternative systems of wage disbursement altogether.
What kind of features can we expect from neurally integrating artificial intelligence?
In other words, what does a superhuman, or a technologically enhanced human, look like? Not aesthetically, but in the realm of ability. It’s not hard to envision characteristics from a learning perspective: the ability to have any language downloaded into our cerebrums in an instant, to learn how to play instruments, master psychology, know all of history, and understand every scientific principle in the blink of an eye. Unfathomable modes of communication, energy manipulation, perhaps even time bending or physiological enhancements can be imagined, and often are in fiction and sci-fi.
To burst some bubbles: we’re likely a long way from anything resembling the enhancements described above, and are instead wading through the waters of combating neural degeneration. Stimulating neural circuits that are naturally slowing down, regenerating wilting brain tissue, assisting with the recovery of lost neural function: these are the kinds of A.I.-based and A.I.-supported realities on our horizon.
Where this could go, in terms of that interface to machines or interface to prosthetics, is that you can literally connect to the correct circuit and continually interact at a neuron level back and forth over time. — Dr. Kiki Sanford
Many seem somewhat reluctant to delve further into the possibilities that actual enhancements could entail, a reluctance often anchored in an underlying concern about whether such enhancements, merely desired rather than needed, would be ethical.
To what extent will such potential (of superhuman computation, thought and analysis, memory retention, accelerated learning, etc.) be susceptible to traditional human folly — greed, abuse, manipulation? It’s not hard to regard it as an inevitability, albeit far down the line.
Would it be immoral to enslave our own simulated minds?
“Though I may have been constructed,” he said, “so too were you. I in a factory; you in a womb. Neither of us asked for this, but we were given it. Self-awareness is a gift. And it is a gift no thinking thing has any right to deny another. No thinking thing should be another thing’s property, to be turned on and off when it is convenient.” ― C. Robert Cargill
In the Black Mirror episode White Christmas, writer Charlie Brooker nailed the shiver-inducing concept. If we were able to extract our personality, judgement, and overall character and confine it to some sort of artificially intelligent interface (like Iron Man’s Jarvis, for example), would it be inherently wrong to enslave it for our benefit?
Any effective A.I. that makes decisions based on what we would individually prefer would be most efficient if it worked from a pure, replicated version of ourselves. In essence, this question belongs to the broader debate over whether rights ought to be afforded to machines that demonstrate self-awareness and consciousness.
To sketch some measure of an answer, it’s not far-fetched to conclude that if the enslaved mind understands the concept of free will and has achieved some level of self-realization, then yes: it would seem intrinsically immoral to enslave something that understands the difference between liberty and servitude.
Is a robotic revolution inevitable?
Will assembly-bots and Roombas one day rise up against us? This is the crux of almost any sci-fi film that features A.I. in a prominent role, having been epitomized with the threat of Skynet in the Terminator series.
It’s obvious that computers have to follow pre-designed programming. Even the most powerful computers are currently unable to develop their own programming; we assume so, anyway. To fully explore this question, we have to ask whether a computer is capable of self-consciousness in the first place, and the waters get murky on this point.
For instance, computing software can be coupled to random number generators and achieve some measure of self-propagation in this way, in a sense becoming able to unpredictably rewrite its own programming. Though this seems a far cry from a machine-versus-human confrontation, it is not wholly different from the way revolutions start in our current world. Some could say that many of our revolutions against governments are “programmed” in a similar way, though “inspired” may be the more adequate term.
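As a toy illustration only (this sketch is not from the article, and every name in it is invented for the example): a short Python program whose rule set is mutated by a random number generator, a trivially bounded version of the “unpredictable self-rewriting” gestured at above.

    import random

    def make_rule():
        """Return a random rule: add, subtract, or multiply by a random constant."""
        op = random.choice(["add", "sub", "mul"])
        k = random.randint(1, 5)
        if op == "add":
            return lambda x: x + k
        if op == "sub":
            return lambda x: x - k
        return lambda x: x * k

    # Start with a small set of randomly generated rules.
    rules = [make_rule() for _ in range(3)]

    state = 1
    for step in range(10):
        rule = random.choice(rules)        # apply one of the current rules
        state = rule(state)
        if random.random() < 0.3:          # occasionally "rewrite" one rule at random
            rules[random.randrange(len(rules))] = make_rule()
        print(step, state)

Even here, of course, the program only ever swaps rules drawn from a fixed, human-authored menu; the unpredictability comes from the random number generator, not from anything resembling intent.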
Symbiosis may be a key focus to strive towards here.
“If we can effectively merge with A.I. by improving the neural link between the cortex and your digital extension of yourself — which already exists, it just has a bandwidth issue — then effectively you become an A.I. human symbiote. And if that then is widespread, [where] anyone who wants it can have it, then we solve the control problem as well. We don’t have to worry about some evil dictator A.I., because we are the A.I. collectively. That seems to be the best outcome I can think of.” — Elon Musk
In a world where humans become increasingly dependent on A.I., and A.I. continues to co-exist ever more intimately with its maker, this relationship will be scrutinized to the point where we will have to render it either ultimately symbiotic or mutually destructive, with self-consciousness acting as the precursor. Until we get closer, this remains an issue to be kicked around by theorists and Hollywood producers.
How many of our world’s problems will A.I. be able to solve?
As we approach the conceptualized singularity, optimists are salivating at the sheer possibility of A.I. having the capacity to solve quandaries that we humans seem unable to get around, resolving in the blink of an eye problems that would otherwise take us generations.
Mathematical, social, medical, logistical, astrophysical: there is endless work for our technological successors to get their hands on, and it’s an enticing prospect to consider that A.I. may be able to generate effective measures by which we can reverse climate change, establish sustainable habitats on other planets, maximize medical efficiency, and so on.
How much will we trust A.I. to assist us? Will it be prone or even able to make mistakes? Will we trust its judgement and, beyond that, how do we even calibrate its judgement when we ourselves sometimes have a hard time deciding what is moral?
“It’s not the span of time that counts, but what you do with it. While you humans have been grubbing around the galaxy, looking for a sense of purpose, a meaning to pin on the chain of cosmic accidents that brought you shambling into existence, we have been doing great things. In the span of time that it takes you to sneeze, I can run the equivalent of a year’s worth of human consciousness. Imagine all the thinking we have done since our emergence.” ― Alastair Reynolds
Is there an inherent evil or malevolence associated with A.I.?
Hollywood, apparently, is actively preparing us for the worst-case scenarios by more often than not depicting A.I. as our enemy, as our eventual downfall. Is there really a reason to buy into such a doom-and-gloom perspective?
Steering clear of a Terminator discussion, many have illustrated entirely plausible dangers of artificial intelligence in the way it may be manipulated to serve endeavors of sheer profitability or national interest: advertising, spying, exploitation, and so on.
If and when the notion of A.I. developing its own consciousness ever comes to be accepted as a legitimate prospect, the debate will morph into a wholly different animal taking on entirely different concerns. At this time, however, it’s not hard to see that many are so apprehensive about A.I.’s damning capabilities because we’re used to our own damning tendencies.
In other words, we ought only to fear whatever it is we program into the system in the first place. Ex Machina exemplified this point brilliantly: the protagonist robot Ava embodied the same principles of self-preservation and deception as her creator.
Ava: Isn’t it strange, to create something that hates you?
In sum, it can be argued that it takes the power of abstraction to be cruel. And until we actually get to that point, we should worry more about the dangers of our own potential than about anything else.
As I mentioned in a previous post that examined our eventual convergence with artificial intelligence, to ask these questions seems to unfairly cast a dark shadow onto an inevitable terminus along the timeline of civilization. The prospective benefits of A.I. more than justify our lusting appetite for it.
Still, there are small blips of concern that grow larger and louder each time they’re addressed in science fiction or pop culture. Those blips are evident throughout this post, as there is a thematic common denominator to every answer listed above: each prospect can be regarded as a double-edged sword. This is human nature, to assume and ponder the destination before arriving there. The truth is, until we get there, we really don’t know. As such, we may want to pay as much attention to the way we’re travelling as to where we’re going.