By definition, the Technological Singularity is a blind spot in our predictive thinking. Futurists have a hard time imagining what life will be like after we create greater-than-human artificial intelligences. Here are seven outcomes of the Singularity that nobody thinks about, and which could leave us completely blindsided.
Top image: Ridwan Chandra.
For the purposes of this list, I decided to maintain a very loose definition of the Technological Singularity. My own personal preference is that of an intelligence explosion and the onset of multiple (and potentially competing) streams of both artificial general superintelligence (SAI) and weak AI. But the Singularity could also result in a kind of Kurzweilian future in which humanity has merged with machines. Or a Moravecian world in which our “mind children” have left the cradle to explore the cosmos, or a Hansonian society of competing uploads, featuring rapid economic and technological growth.

https://gizmodo.com/how-much-longer-before-our-first-ai-catastrophe-464043243
In addition to some of these scenarios, a Singularity could result in a complete existential shift for human civilization, like our conversion to digital life, or the rise of a world free from scarcity and suffering. Or it could result in a total disaster and a global apocalypse. Hugo de Garis has speculated about a global struggle for power involving massively intelligent machines set against humanity: the so-called artilect war.
But there are some lesser-known scenarios that are also worth keeping in mind, lest we be caught unawares. Here are seven of the most unexpected outcomes of the Singularity.

It’s generally assumed that a self-improving artificial superintelligence (SAI) will strive to become progressively smarter. But what if cognitive enhancement is not the goal? What if an AI just wants to have fun? Some futurists and scifi writers have speculated that future humans will engage in the practice of wireheading: the artificial stimulation of the brain to experience pleasure (check out Larry Niven’s Known Space stories for some good examples). An AI might conclude, for example, that optimizing its capacity to experience pleasure is the most purposeful and worthwhile thing it could do. And indeed, evolution guides the behavior of animals in a similar fashion. Perhaps a transcending, self-modifying AI will not be immune to similar tendencies.
At the same time, an SAI could also interpret its utility function in such a way that it decides to wirehead the entire human population. It might do this, for example, if it was pre-programmed to be “safe” and to consider the best interests of humans, thus taking its injunction to an extreme. Indeed, an AI could get its value system completely botched by concluding that maximum amounts of pleasure represent the highest possible utility for itself and for humanity.
As an aside, futurist Stephen Omohundro disagrees with the AI wirehead prediction, arguing that “AIs will work hard to avoid becoming wireheads because it would be harmful to their goals.” Image: Mondolithic Studios.

Imagine this scenario: The Technological Singularity happens, and the emerging SAI simply packs up and leaves. It could just launch itself into space and disappear forever.
But in order for this scenario to make any sense, an SAI would have to conclude, for whatever reason, that interacting with human civilization is simply not worth the trouble; it’s just time to leave Earth, Douglas Adams’ dolphins-style.
Image: Colie Wertz.

It’s conceivable that a sufficiently advanced AI (or a transcending mind upload) could set itself up as a singleton: a hypothetical world order in which there is a single decision-making agency (or entity) at the highest level of control. But rather than make itself and its global monopoly obvious, this god-like AI could covertly exert control over the human population.
To do so, an SAI singleton would use surveillance (including reliable lie detection) and mind-control technologies, communication technologies, and other forms of artificial intelligence. Ultimately, it would work to prevent any threats to its own existence and supremacy, while exerting control over the most important parts of its dominion, or territory, all the while remaining invisible in the background.
Another possibility is that humanity might actually defeat an artificial superintelligence, a totally unexpected outcome just based on the sheer improbability of it. No doubt, once a malign or misguided SAI (or even a weak AI) gets out of control, it will be very difficult, if not impossible, to stop. But humanity, perhaps in conjunction with a friendly AI, or by some other means, could fight back and find a way to shut it down before it can impose its will over the planet and human affairs. Alternately, future humans could work to prevent it from coming about in the first place.

Frank Herbert addressed these possibilities in the Dune series by virtue of the “Butlerian Jihad,” a cataclysmic event in which the “god of machine logic” was overthrown by humanity and a new fundamental tenet invoked: “Thou shalt not make a machine in the likeness of a human mind.” The Jihad resulted in the destruction of all thinking machines and the rise of a new feudal society. It also resulted in the rise of the mentat order: humans with extraordinary cognitive abilities who functioned as virtual computers.
Our transition to a post-Singularity civilization could also expose us to a larger, technologically advanced intergalactic community. There are a number of different possibilities here, and not all of them good.
First, a post-Singularity civilization (or SAI) might quickly figure out how to communicate with extraterrestrials (either by receiving or transmitting). There may be a kind of cosmic internet that we’re oblivious to, but which only advanced civs are able to detect (e.g. some kind of quantum communication scheme involving nonlocality). Second, a kind of Prime Directive may be in effect, a galactic policy of non-interference in which “primitive” civilizations are left alone. But instead of waiting for us to develop faster-than-light travel, an extraterrestrial civilization might be waiting for us to achieve and survive a Technological Singularity.

Third, and related to the last point, an alien civilization might also be waiting for us to reach the Singularity, at which time it will conduct a risk assessment to determine whether our emerging SAI or post-Singularity civilization poses some kind of threat. If it doesn’t like what it sees, it could destroy us in an instant. Or it might just destroy us anyway, in an effort to enforce its galactic monopoly. This might actually be how berserker probes work; they sit dormant in some region of the solar system, becoming active at the first sign of a pending Singularity.
If we’re living in a giant computer simulation, it’s possible that we’re living in a so-called ancestor simulation, a simulation that’s being run by posthumans for some particular reason. It could be for entertainment, or for a science experiment. An ancestor simulation could also be run in tandem with many other simulations to create a large sample pool, or to allow for the introduction of different variables. Disturbingly, it’s possible that the simulations are only designed to reach a certain point in history, and that point could very well be the Singularity.
So if we reach that stage, everything could suddenly go dark. What’s more, the computational demands of running a post-Singularity simulation of a civilization could be enormous. The clock speed, or even rendering time, could result in the simulation running so slowly that the posthumans would no longer have any practical use for it. They’d probably just shut it down.

Admittedly, this one’s pretty speculative (not that the others haven’t been!), but think of it as a kind of “we don’t know what we don’t know” sort of thing. A sufficiently advanced SAI could begin to see directly into the fabric of the cosmos and figure out how to hack into its “code.” It could start to mess around with the universe to further its needs, perhaps by making subtle alterations to the laws of the universe itself, or by finding (or engineering) an “escape hatch” in order to avoid the inevitable onslaught of entropy. Alternately, an SAI could construct a basement universe: a small, artificially created universe linked to the current universe by a wormhole. This could then be used for living space, computing, or as a way to escape the eventual heat death of the parent universe.
Or, an SAI could migrate and disappear into an exceedingly small living space (what futurist John Smart refers to as STEM space: highly compressed regions of space, time, energy, and matter) and conduct its business there. In such a scenario, an advanced AI would remain completely oblivious to us puny meatbags; to an SAI, the idea of conversing with humans might be akin to us wanting to have a conversation with a plant.