[one_third]Liberal – Rahul Kapoor
Elon Musk, CEO of SpaceX and Tesla, recently launched Neuralink, a venture to merge the human brain with artificial intelligence. The goal of this initiative is to create devices that link the human brain to software, enhancing the biological self as we know it. Such technological leaps light the way toward the next steps for humankind.
In an era where technological advances come faster than ever before, and the sheer volume of accumulated information is arduous to keep up with, the singularity is inevitable. Today, the hypothetical nature of this scenario allows us to examine the confluence of technological revolutions and the human interactions that follow. Even if the pace of progress slows, artificial intelligence will eventually be equipped with the capacity to improve itself, rendering human intervention obsolete. Such a development should be embraced, since it would be a major overhaul in the treatment of neurological disorders like Parkinson's.
[one_third]Independent – Diviyot Singh
No one understands what happens inside a black hole. Similarly, no one knows what will happen when the technological singularity occurs.
Technology is growing exponentially as we speak, and AI could easily overtake us in a few decades. So the technological singularity is a real threat, but there is no point panicking about it. You cannot know what life will be like once it happens, so you cannot do anything to prepare for it. Nor can we stop technological progress from happening; after all, it is the development of AI and other technologies that makes our lives so comfortable. For instance, you can already ask Google Assistant to turn off the kitchen lights if you're too lazy to do it yourself. We can't stop this progress, even if the lights turn off completely for us in the future. So I think we must accept the inevitable. Let it come closer, and the human race can then decide what to do about it.
[one_third]Conservative – Beshoy Shokralla
The main point of the "singularity" hypothesis is that one day humanity may create something so intelligent that it will begin creating even more intelligent versions of itself at ever higher rates, altering human civilization in ways we could never imagine. The concept is interesting, but is it really inevitable? To be quite honest, I don't think so.
The very idea that humanity will invent a superintelligent mega-computer that somehow surpasses everything we've ever done is more fictitious than scientific. Humans don't even understand how our own brains work: how we produce complex thoughts, how we create memories and connect them to emotions, how we come up with incredible designs and inventions. And yet we're supposed to inevitably not only come to understand these extremely complex processes, but somehow invent a supercomputer capable of recursively improving itself?
What I believe can be learned from the singularity is that people are afraid of one day truly playing God, only to have their creation cease to need them to survive.