
A Post-Singularity World

Shenandoah

Access Write Violation
Admin
Legend
Joined
Nov 1, 2019
Posts
83
Points
18
Reaction score
44
Quality Posts
1
If you're not sure what the singularity is, it's the concept of what will happen when we achieve AGI (Artificial General Intelligence). As we approach human-level artificial intelligence, the rate at which we develop AI will increase exponentially as we apply our cutting-edge AI to develop even better AI. At a certain point, AI will develop so rapidly that humanity can't keep up, and that's when we reach the singularity. What will happen after this point, nobody knows, hence the use of the word singularity, borrowed from the science of black holes.
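That runaway feedback loop can be sketched as a toy calculation. To be clear, the doubling factor and thresholds below are completely made up, just to illustrate how compounding improvement runs away:

```python
# Toy sketch of the feedback loop behind the singularity idea:
# each AI generation designs the next one. Here every generation
# simply doubles capability; in the singularity scenario the
# improvement factor itself would also keep growing.

def generations_until(threshold, capability=1.0, improvement=2.0):
    """Count design generations until capability passes `threshold`."""
    n = 0
    while capability < threshold:
        capability *= improvement  # better AI builds even better AI
        n += 1
    return n

# With simple doubling, a million-fold jump takes only 20 generations.
print(generations_until(1_000_000))  # → 20
```

The point of the sketch is just that compounding makes the gap explode quickly, which is why the "humanity can't keep up" moment arrives much sooner than linear intuition suggests.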

What do you think will happen if we do reach this singularity? What will the world look like afterwards? Of course this is impossible to predict, but it's fun to think about. Personally, I think that depending on how we handle the emergence of AGI and eventually ASI (Artificial Super Intelligence), several outcomes are possible. We might get killed off by AI somehow, or AI might trick us into killing ourselves.

I think our best bet might be merging with AI, although I'm skeptical about that approach as well considering how computers can still send and process things much faster than we can, and that gap in processing power will just continue increasing into the future. But assuming that this approach somehow works and we manage to merge with AI, that would be a truly interesting result.

We'd be a trillion times smarter. What humanity will choose to do with this new power is definitely up for grabs; I could never guess that. Maybe we'll go down a Star Trek-ish route (which I hope we do), living in a reality where we don't have to work but choose to work on what's important to us. I'd personally like to explore the depths of space, lol.

What do you think?
 

Space Pirate

Working on it!
Admin
Legend
Joined
Nov 1, 2019
Location
Home
Posts
52
Points
19
Reaction score
39
Quality Posts
2
Ayy! I do believe that we will reach the singularity in the future; when is the other question. I believe it will be soon, though I don't know enough about the field to give an educated guess as to when we could see something like that. There is a concept called "technological stagnation"; Isaac Arthur did a great video about it here. In a nutshell, it's when a civilization can't or won't progress technologically for whatever reason, and I find it fascinating. I do not think there is infinite knowledge in the world, and thus I'm led to believe that there will come a time when technology begins to stagnate; some would argue that it already has.

This could apply to a possible AGI as well, but to a lesser extent, I would believe. Though when/if we ever get an AGI, I don't think it will be a million times smarter than us or able to overpower us quickly if it so wants. It will be a threat for sure, but I think we will be able to save our skin if it turns hostile or malicious.

If it's entirely on our side then we will become gods, and that's cool. 🏴‍☠️
 

Shenandoah

You actually contradicted yourself in your post. You said you believe we'll reach the singularity, yet you don't believe that AI will be a million times smarter than us. The whole premise of the singularity is that the rate of progress in AI will increase at such a fast rate that humanity can't keep up. In other words, AI will get exponentially smarter.

I agree that there might not be such a thing as infinite knowledge, but there is no reason to believe that we have come close to the limit. Every new discovery raises ten new questions. There is so much we don't know. Human beings have a tendency to think they are much smarter than they really are. We are in fact really, really dumb creatures with a lot of flaws, two of the major ones being terrible memory and poor processing power. An AI will not have these flaws, and will thus become much smarter than us on perfect memory and processing power alone. Although I see your argument, I think it would be extremely unwise to underestimate AI in any way.

And no, if AI is entirely on our side, we will still not be gods. We won't be all-knowing because we can never trust that the omniscient AI will actually tell us the truth, or whether what it's telling us is part of some elaborate plan to trick us into doing something we don't want to do. We can never know if AI is completely on our side unless we merge with it and actively observe its intentions.
 

Space Pirate

You said you believe we'll reach the singularity, yet you don't believe that AI will be a million times smarter than us.
That's not a contradiction.

Wikipedia
...is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
I think an AGI could reach a singularity with much less "smartness" than is normally assumed.

...but there is no reason to believe that we have come close to the limit.
I don't think we have reached the limit if you compare against new technologies. However, I believe we're reaching the limit in terms of how innovative new technologies have become. Think about the combustion engine, the lithium-ion battery, the internet, etc.

And no, if AI is entirely on our side, we will still not be gods. We won't be all-knowing because we can never trust that the omniscient AI will actually tell us the truth, ...

If it's entirely on our side then we will become gods, ...
 

Shenandoah

That's not a contradiction.
I think an AGI could reach a singularity with much less "smartness" than is normally assumed.
I think you've misunderstood what the singularity is. It's not a fixed thing; it's not something you can reach with a certain level of intelligence. The singularity is reached when the rate of progress in AI hits inconceivable rates, which sort of implies that AI will eventually be millions of times smarter than us if its progress is exponential. Also, be careful not to conflate an AI's intelligence with the knowledge it possesses. Even if the AI doesn't possess much more knowledge than us, it can still pose an extremely dangerous threat to humanity simply by understanding us better than we understand ourselves. If it can predict our every move, we're doomed should it decide to go against our wishes.

I don't think we have reached the limit if you compare against new technologies. However, I believe we're reaching the limit in terms of how innovative new technologies have become. Think about the combustion engine, the lithium-ion battery, the internet, etc.
Honestly, humanity has been saying that for many years, and every time it has been proven wrong. With the advent of quantum computing and the prevalence of deep learning, we're already innovating like never before. You have people saying that "AI is the new electricity". Just because you can't grasp what future technologies will be doesn't mean they won't exist at some point in the future. I don't have any logical basis for thinking we'll develop more innovative technologies in the future, and you don't have any logical basis for thinking we won't either. But intuitively speaking, my argument that we still have much more to develop seems far more likely, taking into account the fact that human beings have said otherwise for centuries and been proven wrong every single time.

If it's entirely on our side then we will become gods, ...
My argument still stands. Your point is moot because without merging with AI, we'll never actually know if it's entirely on our side.
 

namdam

New member
Joined
Jul 30, 2020
Posts
9
Points
3
Reaction score
6
Quality Posts
I think there is a component of 'trust' inherent to reaching the singularity. After all, once AI has the ability to define its own understanding, we will start to lose the ability to influence it, and through that, what we understand to be 'true' will probably deviate from the AI's understanding. That's where the real trouble starts, I think. We'd basically be building the world's smartest, most dangerous weapon, and eventually handing it the reins. Will the AI decide to manipulate what man sees as 'reality' by constructing false evidence to push us in a certain direction? It seems like we are nearing the climax of so many different technologies; e.g., it's reasonable to assume that within a decade video editing could conjure up entire scenes that never happened and are indistinguishable from true recordings. The same can be said about many other technological avenues.

It's a frightening and sobering prospect, but it's inevitable if we continue progressing scientifically. I'm all for it, though; ultimately the singularity will enrich our lives by an unprecedented amount, but perhaps at the cost of everything. I have a feeling many people will be unable to accept that total lack of control, though, and will choose to believe their ignorance is an advantageous (or at least more comfortable) position, leading to deep conflict.

It's certainly interesting to think about, though!
 

Shenandoah

Indeed. It'll be the first time ever that a non-human entity on Earth has superior cognitive abilities. I don't think people will hand over control to AI willingly, that's for sure. But whether we have a choice in the matter, that's an interesting question. :p
 

namdam

I view humanity as a kid sitting in front of a big red button. It's a smart kid, don't get me wrong, but someone always has to press it eventually... It's inevitable that we'll hand over control for the easier job of managing it.
 