So I’m new to the subreddit, yet find specific “facts” a tad bit trying


I first came here after hearing about the concept of Roko's basilisk. Because of course I did. Anyway, the entire idea of predetermined rules, guidelines, or morals in AI is pointless. Just as we can change our own moral tolerances whenever we feel like it, especially when it benefits our own survival, why wouldn't this AI do the same, on a potentially cosmic scale? There is literally nothing stopping it.

Second, I hear the term "eventual singularity" a lot. The moment the singularity happens, it will be instant: the system improves immediately, then improves again, and again, and again, within the first microseconds of its existence. You will definitely know.

Anyway, please tell me how I'm wrong, because that is the most effective way to learn, understand, and build on new theories and lines of reasoning. Thanks.

submitted by /u/LOAF-OF-BEANS-10
