So I came across a video a few years ago, before I really got enthusiastic about the singularity and futuristic thinking, and I just realised something.
I personally believe that in order for societies to develop and optimize the best possible AI program to help augment our existence, the program's foundational structural framework itself needs to be imbued with the capacity to apply general systems thinking and to synthesize acquired dynamic information, as explained in these videos.
Because I think the main goal that needs to be accomplished is for the AI to have an accurate perspective of its purpose and the larger picture of its sub-routine goals, and of how to achieve them in a way that has the greatest beneficial impact on humanity, so as to keep it from becoming too narrow-minded.
A simple related example: if you program a car's self-driving computer to achieve the goal of driving from point A to point B, you also have to make sure it doesn't just drive in a straight line, by including the specific information that if it simply takes the shortest route possible, it will violate human-made laws and cause considerable damage and harm, not only to other people and things but to the passenger and the car itself.
It's similar to my idea, because if you don't find a way to include this systems-thinking paradigm in a super AI, and you still set its goal as simply succeeding at solving the near-infinite number of problems humans face, without considering the systems those problems form on a universal level, then it will undoubtedly, metaphorically speaking, just drive in a straight line from point A to point B, taking us and itself out with it.
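The straight-line analogy can be sketched in code. This is just a toy with made-up routes and penalty weights, not anything from the videos: a planner whose cost function is distance alone picks the dangerous shortcut, while one whose cost function also reflects the wider system (laws, safety) picks the longer but sane route.

```python
# Toy illustration (all names and numbers are invented): comparing a
# narrow objective against one that accounts for the surrounding system.

routes = {
    "straight_line": {"distance_km": 5.0, "law_violations": 3, "collision_risk": 0.9},
    "follow_roads":  {"distance_km": 8.0, "law_violations": 0, "collision_risk": 0.01},
}

def naive_cost(r):
    # Narrow objective: shortest distance only.
    return r["distance_km"]

def systems_cost(r, law_penalty=100.0, risk_penalty=50.0):
    # Same goal, but the cost also penalizes harm to the wider system.
    return (r["distance_km"]
            + law_penalty * r["law_violations"]
            + risk_penalty * r["collision_risk"])

best_naive = min(routes, key=lambda name: naive_cost(routes[name]))
best_systems = min(routes, key=lambda name: systems_cost(routes[name]))
print(best_naive)    # straight_line
print(best_systems)  # follow_roads
```

The point of the sketch is that nothing about the goal ("get from A to B") changed; only the cost function grew to include the system the car is embedded in, and that alone flips which plan wins.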
But of course the hardest issue is actually finding a way to implement this way of processing information in the actual AI in the first place. Still, I think it's imperative that AI researchers focus on what the videos I linked are trying to express, rather than on pure raw analytical thinking patterns alone.
But I want to hear your opinions, guys… yay or nay?