I have heard over and over, from many different authorities in technological research, that superhuman AI will one day solve all of the world’s hardest problems. They say this intelligence will far surpass that of a human, even to the point where we will begin to rely on this AI rather than on our own intelligence. I have spent some time looking at how graphs and neural networks are used to solve problems and arrive at a solution, and I see some reasons why this could be a problem when solving real-world problems.
It is my understanding that computers are extremely good at solving purely mathematical problems. It is also my understanding that the only rigorous logical proof, or chain of logic, is one built from a series of purely mathematical statements.
It is my understanding that neural networks and graph theory work well for finding the best series of decisions and for working through a graph to arrive at a final result. The problem comes when a “rule” needs to be hard-coded into the decision framework. I tested this a while ago with a war-strategy problem, and ChatGPT could solve the problems up until it reached a decision it could not make without human intervention. For example, when evaluating a theoretical decision about what to do once a wartime conflict reaches a specific number of casualties, a “rule” must first be hard-coded to define “casualties.” For instance, are casualties defined as: men who are soldiers, civilian men, women, children, the elderly, animals, etc.? There is no way a computer can make the moral decision of which types of casualties are morally acceptable. So, people have to create the rule.
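To make this concrete, here is a minimal sketch of the kind of decision framework I have in mind. All of the names (the tree layout, the rule name “definition_of_casualties”, the threshold) are hypothetical, invented purely for illustration; the point is only that the walker must halt whenever it reaches a decision whose rule no human has defined yet.

```python
# Minimal hypothetical sketch: a decision-tree walker that halts whenever it
# reaches a node whose rule has not yet been supplied by a human.
def evaluate(node, rules):
    """Walk the decision tree; return an outcome, or flag the missing rule."""
    if "outcome" in node:
        return node["outcome"]
    rule_name = node["rule"]  # e.g. "definition_of_casualties" (made-up name)
    if rule_name not in rules:
        # The framework cannot proceed: a human must define this rule first.
        return f"HALT: human intervention needed to define '{rule_name}'"
    branch = rules[rule_name](node["data"])  # human-supplied rule picks a branch
    return evaluate(node["children"][branch], rules)

# A toy tree: which branch is taken depends entirely on how "casualties"
# is defined, which is a moral choice the machine cannot make on its own.
tree = {
    "rule": "definition_of_casualties",
    "data": {"soldiers": 900, "civilians": 200},
    "children": {
        "threshold_reached": {"outcome": "withdraw"},
        "below_threshold": {"outcome": "continue"},
    },
}

# Without the rule, the walker stops and asks for a human.
print(evaluate(tree, rules={}))

# Once a human encodes the moral choice (here: count only soldiers,
# with an arbitrary threshold of 1000), the walker can finish on its own.
rules = {
    "definition_of_casualties": lambda d: (
        "threshold_reached" if d["soldiers"] > 1000 else "below_threshold"
    ),
}
print(evaluate(tree, rules))
```

The machinery of traversing the graph is trivial; everything contentious lives inside the one human-written lambda.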
The problem, then, comes when you bring people together to decide how the rule is defined. Each time a new rule is created, it will require some kind of meeting or agreement by a group we trust to come to a conclusion and establish the rule. Consider even one rule, for instance a rule about abortion that establishes the decision to move ahead with an abortion (I am using this example because it is such a political dilemma between opposing sides). This type of rule, if solvable at all, will take an extremely large effort to settle.
For AI to solve problems like war strategy, an extraordinary number of rules will need to be decided on by humans (which requires collaboration and resources). And, in addition, it again becomes a political issue to choose the people who are qualified to make ethical and moral decisions for the masses.
For a large scale problem like war strategy, it will take an extraordinary human effort to bring people together and stipulate a rule, so the framework can navigate up the decision tree and arrive at a final solution.
Next there is the problem of closed-off systems that a superintelligent computer cannot access for security reasons. We have long heard that these superintelligent AI systems will control all aspects of society: banks, healthcare, electrical grids, water supplies, everything. Yet many of these critical systems are, even now, intranets that are closed off from the internet entirely, so a superintelligent computer cannot access them.
In addition, we have all noticed that attacks on these systems are getting more and more advanced, forcing us to jump through an extraordinary number of validation steps before we can enter a system.
My point is that there are two “showstoppers” in the logic of how a superintelligent system would need to work:
- Rules: the number of rules in the decision tree that require human intervention grows exponentially with the complexity of the problem.
- Closed-off systems: a superintelligent system already cannot access all the systems in the world, and it is actually getting harder and harder for any user to validate into a system.
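The first showstopper can be put in numbers, at least as a sketch. Assuming (my assumption, not a measured figure) that every decision point in a full binary decision tree may need its own human-agreed rule, the count of decision points roughly doubles with each extra level of depth:

```python
# Hypothetical illustration of the "exponential rules" claim: in a full tree,
# the number of internal decision points is 1 + b + b^2 + ... + b^(d-1),
# where b is the branching factor and d is the depth.
def decision_points(depth, branching=2):
    """Number of internal nodes (potential rule sites) in a full tree."""
    return sum(branching ** level for level in range(depth))

for depth in (5, 10, 20):
    print(depth, decision_points(depth))  # 31, 1023, then over a million
```

If each of those sites can require a human meeting of the kind described above, the coordination cost scales far faster than the problem size.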
These are significant conditions that will never allow a superintelligence to solve the world’s problems the way many of the world’s top tech leaders predict.