What’s the technical case for Neuralink helping AI safety?

I have seen various references to the idea that Neuralink, or other brain-machine interfaces, would help with AI safety. I have heard people talking about enhancing human cognition with AI, and about merging with machines. What I haven’t seen is any of the following:

1) A proposal for a specific algorithm to attach to people’s brains, and an argument for why that algorithm is safe when attached to human brains.

2) An argument for why AI safety problems are easier in the abstract when the AI is attached to a human brain.

3) A specific AI safety problem that could be solved if we had brain-machine interfaces.

4) A reason to think that when you merge two computational systems, you get the safety of the safer component, rather than the danger of the more dangerous component, or all the failure modes of both.

5) An AI design that would only be seriously useful when attached to a human brain via BMI, rather than with the AI and human kept separate. (That design might happen to be one of the safer ones.)


My current views. Spoilered because they aren’t in the spirit of innocent questioning, and I don’t want to prejudice anyone’s answers.

To me, “tampering with the inner workings of the human brain” is exactly the sort of thing you don’t want an untrusted AI doing. If you have an AI that is smart enough to really understand what it’s doing to the human brain, and that is still friendly, then you probably have a friendly superintelligence, and it can make its own BMI.

I don’t think BMI makes a malevolent superintelligence much worse. If you were careless enough to give it internet access, humanity is doomed. If you kept it in your best high-security boxes, humanity is only probably doomed. Wiring a malevolent superintelligence into human brains is massively stupid, but it doesn’t actually make the situation much more hopeless, because the situation was hopeless anyway.

Most ideas I have seen for how to build a benevolent superintelligence are half-formed and fragmented, because the research isn’t done yet. But in any case, they don’t rely on BMI. Once you have an AI that is trying to figure out what you actually want, and to do that, you’ve won. It can get a pretty accurate model just from talking to people. It can build its own brain-scanning technology if need be. BMI might enable it to see the exact details of what you want; without it, the AI will still have a pretty accurate idea of what you want, and will still greatly improve the world. If all the AI knows about beauty comes from looking at things humans call beautiful, it can make something pretty, though it would be even prettier if the AI could scan your beauty-detecting neurons. Stuff like “cure all diseases” is obviously considered good, so the AI does that whether or not it has BMI.

However, when dealing with AI in the range from totally dumb to average human level, wiring it into a human brain gives much more potential for things to go wrong. Not in a humanity-ending way, more in the sense of mentally incapacitating the test subjects. The only pro-AI-safety argument for Neuralink seems to me to be “Neuralink -> dumb AI disasters -> public AI panic -> AI safety funding”.

submitted by /u/donaldhobson