How can we differentiate between an AI making an error and us misunderstanding its reasoning?

Especially as AI takes on more critical functions, we'll need to thoroughly evaluate its outputs before acting on them in whatever domain it's applied to. Consider that, within a few decades, AI will likely be making better guesses than most experts (well before anything close to AGI). How can we hope to know when it is genuinely illogical (which may be extremely rare by then, but suppose you can't afford to be wrong) versus when it is onto something we never expected or wouldn't even have conceived of in the first place?
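To make the dilemma concrete, here's a minimal sketch (not a real system; every name in it, like `escalation_gate` and `matches_expert_consensus`, is hypothetical) of the obvious mitigation: gate the model's decision behind independent checks and escalate to a human when they disagree. The catch the question is pointing at shows up immediately: a failed check only tells you the model and the check disagree, not which one is wrong.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    confidence: float  # model's self-reported confidence in [0, 1]
    rationale: str     # the model's stated reasoning

def escalation_gate(
    decision: Decision,
    independent_checks: list[Callable[[Decision], bool]],
    confidence_floor: float = 0.95,
) -> str:
    """Route a model decision: apply it, or escalate to human review.

    Note the asymmetry: when a check fails, we learn only that the
    model and the check *disagree* -- not which of the two is wrong.
    """
    failures = [c.__name__ for c in independent_checks if not c(decision)]
    if decision.confidence < confidence_floor or failures:
        # Is the model being illogical, or is it ahead of checks we
        # wrote from our current (possibly outdated) understanding?
        return f"ESCALATE: low confidence or failed checks {failures}"
    return f"APPLY: {decision.action}"

# Hypothetical check encoding today's expert consensus -- which is
# exactly what a better-than-expert model might legitimately contradict.
def matches_expert_consensus(d: Decision) -> bool:
    return "novel" not in d.rationale  # placeholder heuristic

d = Decision(action="adjust dosage", confidence=0.99,
             rationale="novel interaction pathway")
print(escalation_gate(d, [matches_expert_consensus]))
# Escalates -- but was the model wrong, or just ahead of the check?
```

The human reviewer at the end of that pipeline faces the exact question posed above, with no extra information to resolve it.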

submitted by /u/ThetaSigma3141
