MIRI updates: Abram Demski distinguishes different versions of the problem of “pointing at” human values in AI alignment. Evan Hubinger discusses “Risks from Learned Optimization” on the AI X-Risk Research Podcast. Eliezer Yudkowsky comments on AI safety via debate and Goodhart’s law. MIRI supporters donated ~$135k on Giving Tuesday; with ~26% matched by Facebook and ~28% by employers, the total came to $207,436! MIRI also received $6,624 from TisBest Philanthropy in late December, largely… Read more: February 2021 Newsletter
Traditional computer scientists and engineers are trained to develop solutions for specific needs, but aren’t always trained to consider their broader implications. Each new technology generation, and particularly the rise of artificial intelligence, leads to new kinds of systems, new ways of creating tools, and new forms of data, for which norms, rules, and laws frequently have yet to catch up. The kinds of impact that such innovations have in… Read more: Fostering ethical thinking in computing
AI Humiliates Spacetime Empire: I enjoyed this article, as it makes a point about black-box AI bypassing known maths/physics, but I’m no expert on these matters. I’d love to know what others think of it. (If you downvote, no problem, but perhaps say why? I’m trying to learn here.) Cheers!
submitted by /u/Jackson_Filmmaker