Source: Understanding AI
by Timothy B. Lee
“In the AI safety community, discussions of AI risk typically focus on ‘alignment.’ It’s taken for granted that we will cede more and more power to AI agents, and the debate focuses on how to ensure our AI overlords turn out to be benevolent. In my view, a more promising approach is to just not cede that much power to AI agents in the first place. We can have AI agents perform routine tasks under the supervision of humans who make higher-level strategic decisions. … Organizations will not face a stark choice between turning control over to AI systems or missing out on the benefits of AI. If they’re smart about it, they can have the best of both worlds.” (08/04/25)
https://www.understandingai.org/p/keeping-ai-agents-under-control-doesnt