Di Posting Oleh : NAMA BLOG ANDA (NAMA ANDA)
Kategori : AI Google Brain Publications Research
.DialogCon { text-align: center; color: rgb(102, 102, 102); width: 825px; background-color: rgb(255, 255, 255); } .xDialog { position: fixed; z-index: 1000; left: 262px; top: 300px;opacity:0;} @media screen and (max-width: 600px) { .DialogCon { width:300px; height: 120px; } .xDialog { width:300px; height: 120px; left:0px;} } .DialogCon2 { text-align: center; color: rgb(102, 102, 102); width: 825px; background-color: rgb(255, 255, 255); } .xDialog2 { position: fixed; z-index: 1000; left: 262px; top: 300px;opacity:0;} @media screen and (max-width: 600px) { .DialogCon2 { width:300px; height: 120px; } .xDialog2 { width:300px; height: 120px; left:0px;} } .DialogCon3 { text-align: center; color: rgb(102, 102, 102); width: 825px; background-color: rgb(255, 255, 255); } .xDialog3 { position: fixed; z-index: 1000; left: 262px; top: 300px;opacity:0;} @media screen and (max-width: 600px) { .DialogCon3 { width:300px; height: 120px; } .xDialog3 { width:300px; height: 120px; left:0px;} }
We believe that AI technologies are likely to be overwhelmingly useful and beneficial for humanity. But part of being a responsible steward of any new technology is thinking through potential challenges and how best to address any associated risks. So today we�re publishing a technical paper, Concrete Problems in AI Safety, a collaboration among scientists at Google, OpenAI, Stanford and Berkeley.
While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative. We believe it�s essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.
We�ve outlined five problems we think will be very important as we apply AI in more general circumstances. These are all forward thinking, long-term research questions -- minor issues today, but important to address for future systems:
- Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?
- Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don�t want this cleaning robot simply covering over messes with materials it can�t see through.
- Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
- Safe Exploration: How do we ensure that an AI system doesn�t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn�t try putting a wet mop in an electrical outlet.
- Robustness to Distributional Shift: How do we ensure that an AI system recognizes, and behaves robustly, when it�s in an environment very different from its training environment? For example, heuristics learned for a factory workfloor may not be safe enough for an office.
We go into more technical detail in the paper. The machine learning research community has already thought quite a bit about most of these problems and many related issues, but we think there�s a lot more work to be done.
We believe in rigorous, open, cross-institution work on how to build machine learning systems that work as intended. We�re eager to continue our collaborations with other research groups to make positive progress on AI.
0 Response to "Bringing Precision to the AI Safety Discussion"
Post a Comment