Addressing Uncertainty in Machine Learning

Engineering a 'safe' system involves a degree of uncertainty, since it relies on the knowledge of the people assessing all the different aspects of the system, including its expected behavior. In ML, this uncertainty is aggravated by the inability to guarantee that the algorithm will make the right predictions (inductive or deductive): these algorithms make predictions based on models of the input data, which may be inherently inaccurate. This makes the case that a system whose behavior cannot be fully predicted is not a high-integrity system.

Very simply put, the intrinsic difficulty with ML is the possibility of making bad predictions.

How can we ensure that we understand all the factors that influence these systems and limit the magnitude of such misbehaviors and potential safety impacts?
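One common way to limit the magnitude of such misbehaviors is selective prediction: the system acts on a prediction only when the model's uncertainty is low, and otherwise defers to a safe fallback (or a human). The sketch below, using predictive entropy over the model's output probabilities, is purely illustrative; the function names and the threshold value are assumptions, not part of any particular system.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution.
    Higher entropy means the model is less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def accept_prediction(probs, max_entropy=0.5):
    """Act on the prediction only when uncertainty is below a
    threshold; otherwise defer to a safe fallback. The threshold
    value here is illustrative and would need calibration."""
    return predictive_entropy(probs) <= max_entropy

confident = [0.97, 0.02, 0.01]   # model is nearly certain
uncertain = [0.40, 0.35, 0.25]   # model is close to guessing

print(accept_prediction(confident))  # True: low entropy, act on it
print(accept_prediction(uncertain))  # False: defer to a fallback
```

A mechanism like this does not make the predictions correct, but it bounds how often a low-confidence prediction can reach a safety-relevant decision, which is one concrete answer to the question above.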

#machinelearning #artificialintelligence #highintegritysystems #safety
