Subtle biases in AI can influence emergency decisions
Artificial intelligence (AI) systems are increasingly relied upon to support health care. Recent research has shown that machine learning models can encode biases against patients from minority subgroups, and those biases can influence the recommendations the models make. An MIT team has now shown, however, that the harm from a discriminatory AI system can be minimised if the advice it delivers is properly framed.