AI models can be biased because they are trained on specific datasets drawn from specific populations. Characteristics of minority groups (for example, age, gender, or skin type) can therefore be overlooked or misinterpreted by AI algorithms, which could negatively affect diagnosis and treatment for members of these groups.
The study presented by the Barco team is titled “Open-source tool for model performance analysis for subpopulations”. It addresses the research question: “How to guarantee AI model safety and effectiveness for all subgroups of the target population?”
The paper presents an open-source, customizable tool that can measure how well an AI algorithm performs. For example, it can highlight the subpopulations that are most at risk of being misinterpreted by the algorithm.
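The idea of per-subpopulation performance analysis can be illustrated with a minimal sketch. This is a hypothetical example, not the actual tool or its API: it computes accuracy per subgroup and flags groups that fall below a chosen threshold (the function names, data, and threshold are all assumptions for illustration).

```python
# Hypothetical sketch of per-subgroup performance analysis: given
# predictions, ground-truth labels, and a subgroup attribute per
# sample, report accuracy per subgroup and flag at-risk groups.
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Return {group: accuracy} over the samples in each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

def flag_at_risk(per_group, threshold=0.8):
    """Subgroups whose accuracy falls below the threshold."""
    return sorted(g for g, acc in per_group.items() if acc < threshold)

# Toy data: the model does well overall but fails on group "C".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "C", "C", "C"]

scores = subgroup_accuracy(y_true, y_pred, groups)
print(scores)               # per-subgroup accuracy
print(flag_at_risk(scores)) # ['C']
```

An aggregate accuracy of 62.5% would hide that group "C" is misclassified every time; splitting the metric by subgroup surfaces exactly the kind of risk the paper targets.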
Stijn Vandewiele, Research Engineer at Barco, comments: “Artificial intelligence will certainly lead to massive advances in healthcare. However, it’s important to remember that it’s not an infallible technology. Healthcare professionals need to be able to rely on correctly trained algorithms and have checks in place for errors. We believe that this paper can contribute to ascertaining a safe use of AI in the medical world.”
This research was funded through the Vivaldy project, PENTA 19021, and financially supported by the Flemish Government (Vlaio grant HBC.2019.274).
Barco is a global company with headquarters in Kortrijk (Belgium). Our visualization and collaboration technology helps professionals accelerate innovation in the healthcare, enterprise, and entertainment markets. We count over 3,000 visioneers, whose passion for technology is captured in over 500 unique patents.
Barco is a listed company (Euronext: BAR; Reuters: BARBt.BR; Bloomberg: BAR BB) and realized sales of 1,058 million euro in 2022.
Barco. Visioneering a bright tomorrow.
© Copyright 2023 by Barco