Helen Schneider, Elif Cansu Yildiz and Prof. Dr. Rafet Sifa, together with other scientists from the Lamarr Institute and the University Hospital Bonn, have won a Best Paper Award for a publication presenting their results on using Machine Learning to diagnose lung diseases. The award was granted at the International Conference on Artificial Neural Networks (ICANN) 2023.
To diagnose lung diseases such as pneumonia, doctors often evaluate X-ray images of their patients’ chests. Tools based on Artificial Intelligence (AI) can assist in evaluating these images, making everyday medical workflows more efficient and relieving the burden on specialist staff.
Researchers at the Lamarr Institute have further developed the AI-based analysis of X-ray images of the lungs using an informed Machine Learning approach. To this end, they have complemented classic data-based Machine Learning (ML) with prior knowledge from anatomy and medicine.
The scientists described their results in the paper “Symmetry-Aware Siamese Network: Exploiting Pathological Asymmetry for Chest X-Ray Analysis” and presented them at the International Conference on Artificial Neural Networks ICANN 2023. For this, the researchers received the “Springer & ENNS Best Paper Award”, which is presented annually by the publisher Springer Nature and the organizer of the ICANN conference, the European Neural Network Society (ENNS).
The ML model developed by researchers at the Lamarr Institute in collaboration with the University Hospital Bonn specifically exploits the symmetry of healthy lungs when evaluating X-ray images. Lung diseases can often be recognized by the fact that the images of the left and right lung show deviations from symmetry in certain areas.
In order to identify the areas in the scans that deviate from left-right symmetry, the researchers apply a special ML method known as a Siamese Neural Network, which evaluates the X-ray image pixel by pixel. For each model prediction, a heat map can also be generated that highlights the diseased areas of the lungs.
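To illustrate the general idea, the following is a minimal sketch of a Siamese-style comparison of the two lung halves: a shared encoder processes the left half and the mirrored right half, and a per-pixel feature distance serves as a rough asymmetry heat map. This is a generic illustration only, not the authors' published architecture; the encoder, the `asymmetry_heatmap` helper, and all layer sizes are assumptions made for this sketch.

```python
# Hypothetical illustration of a Siamese left/right comparison;
# NOT the architecture from the awarded paper.
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    """Small convolutional encoder whose weights are shared
    between both lung halves (the defining trait of a Siamese network)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def asymmetry_heatmap(encoder, chest_image):
    """Split the image into left/right halves, mirror the right half so the
    anatomy aligns, encode both halves with the SAME weights, and return a
    per-pixel feature distance. Large values flag asymmetric regions."""
    _, _, _, w = chest_image.shape
    left = chest_image[:, :, :, : w // 2]
    right = chest_image[:, :, :, w // 2 :].flip(-1)  # horizontal mirror
    f_left = encoder(left)
    f_right = encoder(right)
    return (f_left - f_right).pow(2).sum(dim=1, keepdim=True).sqrt()

encoder = SiameseEncoder()
image = torch.rand(1, 1, 64, 64)  # dummy grayscale stand-in for an X-ray
heatmap = asymmetry_heatmap(encoder, image)
print(heatmap.shape)  # one asymmetry score per half-image pixel
```

Because the same encoder weights see both halves, any difference in the output features reflects a genuine left-right difference in the image rather than a difference between two independently trained models, which is what makes the resulting map interpretable.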
The analysis method for X-ray images developed by the researchers at the Lamarr Institute is an example of explainable AI: the analyses are based on an interpretable ML model that provides insights into how its results were obtained. Such procedures can strengthen users’ trust in the technology and will be continued and deepened as part of the research on trustworthy Artificial Intelligence at the Lamarr Institute.