OminiAbnorm-CT, a model for automated interpretation of CT images, localizes and describes abnormalities across body regions via text queries and visual prompts, outperforming existing methods.
Automated interpretation of CT images, particularly localizing and describing
abnormal findings across multi-plane and whole-body scans, remains a significant
challenge in clinical radiology. This work addresses this challenge
through four key contributions: (i) On taxonomy, we collaborate with senior
radiologists to propose a comprehensive hierarchical classification system
covering 404 representative abnormal findings across all body regions; (ii) On
data, we contribute a dataset containing over 14.5K CT images from multiple
planes and all human body regions, and meticulously provide grounding
annotations for over 19K abnormalities, each linked to a detailed description
and categorized under the proposed taxonomy; (iii) On model development, we propose
OminiAbnorm-CT, which can automatically ground and describe abnormal findings
on multi-plane and whole-body CT images based on text queries, while also
allowing flexible interaction through visual prompts; (iv) On benchmarks, we
establish three representative evaluation tasks based on real clinical
scenarios. Through extensive experiments, we show that OminiAbnorm-CT
significantly outperforms existing methods on all tasks and metrics.