Assistant Professor Ankita Shukla and Foundation Professor George Bebis are heading up a $1 million research project that aims to make AI smarter, safer and more reliable, particularly around medical information.
Shukla and Bebis, both in the College of Engineering’s Department of Computer Science & Engineering, are partners in the Institute for Foundations of Machine Learning (IFML) based at the University of Texas at Austin. Established in 2020 with a grant from the National Science Foundation, the IFML received a renewal grant last summer, $1 million of which was sub-awarded to the University of Nevada, Reno.
Shukla, who is the principal investigator for the University’s sub-awarded project, says the project team will be developing medical AI systems that work reliably across three areas: breast cancer, medical text and sleep disorders.
“A common challenge runs through all three (areas),” Shukla said.
AI models that work well in lab or research settings can break down in real-world situations, where data tends to be more biased and variable.
“The University’s sub-award addresses this gap through three interconnected research problems, each targeting a different modality but all united by the goal of accelerating the safe and equitable clinical translation of AI,” Shukla said.
Below, Shukla and Bebis discuss the three modalities in their own words, as well as the project team’s interest in collaborating with local clinicians and teachers.
Breast cancer and the ‘domain shift’ problem (Bebis, Shukla)
“Breast cancer remains the second-leading cause of cancer-related deaths among women in the United States, with approximately one in eight women expected to be diagnosed during their lifetime, according to the American Cancer Society. Early detection and accurate risk assessment remain the most powerful strategies for reducing deaths from breast cancer.
“Although mammography, the current standard of care for early detection, has significantly improved outcomes, it still faces important limitations, including false alarms and false negatives that can lead to unnecessary biopsies and missed diagnoses. AI, particularly Deep Learning (DL), offers the potential to help radiologists detect subtle patterns in breast images and to identify women who may be at higher risk, enabling more personalized screening and earlier intervention. DL methods have shown impressive ability to analyze medical images, sometimes matching expert radiologists. However, translating Medical AI from research to the clinic requires systems that perform reliably across diverse patient populations, imaging devices and acquisition protocols.
“(Our research) works to address these challenges by studying how hidden biases in mammography datasets affect AI models for breast cancer early detection and risk assessment. A key technical challenge is ‘domain shift,’ which is caused by differences in imaging devices from various vendors, acquisition protocols, breast positioning and compression, and patient-related factors such as age, anatomy and breast density.
“Modern mammography systems first produce ‘raw’ images that are later converted into ‘for-presentation’ images optimized for radiologists. Since raw data are typically discarded, AI models trained only on processed images may learn vendor-specific processing signatures rather than true breast tissue characteristics.
“The team will also be investigating additional sources of bias such as synthetic data generation, where generative AI methods may create visually realistic but physically implausible images, and in Digital Breast Tomosynthesis (DBT or 3D mammography), where 2D slices are reconstructed algorithmically rather than directly acquired, potentially introducing reconstruction-dependent distortions.
“To mitigate these biases, the team will systematically evaluate state-of-the-art DL models across multiple datasets and imaging systems. Moreover, it will develop advanced DL domain adaptation approaches, methods to infer raw-image information from processed data, and physics-informed strategies to ensure that synthetic or adapted images remain clinically meaningful and physically consistent. By identifying and mitigating diverse sources of bias in datasets, the team aims to improve the accuracy and robustness of AI methods for early detection and risk assessment, narrowing the gap for clinical adoption.”
Medical text (Shukla, Bebis)
“Clinicians and researchers generate enormous volumes of written records, case reports and biomedical literature every day, but teaching an AI to understand that text typically requires thousands of carefully labeled examples, which are expensive and slow to produce. Worse, medicine moves fast: new treatments, emerging side effects and evolving clinical guidelines mean that yesterday's training data can quickly become outdated.
“UNR researchers are developing explainable AI methods that can accurately interpret and annotate medical documents even without large, labeled datasets, giving clinicians flexible, trustworthy tools that keep pace with the speed of medicine.”
Sleep disorders (Shukla)
“Sleep disorders affect millions of Americans, yet diagnosing them requires analyzing hours of complex biological signals such as brain activity, eye movements, muscle tension, heart rhythms and breathing patterns recorded simultaneously overnight. The problem is that these signals look different depending on the patient, the clinic and the equipment used, making it hard to build AI tools that work consistently across settings.
“UNR researchers are developing interpretable AI methods for sleep analysis that are robust across diverse patients and recording environments, bringing the promise of personalized, reliable sleep medicine closer to reality.”
Medical clinicians and teachers can collaborate
Shukla said her team plans to validate its methods by recruiting and collaborating with breast radiologists and physicians across the nation, particularly local clinicians. Their involvement could help ensure the research directly benefits local patients and communities, according to Bebis.
Additionally, the team plans to work with local middle and high school teachers to integrate AI concepts into their curricula.
“Across all these projects,” Shukla said, “University researchers are addressing the same fundamental challenge that lies at the heart of IFML's mission: building AI systems that do not merely perform well under controlled research conditions, but that are accurate, fair, and trustworthy when deployed in the real world, where patient populations are diverse, data is messy and the stakes are high.”