Research

Multimodal AI

We develop multimodal fusion algorithms that combine information from multiple imaging modalities, patient histories, and multi-omics data. By integrating these diverse data sources, we aim to enable more precise diagnostic, prognostic, and therapeutic decisions.
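As a minimal illustration of late fusion (one of several possible fusion strategies, not necessarily our exact pipeline), each data source is embedded separately and the embeddings are concatenated before a shared prediction head. All dimensions and encoders below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy per-modality 'encoder': linear projection + ReLU."""
    return np.maximum(x @ w, 0.0)

# Hypothetical feature matrices for four patients across three sources.
imaging = rng.normal(size=(4, 32))   # e.g. features from a scan
history = rng.normal(size=(4, 8))    # e.g. encoded patient history
omics   = rng.normal(size=(4, 16))   # e.g. multi-omics profile

w_img, w_hist, w_om = (rng.normal(size=(d, 10)) for d in (32, 8, 16))

# Late fusion: concatenate modality embeddings, then one shared head.
fused = np.concatenate(
    [encode(imaging, w_img), encode(history, w_hist), encode(omics, w_om)],
    axis=1,
)
w_head = rng.normal(size=(30, 2))
logits = fused @ w_head  # shape (4, 2): one score pair per patient
```

In practice the encoders would be learned networks and the fusion point (early, intermediate, or late) is itself a design choice.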

Papers

Diffusion Models and GANs

We investigate the potential of generative models, such as diffusion models and Generative Adversarial Networks (GANs), in the medical domain. We use these techniques to synthesize realistic medical images, for example to enable data sharing or to augment scarce training data. By advancing generative modeling, we aim to strengthen the capabilities of AI systems in healthcare.
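The core idea behind diffusion models can be sketched from their forward (noising) process, which in closed form gives x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε; the generative model is then trained to reverse this corruption. The schedule values below are illustrative, not tied to any specific paper:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule (illustrative)
alpha_bar = np.cumprod(1.0 - betas)   # cumulative signal retention ᾱ_t

def add_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0) at timestep t, in closed form."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.normal(size=(8, 8))          # stand-in for a small image
x_early = add_noise(x0, 10)           # mostly signal
x_late = add_noise(x0, 900)           # mostly Gaussian noise
```

As t grows, ᾱ_t shrinks toward zero, so x_t drifts toward pure noise; sampling a synthetic image amounts to learning and running this process in reverse.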

Papers

Large Language Models

We explore the capabilities of large language models in processing and analyzing medical data. By fine-tuning these models on domain-specific medical texts and electronic health records, we aim to extract valuable insights, generate clinical summaries, and support decision-making processes. Our research investigates the potential of language models in tasks such as medical question answering, clinical note summarization, and patient risk stratification.
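One common recipe when fine-tuning a causal language model on clinical question-answer pairs (a general convention, not necessarily our exact setup) is to compute the loss only on answer tokens, masking prompt positions in the labels with the ignore index −100 used by PyTorch's cross-entropy loss and the Hugging Face trainers. The token IDs below are made up for illustration:

```python
# Ignore index conventionally skipped by cross-entropy loss.
IGNORE_INDEX = -100

def build_example(prompt_ids, answer_ids):
    """Concatenate prompt and answer; mask prompt positions in labels."""
    input_ids = prompt_ids + answer_ids
    labels = [IGNORE_INDEX] * len(prompt_ids) + answer_ids
    return input_ids, labels

prompt = [101, 2054, 2003]   # e.g. a tokenized clinical question
answer = [170, 171, 102]     # e.g. tokenized answer + end-of-sequence
input_ids, labels = build_example(prompt, answer)
# input_ids: [101, 2054, 2003, 170, 171, 102]
# labels:    [-100, -100, -100, 170, 171, 102]
```

This keeps the model from being rewarded for merely reproducing the question, so gradient signal concentrates on generating the clinical answer.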

Papers