This is Parsa Hariri.
Hello! My name is Parsa, and I work as a computational biologist and bioinformatician at the University of Göttingen. I specialize in modeling biological networks, such as those found in the brain and embryos. If you’re interested in a scientific discussion, want to know more about my educational journey, or have any other questions, please don’t hesitate to reach out. I also organize talks about bioinformatics; you can learn more about them here.
Latest Projects
Master's Thesis
Image segmentation is a fundamental task in computer vision, with applications in medical imaging, autonomous driving, and object recognition. Recent advancements in machine learning have led to the development of powerful models like U-Net, diffusion models, and transformers, which show promise in segmenting images with high precision.
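Below is a minimal, illustrative sketch of the U-Net idea mentioned above: an encoder–decoder with a skip connection that produces per-pixel logits. The layer sizes, channel counts, and single skip connection are assumptions chosen for brevity, not the architecture used in the thesis.

```python
# Toy U-Net-style segmentation network (illustrative sketch, not the thesis model).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)   # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                                     # full-resolution features
        e2 = self.enc2(self.pool(e1))                         # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))   # skip connection
        return self.head(d1)                                  # per-pixel logits

# Usage: a batch of 1-channel 64x64 images -> per-pixel logits of the same size.
logits = TinyUNet()(torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 1, 64, 64])
```

The skip connection is what lets the decoder recover fine spatial detail lost during downsampling, which is the core reason U-Net-style models work well for precise segmentation.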
GANs for Biomedical Image Augmentation
Generative Adversarial Networks (GANs) are a class of neural networks consisting of two models: a generator and a discriminator. The generator creates fake data, while the discriminator evaluates it against real data. Both networks train together in a competitive process, where the generator improves at creating realistic outputs, and the discriminator becomes better at distinguishing between real and fake data. Over time, the generator learns to produce high-quality, realistic images that are hard for the discriminator to distinguish from real data.
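To make the generator–discriminator game concrete, here is a minimal sketch of one GAN training step in PyTorch. The network sizes, latent dimension, and flattened 28x28 image shape are illustrative assumptions, not the augmentation setup used in this project.

```python
# One illustrative GAN training step (sketch, with made-up sizes).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, img_dim)     # stand-in for a batch of real images
z = torch.randn(32, latent_dim)    # random noise fed to the generator

# Discriminator step: push scores toward 1 for real images and 0 for fakes.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to fool the discriminator into scoring fakes as real.
loss_g = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

Alternating these two steps is the competitive process described above: the discriminator's feedback is the only training signal the generator receives, so as one improves, the other is forced to improve as well.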
Skin Cancer Detection with ViTs
Skin cancer is one of the most common cancers worldwide, with millions of new cases diagnosed each year. Early detection is critical for improving patient outcomes, as it increases the chances of successful treatment. In recent years, advances in artificial intelligence (AI) have opened new possibilities for detecting skin cancer at an early stage with automated methods. In particular, Vision Transformers (ViTs) have emerged as a powerful tool in medical imaging, delivering state-of-the-art performance in tasks such as image classification and segmentation.

The goal of this project is to explore the use of ViTs for classifying skin cancer images. Using data from the ISIC 2024 Challenge, which provides high-resolution 3D Total Body Photography (3D-TBP) images, the project investigates how well ViTs identify high-risk cancerous lesions and compares multiple ViT variants to determine which architecture performs best for skin cancer detection. The high resolution of these images is itself a challenge, requiring efficient architectures that can handle large amounts of data while maintaining precise predictions.

In this comparison, the LeViT model achieved the highest accuracy (94%), while CaiT and Simple ViT also performed well. Token-to-Token ViT ran into memory issues, likely due to its high-resolution patching strategy.
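As a rough illustration of the classification setup, the sketch below fine-tunes an off-the-shelf ViT for a two-class lesion task. It uses torchvision's vit_b_16 purely as a stand-in; the LeViT, CaiT, Simple ViT, and Token-to-Token ViT variants compared in the project come from separate implementations and are not shown. The 224x224 crop size, two-class head, and dummy batch are assumptions for illustration.

```python
# Fine-tuning sketch for binary lesion classification (assumed stand-in model).
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
# Replace the ImageNet head with a 2-class head (e.g. benign vs. malignant).
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of 224x224 crops; in practice the batch
# would come from a DataLoader over the ISIC 2024 images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Swapping only the classification head while reusing pretrained transformer weights is a standard transfer-learning choice when the labeled medical dataset is small relative to the model size.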
Latest Posts
Enzosol: AI-Powered Protein Design
Why Use Git for Science
Exploring Spatial Multi-Omics Integration: An Interactive Infographic
This infographic translates complex scientific concepts into an easily digestible visual narrative, covering everything from data acquisition challenges to the latest deep learning integration methods and their applications in disease understanding.