Deep Transfer Learning for Mammographic Classification: Comparative Performance Analysis of Vision Transformer vs. CNN Architectures
DOI:
https://doi.org/10.53469/jrse.2025.07(05).15

Keywords:
Breast Cancer Classification, Deep Learning Models, Histopathological Images, DenseNet121, AUC-ROC, Class Imbalance

Abstract
Accurately diagnosing breast cancer from histopathological images is crucial for making the right treatment decisions. In this study, three pre-trained deep learning models, MobileNetV2, ResNet50, and DenseNet121, were evaluated on classifying breast tumor images from the BreakHis dataset as benign or malignant. Performance was assessed with detailed metrics including accuracy, AUC-ROC, and Cohen's Kappa. DenseNet121 stood out, achieving a test accuracy of 99.93%, a perfect AUC-ROC of 1.0, and a Cohen's Kappa of 0.9984, demonstrating a strong ability to differentiate benign from malignant cases. MobileNetV2, known for its efficiency, balanced accuracy against resource usage, making it a solid choice for resource-limited environments. DenseNet121 was statistically confirmed to perform significantly better than ResNet50, indicating its potential usefulness in clinical settings where high precision is essential. However, this study did not address the class imbalance in the dataset, which could affect the results. Future research will address this imbalance to enhance model performance further and contribute to developing effective, resource-efficient deep learning models for medical image analysis.
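The three metrics named in the abstract (accuracy, AUC-ROC, and Cohen's Kappa) can all be computed with scikit-learn. A minimal sketch on illustrative labels follows; the values below are synthetic toy data, not predictions from the study's BreakHis experiments:

```python
from sklearn.metrics import accuracy_score, roc_auc_score, cohen_kappa_score

# Toy ground-truth labels (0 = benign, 1 = malignant) and model outputs.
# These numbers are illustrative only, not taken from the paper.
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_prob = [0.1, 0.6, 0.9, 0.8, 0.7, 0.2, 0.6, 0.4]  # predicted malignancy probabilities
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]    # hard labels at a 0.5 threshold

acc = accuracy_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_prob)        # AUC-ROC is computed from probabilities
kappa = cohen_kappa_score(y_true, y_pred)  # agreement corrected for chance

print(f"accuracy={acc:.4f}  AUC-ROC={auc:.4f}  kappa={kappa:.4f}")
# → accuracy=0.8750  AUC-ROC=0.9688  kappa=0.7500
```

Note that AUC-ROC takes the predicted probabilities, while accuracy and Cohen's Kappa take thresholded hard labels; Kappa is lower than raw accuracy because it discounts agreement expected by chance.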
License
Copyright (c) 2025 Jayesh Jhurani

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.