Deep Transfer Learning for Mammographic Classification: Comparative Performance Analysis of Vision Transformer vs. CNN Architectures

Authors

  • Jayesh Jhurani

DOI:

https://doi.org/10.53469/jrse.2025.07(05).15

Keywords:

Breast Cancer Classification, Deep Learning Models, Histopathological Images, DenseNet121, AUC-ROC, Class Imbalance

Abstract

Accurately diagnosing breast cancer from histopathological images is crucial for making the right treatment decisions. In this study, the performance of three pre-trained deep learning models (MobileNetV2, ResNet50, and DenseNet121) was evaluated in classifying breast tumor images from the BreakHis dataset as benign or malignant. We calculated detailed metrics such as accuracy, AUC-ROC, and Cohen's Kappa for assessment. DenseNet121 stood out, achieving a test accuracy of 99.93%, a perfect AUC-ROC of 1.0, and a Cohen's Kappa score of 0.9984, demonstrating its strong ability to differentiate between benign and malignant cases. MobileNetV2, known for its efficiency, balanced accuracy with resource usage, making it a solid choice for resource-limited environments. DenseNet121 was statistically confirmed to perform significantly better than ResNet50, indicating its potential usefulness in clinical settings where high precision is essential. However, this study did not address the class imbalance in the dataset, which could affect the results. Future research will address this imbalance to enhance model performance further and contribute to developing effective, resource-efficient deep learning models for medical image analysis.
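
For readers who want to see what this kind of evaluation looks like in code, the sketch below is a minimal, illustrative setup and not the authors' implementation: it assumes an ImageNet-pretrained DenseNet121 backbone via TensorFlow/Keras for binary benign/malignant classification, and computes the metrics named in the abstract (accuracy, AUC-ROC, Cohen's Kappa) with scikit-learn. The architecture details, hyperparameters, and the synthetic labels in the usage example are assumptions for illustration only.

# Illustrative sketch (not the paper's code): transfer learning with DenseNet121
# and computation of accuracy, AUC-ROC, and Cohen's Kappa on held-out predictions.
import numpy as np
import tensorflow as tf
from sklearn.metrics import accuracy_score, roc_auc_score, cohen_kappa_score

def build_densenet121_classifier(input_shape=(224, 224, 3)):
    # Load DenseNet121 pretrained on ImageNet, without its original classification head.
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg"
    )
    base.trainable = False  # transfer learning: freeze the convolutional backbone
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(inputs=base.input, outputs=outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def evaluate_binary_classifier(y_true, y_prob, threshold=0.5):
    # y_true: ground-truth labels (0 = benign, 1 = malignant); y_prob: predicted probabilities.
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "auc_roc": roc_auc_score(y_true, y_prob),
        "cohen_kappa": cohen_kappa_score(y_true, y_pred),
    }

if __name__ == "__main__":
    # Synthetic predictions, only to demonstrate the metric calls.
    y_true = np.array([0, 0, 1, 1, 1, 0])
    y_prob = np.array([0.1, 0.2, 0.9, 0.8, 0.7, 0.4])
    print(evaluate_binary_classifier(y_true, y_prob))

The same three metric calls apply regardless of which backbone (MobileNetV2, ResNet50, or DenseNet121) produced the probabilities, which is what makes them suitable for the comparative evaluation described above.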

Published

2025-05-29

How to Cite

Jhurani, J. (2025). Deep Transfer Learning for Mammographic Classification: Comparative Performance Analysis of Vision Transformer vs. CNN Architectures. Journal of Research in Science and Engineering, 7(5), 76–79. https://doi.org/10.53469/jrse.2025.07(05).15

Issue

Section

Articles