| Diabetic Retinopathy Detection Using Convolutional Neural Network: A Comparative Study on Different Architectures | |
|---|---|
| DOI | |
| Creator | Tassanee Hattiya |
| Title | Diabetic Retinopathy Detection Using Convolutional Neural Network: A Comparative Study on Different Architectures |
| Contributor | Kwankamon Dittakan, Salang Musikasuwan |
| Publisher | Faculty of Engineering, Mahasarakham University |
| Publication Year | 2564 B.E. (2021) |
| Journal Title | Mahasarakham International Journal of Engineering Technology |
| Journal Vol. | 7 |
| Journal No. | 1 |
| Page no. | 50-60 |
| Keyword | Diabetic Retinopathy detection, Image Analysis, Image Classification, Neural network, Convolutional neural network |
| URL Website | https://ph02.tci-thaijo.org/index.php/mijet/index |
| Website title | THAIJO MIJET Mahasarakham International Journal of Engineering Technology |
| ISSN | 2408-1577 |
| Abstract | Diabetic retinopathy (DR) is a diabetes complication that affects the eyes. Patients who go untreated may suffer visual impairment such as blurred vision, bleeding, or blindness. A key problem for diabetic patients is that DR is difficult to detect until symptoms have already appeared. Early diagnosis is typically made using retina imagery obtained from a fundus camera. In this paper, an automated mechanism for DR screening is proposed. The idea is to construct a classifier that can distinguish between DR and non-DR retina images. The fundamental idea is to segment retina images to obtain the region of interest (ROI) while remaining compatible with the classification process. The ROI is then transformed into an appropriate format. It is suggested that the convolutional neural network (CNN) is the most suitable classifier learning mechanism to consider. With respect to the work presented in this paper, seven convolutional neural network architectures have been applied to compare classification performance: (i) AlexNet, (ii) ResNet50, (iii) DenseNet201, (iv) InceptionV3, (v) MobileNet, (vi) MnasNet, and (vii) NASNetMobile. The process is fully described and evaluated. The data used for the evaluation was obtained from Kaggle and comprises 23,513 images. The best results were obtained with the AlexNet learning mechanism, with accuracy values of 98.42% and 81.32% on the training and testing sets, respectively. |
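The paper itself does not include code. As a minimal sketch of the ROI-segmentation step the abstract describes (cropping a fundus photograph to the retina region before classification), the following NumPy example thresholds away the dark background and crops to the bounding box of the remaining pixels. The function name and threshold value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def extract_roi(image, threshold=10):
    """Crop a fundus image to the bounding box of the retina region.

    The dark background of fundus photographs is removed by intensity
    thresholding; this mirrors the ROI-segmentation step described in
    the abstract (the threshold value is an assumption).
    """
    # Build a foreground mask; handle both grayscale and RGB arrays.
    mask = image.max(axis=-1) > threshold if image.ndim == 3 else image > threshold
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0 or cols.size == 0:
        return image  # nothing above threshold; return unchanged
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Synthetic example: a 100x100 grayscale "fundus" with a bright disc.
img = np.zeros((100, 100), dtype=np.uint8)
yy, xx = np.ogrid[:100, :100]
img[(yy - 50) ** 2 + (xx - 50) ** 2 <= 30 ** 2] = 200

roi = extract_roi(img)
print(roi.shape)  # bounding box of the bright disc
```

After cropping, each ROI would be resized to the fixed input resolution that the chosen CNN architecture expects (e.g. 224x224 for most of the networks compared in the paper) before training.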