Research Article · Open Access

Industrial Fish Classifier with Deep Artificial Neural Network

Mahamat Ahmat Issamadine, Yasemin Erkan, Ersin Alaybeyoğlu
Bartin University
Published: December 31, 2024
DOI: 10.56038/oprd.v5i1.504
Vol. 5, No. 1 · pp. 99–109

Abstract

Today, machine learning-based decision support systems play a facilitating role in almost every aspect of life, and their integration into industrial production systems yields fast and effective production solutions. Most current machine learning-based artificial intelligence technologies offer solutions built on the image processing approach. In this study, a new artificial intelligence model that can effectively classify different fish species is proposed using YOLO, a deep artificial neural network algorithm based on the image processing approach. Trained on an original fish data set, the model provides a real-time land support solution that can be easily integrated into industrial applications.
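The paper does not publish code, so as a rough illustration of the classification step only: a YOLO-style classifier emits one raw score (logit) per class, and a species label is assigned by taking the softmax and keeping the top class if its probability clears a confidence threshold. The species list and threshold below are hypothetical, not taken from the paper.

```python
import math

# Hypothetical species list; the paper's actual class set is not published here.
SPECIES = ["red mullet", "bluefish", "haddock", "anchovy"]

def softmax(logits):
    """Convert raw classifier scores into probabilities that sum to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, threshold=0.5):
    """Return (species, confidence) for the top class,
    or (None, confidence) when the top probability is below the threshold."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    label = SPECIES[idx] if probs[idx] >= threshold else None
    return label, probs[idx]

# Example: raw scores as a classification head might emit them.
label, confidence = classify([2.1, 0.3, -1.0, 0.5])
```

In a real deployment the logits would come from the trained network's forward pass on a camera frame; the post-processing above is the same regardless of backbone.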

Keywords
Image Classification, Smart Technology, YOLO, Industrial Applications

Cite This Article
Issamadine, M. A., Erkan, Y., & Alaybeyoğlu, E. (2024). Industrial Fish Classifier with Deep Artificial Neural Network. Orclever Proceedings of Research and Development, 5(1), 99–109. https://doi.org/10.56038/oprd.v5i1.504

Bibliographic Info

Journal: Orclever Proceedings of Research and Development
Volume: 5
Issue: 1
Pages: 99–109
Published: December 31, 2024
eISSN: 2980-020X