Peer-Reviewed

EffCNet: An Efficient CondenseNet for Image Classification on NXP BlueBox

Received: 17 August 2021    Accepted: 8 September 2021    Published: 12 October 2021
Abstract

Intelligent edge devices with built-in processors vary widely in capability and physical form, yet are increasingly expected to perform advanced Computer Vision (CV) tasks such as image classification and object detection. With constant advances in autonomous cars, UAVs, embedded systems, and mobile devices, there is an ever-growing demand for highly efficient Artificial Neural Networks (ANNs) capable of real-time inference on smart edge devices with constrained computational resources. Because network connections in remote regions are unreliable and data transmission adds complexity, it is of utmost importance to capture and process data locally rather than sending it to cloud servers for remote processing. Edge devices, however, offer limited processing power due to their inexpensive hardware and limited cooling and computational resources. In this paper, we propose EffCNet, a novel deep convolutional neural network architecture that improves upon the CondenseNet Convolutional Neural Network (CNN) for edge devices by employing self-querying data augmentation and depthwise separable convolution strategies to improve real-time inference performance while reducing the final trained model size, the number of trainable parameters, and the Floating-Point Operations (FLOPs) of the network. Furthermore, extensive supervised image classification experiments are conducted on two benchmark datasets, CIFAR-10 and CIFAR-100, to verify the real-time inference performance of the proposed CNN. Finally, we deploy the trained weights on the NXP BlueBox, an intelligent edge development platform designed for self-driving vehicles and UAVs, and draw conclusions accordingly.
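The parameter savings that the abstract attributes to depthwise separable convolutions can be sketched with a simple weight count: a standard k × k convolution needs C_in · C_out · k² weights, while a depthwise k × k convolution followed by a 1 × 1 pointwise convolution needs only C_in · k² + C_in · C_out. The sketch below is purely illustrative; the channel and kernel sizes are hypothetical examples, not EffCNet's actual configuration.

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a depthwise k x k convolution (one filter per
    input channel) followed by a 1 x 1 pointwise convolution."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: 32 -> 64 channels with a 3 x 3 kernel.
standard = standard_conv_params(32, 64, 3)    # 18432 weights
separable = separable_conv_params(32, 64, 3)  # 2336 weights
print(f"{standard} vs {separable} weights "
      f"({standard / separable:.1f}x fewer)")
```

The same factoring applies to FLOPs, since each weight is applied once per output position, which is why replacing standard convolutions with depthwise separable ones shrinks both model size and compute roughly in proportion.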

Published in American Journal of Electrical and Computer Engineering (Volume 5, Issue 2)
DOI 10.11648/j.ajece.20210502.15
Page(s) 77-87
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

EffCNet, Convolutional Neural Network (CNN), Computer Vision, Image Classification, Embedded Systems

References
[1] Lotfi, A., Bouchachia, H., Gegov, A., Langensiepen, C., & McGinnity, M. (Eds.). (2019). Advances in Computational Intelligence Systems: Contributions Presented at the 18th UK Workshop on Computational Intelligence, September 5-7, 2018, Nottingham, UK. Springer International Publishing. https://doi.org/10.1007/978-3-319-97982-3.
[2] Cureton, C., & Douglas, M. (2019). Bluebox Deep Dive – NXP’s AD Processing Platform. https://community.nxp.com/pwmxy87654/attachments/pwmxy87654/connects/258/1/AMF-AUT-T3652.pdf.
[3] Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., & Le, Q. V. (2019). AutoAugment: Learning Augmentation Policies from Data. arXiv:1805.09501 [cs, stat]. http://arxiv.org/abs/1805.09501.
[4] Zeiler, M. D., & Fergus, R. (2014). Visualizing and Understanding Convolutional Networks. In D. Fleet, T. Pajdla, B. Schiele, & T. Tuytelaars (Eds.), Computer Vision – ECCV 2014 (pp. 818–833). Springer International Publishing. https://doi.org/10.1007/978-3-319-10590-1_53.
[5] Simonyan, K., & Zisserman, A. (2015). Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR.
[6] Deng, L. (2014). A tutorial survey of architectures, algorithms, and applications for deep learning. APSIPA Transactions on Signal and Information Processing, 3. https://doi.org/10.1017/atsip.2013.9.
[7] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25. https://papers.nips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html.
[8] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., … Chintala, S. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. arXiv:1912.01703 [cs, stat]. http://arxiv.org/abs/1912.01703.
[9] Fridman, L., Ding, L., Jenik, B., & Reimer, B. (2019). Arguing Machines: Human Supervision of Black Box AI Systems That Make Life-Critical Decisions. CVPR Workshops 2019. https://openaccess.thecvf.com/content_CVPRW_2019/html/WAD/Fridman_Arguing_Machines_Human_Supervision_of_Black_Box_AI_Systems_That_CVPRW_2019_paper.html.
[10] Bingham, E., Chen, J. P., Jankowiak, M., Obermeyer, F., Pradhan, N., Karaletsos, T., Singh, R., Szerlip, P., Horsfall, P., & Goodman, N. D. (2019). Pyro: Deep Universal Probabilistic Programming. The Journal of Machine Learning Research, 20.1 (2019), 973–978.
[11] Huang, G., Liu, S., Maaten, L. van der, & Weinberger, K. Q. (2018). CondenseNet: An Efficient DenseNet Using Learned Group Convolutions. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2752–2761. https://doi.org/10.1109/CVPR.2018.00291.
[12] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778. https://doi.org/10.1109/CVPR.2016.90.
[13] Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely Connected Convolutional Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2261–2269. https://doi.org/10.1109/CVPR.2017.243.
[14] Guo, Y., Li, Y., Feris, R., Wang, L., & Rosing, T. (2019). Depthwise Convolution is All You Need for Learning Multiple Visual Domains. arXiv:1902.00927 [cs]. http://arxiv.org/abs/1902.00927.
[15] Xu, B., Wang, N., Chen, T., & Li, M. (2015). Empirical Evaluation of Rectified Activations in Convolutional Network. arXiv:1505.00853 [cs, stat]. http://arxiv.org/abs/1505.00853.
[16] Agarap, A. F. (2019). Deep Learning using Rectified Linear Units (ReLU). arXiv:1803.08375 [cs, stat]. http://arxiv.org/abs/1803.08375.
[17] Lu, L., Shin, Y., Su, Y., & Karniadakis, G. E. (2020). Dying ReLU and Initialization: Theory and Numerical Examples. Communications in Computational Physics, 28 (5), 1671–1706. https://doi.org/10.4208/cicp.OA-2020-0165.
[18] Simard, P. Y., Steinkraus, D., & Platt, J. C. (2003). Best practices for convolutional neural networks applied to visual document analysis. Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings., 958–963. https://doi.org/10.1109/ICDAR.2003.1227801.
[19] El-Sharkawy, M. (2019, July). Speed is Key to Safety. DSPACE Magazine, 2/2019, 34–39.
[20] Stewart, C. A., Welch, V., Plale, B., Fox, G., Pierce, M., & Sterling, T. (2017). Indiana University Pervasive Technology Institute. https://doi.org/10.5967/K8G44NGB.
[21] Krizhevsky, A., & Hinton, G. (2010). Convolutional deep belief networks on CIFAR-10. Unpublished manuscript, 40 (7), 1–9.
Cite This Article
  • APA Style

    Priyank Kalgaonkar, Mohamed El-Sharkawy. (2021). EffCNet: An Efficient CondenseNet for Image Classification on NXP BlueBox. American Journal of Electrical and Computer Engineering, 5(2), 77-87. https://doi.org/10.11648/j.ajece.20210502.15


    ACS Style

    Priyank Kalgaonkar; Mohamed El-Sharkawy. EffCNet: An Efficient CondenseNet for Image Classification on NXP BlueBox. Am. J. Electr. Comput. Eng. 2021, 5(2), 77-87. doi: 10.11648/j.ajece.20210502.15


    AMA Style

    Priyank Kalgaonkar, Mohamed El-Sharkawy. EffCNet: An Efficient CondenseNet for Image Classification on NXP BlueBox. Am J Electr Comput Eng. 2021;5(2):77-87. doi: 10.11648/j.ajece.20210502.15


  • @article{10.11648/j.ajece.20210502.15,
      author = {Priyank Kalgaonkar and Mohamed El-Sharkawy},
      title = {EffCNet: An Efficient CondenseNet for Image Classification on NXP BlueBox},
      journal = {American Journal of Electrical and Computer Engineering},
      volume = {5},
      number = {2},
      pages = {77-87},
      doi = {10.11648/j.ajece.20210502.15},
      url = {https://doi.org/10.11648/j.ajece.20210502.15},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajece.20210502.15},
     year = {2021}
    }
    


  • TY  - JOUR
    T1  - EffCNet: An Efficient CondenseNet for Image Classification on NXP BlueBox
    AU  - Priyank Kalgaonkar
    AU  - Mohamed El-Sharkawy
    Y1  - 2021/10/12
    PY  - 2021
    N1  - https://doi.org/10.11648/j.ajece.20210502.15
    DO  - 10.11648/j.ajece.20210502.15
    T2  - American Journal of Electrical and Computer Engineering
    JF  - American Journal of Electrical and Computer Engineering
    JO  - American Journal of Electrical and Computer Engineering
    SP  - 77
    EP  - 87
    PB  - Science Publishing Group
    SN  - 2640-0502
    UR  - https://doi.org/10.11648/j.ajece.20210502.15
    VL  - 5
    IS  - 2
    ER  - 


Author Information
  • Priyank Kalgaonkar, Department of Electrical and Computer Engineering, Purdue School of Engineering and Technology, Indianapolis, USA

  • Mohamed El-Sharkawy, Department of Electrical and Computer Engineering, Purdue School of Engineering and Technology, Indianapolis, USA
