Research Article Open Access

Quantization and Pipelined Hardware Implementation of Deep Neural Network Models

El Hadrami Cheikh Tourad1 and Mohsine Eleuldj1
  • 1 Department of Computer Science, École Mohammedia d’Ingénieurs (EMI), Mohammed V University in Rabat, Morocco


In recent years, Deep Neural Networks (DNNs) have garnered much interest due to advances in computational power and data availability. Indeed, DNNs offer a considerable advantage in several challenges, such as classification problems and video analysis. However, such accomplishments lead to significantly increased energy demands, computational expense, and memory requirements. In addition, current efficient DNNs may have increasingly complex and extensive structures. As a result, implementing these huge models on embedded systems with limited resources is challenging. Several works have attempted to solve these implementation issues while maintaining optimal accuracy. Among these ideas is compressing the model size using quantization and deploying the model on Field-Programmable Gate Arrays (FPGAs) to improve latency and minimize energy cost. This article presents a model optimizer that uses quantization methods to enable the model's hardware implementation. This optimizer compresses the model size and is integrated with a design flow that implements the model in hardware. Furthermore, this article presents "DNN2FPGA," a design flow that can automatically implement deep learning models on FPGAs by producing pipelined HDL code. The approach shows excellent performance, decreasing the model's size and latency by 4x while maintaining its accuracy. The article also presents a full review of the state of the art.
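The 4x size reduction reported above is consistent with post-training quantization of 32-bit floating-point weights to 8-bit integers. As a minimal sketch (not the authors' actual optimizer; the function names and symmetric per-tensor scheme are assumptions for illustration), such a compression step can look like this:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float32 weights to int8.

    Storing 1-byte int8 values instead of 4-byte float32 values cuts
    weight memory by 4x, matching the compression ratio in the abstract.
    """
    scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights, e.g. to check the accuracy loss."""
    return q.astype(np.float32) * scale

# Example: quantize a small random weight tensor
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes / q.nbytes)  # 4x smaller storage
```

On FPGA targets, the int8 representation also maps naturally to narrow fixed-point multipliers, which is one reason quantization helps latency and energy as well as size.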

Journal of Computer Science
Volume 18 No. 11, 2022, 1021-1029


Submitted On: 24 June 2022
Published On: 26 October 2022

How to Cite: Cheikh Tourad, E. H. & Eleuldj, M. (2022). Quantization and Pipelined Hardware Implementation of Deep Neural Network Models. Journal of Computer Science, 18(11), 1021-1029.




Keywords:
  • DNN
  • Design Flow
  • Quantization
  • FPGA
  • Pipeline