Abstract

There is increasing interest in hardware accelerators, both in academia and industry. Industry invests in application-level accelerators, such as Graphics Processing Units (GPUs) or Field Programmable Gate Array (FPGA) accelerators connected to the Peripheral Component Interconnect Express (PCIe) bus. Hardware accelerators outperform general-purpose Central Processing Units (CPUs) in terms of power consumption and performance. They seek to optimize arithmetic operations, since these lie at the heart of the computation circuitry in different algorithms and applications. Unlike prior work based on the traditional floating-point data type, this thesis adopts a new number system: posit, which is proposed as a replacement for the Institute of Electrical and Electronics Engineers (IEEE) Standard 754-2008 floating-point format and offers more efficient arithmetic units in terms of accuracy and Power, Performance, and Area (PPA) metrics. Image classification and object detection neural networks (NNs) are investigated, motivating the design of optimized hardware accelerators for different image/video tasks, especially critical ones. The thesis proposes a novel Posit Arithmetic Unit (PAU) to serve as the core of different hardware accelerators. There are two algorithms for decoding posits: the two's complement algorithm is shown to consume less power than the sign-magnitude algorithm, because it removes the two's complement modules from the decoding/encoding of negative posit numbers. Hence, the two's complement algorithm is chosen in this thesis. The thesis introduces a low-power Verilog Hardware Description Language (HDL) design and implementation of the PAU for efficient hardware accelerators. The proposed regular PAU is synthesized on a Xilinx ZYNQ-7000; the results show a 34% area improvement and 14% power saving. Additionally, a novel compact PAU is implemented that achieves a 25% area reduction and 45% power saving.
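For readers unfamiliar with posit decoding, the following is a minimal software sketch, assuming a posit(8, es = 0) format. It deliberately uses the sign-magnitude approach, in which a negative posit is first two's-complemented back to a positive bit pattern before decoding; this explicit negation step is exactly what the thesis's two's-complement decoder eliminates in hardware. The function name and format parameters are illustrative, not the thesis's implementation.

```python
def decode_posit8(bits):
    """Decode an 8-bit posit with es = 0 (illustrative sketch).

    Sign-magnitude style: negative posits are two's-complemented
    back to a positive bit pattern, then decoded normally.
    """
    if bits == 0x00:
        return 0.0               # zero
    if bits == 0x80:
        return float("nan")      # Not a Real (NaR)
    sign = -1.0 if bits & 0x80 else 1.0
    if sign < 0:
        bits = (-bits) & 0xFF    # explicit two's complement step
    body = format(bits & 0x7F, "07b")   # the 7 bits after the sign
    # Regime: a run of identical bits, terminated by the opposite bit.
    run = 1
    while run < 7 and body[run] == body[0]:
        run += 1
    k = (run - 1) if body[0] == "1" else -run
    # With es = 0, everything after the regime terminator is fraction.
    frac_bits = body[run + 1:]
    frac = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    return sign * (2.0 ** k) * (1.0 + frac)
```

For example, `decode_posit8(0x40)` yields 1.0 and `decode_posit8(0xC0)` yields -1.0; a hardware two's-complement decoder reaches the same values without the up-front negation of the input word.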
In addition, the new object detection You Only Look Once (YOLO) v5 model is implemented using Qtorch+ to validate that 8-bit posit can replace floating-point in the inference of Deep Neural Networks (DNNs) without significant accuracy loss.