Title
THE EFFECTS OF NANO-METER PROCESS TECHNOLOGIES ON DIGITAL CIRCUIT DESIGN FLOW
Author
Abul-Makarem, Mohamed Said Mohamed.
Supervisory Committee
Supervisor / Adel Ezzat El-Hennawy
Supervisor / Mohamed Amin Dessouky
Examiner / Hani Fikry Ragai
Examiner / Yehea Ismail
Publication Date
2014.
Number of Pages
160 p.
Language
English
Degree
Doctorate (PhD)
Specialization
Electrical and Electronic Engineering
Approval Date
1/1/2014
Place of Approval
Ain Shams University - Faculty of Engineering - Electrical Communications
Table of Contents
Only 14 pages (from 32) are available for public view.

Abstract

Following the aggressive technology scaling trends and the fundamental limits of optical lithography, the gap between the designed layout and the functionality of what is actually fabricated on silicon is increasing. As a consequence, the performance predicted during design implementation may differ significantly from post-silicon measurements. Furthermore, with the increasing difficulty of process control in nanometer technologies, manufacturing variations are growing as a percentage of feature sizes. The number of variability sources is also growing as fabrication processes become more and more complex, and the correlation between different variability sources is increasingly difficult to predict. These parameter fluctuations cause parametric yield loss, i.e., performance degradation, and the fabricated chips fail to meet their specification. Traditionally, the methodology adopted to determine the timing performance spread of a design in the presence of variability is to run multiple static timing analyses at different "corners", including "best-", "nominal-", and "worst-case". This approach is breaking down because finding those corners requires a huge number of runs to capture the exact corner, owing to additional layout effects arising from cell/device proximity. A possible way to reduce the number of timing analyses is to design and verify with an extra design margin, commonly called on-chip variation (OCV). OCV can be handled by existing corner-based design methodologies only by applying different derating factors to data-path and clock-path delays, and/or by introducing uncertainty margins. Designing for these extreme conditions then automatically covers the nominal case. However, taking the corner value of every electrical parameter simultaneously may lead to an overly pessimistic estimate of performance.
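The OCV derating mechanism described above can be sketched as a simple setup-slack check. The numbers, derate factors, and function name below are illustrative assumptions for exposition, not values taken from the thesis:

```python
# Illustrative sketch of an OCV-derated setup check (hypothetical numbers).
# A late derate inflates the launch (data) path delay, while an early derate
# shrinks the capture (clock) path delay, adding pessimism to guard against
# on-chip variation, as described in the text.

def setup_slack(clock_period, data_path, capture_clock, setup_time,
                late_derate=1.10, early_derate=0.95, uncertainty=0.05):
    arrival = data_path * late_derate  # worst-case late data arrival
    required = (clock_period + capture_clock * early_derate
                - setup_time - uncertainty)
    return required - arrival

# With derates and uncertainty applied, the slack is tighter than the
# undarated case, mimicking the extra design margin OCV introduces.
slack = setup_slack(clock_period=1.0, data_path=0.80,
                    capture_clock=0.10, setup_time=0.05)
print(f"derated setup slack: {slack:.3f} ns")
```

Applying a more accurate, proximity-aware cell model shrinks the derate factors toward 1.0, which directly relaxes the required margin.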
The scenario in which all parameters take their worst-case values has, in practice, a minimal probability of occurring, and in several cases it cannot occur at all. Another drawback of the worst-case approach is that it gives designers no information about sensitivity to the various parameters, information that can be very useful in driving optimization efforts toward a more robust design. In the digital domain the situation is worse, as the timing and power performance of integrated circuits are increasingly affected by proximity effects. It is therefore mandatory to take such effects into account accurately during digital cell characterization, in order to capture the process variations and hence generate more precise timing models. The more accurately all digital cell variations are modeled, the smaller the required On Chip Variation (OCV) margin, especially in recent technologies. Reducing the OCV value leads to more relaxed timing constraints and hence less power, smaller area, and a shorter design cycle. This work presents a design flow that enhances the manufacturability of the traditional standard cell library. The novel method comprises fully automated proximity-aware techniques to measure timing variations in digital cells. The proposed flow is examined on a recent 45 nm technology, and results indicate a ±6% variation across all library contexts with respect to the mean value. This shows the importance of having a variability-aware method that qualifies libraries for adoption in circuit designs.
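The reported ±6% figure can be read as the maximum deviation of a per-context delay from the library mean, expressed as a percentage. A minimal sketch of that computation, using made-up delay samples rather than the thesis measurements:

```python
# Hypothetical delay measurements (ns) for one cell arc, each taken in a
# different layout context (neighboring-cell proximity). These values are
# invented for illustration only.
delays = [0.102, 0.098, 0.105, 0.097, 0.100, 0.099]

# Spread expressed as the largest percent deviation from the mean,
# matching the "variation with respect to mean value" metric in the text.
mean = sum(delays) / len(delays)
spread_pct = max(abs(d - mean) / mean for d in delays) * 100
print(f"mean = {mean:.4f} ns, spread = \u00b1{spread_pct:.1f}%")
```

A library qualifies when this spread stays within the budgeted margin for every cell arc; cells that exceed it are candidates for redesign or re-characterization.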