Title
SOLVING LINEAR LEAST-SQUARES ERROR PROBLEMS USING A SINGLE NEURON
Publisher
Adel Amin Abd El-Azim
Author
Abd El-Azim, Adel Amin
Preparation Committee
Researcher / Adel Amin Abd El-Azim Youssef
Supervisor / Omar Abdel-Aziz Abdel-Rahman El-Sabakhy
Supervisor / Amin Ahmed Fahmy Shoukry
Examiner / Badr Mohamed Abdallah Abou El-Nasr
Examiner / Ali Salama
Subject
Solving Linear Least-Squares Error. Single Neuron Squares Error.
Publication Date
1998
Number of Pages
xii, 82 p.
Language
English
Degree
Master's
Specialization
Engineering (miscellaneous)
Approval Date
1/1/1998
Awarding Institution
Alexandria University - Faculty of Engineering - Computer and Systems Engineering
Table of Contents
Only 14 of 16 pages are available for public view.

Abstract

In many dynamic systems, such as adaptive control systems and computer vision systems, there is often a need for real-time parameter estimation. Among the existing parameter estimation methods, the least-squares method is the most common owing to its simplicity in both formulation and the prior information it requires. The method is particularly simple if the model is linear in the parameters, in which case the least-squares estimate can be calculated analytically. The method of Least-Squares Error (LSE) has been used for a long time, and many classical algorithms exist to solve it. However, all of these classical techniques share a common problem: they are not suitable for real-time applications in which the solution must be obtained within a time on the order of hundreds of nanoseconds. Meeting such deadlines would otherwise require a very powerful and expensive digital computer. That is why analog artificial neural networks have been employed to solve this problem.
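The analytical least-squares estimate for a model that is linear in the parameters follows from the normal equations. A minimal sketch, using hypothetical data (the model y = a0 + a1*x and all numbers below are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

# Hypothetical example: fit y = a0 + a1*x by least squares.
# For a model linear in the parameters, theta = (A^T A)^{-1} A^T y,
# which np.linalg.lstsq computes in a numerically stable way.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + 0.01 * rng.standard_normal(50)   # noisy samples of 2 + 3x

A = np.column_stack([np.ones_like(x), x])            # design matrix
theta, *_ = np.linalg.lstsq(A, y, rcond=None)        # analytical LSE solution

print(theta)   # close to the true parameters [2.0, 3.0]
```

This direct solve is exactly what becomes too slow (or too costly in hardware) at nanosecond time scales, which motivates the analog neural approach discussed next.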
Many of the known neural-network techniques used to solve the LSE problem require at least n linear or non-linear processing elements (neurons), where n is the number of unknown parameters. In many engineering applications, e.g., image reconstruction and computed tomography, it is required to solve very large systems of linear algebraic equations at high speed and throughput. For such problems, these neural-network architectures require an extremely large number of processing units, so that practical hardware implementation may be difficult, expensive, or even impossible.

In this thesis, a new approach is proposed that solves the LSE problem more efficiently and economically. The neural network used contains only one linear neuron. The problem is formulated as an eigenvector problem, and it is shown that the LSE solution corresponds to the minimal eigenvector of a real, symmetric, positive definite matrix. The minimal-eigenvector problem is then transformed into a maximal-eigenvector problem, so that it can be solved using a deterministic version of the Oja unsupervised learning rule, which adapts the weight vector of a single linear neuron until it converges to the first principal component of the input data space. Finally, to demonstrate the operating characteristics and performance of the proposed algorithm, two application examples are discussed: curve or hypersurface fitting, and identification of a discrete-time control system.
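The minimal-to-maximal eigenvector transformation and the deterministic Oja iteration can be sketched as follows. This is a simplified illustration of the idea, not the thesis's exact algorithm: the matrix R and the choice of shift c = trace(R) are assumptions made here for the example.

```python
import numpy as np

# Idea: the minimal eigenvector of a symmetric positive definite matrix R
# is the maximal eigenvector of B = c*I - R for any c > lambda_max(R).
# A deterministic Oja rule, w <- w + eta*(B w - (w^T B w) w), then drives
# the weight vector of a single linear neuron to that principal direction.
R = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])       # real, symmetric, positive definite

c = np.trace(R)                       # trace >= lambda_max, so c*I - R works
B = c * np.eye(3) - R                 # shift swaps min and max eigenvectors

w = np.ones(3) / np.sqrt(3.0)         # initial unit-norm weight vector
eta = 0.05                            # learning rate (assumed for the demo)
for _ in range(2000):                 # deterministic Oja iteration
    w = w + eta * (B @ w - (w @ B @ w) * w)

# Check against the eigenvector of R's smallest eigenvalue
# (np.linalg.eigh returns eigenvalues in ascending order).
vals, vecs = np.linalg.eigh(R)
v_min = vecs[:, 0]
print(abs(w @ v_min))   # close to 1.0: same direction up to sign
```

The fixed points of the Oja update are the unit-norm eigenvectors of B, and only the principal one is stable, which is why a single linear neuron suffices: no explicit normalization or n-neuron network is needed.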