Error analysis of the results of multiplication by an AND gate

3 August 2009 | Author: Emil Raschman | Electrical Engineering, Student Papers
Volume 2, Issue 8

The paper deals with an error analysis of multiplication performed by an AND gate. The advantage of this method of multiplication is its low circuit complexity compared with a hardware multiplier. We analyze the influence of selected parameters on the error of the multiplication result. The main parameters that influence the size of the error are the size of the multiplied factors and the size of the time interval (time-window) in which the multiplication is performed.

Introduction

At present, with the increasing amount of information, the requirements for its processing grow constantly. There are two ways to increase computing power: increasing the frequency of RISC-type processors, which work on the principle of serial computation, or using systems with parallel information processing. One of the most widely used parallel systems today is the neural network (Hänggi and Moschytz, 2000). Neural networks allow information to be processed by a large number of simple operations. In most cases, neural networks are realized at the software level by an application program (Sordo, 2002). Such networks achieve relatively high speeds, but their calculation is serial, because the program runs on a serial processor.

The advantage of such a network is its low cost, because it is sufficient to create a program that runs on a standard PC. In applications that require the highest possible speed (for example, real-time processing), hardware neural networks implemented on a chip are used. We focus on cellular neural networks (CNN) (Chua and Yang, 1988a, b; Larsen, 1999), whose main application is image processing. The network is composed of elementary processors, so-called cells, which operate in parallel. The higher the number of cells in the network, the more information it can handle at once. Therefore, the aim is to design the network so that as many cells as possible fit on the chip with the smallest possible area consumption, since the chip area determines its cost.

The basic block diagram of one cell of a neural network is shown in Fig. 1 (Seung, 2002). The input signals are multiplied by the appropriate weights. The results of the multiplications are then summed and passed through a transfer function. This circuit can be realized by analog or digital circuits. In a digital design, the most complicated circuit of the CNN cell is the block that realizes the multiplication. Because the digital hardware multiplier is the most complicated part of the circuit and therefore takes up the largest area on the chip, we propose an alternative method of multiplication that is simpler in terms of circuit complexity. The proposed method of multiplication is realized by a simple AND gate, where the two multiplied signals must be distributed in time in a special way.


Fig. 1 The block diagram of a CNN cell

Multiplication of signals distributed in time by an AND gate

The basis of the proposed alternative multiplication is the AND gate. In order to implement multiplication by an AND gate, the entering signals must be distributed in time in a special way.

The principle of multiplication by an AND gate

First we need to define the time interval (time-window) during which the multiplication will be realized. We define the size of the time-window according to the size of the maximum number to be multiplied. For example, if we want to multiply two 3-bit numbers, we need a time-window of seven clock cycles, because within it we can represent the values from 0 to 7, which correspond to 3 bits (2^3 = 8). For 4-bit numbers we need 15 clock cycles, and so on. The size of the time-window (the number of clock cycles) determines the speed of the multiplication. The calculation time doubles with each additional bit, so it is important to correctly identify the maximum number that we want to multiply; if we select a larger time-window than necessary, the multiplication is unnecessarily slow. After the time-window has been defined, the signals have to be encoded within it. The value of the first signal is transformed into a time interval starting at the beginning of the time-window (Fig. 2a); in our case, an incoming input signal with the value 5 is represented by 5 clock cycles from the beginning of the time-window. The value of the second signal must be distributed evenly in time so that it is symmetric about the center of the time-window; in our case these values are the weights. An example of the distribution of the signal (weight) in time is shown in Fig. 2b for 3-bit numbers and in Fig. 3 for 4-bit numbers.


Fig. 2 Distribution of signals for the multiplication of 3-bit numbers


Fig. 3 Distribution of signals for the multiplication of 4-bit numbers
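To make the encoding concrete, the following Python sketch builds the two pulse trains for a time-window of 2^n - 1 clock cycles. The thermometer-style input encoding follows the description above; for the weight we assume one plausible center-symmetric, evenly interleaved pattern (weight bit k drives 2^k interleaved slots, with bit 0 at the center), since the exact pattern of Fig. 2b and Fig. 3 is not reproduced here. Function names are illustrative.

```python
# Sketch of the time-window encoding (assumption: the weight pattern below is
# one plausible center-symmetric choice; the paper's Fig. 2b/Fig. 3 may differ).

def encode_input(value: int, n_bits: int) -> list:
    """Encode an n-bit magnitude as log. "1" during the first `value`
    clock cycles of a (2**n_bits - 1)-cycle time-window."""
    window = 2 ** n_bits - 1
    return [1 if t < value else 0 for t in range(window)]

def encode_weight(value: int, n_bits: int) -> list:
    """Spread an n-bit weight evenly and center-symmetrically over the window:
    weight bit k owns 2**k interleaved slots (bit 0 owns only the center slot)."""
    window = 2 ** n_bits - 1
    pattern = []
    for t in range(1, window + 1):           # 1-based slot index
        tz = (t & -t).bit_length() - 1       # number of trailing zeros of t
        owner_bit = n_bits - 1 - tz          # which weight bit drives this slot
        pattern.append((value >> owner_bit) & 1)
    return pattern

if __name__ == "__main__":
    # 3-bit example: input 5/7 and weight 4/7 in a 7-cycle time-window
    print(encode_input(5, 3))    # [1, 1, 1, 1, 1, 0, 0]
    print(encode_weight(4, 3))   # [1, 0, 1, 0, 1, 0, 1]
```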

The signals enter the AND gate simultaneously and are multiplied bit by bit, so the output of the gate gives the partial results of the multiplication of the individual bits. The overall result of the multiplication is the number of log. “1” values at the output of the AND gate. To obtain it, we use a counter that sums the log. “1” outputs of the AND gate during one time-window; at the end of the time-window, the output of the counter holds the overall result of the multiplication. The maximum value of the counter is given by the number of clock cycles in the time-window. If we want to multiply signed numbers, we need to add an XOR gate that compares the signals representing the signs of the numbers, and the counter must be an up/down counter whose direction input is connected to the output of the XOR gate. When the result of the multiplication is positive, the counter counts up; when it is negative, the counter counts down. The complete block diagram is shown in Fig. 4.


Fig. 4 The block diagram of multiplication by an AND gate
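The datapath of Fig. 4 can be imitated in software. The sketch below, a behavioural model rather than the authors' hardware, ANDs the two pulse trains cycle by cycle and lets an up/down counter accumulate the log. “1” outputs, with the counting direction selected by the XOR of the sign bits; the encoding helpers are the same as in the previous sketch and all names are illustrative.

```python
# Illustrative model of the complete multiplier of Fig. 4 (not the authors' HDL).
# encode_input / encode_weight repeat the assumed encoding from the sketch above.

def encode_input(value, n_bits):
    return [1 if t < value else 0 for t in range(2 ** n_bits - 1)]

def encode_weight(value, n_bits):
    window = 2 ** n_bits - 1
    return [(value >> (n_bits - 1 - ((t & -t).bit_length() - 1))) & 1
            for t in range(1, window + 1)]

def and_multiply(a, sign_a, w, sign_w, n_bits):
    """Multiply |a|/(2**n_bits - 1) by |w|/(2**n_bits - 1) in one time-window.
    Returns the signed counter value (the numerator of the result)."""
    x = encode_input(a, n_bits)
    y = encode_weight(w, n_bits)
    sign = sign_a ^ sign_w                 # XOR of sign bits: 0 = count up, 1 = down
    counter = 0
    for xa, yw in zip(x, y):               # one iteration = one clock cycle
        if xa & yw:                        # AND gate output is log. "1"
            counter += -1 if sign else 1   # up/down counter
    return counter                         # result value = counter / (2**n_bits - 1)

if __name__ == "__main__":
    window = 2 ** 3 - 1
    c = and_multiply(3, 1, 4, 1, 3)        # (-3/7) * (-4/7)
    print(c, "->", c / window)             # 2 -> 0.2857...
```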

An example of multiplication by an AND gate

Fig. 5 shows an example of the multiplication of two 3-bit numbers using an AND gate. The calculation is realized during 7 clock cycles (the size of the time-window). First, the input -3/7 was multiplied by the weight -4/7. During the first time-window, the output of the AND gate was log. “1” for 2 clock cycles. Because we multiplied two negative numbers, the output of the XOR gate was log. “0”.

The counter therefore counted up to the value 2. The result of the multiplication represents the value 2/7, which is approximately 0.28, while the exact result of the multiplication is 0.24; rounding is a natural property of this method of multiplication. Next, the input -5/7 was multiplied by the weight 6/7. The output of the AND gate was log. “1” for four clock cycles and the XOR gate indicated a negative result, which means that the counter counted down by 4. The result of the multiplication is -4/7, which is approximately -0.57, while the exact result of the multiplication is -0.61.


Fig. 5 An example of multiplication by an AND gate
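For reference, the rounding visible in this example can be checked with a few lines of Python; the numbers are taken from the paper's Fig. 5, and whether an AND-gate implementation produces exactly the counts 2 and 4 depends on the weight pattern assumed in the earlier sketches.

```python
# Check of the worked example (values from Fig. 5, 3-bit system, 7-cycle window).
# Exact product vs. the rounded AND-gate result reported in the paper.
print((-3/7) * (-4/7), "vs", 2/7)     # 0.2449... vs 0.2857...
print((-5/7) * (6/7), "vs", -4/7)     # -0.6122... vs -0.5714...
```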

Main results

A natural property of multiplication by an AND gate is rounding of the result. The error of the result changes depending on the size of the multiplied numbers as well as on the size of the time-window used. To be able to eliminate the calculation error, we analyzed how these parameters impact the error of the result.

Fig. 6 shows the dependence of the average error on the multiplied numbers. For each system (3-, 4- and 5-bit numbers) we calculated the average error for each number value and plotted the dependence. The average error of each number was obtained by averaging the errors of the results of multiplying that specific number by every other number.
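The averaging described above can be reproduced with a short sweep over all factor pairs. The sketch below uses the illustrative model from the earlier sketches (re-defined here so that the snippet runs on its own); because the weight pattern is an assumption, the exact numbers need not match Fig. 6 precisely, but the overall trend, a falling error for larger multiplied numbers, is the same.

```python
# Sketch: average error of the AND-gate multiplication as a function of the
# multiplied number, for 3-, 4- and 5-bit systems (illustrative model only).

def encode_input(value, n_bits):
    return [1 if t < value else 0 for t in range(2 ** n_bits - 1)]

def encode_weight(value, n_bits):
    return [(value >> (n_bits - 1 - ((t & -t).bit_length() - 1))) & 1
            for t in range(1, 2 ** n_bits)]

def and_count(a, w, n_bits):
    # number of clock cycles in which the AND gate outputs log. "1"
    return sum(x & y for x, y in zip(encode_input(a, n_bits),
                                     encode_weight(w, n_bits)))

for n_bits in (3, 4, 5):
    window = 2 ** n_bits - 1
    print(f"{n_bits}-bit system (time-window {window} cycles)")
    for a in range(1, window + 1):
        # average absolute error of a/window multiplied by every weight w/window
        errors = [abs(and_count(a, w, n_bits) / window - (a * w) / window ** 2)
                  for w in range(window + 1)]
        print(f"  a = {a:2d}/{window}: avg abs error = {sum(errors)/len(errors):.4f}")
```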

From Fig. 6 we can see how the relative error changes depending on the size of the multiplied numbers. Independently of the system used, the error of the multiplication is largest when the smallest numbers are multiplied; as the value of the multiplied number increases, the error decreases exponentially. It follows that to obtain the highest accuracy we need to multiply the largest possible numbers. For applications in cellular neural networks we can choose the size of the weights almost arbitrarily, as long as their ratios are preserved. Fig. 6 further shows how the relative error changes depending on the system used: with a larger time-window, the relative error decreases. However, the influence of the system on the relative error is much smaller than the influence of the size of the multiplied numbers.

The average absolute error (Fig. 6) is relatively constant for a given system, i.e. the absolute error is independent of the multiplied numbers and changes only with the system.

For a better comparison of the influence of the system used on the size of the multiplication error, we averaged the error values for each system and list them in tables. Tables 1, 2 and 3 give the values for the different multiplication systems. The maximum absolute error of the system for multiplying 3-bit numbers is 0.0612, which represents 91.835% of the smallest number of the system (1/7); for 4-bit numbers it is 0.0444, which represents 66.66% of the smallest number of the system (1/15); and for 5-bit numbers it is 0.026, which represents 163.89% of the smallest number of the system (1/31). In image processing by a neural network, the smallest number of the system represents one gray shade. This means that in the case of the 5-bit system the biggest error causes an output error of one gray shade.

The values of the average absolute error are considerably smaller than the maximum absolute error. For the 3-bit system the average absolute error is 0.023, which represents 34.43% of 1/7; for the 4-bit system it is 0.0134, which represents 20.03% of 1/15; and for the 5-bit system it is 0.0084, which represents 52.94% of 1/31.

From these results we can deduce that for the 3- and 4-bit systems the average error is smaller than 50%, i.e. the larger part of the results will be correct, while in the case of the 5-bit system approximately half of the results will be correct and half incorrect. When this multiplication is applied in a neural network, an incorrect result causes the specific pixel at the output to change by one gray shade. Because the neural network processes the image in several iterations, incorrect results are in most cases removed in the following iterations, although this may cause the final output to be reached one iteration later.

On the basis of this error analysis of the computation of multiplication by an AND gate, we found that the presented method is applicable to the implementation of cellular neural networks.


Fig. 6 Dependence of the average error on the multiplied numbers (panels a, b, c)

3-bit numbers
Value of min. number: 1/7 \rightarrow 0.1427
Max. absolute error: 0.0612 (91.835% of the min. number)
Avg. absolute error: 0.023 (34.43% of the min. number)
Avg. relative error: 18.25%

Tab. 1. System for the multiplication of 3-bit numbers

4-bit numbers
Value of min. number: 1/15 \rightarrow 0.0666
Max. absolute error: 0.0444 (66.66% of the min. number)
Avg. absolute error: 0.0134 (20.03% of the min. number)
Avg. relative error: 15.26%

Tab. 2. System for the multiplication of 4-bit numbers

5-bit numbers
Value of min. number: 1/31 \rightarrow 0.0323
Max. absolute error: 0.026 (163.89% of the min. number)
Avg. absolute error: 0.0084 (52.94% of the min. number)
Avg. relative error: 12.57%

Tab. 3. System for the multiplication of 5-bit numbers

Conclusion

We analyzed the errors of the computation of multiplication by an AND gate with signals distributed in time. We analyzed the impact of the multiplied numbers on the error of the multiplication result as well as how the error changes depending on the specific system. We found that the values of the multiplied numbers strongly influence the size of the multiplication error: the error is largest for the smallest numbers and decreases exponentially as the numbers increase.

For applications in cellular neural networks, the size of the weights can be chosen almost arbitrarily, as long as their ratios are preserved, and a correct choice of these parameters can minimize the error. In the case of image processing by a neural network, the maximum multiplication error causes the specific pixel at the output to change by about one gray shade.

On the basis of the presented facts, we found that multiplication by an AND gate is appropriate for the implementation of cellular neural networks.

Acknowledgement

This contribution was supported by the Ministry of Education of the Slovak Republic under grant VEGA No. 1/0693/08 and was conducted in the Centre of Excellence CENAMOST (Slovak Research and Development Agency, Contract No. VVCE-0049-07).

References

  1. Hänggi M. and Moschytz G. S. (2000), Cellular Neural Networks: Analysis, Design and Optimization, Kluwer Academic Publishers, Boston, ISBN 0-7923-7891-1
  2. Chua L. O. and Yang L. (1988a), Cellular Neural Networks: Theory, IEEE Trans. Circuits and Systems, Vol. 35, pp. 1257-1272
  3. Chua L. O. and Yang L. (1988b), Cellular Neural Networks: Applications, IEEE Trans. Circuits and Systems, Vol. 35, pp. 1273-1290
  4. Larsen J. (1999), Introduction to Artificial Neural Networks, Section for Digital Signal Processing, Department of Mathematical Modeling, Technical University of Denmark, 1st Edition
  5. Sordo M. (2002), Introduction to Neural Networks in Healthcare, OpenClinical: Knowledge Management for Medical Care
  6. Seung S. (2002), Introduction to Neural Networks: Lecture 1, The Massachusetts Institute of Technology, The Seung Lab

Co-authors of this paper are R. Záluský and D. Ďuračková, Slovak University of Technology, Faculty of Electrical Engineering and Information Technology, Ilkovičova 3, 812 19 Bratislava.
