2 Types of neurons for a neural network using ASM x86 FPU

Started by Theo Gottwald, April 13, 2024, 03:16:04 PM

Theo Gottwald

You can implement different types of neurons for a neural network using assembly or similar low-level code. However, it's important to note that assembly code is not typically used for implementing neural networks due to its complexity and lack of readability. High-level languages like Python, with libraries such as TensorFlow or PyTorch, are more commonly used for this purpose.

That being said, here's an example of a simple ReLU (Rectified Linear Unit) neuron and a sigmoid neuron using the x87 FPU's extended-precision floating-point data type (EXTENDED) in assembly-like code. Note that this is a simplified example and does not include any optimizations or additional features that would be needed for a real-world neural network implementation.

**ReLU Neuron:**

MACRO FUNCTION ReLUNeuron_Extended(input)
  MACROTEMP output
  DIM output AS EXTENDED

  !fld input       'load input value
  !ftst            'compare input with 0.0 (sets the FPU condition codes)
  !fnstsw ax       'store FPU flags in ax
  !sahf            'transfer FPU flags to CPU flags
  !jb .Lnegative  'jump to .Lnegative if input < 0.0

  !fstp output     'store input value as output if input >= 0.0
  !jmp .Lend

.Lnegative:
  !fstp st(0)      'pop the input value from the FPU stack
  !fldz            'load 0.0
  !fstp output     'store 0.0 as output

.Lend:
END MACRO = output

**Sigmoid Neuron:**

MACRO FUNCTION SigmoidNeuron_Extended(input)
  MACROTEMP output
  DIM output AS EXTENDED

  !fld input            'load input value x
  !fchs                 'negate: st(0) = -x
  !fldl2e               'load log2(e)
  !fmulp st(1), st(0)   'z = -x * log2(e), so that 2^z = e^(-x)
  !fld st(0)            'duplicate z
  !frndint              'st(0) = round(z), st(1) = z
  !fsub st(1), st(0)    'st(1) = z - round(z), a fraction in [-1, 1] for f2xm1
  !fxch st(1)           'st(0) = fraction, st(1) = round(z)
  !f2xm1                'st(0) = 2^fraction - 1
  !fld1                 'load 1.0
  !faddp st(1), st(0)   'st(0) = 2^fraction
  !fscale               'st(0) = 2^fraction * 2^round(z) = e^(-x)
  !fstp st(1)           'drop round(z), keep e^(-x)
  !fld1                 'load 1.0
  !faddp st(1), st(0)   'st(0) = 1.0 + e^(-x)
  !fld1                 'load 1.0
  !fdivrp st(1), st(0)  'st(0) = 1.0 / (1.0 + e^(-x))
  !fstp output          'store output value

END MACRO = output
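
For reference, here is a minimal usage sketch for the two macros above; the variable names x, a and s are only illustrative and not part of the macros:

DIM x AS EXTENDED
DIM a AS EXTENDED
DIM s AS EXTENDED

x = -0.5
a = ReLUNeuron_Extended(x)      'a = 0.0, because the input is negative

x = 0.0
s = SigmoidNeuron_Extended(x)   's = 0.5, the sigmoid of zero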

Backpropagation

**Backpropagation in Assembly-like Code for ReLU and Sigmoid Neurons**

**🚀 Forward Pass:**
1. **Calculate Outputs:** Sequentially compute and store the output of each neuron, layer by layer, from the input layer to the output layer (a minimal sketch follows below).
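
A minimal sketch of this step for one neuron, assuming the weighted sum is accumulated in plain BASIC before the activation macro runs; the names nInputs, w(), x(), bias and y are hypothetical:

DIM acc AS EXTENDED
DIM y AS EXTENDED
DIM i AS LONG

acc = bias                      'start from the neuron's bias term
FOR i = 1 TO nInputs
  acc = acc + w(i) * x(i)       'add each weighted input from the previous layer
NEXT i
y = ReLUNeuron_Extended(acc)    'apply the activation to get the neuron's output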

**🎯 Output Layer Error Calculation:**
1. **Compute Error:** For each output neuron, compute the error term \( \frac{(\text{expected value} - \text{output value})^2}{2} \), i.e. half the squared error used by the mean squared error (MSE) loss (see the sketch below).
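
A minimal sketch of this error term in the same macro style; the macro name OutputError_Extended and the parameter names are only suggestions:

MACRO FUNCTION OutputError_Extended(expected, actual)
  MACROTEMP errval
  DIM errval AS EXTENDED

  !fld expected        'load the expected (target) value
  !fld actual          'load the neuron's output value
  !fsubp st(1), st(0)  'compute expected - actual
  !fld st(0)           'duplicate the difference
  !fmulp st(1), st(0)  'square the difference
  !fld1                'load 1.0
  !fld1                'load 1.0
  !faddp st(1), st(0)  'build the constant 2.0
  !fdivp st(1), st(0)  'divide the squared difference by 2.0
  !fstp errval         'store the error term

END MACRO = errval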

**⏪ Backward Pass:**
1. **Error Propagation:** Using errors from subsequent layers, compute errors for the previous layers.
2. **Activation Derivatives:**
  - **ReLU:** Derivative is 1 if input > 0; otherwise, it's 0.
  - **Sigmoid:** Derivative is output * (1 - output) (see the sketch after this list).
3. **Gradient Calculation:** Multiply error by the derivative to get the gradient.
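
A minimal sketch of the sigmoid derivative in the same macro style; the macro name SigmoidDerivative_Extended is only a suggestion, and note that it takes the neuron's already computed output, not its input:

MACRO FUNCTION SigmoidDerivative_Extended(outval)
  MACROTEMP deriv
  DIM deriv AS EXTENDED

  !fld1                'load 1.0
  !fld outval          'load the neuron's output
  !fsubp st(1), st(0)  'compute 1.0 - output
  !fld outval          'load the output again
  !fmulp st(1), st(0)  'compute output * (1.0 - output)
  !fstp deriv          'store the derivative

END MACRO = deriv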

**🔧 Weight Update:**
1. **Adjust Weights:** Apply updates using SGD; \( \text{weight} = \text{weight} - \text{learning rate} \times \text{gradient} \).

**Example for a ReLU Neuron Backward Pass and Weight Update:**
; Assuming eax holds the input to the neuron
; ebx holds the derivative
; ecx holds the error from the next layer
; edx holds the learning rate

cmp eax, 0          ; Check if input is positive
jle .zero_gradient  ; Jump if not positive, derivative is zero
mov ebx, 1          ; Set derivative to 1
jmp .continue      ; Continue computation

.zero_gradient:
mov ebx, 0          ; Set derivative to zero

.continue:
; Calculate gradient: gradient = error * derivative
imul ebx, ecx       ; Multiply derivative by error to get the gradient (in ebx)

; Update weight: weight -= learning_rate * gradient
; Assuming esi points to weight
imul ebx, edx       ; Multiply gradient by the learning rate
mov eax, [esi]      ; Load current weight
sub eax, ebx        ; Subtract learning_rate * gradient from the weight
mov [esi], eax      ; Store updated weight back

This is just a simplified example that demonstrates the process of backpropagation for ReLU neurons using assembly-like pseudocode.

The following is real code.

'##################################################################################################
'
'##################################################################################################
' BackpropagateReLUNeuron_Extended(P1, P2, P3, P4)
'
' This macro performs backpropagation for a single ReLU neuron using the provided parameters:
'
' P1 (input, EXTENDED): The input value to the neuron.
'   Value range: Any real number.
'
' P2 (error, EXTENDED): The error value for the neuron, calculated based on the output error and the weights from the subsequent layer.
'   Value range: Any real number.
'
' P3 (weight, EXTENDED): The weight value for the neuron.
'   Value range: Any real number.
'   On output, the updated weight value is stored back into this memory location.
'
' P4 (learning_rate, EXTENDED): The learning rate value used for updating the weight.
'   Value range: A positive real number, typically between 0.0 and 1.0.

MACRO BackpropagateReLUNeuron_Extended(P1, P2, P3, P4)
  MACROTEMP gradient
  DIM gradient AS EXTENDED

  !fld P1          'load input value
  !ftst            'compare input with 0.0 (sets the FPU condition codes)
  !fnstsw ax       'store FPU flags in ax
  !sahf            'transfer FPU flags to CPU flags
  !fstp st(0)      'pop the input value; the comparison result is already in the flags
  !jb .Lnegative   'jump to .Lnegative if input < 0.0

  !fld P2          'load error value
  !fld P1          'load input value
  !fmulp st(1), st(0) 'compute gradient = error * input
  !fstp gradient   'store gradient value
  !jmp .Lend

.Lnegative:
  !fldz            'load 0.0
  !fstp gradient   'store gradient value as 0.0

.Lend:
  !fld P3          'load weight value
  !fld P4          'load learning rate value
  !fld gradient    'load gradient value
  !fmulp st(1), st(0) 'compute learning_rate * gradient
  !fsubp           'compute weight - learning_rate * gradient
  !fstp P3         'store updated weight value

END MACRO
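
A minimal usage sketch with purely illustrative variable names and values; the macro updates the weight in place, so it is called once per weight during the backward pass:

DIM inp AS EXTENDED
DIM delta AS EXTENDED
DIM wgt AS EXTENDED
DIM lr AS EXTENDED

inp = 0.8     'input that fed this weight in the forward pass
delta = 0.1   'error propagated back from the next layer
wgt = 0.5     'current weight value
lr = 0.01     'learning rate

BackpropagateReLUNeuron_Extended(inp, delta, wgt, lr)
'wgt now holds 0.5 - 0.01 * (0.1 * 0.8) = 0.4992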