What is the simplest neural network that you can really use to understand AI?
OK, assume that you have a sort of medicine
- that will not help if you take a low dose,
- that will work if you take a medium dose,
- that will not work, or may even kill you, if you take too much.
So we need a function that can
a) take an input,
b) separate these 3 states,
c) show at the output, from 0 to 1, whether the dose at the input will work or not.
This code comes straight from BING.
' Number of neurons in each layer
CONST INPUT_NEURONS = 1
CONST HIDDEN_NEURONS = 2
CONST OUTPUT_NEURONS = 1
' Learning rate
CONST LEARNING_RATE = 0.5
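' (The learning rate scales every weight update below: larger values
'  learn faster but can overshoot and make the training unstable.)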
' Initialize weights
DIM SHARED weights1(HIDDEN_NEURONS, INPUT_NEURONS) AS SINGLE
DIM SHARED weights2(OUTPUT_NEURONS, HIDDEN_NEURONS) AS SINGLE
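' (weights1 connects the single input to the two hidden neurons,
'  weights2 connects the two hidden neurons to the single output.)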
' Set random values for weights
FOR i = 1 TO HIDDEN_NEURONS
    FOR j = 1 TO INPUT_NEURONS
        weights1(i, j) = RND - 0.5
    NEXT j
NEXT i
FOR i = 1 TO OUTPUT_NEURONS
    FOR j = 1 TO HIDDEN_NEURONS
        weights2(i, j) = RND - 0.5
    NEXT j
NEXT i
' Feedforward (forward pass)
FUNCTION feedforward(inputValue AS SINGLE) AS SINGLE
    DIM hidden(HIDDEN_NEURONS) AS SINGLE
    DIM output(OUTPUT_NEURONS) AS SINGLE
    ' Calculate hidden layer
    FOR i = 1 TO HIDDEN_NEURONS
        hidden(i) = 0
        FOR j = 1 TO INPUT_NEURONS
            hidden(i) = hidden(i) + inputValue * weights1(i, j)
        NEXT j
        ' Apply ReLU activation function
        IF hidden(i) < 0 THEN hidden(i) = 0
    NEXT i
    ' Calculate output layer
    FOR i = 1 TO OUTPUT_NEURONS
        output(i) = 0
        FOR j = 1 TO HIDDEN_NEURONS
            output(i) = output(i) + hidden(j) * weights2(i, j)
        NEXT j
    NEXT i
    ' Return result of output layer
    FUNCTION = output(1)
END FUNCTION
' Backpropagate (backward pass)
SUB backpropagate(inputValue AS SINGLE, targetValue AS SINGLE)
    DIM hidden(HIDDEN_NEURONS) AS SINGLE
    DIM output(OUTPUT_NEURONS) AS SINGLE
    ' Perform feedforward to calculate current values of neurons.
    ' Calculate hidden layer.
    FOR i = 1 TO HIDDEN_NEURONS
        hidden(i) = 0
        FOR j = 1 TO INPUT_NEURONS
            hidden(i) = hidden(i) + inputValue * weights1(i, j)
        NEXT j
        ' Apply ReLU activation function.
        IF hidden(i) < 0 THEN hidden(i) = 0
    NEXT i
    ' Calculate output layer.
    FOR i = 1 TO OUTPUT_NEURONS
        output(i) = 0
        FOR j = 1 TO HIDDEN_NEURONS
            output(i) = output(i) + hidden(j) * weights2(i, j)
        NEXT j
    NEXT i
    ' Calculate error in output layer.
    DIM error_output(OUTPUT_NEURONS) AS SINGLE
    FOR i = 1 TO OUTPUT_NEURONS
        error_output(i) = targetValue - output(i)
    NEXT i
    ' Calculate error in hidden layer.
    DIM error_hidden(HIDDEN_NEURONS) AS SINGLE
    FOR i = 1 TO HIDDEN_NEURONS
        error_hidden(i) = 0
        FOR j = 1 TO OUTPUT_NEURONS
            error_hidden(i) = error_hidden(i) + error_output(j) * weights2(j, i)
        NEXT j
        ' Apply derivative of ReLU activation function.
        IF hidden(i) <= 0 THEN error_hidden(i) = 0
    NEXT i
    ' Update weights.
    FOR i = 1 TO HIDDEN_NEURONS
        FOR j = 1 TO INPUT_NEURONS
            weights1(i, j) = weights1(i, j) + LEARNING_RATE * error_hidden(i) * inputValue
        NEXT j
    NEXT i
    FOR i = 1 TO OUTPUT_NEURONS
        FOR j = 1 TO HIDDEN_NEURONS
            weights2(i, j) = weights2(i, j) + LEARNING_RATE * error_output(i) * hidden(j)
        NEXT j
    NEXT i
END SUB
' Sample data (INPUT is a reserved word in BASIC, so the arrays are named inputs/targets)
DIM inputs(4) AS SINGLE
DIM targets(4) AS SINGLE
inputs(1) = 0
targets(1) = 0
inputs(2) = 1
targets(2) = 1
inputs(3) = 2
targets(3) = 2
inputs(4) = 3
targets(4) = 3
' Train network/Learn
FOR epoch = 1 TO 1000
    FOR i = 1 TO 4
        ' Perform forward pass and calculate error
        predictedValue = feedforward(inputs(i))
        errorValue = targets(i) - predictedValue
        ' Perform backward pass to update weights
        backpropagate inputs(i), targets(i)
    NEXT i
NEXT epoch
' Make predictions (Work)
PRINT feedforward(4)
PRINT feedforward(5)
PRINT feedforward(6)
This code trains the network on four sample data points for 1000 epochs. After training, it makes predictions for the input values 4, 5, and 6. You can adjust the training data and number of epochs to fit your specific needs.
I hope this helps! Let me know if you have any further questions.
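By the way, if you want to watch the network learn during the 1000 epochs, you can print the remaining error from time to time. This is a small sketch of my own (not part of the BING code); the line goes inside the epoch loop, right after NEXT i:

IF epoch MOD 100 = 0 THEN PRINT "Epoch"; epoch; "  Error:"; errorValue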
BING also gives an explanation:
Sure! Here is a step-by-step explanation of the above code in English:
First, the number of neurons in each layer and the learning rate are set.
Then the weights between the neurons are initialized and set randomly.
The feedforward function performs a forward pass through the network. The input value is multiplied by the weights and the sum is stored in the hidden neurons. Then the ReLU activation function is applied to the hidden neurons. Finally, the values of the hidden neurons are multiplied by the weights and the sum is stored in the output neurons. The result of the output neuron is returned.
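To make this concrete, here is a worked forward pass with made-up numbers of my own (the real weights start out random). Assume weights1(1, 1) = 0.4, weights1(2, 1) = -0.3, weights2(1, 1) = 0.8 and weights2(1, 2) = 0.5, and feed in the input 2:
hidden(1) = 2 * 0.4 = 0.8 (positive, so ReLU keeps it)
hidden(2) = 2 * -0.3 = -0.6 (negative, so ReLU sets it to 0)
output(1) = 0.8 * 0.8 + 0 * 0.5 = 0.64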
The backpropagate procedure performs a backward pass through the network. First, a forward pass is performed to calculate the current values of the neurons. Then the error in the output layer is calculated by subtracting the actual output from the target value. Next, the error in the hidden layer is calculated by multiplying the error in the output layer by the connecting weights and summing it up (neurons that the ReLU set to zero get no error). Finally, each weight is adjusted by adding the learning rate times its neuron's error times its input value to its current value.
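Continuing the worked example from above with the target value 2:
error_output(1) = 2 - 0.64 = 1.36
error_hidden(1) = 1.36 * 0.8 = 1.088 (hidden(1) was positive, so the ReLU derivative lets the error through)
error_hidden(2) = 0 (hidden(2) was cut to 0 by ReLU, so no error flows back through it)
weights1(1, 1) = 0.4 + 0.5 * 1.088 * 2 = 1.488
weights2(1, 1) = 0.8 + 0.5 * 1.36 * 0.8 = 1.344
The weights attached to the "dead" hidden neuron 2 do not change in this round, because both its error and its output are 0.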
To improve the network, you could, for example, increase the number of hidden neurons or use a different activation function. There are many ways to optimize a neural network, and it depends on your specific requirements.
I hope this explanation was helpful! If you have any further questions, don't hesitate to ask.
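To try the different activation function that BING mentions, a sigmoid could look like this (a minimal sketch of my own, not part of the BING code):

FUNCTION sigmoid(x AS SINGLE) AS SINGLE
    ' Squashes any input smoothly into the range 0 to 1
    FUNCTION = 1 / (1 + EXP(-x))
END FUNCTION

You would call hidden(i) = sigmoid(hidden(i)) instead of the ReLU IF line in feedforward, and use hidden(i) * (1 - hidden(i)) instead of the ReLU check as the derivative in backpropagate.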
There are also two videos that explain this in detail:
Neural Network for Beginners (https://www.youtube.com/watch?v=CqOfi41LfDw) and the second part,
Neural Networks learning: Backpropagation with Numbers! (https://youtu.be/IN2XmBhILt4). Take these 30 minutes and you will understand the new AI technology.