Hebb’s rule

Hebb’s rule is an algorithm for pattern extraction/recognition using a single neuron.

1. settings

standard neuron model:

  • input i at time t: \(a_i^t\)
  • weight of input i at time t: \(w_i^t\)
  • output at time t+1 (on the input at time t): \(X^{t+1}\)
  • forward computation: \(X^{t+1} = 1\) if \(\sum_i a_i^t w_i^t > \theta\), else \(X^{t+1} = 0\) (sketched in code at the end of this section)

special parameter:

  • learning rate \(C\)
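
A minimal sketch of the forward computation in Python (the function name forward and the 0/1 coding of inputs and output are my assumptions; the note only gives the threshold comparison):

  def forward(a, w, theta):
      """Threshold neuron: fire (output 1) iff the weighted sum of inputs exceeds theta."""
      s = sum(a_i * w_i for a_i, w_i in zip(a, w))
      return 1 if s > theta else 0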

2. algorithm

At each time step: \[ \Delta w_i^t = C\,a_i^t X^{t+1}, \qquad w_i^{t+1} = w_i^t + \Delta w_i^t \]

  • settings:
    • learning rate \(C\): 1 works (used in the sketches below)
    • threshold \(\theta\): 1 works
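
A one-step version of the update as a sketch, reusing forward from above (again an assumption, not code from the note):

  def hebb_step(a, w, theta, C=1):
      """One time step: compute the output, then grow the weights of the active inputs."""
      X = forward(a, w, theta)                               # X^{t+1}
      return [w_i + C * a_i * X for a_i, w_i in zip(a, w)]   # w^{t+1} = w^t + C a X^{t+1}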

3. characteristics

  • does not converge: every element of \(\Delta w\) is non-negative (\(C > 0\) and \(a_i, X \in \{0,1\}\)), so each weight can only stay the same or grow.
  • since the output fires exactly on a successful match, the better the weights model the dataset’s pattern, the faster they grow.
  • in time, the neuron would be firing on everything (it does not converge); see the demonstration after this list.
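
A quick demonstration of the runaway growth, using the sketches above (the three-input example and initial weights of 1 are my choices):

  w = [1, 1, 1]
  for t in range(5):
      w = hebb_step([1, 1, 0], w, theta=1)
      print(w)
  # the active inputs' weights grow every step and never shrink:
  # [2, 2, 1], [3, 3, 1], [4, 4, 1], [5, 5, 1], [6, 6, 1]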

4. how it works

  • if an input that is 1 carries enough weight to cross \(\theta\), \(X\) will be 1
  • every input that is 1 gets its weight incremented (by \(C a_i X = C\))
  • every input that is 0 keeps its weight unchanged
  • in time, the frequent pattern (inputs that fire together) builds up larger weights than inputs that fire less often; see the toy run after this list
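
A toy run of the frequency effect, reusing hebb_step (the dataset, its 3:1 frequency ratio, and initial weights of 2 are made up for illustration; weights start above \(\theta\) so every pattern can fire):

  data = [[1, 1, 0]] * 3 + [[0, 0, 1]]  # first pattern shown 3x as often
  w = [2, 2, 2]
  for epoch in range(3):
      for a in data:
          w = hebb_step(a, w, theta=1)
  print(w)  # [11, 11, 5] -- the frequent pattern's slots grew fastest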

Backlinks

weight likes training data likes input

Pattern recognition can be done by making the weights look like the training data, and hence like the inputs.

This works in Hebb’s rule because inputs can only be 0 or 1, and larger weights on a pattern’s 1-slots make it easier for that pattern to drive the output to 1.

Oja’s rule

Hebb’s rule, but normalize the weights after each increment.
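
A sketch of that variant, reusing hebb_step from above. Dividing by the L2 norm is my reading of "normalize"; Oja’s rule proper uses a decay term, \(\Delta w_i = C X (a_i - X w_i)\), of which this is a crude stand-in:

  import math

  def oja_like_step(a, w, theta, C=1):
      """Hebbian increment followed by normalization, so weights cannot grow without bound."""
      w = hebb_step(a, w, theta, C)
      norm = math.sqrt(sum(w_i ** 2 for w_i in w))
      return [w_i / norm for w_i in w]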

Author: Linfeng He

Created: 2024-04-03 Wed 23:17