School of Computing, Dublin City University


Say finite weights, and a (positive) infinite threshold. Then the output is 0 no matter what the input is. The hidden unit always outputs 0, so nothing is added to the summed input of the next layer. The hidden unit might as well not exist.
Say finite weights, and a minus infinite threshold. Then the output is 1 no matter what. Every summed input in the next layer gets w_{jk}.1 added to it. We might as well scrap w_{jk} and just modify the threshold t_{k}. There is no advantage in having 2 weights added together to form the "threshold" instead of 1. So outputting 1 no matter what is useless as well.
Conclusion: An infinite threshold (with finite weights) seems to be useless.
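We can see this numerically with a minimal sketch of such a unit (the names sigmoid and hidden_output are mine, not from any library; a threshold of ±1e9 stands in for ±infinity):

```python
import math

def sigmoid(z):
    # guard: exp(-z) overflows for very negative z; the true limit is 0
    if z < -700:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def hidden_output(inputs, weights, threshold):
    # classic unit: output = sig( sum of w.I  -  t )
    x = sum(w * i for w, i in zip(weights, inputs))
    return sigmoid(x - threshold)

weights = [0.5, -1.2, 2.0]
for inputs in ([1, 1, 1], [-5, 3, 0], [100, -100, 7]):
    print(hidden_output(inputs, weights, 1e9),    # huge positive t: always 0.0
          hidden_output(inputs, weights, -1e9))   # huge negative t: always 1.0
```

Whatever the (finite) inputs, the huge threshold swamps the sum, so the output is constant.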
Say finite threshold, one weight positive infinite, all other weights finite. Then if the input on that link is positive, output = 1, no matter what the threshold is (so long as it is finite). If the input is negative, output = 0. We get a steep threshold (a step) at 0.
Say the weight is negative infinite: still a steep threshold at 0, just reversed. Any negative input leads to 1, any positive input to 0.
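A quick numerical check of this step behaviour (sig is my own helper; a weight of 1e12 stands in for an "infinite" weight):

```python
import math

def sig(z):
    # clamp to avoid overflow in exp for extreme z
    if z < -700:
        return 0.0
    if z > 700:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

t = 42.0     # any finite threshold
w = 1e12     # stand-in for an "infinite" positive weight

for inp in (0.001, 5.0, -0.001, -5.0):
    print(inp, sig(w * inp - t))   # any positive input -> 1.0, any negative -> 0.0
```

Even a tiny input of 0.001 produces output 1.0, and the finite threshold t makes no difference: the step sits at input 0.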
Is this useful?
Let's say:
x_{j} = w_{1j} I_{1} + w_{2j} I_{2} + ... + w_{nj} I_{n}
and a single weight w_{ij} is infinite, all others finite, and t_{j} finite.
Then:
Hidden node j does nothing except recognise whether the single input I_{i} is positive or negative.
It makes no difference what the other inputs are. The links from all the other inputs to hidden node j may as well not exist.
Also, this is only of use in the w_{ij} layer. In the w_{jk} layer, a recogniser for whether y_{j} is positive or not is useless: y_{j} is a sigmoid output and hence always positive. The unit becomes a constant output 0 or 1.
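The "other inputs may as well not exist" point can be demonstrated directly. In this sketch (unit and BIG are illustrative names, with 1e12 again standing in for infinity), the second weight is huge, so the unit reports only whether I_{2} is positive, whatever the other inputs do:

```python
import math

def sig(z):
    if z < -700:
        return 0.0
    if z > 700:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

BIG = 1e12
weights = [0.3, BIG, -0.7]   # w_{2j} is "infinite", the rest finite
t = 5.0                      # finite threshold

def unit(inputs):
    x = sum(w * i for w, i in zip(weights, inputs))
    return sig(x - t)

print(unit([ 9.0,  0.001, -4.0]))  # I_2 positive -> 1.0, despite other inputs
print(unit([-9.0, -0.001,  4.0]))  # I_2 negative -> 0.0, despite other inputs
```

The one enormous term swamps every finite term in the sum, so changing the other inputs (or their weights) cannot move the output.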
sig(n(x-t))
is centred on t.
For example, sig(5(x-3)) is centred on 3.
sig(nx-t) is centred on t/n.
For example, consider sig(5x-3)
= sig(5(x-(3/5)))
The above is centred not on 3, but on 3/5.
sig(nx-t) centred on t/n:
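The centre of a sigmoid is the point where it crosses 0.5, so we can check both claims numerically (sig is my own helper, not from a library):

```python
import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

n, t = 5.0, 3.0

# sig(n(x-t)) crosses 0.5 at its centre x = t
print(sig(n * (3.0 - t)))     # at x = 3: exactly 0.5

# sig(nx-t) is the same shape but crosses 0.5 at x = t/n, not x = t
print(sig(n * 0.6 - t))       # at x = 3/5: approximately 0.5
print(sig(n * 3.0 - t))       # at x = 3: far up the curve, near 1
```

Writing sig(nx-t) as sig(n(x-(t/n))) makes it clear why the centre moves to t/n.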