School of Computing. Dublin City University.
Say finite weights, and (positive) infinite threshold. Then the output is 0 no matter what the input is. The hidden unit always outputs 0, so nothing is added to the summed input of the next layer. The hidden unit might as well not exist.
Say finite weights, minus infinite threshold. Then the output is 1 no matter what. Every summed input in the next layer gets wjk × 1 = wjk added to it. We might as well scrap wjk and just modify the threshold tk. There is no advantage in having 2 numbers added to form the "threshold" instead of 1. So a unit that outputs 1 no matter what is useless as well.
Conclusion - Infinite threshold (with finite weights) seems to be useless.
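A minimal numerical sketch of the point above, using a huge finite threshold as a stand-in for "infinite" (the function names and weight values here are my own, for illustration only). With a huge positive threshold the sigmoid unit's output is stuck at 0 for any finite input; with a huge negative threshold it is stuck at 1:

```python
import math

def sigmoid(x):
    # Numerically safe sigmoid: avoids overflow in exp() for large |x|.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def unit_output(weights, inputs, threshold):
    # Summed weighted input minus threshold, squashed by the sigmoid.
    s = sum(w * i for w, i in zip(weights, inputs))
    return sigmoid(s - threshold)

weights = [0.5, -1.2, 2.0]
# Huge positive threshold stands in for "infinite" threshold:
for inputs in ([1, 1, 1], [-3, 5, 0], [10, -10, 10]):
    print(unit_output(weights, inputs, 1000.0))   # ~0.0 every time
# Huge negative threshold: output ~1.0 no matter what the input is.
print(unit_output(weights, [1, 1, 1], -1000.0))   # ~1.0
```

Either way the unit's output is a constant, so it contributes nothing the next layer's own threshold could not provide.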
Say finite threshold, one weight positive infinite, other weights finite. Then if the input on that link is positive, output = 1, no matter what the threshold is (so long as it is finite). If the input is negative, output = 0. A steep threshold at 0.
Say the weight is negative infinite: still a steep threshold at 0, just any negative input leads to 1, any positive input to 0.
Is this useful?
If
xj = w1j I1 + w2j I2 + ... + wnj In
and a single weight wij is infinite, all others finite, and tj finite, then hidden node j does nothing except recognise whether the single input Ii is positive or negative.
It makes no difference what the other inputs are. The links from all the other inputs to hidden node j may as well not exist.
Also, this is only of use in the wij layer. In the wjk layer, a recogniser for whether yj is positive or not is useless, since yj (a sigmoid output) is always positive. The node becomes a constant output 0 or 1.
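A sketch of the "sign detector" behaviour, again using a huge finite weight (1e6) as a stand-in for an infinite one; the weight and input values are made up for illustration. The hidden node's output tracks only the sign of input I0; the other inputs, however large, make no difference:

```python
import math

def sigmoid(x):
    # Numerically safe sigmoid: avoids overflow in exp() for large |x|.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def hidden_output(weights, inputs, t):
    # xj = sum of wij * Ii, minus finite threshold t, through the sigmoid.
    return sigmoid(sum(w * i for w, i in zip(weights, inputs)) - t)

# Weight from input 0 is "infinite" (huge); the others are finite.
weights = [1e6, 0.7, -2.3]
t = 5.0
# Output depends only on the sign of I0, not on the other inputs:
print(hidden_output(weights, [0.01, 99.0, -99.0], t))    # ~1.0
print(hidden_output(weights, [-0.01, 99.0, -99.0], t))   # ~0.0
```

The huge weight swamps every other term in the sum, which is why the links from the other inputs might as well not exist.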
sig(n(x-t)) is centred on t.
For example, sig(5(x-3)) is centred on 3.
sig(nx-t) is centred on t/n.
For example, consider sig(5x-3) = sig(5(x-(3/5)))
So the above is centred not on 3, but on 3/5.
Graph of sig(nx-t), centred on t/n.
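The centring claims above can be checked numerically: the "centre" is the point where the sigmoid crosses 0.5. Using a plain sigmoid (the function name `sig` matches the notes' notation):

```python
import math

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

# sig(n(x-t)) crosses 0.5 at its centre x = t:
print(sig(5 * (3 - 3)))        # 0.5  -> sig(5(x-3)) is centred on 3
# sig(nx-t) = sig(n(x - t/n)), so it is centred on t/n:
print(sig(5 * (3/5) - 3))      # ~0.5 -> sig(5x-3) is centred on 3/5
print(sig(5 * 3 - 3))          # ~1.0 -> at x=3, sig(5x-3) is already saturated
```

The last line shows why the distinction matters: at x = 3, which would be the centre if we misread sig(5x-3), the function is in fact nearly saturated at 1.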