Talk:Cuil Theory/@comment-26356598-20150607030509

I don't know if the author seriously thinks that this has any place in science, but the lack of scientific literacy that people have here is astounding. No, there is absolutely NOTHING here that would assist in the advancement of machine learning. This is philosophy, and it's not even good philosophy; it's amateur philosophy. Through what process could the concept of "cuil" assist machine learning and EASILY MODEL AND MAP THE "PURE SEMANTIC NOISE" IN AI BRAIN WAVES? I despise pseudoscientific crap like this. By "brain waves" I assume you mean AI in the form of artificial neural networks.

We already KNOW how to interpret neural nets; we know how to create them, train them, improve them, etc. We don't need your pseudoscientific / pseudo-philosophical ideas. Here's how neural nets work. We have multiple layers of simulated neurons, and each neuron connects to every neuron in the next layer (normally). Each neuron applies a nonlinear activation function to the weighted sum of its inputs to produce its output. Each connection has a weight, and that weight multiplies the sending neuron's output on its way into the next layer. There is an input layer, which feeds information into the network, multiple layers in between, and an output layer. A complicated neural network can have hundreds of millions to billions of connections (and therefore hundreds of millions to billions of weights).

At the start, the weights are random. We have the output that we want the neural network to produce given a certain input. We then compute an error function, which is the squared difference between the desired output and the output we got. We then take the partial derivative of the error function with respect to each weight. Once we do this for every weight, we get the gradient, which tells us the direction in which the error function increases fastest. We then move the weights in the opposite direction. We repeat this each iteration until the error function reaches its minimum.
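The procedure described above (random starting weights, squared error, step opposite the gradient) fits in a few lines of plain Python. This is just an illustrative sketch: the XOR training set, the 2-2-1 layer sizes, the sigmoid activation, and the learning rate are all arbitrary choices of mine, not anything from the original comment.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # A standard nonlinear activation function.
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: XOR, a classic task that needs a hidden layer.
DATA = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

# 2 inputs -> 2 hidden neurons -> 1 output; weights start random.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [random.uniform(-1, 1) for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = random.uniform(-1, 1)
LR = 0.5  # learning rate: how far we step against the gradient

def forward(x):
    # Each neuron: nonlinear activation of the weighted sum of its inputs.
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def total_error():
    # Squared difference between the desired output and the output we got.
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

before = total_error()
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        # Partial derivatives of the error with respect to each weight
        # (the chain rule, walked backwards through the network).
        dy = 2.0 * (y - t) * y * (1.0 - y)
        dh = [dy * w2[j] * h[j] * (1.0 - h[j]) for j in range(2)]
        # Move every weight opposite to its gradient component.
        for j in range(2):
            w2[j] -= LR * dy * h[j]
            b1[j] -= LR * dh[j]
            w1[j][0] -= LR * dh[j] * x[0]
            w1[j][1] -= LR * dh[j] * x[1]
        b2 -= LR * dy
after = total_error()
print(f"squared error: {before:.4f} -> {after:.4f}")
```

Running it shows the squared error falling as the weights descend the gradient, which is the whole story: no "cuils" required.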

Basically it's conceptually simple: there is some combination of weights that will get us the output (or series of outputs) that we want. I kinda nerded out here, but I really dislike when people make these sorts of unfounded claims. In any case, there was nothing creepy about this at all; I'd give it 4/10.