Neural networks are built by arranging neurons (each a perceptron, the focus of Module 6, or a variant thereof) in layers and connecting those layers so that the input feeds into the first layer, the output of the first layer serves as the input to the second layer, and so on, until the output of the final layer is taken as the output of the network itself. Data thus flows forward through the network, while the training signal that corrects errors propagates backwards.
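The forward flow of data through stacked layers can be sketched as follows. This is a minimal illustration, not a training implementation: the layer sizes and random weights are arbitrary, and a perceptron-style step activation stands in for each neuron.

```python
import numpy as np

def step(x):
    # Perceptron-style step activation: 1 where the input is
    # non-negative, 0 otherwise.
    return (x >= 0).astype(float)

def forward(x, layers):
    # Pass the input through each layer in turn; the output of one
    # layer becomes the input to the next.
    for W, b in layers:
        x = step(W @ x + b)
    return x

rng = np.random.default_rng(0)
# Two layers with illustrative sizes: 3 inputs -> 4 hidden -> 2 outputs.
layers = [
    (rng.standard_normal((4, 3)), rng.standard_normal(4)),
    (rng.standard_normal((2, 4)), rng.standard_normal(2)),
]
output = forward(np.array([1.0, 0.5, -0.3]), layers)
print(output.shape)  # (2,)
```

Training reverses this direction: an error measured at the final layer is propagated back through the same layers to adjust each layer's weights `W` and biases `b`.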