Neural networks are at the heart of the revolution taking place in Machine Learning today, but they are often badly explained and therefore poorly understood. We hope to explain them well, with a minimum of math, making them easy to understand (which they are).
We will start (assuming no knowledge of NNs) by explaining how the very simplest ones (perceptrons) work. To prove just how simple they are, we'll show you how to create a perceptron in Excel. Thereafter all examples will be coded in either R or Python, because why choose when you can have both? As we continue our journey we'll show you that even the humble perceptron can do useful work. Then we'll start plugging simple network units together to really ramp up the power. We'll move on to illustrate different types of network architectures, including deep-layered models such as CNNs (Convolutional Neural Networks) and autoencoders. We'll finish with one of the most hotly debated models in the media today, Generative Adversarial Networks (GANs), which are responsible for the controversial deepfake-style videos.
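To give a flavour of just how little code a perceptron needs, here is a minimal sketch in Python (one of the two course languages). The AND-gate data, variable names, and learning rate are illustrative assumptions for this sketch, not the Excel walkthrough from the course itself.

    # A minimal perceptron learning the logical AND function (illustrative sketch).
    import numpy as np

    # Training data: two binary inputs and the AND target for each pair
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])

    w = np.zeros(2)        # weights, one per input
    b = 0.0                # bias
    learning_rate = 0.1

    # Classic perceptron learning rule: nudge weights and bias after every mistake
    for epoch in range(20):
        for xi, target in zip(X, y):
            prediction = int(np.dot(w, xi) + b > 0)   # weighted sum, then step activation
            error = target - prediction
            w += learning_rate * error * xi
            b += learning_rate * error

    print("weights:", w, "bias:", b)   # a separating boundary for AND, e.g. w=[0.2, 0.1], b=-0.2

Run it and the weights settle on values that correctly separate the AND inputs. That weighted-sum-plus-threshold unit is the building block we'll later plug together into larger networks.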
By the end of the day, delegates will be in a position to make a bid for world domination by neural networks. Or, at least, know where to start.