Talk on Artificial Intelligence, Deep Neural Networks and Deep Learning
On June 19, 2018, we gave a talk (in German) on artificial intelligence, deep neural networks and deep learning at the Technologiehof Münster in the Let’s Talk series of the Venture Club Münster.
Our talk is available on YouTube and starts with 20 minutes on recent research milestones and applications of deep neural networks. In the following 40 minutes we explain the underlying mathematical principles of deep neural networks and deep learning and introduce implementations with Python and TensorFlow as well as Java and Deeplearning4j.
We regularly give talks on deep neural networks, deep learning, artificial intelligence and software as a service (SaaS), as they are at the heart of LoyJoy. If you have feedback or want to invite us to speak at your event, please let us know. Special thanks go to the Technologieförderung Münster for hosting us!
Concepts covered:
- An artificial neural network is a network of artificial neurons, loosely modeled on a biological brain. It maps numerical inputs to outputs and can be trained for tasks such as computer vision, speech recognition and machine translation. A minimal example is sketched after this list.
- Deep learning is a machine learning method for training deep neural networks, i.e. artificial neural networks with multiple layers.
- ImageNet is a visual database with several million labeled images used for visual object recognition research.
- A convolutional neural network (CNN) is a class of artificial neural networks inspired by the visual cortex. It is primarily used in computer vision, but also has applications in other areas such as natural language processing (NLP). A small CNN is sketched after this list.
- A recurrent neural network (RNN) is a class of artificial neural networks with feedback connections between layers. It can be applied to sequential tasks such as handwriting recognition, speech recognition and machine translation, but is affected by the vanishing gradient problem.
- Long short-term memory (LSTM) is a type of RNN neuron or layer that addresses the vanishing gradient problem. Applications include grammar learning, machine translation, speech recognition and time series prediction. An LSTM layer is sketched after this list.
- Backpropagation is an algorithm for training deep neural networks. For a given input from the training data, the output of the network is calculated and compared to the desired output. Based on this comparison and a loss function, a gradient is calculated to update the weights of the network. A manual training step illustrating this is sketched after this list.
- Stochastic gradient descent (SGD) is an iterative method for minimizing the loss function. Extensions of SGD include the momentum method, AdaGrad, RMSProp and Adam, which offer dynamic learning rates, i.e. adaptive step sizes per iteration.
- An activation function defines the output of a neuron depending on its inputs. It loosely models the biological action potential and has to be nonlinear for nontrivial problems.
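
To make the first concept concrete, here is a minimal sketch of a small feed-forward neural network in Python with TensorFlow (Keras). The layer sizes, the input dimension and the loss are illustrative assumptions rather than code from the talk; they only show how numerical inputs are mapped to an output through nonlinear activations.

```python
import tensorflow as tf

# A tiny feed-forward network: 4 numerical inputs, two hidden layers
# with nonlinear (ReLU) activations, one sigmoid output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Compile with stochastic gradient descent and a loss for binary targets.
model.compile(optimizer="sgd", loss="binary_crossentropy")
model.summary()
```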
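A convolutional neural network can be sketched in the same way. The input shape (28x28 grayscale images) and the number of classes are assumptions chosen for illustration, roughly in the spirit of handwritten digit recognition.

```python
import tensorflow as tf

# A small convolutional network for 28x28 grayscale images with 10 classes.
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, kernel_size=3, activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
```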
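For sequential data, an LSTM layer takes the place of the dense or convolutional layers. The sequence length (50 time steps) and the 8 input features per step are again assumptions for illustration, e.g. for a simple time series prediction.

```python
import tensorflow as tf

# An LSTM over sequences of 50 time steps with 8 features each,
# predicting a single value per sequence.
rnn = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(50, 8)),
    tf.keras.layers.Dense(1),
])
rnn.compile(optimizer="adam", loss="mse")
```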
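Finally, backpropagation and stochastic gradient descent can be made visible with a single manual training step. The random mini-batch and the mean squared error loss are assumptions for illustration; Keras performs the same steps internally when model.fit is called.

```python
import tensorflow as tf

# Dummy mini-batch of 32 examples with 4 features and one target each.
x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

with tf.GradientTape() as tape:
    predictions = model(x, training=True)   # forward pass
    loss = loss_fn(y, predictions)          # compare with desired output

# Backpropagation: gradient of the loss with respect to all weights.
gradients = tape.gradient(loss, model.trainable_variables)

# Stochastic gradient descent: update the weights along the gradient.
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
```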
The research papers and examples covered in the talk are:
- ImageNet Classification with Deep Convolutional Neural Networks
- Deep Visual-Semantic Alignments for Generating Image Descriptions
- Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US