Python Deep Learning tutorial: Create a GRU (RNN) in TensorFlow


Posted by Capri Granville on January 27, 2018 at 7:00pm

Guest blog post by Kevin Jacobs.

MLPs (Multi-Layer Perceptrons) are great for many classification and regression tasks. However, it is hard for MLPs to do classification and regression on sequences. In this Python deep learning tutorial, a GRU is implemented in TensorFlow, one of the many Python deep learning libraries.

By the way, another great read on machine learning is this article on fraud detection. If you are interested in another article on RNNs, you should definitely read this one on the Elman RNN.

What is a GRU or RNN?

A sequence is an ordered set of items, and sequences appear everywhere. In the stock market, the closing price is a sequence; here, time is the ordering. In sentences, words follow a certain ordering, so sentences can also be viewed as sequences. A gigantic MLP could learn parameters based on sequences, but this would be infeasible in terms of computation time. The family of Recurrent Neural Networks (RNNs) solves this by specifying hidden states which depend not only on the input, but also on the previous hidden state. GRUs are among the simplest RNNs. Vanilla RNNs are even simpler, but those models suffer from the Vanishing Gradient problem.
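In symbols, a vanilla recurrent layer computes something like the following (a generic sketch for intuition; the weight names and the nonlinearity f are placeholders, not taken from the article):

$$h_t = f\left(W x_t + U h_{t-1} + b\right)$$

The GRU refines this recurrence with gates, as defined in the next section.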

Video: https://www.youtube.com/watch?v=dFARw8Pm0Gk

Mathematical GRU Model

The key idea of GRUs is that the gradient chain does not vanish as sequences grow longer. This is achieved by allowing the model to pass values through the cells unchanged. The model is defined as follows [1]:

$$z_t = \sigma\left(W^{(z)} x_t + U^{(z)} h_{t-1} + b^{(z)}\right)$$

$$r_t = \sigma\left(W^{(r)} x_t + U^{(r)} h_{t-1} + b^{(r)}\right)$$

$$\tilde{h}_t = \tanh\left(W^{(h)} x_t + U^{(h)} \left(h_{t-1} \circ r_t\right) + b^{(h)}\right)$$

$$h_t = (1 - z_t) \circ h_{t-1} + z_t \circ \tilde{h}_t$$

I had a hard time understanding this model, but it turns out that it is not too hard to understand. In the definitions, $\circ$ denotes the Hadamard product, which is just a fancier name for element-wise multiplication. $\sigma(x)$ is the Sigmoid function, defined as $\sigma(x) = \frac{1}{1 + e^{-x}}$. The Sigmoid function squishes values into the range (0, 1), while the Hyperbolic Tangent function ($\tanh$) squishes values into the range (-1, 1).

z_t (the update gate) functions as a filter for the previous state. If z_t is low (near 0), then h_t stays close to h_{t-1}: a lot of the previous state is reused, and the input at the current step (x_t) does not influence the output a lot. If z_t is high (near 1), then the output at the current step is influenced a lot by the candidate state \tilde{h}_t, and hence by the current input (x_t), while the previous state (h_{t-1}) contributes little.

r_t functions as a reset (or forget) gate. When its entries are near 0, the previous state is masked out of the candidate state \tilde{h}_t, which allows the cell to forget certain parts of the state.
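To make the equations concrete, here is a minimal NumPy sketch of a single GRU step. This is my own illustration, not part of the original article; the weight shapes and initialization are arbitrary assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU step, following the equations above."""
    z_t = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev + p["bz"])  # update gate
    r_t = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev + p["br"])  # reset gate
    h_tilde = np.tanh(p["Wh"] @ x_t + p["Uh"] @ (h_prev * r_t) + p["bh"])  # candidate
    return (1.0 - z_t) * h_prev + z_t * h_tilde  # new hidden state

# Illustrative sizes (assumed): 2 input features, 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 2, 4
p = {}
for g in ("z", "r", "h"):
    p["W" + g] = rng.standard_normal((n_hid, n_in)) * 0.1
    p["U" + g] = rng.standard_normal((n_hid, n_hid)) * 0.1
    p["b" + g] = np.zeros(n_hid)

h = np.zeros(n_hid)
for x_t in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    h = gru_step(x_t, h, p)
print(h)  # hidden state after processing a length-2 sequence
```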

The Task: Adding Numbers

In the code example, a simple task is used for testing the GRU. Given two numbers a and b, their sum is computed: c = a + b. The numbers are first converted to reversed bitstrings. The reversal mirrors how most people add up two numbers by hand: you start at the rightmost digit, and whenever the digit sum overflows the base (10 or more in decimal, 2 or more in binary), you carry (memorize) a digit. The model is capable of learning what to carry. As an example, consider a = 3 and b = 1. As bitstrings of length 3, we have a = [0, 1, 1] and b = [0, 0, 1]. In reversed bitstring representation, a = [1, 1, 0] and b = [1, 0, 0]. The sum of these numbers is c = [0, 0, 1] in reversed bitstring representation, which is [1, 0, 0] in normal bitstring representation and equivalent to 4. The code performs all of these steps automatically, as the sketch below illustrates.
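Here is a small sketch of those representation steps (the helper names are my own, not taken from the article's code):

```python
def to_reversed_bits(n, length):
    """Convert an integer to a reversed (least-significant-bit-first) bit list."""
    return [(n >> i) & 1 for i in range(length)]

def from_reversed_bits(bits):
    """Convert a reversed bit list back to an integer."""
    return sum(b << i for i, b in enumerate(bits))

a, b = 3, 1
print(to_reversed_bits(a, 3))         # [1, 1, 0]
print(to_reversed_bits(b, 3))         # [1, 0, 0]
print(to_reversed_bits(a + b, 3))     # [0, 0, 1]
print(from_reversed_bits([0, 0, 1]))  # 4
```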

The Code

The code is self-explanatory. If you have any questions, feel free to ask! The code can also be found on GitHub. Sharing (or Starring) is Caring :-)!
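The code itself did not survive in this retrieved copy of the article. As a stand-in, here is a minimal sketch of how the addition task can be set up with TensorFlow's built-in GRU layer. This is my own Keras-based reconstruction with assumed shapes and hyperparameters (bit length, 16 hidden units, Adam), not the author's exact ~30-line implementation from GitHub.

```python
import numpy as np
import tensorflow as tf

SEQ_LEN = 8  # assumed bit length; sums of two 7-bit numbers fit in 8 bits

def make_batch(batch_size):
    """Random addition problems encoded as reversed bitstrings."""
    a = np.random.randint(0, 2 ** (SEQ_LEN - 1), size=batch_size)
    b = np.random.randint(0, 2 ** (SEQ_LEN - 1), size=batch_size)
    c = a + b
    # Shape (batch, time, features); each time step sees one bit of a and of b.
    x = np.zeros((batch_size, SEQ_LEN, 2), dtype=np.float32)
    y = np.zeros((batch_size, SEQ_LEN, 1), dtype=np.float32)
    for i in range(SEQ_LEN):
        x[:, i, 0] = (a >> i) & 1
        x[:, i, 1] = (b >> i) & 1
        y[:, i, 0] = (c >> i) & 1
    return x, y

# A GRU over the bit sequence, predicting one sum bit per time step.
model = tf.keras.Sequential([
    tf.keras.layers.GRU(16, return_sequences=True, input_shape=(SEQ_LEN, 2)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x, y = make_batch(10000)
model.fit(x, y, batch_size=100, epochs=10)
```

Feeding the bits least-significant-first and using return_sequences=True lets the GRU emit one sum bit per step while its hidden state carries the binary carry, which is exactly what the reversed-bitstring encoding is for.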

Results

After ~2000 iterations, the model has fully learned how to add two integers!

Conclusion (TL;DR)

This Python deep learning tutorial showed how to implement a GRU in TensorFlow; the implementation takes only ~30 lines of code! There are some issues with respect to parallelization, but these can be resolved by using the TensorFlow API efficiently. In this tutorial, the model is capable of learning how to add two integer numbers (of any length).



Content retrieved from: https://www.datasciencecentral.com/profiles/blogs/gru-implementation-in-tensorflow.