We are witnessing a paradigm shift in software development, in which decision making increasingly moves from hand-coded program logic to Deep Learning (DL): popular applications in speech processing, image recognition, robotics, and game playing (e.g., Go) now use DL as a core component. Deep Neural Networks (DNNs), the most widely used DL architecture, are the key behind this progress. Given such spectacular results, DNNs are also increasingly being deployed in safety-critical systems such as autonomous cars, medical diagnosis, malware detection, and aircraft collision avoidance. This wide adoption of DL techniques raises concerns about the reliability of these systems, as several erroneous behaviors have already been reported. It has therefore become crucial to rigorously test DL applications with realistic corner cases to ensure high reliability. However, because of fundamental architectural differences between DNNs and traditional software, existing software testing techniques do not apply to them in any obvious way. Indeed, companies such as Google and Tesla face all the traditional software testing challenges in delivering reliable and safe DL applications. This talk will address how to systematically test Deep Learning applications.