[Live demo widget: Sketch → Layer 4 → Layer 5 → Layer 6 → Layer 7 → Net Output → Photo | Training | Explain]

Live Demo: Sketches to Photos

The live demo uses a neural network to synthesize photos from face sketches, which is useful in various applications, e.g., identifying suspects from witness sketches. The neural network in this example regresses pixel values live in your browser using ConvNetJS, a JavaScript-based ConvNet library. It takes a pixel of a sketch and transforms it through a series of functions into RGB values as the output. The transformed representations in this visualization can be loosely thought of as the activations of the neurons along the way. The sketch-photo pairs are from the CUHK Face Sketch Database (CUFS). By the end of the class, you will know exactly what all these numbers mean.
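For the curious, below is a minimal sketch of how such a pixel-wise regression can be set up with ConvNetJS. The layer sizes, learning rate, and the 2-dimensional input encoding are illustrative assumptions, not the demo's exact configuration.

    // Minimal ConvNetJS regression sketch (illustrative, not the demo's exact setup):
    // map a 2-D input (assumed encoding of a sketch pixel) to 3 outputs read as RGB.
    var layer_defs = [];
    layer_defs.push({type: 'input', out_sx: 1, out_sy: 1, out_depth: 2});
    layer_defs.push({type: 'fc', num_neurons: 20, activation: 'relu'});
    layer_defs.push({type: 'fc', num_neurons: 20, activation: 'relu'});
    layer_defs.push({type: 'regression', num_neurons: 3});   // predict (R, G, B)

    var net = new convnetjs.Net();
    net.makeLayers(layer_defs);

    // Stochastic gradient descent on (input, target RGB) pairs.
    var trainer = new convnetjs.SGDTrainer(net,
        {learning_rate: 0.01, momentum: 0.9, batch_size: 16, l2_decay: 0.001});

    var x = new convnetjs.Vol([0.3, 0.7]);   // one training input (assumed values)
    trainer.train(x, [0.8, 0.6, 0.5]);       // target RGB for that input
    var rgb = net.forward(x).w;              // predicted RGB after training

In the live demo this forward pass is repeated for every pixel, and the intermediate layer activations are what the widget visualizes.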

Teaching Assistants

Xihui Liu (Head TA)
Yixiao Ge (TA)

Announcements

[] The project presentation will be held on the whole day of May 2nd at ERB 408. Details here.
[Mar 13] Please send your final project proposals to your TA-in-charge by Mar 24. Details on the final project can be found here.
[Mar 5] Assignment 2 is now available. Please submit your assignments through BlackBoard by March 28.
[Feb 20] For auditing students who need a certificate, please send your Assignment 1 to [email protected] with the keyword [ELEG5491] in the email subject field.
[Feb 13] We will have a one-hour open-book quiz on Feb 19, at the usual lecture time and classroom. The quiz will cover all lectures up to and including CNNs.
[Jan 28] Assignment 1 is now available. Please submit your assignments through BlackBoard by Feb 21.
[Jan 24] Optional tutorials start this week. You can find the tutorial syllabus and slides here.
[Jan 9] If you need the instructor's signature to select this course, please bring your CS-1 form to Room 304, SHB.
[Jan 4] Welcome to ELEG 5491 Introduction to Deep Learning!

Course Description

This course provides an introduction to deep learning. Students taking this course will learn the theories, models, algorithms, implementation, and recent progress of deep learning, and gain hands-on experience in training deep neural networks. The course starts with machine learning basics and some classical deep models, followed by optimization techniques for training deep neural networks, implementation of large-scale deep learning, multi-task deep learning, transfer learning with deep models, recurrent neural networks, applications of deep learning to computer vision and speech recognition, and an understanding of why deep learning works. Students are expected to have basic background knowledge of calculus, linear algebra, probability, statistics, and random processes as a prerequisite. The course offered in Spring 2019 features:

Time and Venue

Term 2 (January - April), 2019
Lecture:
  • Tuesday, 14:30-16:15
    LT, T.Y. Wong Hall
  • Tuesday, 16:30-17:15
    LT2, Mong Man Wai Building (MMW)
Tutorial:
  • Thursday, 14:30-15:15
    G18, Basic Medicine Science Building

Contact information

Xiaogang Wang: [email protected]
Hongsheng Li: [email protected]
Xihui Liu: [email protected]
Hang Zhou: [email protected]
Yixiao Ge: [email protected]
Hongyang Li: [email protected]

Grading Policy

3 assignments: 30%
2 quizzes: 30%
Final Project: 40%

FAQ

I am a student outside the EE department, can I register in the class?
Yes, you are welcome to register. For graduate students outside the EE department, please fill in this form and obtain approval from both your supervisor and the course instructor during the add/drop period.
Is this course hard for undergrad students?
The course is designed for senior undergraduate and graduate students. It is not for the faint of heart. However, we will show lots of interesting cases and provide hands-on experience with deep learning models. Some parts of the lectures require calculus and linear algebra, but we will walk you through that material. We believe undergraduates will learn a lot by the end of the course through the lectures, tutorials, and the final project.
Can I work in groups for the Final Project?
No. The final project is done individually and details will be announced later.

Resources

You may find relevant notes or resources here. We also provide the 2017 lecture notes and tutorials.

Past Contributors

We sincerely thank all the contributors who made great efforts in supporting this course:
Prof. Wanli Ouyang, Prof. Hongsheng Li
Dr. Xingyu Zeng, Dr. Zhe Wang, Dr. Tong Xiao, Dr. Xiao Chu, Dr. Wei Yang, Dr. Kai Kang