DEEP LEARNING FOR ENGINEERING APPLICATIONS
cod. 1010702

Academic year 2024/25
1st year of course - Second semester
Professors
Academic discipline
Information processing systems (ING-INF/05)
Field
Student's choice
Type of training activity
Related/supplementary
60 hours of face-to-face activities
6 credits
hub: UNIBO
Course unit in ENGLISH

Learning objectives

Knowledge and understanding
The purpose of the course is to illustrate basic and advanced concepts for the development of discriminative and generative deep neural networks for different applications.

Applying knowledge and understanding
The student will acquire the ability to design and develop complex networks for image recognition and classification, for object detection, and for image generation.
Students completing the course will be able to write Python (PyTorch) code to train deep learning networks and to test them in both discriminative and generative settings.
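
As an indication of the kind of code involved, the following is a minimal sketch of a PyTorch training loop for a small image classifier; the network, the synthetic data and all hyper-parameters are illustrative assumptions, not material from the course.

# Minimal sketch of a supervised (discriminative) training loop in PyTorch.
# The network, the synthetic data and the hyper-parameters are illustrative
# placeholders, not taken from the course material.
import torch
import torch.nn as nn

# Tiny fully-connected classifier for 28x28 grayscale images, 10 classes.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

criterion = nn.CrossEntropyLoss()              # classification loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in for a real dataset (e.g. MNIST) so the sketch runs as-is.
images = torch.randn(256, 1, 28, 28)
labels = torch.randint(0, 10, (256,))

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)                     # forward pass
    loss = criterion(logits, labels)
    loss.backward()                            # back-propagation
    optimizer.step()                           # gradient-based update
    print(f"epoch {epoch}: loss = {loss.item():.4f}")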

Prerequisites

Course unit content

The course provides basic and advanced knowledge of deep learning architectures for images, covering both theoretical aspects in detail and implementations with possible applications. After an introductory part and a review of basic concepts (probability, machine learning and algebra), the main discriminative deep learning networks (MLP, CNN, RNN, ...) and generative models (auto-encoder, VAE, GAN, ...) will be analyzed.

Full programme

The course includes the following topics:
- Brief introduction to the history and evolution of machine learning and neural networks;
- Review/Recap of basic concepts of probability, algebra and machine learning;
- Difference between discriminative and generative approaches;
- Fundamentals of neural networks: back-propagation algorithm, gradient-based optimization methods (ADAM, SGD, BGD, ...);
- Feed-forward, fully-connected, multi-layer perceptron networks;
- Introduction to deep learning: basic concepts and importance of data (curse of dimensionality, data cleaning, data augmentation);
- Convolutional deep learning networks (CNN): basic architectures, types of layers, activation functions, pooling layers, loss functions, normalization methods for training (a minimal PyTorch sketch follows after this list);
- Recurrent and recursive networks: RNN, Mask R-CNN, self- and cross-attention, spotlights on LSTM;
- Generative models: auto-encoders (AE), variational AE (VAE), generative adversarial networks (GAN) and their variants (cGAN, StyleGAN, StarGAN, CycleGAN);
- Spotlights on optimization methods for deep learning networks;
- Spotlights on advanced topics: latent space analysis (disentanglement, geometry discovery, ...), knowledge distillation, Transformers, explainability of deep neural networks.
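
As a concrete illustration of several items above (convolutional layers, activation functions, pooling, normalization for training), the following is a minimal PyTorch sketch of a small CNN; the input shape (32x32 RGB images), layer sizes and number of classes are assumptions chosen for the example, not course specifications.

# Illustrative small CNN; layer sizes and input shape are assumptions,
# not taken from the course material.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolutional layer
            nn.BatchNorm2d(16),                           # normalization for training
            nn.ReLU(),                                    # activation function
            nn.MaxPool2d(2),                              # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example forward pass on a batch of four 32x32 RGB images.
out = SmallCNN()(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 10])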

Bibliography

- I. Goodfellow, Y. Bengio, A. Courville, "Deep Learning", The MIT Press, 2016
- E. Stevens, L. Antiga, T. Viehmann, "Deep Learning with PyTorch", Manning Publications, 2020
- D. Foster, "Generative Deep Learning", O'Reilly Media, 2019

Teaching methods

The course includes around 35 hours of traditional classroom lectures and 25 hours of hands-on training in the laboratory.

Assessment methods and criteria

The exam consists of two tests, which can be taken independently, both in order and in exam session.
The oral test consists of reading and understanding a scientific paper assigned by the teacher and presenting its main content orally with slides (the teacher may ask questions to evaluate the understanding of the theoretical concepts presented in the lectures).
The practical test consists of completing and/or modifying PyTorch code provided during the exam in order to change the architecture of the proposed network, its parameters, the training procedure, or the final objective of the network (a purely illustrative sketch of such a modification is given below).
The final grade will be the average of the two grades.
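
Purely as an illustration of the kind of modification the practical test may ask for (the actual code and tasks are provided only during the exam and are not reproduced here), replacing a network's output head and switching the optimizer in PyTorch could look like this:

# Hypothetical example of a small exam-style modification; this is NOT the
# actual exam code, only an illustration of the type of task.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),          # original head: 10 output classes
)

# Modification 1: change the final objective to binary classification
# by replacing the last layer of the network.
model[3] = nn.Linear(128, 2)

# Modification 2: change the training procedure by switching the optimizer
# (e.g. from SGD to Adam) and adjusting the learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)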

Other information

2030 agenda goals for sustainable development