Today, deep learning has penetrated almost every conceivable area of technology and the applied sciences. The Convolutional Neural Network (CNN), Deep Belief Network (DBN), Stacked Autoencoder (SAE) and Recurrent Neural Network (RNN) are known to technologists and scientists alike. However, each of these existing frameworks has certain limitations. For example, CNNs are mostly applicable to images, DBNs have a complicated formulation that is not amenable to mathematical manipulation, and SAEs are usually prone to overfitting. Moreover, all of them are trained with the three-decade-old backpropagation algorithm, which is fraught with many issues. There has been no real development of new deep learning frameworks in the past two decades. We are working on a new deep learning framework – deep dictionary learning – which uses the well-understood dictionary learning problem as its basic building block. Since the framework is new and developing, I would like to introduce it and seek collaboration on problems of mutual interest.
Differentiating Radiation Necrosis from Tumor Progression in Brain Metastases after Radiation Therapy
Speaker
Simona Juniva, Centre de Vision Numérique (CVN), CentraleSupélec
Time and Place
11h, 24 May 2019, Laboratoire MICS, CentraleSupélec
The Effect of Constraints on Learning Neural Networks
Speaker
Ana Neacsu, Centre de Vision Numérique (CVN), CentraleSupélec
Time and Place
11h, 24 May 2019, Laboratoire MICS, CentraleSupélec
Neural Network Based Image Compression
Speaker
Tasnim Dardouri, Centre de Vision Numérique (CVN), CentraleSupélec
Time and Place
11h, 24 May 2019, Laboratoire MICS, CentraleSupélec
With the rapid growth of social media and networking platforms, the amount of available textual data has increased dramatically. Text categorization refers to the machine learning task of assigning a document to a set of two or more predefined categories (or classes). In this talk, I will present a graph-based framework for text categorization. Contrary to the traditional Bag-of-Words approach, we consider the Graph-of-Words (GoW) model, in which each document is represented by a graph that encodes the relationships between its terms. Based on this formulation, the importance of a term is determined by weighting the corresponding node in the document, collection and label graphs using node centrality criteria. We also introduce novel graph-based weighting schemes that enrich the graphs with word-embedding similarities, in order to reward or penalize semantic relationships. Our methods produce more discriminative feature weights for text categorization, outperforming existing frequency-based criteria.
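The core Graph-of-Words idea can be illustrated with a minimal sketch: build a co-occurrence graph over a sliding window and weight each term by its degree centrality. This is a simplified toy version — the schemes in the talk also use collection and label graphs, other centrality criteria, and word-embedding similarities, none of which are shown here; the function name `gow_weights` and the window size are illustrative choices.

```python
from collections import defaultdict

def gow_weights(tokens, window=2):
    """Build an undirected Graph-of-Words over a sliding window and weight
    each term by its degree centrality (fraction of distinct co-occurring
    terms). A toy stand-in for the richer weighting schemes in the talk."""
    neighbors = defaultdict(set)
    for i, term in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[j] != term:
                neighbors[term].add(tokens[j])
                neighbors[tokens[j]].add(term)
    # Normalise degrees by the number of other distinct terms in the graph.
    denom = max(len(neighbors) - 1, 1)
    return {t: len(nbrs) / denom for t, nbrs in neighbors.items()}

# "a" co-occurs with three distinct terms, the others with only one,
# so "a" receives the highest weight.
w = gow_weights("a b a c a d".split())
```

Unlike a raw term-frequency weight, this score rewards terms that interact with many distinct other terms rather than terms that merely repeat.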
AtlasNet: Multi-Atlas non-Linear Deep Networks for Medical Image Segmentation
Deep learning methods have gained increasing attention for addressing segmentation problems in medical image analysis, despite challenges inherited from the medical domain, such as limited data availability, the lack of consistent textural or salient patterns, and the high dimensionality of the data. In this paper, we introduce a novel multi-network architecture that exploits domain knowledge to address these challenges. The proposed architecture consists of multiple deep neural networks that are trained after co-aligning multiple anatomies through multi-metric deformable registration. This multi-network architecture can be trained with fewer examples and leads to better performance, robustness and generalization through consensus. Highly promising results on the challenging task of interstitial lung disease segmentation, comparable to human accuracy, demonstrate the potential of our approach.
Computational 3D Spine Imaging: From Disease Prognosis to Surgical Guidance
Speaker
Prof. Samuel Kadoury, Associate Professor at Department of Computer Engineering at Polytechnique Montreal, Canada
Time and Place
14 May 2018, 11h, Salle VI.126, Bâtiment Eiffel, CentraleSupélec
Spinal deformities such as adolescent idiopathic scoliosis are complex 3D deformations of the musculoskeletal trunk. For the past two decades, 3D spine reconstructions obtained from diagnostic scans have assisted orthopedists in assessing the severity of deformations and establishing treatment strategies. However, these procedures required significant manual intervention and were not suited for routine clinical practice. This presentation will cover computational methods recently developed in our lab, based on deep learning and statistical analysis, to automatically segment the personalized spine geometry from X-rays or pre-operative CT/MRI, classify various deformation patterns in 3D, predict disease progression and perform intra-operative guidance during surgical procedures, with the use of biomechanical simulation models and multi-modal registration. Experiments performed at the CHU Sainte-Justine Hospital on adolescent patients demonstrate the potential clinical benefit of capturing statistical variations in the spine geometry to help diagnose and treat this disease.
Some Aspects of Duality in Convex Optimization
Speaker
Prof. Patrick L. Combettes, Distinguished Professor at the Dept. of Mathematics, North Carolina State University, USA
Time and Place
4 May 2018, 11h, Salle e.212, Bâtiment Bouygues, CentraleSupélec
Representation learning techniques have gained popularity over the years. The machine learning community is well aware of several representation learning tools, viz. autoencoders, deep belief networks, convolutional neural networks and dictionary learning (similar to matrix factorization and latent factor models). While there has been extensive research on learning synthesis dictionaries, and some recent work on learning analysis dictionaries, transform learning is a new form of representation learning: a more general analysis equivalent of dictionary learning. Until now, transform learning has been confined to the signal processing community. We start from a standard algebraic model and progressively convert our intuitions and observations into a mathematical model, developing formulations aimed at learning representations from data. The major part of this talk will cover the importance of transform learning and its advantages over other representation learning techniques. Then, supervised transform learning and deep transform learning will be discussed, followed by more robust transform formulations.
Currently there are three basic frameworks in deep learning – stacked autoencoders (SAE), deep belief networks (DBN) and convolutional neural networks (CNN); SAE and DBN can be applied to arbitrary inputs, but CNN can only be applied to natural signals with local correlations (speech, images, ECG, EEG, etc.). I am working on developing a new framework for deep learning – deep dictionary learning (DDL). Just as SAE uses autoencoders as its basic units and DBN uses restricted Boltzmann machines, DDL uses dictionaries as the basic unit. DDL is formed by stacking one dictionary after another, such that the output (features) of a shallower layer feeds into the next (deeper) layer as input. The initial work on DDL was a greedy, sub-optimal solution, i.e. each of the layers was solved separately. My work has been on proposing an optimal solution that jointly learns all the layers; this is a solution for unsupervised feature extraction using DDL. Later, I worked on supervised (greedy) versions of deep dictionary learning with a plug-and-play approach. I have also developed a framework for multi-label classification problems using DDL, which has been used to solve the practical problem of Non-Intrusive Load Monitoring (NILM).
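The greedy, layer-by-layer variant described above can be sketched as follows: learn a dictionary and sparse codes for the input, then feed those codes as the input to the next layer. The inner solver here (an ISTA step on the codes alternated with a least-squares dictionary update) is a generic stand-in, not the specific algorithm from the speaker's papers, and all sizes and penalties are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def learn_layer(X, n_atoms, lam=0.1, iters=30):
    """One greedy DDL layer: alternate an ISTA step on the sparse codes Z
    with a (ridge-stabilised) least-squares update of the dictionary D,
    for the objective 0.5*||X - D Z||_F^2 + lam*||Z||_1."""
    d = X.shape[0]
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms at init
    Z = np.zeros((n_atoms, X.shape[1]))
    for _ in range(iters):
        # ISTA step: gradient step on the data-fit term, then soft threshold.
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)
        G = Z - step * D.T @ (D @ Z - X)
        Z = np.sign(G) * np.maximum(np.abs(G) - step * lam, 0.0)
        # Dictionary update: least squares with a tiny ridge for stability.
        D = X @ Z.T @ np.linalg.inv(Z @ Z.T + 1e-6 * np.eye(n_atoms))
    return D, Z

X = rng.standard_normal((16, 100))
D1, Z1 = learn_layer(X, 12)    # shallow layer: features Z1
D2, Z2 = learn_layer(Z1, 8)    # deeper layer takes Z1 as its input
```

Each layer is solved separately, which is exactly why this greedy scheme is sub-optimal; the joint formulation mentioned above instead optimises all dictionaries together.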
Iterative Regularization for General Inverse Problems
In the context of linear inverse problems, we propose and study a general iterative regularization method that accommodates large classes of regularizers and data-fit terms. We were particularly motivated by non-smooth data-fit terms, such as the Kullback–Leibler divergence or the L1 distance. We treat these problems with an algorithm, based on a primal-dual diagonal descent method, designed to solve hierarchical optimization problems. The key point of our approach is that, in the presence of noise, the number of iterations of the algorithm acts as a regularization parameter. In practice this means that the algorithm must be stopped after a certain number of iterations. This is called regularization by early stopping, an approach which has gained popularity in statistical learning. Our main results establish the convergence and stability of the algorithm, and are illustrated by experiments on image denoising comparing our approach with the more classical Tikhonov regularization method.
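The "iteration count as regularization parameter" idea is easiest to see on the simplest iterative scheme, the Landweber (gradient) iteration for a smooth least-squares data-fit term — a stand-in for the talk's primal-dual diagonal method, which handles non-smooth data fits. The problem sizes and noise level below are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 15))                 # forward operator
x_true = rng.standard_normal(15)
y = A @ x_true + 0.05 * rng.standard_normal(30)   # noisy measurements

# Landweber iteration on 0.5*||A x - y||^2; stopping it early (rather than
# iterating to the noisy least-squares solution) is what regularises.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(15)
residuals = []
for k in range(200):
    x = x - step * A.T @ (A @ x - y)
    residuals.append(np.linalg.norm(A @ x - y))
```

With step size 1/L (L the Lipschitz constant of the gradient), the residual decreases monotonically; on ill-posed problems the error to the true signal instead first decreases and then grows as the iterates start fitting the noise, which is precisely why one stops early.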
Keywords
Inverse problems, regularization, optimization, primal-dual algorithm, early stopping.