TFG Medical Image Assistant
Application of deep learning in medical image analysis.
Implement and deploy an application... for medical images. The goal is to build a virtual assistant for medical images.
@IFCA: Lara Lloret <email@example.com>
@Student: Carmen Garcia Bermejo <firstname.lastname@example.org>
TFG will be presented by January 2019 at the latest
What is Deep Learning?
It is a set of machine learning algorithms that try to model high-level abstractions in data using architectures composed of multiple non-linear transformations. In short, it is a way of learning an optimal representation of the given samples in several stages. This representation, built up in layers, is learned through models (neural networks).
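The "multiple non-linear transformations" above can be sketched as a two-layer forward pass in plain Python. This is a minimal illustration, not a trained model: the weights, sizes, and parameter names are arbitrary assumptions.

```python
def relu(v):
    # element-wise non-linearity: max(0, x)
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    # one linear layer: y = W x + b (W given as a list of rows)
    return [sum(w * xj for w, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(x, params):
    # two stacked transformations, the first one non-linear:
    # this layering is what makes the representation "deep"
    h = relu(dense(x, params["W1"], params["b1"]))
    return dense(h, params["W2"], params["b2"])
```

In a real application these layers would be convolutional and trained with backpropagation, but the layered structure is the same.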
So... how does it work?
Some medical images can be checked at the following links, to be used for training the convolutional neural network. More details at https://grid.ifca.es/wiki/Projects/DeepMedical/.
These are DICOM (Digital Imaging and Communications in Medicine) files. DICOM is used both as a communications protocol and as a file format, so a single file can store the medical image together with the patient's information.
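A minimal sketch of how a DICOM file can be recognized on disk: DICOM Part 10 files start with a 128-byte preamble followed by the four-byte magic prefix "DICM". This only checks the file signature; reading the actual tags and pixel data would require a library such as pydicom.

```python
def is_dicom(data: bytes) -> bool:
    # DICOM Part 10: 128-byte preamble, then the magic prefix "DICM"
    return len(data) >= 132 and data[128:132] == b"DICM"
```

Usage: read the first 132 bytes of a candidate file and pass them to `is_dicom` before attempting a full parse.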
Project tracking: how it will be done. Add subsections as needed
6-Nov-2017 meeting to check progress
- Python course started
- wiki TFG started
- deep learning intro (Master Data Science)
13-Nov-2017 meeting to check progress
- finished deep learning intro
- starting Python learning (using a digit classifier program in Jupyter)
20-Nov-2017 meeting to check progress
- Classifier of cats and dogs
4-Dec-2017 meeting to check progress
- Starting Stanford course on CNNs
11-Dec-2017 meeting to check progress
- Piano and guitar classifier
18-Dec-2017 meeting to check progress
- Piano and guitar classifier
On the use of deep learning in '''chest''' medical image analysis:
Learning to Read Chest X-Rays: Recurrent Neural Cascade Model for Automated Image Annotation. By Hoo-Chang Shin et al. 28/03/2016. Link: https://arxiv.org/abs/1603.08486 This paper presents a deep learning model that detects a disease from a medical image and annotates its context, using a publicly available radiology dataset of chest X-rays and their reports. The image annotations are mined for disease names to train convolutional neural networks (CNNs); recurrent neural networks (RNNs) are then trained to describe the context of a detected disease, based on the deep CNN features.
Application of deep learning in other medical image analysis:
Deep Learning of Feature Representation with Multiple Instance Learning for Medical Image Analysis. Link: https://pdfs.semanticscholar.org/a789/7e61633a97e648057bee95c082c1aab40d41.pdf This paper studies automatic extraction of feature representations through deep learning (DNN), using a multiple instance learning (MIL) framework for classification training on deep learning features. An algorithm is proposed that requires minimal manual annotation and learns good feature representations for high-level tasks such as classification and segmentation in medical image analysis, comparing four feature-representation experiments on a dataset of colon cancer histopathology images. The experimental results demonstrated that feature learning is superior to manual feature operators.
Deep Learning in segmentation of medical images. By Germán A. García Ferrando. Universitat Politècnica de València.
This paper studies a model based on deep learning, inspired by the encoder-decoder architecture, to obtain morphological segmentations of medical images using a reduced data set. In addition, long residual connections are used, together with the Dice coefficient (F1-Score) as the loss function. The model is evaluated in two scenarios: first, prostate segmentation on T2-weighted volumetric images acquired by magnetic resonance; and second, femoral segmentation on images acquired through X-rays.
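The Dice (F1-Score) loss mentioned above can be sketched in plain Python. This is a minimal illustration over flat lists of per-pixel values; a real segmentation model would compute it over soft probability maps on the GPU, and the epsilon term is a common smoothing assumption, not taken from the paper.

```python
def dice_loss(pred, target, eps=1e-7):
    # Dice coefficient: 2 * |A ∩ B| / (|A| + |B|), turned into a loss as 1 - Dice.
    # pred and target are flat lists of per-pixel values (probabilities / binary labels)
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

A perfect overlap gives a loss near 0, no overlap a loss near 1, which is why it works well as a segmentation objective on imbalanced masks.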
Using convolutional neural networks in other applications:
Implementing a convolutional neural network for recognizing poses in images of faces. By Paul Méndez, Julio Ibarra. 19/12/2014. The report explores the use of convolutional neural networks for recognizing out-of-plane horizontal poses of faces, with an implementation based on the open-source OpenCV libraries that classifies images of human faces into seven predetermined poses. The implementation was trained with groups of 2600 images at different sizes: 33x33, 41x41, 65x65 and 81x81, reaching a success rate of 85%.
Large-Scale Plant Classification with Deep Neural Networks. By Ignacio Heredia. Instituto de Fisica de Cantabria. Link: https://arxiv.org/pdf/1706.03736.pdf
This work studies the potential of applying deep learning techniques to plant classification and their usage for citizen science in large-scale biodiversity monitoring, using near state-of-the-art convolutional network architectures such as ResNet50. The predictions can be used as a baseline classification in citizen science communities like iNaturalist.
Application of a Convolutional Neural Network for image classification to the analysis of collisions in High Energy Physics. By Celia Fernández Madrazo et al. Link: https://arxiv.org/abs/1708.07034 The report explores the application of deep learning techniques, using convolutional neural networks, to the classification of particle collisions in High Energy Physics. The authors propose an approach that transforms physical variables into a single image intended to capture the relevant information, applying a deep learning framework to a simulation dataset. The results improve on those obtained with classical approaches.
CS231n Lecture 1: Introduction and Historical Context
A brief history of computer vision, from the evolution of the eye, which took millions of years, to nowadays. A single picture from a 10 MPX camera has more possible pixel combinations than there are atoms in the Universe.
CS231n overview, focused on image classification using CNNs, a type of deep learning architecture.
The importance of data: it is the driving force that lets a high-capacity model be trained end-to-end and helps avoid overfitting.
CS231n Lecture 2: Data-driven approach, kNN, Linear Classification 1
Image classification and some of its challenges: illumination, deformation... Data-driven approach:
1- Collect a dataset of images and labels.
2- Use machine learning to train an image classifier
3- Evaluate the classifier
K-Nearest Neighbour classifier: it finds the k nearest images; e.g., with k=5 it returns the five most similar images in the training data and predicts by majority vote among their labels.
CS231n Lecture 3: Linear Classification 2, Optimization
Multiclass SVM loss: given an example (xi, yi), where xi is the image and yi is the (integer) label, and using the shorthand s = f(xi, W) for the scores vector, the loss is Li = sum over j != yi of max(0, sj - syi + 1).
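The multiclass SVM loss for a single example can be sketched directly from its definition. The score values in the test below are illustrative numbers, not results from a real model.

```python
def svm_loss(scores, y, delta=1.0):
    # multiclass SVM (hinge) loss for one example:
    #   L_i = sum over j != y of max(0, s_j - s_y + delta)
    # scores: list of class scores s = f(x, W); y: index of the correct class
    return sum(max(0.0, s - scores[y] + delta)
               for j, s in enumerate(scores) if j != y)
```

The loss is zero once the correct class score exceeds every other score by at least the margin delta, so further increasing that gap is not rewarded.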
Weight regularization: a term R(W) added to the data loss, e.g. the L2 penalty R(W) = sum over k, l of W(k,l)^2, which prefers smaller, more spread-out weights.
- Numerical gradient: approximate, slow, easy to write.
- Analytic gradient: exact, fast, error-prone.
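The numerical gradient above can be sketched with a central-difference approximation. The step size h is a typical illustrative choice; in practice this is used only to check an analytic gradient, since it needs two function evaluations per parameter.

```python
def numerical_gradient(f, x, h=1e-5):
    # central difference: df/dx_i ≈ (f(x + h*e_i) - f(x - h*e_i)) / (2h)
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h   # nudge component i up
        xm = list(x); xm[i] -= h   # nudge component i down
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad
```

For f(x) = x0^2 + x1^2 the analytic gradient is (2*x0, 2*x1), so the approximation can be verified against it, which is exactly the gradient-check workflow the comparison above motivates.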