In PAI, a Learning Set organizes all of the necessary data and parameters for training, including the data samples, the data preparation parameters and the neural network settings.
Note that any necessary data preparation should be done before creating or extending a Learning Set.
Learning Set Dialog Window
To create a Learning Set or start a training run, select Edit Learning Set from the Segment + AI menu:
A dialog window appears, listing the existing learning sets in the upper section. The lower section allows new learning sets to be created and existing learning sets to be extended.
A text description can be added to the Learning Set using the Description navigation button:
Learning Set Creation
Create Learning Set opens a dialog window which allows an empty learning set to be created and saved to the database (as a component with the extension .aiSet).
The next step is to add training samples. First select the learning set in the upper 1. Learning Set list, then select Add Samples at the bottom of the 2. Samples list.
A dialog window appears for selection of the input images. The flat database view and the advanced filtering options are useful to list only the image series which are first in the Associated list. Select all appropriate series from the filtered list and activate Set Selected. The dialog window is closed and the added samples are listed in 2. Samples. For a quick quality control, the fusion of the sample image and the corresponding reference segment can be shown in the Validate Sample area by activating the Preview box:
In 3. Learning parameters, the data preparation steps, the architecture to be used, and the training settings are configured.
The available data preparation steps are:
•Anonymize Samples: Anonymizes all images selected for training or workspace export (this is particularly relevant for training on cloud-computing infrastructure).
•Crop Image: Enables cropping to the associated VOI as described in Data Preparation or to a fixed Box size.
•Pixel Size: Re-sampling of the images to a certain pixel size by either interpolation or down-sampling.
•Gaussian Smoothing: Input image smoothing to reduce noise.
•Scale Values to: Normalization of the pixel value range by scaling according to the selected method.
Note that as an alternative to such pre-processing steps, the input images could be (manually) pre-processed in other PMOD tools. In this case, however, the identical operations have to be applied to the input images before prediction.
Data preparation helps to reduce the amount of unnecessary information in the samples and standardizes images which were not acquired using the same protocol.
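As a rough illustration of what such preparation steps do to a sample, the following sketch applies cropping, down-sampling, and value scaling with NumPy. It is a generic outline of comparable image operations, not PMOD's actual implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def prepare_sample(image, crop=None, downsample=1, scale_max=1.0):
    """Illustrative preparation: crop -> down-sample -> scale values.

    'crop' is a tuple of slices (a fixed box), 'downsample' a stride,
    and the pixel value range is scaled to [0, scale_max].
    """
    img = np.asarray(image, dtype=np.float32)
    if crop is not None:                      # crop to a fixed box
        img = img[crop]
    if downsample > 1:                        # simple stride-based down-sampling
        img = img[::downsample, ::downsample]
    rng = float(img.max() - img.min())        # normalize the pixel value range
    if rng > 0:
        img = (img - img.min()) / rng * scale_max
    return img
```

Each step discards information that is irrelevant for the segmentation task, which is why the same operations must also be applied before prediction.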
Neural Network and Training Parameters
The neural network Architecture and the training parameters are selected in the area to the right.
The neural network architecture is selected from the drop-down menu Architecture. The list corresponds to the content of the Pmod4.2/resources/pai folder, where the neural network configurations are stored in sub-directories. Note that the Multichannel Segmentation architecture is designed to be retrainable. It was initially tested for the 4-input-series, 3-segment-output MICCAI BraTS example case, and was successfully reapplied for the Rat Brain Dopaminergic PET example case. This retraining is described as a Case Study later in this documentation.
The checkbox Use GPU allows you to choose between training on the CPU or on an available and compatible GPU.
The training parameters are:
•Batch Size: This parameter defines the number of samples that are processed before the internal model parameters are updated.
•Number of Epochs: Defines the number of times the learning algorithm processes the entire training data set. The length of the vector of loss values recorded in the Manifest corresponds to the number of epochs. Hence multiple epochs are required to observe the evolution of the loss value through training.
•Learning Rate: Defines the rate of change of the Weights. (For a Learning Set that has already been used for training, the final learning rate reported in the Manifest is shown.)
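The interplay of these parameters can be illustrated with a small sketch: the number of weight updates follows from the sample count, batch size, and epochs, while the learning rate scales each individual update. This is generic gradient-descent arithmetic, not PMOD code; all names are illustrative.

```python
import math

def updates_per_training(n_samples, batch_size, n_epochs):
    """One weight update per batch, repeated for every epoch."""
    batches_per_epoch = math.ceil(n_samples / batch_size)
    return batches_per_epoch * n_epochs

def sgd_step(weight, gradient, learning_rate):
    """Plain gradient-descent update: the learning rate scales each change."""
    return weight - learning_rate * gradient
```

For example, 100 samples with a batch size of 8 give 13 batches per epoch, so 50 epochs perform 650 weight updates.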
Definition of Target Segments
The reference segment image may contain more segments than actually required. The option Select mask values on the 4. Target Settings panel allows the required segments to be specified by entering their integer label values, separated by commas.
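Conceptually, restricting the reference segments to the entered label values is a simple masking operation. A sketch with NumPy (the function name and behaviour are illustrative assumptions, not PMOD's implementation):

```python
import numpy as np

def select_mask_values(segment_image, mask_values):
    """Keep only the listed label values; all other voxels become background (0)."""
    seg = np.asarray(segment_image)
    keep = np.isin(seg, mask_values)          # True where the label is wanted
    return np.where(keep, seg, 0)             # unwanted labels are zeroed out
```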
Saving of Training Result
Two files result from a neural network training, Weights and Manifest. The Weights file contains the weighting given to each layer in the network. The Manifest file contains details of the Learning Set and the training process such as the samples used for training, samples used for validation (every fifth sample), number of epochs used, batch size, volume size, and the segments in the output.
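The training/validation split recorded in the Manifest can be sketched as follows, assuming "every fifth sample" means samples 5, 10, 15, and so on; this reading of the rule is an assumption, not PMOD's documented behaviour.

```python
def split_samples(samples):
    """Hold out every fifth sample for validation (illustrative split rule)."""
    validation = samples[4::5]    # assumed: samples 5, 10, 15, ... are held out
    training = [s for i, s in enumerate(samples) if (i + 1) % 5 != 0]
    return training, validation
```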
The file locations are defined on the 5. Weights & Manifest panel, and the files will be logically attached to the current learning set.