pymic.util package

Submodules

pymic.util.evaluation_cls module

Evaluation module for classification tasks.

pymic.util.evaluation_cls.accuracy(gt_label, pred_label)

Calculate the accuracy.

pymic.util.evaluation_cls.binary_evaluation(config)

Evaluation of binary classification performance. The arguments are given in the config dictionary. It should have the following fields:

Parameters:
  • metric_list – (list) A list of evaluation metrics. The supported metrics are {accuracy, recall, sensitivity, specificity, precision, auc}.

  • ground_truth_csv – (str) The csv file for ground truth.

  • predict_prob_csv – (str) The csv file for prediction probability.
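
A minimal usage sketch based on the fields above; the csv file names are hypothetical placeholders for your own data:

from pymic.util.evaluation_cls import binary_evaluation

# Hypothetical csv paths; replace them with your own files.
config = {
    'metric_list': ['accuracy', 'auc', 'sensitivity', 'specificity'],
    'ground_truth_csv': 'ground_truth.csv',
    'predict_prob_csv': 'predict_prob.csv',
}
binary_evaluation(config)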

pymic.util.evaluation_cls.get_evaluation_score(gt_label, pred_prob, metric)

Get an evaluation score for binary classification.

Parameters:
  • gt_label – (array) Ground truth label.

  • pred_prob – (array) Predicted positive probability.

  • metric – (str) One of the evaluation metrics in {accuracy, recall, sensitivity, specificity, precision, auc}.
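
A small sketch with synthetic labels and probabilities (illustrative values only):

import numpy as np
from pymic.util.evaluation_cls import get_evaluation_score

# Synthetic ground truth labels and predicted positive probabilities.
gt_label  = np.array([0, 1, 1, 0, 1])
pred_prob = np.array([0.2, 0.8, 0.6, 0.4, 0.9])

auc = get_evaluation_score(gt_label, pred_prob, 'auc')
acc = get_evaluation_score(gt_label, pred_prob, 'accuracy')
print(auc, acc)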

pymic.util.evaluation_cls.main()

Main function for evaluation of classification results. A configuration file is needed for running, e.g.,

pymic_evaluate_cls config.cfg

The configuration file should have an evaluation section with the following fields:

Parameters:
  • task_type – (str) cls or cls_nexcl.

  • metric_list – (list) A list of evaluation metrics. The supported metrics are {accuracy, recall, sensitivity, specificity, precision, auc}.

  • ground_truth_csv – (str) The csv file for ground truth.

  • predict_prob_csv – (str) The csv file for prediction probability.

pymic.util.evaluation_cls.nexcl_evaluation(config)

Evaluation of non-exclusive binary classification performance. The arguments are given in the config dictionary. It should have the following fields:

Parameters:
  • metric_list – (list) A list of evaluation metrics. The supported metrics are {accuracy, recall, sensitivity, specificity, precision, auc}.

  • ground_truth_csv – (str) The csv file for ground truth.

  • predict_prob_csv – (str) The csv file for prediction probability.

pymic.util.evaluation_cls.sensitivity(gt_label, pred_label)

Calculate the sensitivity for binary prediction.

pymic.util.evaluation_cls.specificity(gt_label, pred_label)

Calculate the specificity for binary prediction.

pymic.util.evaluation_seg module

Evaluation module for segmentation tasks.

pymic.util.evaluation_seg.binary_assd(s, g, spacing=None)

Get the Average Symmetric Surface Distance (ASSD) between a binary segmentation and the ground truth.

Parameters:
  • s – (numpy.array) a 2D or 3D binary image for segmentation.

  • g – (numpy.array) a 2D or 3D binary image for ground truth.

  • spacing – (list) A list for image spacing, length should be 2 or 3.

Returns:

The ASSD value.

pymic.util.evaluation_seg.binary_dice(s, g, resize=False)

Calculate the Dice score of two N-d volumes for binary segmentation.

Parameters:
  • s – The segmentation volume of numpy array.

  • g – the ground truth volume of numpy array.

  • resize – (optional, bool) If s and g have different shapes, resize s to match g. Default is False.

Returns:

The Dice value.
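
A minimal sketch with two synthetic binary masks; binary_iou (documented below) takes the same inputs:

import numpy as np
from pymic.util.evaluation_seg import binary_dice, binary_iou

# Two overlapping synthetic 3D binary masks.
g = np.zeros((8, 16, 16), np.uint8)
g[2:6, 4:12, 4:12] = 1
s = np.zeros_like(g)
s[2:6, 5:12, 4:11] = 1

dice = binary_dice(s, g)
iou  = binary_iou(s, g)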

pymic.util.evaluation_seg.binary_hd95(s, g, spacing=None)

Get the 95th percentile of the Hausdorff distance (HD95) between a binary segmentation and the ground truth.

Parameters:
  • s – (numpy.array) a 2D or 3D binary image for segmentation.

  • g – (numpy.array) a 2D or 3D binary image for ground truth.

  • spacing – (list) A list for image spacing, length should be 2 or 3.

Returns:

The HD95 value.

pymic.util.evaluation_seg.binary_iou(s, g)

Calculate the IoU score of two N-d volumes for binary segmentation.

Parameters:
  • s – The segmentation volume of numpy array.

  • g – the ground truth volume of numpy array.

Returns:

The IoU value.

pymic.util.evaluation_seg.binary_relative_volume_error(s, g)

Get the Relative Volume Error (RVE) between a binary segmentation and the ground truth.

Parameters:
  • s – (numpy.array) a 2D or 3D binary image for segmentation.

  • g – (numpy.array) a 2D or 3D binary image for ground truth.

Returns:

The RVE value.

pymic.util.evaluation_seg.dice_of_images(s_name, g_name)

Calculate the Dice score given the image names of binary segmentation and ground truth, respectively.

Parameters:
  • s_name – (str) The filename of segmentation result.

  • g_name – (str) The filename of ground truth.

Returns:

The Dice value.

pymic.util.evaluation_seg.evaluation(config)

Run evaluation of segmentation results based on a configuration dictionary config. The following fields should be provided in config:

Parameters:
  • metric_list – (list) The list of metrics for evaluation. The metric options are {dice, iou, assd, hd95, rve, volume}.

  • label_list – (list) The list of labels for evaluation.

  • label_fuse – (optional, bool) If True, fuse the labels in label_list as the foreground and the other labels as the background. Default is False.

  • organ_name – (str) The name of the organ for segmentation.

  • ground_truth_folder_root – (str) The root dir of ground truth images.

  • segmentation_folder_root – (str or list) The root dir of segmentation images. When a list is given, each list element should be the root dir of the results of one method.

  • evaluation_image_pair – (str) The csv file that provides the segmentation images and the corresponding ground truth images.

  • ground_truth_label_convert_source – (optional, list) The list of source labels for label conversion in the ground truth.

  • ground_truth_label_convert_target – (optional, list) The list of target labels for label conversion in the ground truth.

  • segmentation_label_convert_source – (optional, list) The list of source labels for label conversion in the segmentation.

  • segmentation_label_convert_target – (optional, list) The list of target labels for label conversion in the segmentation.
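
A minimal configuration sketch built from the fields above; all folder and file names are hypothetical placeholders, and only a subset of the optional fields is shown:

from pymic.util.evaluation_seg import evaluation

config = {
    'metric_list': ['dice', 'assd'],
    'label_list': [1, 2],
    'label_fuse': False,
    'organ_name': 'heart',                               # hypothetical
    'ground_truth_folder_root': './data/ground_truth',   # hypothetical
    'segmentation_folder_root': './result/unet',         # hypothetical
    'evaluation_image_pair': './config/image_pair.csv',  # hypothetical
}
evaluation(config)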

pymic.util.evaluation_seg.get_binary_evaluation_score(s_volume, g_volume, spacing, metric)

Evaluate the performance of binary segmentation using a specified metric. The metric options are {dice, iou, assd, hd95, rve, volume}.

Parameters:
  • s_volume – (numpy.array) a 2D or 3D binary image for segmentation.

  • g_volume – (numpy.array) a 2D or 3D binary image for ground truth.

  • spacing – (list) A list for image spacing, length should be 2 or 3.

  • metric – (str) The metric name.

Returns:

The metric value.

pymic.util.evaluation_seg.get_edge_points(img)

Get edge points of a binary segmentation result.

Parameters:

img – (numpy.array) a 2D or 3D array of binary segmentation.

Returns:

An edge map.

pymic.util.evaluation_seg.get_multi_class_evaluation_score(s_volume, g_volume, label_list, fuse_label, spacing, metric)

Evaluate the segmentation performance using a specified metric for a list of labels. The metric options are {dice, iou, assd, hd95, rve, volume}. If fuse_label is True, the labels in label_list will be merged as the foreground and all other labels as the background, so that the evaluation is done on a binary segmentation result.

Parameters:
  • s_volume – (numpy.array) A 2D or 3D image for segmentation.

  • g_volume – (numpy.array) A 2D or 3D image for ground truth.

  • label_list – (list) A list of target labels.

  • fuse_label – (bool) Fuse the labels in label_list or not.

  • spacing – (list) A list for image spacing, length should be 2 or 3.

  • metric – (str) The metric name.

Returns:

The metric value list.
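
A sketch with a synthetic two-class label map; with fuse_label set to False, one Dice value is returned per label in label_list:

import numpy as np
from pymic.util.evaluation_seg import get_multi_class_evaluation_score

# Synthetic ground truth with labels {0, 1, 2} and a slightly perturbed prediction.
g = np.zeros((8, 16, 16), np.uint8)
g[2:6, 4:8, 4:8]   = 1
g[2:6, 9:13, 9:13] = 2
s = g.copy()
s[2:6, 7:9, 4:8] = 1   # extra voxels predicted as label 1

dice_per_label = get_multi_class_evaluation_score(
    s, g, [1, 2], False, [1.0, 1.0, 1.0], 'dice')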

pymic.util.evaluation_seg.main()

Main function for evaluation of segmentation results. A configuration file is needed for running, e.g.,

pymic_evaluate_seg config.cfg

The configuration file should have an evaluation section. See pymic.util.evaluation_seg.evaluation for details of the configuration required.

pymic.util.general module

pymic.util.general.get_one_hot_seg(label, class_num)

Convert a segmentation label to one-hot.

Parameters:
  • label – A tensor with a shape of [N, 1, D, H, W] or [N, 1, H, W]

  • class_num – Class number.

Returns:

a one-hot tensor with a shape of [N, C, D, H, W] or [N, C, H, W].
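
A minimal sketch with a synthetic 2D label batch of three classes:

import torch
from pymic.util.general import get_one_hot_seg

# Synthetic label batch with shape [N, 1, H, W] and class indices in {0, 1, 2}.
label = torch.randint(0, 3, (2, 1, 8, 8))
one_hot = get_one_hot_seg(label, 3)   # expected shape [2, 3, 8, 8]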

pymic.util.general.keyword_match(a, b)

Test if two strings are the same when converted to lower case.

pymic.util.general.mixup(inputs, labels)

Shuffle a minibatch and do linear interpolation between images and labels. Both classification and segmentation labels are supported. The targets should be one-hot labels.

Parameters:
  • inputs – a tensor of input images with size N x C0 x H x W.

  • labels – a tensor of one-hot labels. The shape is N x C for classification tasks, and N x C x H x W for segmentation tasks.
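
A usage sketch with a synthetic classification minibatch; the assumption that the function returns the mixed inputs and labels is not stated above and should be verified against the implementation:

import torch
from pymic.util.general import mixup

# Synthetic minibatch: 4 RGB images and one-hot labels for 2 classes.
inputs = torch.rand(4, 3, 32, 32)
labels = torch.eye(2)[torch.randint(0, 2, (4,))]   # shape [4, 2]

# Assumption: mixup returns the interpolated inputs and labels.
mixed_inputs, mixed_labels = mixup(inputs, labels)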

pymic.util.general.tensor_shape_match(a, b)

Test if two tensors have the same shape.

pymic.util.image_process module

pymic.util.image_process.convert_label(label, source_list, target_list)

Convert a label map based on a source list and a target list of labels.

Parameters:
  • label – (numpy.array) The input label map.

  • source_list – A list of labels that will be converted, e.g. [0, 1, 2, 4]

  • target_list – A list of target labels, e.g. [0, 1, 2, 3]
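
A small sketch using the label lists from the example above:

import numpy as np
from pymic.util.image_process import convert_label

# Map labels {0, 1, 2, 4} to consecutive labels {0, 1, 2, 3}.
label = np.array([[0, 1],
                  [2, 4]])
converted = convert_label(label, [0, 1, 2, 4], [0, 1, 2, 3])
# converted is [[0, 1], [2, 3]]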

pymic.util.image_process.crop_ND_volume_with_bounding_box(volume, bb_min, bb_max)

Extract a subregion from an ND image.

Parameters:
  • volume – The input ND array.

  • bb_min – (list) The lower bound of the bounding box for each axis.

  • bb_max – (list) The upper bound of the bounding box for each axis.

Returns:

A cropped ND image.

pymic.util.image_process.crop_and_pad_ND_array_to_desired_shape(image, out_shape, pad_mod)

Crop and pad an image to a given shape.

Parameters:
  • image – The input ND array.

  • out_shape – (list) The desired output shape.

  • pad_mod – (str) See numpy.pad

pymic.util.image_process.get_ND_bounding_box(volume, margin=None)

Get the bounding box of nonzero region in an ND volume.

Parameters:
  • volume – An ND numpy array.

  • margin – (list) The margin of bounding box along each axis.

Return bb_min:

(list) A list for the minimal value of each axis of the bounding box.

Return bb_max:

(list) A list for the maximal value of each axis of the bounding box.
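
A sketch combining this function with crop_ND_volume_with_bounding_box (documented earlier in this module) on a synthetic volume:

import numpy as np
from pymic.util.image_process import (get_ND_bounding_box,
                                      crop_ND_volume_with_bounding_box)

# Synthetic 3D volume with a nonzero region; crop it with a 2-voxel margin per axis.
volume = np.zeros((16, 32, 32), np.float32)
volume[4:10, 8:20, 8:20] = 1.0

bb_min, bb_max = get_ND_bounding_box(volume, margin=[2, 2, 2])
roi = crop_ND_volume_with_bounding_box(volume, bb_min, bb_max)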

pymic.util.image_process.get_euclidean_distance(image, dim=3, spacing=[1.0, 1.0, 1.0])

Get the Euclidean distance transform of a 3D binary image. The output distance map is unsigned.

Parameters:
  • image – The input 3D array.

  • dim – (int) Use a 2D (dim = 2) or 3D (dim = 3) distance transform.

  • spacing – (list) The spacing along each axis.

pymic.util.image_process.get_largest_k_components(image, k=1)

Get the largest K components from a 2D or 3D binary image.

Parameters:
  • image – The input ND array for binary segmentation.

  • k – (int) The value of k.

Returns:

An output array with only the largest K components of the input.
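
A minimal sketch with a synthetic binary mask containing two components:

import numpy as np
from pymic.util.image_process import get_largest_k_components

# Synthetic 2D binary mask with one large and one small component.
mask = np.zeros((32, 32), np.uint8)
mask[2:20, 2:20]   = 1
mask[25:28, 25:28] = 1

largest = get_largest_k_components(mask, k=1)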

pymic.util.image_process.resample_sitk_image_to_given_spacing(image, spacing, order)

Resample a sitk image object to a given spacing.

Parameters:
  • image – The input sitk image object.

  • spacing – (list/tuple) Target spacing along x, y, z direction.

  • order – (int) Order for interpolation.

Returns:

A resampled sitk image object.
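
A sketch assuming a SimpleITK image loaded from a hypothetical file, resampled to 1 mm isotropic spacing with interpolation order 1:

import SimpleITK as sitk
from pymic.util.image_process import resample_sitk_image_to_given_spacing

img = sitk.ReadImage('image.nii.gz')   # hypothetical file name
resampled = resample_sitk_image_to_given_spacing(img, [1.0, 1.0, 1.0], 1)
sitk.WriteImage(resampled, 'image_resampled.nii.gz')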

pymic.util.image_process.set_ND_volume_roi_with_bounding_box_range(volume, bb_min, bb_max, sub_volume, addition=True)

Set the subregion of an ND image. If addition is True, the given sub volume is added to the target region of the original volume; otherwise it replaces that region.

Parameters:
  • volume – The input ND volume.

  • bb_min – (list) The lower bound of the bounding box for each axis.

  • bb_max – (list) The upper bound of the bounding box for each axis.

  • sub_volume – The sub volume to replace the target region of the original volume.

  • addition – (optional, bool) If True, the sub volume will be added to the target region of the input volume.

pymic.util.parse_config module

pymic.util.parse_config.is_bool(var_str)
pymic.util.parse_config.is_float(val_str)
pymic.util.parse_config.is_int(val_str)
pymic.util.parse_config.is_list(val_str)
pymic.util.parse_config.logging_config(config)
pymic.util.parse_config.parse_bool(var_str)
pymic.util.parse_config.parse_config(filename)
pymic.util.parse_config.parse_list(val_str)
pymic.util.parse_config.parse_value_from_string(val_str)
pymic.util.parse_config.synchronize_config(config)
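
A minimal sketch; config.cfg is a hypothetical file name, and the assumption that parse_config returns the parsed configuration as a nested dictionary of sections should be checked against the implementation:

from pymic.util.parse_config import parse_config

# Assumption: parse_config returns a dictionary of configuration sections.
config = parse_config('config.cfg')   # hypothetical file name
print(config.keys())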

pymic.util.post_process module

class pymic.util.post_process.PostKeepLargestComponent(params)

Bases: PostProcess

Post process by keeping the largest component.

The arguments should be given in the params dictionary, which should have the following field:

Parameters:

KeepLargestComponent_mode – (int) 1 means keep the largest component of the union of foreground classes. 2 means keep the largest component for each foreground class.
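
A construction sketch based on the field above; how the instance is applied to a prediction is not documented here, so only initialization is shown:

from pymic.util.post_process import PostKeepLargestComponent

# Mode 1: keep the largest component of the union of foreground classes.
params = {'KeepLargestComponent_mode': 1}
post_processor = PostKeepLargestComponent(params)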

class pymic.util.post_process.PostProcess(params)

Bases: object

The abstract class for post processing.

pymic.util.preprocess module

pymic.util.preprocess.get_transform_list(trans_config_file)

Create a list of transforms given a configuration file.

pymic.util.preprocess.preprocess_with_transform(transforms, img_in_name, img_out_name, lab_in_name=None, lab_out_name=None)

Apply a list of data transforms for preprocessing, such as image normalization, cropping, etc. TODO: support multi-modality preprocessing.

Parameters:
  • transforms – (list) A list of transform objects.

  • img_in_name – (str) Input file name.

  • img_out_name – (str) Output file name.

  • lab_in_name – (optional, str) If not None, load the image’s corresponding label for preprocessing as well.

  • lab_out_name – (optional, str) The output label name.
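
A sketch chaining this function with get_transform_list; all file names are hypothetical placeholders:

from pymic.util.preprocess import get_transform_list, preprocess_with_transform

transforms = get_transform_list('transform.cfg')        # hypothetical config file
preprocess_with_transform(transforms,
                          'image.nii.gz',               # hypothetical input image
                          'image_preprocessed.nii.gz')  # hypothetical output image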

pymic.util.ramps module

Functions for ramping hyperparameters up or down.

Each function takes the current training step or epoch, and the ramp length (start and end step or epoch), and returns a multiplier between 0 and 1.

pymic.util.ramps.get_rampdown_ratio(i, start, end, mode='linear')

Obtain the rampdown ratio.

Parameters:
  • i – (int) The current iteration.

  • start – (int) The start iteration.

  • end – (int) The end iteration.

  • mode – (str) Valid values are {linear, sigmoid, cosine}.

pymic.util.ramps.get_rampup_ratio(i, start, end, mode='linear')

Obtain the rampup ratio.

Parameters:
  • i – (int) The current iteration.

  • start – (int) The start iteration.

  • end – (int) The end iteration.

  • mode – (str) Valid values are {linear, sigmoid, cosine}.
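
A sketch that ramps a consistency-loss weight, a common use of this function in semi-supervised training; the weight value is a hypothetical example:

from pymic.util.ramps import get_rampup_ratio

w_max = 0.1   # maximum consistency weight (hypothetical value)
for it in (0, 50, 100, 150, 200):
    # Ramp the weight from 0 toward w_max between iterations 0 and 200.
    w = w_max * get_rampup_ratio(it, 0, 200, mode='sigmoid')
    print(it, round(w, 4))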

Module contents