seqgra.evaluator.explainer.real_time.saliency_eval module

class RealTimeSaliencyExplainer(model_dir, cuda=True, return_classification_logits=False)[source]

Bases: object

explain(inp, ind)[source]
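
A minimal usage sketch of the explainer class, assuming inp is an input tensor and ind is a target class index; these argument semantics and the model directory path are assumptions, since explain is not further documented above.

    import torch

    from seqgra.evaluator.explainer.real_time.saliency_eval import \
        RealTimeSaliencyExplainer

    # hypothetical model directory; replace with the path to a trained model
    explainer = RealTimeSaliencyExplainer("models/my-model", cuda=False)

    # assumption: inp is a batch of inputs (N, C, H, W), ind is a class index
    inp = torch.rand(1, 3, 64, 64)
    saliency = explainer.explain(inp, 0)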
get_pretrained_saliency_fn(model_dir, cuda=True, return_classification_logits=False)[source]

Returns a saliency function that takes images and class selectors as inputs. If cuda=True, the model is placed on a GPU. You can also specify model_confidence: smaller values (~0) will show any object in the image that even slightly resembles the specified class, while higher values (~5) will show only the most salient parts.

Params of the saliency function:

images - input images of shape (C, H, W), or (N, C, H, W) if in a batch. Can be a numpy array, a Tensor, or a Variable.

selectors - class ids to be masked. Can be an int or an array of N integers. Again, can be a numpy array, a Tensor, or a Variable.

model_confidence - a float, 6 by default; you may want to decrease this value to obtain more complete saliency maps.

Returns a Variable of shape (N, 1, H, W) with one saliency map for each input image.
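
A minimal usage sketch of the returned saliency function, based on the parameter description above; the model directory path and input shapes are illustrative assumptions.

    import torch

    from seqgra.evaluator.explainer.real_time.saliency_eval import \
        get_pretrained_saliency_fn

    # hypothetical model directory; replace with the path to a trained model
    saliency_fn = get_pretrained_saliency_fn("models/my-model", cuda=False)

    # batch of 2 images with 3 channels and height/width 64 (illustrative)
    images = torch.rand(2, 3, 64, 64)

    # one class id per image
    selectors = torch.tensor([0, 1])

    # a value below the default of 6 yields more complete saliency maps
    saliency_maps = saliency_fn(images, selectors, model_confidence=3.0)

    print(saliency_maps.shape)  # expected: (2, 1, 64, 64)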