Model Insights


  1. Question: How can we get more insights (as in the “Insights” menu in the experiment, as documented here), specifically Grad-CAM visualizations for all predictions (not only a sample)? Can this be customized or configured?

    Answer: No, it cannot be customized; Grad-CAMs are shown only for a random sample of the validation data. A standalone sketch for computing Grad-CAM on your own predictions outside DAI appears after this list.


  2. Question: There are no insights available when using the Image Vectorizer. Are any settings required, or is this only available for Image AutoML? (‘Interpret this model’ is disabled.)

    Answer: Correct; we do not support interpretations for experiments that include images. Only Grad-CAMs are available in the ImageAuto model.


  3. Question: How can we get the TensorBoard integration mentioned in this blog post (see the ‘TensorBoard integration’ section)? Is this currently integrated or made available?

    Answer: That blog post is quite old and predates the official ImageAuto launch in DAI. There is no way to obtain the TensorBoard logs. Only the Insights view in ImageAuto mode shows losses/metrics by epoch.
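
  Since the in-product Grad-CAMs cover only a random sample, the sketch below illustrates how Grad-CAM heatmaps could be computed for any set of predictions outside Driverless AI with plain TensorFlow/Keras. The backbone (Xception), the layer name, and the dummy input are assumptions made for this example, not DAI internals.

```python
# Illustration only -- not part of Driverless AI. Generic Grad-CAM with
# TensorFlow/Keras; model, layer name, and input are placeholder assumptions.
import numpy as np
import tensorflow as tf


def grad_cam(model, image, conv_layer_name, class_index=None):
    """Return a Grad-CAM heatmap in [0, 1] for one preprocessed image (H, W, C)."""
    # Model that exposes both the chosen conv layer's feature maps and the predictions.
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # explain the top predicted class
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)            # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # channel importance weights
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)  # weighted sum of feature maps
    cam = tf.nn.relu(cam)                                   # keep positive influence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()


# Example: stock ImageNet-pretrained Xception on a dummy image; in practice you
# would loop over every image you want explained, not just a sample.
model = tf.keras.applications.Xception(weights="imagenet")
img = tf.keras.applications.xception.preprocess_input(
    np.random.rand(299, 299, 3).astype("float32") * 255.0
)
heatmap = grad_cam(model, img, conv_layer_name="block14_sepconv2_act")
print(heatmap.shape)  # (10, 10) spatial map to upsample over the input image
```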


Image Transformer


  1. Question: In addition to the available pre-trained architectures, is it possible to load external/custom pre-trained models, e.g. by extracting them into tensorflow_image_pretrained_models_dir? Or by using a CustomTransformer?

    Answer: No, we do not support custom or other pre-trained architectures; only the ImageNet-pretrained architectures mentioned in our list are supported.


  2. Question: Do we support Shapley values when using these image embeddings (i.e. Model Actions > Shapley values)?

    Answer: No; as mentioned above, we do not provide any interpretability for experiments that include images. For context, a generic sketch of how such image embeddings are produced appears below.
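
  For context on what “image embeddings” refers to here, the following is a minimal, generic sketch (plain TensorFlow/Keras, outside Driverless AI) of how an ImageNet-pretrained backbone turns images into fixed-length embedding vectors. The choice of MobileNetV2 and average pooling is an assumption for illustration, not how DAI's Image Vectorizer is implemented.

```python
# Illustration only -- not DAI's Image Vectorizer. A headless ImageNet-pretrained
# backbone with global average pooling maps each image to one embedding vector.
import numpy as np
import tensorflow as tf

backbone = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg"
)

# Dummy batch standing in for real images; replace with your own loading pipeline.
images = np.random.rand(4, 224, 224, 3).astype("float32") * 255.0
images = tf.keras.applications.mobilenet_v2.preprocess_input(images)

embeddings = backbone.predict(images)
print(embeddings.shape)  # (4, 1280): one 1280-dimensional vector per image
```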