
Tiled inference#

Tiled inference consists of running an algorithm tile-by-tile on individual sub-patches (tiles) of the input instead of on the whole image at once. The results from each tile are then assembled to form the final result.
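To make the idea concrete, here is a minimal, framework-agnostic sketch of the split-process-stitch pattern using NumPy and a simple thresholding function. This is only an illustration of the concept, not the Server Kit implementation; the `run_tiled` helper and its `tile_size` parameter are hypothetical names used for this example.

import numpy as np

def run_tiled(image, func, tile_size=128):
    """Apply `func` to each tile of `image` and stitch the results back together."""
    result = np.zeros_like(image, dtype=bool)
    h, w = image.shape
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            tile = image[y:y + tile_size, x:x + tile_size]
            result[y:y + tile_size, x:x + tile_size] = func(tile)
    return result

# Example: a per-tile threshold (embarrassingly parallel, so tiling is safe)
mask = run_tiled(np.random.randint(0, 255, (512, 512)), lambda tile: tile > 128)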

Tiled inference can be useful in these contexts:

  • The algorithm is embarrassingly parallel, meaning each tile can be processed independently (this is not the case for instance segmentation, for example, where objects may span several tiles).

  • Users need to see the results appear progressively (if needed, they can cancel the process before it finishes).

  • A deep learning model with a fixed context window (input size) needs to be applied to images of varying sizes.

All Server Kit algorithms are tileable. You can control the tile size (in pixels), the overlap between tiles (as a fraction), the processing order of the tiles (random or not), and the time delay introduced between the processing of consecutive tiles.
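The snippet below sketches what these four parameters mean in practice, using a plain Python loop rather than the Server Kit API; the `iter_tiles` function and its parameter names are hypothetical and only illustrate the behavior described above.

import random
import time
import numpy as np

def iter_tiles(image, tile_size=128, overlap=0.1, shuffle=False, delay=0.0):
    """Yield (y, x, tile) for overlapping tiles, optionally in random order and with a delay."""
    step = max(1, int(tile_size * (1 - overlap)))  # overlap is a fraction of the tile size
    h, w = image.shape[:2]
    positions = [(y, x) for y in range(0, h, step) for x in range(0, w, step)]
    if shuffle:
        random.shuffle(positions)  # process the tiles in random order
    for y, x in positions:
        yield y, x, image[y:y + tile_size, x:x + tile_size]
        time.sleep(delay)  # optional pause between consecutive tiles

# Process a sample image tile-by-tile with 20% overlap, in random order
image = np.random.rand(512, 512)
for y, x, tile in iter_tiles(image, tile_size=128, overlap=0.2, shuffle=True):
    _ = tile > 0.5  # placeholder per-tile computation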

Note

Tiled inference is an experimental feature. We aren’t fully done implementing and testing it yet. Expect some improvements in the near future!

Try it in Napari#

Consider this simple threshold algorithm:

import imaging_server_kit as sk
import skimage.data

@sk.algorithm
def threshold_algo(image, threshold=128):
    # Pixel-wise thresholding: each tile can be processed independently
    mask = image > threshold
    return sk.Mask(mask)

# Open the algorithm as a Napari widget
viewer = sk.to_napari(threshold_algo)

# Add a sample image to run the algorithm on
viewer.add_image(skimage.data.coins())

Before running the algorithm in Napari, you can expand the Tiled inference menu and activate the tiling functionality from the user interface.

Summary#

  • You can run algorithms tile-by-tile on subparts of the algorithm inputs.

  • This is only meaningful for embarrassingly parallel tasks.

Next steps#

In the next section, you will see how to serve your algorithm as an API.