You can view & download the original notebook on Github.

Or, click here to run these notebooks on Coiled with access to Dask clusters.

Live visualization on a map

With stackstac.show or stackstac.add_to_map, you can display your data on an interactive ipyleaflet map within your notebook. As you pan and zoom, the portion of the dask array that’s in view is computed on the fly.

By running a Dask cluster colocated with the data, you can quickly aggregate hundreds of gigabytes of imagery on the backend, then only send a few megabytes of pixels back to your browser. This gives you a very simplified version of the Google Earth Engine development experience, but with much more flexibility.


This functionality is still very much proof-of-concept, and there’s a lot to be improved in the future:

  1. Doesn’t work well on large arrays. Sadly, loading a giant global DataArray and using the map to just view small parts of it won’t work—yet. Plan on only using an array of the area you actually want to look at (passing bounds_latlon= to stackstac.stack helps with this).

  2. Resolution doesn’t change as you zoom in or out.

  3. Communication to Dask can be slow, or seem to hang temporarily.
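For the first point, a minimal sketch of restricting the stack to your area of interest up front with `bounds_latlon=` (the coordinates below are placeholders, and the final call assumes you already have STAC items):

```python
# West, south, east, north in lat/lon degrees — placeholder values for a
# small area of interest.
aoi = (-106.1, 35.5, -105.6, 35.9)
west, south, east, north = aoi
assert west < east and south < north  # the ordering bounds_latlon expects

# Then restrict the stacked array to just that area (needs STAC items):
# stack = stackstac.stack(stac_items, resolution=80, bounds_latlon=aoi)
```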

import stackstac
import satsearch
import ipyleaflet
import xarray as xr
import IPython.display as dsp
import coiled
import distributed

You definitely will need a cluster near the data for this (or to run on a beefy VM in us-west-2).

You can sign up for a Coiled account and run clusters for free; no credit card or username required, just sign in with your GitHub or Google account.

cluster = coiled.Cluster(
    n_workers=25,
    backend_options={"region": "us-west-2"},  # colocated with the Sentinel-2 data
)
client = distributed.Client(cluster)
client
  • Workers: 25
  • Cores: 100
  • Memory: 400.00 GiB

Search for Sentinel-2 data overlapping our map

m = ipyleaflet.Map()
m.center = 35.677153763176115, -105.8485489524901
m.zoom = 10
m.layout.height = "700px"

bbox = [m.west, m.south, m.east, m.north]
stac_items = satsearch.Search(
    url="https://earth-search.aws.element84.com/v0",
    bbox=bbox,
    collections=["sentinel-s2-l2a-cogs"],
    datetime="2020-04-01/2020-05-01",  # date range inferred from the output below
).items()
CPU times: user 65.1 ms, sys: 8.61 ms, total: 73.7 ms
Wall time: 4.08 s

Create the time stack

Important: the resolution you pick here is what the map will use, regardless of zoom level! When you zoom in/out on the map, the data won’t be loaded at lower or higher resolutions. (In the future, we hope to support this.)

Beware of zooming out on high-resolution data; you could trigger a massive amount of compute!

%time stack = stackstac.stack(stac_items, resolution=80)
CPU times: user 14.7 ms, sys: 1.48 ms, total: 16.2 ms
Wall time: 15.1 ms

Persist the data we want to view

By persisting all the RGB data, Dask will pre-load it and store it in memory, ready to use. That way, we can tweak what we show on the map (different composite operations, scaling, etc.) without having to re-fetch the original data every time. It also means tiles will load much faster as we pan around, since they’re already mostly computed.

It’s generally a good idea to persist at some point before viewing data on a map. Typically, you’d do this after a reduction step (like a temporal composite), but our data here is small, so it doesn’t matter much.

As a rule of thumb, try to persist after the biggest, slowest steps of your analysis, but before the steps you might want to tweak (like thresholds, scaling, etc.). If you want to tweak your big slow steps, well… be prepared to wait (and maybe don’t persist).
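A sketch of that rule of thumb on a toy dask array (the shapes and the stand-in `stack` are made up; a real stack would come from stackstac.stack):

```python
import dask.array as da

# Toy stand-in for a (time, y, x) raster stack.
stack = da.random.random((4, 256, 256), chunks=(1, 128, 128))

# Big, slow step: a temporal composite. Persisting here keeps the (much
# smaller) reduced result in memory, so it's only computed once.
composite = da.median(stack, axis=0).persist()

# Cheap, tweakable step: runs against the persisted result, so re-running
# it with different parameters is fast.
stretched = (composite - composite.min()) / (composite.max() - composite.min())
```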

rgb = stack.sel(band=["B04", "B03", "B02"]).persist()
<xarray.DataArray 'stackstac-abf58c56690c9d257a569f7ef9290c29' (time: 24, band: 3, y: 2624, x: 2622)>
dask.array<getitem, shape=(24, 3, 2624, 2622), dtype=float64, chunksize=(1, 1, 1024, 1024), chunktype=numpy.ndarray>
Coordinates: (12/24)
  * time                        (time) datetime64[ns] 2020-04-01T18:03:50 ......
    id                          (time) <U24 'S2B_13SDA_20200401_0_L2A' ... 'S...
  * band                        (band) <U8 'B04' 'B03' 'B02'
  * x                           (x) float64 3e+05 3.001e+05 ... 5.097e+05
  * y                           (y) float64 4.1e+06 4.1e+06 ... 3.89e+06
    view:off_nadir              int64 0
    ...                          ...
    sentinel:sequence           <U1 '0'
    sentinel:data_coverage      (time) float64 58.22 100.0 33.85 ... 100.0 42.69
    data_coverage               (time) float64 58.22 100.0 33.85 ... 100.0 42.69
    created                     (time) <U24 '2020-09-05T12:39:12.865Z' ... '2...
    title                       (band) object 'Band 4 (red)' ... 'Band 2 (blue)'
    epsg                        int64 32613
    spec:        RasterSpec(epsg=32613, bounds=(300000, 3890160, 509760, 4100...
    crs:         epsg:32613
    transform:   | 80.00, 0.00, 300000.00|\n| 0.00,-80.00, 4100080.00|\n| 0.0...
    resolution:  80


stackstac.add_to_map displays a DataArray on an existing ipyleaflet map. You give it a layer name—if a layer with this name already exists, it’s replaced; otherwise, it’s added. This is nice for working in notebooks, since you can re-run an add_to_map cell to adjust it, without piling up new layers.

Before continuing, you should open the distributed dashboard in another window (or use the dask-jupyterlab extension) in order to watch its progress.

m.zoom = 10

Static screenshot for docs (delete this cell if running the notebook):


stackstac.server_stats is a widget showing some under-the-hood stats about the computations currently running to generate your tiles. It shows “work bars”—like the inverse of progress bars—indicating the tasks it’s currently waiting on.

stackstac.server_stats

Make a temporal median composite, and show that on the map m above! Pan around and notice how the dask dashboard shows your progress.

comp = rgb.median("time")
stackstac.add_to_map(comp, m, "s2", range=[0, 3000])

Try changing median to mean, min, max, etc. in the cell above, and re-run. The map will update with the new layer contents (since you reused the name "s2").

Showing computed values

You can display anything you can compute with dask and xarray, not just raw data. Here, we’ll compute NDVI (Normalized Difference Vegetation Index), which indicates the health of vegetation (and is kind of a “hello world” example for remote sensing).

nir, red = stack.sel(band="B08"), stack.sel(band="B04")
ndvi = (nir - red) / (nir + red)
ndvi = ndvi.persist()
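As a quick sanity check of the NDVI formula outside of dask (the reflectance values below are made up):

```python
import numpy as np

# Made-up NIR and red reflectances for three pixels.
nir = np.array([0.8, 0.5, 0.3])
red = np.array([0.1, 0.3, 0.3])
ndvi = (nir - red) / (nir + red)
# Dense vegetation approaches 1; bare ground sits near 0.
```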

We’ll show the temporal maximum NDVI (try changing to min, median, etc.)

ndvi_comp = ndvi.max("time")

stackstac.show creates a new map for you, centers it on your array, and displays it. It’s very convenient.

stackstac.show(ndvi_comp, range=(0, 0.6), cmap="YlGn")

Static screenshot for docs (delete this cell if running the notebook):


To demonstrate more derived quantities: show each pixel’s deviation from the mean NDVI of the whole array:

anomaly = ndvi_comp - ndvi.mean()
stackstac.show(anomaly, cmap="RdYlGn")

Static screenshot for docs (delete this cell if running the notebook):


Interactively explore data with widgets

Using ipywidgets.interact, you can interactively threshold the NDVI values by adjusting a slider. It’s a bit clunky, and pretty slow to refresh, but still a nice demonstration of the powerful tools that become available by integrating with the Python ecosystem.

import ipywidgets

ndvi_map = ipyleaflet.Map()
ndvi_map.center = m.center
ndvi_map.zoom = m.zoom

@ipywidgets.interact(threshold=(0.0, 1.0, 0.1))
def explore_ndvi(threshold=0.2):
    high_ndvi = ndvi_comp.where(ndvi_comp > threshold)
    stackstac.add_to_map(high_ndvi, ndvi_map, "ndvi", range=[0, 1], cmap="YlGn")
    return ndvi_map
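What `.where` is doing, in plain NumPy terms (toy values; xarray’s `DataArray.where` masks non-passing values with NaN the same way):

```python
import numpy as np

ndvi = np.array([0.1, 0.35, 0.7])  # toy NDVI values
threshold = 0.2
# Keep values passing the condition; mask the rest with NaN so the map
# renders them as transparent.
masked = np.where(ndvi > threshold, ndvi, np.nan)
```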

Static screenshot for docs (delete this cell if running the notebook):