Note

You can view & download the original notebook on GitHub.

Live visualization on a map

With stackstac.show or stackstac.add_to_map, you can display your data on an interactive ipyleaflet map within your notebook. As you pan and zoom, the portion of the dask array that’s in view is computed on the fly.

By running a Dask cluster colocated with the data, you can quickly aggregate hundreds of gigabytes of imagery on the backend, then only send a few megabytes of pixels back to your browser. This gives you a very simplified version of the Google Earth Engine development experience, but with much more flexibility.

Limitations

This functionality is still very much proof-of-concept, and there’s a lot to be improved in the future:

  1. Doesn’t work well on large arrays. Sadly, loading a giant global DataArray and using the map to view just small parts of it won’t work yet. Plan on stacking only the area you actually want to look at (passing bounds_latlon= to stackstac.stack helps with this).

  2. Resolution doesn’t change as you zoom in or out.

  3. Communication to Dask can be slow, or seem to hang temporarily.

  4. Requires opening port 8000 (only accessible over localhost), which may be blocked in some restrictive environments.

[1]:
import stackstac
import pystac_client
import ipyleaflet
import xarray as xr
import IPython.display as dsp
[2]:
import coiled
import distributed

You will definitely need a cluster near the data for this (or to run on a beefy VM in us-west-2).

You can sign up for a Coiled account and run clusters for free at https://cloud.coiled.io/ — no credit card or username required, just sign in with your GitHub or Google account, then connect to your cloud provider account (AWS or GCP).

[3]:
cluster = coiled.Cluster(
    name="stackstac-show",
    n_workers=22,
    worker_cpu=4,
    worker_memory="16GiB",
    package_sync=True,
    backend_options={"region": "us-west-2"},  # where the data is
)
client = distributed.Client(cluster)
client
[3]:

Client

Client-6f5f3f1a-6f7e-11ed-9177-acde48001122

Connection method: Cluster object
Cluster type: coiled.ClusterBeta
Dashboard: http://35.92.199.196:8787

Cluster Info

Search for Sentinel-2 data overlapping our map

[4]:
m = ipyleaflet.Map()
m.center = 35.677153763176115, -105.8485489524901
m.zoom = 10
m.layout.height = "700px"
m
[5]:
%%time
bbox = [m.west, m.south, m.east, m.north]
stac_items = pystac_client.Client.open(
    "https://earth-search.aws.element84.com/v1"
).search(
    bbox=bbox,
    collections=["sentinel-2-l2a"],
    datetime="2020-04-01/2020-04-15"
).item_collection()
len(stac_items)
CPU times: user 116 ms, sys: 11.8 ms, total: 128 ms
Wall time: 1.15 s
[5]:
24
[6]:
dsp.GeoJSON(stac_items.to_dict())
<IPython.display.GeoJSON object>

Create the time stack

Important: the resolution you pick here is what the map will use, regardless of zoom level! When you zoom in/out on the map, the data won’t be loaded at lower or higher resolutions. (In the future, we hope to support this.)

Beware of zooming out on high-resolution data; you could trigger a massive amount of compute!

[7]:
%time stack = stackstac.stack(stac_items, resolution=80)
CPU times: user 42.8 ms, sys: 2.7 ms, total: 45.5 ms
Wall time: 43.7 ms

Persist the data we want to view

By persisting all the RGB data, Dask will pre-load it and store it in memory, ready to use. That way, we can tweak what we show on the map (different composite operations, scaling, etc.) without having to re-fetch the original data every time. It also means tiles will load much faster as we pan around, since they’re already mostly computed.

It’s generally a good idea to persist somewhere before stackstac.show. Typically you’d do this after a reduction step (like a temporal composite), but our data here is small, so it doesn’t matter much.

As a rule of thumb, try to persist after the biggest, slowest steps of your analysis, but before the steps you might want to tweak (like thresholds, scaling, etc.). If you want to tweak your big slow steps, well… be prepared to wait (and maybe don’t persist).
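That pattern can be sketched with a toy dask-backed array (random data standing in for a real stack; in this sketch, persist runs on the local scheduler rather than a cluster):

```python
import numpy as np
import xarray as xr

# Toy stand-in for a big lazy stack (in practice, the output of stackstac.stack).
stack = xr.DataArray(
    np.random.default_rng(0).random((4, 3, 8, 8)),
    dims=("time", "band", "y", "x"),
).chunk({"time": 1})

# Big, slow step: the temporal reduction. Persist its (much smaller) result.
composite = stack.median("time").persist()

# Cheap, tweakable steps run against the persisted data, so re-running them
# (say, with a different scale factor) doesn't redo the median.
scaled = (composite * 3000).clip(0, 3000)
```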

[8]:
client.wait_for_workers(22)
[9]:
rgb = stack.sel(band=["red", "green", "blue"]).persist()
rgb
[9]:
<xarray.DataArray 'stackstac-04aaa9bf6fc59f590a424ab757077d03' (time: 24,
                                                                band: 3,
                                                                y: 2624, x: 2622)>
dask.array<getitem, shape=(24, 3, 2624, 2622), dtype=float64, chunksize=(1, 1, 1024, 1024), chunktype=numpy.ndarray>
Coordinates: (12/52)
  * time                                     (time) datetime64[ns] 2020-04-01...
    id                                       (time) <U24 'S2B_13SDA_20200401_...
  * band                                     (band) <U12 'red' 'green' 'blue'
  * x                                        (x) float64 3e+05 ... 5.097e+05
  * y                                        (y) float64 4.1e+06 ... 3.89e+06
    mgrs:utm_zone                            int64 13
    ...                                       ...
    raster:bands                             (band) object [{'nodata': 0, 'da...
    gsd                                      (band) object 10 10 10
    common_name                              (band) object 'red' 'green' 'blue'
    center_wavelength                        (band) object 0.665 0.56 0.49
    full_width_half_max                      (band) object 0.038 0.045 0.098
    epsg                                     int64 32613
Attributes:
    spec:        RasterSpec(epsg=32613, bounds=(300000, 3890160, 509760, 4100...
    crs:         epsg:32613
    transform:   | 80.00, 0.00, 300000.00|\n| 0.00,-80.00, 4100080.00|\n| 0.0...
    resolution:  80

stackstac.add_to_map

stackstac.add_to_map displays a DataArray on an existing ipyleaflet map. You give it a layer name—if a layer with this name already exists, it’s replaced; otherwise, it’s added. This is nice for working in notebooks, since you can re-run an add_to_map cell to adjust it, without piling up new layers.

Before continuing, you should open the Dask distributed dashboard in another window (or use the dask-labextension for JupyterLab) to watch the computations as you interact with the map.

[10]:
m.zoom = 10
m

Static screenshot for docs (delete this cell if running the notebook):

../_images/show-s2-median.png

stackstac.server_stats is a widget showing some under-the-hood stats about the computations currently running to generate your tiles. It shows “work bars”—like the inverse of progress bars—indicating the tasks it’s currently waiting on.

[11]:
stackstac.server_stats

Make a temporal median composite, and show that on the map m above! Pan around and notice how the dask dashboard shows your progress.

[12]:
comp = rgb.median("time")
stackstac.add_to_map(comp, m, "s2", range=[0, 3000])

Try changing median to mean, min, max, etc. in the cell above, and re-run. The map will update with the new layer contents (since you reused the name "s2").

Showing computed values#

You can display anything you can compute with dask and xarray, not just raw data. Here, we’ll compute NDVI (Normalized Difference Vegetation Index), which indicates the health of vegetation (and is kind of a “hello world” example for remote sensing).

[13]:
nir, red = stack.sel(band="nir08"), stack.sel(band="red")
ndvi = (nir - red) / (nir + red)
ndvi = ndvi.persist()

We’ll show the temporal maximum NDVI (try changing max to min, median, etc.).

[14]:
ndvi_comp = ndvi.max("time")

stackstac.show

stackstac.show creates a new map for you, centers it on your array, and displays it. It’s very convenient.

[15]:
stackstac.show(ndvi_comp, range=(0, 0.6), cmap="YlGn")

Static screenshot for docs (delete this cell if running the notebook):

../_images/show-ndvi.png

To demonstrate more derived quantities: show each pixel’s deviation from the mean NDVI of the whole array:

[16]:
anomaly = ndvi_comp - ndvi.mean()
[17]:
stackstac.show(anomaly, cmap="RdYlGn")
/Users/gabe/dev/stackstac/.venv/lib/python3.9/site-packages/stackstac/show.py:484: UserWarning: Calculating 2nd and 98th percentile of the entire array, since no range was given. This could be expensive!
  warnings.warn(

Static screenshot for docs (delete this cell if running the notebook):

../_images/show-ndvi-anomaly.png

Interactively explore data with widgets

Using ipywidgets.interact, you can interactively threshold the NDVI values by adjusting a slider. It’s a bit clunky, and pretty slow to refresh, but still a nice demonstration of the powerful tools that become available by integrating with the Python ecosystem.

[18]:
import ipywidgets

ndvi_map = ipyleaflet.Map()
ndvi_map.center = m.center
ndvi_map.zoom = m.zoom

@ipywidgets.interact(threshold=(0.0, 1.0, 0.1))
def explore_ndvi(threshold=0.2):
    high_ndvi = ndvi_comp.where(ndvi_comp > threshold)
    stackstac.add_to_map(high_ndvi, ndvi_map, "ndvi", range=[0, 1], cmap="YlGn")
    return ndvi_map

Static screenshot for docs (delete this cell if running the notebook):

../_images/show-ndvi-widget.png