This notebook is from EMIT-Data-Resources

Source: How to Find and Access EMIT Data

Imported on: 2025-01-22

How to: Find and Access EMIT Data

Summary

There are currently 4 ways to find EMIT data:

  1. Earthdata Search
  2. NASA’s CMR API (earthaccess uses this)
  3. NASA’s CMR-STAC API
  4. Visions Open Access Data Portal

This notebook will explain how to access Earth Surface Mineral Dust Source Investigation (EMIT) data programmatically using the earthaccess Python library. earthaccess is an easy-to-use library that reduces finding and downloading or streaming data over HTTPS or S3 to only a few lines of code. earthaccess searches NASA's Common Metadata Repository (CMR), a metadata system that catalogs Earth Science data and associated metadata records, and can then be used to download granules or generate lists of granule search result URLs.

Requirements:
- A NASA Earthdata Login account is required to download EMIT data
- No Python setup requirements if connected to the workshop cloud instance!
- Local Only - Set up Python Environment - See setup_instructions.md in the /setup/ folder to set up a local compatible Python environment

Learning Objectives
- How to get information about data collections using earthaccess
- How to search and access EMIT data using earthaccess

Setup

Import the required packages

import os
import earthaccess
import numpy as np
import pandas as pd
import geopandas as gp
from shapely.geometry.polygon import orient
import xarray as xr
import sys
sys.path.append('../modules/')
from emit_tools import emit_xarray

Authentication

earthaccess creates and leverages Earthdata Login tokens to authenticate with NASA systems. Earthdata Login tokens expire after a month. To retrieve a token from Earthdata Login, you can either enter your username and password each time you use earthaccess, or use a .netrc file. A .netrc file is a configuration file commonly used to store login credentials for remote systems. If you don't have a .netrc file, or aren't sure whether you have one, you can use the persist argument with the login function below to create or update one, then use it for authentication.

If you do not have an Earthdata account, you can create one at https://urs.earthdata.nasa.gov/.

auth = earthaccess.login(persist=True)
print(auth.authenticated)

If you receive a message that your token has expired, use refresh_tokens() as shown below to generate a new one.

# auth.refresh_tokens()

Searching for Collections

The EMIT mission produces several collections or datasets available via the LP DAAC cloud archive.

To view what's available, we can use the search_datasets function with the keyword and provider arguments. The provider is the data location, in this case LPCLOUD. Specifying the provider isn't necessary, but the "emit" keyword can be found in the metadata for some other datasets, so additional collections may be returned without it.

# Retrieve Collections
collections = earthaccess.search_datasets(provider='LPCLOUD', keyword='emit')
# Print Quantity of Results
print(f'Collections found: {len(collections)}')

If you print the collections object you can explore all of the JSON metadata.

# # Print collections
# collections

We can also create a list of the short-name, concept-id, and version of each result collection using list comprehension. These fields are important for specifying and searching for data within collections.

collections_info = [
    {
        'short_name': c.summary()['short-name'],
        'collection_concept_id': c.summary()['concept-id'],
        'version': c.summary()['version'],
        'entry_title': c['umm']['EntryTitle']
    }
    for c in collections
]
pd.set_option('display.max_colwidth', 150)
collections_info = pd.DataFrame(collections_info)
collections_info

The collection concept-id is the best way to search for data within a collection, as it is unique to each collection. The short-name can also be used, but the version should be included with it, since multiple versions can be available under the same short name. After finding the collection you want to search, you can use its concept-id to search for granules within that collection.
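As a minimal sketch, we can pull the concept-id for a collection out of the collections_info dataframe built above. The short name 'EMITL2ARFL' (EMIT L2A Reflectance) is an assumption here; check collections_info for the exact short name and version you want.

# Grab the concept-id for the EMIT L2A Reflectance collection (short name assumed)
concept_id = collections_info.loc[
    collections_info['short_name'] == 'EMITL2ARFL', 'collection_concept_id'
].iloc[0]
print(concept_id)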

Searching for Granules

A granule can be thought of as a unique spatiotemporal grouping within a collection. To search for granules, we can use the search_data function from earthaccess and provide the arguments for our search, as in the example after the table. It's possible to narrow a search using several criteria, shown in the table below:

| Dataset Origin and Location | Spatiotemporal Parameters | Dataset Metadata Parameters |
|-----------------------------|---------------------------|-----------------------------|
| archive_center              | bounding_box              | concept_id                  |
| data_center                 | temporal                  | entry_title                 |
| daac                        | point                     | keyword                     |
| provider                    | polygon                   | version                     |
| cloud_hosted                | line                      | short_name                  |
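
As a sketch of such a search, the example below uses the concept_id pulled from collections_info above, plus a hypothetical date range and point; substitute your own spatiotemporal parameters of interest.

# Search for granules using a concept-id, a date range, and a point (lon, lat)
results = earthaccess.search_data(
    concept_id=concept_id,
    temporal=('2023-01-01', '2023-01-31'),  # hypothetical date range
    point=(-62.1123, -39.8901),             # hypothetical longitude/latitude
)
print(f'Granules found: {len(results)}')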

Working with Search Results

Whether you search with a bounding box, a point, or a polygon, spatiotemporal parameters that fall within the same single granule will return the same result. results is a list, so we can use an index to view a single result.

result = results[0]
result

We can also retrieve specific metadata for a result using .keys() since this object also acts as a dictionary.

result.keys()

Look at each of the keys to see what is available.

result['meta']
result['size']

The umm metadata contains a lot of fields, so instead of printing the entire object, we can just look at the keys.

result['umm'].keys()

One important piece of info here is the cloud cover percentage, stored under the CloudCover key.

result['umm']['CloudCover']

Another key of note is AdditionalAttributes, which contains other useful information about the EMIT granule, like solar zenith and azimuth angles.

result['umm']['AdditionalAttributes']
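
Since AdditionalAttributes in the UMM-G metadata is a list of entries with Name and Values fields, a small dictionary comprehension makes individual attributes easier to look up. The attribute name 'SOLAR_ZENITH' below is an assumption; check the printed attributes above for the exact names in your granule.

# Build a name -> values lookup from AdditionalAttributes ('SOLAR_ZENITH' name is assumed)
attributes = {attr['Name']: attr['Values'] for attr in result['umm']['AdditionalAttributes']}
attributes.get('SOLAR_ZENITH')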

From here, we can do other things, such as convert the results to a pandas dataframe, or filter the results down further using string matching and list comprehension.

pd.json_normalize(results)
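
As a minimal filtering sketch using the CloudCover field shown earlier, with an assumed 50% threshold:

# Keep only granules at or below an assumed cloud cover threshold
low_cloud_results = [g for g in results if g['umm'].get('CloudCover', 100) <= 50]
print(f'{len(low_cloud_results)} of {len(results)} granules have <= 50% cloud cover')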

Downloading or Streaming Data

After we have our results, there are 3 ways we can work with the data:

  1. Download All Assets
  2. Selectively Download Assets
  3. Access in place / Stream the data

To download the data we can simply use the download function. This will retrieve all assets associated with a granule, which is convenient if you plan to work with the data this way and need all of the assets included with the product. For the EMIT L2A Reflectance, this includes the Uncertainty and Masks files.

# earthaccess.download(results, '../../data/')

If we want to stream the data or further filter the assets for download, we first create a list of URLs nested by granule using list comprehension.

emit_results_urls = [granule.data_links() for granule in results]
emit_results_urls

Now we can split these into results for specific assets, or filter out an asset, using the following. In this example, we only want to access or download the reflectance asset.

filtered_asset_links = []
# Pick desired assets - use underscores to aid in string matching of the filenames (_RFL_, _RFLUNCERT_, _MASK_)
desired_assets = ['_RFL_']
# Step through each sublist (granule) and filter based on desired assets
for granule in emit_results_urls:
    for url in granule:
        asset_name = url.split('/')[-1]
        if any(asset in asset_name for asset in desired_assets):
            filtered_asset_links.append(url)
filtered_asset_links

After we have our filtered list, we can stream the reflectance asset or download it. Start an HTTPS session, then open the file to stream the data, or download it to save a local copy.

Stream Data

This may take a while to load the dataset.

# Get HTTPS session using Earthdata Login info
fs = earthaccess.get_fsspec_https_session()
# Retrieve granule asset ID from URL (to maintain existing naming convention)
url = filtered_asset_links[0]
granule_asset_id = url.split('/')[-1]
# Open the remote file object for streaming
fp = fs.open(url)
# Open with the `emit_xarray` function
ds = emit_xarray(fp)
ds

Download Filtered

# Get requests HTTPS session using Earthdata Login info
fs = earthaccess.get_requests_https_session()
# Retrieve granule asset ID from URL (to maintain existing naming convention)
for url in filtered_asset_links:
    granule_asset_id = url.split('/')[-1]
    # Define local filepath
    fp = f'../../data/{granule_asset_id}'
    # Download the granule asset if it doesn't exist
    if not os.path.isfile(fp):
        with fs.get(url, stream=True) as src:
            with open(fp, 'wb') as dst:
                for chunk in src.iter_content(chunk_size=64*1024*1024):
                    dst.write(chunk)

Contact Info:

Email: LPDAAC@usgs.gov
Voice: +1-866-573-3222
Organization: Land Processes Distributed Active Archive Center (LP DAAC)¹
Website: https://lpdaac.usgs.gov/
Date last modified: 11-06-2024

¹Work performed under USGS contract G15PD00467 for NASA contract NNG14HH33I.