import os
import earthaccess
import numpy as np
import pandas as pd
import geopandas as gp
from shapely.geometry.polygon import orient
import xarray as xr
import sys
sys.path.append('../modules/')
from emit_tools import emit_xarray
This notebook is from EMIT-Data-Resources
Source: How to Find and Access EMIT Data
Imported on: 2025-01-22
How to: Find and Access EMIT Data
Summary
There are currently 4 ways to find EMIT data:
- EarthData Search
- NASA’s CMR API (earthaccess uses this)
- NASA’s CMR-STAC API
- Visions Open Access Data Portal
This notebook will explain how to access Earth Surface Mineral Dust Source Investigation (EMIT) data programmatically using the earthaccess Python library. earthaccess is an easy-to-use library that reduces finding and downloading or streaming data over HTTPS or S3 to only a few lines of code. earthaccess searches NASA’s Common Metadata Repository (CMR), a metadata system that catalogs Earth Science data and associated metadata records, and can then be used to download granules or generate lists of granule search result URLs.
Requirements:
- A NASA Earthdata Login account is required to download EMIT data
- No Python setup requirements if connected to the workshop cloud instance!
- Local Only: Set up a Python Environment - See setup_instructions.md in the /setup/ folder to set up a compatible local Python environment
Learning Objectives
- How to get information about data collections using earthaccess
- How to search and access EMIT data using earthaccess
Setup
Import the required packages (these are loaded in the import cell at the top of this notebook).
Authentication
earthaccess creates and leverages Earthdata Login tokens to authenticate with NASA systems. Earthdata Login tokens expire after a month. To retrieve a token from Earthdata Login, you can either enter your username and password each time you use earthaccess, or use a .netrc file. A .netrc file is a configuration file that is commonly used to store login credentials for remote systems. If you don’t have a .netrc or don’t know if you have one or not, you can use the persist argument with the login function below to create or update an existing one, then use it for authentication.
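For reference, a typical Earthdata Login entry in a .netrc file looks like the sketch below; urs.earthdata.nasa.gov is the standard Earthdata Login host, and the credentials are placeholders. Running login with persist=True writes an equivalent entry for you.

```
machine urs.earthdata.nasa.gov
    login your_username
    password your_password
```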
If you do not have an Earthdata account, you can create one at https://urs.earthdata.nasa.gov/.
auth = earthaccess.login(persist=True)
print(auth.authenticated)
If you receive a message that your token has expired, use refresh_tokens() like below to generate a new one.
# auth.refresh_tokens()
Searching for Collections
The EMIT mission produces several collections or datasets available via the LP DAAC cloud archive.
To view what’s available, we can use the search_datasets function with the keyword and provider arguments. The provider is the data location, in this case LPCLOUD. Specifying the provider isn’t necessary, but the “emit” keyword can be found in the metadata of some other datasets, so additional collections may be returned without it.
# Retrieve Collections
collections = earthaccess.search_datasets(provider='LPCLOUD', keyword='emit')
# Print Quantity of Results
print(f'Collections found: {len(collections)}')
If you print the collections object, you can explore all of the JSON metadata.
# # Print collections
# collections
We can also create a list of the short-name, concept-id, and version of each result collection using list comprehension. These fields are important for specifying and searching for data within collections.
collections_info = [
    {'short_name': c.summary()['short-name'],
     'collection_concept_id': c.summary()['concept-id'],
     'version': c.summary()['version'],
     'entry_title': c['umm']['EntryTitle']
    } for c in collections
]
pd.set_option('display.max_colwidth', 150)
collections_info = pd.DataFrame(collections_info)
collections_info
The collection concept-id is the best way to search for data within a collection, as it is unique to each collection. The short-name can also be used; however, the version should be passed with it, as there can be multiple versions available under the same short name. After finding the collection you want to search, you can use the concept-id to search for granules within that collection.
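As a brief sketch of searching by concept-id (the search_data function is introduced in the next section, and the concept_id value below is a placeholder; substitute a collection_concept_id from the collections_info table above):

```python
# Sketch: search for granules using a collection concept-id
# (placeholder ID; copy a real one from collections_info above)
granules = earthaccess.search_data(
    concept_id='C0000000000-LPCLOUD',
    temporal=('2022-09-03', '2022-09-04'),
    count=10
)
```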
Searching for Granules
A granule can be thought of as a unique spatiotemporal grouping within a collection. To search for granules, we can use the search_data function from earthaccess and provide the arguments for our search. It’s possible to specify search products using several criteria shown in the table below:
| dataset origin and location | spatiotemporal parameters | dataset metadata parameters |
|---|---|---|
| archive_center | bounding_box | concept_id |
| data_center | temporal | entry_title |
| daac | point | keyword |
| provider | polygon | version |
| cloud_hosted | line | short_name |
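These criteria can be combined in a single call. As a minimal sketch (the daac value 'LPDAAC' and the date range here are illustrative assumptions, not part of the original example):

```python
# Sketch: mixing dataset-origin and metadata parameters in one search
# ('LPDAAC' and the dates are illustrative assumptions)
demo_results = earthaccess.search_data(
    daac='LPDAAC',
    short_name='EMITL2ARFL',
    temporal=('2022-09-01', '2022-09-30'),
    count=10
)
```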
Point Search
In this case, we specify the short_name, point coordinates, temporal range, and min and max cloud_cover percentages, as well as count, which limits the maximum number of results returned.
# Search example using a Point
results = earthaccess.search_data(
    short_name='EMITL2ARFL',
    point=(-62.1123, -39.89402),
    temporal=('2022-09-03', '2022-09-04'),
    cloud_cover=(0, 90),
    count=100
)
Bounding Box Search
You can also use a bounding box to search. To do this, we will first open a GeoJSON file containing our region of interest (ROI), then simplify it to a bounding box by getting the bounds and putting them into a Python object called a tuple. We will use the total_bounds property to get the bounding box of our ROI and add that to a tuple, which is the data type expected by the bounding_box parameter of the earthaccess search_data function.
geojson = gp.read_file('../../data/isla_gaviota.geojson')
geojson.geometry
bbox = tuple(list(geojson.total_bounds))
bbox
Now we can search for granules using the bounding box.
# Search example using bounding box
results = earthaccess.search_data(
    short_name='EMITL2ARFL',
    bounding_box=bbox,
    temporal=('2022-09-03', '2022-09-04'),
    cloud_cover=(0, 90),
    count=100
)
Polygon Search
A polygon can also be used to search. For a simple polygon without holes, we can take the GeoJSON we opened, grab the coordinates of the exterior ring vertices, and place them in a list. Note that this list of vertices must be in counter-clockwise order to be accepted by the search_data function. If necessary, the exterior ring vertices of your polygon can be reordered using the orient function from the shapely library.
# Orient External Ring Vertices
oriented = orient(geojson.geometry[0], sign=1.0)
# Create List of Exterior Ring vertices coordinates
polygon = list(oriented.exterior.coords)
polygon
With this list of coordinate pairs, we can use the polygon parameter for our search.
> Note that we overwrote the results object, because for all 3 types of spatial search, the results are the same for this example.
# Search Example using a Polygon
results = earthaccess.search_data(
    short_name='EMITL2ARFL',
    polygon=polygon,
    temporal=('2022-09-03', '2022-09-04'),
    cloud_cover=(0, 90),
    count=100
)
Working with Search Results
All three of these examples will have the same result, since the spatiotemporal parameters fall within the same single granule. results is a list, so we can use an index to view a single result.
result = results[0]
result
We can also retrieve specific metadata for a result using .keys(), since this object also acts as a dictionary.
result.keys()
Look at each of the keys to see what is available.
result['meta']
result['size']
The umm metadata contains a lot of fields, so instead of printing the entire object, we can just look at the keys.
result['umm'].keys()
One important piece of info here is the cloud cover percentage.
result['umm']['CloudCover']
Another key of note is AdditionalAttributes, which contains other useful information about the EMIT granule, like solar zenith and azimuth.
result['umm']['AdditionalAttributes']
From here, we can do other things, such as convert the results to a pandas dataframe, or filter the results down further using string matching and list comprehension.
pd.json_normalize(results)
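As an example of the latter, here is a minimal sketch of filtering with list comprehension, assuming each result's umm metadata carries a numeric CloudCover field as shown above (the 20% threshold is purely illustrative):

```python
# Sketch: keep only granules under an illustrative 20% cloud cover
low_cloud = [r for r in results if r['umm']['CloudCover'] < 20]
print(f'{len(low_cloud)} of {len(results)} granules have < 20% cloud cover')
```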
Downloading or Streaming Data
After we have our results, there are 3 ways we can work with the data:
- Download All Assets
- Selectively Download Assets
- Access in place / Stream the data

To download the data, we can simply use the download function. This will retrieve all assets associated with a granule, which is convenient if you plan to work with the data this way and need all of the assets included with the product. For the EMIT L2A Reflectance, this includes the Uncertainty and Masks files.
# earthaccess.download(results, '../../data/')
If we want to stream the data or further filter the assets for download, we first create a list of URLs nested by granule using list comprehension.
emit_results_urls = [granule.data_links() for granule in results]
emit_results_urls
Now we can also split these into results for specific assets or filter out an asset using the following. In this example, we only want to access or download reflectance.
filtered_asset_links = []
# Pick Desired Assets - Use underscores to aid in string matching of the filenames (_RFL_, _RFLUNCERT_, _MASK_)
desired_assets = ['_RFL_']
# Step through each sublist (granule) and filter based on desired assets.
for n, granule in enumerate(emit_results_urls):
    for url in granule:
        asset_name = url.split('/')[-1]
        if any(asset in asset_name for asset in desired_assets):
            filtered_asset_links.append(url)
filtered_asset_links
After we have our filtered list, we can stream the reflectance asset or download it. Start an HTTPS session, then open it to stream the data, or download it to save the file.
Stream Data
This may take a while to load the dataset.
# Get Https Session using Earthdata Login Info
fs = earthaccess.get_fsspec_https_session()
# Retrieve granule asset ID from URL (to maintain existing naming convention)
url = filtered_asset_links[0]
granule_asset_id = url.split('/')[-1]
# Open a file-like object for the remote asset
fp = fs.open(url)
# Open with `emit_xarray` function
ds = emit_xarray(fp)
ds
Download Filtered
# Get requests https Session using Earthdata Login Info
fs = earthaccess.get_requests_https_session()
for url in filtered_asset_links:
    # Retrieve granule asset ID from URL (to maintain existing naming convention)
    granule_asset_id = url.split('/')[-1]
    # Define Local Filepath
    fp = f'../../data/{granule_asset_id}'
    # Download the Granule Asset if it doesn't exist
    if not os.path.isfile(fp):
        with fs.get(url, stream=True) as src:
            with open(fp, 'wb') as dst:
                for chunk in src.iter_content(chunk_size=64*1024*1024):
                    dst.write(chunk)
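As a possible shortcut (an assumption worth verifying against the earthaccess documentation for your installed version), earthaccess.download also accepts a plain list of URLs, which would replace the manual loop above:

```python
# Assumed alternative: pass the filtered URL list directly to earthaccess.download
# earthaccess.download(filtered_asset_links, '../../data/')
```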
Contact Info:
Email: LPDAAC@usgs.gov
Voice: +1-866-573-3222
Organization: Land Processes Distributed Active Archive Center (LP DAAC)¹
Website: https://lpdaac.usgs.gov/
Date last modified: 11-06-2024
¹Work performed under USGS contract G15PD00467 for NASA contract NNG14HH33I.