# Deploy from Windows

Deploying Cumulus, Cumulus Dashboard, and ORCA from Windows brings some additional challenges. The goal of this page is to provide a set of modified instructions to get around common errors.
## Notes
- Choose a PREFIX that will identify your installation when in AWS. This string will be used throughout deployment.
- Connect to the NASA VPN to be able to connect to AWS.
:::warning
The VPN drastically slows down Terraform operations and limits what documentation can be viewed. Switch it off when applicable.
:::
- Commands here will use `us-west-2` as the region because of the current state of our sandbox and ESDIS recommendations. Replace consistently as needed.
- Make sure any operations in AWS are done under the correct region.
## Application
This application will be used in future steps to authenticate users.
- Go to https://uat.urs.earthdata.nasa.gov/profile
- Applications -> My Applications
:::tip
If this option is not present, then you must get the "Application Developer" permission.
:::
- Create a new Application. Remember to update with your own prefix.
  - Application ID: `PREFIX_cumulus`
  - Application Name: `PREFIX Cumulus`
  - Application Type: `OAuth 2`
  - Redirect URL: For now, `http://localhost:3000/`. Proper URLs will be defined in the ORCA deployment.
## Initial Setup
- Follow the deployment environment setup instructions.
  - You may need to install Terraform manually.
  - Only configure the default profile.
  - Keep the access keys in plain text. You will need to run `aws configure` in multiple environments.
- Create an AWS EC2 key pair by following the AWS instructions.
  - Choose the '.pem' format.
  - The naming convention is `PREFIX-key-pair.pem`.
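The AWS instructions boil down to a single CLI call. The sketch below builds that command for review rather than running it, since it must be executed under the correct profile; "myprefix" is a placeholder, and the `--query KeyMaterial --output text` flags (which write the private key to the '.pem' file) are assumed from the AWS CLI.

```shell
# Sketch: build the create-key-pair command for review before running it
# under the correct profile. "myprefix" is a placeholder for your PREFIX.
make_keypair_cmd() {
  prefix="$1"
  printf 'aws ec2 create-key-pair --key-name %s-key-pair --query KeyMaterial --output text > %s-key-pair.pem\n' \
    "$prefix" "$prefix"
}
make_keypair_cmd "myprefix"
```

Run the printed command yourself after `aws configure`.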
- Create buckets.
  - Using the same OU and region would not be ideal for a real backup system, but is sufficient for testing.
  - Required buckets are PREFIX-tf-state, PREFIX-orca-primary, PREFIX-internal, PREFIX-private, PREFIX-protected, PREFIX-public, and PREFIX-orca-reports.
  - The PREFIX-orca-* buckets go in a separate DR account. The other buckets simulate Cumulus-managed buckets and should be placed in the base account.
:::tip
An example command for creating a bucket in us-west-2. Remember to run `aws configure` for the proper account first.

```shell
aws s3api create-bucket --bucket PREFIX-tf-state --profile default --region us-west-2 --create-bucket-configuration "LocationConstraint=us-west-2"
```
:::
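To avoid typing that command seven times, a loop can generate one create-bucket command per required bucket. This is a sketch with "myprefix" as a placeholder: it only prints the commands so they can be reviewed first, and the PREFIX-orca-* commands should then be run under the DR account's credentials rather than the base account's.

```shell
# Sketch: print a create-bucket command for each required bucket.
# Run the printed PREFIX-orca-* commands under the DR account's credentials.
PREFIX="myprefix"
REGION="us-west-2"
for suffix in tf-state internal private protected public orca-primary orca-reports; do
  echo "aws s3api create-bucket --bucket ${PREFIX}-${suffix} --profile default" \
       "--region ${REGION} --create-bucket-configuration LocationConstraint=${REGION}"
done
```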
## Cumulus
- If creating a realistic setup with multiple OUs, apply "Create the ORCA Archive Bucket" to your PREFIX-orca-primary and PREFIX-orca-reports buckets.
- Run

  ```shell
  aws dynamodb create-table --table-name PREFIX-tf-locks --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --billing-mode PAY_PER_REQUEST --region us-west-2
  ```

- Run

  ```shell
  aws s3api put-bucket-versioning --bucket PREFIX-tf-state --versioning-configuration Status=Enabled
  ```
:::tip
VPC and Subnets are created by NGAP. It is recommended you copy values from an existing deployment setup.
:::
- Go to this repo and clone it to your machine.
  - It is strongly recommended to use a tested release branch rather than master. These instructions have been tested with release/v9.4.0-v3.0.1.
- Unzip.
- Remove the '.example' suffix from the terraform.tf and terraform.tfvars files in data-persistence-tf, cumulus-tf, and rds-cluster-tf.
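Windows lacks a convenient bulk-rename tool, so a small bash loop (run inside Git Bash or the docker container set up in the later steps) can strip the suffixes. The sketch below is self-contained: it demonstrates the rename on a temporary directory that mimics the repo layout, and you would point the rename loop at your clone instead.

```shell
# Self-contained sketch: set up a throwaway copy of the repo layout...
workdir="$(mktemp -d)"
for dir in data-persistence-tf cumulus-tf rds-cluster-tf; do
  mkdir -p "$workdir/$dir"
  touch "$workdir/$dir/terraform.tf.example" "$workdir/$dir/terraform.tfvars.example"
done
# ...then strip the '.example' suffix from every matching file.
for f in "$workdir"/*/*.example; do
  mv "$f" "${f%.example}"
done
ls "$workdir"/cumulus-tf
```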
- Globally find and replace `postgres_user_pw` with `db_admin_password` and `database_app_user_pw` with `db_user_password`.
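The global find-and-replace can be done from the command line with sed. A self-contained sketch, demonstrated on a temporary file; point the same sed invocation at the *.tf and *.tfvars files in your clone instead.

```shell
# Self-contained sketch: demonstrate the rename on a temporary file.
tmpfile="$(mktemp)"
printf 'postgres_user_pw = "old"\ndatabase_app_user_pw = "old"\n' > "$tmpfile"
# The replacement itself (GNU sed syntax; BSD/macOS sed needs -i '').
sed -i -e 's/postgres_user_pw/db_admin_password/g' \
       -e 's/database_app_user_pw/db_user_password/g' "$tmpfile"
cat "$tmpfile"
```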
- In each terraform.tf and terraform.tfvars, use your own prefix, region, vpc id, and subnet ids.

:::warning
The region and prefix are not always in simple variables. Do a global search for 'PREFIX' and 'us-east-1'.
:::

:::warning
Only use the non-lambda subnet id in data-persistence-tf/terraform.tfvars. In cumulus-tf use both.
:::

:::warning
Overwrite the `orca-sandbox` in `orca-sandbox-tf-locks` with your prefix as well.
:::
- In rds-cluster-tf/terraform.tfvars:
  - Use values of your choice for `db_admin_username` and `db_admin_password`.
  - Set `tags` to `{ "Deployment" = "PREFIX" }`.
  - Set `permissions_boundary_arn` to `arn:aws:iam::YOUR ACCOUNT ID:policy/NGAPShRoleBoundary`.
  - Add `rds_user_password = "CumulusD3faultPassw0rd"` and change as desired.
  - Set `provision_user_database` to `true`.
  - Set `cluster_identifier` to `"PREFIX-cumulus-db"`.
- In rds-cluster-tf/terraform.tf:
  - Set `bucket` to `"PREFIX-tf-state"`.
  - Set `key` to `"PREFIX/cumulus/terraform.tfstate"`.
  - Set `dynamodb_table` to `"PREFIX-tf-locks"`.
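Assembled, the backend configuration in rds-cluster-tf/terraform.tf would look roughly like the following sketch (placeholder values; the standard Terraform s3 backend syntax is assumed, and the rest of the file stays as shipped):

```hcl
terraform {
  backend "s3" {
    region         = "us-west-2"
    bucket         = "PREFIX-tf-state"
    key            = "PREFIX/cumulus/terraform.tfstate"
    dynamodb_table = "PREFIX-tf-locks"
  }
}
```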
- Go to https://github.com/asfadmin/CIRRUS-core/blob/master/Dockerfile and download the file to the same folder as your downloaded repo and orca folder.

:::tip
Make sure that no extension is added.
:::
- Open a command line in the same folder.
- Run `docker build -t orca .` and then `docker run -it --rm -v pathToYourFolder:/CIRRUS-core orca /bin/bash`
- The command line should now be inside a docker container. Run

  ```shell
  cd cumulus-orca-template-deploy/rds-cluster-tf/
  aws configure
  terraform init
  terraform plan
  terraform apply
  ```
- In data-persistence-tf/terraform.tfvars:
  - Set `permissions_boundary_arn` to `arn:aws:iam::12345:policy/NGAPShRoleBoundary`, replacing the `12345` with your Account Id.
  - Set `rds_user_access_secret_arn` to the `user_credentials_secret_arn` output from `terraform apply`.
  - Set `rds_security_group` to the `security_group_id` output from `terraform apply`.
  - Set `vpc_id` to your borrowed VPC.
- Run

  ```shell
  cd ../data-persistence-tf/
  aws configure
  terraform init
  terraform plan
  terraform apply
  ```
- In cumulus-tf/terraform.tfvars:
  - Replace 12345 in permissions_boundary_arn with the Account Id.
  - Add to the buckets:

    ```
    orca_default = {
      name = "PREFIX-orca-primary"
      type = "orca"
    },
    provider = {
      name = "orca-sandbox-s3-provider"
      type = "provider"
    }
    ```
:::tip
The "orca-sandbox-s3-provider" bucket contains test data.
If creating a separate environment, you can create your own bucket.
It is recommended that all buckets include the same test data.
:::
- If the CMA is not deployed, follow [the deployment instructions](https://nasa.github.io/cumulus/docs/deployment/deployment-readme#deploy-the-cumulus-message-adapter-layer) and note the version used. Must match `cumulus_message_adapter_version`.

:::tip
If you have already deployed your own CMA layer, it can be found using

```shell
aws lambda list-layers --profile default --query "Layers[?LayerName=='PREFIX-CMA-layer'].[LayerName, LayerArn, LatestMatchingVersion.LayerVersionArn]"
```
:::
- Comment out the `ecs_cluster_instance_image_id`. This will use the latest NGAP ECS image.
- `ecs_cluster_instance_subnet_ids` and `lambda_subnet_ids` should have the same two values.
- Set `urs_client_id` and `urs_client_password` to the values from your created application.
- Add an extra property `urs_url = "https://uat.urs.earthdata.nasa.gov"`
- Add your username to the `api_users`
- If you want all Orca developers to have access, set to

  ```
  api_users = [
    "bhazuka",
    "rizbi.hassan",
    "scott.saxon",
  ]
  ```
- Comment out the `archive_api_port` property and value.
- Uncomment the `key_name` property and set the value to `"PREFIX-key-pair"`
- Add this section to the bottom of the file and edit as desired:

  ```
  ## =============================================================================
  ## ORCA Variables
  ## =============================================================================
  ## REQUIRED TO BE SET
  ## -----------------------------------------------------------------------------
  ## ORCA application database user password.
  db_user_password = "This1sAS3cr3t"

  ## Default archive bucket to use
  orca_default_bucket = "PREFIX-orca-primary"

  ## PostgreSQL database (root) user password
  db_admin_password = "An0th3rS3cr3t"
  ```
:::warning
The instructions in the tfvars file suggest swapping '12345' with your account ID. This may not work, depending on how your dependencies were set up.
:::
- In cumulus-tf/orca.tf:
- Remove the `aws_profile` and `region` variables.
- Replace the `ORCA Variables` section with the following:
```hcl
## --------------------------
## ORCA Variables
## --------------------------
## REQUIRED
orca_default_bucket = var.orca_default_bucket
db_admin_password = var.db_admin_password
db_user_password = var.db_user_password
db_host_endpoint = var.db_host_endpoint
rds_security_group_id = var.rds_security_group_id
## OPTIONAL
db_admin_username = "postgres"
orca_delete_old_reconcile_jobs_frequency_cron = "cron(0 0 ? * SUN *)"
orca_ingest_lambda_memory_size = 2240
orca_ingest_lambda_timeout = 600
orca_internal_reconciliation_expiration_days = 30
orca_recovery_buckets = []
orca_recovery_complete_filter_prefix = ""
orca_recovery_expiration_days = 5
orca_recovery_lambda_memory_size = 128
orca_recovery_lambda_timeout = 300
orca_recovery_retry_limit = 3
orca_recovery_retry_interval = 1
orca_recovery_retry_backoff = 2
s3_inventory_queue_message_retention_time_seconds = 432000
s3_report_frequency = "Daily"
sqs_delay_time_seconds = 0
sqs_maximum_message_size = 262144
staged_recovery_queue_message_retention_time_seconds = 432000
status_update_queue_message_retention_time_seconds = 777600
```
- Set the value of `db_host_endpoint` to the `rds_endpoint` value output from the rds-cluster deployment.
- Set the value of `rds_security_group_id` to the `security_group_id` value output from the rds-cluster deployment.
- You may change `source` to an alternate release. If local, make sure it is within the scope of the container.
- Run

  ```shell
  cd ../cumulus-tf
  terraform init
  terraform plan
  terraform apply
  ```
- Go to https://uat.urs.earthdata.nasa.gov/profile
- Applications -> My Applications
- Click on the Edit button for your application.
- Click on Manage -> Redirect Uris
- Add http://localhost:3000/auth and the `archive_api_redirect_uri` and `distribution_redirect_uri` output from `terraform apply`.