AWS Deployment using S3

Warning

We are currently working on making the setup and upgrade process to Automate HA a seamless experience. If you are already using Chef Automate HA, or are planning to use it, please contact your customer success manager or account manager for more information.

Note

  • If you set backup_config to s3 in config.toml, backups are already configured during deployment and the steps below are not required. If you left backup_config blank, configure backups manually as described below.
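
    For reference, a minimal sketch of that setting in the deployment config.toml. The section name (architecture.aws) and the s3_bucketName field are assumptions based on an AWS deployment; verify them against the config.toml generated for your environment:

        [architecture.aws]
          # setting backup_config to "s3" configures backups during deployment
          backup_config = "s3"
          # assumed field name; the S3 bucket used for backups
          s3_bucketName = "bucket-name"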

Overview

To communicate with Amazon S3, you need an IAM role with the required policy.

Attach the IAM role to all the OpenSearch nodes and frontend nodes.

Note

If you are using the AWS managed service (Amazon OpenSearch Service), you need to create a snapshot role for OpenSearch.
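
As a reference, the sketch below shows a minimal, illustrative IAM policy granting the S3 permissions commonly needed for backup and snapshot operations on a single bucket. The bucket name is a placeholder; scope and review the policy against your own security requirements:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
          "Resource": "arn:aws:s3:::bucket-name"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
          "Resource": "arn:aws:s3:::bucket-name/*"
        }
      ]
    }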

Configuration on the Provision Host

  1. Create a .toml file, say automate.toml.
  • Refer to the content for the automate.toml file below:

    [global.v1]
      [global.v1.external.opensearch.backup]
        enable = true
        location = "s3"
    
      [global.v1.external.opensearch.backup.s3]
    
        # bucket (required): The name of the bucket
        bucket = "bucket-name"
    
        # base_path (optional):  The path within the bucket where backups should be stored
        # If base_path is not set, backups will be stored at the root of the bucket.
        base_path = "opensearch"
    
        # name of an s3 client configuration you create in your opensearch.yml
        # see https://www.open.co/guide/en/opensearch/plugins/current/repository-s3-client.html
        # for full documentation on how to configure client settings on your
        # OpenSearch nodes
        client = "default"
    
      [global.v1.external.opensearch.backup.s3.settings]
        ## The meaning of these settings is documented in the S3 Repository Plugin
        ## documentation. See the following links:
        ## https://www.open.co/guide/en/opensearch/plugins/current/repository-s3-repository.html
    
        ## Backup repo settings
        # compress = false
        # server_side_encryption = false
        # buffer_size = "100mb"
        # canned_acl = "private"
        # storage_class = "standard"
        ## Snapshot settings
        # max_snapshot_bytes_per_sec = "40mb"
        # max_restore_bytes_per_sec = "40mb"
        # chunk_size = "null"
        ## S3 client settings
        # read_timeout = "50s"
        # max_retries = 3
        # use_throttle_retries = true
        # protocol = "https"
    
      [global.v1.backups]
        location = "s3"
    
      [global.v1.backups.s3.bucket]
        # name (required): The name of the bucket
        name = "bucket-name"
    
        # endpoint (required): The endpoint for the region the bucket lives in for Automate Version 3.x.y
        # endpoint (required): For Automate Version 4.x.y, use this https://s3.amazonaws.com
        endpoint = "https://s3.amazonaws.com"
    
        # base_path (optional):  The path within the bucket where backups should be stored
        # If base_path is not set, backups will be stored at the root of the bucket.
        base_path = "automate"
    
      [global.v1.backups.s3.credentials]
        access_key = "<Your Access Key>"
        secret_key = "<Your Secret Key>"
    
  • Execute the command given below to apply the configuration to all frontend nodes.

    chef-automate config patch --frontend automate.toml
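
To confirm that the patch was applied, you can log in to a frontend node and inspect the running configuration; chef-automate config show prints the merged configuration, where the [global.v1.backups] and [global.v1.external.opensearch.backup] sections should reflect the values above:

    chef-automate config show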
    

Note

IAM Role: Assign the IAM Role to all the OpenSearch instances in the cluster created above.
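
If you attach the role with the AWS CLI, it is done through an instance profile. A sketch, assuming an instance profile containing the role already exists and using placeholder identifiers:

    # attach the instance profile (containing the IAM role) to an EC2 instance
    aws ec2 associate-iam-instance-profile \
      --instance-id <instance-id> \
      --iam-instance-profile Name=<instance-profile-name>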

Backup and Restore Commands

Backup

  • To create the backup, run the backup command from the bastion host. The backup command is shown below:

    chef-automate backup create
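
The backup is written to the configured S3 bucket. To see the available backups and their IDs (needed for the restore step below), you can run:

    chef-automate backup list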
    

Restoring the Backed-up Data from Object Storage

To restore backed-up data of Chef Automate High Availability (HA) from external AWS S3, follow the steps given below:

  • Check the status of all Chef Automate and Chef Infra Server front-end nodes by executing the chef-automate status command.

  • Log in to the same Chef Automate front-end node instance from which the backup was taken.

  • Execute the restore command from the bastion host: chef-automate backup restore s3://bucket_name/path/to/backups/BACKUP_ID --skip-preflight --s3-access-key "Access_Key" --s3-secret-key "Secret_Key".

  • In an airgapped environment, execute this restore command from the bastion host: chef-automate backup restore <object-storage-bucket-path>/backups/BACKUP_ID --airgap-bundle </path/to/bundle> --skip-preflight.

Troubleshooting

If the restore command prompts any errors, follow the steps given below.

  • Check the Chef Automate status on the Automate node by running chef-automate status.
  • Also check the Habitat service status on the Automate node by running hab svc status.
  • If the deployment service is not healthy, reload it using hab svc load chef/deployment-service.
  • Once the Automate node is healthy, try running the restore command from the bastion host again.
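
The commands referenced above, run on the Automate node in order:

    # check overall Chef Automate status
    chef-automate status
    # check the status of Habitat-supervised services
    hab svc status
    # reload the deployment service if it is not healthy
    hab svc load chef/deployment-service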
