Existing A2HA to Automate HA
Warning
- A2HA users can be migrated to Automate HA with a minimum Chef Automate version of 20201230192246.
This page explains how to migrate existing A2HA data to a newly deployed Chef Automate HA cluster. The migration involves the following steps:
Prerequisites
- Ability to mount, on Automate HA, the file system that was mounted to the A2HA cluster for backup purposes.
- A2HA is configured to take backups on a mounted network drive (example location: `/mnt/automate_backup`).
Migration
Run the following commands from any Automate instance in the A2HA cluster:

```bash
sudo chef-automate backup create
sudo chef-automate bootstrap bundle create bootstrap.abb
```

- The first command takes the backup on the mounted file system. You can get the mount path from the file `/hab/a2_deploy_workspace/a2ha.rb` on the bastion node.
- The second command creates the bootstrap bundle, which needs to be copied to all the frontend nodes of the Automate HA cluster.
- Once the backup completes successfully, save the backup ID. For example: `20210622065515`.
- If you want to use a previously created backup, run the following command on the Automate node to get the backup ID:

```bash
chef-automate backup list
```

```bash
Backup          State      Age
20180508201548  completed  8 minutes old
20180508201643  completed  8 minutes old
20180508201952  completed  4 minutes old
```
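The backup ID can also be picked out of the `chef-automate backup list` output programmatically. This is a hypothetical helper, not part of the product; it assumes the table layout shown above, where the newest completed backup appears last:

```shell
# Sample `chef-automate backup list` output (from this page).
backup_list='Backup          State      Age
20180508201548  completed  8 minutes old
20180508201643  completed  8 minutes old
20180508201952  completed  4 minutes old'

# Keep the ID of the last row whose State column is "completed".
latest_id=$(printf '%s\n' "$backup_list" \
  | awk '$2 == "completed" { id = $1 } END { print id }')
echo "$latest_id"    # prints 20180508201952
```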
Detach the file system from the old A2HA cluster.
Configure the backup on the Automate HA cluster. If you have not configured it yet, see Pre Backup Configuration for File System Backup. From Step 3 above, you will get the backup mount path.
Stop all the services on the frontend nodes of the Automate HA cluster. Run the below command on all the Automate and Chef Infra Server nodes:

```bash
sudo chef-automate stop
```
To run the restore command, you need the airgap bundle. Get the Automate HA airgap bundle from `/var/tmp/` on the Automate instance. Example: `frontend-4.x.y.aib`.

- If the airgap bundle is not present at `/var/tmp`, copy it from the bastion node to the Automate node.
Run the below command on the Chef Automate node of the Automate HA cluster to get the applied config:

```bash
sudo chef-automate config show > current_config.toml
```
Add the OpenSearch credentials to the applied config.

If using Chef Managed OpenSearch, add the below config to `current_config.toml` (without any changes):

```bash
[global.v1.external.opensearch.auth.basic_auth]
  username = "admin"
  password = "admin"
```

If using AWS Managed services, add the below config to `current_config.toml` (replace the placeholders with your actual credentials):
```bash
[global.v1.external.opensearch.auth]
  scheme = "aws_os"
[global.v1.external.opensearch.auth.aws_os]
  username = "<USERNAME FROM THE AWS CONSOLE>"
  password = "<PASSWORD FROM THE AWS CONSOLE>"
  access_key = "<YOUR AWS ACCESS KEY>"
  secret_key = "<YOUR AWS SECRET KEY>"
```
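For the Chef Managed OpenSearch case, appending the section can be scripted. This is a minimal sketch that assumes `current_config.toml` sits in the working directory (the `touch` is only there to make the snippet self-contained; the real file comes from `chef-automate config show`):

```shell
# Append the Chef Managed OpenSearch basic-auth section shown above
# to the applied config.
touch current_config.toml   # illustration only
cat >> current_config.toml <<'EOF'
[global.v1.external.opensearch.auth.basic_auth]
  username = "admin"
  password = "admin"
EOF

# Confirm the section landed in the file.
grep -c 'basic_auth' current_config.toml   # prints 1
```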
To restore the A2HA backup on Chef Automate HA, run the following command from any Chef Automate instance of the Chef Automate HA cluster:

```bash
sudo chef-automate backup restore /mnt/automate_backups/backups/20210622065515/ --patch-config current_config.toml --airgap-bundle /var/tmp/frontend-4.x.y.aib --skip-preflight
```
After the restore runs successfully, you will see the below message:

```bash
Success: Restored backup 20210622065515
```
Copy the `bootstrap.abb` bundle to all the frontend nodes of the Chef Automate HA cluster, then unpack the bundle on each frontend node using the below command:

```bash
sudo chef-automate bootstrap bundle unpack bootstrap.abb
```
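Copying and unpacking across many frontend nodes can be scripted from the machine that holds the bundle. This is a hypothetical dry-run sketch: the IP list and SSH user are placeholders, and `DRY_RUN` only prints the commands until you clear it:

```shell
# Hypothetical fan-out of bootstrap.abb to every frontend node.
DRY_RUN="echo"                              # set to "" to actually run
frontend_nodes=("10.0.0.1" "10.0.0.2")      # placeholder frontend IPs
ssh_user="ec2-user"                         # placeholder SSH user

for node in "${frontend_nodes[@]}"; do
  # Copy the bundle, then unpack it in place on the remote node.
  $DRY_RUN scp bootstrap.abb "${ssh_user}@${node}:/tmp/bootstrap.abb"
  $DRY_RUN ssh "${ssh_user}@${node}" \
    "sudo chef-automate bootstrap bundle unpack /tmp/bootstrap.abb"
done
```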
Start the services on all the frontend nodes with the below command:

```bash
sudo chef-automate start
```
Warning
- After the restore command runs successfully, `chef-automate config show` shows both the ElasticSearch and OpenSearch config as part of the Automate config. After the restore, Automate HA talks to OpenSearch.
- Remove the ElasticSearch config from all frontend nodes. To do that, redirect the applied config to a file and set the config again. For example:

  ```bash
  chef-automate config show > applied_config.toml
  ```

  Remove the below fields from `applied_config.toml`:

  ```bash
  [global.v1.external]
  [global.v1.external.elasticsearch]
  enable = true
  nodes = [""]
  [global.v1.external.elasticsearch.auth]
  scheme = ""
  [global.v1.external.elasticsearch.auth.basic_auth]
  username = ""
  password = ""
  [global.v1.external.elasticsearch.ssl]
  root_cert = ""
  server_name = ""
  ```

  Apply the modified config by running the below command:

  ```bash
  chef-automate config set applied_config.toml
  ```

  Run these steps on all the frontend nodes.
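Editing `applied_config.toml` by hand works, but the ElasticSearch sections can also be stripped mechanically. A minimal sketch, assuming the sections appear with the exact headers shown above; the sample file below is only for illustration, and you should review `cleaned_config.toml` before applying it:

```shell
# Sample applied_config.toml for illustration; on a real node this comes
# from `chef-automate config show > applied_config.toml`.
cat > applied_config.toml <<'EOF'
[global.v1.external]
[global.v1.external.elasticsearch]
enable = true
[global.v1.external.elasticsearch.auth]
scheme = ""
[global.v1.external.opensearch]
enable = true
EOF

# Drop the bare [global.v1.external] header and every
# [global.v1.external.elasticsearch...] section; keep everything else.
awk '
  /^\[global\.v1\.external\]$/ { next }
  /^\[global\.v1\.external\.elasticsearch/ { skip = 1; next }
  /^\[/ { skip = 0 }
  skip { next }
  { print }
' applied_config.toml > cleaned_config.toml

cat cleaned_config.toml
```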
Equivalent Commands
The following Automate HA commands are equivalent to the commands used in A2HA:

| Command | A2HA | Automate HA |
|---|---|---|
| init config existing infra | `automate-cluster-ctl config init -a existing_nodes` | `chef-automate init-config-ha existing_infra` |
| deploy | `automate-cluster-ctl deploy` | `chef-automate deploy config.toml` |
| info | `automate-cluster-ctl info` | `chef-automate info` |
| status | `chef-automate status` | `chef-automate status` |
| ssh | `automate-cluster-ctl ssh <name>` | `chef-automate ssh --hostname <name>` |
| test | `automate-cluster-ctl test` | `chef-automate test` |
| gather logs | `automate-cluster-ctl gather-logs` | `chef-automate gather-logs` |
| workspace | `automate-cluster-ctl workspace` | `chef-automate workspace [OPTIONS] SUBCOMMAND [ARG] ...` |
Troubleshooting
In case of restore failure from ElasticSearch to OpenSearch

```bash
Error: Failed to restore a snapshot
```
Get the basepath location from the A2HA cluster using the below curl request.

REQUEST

```bash
curl -XGET http://localhost:10144/_snapshot/_all?pretty -k
```

RESPONSE

Look for the `location` value in the response:

```bash
"settings" : {
  "location" : "/mnt/automate_backups/automate-elasticsearch-data/chef-automate-es6-compliance-service"
}
```

The `location` value should match the path on the OpenSearch cluster. If the `location` value is different, use the below script to create the snapshot repos:
```bash
indices=(
  chef-automate-es5-automate-cs-oc-erchef
  chef-automate-es5-compliance-service
  chef-automate-es5-event-feed-service
  chef-automate-es5-ingest-service
  chef-automate-es6-automate-cs-oc-erchef
  chef-automate-es6-compliance-service
  chef-automate-es6-event-feed-service
  chef-automate-es6-ingest-service
)

for index in "${indices[@]}"; do
  curl -XPUT -k -H 'Content-Type: application/json' http://localhost:10144/_snapshot/$index --data-binary @- << EOF
{
  "type": "fs",
  "settings": {
    "location" : "/mnt/automate_backups/automate-elasticsearch-data/$index"
  }
}
EOF
done
```
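After creating the repos, each repository's `location` can be listed again and compared with the mount path. A hypothetical verification sketch; here a saved sample response stands in for the live output of `curl -sk http://localhost:10144/_snapshot/_all?pretty`:

```shell
# Sample _snapshot/_all response fragment, standing in for live output.
response='{
  "chef-automate-es6-compliance-service" : {
    "type" : "fs",
    "settings" : {
      "location" : "/mnt/automate_backups/automate-elasticsearch-data/chef-automate-es6-compliance-service"
    }
  }
}'

# Pull out every location line so it can be eyeballed against the mount path.
locations=$(printf '%s\n' "$response" | grep -o '"location" : "[^"]*"')
echo "$locations"
```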
Note
- After the restore command runs successfully, `chef-automate config show` shows both the ElasticSearch and OpenSearch config as part of the Automate config. You can keep both configs; it won't impact the functionality. After the restore, Automate HA talks to OpenSearch.

OR

- You can remove the ElasticSearch config from Automate. To do that, redirect the applied config to a file and set the config again:

  ```bash
  chef-automate config show > applied_config.toml
  ```

  Modify `applied_config.toml` to remove the ElasticSearch config, then set the config. Set `applied_config.toml` on each frontend node manually, as removing config is not supported from the bastion. Use the below command to set the config manually:

  ```bash
  chef-automate config set applied_config.toml
  ```