Source URL: https://github.com/dwayn/aws-management-suite
This is currently very much a work in progress, and there is much that will be cleaned up and added over time. The goal of this suite is to abstract many of the common tasks related to managing cloud infrastructure in AWS and to bridge the gap between raw infrastructure management tools, like the EC2 command line tools, and configuration management tools. Initially, the tool focuses on EBS volume, raid, and snapshot management for single and raid volumes, but going forward the goal is to cover other infrastructure management needs that are not fully addressed by other tools.
EBS Volumes (managed as groups of volumes)
- Manages groups of one or more EBS volumes as if they were a single volume and handles the underlying group operations for all of the following
- Create
- Delete
- Attach
- Detach
- Automatically creates and configures software raid for multi-volume groups when creating new volume groups
- Partition and format new volume
- Mount volume group
- Unmount volume group
- Internally handles all of the metadata associated with the raid volume configuration, so that when a volume group is attached to an instance, the raid is automatically assembled and system configured
- Raid support currently requires that the `mdadm` package be installed on the destination system
- Partitioning and formatting require that `mkfs.FILESYSTEM` is available on the system for the chosen `FILESYSTEM`, e.g. `mkfs.ext3` normally exists on most distributions, but `mkfs.xfs` is only available after installing `xfsprogs`
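As a sketch of the prerequisite check implied above, the following checks the local PATH for the needed tools. This is illustrative only (the function name is invented, and the real requirement applies to the destination host, not necessarily the machine running AMS):

```python
import shutil

def prerequisites_missing(filesystem, multi_volume):
    """Return the names of required tools not found on the local PATH.

    Hypothetical pre-flight check: mkfs.<filesystem> is always needed,
    and mdadm is additionally needed for multi-volume (raid) groups.
    """
    needed = ["mkfs." + filesystem]
    if multi_volume:
        needed.append("mdadm")
    return [tool for tool in needed if shutil.which(tool) is None]

missing = prerequisites_missing("xfs", multi_volume=True)
print(all(tool in ("mkfs.xfs", "mdadm") for tool in missing))  # True
```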
EBS Snapshots (managed as groups of snapshots)
- Pre/post snapshot hooks to enable running commands/scripts on target host before and after starting snapshot to ensure consistent point in time snapshot of all volumes in a raid group
- Copy snapshot group to another region (only handled internally currently for cloning a snapshot group into a volume group in another region)
- Clone snapshot group to new volume group and optionally attach/mount on a host
- Clone latest snapshot of a volume group or host/instance + mount point
- Schedule regular snapshots of volume/raid with managed grandfather/father/son expiration
- Automatable purging of expired snapshots
Instance Management
- Instances can be created using `ams host create`
- Templates for host creation can be managed using the `ams host template ...` functionality
- `ams host create` accepts a template id or template name when creating an instance, allowing you to create a new instance or set of instances with only a single command line option
- Instance discovery has been implemented, allowing the host information to be automatically populated
- Regions and availability_zones information imported into AMS database
- Support for viewing the available key pairs for launching instances
- Support for viewing the available AMIs for instances (only currently pulls private AMIs)
Route53
- Discovery has been implemented to synchronize the local database with the current state of Route53 DNS records and health checks
- Create raw DNS record
- Create DNS record for a specific host without explicitly defining a number of the parameters that are on the host (optionally also configure a Route53 health check for the host)
- Create Route53 health checks
- Support for managing Simple, Weighted Round Robin, Failover, and Latency routing policies in Route53 records
- Delete DNS record
Instance Tagging
- Management of instance tags is supported with ability to add/edit/remove tags on single hosts or many with advanced tag based filtering
- Tags are used by `ams-inventory` to provide groups for hosts in ansible
- A number of operations now allow the actions of commands to be filtered by tags
- Integration of tagging into host creation and templates
Networking
- Discovery has been implemented to gather the information on security groups and their association across all regions
- Tools for viewing security groups
- Integration of security groups into host creation and templating
VPC
- Discovery has been implemented to gather data about VPCs and their subnets
- Tools for viewing VPCs and subnets
- Integration of VPCs and subnets into host creation and templates
- A dynamic inventory script has been added that uses the data in the AMS database to power your inventory needs for ansible
- Dynamic inventory supports managing server group hierarchies (groups of groups and groups of servers)
- Built in templating for combining tags on hosts into group names and adding hosts to these groups automatically
- Templating support includes filtering so that templates can apply to hosts with specific tag values
- Command line management of groups and templates using same script that ansible uses as inventory
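The tag-to-group templating described above can be sketched roughly as follows. The `%tagname%` placeholder syntax and the function are assumptions for illustration, not ams-inventory's actual implementation:

```python
import re

def render_group(template, tags):
    """Substitute %tagname% placeholders with a host's tag values.

    Returns None when a referenced tag is missing, so the host is
    simply not added to that group. (Placeholder syntax is invented
    here; see ams-inventory for the real templating and filtering.)
    """
    missing = False

    def substitute(match):
        nonlocal missing
        value = tags.get(match.group(1))
        if value is None:
            missing = True
            return ""
        return value

    result = re.sub(r"%(\w+)%", substitute, template)
    return None if missing else result

print(render_group("env_%env%", {"env": "prod"}))    # env_prod
print(render_group("role_%role%", {"env": "prod"}))  # None
```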
SSH client
- Password or private key based login
- Handles sudo login (password or passwordless)
- Captures stdout, stderr and exit code from command run
Changes that are made are now being tracked in the CHANGELOG
- This tool will only work on systems with python 2.6+ (due to paramiko requirements). To date it has only been tested on 2.6.6 and 2.7.6, but it should run on any 2.6.x or 2.7.x version (3.x compatibility is unlikely). If you find that it specifically does or does not work on any version, please let me know and I will add it to this list.
- The tool requires ssh and sudo access to hosts in order to accomplish tasks like mounting volumes and running system commands to start/stop services (for snapshots)
- Copy defaults.ini to /etc/ams.ini or ~/ams.ini and edit AWS, SSH and SUDO access credentials
- A MySQL database needs to be setup for tracking state. The following statements assume that the mysql database and the tool are located on the same host:
CREATE DATABASE ams; -- Create the schema
CREATE USER 'ams_user'@'localhost' IDENTIFIED WITH mysql_native_password BY 'ams_pass';
GRANT ALL PRIVILEGES ON ams.* TO 'ams_user'@'localhost'; -- Gives access to the new schema
- Edit TRACKING_DB credentials in your ams.ini file with the proper credentials for your MySQL database (default settings are configured to match the above grant with standard mysql install)
- Run `sudo pip install argcomplete` if you would like to use the bash completions (highly recommended). If you do not install with sudo, the argcomplete binaries will not be installed and tab completion from the shell will not work.
- `pip install -r requirements.txt` will install the handful of external dependencies
- You have the option of either running pip install as root, or, if you have set up a virtualenv for this tool, running pip install without root in the virtual environment
- Documentation on setting the tool up with virtualenv is planned for the future
- Suggested: add the path to the ams directory to your PATH, or add a symlink to the `ams` script in a directory in the system path
- `ams internals database install` will create the current full version of all of the tables
In order from highest to lowest priority:
- Environment variables: `AMS_*`
- Values in user's config file (`~/ams.ini`)
- Values in global config file (`/etc/ams.ini`)
- Legacy configuration values (`settings.py`)
- Values in default config file (`defaults.ini`)
- Database values
Note: User, global, and default configuration ini files are mutually exclusive so only one will be loaded. Priority order is user, global, default with user having highest priority. This may change to an override model in the future if a compelling reason is found.
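The lookup order above can be sketched as a simple layered resolver. This is an illustration of the documented priority, not the tool's actual code; the function and its parameters are invented, and `ini` stands for whichever single ini file (user, global, or default) was loaded:

```python
def resolve(name, env, ini, legacy, database):
    """Return the effective value for a setting, checking each layer
    from highest to lowest priority, or None if unset everywhere."""
    env_key = "AMS_" + name.upper()  # environment variables use an AMS_ prefix
    for value in (env.get(env_key), ini.get(name), legacy.get(name), database.get(name)):
        if value is not None:
            return value
    return None

# Environment wins over the ini file...
print(resolve("timeout", {"AMS_TIMEOUT": "30"}, {"timeout": "10"}, {}, {}))  # 30
# ...and the ini file wins over legacy/database values.
print(resolve("timeout", {}, {"timeout": "10"}, {}, {}))                     # 10
```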
This project makes use of the argcomplete library (https://github.com/kislyuk/argcomplete) to provide dynamic completion. The library is installed as part of the pip installation, but completion still needs to be enabled. Due to some multi-platform issues I experienced trying to enable global completion, I opted to use specific completion. For bash you only need to add the following to your .bashrc, .profile or .bash_profile (depending on which your OS uses) and then reload your terminal or source the file again.
eval "$(register-python-argcomplete ams)"
eval "$(register-python-argcomplete ams-inventory)"
For zsh, run the following: `activate-global-python-argcomplete --user`
Then add the following to the end of your .zshrc file:
autoload bashcompinit
bashcompinit
source ~/.bash_completion.d/python-argcomplete.sh
eval "$(register-python-argcomplete ams)"
eval "$(register-python-argcomplete ams-inventory)"
Note: you may need to install argcomplete globally using `sudo pip install argcomplete` to support autocomplete from the shell.
If you have updated the code base, just run `pip install -r requirements.txt` to install any new dependencies, then run `ams internals database upgrade` to run the update scripts for the database tables. Upgrade scripts can be run as often as you like; they do nothing if the database version matches the code version. If the database version is not in sync with the current version defined in the tool, the tool will not allow any operations until the internals database is upgraded; this is to avoid the possibility of corrupting data in the database due to mismatched expectations in the software.
All of the functionality is accessed through the command line tool `ams`. It has been implemented as a multi-level nested command parser using the argparse module. If at any point you need help, just add the `-h` or `--help` flag to the command line and it will list all available sub-commands and options for the current command level. There are still a few legacy command structures that need to be cleaned up, so there may be some minor changes to the syntax of a few of these, but I will attempt to keep these to an absolute minimum. The option `-q` or `--scriptable-output` can be passed after `ams` but before any of the subcommands to change the output into a tab-delimited format rather than the structured table that many functions display; this is to aid in writing shell scripts using AMS. E.g. `ams -q host list` will output the same data as `ams host list`, but formatted as tab-delimited with no headers or footers displayed.
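For example, a script consuming `ams -q` output only has to split each line on tabs, since there are no headers, footers, or table borders. A minimal sketch (the column order shown is an assumption for illustration):

```python
def parse_scriptable(output):
    """Parse tab-delimited `ams -q ...` output into rows of fields."""
    return [line.split("\t") for line in output.strip().splitlines()]

# Hypothetical `ams -q host list` output with instance id, name, zone columns
sample = "i-abc123\tweb01\tus-west-2a\ni-def456\tweb02\tus-west-2b\n"
rows = parse_scriptable(sample)
print(rows[0][1])  # web01
```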
With no options this lists all host entries in the database. Filtering can be done on host, instance_id, and name properties by providing a `SEARCH_FIELD` and `SEARCH_VALUE`. Wildcard matching can be done on these properties by passing `--like` (`SEARCH_FIELD` contains `LIKE`) or `--prefix` (`SEARCH_FIELD` starts with `PREFIX`). Filtering can also be done on the availability zone that the instance is in using `--zone`; this is a prefix match as well.
Hosts can also be filtered by the tags on the instances by passing one or more `--tag` options in the form `--tag tagname<OPERATOR>value`. E.g. `--tag env=prod` will match hosts that have a tag named "env" with the value "prod"; conversely `--tag env!=prod` will match hosts that do not have an "env" tag with the value "prod". Prefix and contains matching can be achieved with the `=:` and `=~` operators respectively, along with their negated versions.
Arguments:
--like LIKE string to find within 'search-field'
--prefix PREFIX string to prefix match against 'search-field'
--zone ZONE Availability zone to filter results by. This is a
prefix search so any of the following is valid with
increasing specificity: 'us', 'us-west', 'us-west-2',
'us-west-2a'
-x, --extended Show extended information on hosts
-a, --all Include terminated instances (that have been added via
discovery)
--terminated Show only terminated instances (that have been added
via discovery)
-s, --show-tags Display tags for instances
-t TAG, --tag TAG Filter instances by tag, in the form name<OPERATOR>value.
Valid operators:
= (equal)
!= (not equal)
=~ (contains/like)
!=~ (not contains/not like)
=: (prefixed by)
!=: (not prefixed by)
Eg. To match Name tag containing 'foo': --tag Name=~foo
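The operator table above can be re-implemented as a small matcher; this is an illustrative sketch of the documented semantics, not the tool's actual code. Note that a negated operator also matches hosts that lack the tag entirely, per the `--tag env!=prod` example:

```python
# Operators ordered longest first so "!=~" is found before "!=" and "=".
OPERATORS = ["!=~", "!=:", "=~", "=:", "!=", "="]

def match_tag(expr, tags):
    """Evaluate one --tag expression (name<OPERATOR>value) against a host's tags."""
    for op in OPERATORS:
        if op in expr:
            name, value = expr.split(op, 1)
            break
    else:
        raise ValueError("no operator in %r" % expr)
    actual = tags.get(name)
    if op == "=":
        return actual == value
    if op == "!=":
        return actual != value
    if op == "=~":
        return actual is not None and value in actual
    if op == "!=~":
        return actual is None or value not in actual
    if op == "=:":
        return actual is not None and actual.startswith(value)
    if op == "!=:":
        return actual is None or not actual.startswith(value)

print(match_tag("env=prod", {"env": "prod"}))      # True
print(match_tag("env!=prod", {}))                  # True (no "env" tag at all)
print(match_tag("Name=~foo", {"Name": "foobar"}))  # True
```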
Create new instance(s) with the given settings. The cli completions for many of the options are contextual, based on the options that have been provided up to the point that completion is being done. Providing region will help filter zone, ami-id, vpc-id and security-group; providing zone will help filter subnet-id; providing vpc-id will help filter security-group and subnet-id. Example: `ams host create --region us-west-2 --zone <TAB><TAB>` will give autocomplete options only for zones in the region already given.
Using --template-id or --template-name, a template can be defined for the instance creation. The set of required arguments is the same, but depending on the configuration of the template, some or all of these requirements may be fulfilled such that the only option that is required to create an instance is a template-id or template-name. Any options passed in addition to the template identifier will override values in the template with the exception of tags and security groups. Security groups will be combined into the union of the security groups defined in the template and the ones provided as arguments. Tags will be combined as the union of all tag names along with their corresponding values, and in the case of a conflict on tag name the value that is passed as an argument will be used. Contextual command line autocomplete also takes into account the values for a template (overriding template context with passed arguments) when doing completions.
Required Arguments: --region, --ami-id, --instance-type
- VPC Required Arguments: --subnet-id
- EC2 Classic Required Arguments: --zone
Arguments:
-r REGION, --region REGION
Region to create the instance in
-y INSTANCE_TYPE, --instance-type INSTANCE_TYPE
EC2 instance type
-m AMI_ID, --ami-id AMI_ID
AMI ID for the new instance
-k KEY_NAME, --key-name KEY_NAME
Keypair name to use for creating instance
-z ZONE, --zone ZONE Availability zone to create the instance in
-o, --monitoring Enable detailed cloudwatch monitoring
-v VPC_ID, --vpc-id VPC_ID
VPC ID (Not required, used to aid autocomplete for
subnet id)
-s SUBNET_ID, --subnet-id SUBNET_ID
Subnet ID for VPC
-i PRIVATE_IP, --private-ip PRIVATE_IP
Private IP address to assign to instance (VPC only)
-g SECURITY_GROUP, --security-group SECURITY_GROUP
Security group to associate with instance (supports
multiple usage)
-e, --ebs-optimized Enable EBS optimization
-n NUMBER, --number NUMBER
Number of instances to create
-a NAME, --name NAME Set the name tag for created instance
-t TAG, --tag TAG Add tag to the instance in the form tagname=tagvalue,
eg: --tag my_tag=my_value (supports multiple usage)
--template-id TEMPLATE_ID
Set a host template id to use to create instance
--template-name TEMPLATE_NAME
Set a host template name to use to create instance
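The template merge semantics described above (arguments override scalar template values, security groups are unioned, and tags are unioned by name with argument values winning on conflicts) can be sketched as follows. The field names and dict structure are illustrative, not the tool's actual schema:

```python
def merge_template(template, args):
    """Merge a host template with command-line arguments."""
    merged = dict(template)
    for key, value in args.items():
        if key == "security_groups":
            # Union of template and argument security groups
            merged["security_groups"] = sorted(
                set(template.get("security_groups", [])) | set(value))
        elif key == "tags":
            # Union by tag name; argument value wins on a name conflict
            tags = dict(template.get("tags", {}))
            tags.update(value)
            merged["tags"] = tags
        elif value is not None:
            merged[key] = value  # scalar argument overrides the template
    return merged

tpl = {"region": "us-west-2", "security_groups": ["sg-1"], "tags": {"env": "prod"}}
out = merge_template(tpl, {"security_groups": ["sg-2"],
                           "tags": {"env": "dev", "team": "ops"}})
print(out["security_groups"])  # ['sg-1', 'sg-2']
print(out["tags"])             # {'env': 'dev', 'team': 'ops'}
```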
DEPRECATED: This has been deprecated in favor of `ams host discovery` and `ams host create` for managing the data in the database.
Adds a host to the hosts table so that resources on the host can be managed. This has effectively been replaced by the host discovery and host create functionality.
Required arguments: --instance, --host, --zone
Arguments:
-i INSTANCE, --instance INSTANCE
Instance ID of the instance to add
-u UNAME, --uname UNAME
Hostname to use when setting uname on the host
(default is to use instance hostname)
-H HOSTNAME, --hostname HOSTNAME
hostname of the host (used to reference the host for
management)
-z ZONE, --zone ZONE availability zone that the instance is in
--hostname-internal HOSTNAME_INTERNAL
internal hostname (stored but not currently used)
--hostname-external HOSTNAME_EXTERNAL
external hostname (stored but not currently used)
--ip-internal IP_INTERNAL
internal IP address (stored but not currently used)
--ip-external IP_EXTERNAL
external IP address (stored but not currently used)
--ami-id AMI_ID AMI ID (stored but not currently used)
--instance-type INSTANCE_TYPE
Instance type (stored but not currently used)
--notes NOTES Notes on the instance/host (stored but not currently
used)
Edit a host's details in the database; this is particularly useful for editing the hostname, which does not get overwritten on discovery passes. Also provides the option --configure-hostname, which will ssh to the host and set the system hostname to the hostname you have configured.
Required arguments: --instance
Arguments:
-i INSTANCE, --instance INSTANCE
Instance ID of the instance to add
-u UNAME, --uname UNAME
Hostname to use when setting uname on the host
(default is to use instance hostname)
--hostname-internal HOSTNAME_INTERNAL
internal hostname (stored but not currently used)
--hostname-external HOSTNAME_EXTERNAL
external hostname (stored but not currently used)
--ip-internal IP_INTERNAL
internal IP address (stored but not currently used)
--ip-external IP_EXTERNAL
external IP address (stored but not currently used)
--ami-id AMI_ID AMI ID (stored but not currently used)
--instance-type INSTANCE_TYPE
Instance type (stored but not currently used)
--notes NOTES Notes on the instance/host (stored but not currently
used)
--name NAME Name of the host (should match the 'Name' tag in EC2
for the instance)
-H HOSTNAME, --hostname HOSTNAME
hostname of the host (used to reference the host for
management)
--configure-hostname Set the hostname on the host to the FQDN that is
currently the hostname or the uname that is currently
defined for the instance in AMS (uname will override
FQDN)
-z ZONE, --zone ZONE availability zone that the instance is in
Start, stop, reboot or terminate a host or set of hosts. Valid values for instance_action are `start`, `stop`, `reboot` and `terminate`, and one or more instance ids must be provided for the action. If `--execute` is not provided, a list of all instances that the action would be applied to is shown, but the action is not taken on the instances.
Required Arguments: instance_action, instance_id+
Arguments:
--execute Applies the action to the given instances, otherwise,
a list of instances that would be shut down is listed
Lists all available host templates, filtered by provided arguments.
Arguments:
--template-id TEMPLATE_ID
Filter by template ID
--template-name TEMPLATE_NAME
Filter by template name
-r REGION, --region REGION
Filter by region
-m AMI_ID, --ami-id AMI_ID
Filter by AMI ID
-z ZONE, --zone ZONE Filter by availability zone
-v VPC_ID, --vpc-id VPC_ID
Filter by VPC ID
-s SUBNET_ID, --subnet-id SUBNET_ID
Filter by VPC Subnet ID
-i PRIVATE_IP, --private-ip PRIVATE_IP
Filter by private IP
-a NAME, --name NAME Set the name tag for created instance
Create a new host creation template. The only absolutely required argument is a unique template name. Any of the fields that are not provided in the template, but are required for host creation, must be provided when running `ams host create`. `--tag` and `--security-group` can be provided multiple times to add multiple tags and security groups respectively.
Required Arguments: --template-name
Arguments:
-n TEMPLATE_NAME, --template-name TEMPLATE_NAME
Unique name for the template
-r REGION, --region REGION
Region to create the instance in
-y INSTANCE_TYPE, --instance-type INSTANCE_TYPE
EC2 instance type
-m AMI_ID, --ami-id AMI_ID
AMI ID for the new instance
-k KEY_NAME, --key-name KEY_NAME
Keypair name to use for creating instance
-z ZONE, --zone ZONE Availability zone to create the instance in
-o, --monitoring Enable detailed cloudwatch monitoring
-v VPC_ID, --vpc-id VPC_ID
VPC ID (Not required, used to aid autocomplete for
subnet id)
-s SUBNET_ID, --subnet-id SUBNET_ID
Subnet ID for VPC
-i PRIVATE_IP, --private-ip PRIVATE_IP
Private IP address to assign to instance (VPC only)
-g SECURITY_GROUP, --security-group SECURITY_GROUP
Security group to associate with instance (supports
multiple usage)
-e, --ebs-optimized Enable EBS optimization
-a NAME, --name NAME Set the name tag for created instance
-t TAG, --tag TAG Add tag to the instance in the form tagname=tagvalue,
eg: --tag my_tag=my_value (supports multiple usage)
Comparable to create for most options, except that template-id or template-name is required to identify the template for editing, and the arguments --remove, --remove-tag, and --remove-security-group are added. --remove allows you to clear the value for the given field so that the template no longer provides a value for it. --remove-tag and --remove-security-group disassociate host tags and security groups respectively from a template so that they will no longer be automatically added to instances newly created with the template. The --remove* arguments can each be provided multiple times as well. Note: when applying edits to a template, remove operations are applied before add operations. E.g. `ams host template edit --template-id 1 --remove-tag foo --tag foo=something` will remove tag `foo` and then create tag `foo` with value `something` (it is not required to remove a tag before adding it; in this example the `--tag` argument alone would suffice, as it updates the value of the `foo` tag).
Required Arguments: --template-id|--template-name
Arguments:
--template-id TEMPLATE_ID
Set a host template id to edit
--template-name TEMPLATE_NAME
Set a host template name to edit
-r REGION, --region REGION
Region to create the instance in
-y INSTANCE_TYPE, --instance-type INSTANCE_TYPE
EC2 instance type
-m AMI_ID, --ami-id AMI_ID
AMI ID for the new instance
-k KEY_NAME, --key-name KEY_NAME
Keypair name to use for creating instance
-z ZONE, --zone ZONE Availability zone to create the instance in
-o, --monitoring Enable detailed cloudwatch monitoring
-v VPC_ID, --vpc-id VPC_ID
VPC ID (Not required, used to aid autocomplete for
subnet id)
-s SUBNET_ID, --subnet-id SUBNET_ID
Subnet ID for VPC
-i PRIVATE_IP, --private-ip PRIVATE_IP
Private IP address to assign to instance (VPC only)
-g SECURITY_GROUP, --security-group SECURITY_GROUP
Security group to associate with instance (supports
multiple usage)
-e, --ebs-optimized Enable EBS optimization
-a NAME, --name NAME Set the name tag for created instance
-t TAG, --tag TAG Add tag to the instance in the form tagname=tagvalue,
eg: --tag my_tag=my_value (supports multiple usage)
--remove {instance-type,ami-id,key-name,zone,monitoring,vpc-id,subnet-id,private-ip,ebs-optimized,name}
Remove the value for one of the settings: instance-
type, ami-id, key-name, zone, monitoring, vpc-id,
subnet-id, private-ip, ebs-optimized, name (supports
multiple usage)
--remove-tag REMOVE_TAG
Remove a tag by name from the template (supports
multiple usage)
--remove-security-group REMOVE_SECURITY_GROUP
Remove a security group by id from the template
(supports multiple usage)
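The remove-before-add ordering documented above can be sketched as follows; the function and the template's dict structure are assumptions for illustration:

```python
def apply_template_edit(template, removes, remove_tags, add_tags):
    """Apply edits to a template: remove operations run before adds.

    `removes` clears scalar fields, `remove_tags` drops tags by name,
    and `add_tags` then sets tag values, so --remove-tag foo --tag
    foo=something leaves foo=something in the template.
    """
    edited = dict(template)
    tags = dict(edited.get("tags", {}))
    for field in removes:
        edited.pop(field, None)    # clear scalar fields first
    for name in remove_tags:
        tags.pop(name, None)       # then drop tags by name
    tags.update(add_tags)          # adds are applied last
    edited["tags"] = tags
    return edited

tpl = {"zone": "us-west-2a", "tags": {"foo": "old"}}
out = apply_template_edit(tpl, ["zone"], ["foo"], {"foo": "something"})
print(out)  # {'tags': {'foo': 'something'}}
```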
Deletes a host creation template by either name or id
Required Arguments: --template-id|--template-name
Arguments:
--template-id TEMPLATE_ID
Set a host template id to delete
--template-name TEMPLATE_NAME
Set a host template name to delete
Copy a host template to a new template
Required Arguments: --template-id|--template-name, --name
Arguments:
--template-id TEMPLATE_ID
Source template ID
--template-name TEMPLATE_NAME
Source template name
--name NAME Name for the new template
Lists the tags for an instance or group of instances. With no arguments, it will list all instances and their tags. Instances can be identified by host or name (with support for wildcard matching using --like or --prefix) or instance id. Furthermore, instances can be matched or filtered by tags using one or more --tag arguments.
Arguments:
--prefix For host/name identification, treats the given string
as a prefix
--like For host/name identification, searches for instances
that contain the given string
-t TAG, --tag TAG Filter instances by tag, in the form name<OPERATOR>value.
Valid operators:
= (equal)
!= (not equal)
=~ (contains/like)
!=~ (not contains/not like)
=: (prefixed by)
!=: (not prefixed by)
Eg. To match Name tag containing 'foo': --tag Name=~foo
-i INSTANCE, --instance INSTANCE
instance_id of an instance to manage tags
-H HOST, --host HOST hostname of an instance to manage tags
-e NAME, --name NAME name of an instance to manage tags
Adds a tag to an instance or group of instances. Tags can be standard tags (applied to the instance in AWS) or extended (only exist in AMS database and not applied in AWS). Instances can be identified by host or name (with support for wildcard matching using --like or --prefix) or instance id. Furthermore, instances can be matched or filtered by tags using one or more --tag arguments. Adding tags is disabled in the case where no filters (instance id, name, host, tag) are used to identify hosts to protect against accidental editing of tags globally.
Note: adding a tag to an instance that already exists on the instance will overwrite the value
Required arguments: tagname, tagvalue
Arguments:
--prefix For host/name identification, treats the given string
as a prefix
--like For host/name identification, searches for instances
that contain the given string
-t TAG, --tag TAG Filter instances by tag, in the form name<OPERATOR>value.
Valid operators:
= (equal)
!= (not equal)
=~ (contains/like)
!=~ (not contains/not like)
=: (prefixed by)
!=: (not prefixed by)
Eg. To match Name tag containing 'foo': --tag Name=~foo
-i INSTANCE, --instance INSTANCE
instance_id of an instance to manage tags
-H HOST, --host HOST hostname of an instance to manage tags
-e NAME, --name NAME name of an instance to manage tags
-m, --allow-multiple Allow updating tags on multiple identified instances
(otherwise add/edit/delete operations will fail if
there are multiple instances)
-p {standard,extended}, --tag-type {standard,extended}
Type of tag, standard tags are applied to the instance
in AWS, extended tags only exist in the ams database
to give you the ability to add tags beyond AWS
limitations
Modifies a tag on an instance or group of instances. Tags can be standard tags (applied to the instance in AWS) or extended (only exist in the AMS database and not applied in AWS). Instances can be identified by host or name (with support for wildcard matching using --like or --prefix) or instance id. Furthermore, instances can be matched or filtered by tags using one or more --tag arguments. Editing tags is disabled in the case where no filters (instance id, name, host, tag) are used to identify hosts, to protect against accidental editing of tags globally.
This is currently a wrapper for add but these may diverge in the future
Required arguments: tagname, tagvalue
Arguments:
--prefix For host/name identification, treats the given string
as a prefix
--like For host/name identification, searches for instances
that contain the given string
-t TAG, --tag TAG Filter instances by tag, in the form name<OPERATOR>value.
Valid operators:
= (equal)
!= (not equal)
=~ (contains/like)
!=~ (not contains/not like)
=: (prefixed by)
!=: (not prefixed by)
Eg. To match Name tag containing 'foo': --tag Name=~foo
-i INSTANCE, --instance INSTANCE
instance_id of an instance to manage tags
-H HOST, --host HOST hostname of an instance to manage tags
-e NAME, --name NAME name of an instance to manage tags
-m, --allow-multiple Allow updating tags on multiple identified instances
(otherwise add/edit/delete operations will fail if
there are multiple instances)
-p {standard,extended}, --tag-type {standard,extended}
Type of tag, standard tags are applied to the instance
in AWS, extended tags only exist in the ams database
to give you the ability to add tags beyond AWS
limitations
Removes a tag from an instance or group of instances. Instances can be identified by host or name (with support for wildcard matching using --like or --prefix) or instance id. Furthermore, instances can be matched or filtered by tags using one or more --tag arguments. Removing tags is disabled in the case where no filters (instance id, name, host, tag) are used to identify hosts, to protect against accidental editing of tags globally.
Required arguments: tagname
Arguments:
--prefix For host/name identification, treats the given string
as a prefix
--like For host/name identification, searches for instances
that contain the given string
-t TAG, --tag TAG Filter instances by tag, in the form name<OPERATOR>value.
Valid operators:
= (equal)
!= (not equal)
=~ (contains/like)
!=~ (not contains/not like)
=: (prefixed by)
!=: (not prefixed by)
Eg. To match Name tag containing 'foo': --tag Name=~foo
-i INSTANCE, --instance INSTANCE
instance_id of an instance to manage tags
-H HOST, --host HOST hostname of an instance to manage tags
-e NAME, --name NAME name of an instance to manage tags
-m, --allow-multiple Allow updating tags on multiple identified instances
(otherwise add/edit/delete operations will fail if
there are multiple instances)
Lists information about the ssh key pairs that are available
Arguments:
-r REGION, --region REGION
AWS region name
Lists information on the available AMIs
Arguments:
-r REGION, --region REGION
Filter by region
Runs host discovery to populate the hosts table, as well as other supporting tables, automatically
Arguments:
-r REGION, --region REGION
Filter by region
With no options this lists all volume groups in the database
Arguments:
--zone ZONE Availability zone to filter results by. This is a prefix
search so any of the following is valid with increasing
specificity: 'us', 'us-west', 'us-west-2', 'us-west-2a'
Lists the volume groups for a host or hosts
If `hostname` is given then it will match hostname exactly
Arguments:
--like LIKE wildcard matches hostname
--prefix PREFIX prefix matches hostname
--zone ZONE Availability zone to filter results by. This is a prefix
search so any of the following is valid with increasing
specificity: 'us', 'us-west', 'us-west-2', 'us-west-2a'
Lists the volume groups for an instance or instances
If `instance_id` is given then it will match instance_id exactly
Arguments:
--like LIKE wildcard matches instance id
--prefix PREFIX prefix matches instance id
--zone ZONE Availability zone to filter results by. This is a prefix
search so any of the following is valid with increasing
specificity: 'us', 'us-west', 'us-west-2', 'us-west-2a'
Creates a new volume group (single or multiple disk) and attaches to host. Optionally mounts the volume and configures automounting.
Required arguments: (--host | --instance), --numvols, --size
Defaults:
- stripe-block-size: `256` (256k chunk size recommended for performance of EBS stripes using xfs)
- raid-level: `0`
- filesystem: `xfs` (note: currently, due to implementation constraints, the filesystem must be one of the types that can be formatted using mkfs.*)
- iops: `None`
- mount-point: `None` (disk will not be mounted and automounting will not be configured if mount-point is not provided)
- no-automount: `false` (automounting of volumes/raids will be configured in fstab and mdadm.conf by default unless explicitly disabled)
Arguments:
-i INSTANCE, --instance INSTANCE
instance_id of an instance to attach new volume group
-H HOST, --host HOST hostname of an instance to attach new volume group
-n NUMVOLS, --numvols NUMVOLS
Number of EBS volumes to create for the new volume
group
-r {0,1,5,10}, --raid-level {0,1,5,10}
Set the raid level for new EBS raid
-b STRIPE_BLOCK_SIZE, --stripe-block-size STRIPE_BLOCK_SIZE
Set the stripe block/chunk size for new EBS raid
-m MOUNT_POINT, --mount-point MOUNT_POINT
Set the mount point for volume. Not required, but
suggested
-a, --no-automount Disable configuring the OS to automatically mount the
volume group on reboot
-f FILESYSTEM, --filesystem FILESYSTEM
Filesystem to partition new raid/volume
-s SIZE, --size SIZE Per EBS volume size in GiBs
-p IOPS, --iops IOPS Per EBS volume provisioned iops
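The multi-volume create described above roughly implies an mdadm assemble-and-format sequence on the target host. The following builds those command strings as a sketch; the device names, md device, and helper function are assumptions, not the tool's actual implementation:

```python
def build_raid_commands(devices, raid_level=0, chunk_kb=256,
                        filesystem="xfs", md_device="/dev/md0"):
    """Return the shell commands for creating and formatting a volume group.

    Multi-volume groups get an mdadm raid created first; single volumes
    are formatted directly. Mirrors the documented defaults (raid 0,
    256k chunk, xfs).
    """
    cmds = []
    if len(devices) > 1:
        cmds.append(
            "mdadm --create {md} --level={lvl} --chunk={chunk} "
            "--raid-devices={n} {devs}".format(
                md=md_device, lvl=raid_level, chunk=chunk_kb,
                n=len(devices), devs=" ".join(devices)))
        target = md_device
    else:
        target = devices[0]
    cmds.append("mkfs.{fs} {dev}".format(fs=filesystem, dev=target))
    return cmds

for cmd in build_raid_commands(["/dev/xvdf", "/dev/xvdg"]):
    print(cmd)
```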
Deletes the provided volume_group_id. The volume group must not be currently attached to an instance.
Required arguments: volume_group_id
Attaches provided volume_group_id to a host. Optionally mounts the volume and configures automounting.
Required arguments: volume_group_id, (--host | --instance)
Defaults:
- mount-point: `None` (disk will not be mounted and automounting will not be configured if mount-point is not provided)
- no-automount: `false` (automounting of volumes/raids will be configured in fstab and mdadm.conf by default unless explicitly disabled)
Arguments:
-i INSTANCE, --instance INSTANCE
instance_id of an instance to attach new volume group
-H HOST, --host HOST hostname of an instance to attach new volume group
-m MOUNT_POINT, --mount-point MOUNT_POINT
Set the mount point for volume. Not required, but
suggested
-a, --no-automount Disable configuring the OS to automatically mount the
volume group on reboot
Detaches the provided volume_group_id from the host it is currently attached to. Removes the automounting configuration for the volume group.
Required arguments: volume_group_id
Arguments:
-u, --unmount Unmounts the volume group if it is mounted. If this
option is not included and the volume is mounted the
detach operation will fail
-f FORCE, --force FORCE
Force detach the volume group's EBS volumes
Detaches the volume group that is mounted at `mount_point` on `hostname`. Removes the automounting configuration for the volume group.
Required arguments: hostname, mount_point
Arguments:
-u, --unmount Unmounts the volume group if it is mounted. If this
option is not included and the volume is mounted the
detach operation will fail
-f FORCE, --force FORCE
Force detach the volume group's EBS volumes
Detaches the volume group that is mounted at `mount_point` on `instance_id`. Removes the automounting configuration for the volume group.
Required arguments: instance_id, mount_point
Arguments:
-u, --unmount Unmounts the volume group if it is mounted. If this
option is not included and the volume is mounted the
detach operation will fail
-f FORCE, --force FORCE
Force detach the volume group's EBS volumes
Mount a volume group on the host that it is currently attached to. Supports mounting to a given mount point or to the currently defined mount point for the volume group.
Required arguments: volume_group_id
Arguments:
-m MOUNT_POINT, --mount-point MOUNT_POINT
Set the mount point for volume. If not provided, will
attempt to use currently defined mount point
-a, --no-automount Disable configuring the OS to automatically mount the
volume group on reboot
Unmount volume_group_id on the host where it is currently mounted. Does not make any changes to the current automount configuration.
Required arguments: volume_group_id
Configure automounting for the volume_group_id. If a mount point is not provided, the currently defined mount point for the volume will be used. If neither of these exists but the volume is currently mounted, automounting will be configured where it is mounted; otherwise, configuring automounting will fail.
Required arguments: volume_group_id
Arguments:
-m MOUNT_POINT, --mount-point MOUNT_POINT
Set the mount point for volume. If not provided, will
attempt to use currently defined mount point
-r, --remove Remove the current automount configuration for a
volume group
List the snapshots of a specific volume_group_id.
Required arguments: volume_group_id
Arguments:
-r REGION, --region REGION
Filter the snapshots by region
-x, --extended Show more detailed information
List the snapshots for a specific host, or for hosts matching a search string. Optionally filter by mount point and/or region.
Arguments:
-m MOUNT_POINT, --mount-point MOUNT_POINT
Filter the snapshots by the mount point
-r REGION, --region REGION
Filter the snapshots by region
--like LIKE search string to use to filter hosts
--prefix PREFIX search string prefix to filter hosts
-x, --extended Show more detailed information
List the snapshots for a specific instance, or for instances matching a search string. Optionally filter by mount point and/or region.
Arguments:
-m MOUNT_POINT, --mount-point MOUNT_POINT
Filter the snapshots by the mount point
-r REGION, --region REGION
Filter the snapshots by region
--like LIKE search string to use to filter hosts
--prefix PREFIX search string prefix to filter hosts
-x, --extended Show more detailed information
Create a snapshot of a specific volume_group_id.
PRE and POST are commands that will be run before and after the snapshot, and provide a means to ensure that data is in a
consistent state before snapshotting and revert back to normal operation after snapshot has begun.
Description is written as metadata to the snapshot itself and will show up in the EC2 console.
Required arguments: volume_group_id
Arguments:
--pre PRE command to run on host to prepare for starting EBS
snapshot (will not be run if volume group is not
attached)
--post POST command to run on host after snapshot (will not be run
if volume group is not attached)
-d DESCRIPTION, --description DESCRIPTION
description to add to snapshot(s)
--freeze Issue an fsfreeze command to freeze and unfreeze the
filesystem of a volume when taking the snapshot
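The ordering of the consistency hooks above can be sketched as follows. This is an illustrative Python sketch, not AMS's internal API; the `run` and `start_snapshot` callables stand in for host-side command execution and the EBS snapshot call. Because EBS snapshots are point-in-time at initiation, the freeze only needs to cover starting the snapshots, not their completion:

```python
# Illustrative sketch of the pre/freeze/snapshot/unfreeze/post sequence.

def snapshot_volume_group(volumes, pre=None, post=None, freeze=False,
                          run=lambda cmd: None,
                          start_snapshot=lambda vol: "snap-" + vol):
    """Run consistency hooks around snapshot initiation; returns snapshot ids."""
    if pre:
        run(pre)                                # e.g. flush/quiesce the application
    if freeze:
        run("fsfreeze --freeze <mount>")        # freeze filesystem writes
    try:
        # Snapshots only need to be *started* while the filesystem is frozen.
        snaps = [start_snapshot(v) for v in volumes]
    finally:
        if freeze:
            run("fsfreeze --unfreeze <mount>")  # always unfreeze, even on error
    if post:
        run(post)                               # e.g. resume normal operation
    return snaps
```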
Create a snapshot of a specific volume group that is on a host.
PRE and POST are commands that will be run before and after the snapshot, and provide a means to ensure that data is in a
consistent state before snapshotting and revert back to normal operation after snapshot has begun.
Description is written as metadata to the snapshot itself and will show up in the EC2 console.
Required arguments: (--host | --instance), --mount-point
Arguments:
-i INSTANCE, --instance INSTANCE
instance_id of an instance to snapshot a volume group
-H HOST, --host HOST hostname of an instance to snapshot a volume group
-m MOUNT_POINT, --mount-point MOUNT_POINT
mount point of the volume group to snapshot
--pre PRE command to run on host to prepare for starting EBS
snapshot (will not be run if volume group is not
attached)
--post POST command to run on host after snapshot (will not be run
if volume group is not attached)
-d DESCRIPTION, --description DESCRIPTION
description to add to snapshot(s)
--freeze Issue an fsfreeze command to freeze and unfreeze the
filesystem of a volume when taking the snapshot
Delete all expired snapshots. This operation is intended to be run from cron for regular purging of expired snapshots.
Required arguments: None
Arguments: None
Delete a specific snapshot_group_id. Use one of the snapshot list commands to find a snapshot_group_id.
Required arguments: snapshot_group_id
Clone a specific snapshot_group_id into a new volume group and optionally attach and mount the new volume.
This will manage copying snapshot to destination region if the destination region is not the same as where the snapshot group is held.
If iops is provided, the volumes in the new volume group will be created with the provided iops; otherwise, the iops of the snapshot's original volume group will be used. To create the new volumes without provisioned iops when the original volume group had them, pass 0 for iops to explicitly disable.
Required arguments: snapshot_group_id, (--zone | --host | --instance)
Arguments:
-z ZONE, --zone ZONE Availability zone to create the new volume group in
-i INSTANCE, --instance INSTANCE
instance id to attach the new volume group to
-H HOST, --host HOST hostname to attach the new volume group to
-m MOUNT_POINT, --mount_point MOUNT_POINT
directory to mount the new volume group to
-a, --no-automount Disable configuring the OS to automatically mount the
volume group on reboot
-p PIOPS, --iops PIOPS
Per EBS volume provisioned iops. Set to 0 to
explicitly disable provisioned iops. If not provided
then the iops of the original volumes will be used.
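The iops selection rule described above amounts to a three-way choice, sketched here (the function name is illustrative, not part of AMS):

```python
def resolve_iops(requested, original):
    """Pick the provisioned iops for cloned volumes.

    requested: value passed via --iops (None if the flag was not given)
    original:  iops of the snapshot's source volume group (None if it had none)
    """
    if requested is None:   # flag not provided: inherit from the original volumes
        return original
    if requested == 0:      # 0 explicitly disables provisioned iops
        return None
    return requested        # otherwise use the requested value
```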
Clone the latest snapshot for a volume_group_id and optionally attach and mount the new volume.
This will manage copying snapshot to destination region if the destination region is not the same as where the snapshot group is held.
If iops is provided, the volumes in the new volume group will be created with the provided iops; otherwise, the iops of the snapshot's original volume group will be used. To create the new volumes without provisioned iops when the original volume group had them, pass 0 for iops to explicitly disable.
Required arguments: volume_group_id, (--zone | --host | --instance)
Arguments:
-z ZONE, --zone ZONE Availability zone to create the new volume group in
-i INSTANCE, --instance INSTANCE
instance id to attach the new volume group to
-H HOST, --host HOST hostname to attach the new volume group to
-m MOUNT_POINT, --mount_point MOUNT_POINT
directory to mount the new volume group to
-a, --no-automount Disable configuring the OS to automatically mount the
volume group on reboot
-p IOPS, --iops IOPS Per EBS volume provisioned iops. Set to 0 to
explicitly disable provisioned iops. If not provided
then the iops of the original volumes will be used.
Clone the latest snapshot for a host + mount-point and optionally attach and mount the new volume.
This will manage copying snapshot to destination region if the destination region is not the same as where the snapshot group is held.
If iops is provided, the volumes in the new volume group will be created with the provided iops; otherwise, the iops of the snapshot's original volume group will be used. To create the new volumes without provisioned iops when the original volume group had them, pass 0 for iops to explicitly disable.
Required arguments: hostname, src_mount_point, (--zone | --host | --instance)
Arguments:
-z ZONE, --zone ZONE Availability zone to create the new volume group in
-i INSTANCE, --instance INSTANCE
instance id to attach the new volume group to
-H HOST, --host HOST hostname to attach the new volume group to
-m MOUNT_POINT, --mount_point MOUNT_POINT
directory to mount the new volume group to
-a, --no-automount Disable configuring the OS to automatically mount the
volume group on reboot
-p IOPS, --iops IOPS Per EBS volume provisioned iops. Set to 0 to
explicitly disable provisioned iops. If not provided
then the iops of the original volumes will be used.
Clone the latest snapshot for an instance + mount-point and optionally attach and mount the new volume.
This will manage copying snapshot to destination region if the destination region is not the same as where the snapshot group is held.
If iops is provided, the volumes in the new volume group will be created with the provided iops; otherwise, the iops of the snapshot's original volume group will be used. To create the new volumes without provisioned iops when the original volume group had them, pass 0 for iops to explicitly disable.
Required arguments: instance_id, src_mount_point, (--zone | --host | --instance)
Arguments:
-z ZONE, --zone ZONE Availability zone to create the new volume group in
-i INSTANCE, --instance INSTANCE
instance id to attach the new volume group to
-H HOST, --host HOST hostname to attach the new volume group to
-m MOUNT_POINT, --mount_point MOUNT_POINT
directory to mount the new volume group to
-a, --no-automount Disable configuring the OS to automatically mount the
volume group on reboot
-p IOPS, --iops IOPS Per EBS volume provisioned iops. Set to 0 to
explicitly disable provisioned iops. If not provided
then the iops of the original volumes will be used.
Lists the snapshot schedules for a specific resource id. `resource` is one of the literals `host`, `instance`, or `volume`, and `resource_id` is the corresponding hostname, instance_id, or volume_group_id.
If no arguments are provided, all snapshot schedules will be listed.
Arguments:
--like LIKE search string to use when listing resources
--prefix PREFIX search string prefix to use when listing resources
Schedule snapshots for a host + mount point. This is the most flexible of the snapshot scheduling methods, as it resolves the host and mount point at snapshot time and is not affected if the instance or the volume group on a host changes.
Interval settings affect how often a snapshot is performed/retained. eg. 2 for hourly will take an "hourly" snapshot every other hour,
2 for daily will take a "daily" snapshot every other day.
Retain settings affect how many of each type of snapshot to keep. eg. 24 for hours will keep the last 24 "hourly" snapshots
(not necessarily the last 24 hours if "hourly" interval is not 1). Setting the retain value for any of the types to 0 disables that one.
If `--intervals` or `--retentions` are set, they will override the single `int_*` and `ret_*` arguments.
PRE and POST are commands that will be run before and after the snapshot, and provide a means to ensure that data is in a consistent state before
snapshotting and revert back to normal operation after snapshot has begun.
Description is written as metadata to the snapshot itself and will show up in the EC2 console.
Required arguments: hostname, --mount-point
Defaults:
- int_hour: 1
- int_day: 1
- int_week: 1
- int_month: 1
- ret_hour: 24
- ret_day: 14
- ret_week: 4
- ret_month: 12
- ret_year: 3
Arguments:
-i HOUR DAY WEEK MONTH, --intervals HOUR DAY WEEK MONTH
Set all intervals at once
-r HOURS DAYS WEEKS MONTHS YEARS, --retentions HOURS DAYS WEEKS MONTHS YEARS
Set all retentions at once
--int_hour HOURS hourly interval for snapshots
--int_day DAYS daily interval for snapshots
--int_week WEEKS weekly interval for snapshots
--int_month MONTHS monthly interval for snapshots
--ret_hour HOURS number of hourly snapshots to keep
--ret_day DAYS number of daily snapshots to keep
--ret_week WEEKS number of weekly snapshots to keep
--ret_month MONTHS number of monthly snapshots to keep
--ret_year YEARS number of yearly snapshots to keep
--pre PRE_COMMAND command to run on host to prepare for starting EBS
snapshot (will not be run if volume group is not
attached)
--post POST_COMMAND command to run on host after snapshot (will not be run
if volume group is not attached)
-d DESCRIPTION, --description DESCRIPTION
description to add to snapshot
-m MOUNT_POINT, --mount-point MOUNT_POINT
mount point of the volume group to snapshot
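The interval and retention semantics described above can be sketched as follows (an illustrative sketch, not the scheduler's actual code; `elapsed_periods` is assumed to count hours, days, etc. since the schedule's epoch):

```python
def should_snapshot(elapsed_periods, interval):
    """With an hourly interval of 2, an 'hourly' snapshot fires every other hour.
    An interval of 0 is treated here as disabling that snapshot type."""
    return interval > 0 and elapsed_periods % interval == 0

def prune(snapshots, retain):
    """Keep only the newest `retain` snapshots of a type (sorted oldest-first).
    A retain value of 0 disables keeping that type entirely."""
    if retain == 0:
        return []
    return sorted(snapshots)[-retain:]
```

Note that retaining 24 "hourly" snapshots with an hourly interval of 2 covers roughly the last 48 hours, which is why the retain counts are counts of snapshots rather than a time window.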
Schedule snapshots for an instance + mount point. This is more flexible than the volume group id based snapshot, but if the instance
for a host is replaced, then the snapshot may not be able to run.
Interval settings affect how often a snapshot is performed/retained. eg. 2 for hourly will take an "hourly" snapshot every other hour,
2 for daily will take a "daily" snapshot every other day.
Retain settings affect how many of each type of snapshot to keep. eg. 24 for hours will keep the last 24 "hourly" snapshots
(not necessarily the last 24 hours if "hourly" interval is not 1). Setting the retain value for any of the types to 0 disables that one.
If `--intervals` or `--retentions` are set, they will override the single `int_*` and `ret_*` arguments.
PRE and POST are commands that will be run before and after the snapshot, and provide a means to ensure that data is in a consistent state before
snapshotting and revert back to normal operation after snapshot has begun.
Description is written as metadata to the snapshot itself and will show up in the EC2 console.
Required arguments: instance_id, --mount-point
Defaults:
- int_hour: 1
- int_day: 1
- int_week: 1
- int_month: 1
- ret_hour: 24
- ret_day: 14
- ret_week: 4
- ret_month: 12
- ret_year: 3
Arguments:
-i HOUR DAY WEEK MONTH, --intervals HOUR DAY WEEK MONTH
Set all intervals at once
-r HOURS DAYS WEEKS MONTHS YEARS, --retentions HOURS DAYS WEEKS MONTHS YEARS
Set all retentions at once
--int_hour HOURS hourly interval for snapshots
--int_day DAYS daily interval for snapshots
--int_week WEEKS weekly interval for snapshots
--int_month MONTHS monthly interval for snapshots
--ret_hour HOURS number of hourly snapshots to keep
--ret_day DAYS number of daily snapshots to keep
--ret_week WEEKS number of weekly snapshots to keep
--ret_month MONTHS number of monthly snapshots to keep
--ret_year YEARS number of yearly snapshots to keep
--pre PRE_COMMAND command to run on host to prepare for starting EBS
snapshot (will not be run if volume group is not
attached)
--post POST_COMMAND command to run on host after snapshot (will not be run
if volume group is not attached)
-d DESCRIPTION, --description DESCRIPTION
description to add to snapshot
-m MOUNT_POINT, --mount-point MOUNT_POINT
mount point of the volume group to snapshot
Schedule snapshots for a specific volume_group_id. This is the least flexible of the schedule types, as it will only snapshot the specific volume group it is assigned. Note that if the volume group is no longer in use, snapshots will continue to be created.
`int_*` settings affect how often a snapshot is performed/retained, e.g. 2 for hourly will take an "hourly" snapshot every other hour, and 2 for daily will take a "daily" snapshot every other day.
`ret_*` settings affect how many of each type of snapshot to keep, e.g. 24 for hours will keep the last 24 "hourly" snapshots (not necessarily the last 24 hours if the "hourly" interval is not 1). Setting the retain value for any of the types to 0 disables that type.
If `--intervals` or `--retentions` are set, they will override the single `int_*` and `ret_*` arguments.
PRE and POST are commands that will be run before and after the snapshot, and provide a means to ensure that data is in a consistent state before
snapshotting and revert back to normal operation after snapshot has begun.
Description is written as metadata to the snapshot itself and will show up in the EC2 console.
Required arguments: volume_group_id
Defaults:
- int_hour: 1
- int_day: 1
- int_week: 1
- int_month: 1
- ret_hour: 24
- ret_day: 14
- ret_week: 4
- ret_month: 12
- ret_year: 3
Arguments:
-i HOUR DAY WEEK MONTH, --intervals HOUR DAY WEEK MONTH
Set all intervals at once
-r HOURS DAYS WEEKS MONTHS YEARS, --retentions HOURS DAYS WEEKS MONTHS YEARS
Set all retentions at once
--int_hour HOURS hourly interval for snapshots
--int_day DAYS daily interval for snapshots
--int_week WEEKS weekly interval for snapshots
--int_month MONTHS monthly interval for snapshots
--ret_hour HOURS number of hourly snapshots to keep
--ret_day DAYS number of daily snapshots to keep
--ret_week WEEKS number of weekly snapshots to keep
--ret_month MONTHS number of monthly snapshots to keep
--ret_year YEARS number of yearly snapshots to keep
--pre PRE_COMMAND command to run on host to prepare for starting EBS
snapshot (will not be run if volume group is not
attached)
--post POST_COMMAND command to run on host after snapshot (will not be run
if volume group is not attached)
-d DESCRIPTION, --description DESCRIPTION
description to add to snapshot
Edit an existing snapshot schedule by schedule_id.
`int_*` settings affect how often a snapshot is performed/retained, e.g. 2 for hourly will take an "hourly" snapshot every other hour, and 2 for daily will take a "daily" snapshot every other day.
`ret_*` settings affect how many of each type of snapshot to keep, e.g. 24 for hours will keep the last 24 "hourly" snapshots (not necessarily the last 24 hours if the "hourly" interval is not 1). Setting the retain value for any of the types to 0 disables that type.
If `--intervals` or `--retentions` are set, they will override the single `int_*` and `ret_*` arguments as well as overwrite all of the single settings in the database. If you only want to update a single setting, use the single versions of the arguments.
PRE and POST are commands that will be run before and after the snapshot, and provide a means to ensure that data is in a consistent state before
snapshotting and revert back to normal operation after snapshot has begun.
Description is written as metadata to the snapshot itself and will show up in the EC2 console. Changing the description does not update the
descriptions on snapshots that have already been created; it only changes the description for new snapshots going forward
At this time, changing a snapshot schedule from volume/host/instance type to any other type is not supported. Delete the current schedule
and add a new one with different type but the same settings to achieve this functionality.
Required arguments: schedule_id
Arguments:
-i HOUR DAY WEEK MONTH, --intervals HOUR DAY WEEK MONTH
Set all intervals at once
-r HOURS DAYS WEEKS MONTHS YEARS, --retentions HOURS DAYS WEEKS MONTHS YEARS
Set all retentions at once
--int_hour HOURS hourly interval for snapshots
--int_day DAYS daily interval for snapshots
--int_week WEEKS weekly interval for snapshots
--int_month MONTHS monthly interval for snapshots
--ret_hour HOURS number of hourly snapshots to keep
--ret_day DAYS number of daily snapshots to keep
--ret_week WEEKS number of weekly snapshots to keep
--ret_month MONTHS number of monthly snapshots to keep
--ret_year YEARS number of yearly snapshots to keep
--pre PRE_COMMAND command to run on host to prepare for starting EBS
snapshot (will not be run if volume group is not
attached)
--post POST_COMMAND command to run on host after snapshot (will not be run
if volume group is not attached)
-d DESCRIPTION, --description DESCRIPTION
description to add to snapshot
Deletes a specific snapshot schedule. Use `ams snapshot schedule list` to find the `schedule_id` of a specific schedule.
Required arguments: schedule_id
This is intended to be run from cron on a single host every hour with no arguments.
If a `schedule_id` is provided, the snapshot that the schedule points to will be created immediately, regardless of whether it is currently due (with a best effort to apply the retention rules so the snapshot will eventually be cleaned up). Take note that if a valid expiry time can be calculated, the snapshot will be automatically purged per the rules of the schedule. If you want a snapshot that will not expire, use `ams snapshot create` to create the snapshot.
Arguments:
--purge delete expired snapshots after running the schedule
Gathers information on security groups, security group ingress and egress rules, and security group associations with instances. With no arguments, discovery will run across all regions.
Arguments:
-r REGION, --region REGION
Limit discover to given region
Lists security groups. Results can be filtered on region, name, security group id, and/or vpc id; all security groups across all regions will be listed if no optional arguments are provided.
Arguments:
-r REGION, --region REGION
Filter security groups by region
-s SECURITY_GROUP, --security-group SECURITY_GROUP
Filter by security group id
-n NAME, --name NAME Filter by security group name
-v VPC, --vpc VPC Filter by VPC id
Lists allocated elastic IP addresses. Results can be filtered by region
Arguments:
-r REGION, --region REGION
Filter elastic IPs by region
Gathers information on VPCs and subnets. With no arguments discovery will run across all regions.
Arguments:
-r REGION, --region REGION
Limit discover to given region
List information for vpcs or subnets.
Arguments:
-v VPC_ID, --vpc-id VPC_ID
Filter by VPC ID
-s SUBNET_ID, --subnet-id SUBNET_ID
Filter by Subnet ID
-r REGION, --region REGION
Filter by region
Reads the Route53 dns configurations and maps the hostnames defined in dns to the hosts in the hosts table. Currently this will pull all the records from dns down to the database, but it only uses A and CNAME records to assign hostnames to hosts. This will not traverse recursive CNAMEs currently (or likely ever), and as a general rule it will prefer an A record over a CNAME (I am open to arguments for/against this and any suggestions).
Arguments:
--interactive Enable interactive mode for applying discovered host
names to hosts (not enabled yet)
--prefer {internal,external}
Sets which hostname gets preference if DNS records are
defined for an internal address and an external
address
--load-only Only load the route53 tables, but do not apply
hostname changes to hosts
Lists the DNS records that are currently in the database. You can run `ams route53 discovery` to synchronize the database with what is currently configured in Route53.
Arguments: None
Lists the hosted zones that are currently in the database. You can run `ams route53 discovery` to synchronize the database with what is currently configured in Route53.
Arguments: None
Lists the Route53 health checks that are currently in the database. You can run `ams route53 discovery` to synchronize the database with what is currently configured in Route53.
Arguments: None
Create a raw DNS record in Route53. Note that currently this tool only supports single-value DNS entries (i.e. no support for multiple values in a single DNS record). `fqdn` is the fully qualified domain name for the entry; you can include the trailing dot (.) or it will be added automatically. `record_type` is the DNS record type; currently the only supported values are `a` and `cname`, for an A record or CNAME record respectively.
Required arguments: fqdn, record_type, (--zone-id | --zone-name), --record-value
Arguments:
--zone-id ZONE_ID Zone id to add DNS record to
--zone-name ZONE_NAME
Zone name to add DNS record to
-t TTL, --ttl TTL TTL for the entry (default: 60)
-r {simple,weighted,latency,failover}, --routing-policy {simple,weighted,latency,failover}
The routing policy to use (default: simple)
-w WEIGHT, --weight WEIGHT
Weighted routing policy: weight to assign to the dns
resource
--region REGION Latency routing policy: assigns the region for the dns
resource for routing
--health-check HEALTH_CHECK
health check id to associate with the record (for IDs,
use: ams route53 list healthchecks)
--failover-role {primary,secondary}
Failover routing policy: defines whether resource is
primary or secondary
-v RECORD_VALUE, --record-value RECORD_VALUE
Value for the DNS record (currently only single-value
entries are supported)
--identifier IDENTIFIER
Unique identifier to associate to a record that shares
a name/type with other records in weighted, latency,
or failover records
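The trailing-dot handling for `fqdn` described above can be sketched as (an illustrative helper, not AMS's actual code):

```python
def normalize_fqdn(fqdn):
    """Route53 record names are absolute; append the trailing dot if missing."""
    return fqdn if fqdn.endswith(".") else fqdn + "."
```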
Create a DNS record for a running instance. Optionally you can also provide the parameters to create a health check for
the DNS entry. This enables easily adding records for hosts using weighted, latency, and failover DNS configurations.
`fqdn` is the fully qualified domain name for the entry; you can include the trailing dot (.) or it will be added automatically.
`record_type` is the DNS record type; currently the only supported values are `a` and `cname`, for an A record or CNAME record respectively.
Required arguments: fqdn, record_type, (--zone-id | --zone-name), (--host | --instance)
Arguments:
--zone-id ZONE_ID Zone id to add DNS record to
--zone-name ZONE_NAME
Zone name to add DNS record to
-t TTL, --ttl TTL TTL for the entry (default: 60)
-r {simple,weighted,latency,failover}, --routing-policy {simple,weighted,latency,failover}
The routing policy to use (default: simple)
-w WEIGHT, --weight WEIGHT
Weighted routing policy: weight to assign to the dns
resource
--region REGION Latency routing policy: assigns the region for the dns
resource for routing
--health-check HEALTH_CHECK
health check id to associate with the record (for IDs,
use: ams route53 list healthchecks)
--failover-role {primary,secondary}
Failover routing policy: defines whether resource is
primary or secondary
-H HOST, --host HOST Hostname (to find current hostname use: ams host list)
-i INSTANCE, --instance INSTANCE
Instance ID
--use {public,private}
Define whether to use the public or private
hostname/IP
--identifier IDENTIFIER
Unique identifier to associate to a record that shares
a name/type with other records in weighted, latency,
or failover records. If not provided, one will be
created from the hostname or instance id
--update-hosts (routing_policy=simple only) Updates the hostname for
the host in the AMS hosts table (saving you from
having to run route53 discovery to update)
--configure-hostname (routing_policy=simple only) Set the hostname on the
host to the FQDN that was just added to the host or
the currently set uname (uname will override the
FQDN). Also applies the --update-hosts option (for
Ubuntu and Redhat flavors, it will also edit the
proper files to make this change permanent)
Delete a DNS record in Route53. `fqdn` is the fully qualified domain name for the entry; you can include the trailing dot (.) or it will be added automatically. `record_type` is the DNS record type; currently the only supported values are `a` and `cname`, for an A record or CNAME record respectively.
Required arguments: fqdn, record_type, (--zone-id | --zone-name)
Arguments:
--identifier IDENTIFIER
Unique identifier for a record that shares a name/type
with other records in weighted, latency, or failover
records
--zone-id ZONE_ID Zone id to add DNS record to
--zone-name ZONE_NAME
Zone name to add DNS record to
Creates a health check in Route53 for use with weighted, latency, and failover DNS entries. `ip` should be a public IP address for the host, `port` is the port to health check, and `type` is one of `tcp`, `http`, or `https` for their respective health check types.
Required arguments: ip, port, type
Arguments:
-i {10,30}, --interval {10,30}
Health check interval (10 or 30 second)
-f {1,2,3,4,5,6,7,8,9,10}, --failure-threshold {1,2,3,4,5,6,7,8,9,10}
Number of times health check fails before the host is
marked down by Route53
-a RESOURCE_PATH, --resource-path RESOURCE_PATH
HTTP/HTTPS: health check resource path
-d FQDN, --fqdn FQDN HTTP/HTTPS: health check fully qualified domain name
-s STRING_MATCH, --string-match STRING_MATCH
HTTP/HTTPS: health check response match string
This will install the database table for an initial install of AMS.
This should be run every time the software is updated to ensure that the database schema matches the application's expectations.
With no arguments, this will display the current value in the database for all of the config variables. Passing --active displays the same config variables, but the values for all of the variables are pulled from the active running configuration (after all config sources are processed).
Arguments:
-a, --active Show the full active config rather than the values that are in
the database
Update the value of the `name` variable to `value`. `value` is required unless using the --clear option to clear the value.
Required Arguments: name, value|--clear
Arguments:
--clear Clear the value in the database for the config variable
The goal of AMS is not to replace standard configuration management systems (like puppet, chef, ansible, salt, etc), but rather to augment these systems and provide tools that are missing or may be particularly cumbersome to use in the context of a CMS. CMSes are particularly well suited for managing software configurations on hosts, but management of hardware (or virtualized hardware) in these is limited. Most now support starting/stopping/terminating an instance but lack more advanced features for managing and tracking the other components of the virtualized hardware infrastructure (storage, networking, etc).
AMS is a growing set of tools for managing virtual hardware, and in the process, it is also developing into a virtual hardware CMDB. The AMS database now keeps track of enough information about EC2 infrastructure to use this data to begin integrating with other CMSes.
AMS now comes with a command line tool: `ams-inventory`. This script implements the `--list` and `--host` options required for an ansible dynamic inventory file and outputs JSON in ansible's dynamic inventory format.
To use ams-inventory with ansible commands, just pass the path to ams-inventory with the `-i/--inventory` flag, e.g. `ansible -i /path/to/ams-inventory -m ping`
Arguments:
-h, --help show this help message and exit
--list Lists all of the hosts in ansible dynamic inventory
format
--host HOST lists the hostvars for a single instance
--list-groups List the additional configured groups for dynamic
inventory
--list-tag-templates Lists the configured group tagging templates
--add-tag-template TEMPLATE
Add a new group tagging template. Eg. In the case of a
server that is tagged with the tags env=stage and
role=webserver and a template that is defined as
'{{env}}-{{role}}', the dynamic inventory will add the
host to a group with the name 'stage-webserver'. The
template tags can also be filtered using the syntax
'{{name=value}}'. Eg. a template
'{{env=stage}}-{{role}}' would be applied to a host
with env=stage and role=webserver, but not a host with
env=prod and role=webserver.
--edit-tag-template TEMPLATE_ID TEMPLATE
Edit an existing group tagging template
--delete-tag-template TEMPLATE_ID
Delete a tag template
--add-group GROUP_NAME [CHILD_NAME [CHILD_NAME] ...]
Add a new inventory group with name GROUP_NAME.
Optionally include the child groups for GROUP_NAME.
Note: this is an additive operation rather than
replacement operation.
--delete-group GROUP_NAME
Remove an inventory group and its mapping of children
--remove-group-children GROUP_NAME [CHILD_NAME [CHILD_NAME] ...]
Remove one or more children from a group
- Automatic addition of instances to groups based on the following:
  - Tags on instances in the form `NAME_VALUE` (this includes AWS tags and AMS extended tags, but not AMS hostvars type tags)
  - AWS Region
  - AWS Availability Zone
  - VPC ID
  - Subnet ID (VPC subnet)
  - AMI ID
  - Instance Type (m1.small, c3.xlarge, etc)
  - Name of instance (value of the Name tag on an instance)
- Automatic addition of mappings for Route53 entries to hosts
- Management of static group hierarchies that are included in the dynamic inventory
- Management of templates that are applied to an instance's tags to include the instance in a group
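The tag-based groups in the first bullet above can be sketched in a few lines; this is an illustration with made-up tags, not the exact normalization that ams-inventory performs:

```python
def tag_groups(tags):
    """Derive NAME_VALUE style group names from an instance's tags.

    Illustrative sketch only; the actual normalization rules that
    ams-inventory applies (case handling, special characters) may differ.
    """
    return sorted(f"{name}_{value}" for name, value in tags.items())

# hypothetical instance tags
print(tag_groups({"env": "stage", "role": "webserver"}))
# → ['env_stage', 'role_webserver']
```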
Ansible passes the `--list` option when executing ams-inventory to fetch the full dynamic inventory. Ansible typically passes `--host HOSTNAME` to a dynamic inventory script to retrieve the hostvars for a single host, but since this data is included in the primary dynamic inventory document, ansible does not actually use it. It is included for completeness and as a user tool for inspecting the variables for a host.
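Ansible's dynamic inventory JSON has a well-documented shape; a minimal sketch of what a `--list` response looks like, using hypothetical host and group names rather than actual ams-inventory output:

```python
import json

# Sketch of ansible's dynamic inventory JSON format. Top-level keys are
# group names mapping to hosts/vars/children; host and group names here
# are hypothetical.
inventory = {
    "webserver": {
        "hosts": ["web1.example.com", "web2.example.com"],
        "children": ["prod-webserver"],
    },
    "prod-webserver": {
        "hosts": ["web1.example.com"],
    },
    # Including hostvars under _meta lets ansible skip calling
    # --host once per instance.
    "_meta": {
        "hostvars": {
            "web1.example.com": {"instance_type": "c3.xlarge"},
        }
    },
}
print(json.dumps(inventory, indent=2))
```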
Static group hierarchies are equivalent to the `[group:children]` constructs in ansible inventory files. These hierarchies can be managed in ams-inventory, in ansible static inventory files, or in a mix of both by using ansible's ability to take a directory path as the value of the `-i/--inventory` option (there must be a script that executes ams-inventory, or a symlink to ams-inventory, in the directory with the static inventory files). Groups can be a parent, a child, or both at once, as nested parent=>child relationships are supported.
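A mixed directory-based inventory might be laid out as follows; the directory and file names here are hypothetical, and `/path/to/ams-inventory` is a placeholder for wherever the script lives:

```shell
# Hypothetical inventory directory mixing static groups with ams-inventory
mkdir -p /tmp/inventory-demo

# Static group hierarchies live in a plain ansible inventory file
cat > /tmp/inventory-demo/static <<'EOF'
[prod:children]
prod-webserver
EOF

# Symlink ams-inventory into the same directory so ansible executes it too
ln -sf /path/to/ams-inventory /tmp/inventory-demo/ams-inventory

# Then point ansible at the directory:
#   ansible -i /tmp/inventory-demo all -m ping
ls /tmp/inventory-demo
```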
Displays a table of the currently configured group hierarchies.
Example:
$> ams-inventory --list-groups
Inventory Groups:
+--------------------+--------------------+
| Group | Children |
+--------------------+--------------------+
| loadbalancer | prod-loadbalancer |
| | stage-loadbalancer |
| | |
| prod | prod-loadbalancer |
| | prod-webserver |
| | |
| prod-loadbalancer | --- |
| | |
| prod-webserver | --- |
| | |
| stage | stage-loadbalancer |
| | stage-webserver |
| | |
| stage-loadbalancer | --- |
| | |
| stage-webserver | --- |
| | |
| webserver | prod-webserver |
| | stage-webserver |
| | |
+--------------------+--------------------+
8 groups
The above is equivalent to having an ansible inventory file with these definitions in it:
[prod:children]
prod-loadbalancer
prod-webserver
[stage:children]
stage-loadbalancer
stage-webserver
[webserver:children]
prod-webserver
stage-webserver
[loadbalancer:children]
prod-loadbalancer
stage-loadbalancer
Adds a new group to the groups table and optionally associates the group with 1 or more child groups. The parent and child groups do not have to exist before the mapping is created, as they will be created if needed. This is an additive operation, so if you already have a PARENT=>CHILD_1 relationship and execute `ams-inventory --add-group PARENT CHILD_2`, you will then have both mappings: PARENT=>CHILD_1 and PARENT=>CHILD_2
Required: GROUP_NAME
Optional: 1 or more CHILD_NAMEs
Example: Building the hierarchy defined in the previous section, using several different approaches to adding the relationships
$> # adding prod group with no children
$> ams-inventory --add-group prod
Inventory Groups:
+-------+--------------------+
| Group | Children |
+-------+--------------------+
| prod | --- |
| | |
+-------+--------------------+
1 groups
$> # adding prod-loadbalancer as child of prod
$> ams-inventory --add-group prod prod-loadbalancer
Inventory Groups:
+-------+--------------------+
| Group | Children |
+-------+--------------------+
| prod | prod-loadbalancer |
| | |
+-------+--------------------+
1 groups
$> # adding prod-webserver as child of prod
$> ams-inventory --add-group prod prod-webserver
Inventory Groups:
+-------+--------------------+
| Group | Children |
+-------+--------------------+
| prod | prod-loadbalancer |
| | prod-webserver |
| | |
+-------+--------------------+
1 groups
$> # adding loadbalancer group with stage-loadbalancer child
$> ams-inventory --add-group loadbalancer stage-loadbalancer
Inventory Groups:
+--------------+--------------------+
| Group | Children |
+--------------+--------------------+
| loadbalancer | stage-loadbalancer |
| | |
+--------------+--------------------+
1 groups
$> # add prod-loadbalancer group
$> ams-inventory --add-group prod-loadbalancer
Inventory Groups:
+-------------------+----------+
| Group | Children |
+-------------------+----------+
| prod-loadbalancer | --- |
| | |
+-------------------+----------+
1 groups
$> # add the prod-loadbalancer group as a child to the loadbalancer group
$> ams-inventory --add-group loadbalancer prod-loadbalancer
Inventory Groups:
+--------------+--------------------+
| Group | Children |
+--------------+--------------------+
| loadbalancer | prod-loadbalancer |
| | stage-loadbalancer |
| | |
+--------------+--------------------+
1 groups
$> # adding each of the webserver child groups, followed by the parent relationship (intermediate output omitted for terseness)
$> ams-inventory --add-group stage-webserver
$> ams-inventory --add-group prod-webserver
$> ams-inventory --add-group webserver prod-webserver stage-webserver
Inventory Groups:
+------------+------------------+
| Group | Children |
+------------+------------------+
| webserver | prod-webserver |
| | stage-webserver |
| | |
+------------+------------------+
1 groups
$> # adding stage group with both children (this is the easiest and fastest method when defining groups)
$> ams-inventory --add-group stage stage-webserver stage-loadbalancer
Inventory Groups:
+-------+--------------------+
| Group | Children |
+-------+--------------------+
| stage | stage-loadbalancer |
| | stage-webserver |
| | |
+-------+--------------------+
1 groups
Removes the relationship of 1 or more child groups from a parent group. This only removes the relationship, not the groups themselves; use --delete-group to completely remove a group and all of its relationships.
Example:
$> # remove a single child group from a parent
$> ams-inventory --remove-group-children loadbalancer prod-loadbalancer
Removed prod-loadbalancer from group loadbalancer
Inventory Groups:
+--------------+--------------------+
| Group | Children |
+--------------+--------------------+
| loadbalancer | stage-loadbalancer |
| | |
+--------------+--------------------+
1 groups
$> # remove multiple child groups from a parent
$> ams-inventory --remove-group-children loadbalancer prod-loadbalancer stage-loadbalancer
Removed prod-loadbalancer from group loadbalancer
Removed stage-loadbalancer from group loadbalancer
Inventory Groups:
+--------------+----------+
| Group | Children |
+--------------+----------+
| loadbalancer | --- |
| | |
+--------------+----------+
1 groups
Deletes a group and all of its parent and child associations. Does not delete parents or children of the deleted group, only the relationships.
Example:
$> # delete a group that is a child in multiple groups
$> ams-inventory --delete-group prod-loadbalancer
Group prod-loadbalancer deleted
$> ams-inventory --list-groups
Inventory Groups:
+--------------------+--------------------+
| Group | Children |
+--------------------+--------------------+
| loadbalancer | stage-loadbalancer |
| | |
| prod | prod-webserver |
| | |
| prod-webserver | --- |
| | |
| stage | stage-loadbalancer |
| | stage-webserver |
| | |
| stage-loadbalancer | --- |
| | |
| stage-webserver | --- |
| | |
| webserver | prod-webserver |
| | stage-webserver |
| | |
+--------------------+--------------------+
7 groups
$> # delete a group that is a parent of other groups
$> ams-inventory --delete-group webserver
Group webserver deleted
$> ams-inventory --list-groups
Inventory Groups:
+--------------------+--------------------+
| Group | Children |
+--------------------+--------------------+
| loadbalancer | stage-loadbalancer |
| | |
| prod | prod-webserver |
| | |
| prod-webserver | --- |
| | |
| stage | stage-loadbalancer |
| | stage-webserver |
| | |
| stage-loadbalancer | --- |
| | |
| stage-webserver | --- |
| | |
+--------------------+--------------------+
6 groups
Templates can be defined that are applied to instance tags to create dynamic group names and add the instance to the resulting dynamic groups. Templates have two forms: a basic form that is applied to any host that has all of the tags named in the template, and a filtered form that only applies to hosts that have all of the tags and also match the required filter value(s).
A tag template is simply a string in which any values that should be replaced by instance tags are denoted by `{{TAG_NAME}}`.
Example: Given some hosts with the tags "env", "role" and "type", some examples of templates you could define are:
{{env}}-{{role}}-{{type}}
{{env}}_{{role}}
foo-{{env}}-{{type}}
If you have 3 hosts with the following values for the tags:
- hostA
- env = stage
- role = webserver
- type = api
- hostB
- env = production
- role = database
- type = primary
- cluster = backup
- hostC
- env = dev
- role = webserver
These would be rendered into the following group names for the hosts:
- hostA
  - stage-webserver-api
  - stage_webserver
  - foo-stage-api
- hostB
  - production-database-primary
  - production_database
  - foo-production-primary
- hostC
  - N/A (not all values for the template are present)
  - dev_webserver
  - N/A (not all values for the template are present)
Each host will then be included in the groups for the templates that were fully rendered for that host.
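The rendering rules above can be sketched in a few lines of Python; this is an illustrative approximation using the example tags from this section, not ams-inventory's actual implementation:

```python
import re

def render_template(template, tags):
    """Render a basic tag template like '{{env}}-{{role}}' against a
    host's tags. Returns None when any referenced tag is missing from
    the host, mirroring the N/A cases above. Illustrative sketch only.
    """
    names = re.findall(r"\{\{(\w+)\}\}", template)
    if any(n not in tags for n in names):
        return None
    result = template
    for n in names:
        result = result.replace("{{%s}}" % n, tags[n])
    return result

host_a = {"env": "stage", "role": "webserver", "type": "api"}
host_c = {"env": "dev", "role": "webserver"}
print(render_template("{{env}}-{{role}}-{{type}}", host_a))  # → stage-webserver-api
print(render_template("{{env}}-{{role}}-{{type}}", host_c))  # → None ('type' missing)
```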
Filtered tag templates restrict which hosts a template is applied to based on tag values. The format for a filtered placeholder is `{{TAG_NAME=TAG_VALUE}}`.
Example: Given some hosts with the tags "env", "role" and "type", some examples of filtered templates you could define are:
{{env=production}}-{{role}}-{{type}}
{{env=dev}}-{{role=database}}-combined
{{env}}_{{role}}_{{type=api}}_deprecated
If you have 3 hosts with the following values for the tags:
- hostA
- env = dev
- role = webserver
- type = api
- hostB
- env = production
- role = database
- type = primary
- cluster = backup
- hostC
- env = dev
- role = database
- type = primary
- cluster = backup
These would be rendered into the following group names for the hosts:
- hostA
  - N/A (env != production)
  - N/A (role != database)
  - dev_webserver_api_deprecated
- hostB
  - production-database-primary
  - N/A (env != production)
  - N/A (type != api)
- hostC
  - N/A (env != production)
  - dev-database-combined
  - N/A (type != api)
Each host will then be included in the groups for the templates that were fully rendered for that host.
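Extending the basic rendering with filter matching can be sketched as follows; again this is an illustration of the behavior described above, not ams-inventory's implementation:

```python
import re

def render_filtered(template, tags):
    """Render a tag template that may contain filtered placeholders
    like '{{env=production}}'. Returns None when a referenced tag is
    missing or a filter value does not match, mirroring the N/A cases
    above. Illustrative sketch only.
    """
    result = template
    for name, value in re.findall(r"\{\{(\w+)(?:=(\w+))?\}\}", template):
        if name not in tags:
            return None          # missing tag: template not applied
        if value and tags[name] != value:
            return None          # filter mismatch: template not applied
        token = "{{%s=%s}}" % (name, value) if value else "{{%s}}" % name
        result = result.replace(token, tags[name])
    return result

host_b = {"env": "production", "role": "database", "type": "primary"}
print(render_filtered("{{env=production}}-{{role}}-{{type}}", host_b))
# → production-database-primary
```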
Displays a table of the currently configured tag templates. The table contains the template ID and the template, sorted lexicographically by the template's contents.
Example:
$> ams-inventory --list-tag-templates
Inventory Templates:
+-------------+----------------------------------+
| Template ID | Template |
+-------------+----------------------------------+
| 1 | {{env}}-{{role}}-{{role_type}} |
| | |
| 12 | {{role=webserver}}-{{role_type}} |
| | |
+-------------+----------------------------------+
2 templates
Adds a new tag template to the database. `TEMPLATE` should follow the forms described above for basic and filtered templates. After the template is normalized and added, the final version of the template will be displayed.
Example:
$> ams-inventory --add-tag-template '{{env=production}}-{{role}}-{{type}}'
Template created
Inventory Templates:
+-------------+--------------------------------------+
| Template ID | Template |
+-------------+--------------------------------------+
| 13 | {{env=production}}-{{role}}-{{type}} |
| | |
+-------------+--------------------------------------+
1 templates
Edits an existing tag template in the database. `TEMPLATE_ID` can be found using `ams-inventory --list-tag-templates` or in the output of other operations. `TEMPLATE` should follow the forms described above for basic and filtered templates. After the template is normalized and updated, the final version of the template will be displayed.
Example:
$> ams-inventory --edit-tag-template 13 '{{env=dev}}-{{role}}-{{type}}'
Template Updated
Inventory Templates:
+-------------+-------------------------------+
| Template ID | Template |
+-------------+-------------------------------+
| 13 | {{env=dev}}-{{role}}-{{type}} |
| | |
+-------------+-------------------------------+
1 templates
Deletes a tag template from the database. `TEMPLATE_ID` can be found using `ams-inventory --list-tag-templates` or in the output of other operations.
Example:
$> ams-inventory --delete-tag-template 13
Template 13 deleted