This document describes how to deploy a tcp cloud lab for testing purposes in an existing OpenStack installation using Heat templates. You will need:
- OpenStack cloud
- SSH client + keypair
- Git client (GitHub account is optional)
- Python interpreter with pip and build tools
On the underlying OpenStack cloud, you need to have at least the following resources available when using the default instance types:
- 20 vCPUs
- 40GB RAM
- 400GB root disk space
- Ubuntu Cloud 14.04 image or tcp cloud image registered in Glance
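If no suitable image is registered in Glance yet, one can be uploaded with the Glance client. The image name and source file below are only placeholders for whichever image you downloaded:

# example only - adjust the image name and the downloaded file to your environment
glance image-create --name ubuntu-14-04-x64 --disk-format qcow2 --container-format bare --is-public True --file <downloaded_image_file>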
The lab consists of the following hosts:

| Host | Role | IP address |
| --- | --- | --- |
| ctl01 | OpenStack & OpenContrail controller | 126.96.36.199 |
| ctl02 | OpenStack & OpenContrail controller | 188.8.131.52 |
| ctl03 | OpenStack & OpenContrail controller | 184.108.40.206 |
| web01 | OpenStack Dashboard and API proxy | 220.127.116.11 |
All hosts are deployed in the workshop.cloudlab.cz domain.
The lab setup consists of multiple Heat stacks.
| Stack | Description |
| --- | --- |
| salt_single_public | Base stack which deploys the network and a single-node Salt master |
| openstack_cluster_public | Deploys an OpenStack cluster with OpenContrail; requires salt_single_public |
| openvstorage_cluster_private | Deploys Open vStorage infrastructure on top of openstack_cluster_public |
The naming convention is as follows:
- name is a short identifier describing the main purpose of the given stack
- cluster or single identifies the topology (multi-node vs. single-node setup)
- public or private identifies network access. Public sets security groups and assigns floating IPs so the provided services are reachable from the outside world.
For the smallest clustered setup, we are going to use the salt_single_public and openstack_cluster_public stacks.
First you need to clone the Heat templates from our GitHub repository.
git clone https://github.com/tcpcloud/heat-templates.git
To be able to create a Python environment and install compatible OpenStack clients, you need to install build tools first, e.g. on Ubuntu:
apt-get install python-dev python-pip python-virtualenv build-essential
Now create and activate the virtualenv venv-heat so you can install specific versions of the OpenStack clients into a completely isolated Python environment.
virtualenv venv-heat
source ./venv-heat/bin/activate
To install tested versions of the clients for OpenStack Juno and Kilo into the activated environment, use the requirements.txt file in the repository cloned earlier:
pip install -r requirements.txt
If everything goes right, you should be able to use the OpenStack clients: heat, nova, etc.
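A quick sanity check that the clients were installed into the virtualenv (these commands only print version numbers and do not need credentials):

heat --version
nova --version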
First source the openrc credentials so you can use the OpenStack clients. You can download the openrc file from the OpenStack dashboard and source it, or execute the following commands with your credentials filled in:
export OS_AUTH_URL=https://<openstack_endpoint>:5000/v2.0
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_TENANT_NAME=<tenant>
Now you need to customize the env files for the stacks; see the examples in the env directory and set the required parameters.
Example env file for the salt_single_public stack:

parameters:
  # Following parameters are required to deploy workshop lab environment
  # Public net id can be found in Horizon or by running `nova net-list`
  public_net_id: f82ffadb-cd7b-4931-a2c1-f865c61edef2
  # Public part of your SSH key
  key_name: my-key
  key_value: ssh-rsa xyz
  # Instance image to use, we recommend to grab the latest tcp cloud image here:
  # http://apt.tcpcloud.eu/images/
  # Lookup for image by running `nova image-list`
  instance_image: ubuntu-14-04-x64-1437486976
Example env file for the openstack_cluster_public stack:

parameters:
  # Following parameters are required to deploy workshop lab environment
  # Net id can be found in Horizon or by running `nova net-list`
  public_net_id: f82ffadb-cd7b-4931-a2c1-f865c61edef2
  private_net_id: 90699bd2-b10e-4596-99c6-197ac3fb565a
  # Your SSH key, deployed by salt_single_public stack
  key_name: my-key
  # Instance image to use, we recommend to grab the latest tcp cloud image here:
  # http://apt.tcpcloud.eu/images/
  # Lookup for image by running `nova image-list`
  instance_image: ubuntu-14-04-x64-1437486976
For all available parameters, see the template YAML files in the templates directory.
Finally you can deploy the common stack with the Salt master, SSH key and private network.
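For example, using the Heat client; the template and environment file names below are assumptions, so check the templates and env directories in the cloned repository for the actual names:

# file names are examples only - adjust to the actual files in the repository
heat stack-create -f templates/salt_single_public.hot -e env/salt_single_public.env salt_single_public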
If everything goes right, the stack should be ready in a few minutes. You can verify this by running the following commands:
heat stack-list
nova list
You should also be able to log in as root to the public IP provided by the nova list command.
Now you can deploy the OpenStack cluster:
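A corresponding stack-create call might look like this (again, the file names are assumptions to be adjusted to the repository contents):

# file names are examples only
heat stack-create -f templates/openstack_cluster_public.hot -e env/openstack_cluster_public.env openstack_cluster_public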
When the cluster is deployed, you should be able to log in to the instances from the Salt master node by forwarding your SSH agent.
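A minimal sketch of such a login, assuming the master's floating IP from nova list and that the controller hostnames (e.g. ctl01) resolve from the master:

ssh -A root@<salt_master_floating_ip>
ssh ctl01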
Deploy Salt master
Log in to the cfg01 node and run highstate to ensure everything is set up correctly.
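For example, directly on cfg01 (salt-call applies the highstate locally on the master itself):

salt-call state.highstate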
Then you should be able to see all Salt minions.
salt '*' grains.get ipv4
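If some minions are missing from the output, a quick connectivity check is:

salt '*' test.ping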
Deploy control nodes
First execute the basic states on all nodes to ensure the Salt minion, base system and OpenSSH are set up.
salt '*' state.sls linux,salt,openssh,ntp
Next you can deploy the basic services:
- keepalived - this service will set up the virtual IP on the controllers
- RabbitMQ server
- GlusterFS server service
salt 'ctl*' state.sls keepalived,rabbitmq,glusterfs.server.service
Now you can deploy the Galera MySQL and GlusterFS cluster, node by node.
salt 'ctl01*' state.sls glusterfs.server,galera
salt 'ctl02*' state.sls glusterfs.server,galera
salt 'ctl03*' state.sls glusterfs.server,galera
Next you need to ensure that the GlusterFS volumes are mounted. Permission errors are OK at this point, because some users and groups do not exist yet.
salt 'ctl*' state.sls glusterfs.client
Finally you can execute highstate to deploy the remaining services. Again, run this node by node.
salt 'ctl01*' state.highstate
salt 'ctl02*' state.highstate
salt 'ctl03*' state.highstate
Everything should be up and running now. You should execute a few checks before continuing. Run the following checks on one or all control nodes.
- Check GlusterFS status:
gluster peer status
gluster volume status
- Check Galera status (execute on one of the controllers):
mysql -pworkshop -e'SHOW STATUS;'
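To verify specifically that all three controllers have joined the Galera cluster, you can query the wsrep status variables; wsrep_cluster_size should report 3 in this lab:

mysql -pworkshop -e"SHOW STATUS LIKE 'wsrep_cluster_size';"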
- Check OpenContrail status:
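OpenContrail installations ship the contrail-status utility; all listed services should report active:

contrail-status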
- Check OpenStack services:
nova-manage service list
cinder-manage service list
- Source keystone credentials and try Nova API:
source keystonerc
nova list
Deploy compute nodes
Simply run highstate (ideally twice, so that all dependencies are applied):
salt 'cmp*' state.highstate
Dashboard and support infrastructure
Web and metering nodes can be deployed by running highstate:
salt 'web*' state.highstate
salt 'mtr*' state.highstate
On the monitoring node, you need to set up Git first:
salt 'mon*' state.sls git
salt 'mon*' state.highstate