
Creating an OpenStack development environment with an existing External Network via Packstack

Packstack provides a very simple and well-automated process for preparing development environments for OpenStack. I’d like to document here some reproducible steps I’ve been using to set up these sorts of environments.
The process for running this on a single node is very straightforward:
http://haidv204.blogspot.com/2018/06/how-to-install-openstack-using-rdo.html
Expanding this setup to multiple nodes is similarly straightforward: replace the --allinone flag with --install-hosts=${controller_node_ip},${compute_node_1_ip},${compute_node_2_ip}... and Packstack fires off a multi-node setup. By default, however, this does not account for existing external networks or other parts of your network outside of OpenStack that you’d like these resources to be able to connect to.
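For reference, a minimal multi-node run on CentOS 7 might look like the following sketch (the RDO release package shown and the IP addresses are placeholders for your own environment):
# install the RDO release repository and Packstack on the controller
sudo yum install -y centos-release-openstack-queens
sudo yum update -y
sudo yum install -y openstack-packstack
# the first host becomes the controller, the rest become compute nodes
sudo packstack --install-hosts=10.0.0.10,10.0.0.11,10.0.0.12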
The servers in use in my environment are:
  1. Controller: 16 GB RAM / 100 GB disk, 8 vCPUs. Note: this will work with far fewer resources. I deployed this into a public cloud, so I also like to specify the virtual instance’s private/LAN IP address as the controller IP. This means it is accessible only over that interface, so from a client machine you can use something like an SSH tunnel to the controller (see the sketch after this list), a VPN into that network, or the client tooling locally on the controller itself.
  2. Compute (2 nodes): 32 GB RAM / 2 TB disk, 8 cores. Glance images and Cinder volumes will be stored here as well, so these should be provisioned with adequate storage. In my case, these are physical machines that are not on the same network as the controller, so I specified the web-facing addresses for the hosts. Typically, if they are on the same network, they can still be provisioned using their LAN addresses (as long as those are reachable from the controller) and still bridge the public-facing interface to create these external networks.
  3. Network: Packstack, in our example, won’t do things like creating dedicated Neutron nodes, but for each of my compute nodes I had provisioned a /29 subnet for use in this example.
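As an example of the SSH tunnel mentioned above, a minimal sketch for reaching the Horizon dashboard over the controller’s LAN address from a client machine (the hostname and private IP here are placeholders):
# forward a local port to Horizon (served on port 80 by default) via the controller's private address
ssh -L 8080:10.0.0.10:80 centos@controller.example.com
# then browse to http://localhost:8080/dashboard on the client machine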
— — — —
Note: If your controller and the rest of your nodes are on different networks, it might also be helpful to have Packstack either create a dedicated Neutron node on that network or install the network services on the compute nodes themselves.
Before running Packstack, you can do this by generating the answers file:
packstack --gen-answer-file=openstack-$(date +%F)-answers
and modifying the option CONFIG_NETWORK_HOSTS to reflect either another node on that target network or one of the compute hosts.
The resulting answers file will contain a section like this:
# Server on which to install OpenStack services specific to the
# controller role (for example, API servers or dashboard).
CONFIG_CONTROLLER_HOST=${CONTROLLER}
# List the servers on which to install the Compute service.
CONFIG_COMPUTE_HOSTS=${COMPUTE_1},${COMPUTE_NETWORK} (or ${COMPUTE_2}, and so on)
# List of servers on which to install the network service such as
# Compute networking (nova network) or OpenStack Networking (neutron).
CONFIG_NETWORK_HOSTS=${COMPUTE_NETWORK}
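For instance, you could point CONFIG_NETWORK_HOSTS at one of the compute nodes with a quick sed edit and then run Packstack against the modified answers file (the IP below is a placeholder):
sed -i 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=10.0.0.11/' openstack-$(date +%F)-answers
packstack --answer-file=openstack-$(date +%F)-answers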
— — — —
My compute nodes run CentOS 7.4 and, in this case, have only a single NIC, so I grab my interface name, enp0s25, for use with the external network bridge on each of the nodes (this should be the same on all of them) and append the following to the --install-hosts command:
--os-neutron-ovs-bridge-mappings=extnet:br-ex --os-neutron-ovs-bridge-interfaces=br-ex:enp0s25 --os-neutron-ml2-type-drivers=vxlan,flat
which allows you to bridge external networks to your instances (and also do things like creating a floating IP pool).
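Putting that together, the full invocation might look something like this sketch (again, the IP addresses are placeholders):
packstack --install-hosts=10.0.0.10,10.0.0.11,10.0.0.12 \
 --os-neutron-ovs-bridge-mappings=extnet:br-ex \
 --os-neutron-ovs-bridge-interfaces=br-ex:enp0s25 \
 --os-neutron-ml2-type-drivers=vxlan,flat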
You’ll also, once the above has completed, want to set up a corresponding br-ex interface on your compute nodes (i.e., in /etc/sysconfig/network-scripts/ifcfg-br-ex):
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=${YOUR IP}
NETMASK=${NETMASK}
GATEWAY=${GATEWAY} 
ONBOOT=yes
and modify the current interface’s config (e.g., eth0 or enp0s25) to look like this:
DEVICE=${CURRENT DEVICE}
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
Restart networking, and then verify that the IP has been bound to the correct interface, like so:
[jmarhee@compute-01 ~]$ sudo service network restart
Restarting network (via systemctl):                        [  OK  ]
[jmarhee@compute-01 ~]$ ip addr
...
33: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 00:23:8b:77:74:90 brd ff:ff:ff:ff:ff:ff
    inet ADDRESS/29 brd BROADCAST scope global br-ex
...
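It can also be worth confirming that Open vSwitch has picked up the physical interface as a port on the bridge, for example:
sudo ovs-vsctl list-ports br-ex
sudo ovs-vsctl show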
Once Packstack has completed running, you can create the network by logging into the controller and sourcing your keystonerc_admin file:
# source keystonerc_admin
retrieving the target tenant ID:
export SERVICES_TENANT_ID=$(openstack project list | grep services | awk '{print $2}')
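If you prefer to avoid the grep/awk pipeline, the openstack client’s output formatting options can do the same thing:
export SERVICES_TENANT_ID=$(openstack project show services -f value -c id)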
This saves the services tenant ID to the variable SERVICES_TENANT_ID (I use the services tenant because I want this network available to instances managed by a non-administrative user; replace services with admin if you plan to work only as the admin user). You can then create the network itself:
neutron net-create \
--tenant-id $SERVICES_TENANT_ID \
--router:external=True \
myNetwork
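Note: if you want this network to map onto the extnet flat network defined by the bridge mappings above (rather than a tenant VXLAN network), you will likely also want the provider flags; a sketch:
neutron net-create \
--tenant-id $SERVICES_TENANT_ID \
--provider:network_type flat \
--provider:physical_network extnet \
--router:external=True \
myNetwork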
and a subnet:
neutron subnet-create \
--tenant-id $SERVICES_TENANT_ID \
--name mySubnet \
--allocation-pool start=88.23.24.3,end=88.23.24.6 \
--disable-dhcp \
myNetwork 88.23.24.1/29
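Since no --gateway was passed, Neutron will use 88.23.24.1 as the subnet’s gateway by default, leaving .3 through .6 in the allocation pool for instances. You can sanity-check the result before booting anything:
neutron net-show myNetwork
neutron subnet-show mySubnet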
The remaining work to launch instances can be done via the UI (adding images, SSH key pairs, and whatever customizations you’d like to make to your security groups), but the work of allocating a network to an instance has been done here, if you’d like to use an external network address. Using such a network from the CLI instead might look something like:
MYNET_ID=$(neutron net-list | grep myNetwork | awk '{print $2}')
nova boot \
 --flavor m1.small --image cirros \
 --nic net-id=$MYNET_ID \
 --security-group default \
 --key-name mykey \
 myInstance
and it should come online with an address from that allocation pool you created above.
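Once the instance is active, you can confirm the address assignment:
nova list
openstack server show myInstance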
You can read more about your options for creating networks, as well as creating floating IP pools from such a subnet, here:
https://docs.openstack.org/liberty/install-guide-rdo/launch-instance-networks-public.html
https://www.rdoproject.org/networking/floating-ip-range/
The linked documentation has some excellent resources on branching out your networking and using your deployment in various scenarios, which can be enabled via Packstack at deploy time.
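If you would rather keep instances on a private tenant network and use this external network only for floating IPs (roughly the flow the linked RDO guide describes), a sketch of that might be (the router and subnet names are placeholders):
# create a router and set the external network as its gateway
neutron router-create router1
neutron router-gateway-set router1 myNetwork
# attach your tenant subnet to the router
neutron router-interface-add router1 private_subnet
# allocate a floating IP from myNetwork and attach it to an instance
neutron floatingip-create myNetwork
openstack server add floating ip myInstance ${FLOATING_IP}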

