Building a Private Cloud with OpenStack Rocky! (1)

Overview

If you just want to poke at Horizon (the OpenStack web UI), conjure-up, provided by the Ubuntu community, can be tried on a single node with a single NIC, so it is a good starting point. With one node and two NICs, starting out with devstack is another option. This article instead aims at a more production-like deployment split across nodes, so you can learn how each component works as you configure it.
Following the official documentation, we will build a "controller" node and a "compute" node. The plan is to make the setup redundant later on. MySQL is assumed as a prerequisite, so it is a good idea to refer to the relevant documentation and set up a separate Galera cluster beforehand.

Environment

Software

  • Ubuntu 18.04.1 Server 64bit
  • Openstack Rocky

Hardware

Note: the controller also worked fine as a virtual machine.

  • 2x CPU
  • 8 GB RAM
  • 30 GB SSD
  • 2x NIC

IP layout

  • Controller ( vm-nfj-osctrln1 )
  • Public (external): 10.1.55.11/16
  • Management (internal): 10.2.55.11/16
  • Compute ( vm-nfj-oscompn1 )
  • Public (external): 10.1.55.21/16
  • Management (internal): 10.2.55.21/16

Installing the OS

Install Ubuntu 18.04. A minimal installation is fine.

Network configuration

Set a static IP with netplan (which, arguably, has made this harder to follow than it used to be). Watch the YAML indentation.

Edit /etc/netplan/50-cloud-init.yaml

# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        ens160:
            addresses: [10.1.55.11/16]
            gateway4: 10.1.1.254
            dhcp4: no
            optional: true
            nameservers:
                search: [neoflow.lan]
                addresses: [10.1.1.1, 10.1.1.2]
        ens192:
            addresses: [10.2.55.11/16]
            dhcp4: no
            optional: true
    version: 2
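Once the file is saved, the new addresses can be applied and verified along these lines (a sketch; the interface names ens160/ens192 and addresses match the controller example above):

```shell
# Validate and apply the netplan configuration
sudo netplan apply

# Confirm the static addresses were assigned
ip -4 addr show ens160    # expect 10.1.55.11/16
ip -4 addr show ens192    # expect 10.2.55.11/16

# Confirm the default route points at the gateway
ip route show default     # expect "default via 10.1.1.254"
```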

Edit /etc/hosts

Verify that the nodes can reach each other by hostname, e.g. `ping -c 3 vm-nfj-osctrln1`.

$ cat /etc/hosts
127.0.0.1       localhost.localdomain   localhost
::1             localhost6.localdomain6 localhost6

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

# controller
10.2.55.11      vm-nfj-osctrln1

# compute
10.2.55.21      vm-nfj-oscompn1

Configuring the controller

Time synchronization

Install and configure chrony.

$ sudo apt install chrony

Append the following lines. The allow directive permits connections from the compute node so it can use this server as its time source.

$ cat /etc/chrony/chrony.conf
~ (snip) ~
# openstack controller
server ntp.nict.jp iburst
allow 10.2.0.0/16

Restart the service.

$ sudo service chrony restart

Verify synchronization

$ sudo chronyc sources
210 Number of sources = 9
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^- chilipepper.canonical.com     2   6    77    20  +2697us[+2697us] +/-  215ms
^- pugot.canonical.com           2   6    77    20  +3168us[+3168us] +/-  209ms
^- golem.canonical.com           2   6    77    22  +1771us[+1771us] +/-  209ms
^- alphyn.canonical.com          2   6    77    22  +7370us[+7370us] +/-  224ms
^+ ntp.paina.net                 2   6    77    25   -112us[  +95us] +/-   90ms
^+ ntp-a3.nict.go.jp             1   6    77    25   -479us[ -272us] +/-   60ms
^+ chobi.paina.net               2   6    77    22   +276us[ +276us] +/-   92ms
^+ sv1.localdomain1.com          2   6    77    23   -178us[ -178us] +/-   92ms
^* ntp-a2.nict.go.jp             1   6    77    24   +169us[ +376us] +/-   59ms
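Besides `chronyc sources`, `chronyc tracking` is a quick way to confirm the clock is actually locked to a source (a sketch; the exact output fields vary by chrony version):

```shell
# "Leap status" should read "Normal" and the
# "System time" offset should be small once synced
sudo chronyc tracking

# Per-source drift and offset statistics
sudo chronyc sourcestats
```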

Installing OpenStack Rocky

Add the repository and install the python-openstackclient package

This makes the openstack command available.

$ sudo apt install software-properties-common
$ sudo add-apt-repository cloud-archive:rocky
$ sudo apt update
$ sudo apt dist-upgrade
$ sudo apt install python-openstackclient

Preparing the database

If you install the database on the controller node, follow the official documentation to install and configure the required packages. This article uses an existing Galera Cluster instead.
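Before moving on, it is worth confirming that the controller can actually reach the Galera cluster. A minimal sketch, assuming a hypothetical cluster address db.neoflow.lan and the mysql-client package:

```shell
# Install the MySQL client if it is not already present
sudo apt install mysql-client

# Confirm connectivity and that the cluster is formed;
# wsrep_cluster_size should equal the number of Galera nodes
mysql -h db.neoflow.lan -u root -p \
  -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
```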

A note on database passwords

For security, generate password strings from the output of tools such as pwgen or `openssl rand -hex 10`, so install one of them. To generate a single secure 10-character password including symbols, pass 10 -1sy as the arguments.

$ sudo apt install pwgen
$ pwgen 10 -1sy
91b1z0Bm5/
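If pwgen is not available, the `openssl rand` approach mentioned above works as well; note the character set differs from pwgen (hex digits only, 20 characters from 10 random bytes):

```shell
# 10 random bytes rendered as 20 hexadecimal characters
PW=$(openssl rand -hex 10)
echo "$PW"

# Sanity-check the length
test "${#PW}" -eq 20 && echo "ok"
```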

Preparing the message queue

OpenStack uses a message queue for tasks such as coordinating operations and status information between services. Several message queue implementations exist, but RabbitMQ is recommended because most distributions support it.

Installing RabbitMQ

$ sudo apt install rabbitmq-server

Adding the openstack user

$ sudo rabbitmqctl add_user openstack 91b1z0Bm5/
Creating user "openstack"

Setting permissions

$ sudo rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
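To confirm the user and permissions were registered correctly, rabbitmqctl can list them back (a sketch; run on the controller):

```shell
# List RabbitMQ users; "openstack" should appear
sudo rabbitmqctl list_users

# Show the configure/write/read permissions on the default vhost;
# openstack should have ".*" for all three
sudo rabbitmqctl list_permissions -p /
```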

Preparing Memcached

The services' identity authentication mechanism uses Memcached to cache tokens. Running Memcached on the controller node appears to be the usual best practice.

Installing the Memcached packages

$ sudo apt install memcached python-memcache

Editing /etc/memcached.conf

Set the management-side IP in memcached.conf. If an existing -l 127.0.0.1 line is present, replace it.

$ cat /etc/memcached.conf 
# memcached default config file
# 2003 - Jay Bonci 
# This configuration file is read by the start-memcached script provided as
# part of the Debian GNU/Linux distribution.

# Run memcached as a daemon. This command is implied, and is not needed for the
# daemon to run. See the README.Debian that comes with this package for more
# information.
-d

# Log memcached's output to /var/log/memcached
logfile /var/log/memcached.log

# Be verbose
# -v

# Be even more verbose (print client commands as well)
# -vv

# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 64

# Default connection port is 11211
-p 11211

# Run the daemon as root. The start-memcached will default to running as root if no
# -u command is present in this config file
-u memcache

# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that memcached has, so make sure
# it's listening on a firewalled interface.
-l 10.2.55.11

# Limit the number of simultaneous incoming connections. The daemon default is 1024
# -c 1024

# Lock down all paged memory. Consult with the README and homepage before you do this
# -k

# Return error when memory is exhausted (rather than removing items)
# -M

# Maximize core file limit
# -r

# Use a pidfile
-P /var/run/memcached/memcached.pid

Restarting the Memcached service

$ sudo service memcached restart
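After the restart, you can check that memcached is listening on the management IP only (a sketch using ss and nc; nc comes from Ubuntu's netcat-openbsd package):

```shell
# memcached should be bound to 10.2.55.11:11211, not 0.0.0.0
sudo ss -tlnp | grep 11211

# Query basic stats over the management network
echo stats | nc -w 1 10.2.55.11 11211 | head -n 5
```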

Preparing etcd

OpenStack services can use etcd, a reliable distributed key-value store, for distributed key locking, storing configuration, keeping track of service liveness, and other scenarios.

Installing the etcd package

$ sudo apt install etcd

Editing /etc/default/etcd

Configure etcd so it is reachable via the controller node's management IP.

$ cat /etc/default/etcd 
## etcd(1) daemon options
## See "/usr/share/doc/etcd/Documentation/configuration.md.gz".

### Member Flags

##### -name
## Human-readable name for this member.
## default: host name returned by `hostname`.
## This value is referenced as this node's own entries listed in the `-initial-cluster`
## flag (Ex: `default=http://localhost:2380` or `default=http://localhost:2380,default=http://localhost:7001`).
## This needs to match the key used in the flag if you're using [static boostrapping](clustering.md#static).
# ETCD_NAME="hostname"
ETCD_NAME="vm-nfj-osctrln1"

##### -data-dir
## Path to the data directory.
# ETCD_DATA_DIR="/var/lib/etcd/default"
ETCD_DATA_DIR="/var/lib/etcd"

##### -wal-dir
## Path to the dedicated wal directory. If this flag is set, etcd will write the
## WAL files to the walDir rather than the dataDir. This allows a dedicated disk
## to be used, and helps avoid io competition between logging and other IO operations.
## default: ""
# ETCD_WAL_DIR

##### -snapshot-count
## Number of committed transactions to trigger a snapshot to disk.
## default: "10000"
# ETCD_SNAPSHOT_COUNT="10000"

##### -heartbeat-interval
## Time (in milliseconds) of a heartbeat interval.
## default: "100"
# ETCD_HEARTBEAT_INTERVAL="100"

##### -election-timeout
## Time (in milliseconds) for an election to timeout.
## See /usr/share/doc/etcd/Documentation/tuning.md
## default: "1000"
# ETCD_ELECTION_TIMEOUT="1000"

##### -listen-peer-urls
## List of URLs to listen on for peer traffic. This flag tells the etcd to accept
## incoming requests from its peers on the specified scheme://IP:port combinations.
## Scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd
## listens to the given port on all interfaces. If an IP address is given as
## well as a port, etcd will listen on the given port and interface.
## Multiple URLs may be used to specify a number of addresses and ports to listen on.
## The etcd will respond to requests from any of the listed addresses and ports.
## example: "http://10.0.0.1:2380"
## invalid example: "http://example.com:2380" (domain name is invalid for binding)
## default: "http://localhost:2380,http://localhost:7001"
# ETCD_LISTEN_PEER_URLS="http://localhost:2380,http://localhost:7001"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"

##### -listen-client-urls
## List of URLs to listen on for client traffic. This flag tells the etcd to accept
## incoming requests from the clients on the specified scheme://IP:port combinations.
## Scheme can be either http or https. If 0.0.0.0 is specified as the IP, etcd
## listens to the given port on all interfaces. If an IP address is given as
## well as a port, etcd will listen on the given port and interface.
## Multiple URLs may be used to specify a number of addresses and ports to listen on.
## The etcd will respond to requests from any of the listed addresses and ports.
## (ADVERTISE_CLIENT_URLS is required when LISTEN_CLIENT_URLS is set explicitly).
## example: "http://10.0.0.1:2379"
## invalid example: "http://example.com:2379" (domain name is invalid for binding)
## default: "http://localhost:2379,http://localhost:4001"
# ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://localhost:4001"
ETCD_LISTEN_CLIENT_URLS="http://10.2.55.11:2379"

##### -max-snapshots
## Maximum number of snapshot files to retain (0 is unlimited)
## default: 5
# ETCD_MAX_SNAPSHOTS="5"

##### -max-wals
## Maximum number of wal files to retain (0 is unlimited)
## default: 5
# ETCD_MAX_WALS="5"

##### -cors
## Comma-separated whitelist of origins for CORS (cross-origin resource sharing).
## default: none
# ETCD_CORS

### Clustering Flags
## For an explanation of the various ways to do cluster setup, see:
## /usr/share/doc/etcd/Documentation/clustering.md.gz
##
## The command line parameters starting with -initial-cluster will be
## ignored on subsequent runs of etcd as they are used only during initial
## bootstrap process.

##### -initial-advertise-peer-urls
## List of this member's peer URLs to advertise to the rest of the cluster.
## These addresses are used for communicating etcd data around the cluster.
## At least one must be routable to all cluster members.
## These URLs can contain domain names.
## example: "http://example.com:2380, http://10.0.0.1:2380"
## default: "http://localhost:2380,http://localhost:7001"
# ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380,http://localhost:7001"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.2.55.11:2380"

##### -initial-cluster
## initial cluster configuration for bootstrapping.
## The key is the value of the `-name` flag for each node provided.
## The default uses `default` for the key because this is the default for the `-name` flag.
## default: "default=http://localhost:2380,default=http://localhost:7001"
# ETCD_INITIAL_CLUSTER="default=http://localhost:2380,default=http://localhost:7001"
ETCD_INITIAL_CLUSTER="vm-nfj-osctrln1=http://10.2.55.11:2380"

##### -initial-cluster-state
## Initial cluster state ("new" or "existing"). Set to `new` for all members
## present during initial static or DNS bootstrapping. If this option is set to
## `existing`, etcd will attempt to join the existing cluster. If the wrong
## value is set, etcd will attempt to start but fail safely.
## default: "new"
# ETCD_INITIAL_CLUSTER_STATE="existing"
ETCD_INITIAL_CLUSTER_STATE="new"

##### -initial-cluster-token
## Initial cluster token for the etcd cluster during bootstrap.
## If you are spinning up multiple clusters (or creating and destroying a
## single cluster) with same configuration for testing purpose, it is highly
## recommended that you specify a unique initial-cluster-token for the
## different clusters.
## default: "etcd-cluster"
# ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"

##### -advertise-client-urls
## List of this member's client URLs to advertise to the rest of the cluster.
## These URLs can contain domain names.
## example: "http://example.com:2379, http://10.0.0.1:2379"
## Be careful if you are advertising URLs such as http://localhost:2379 from a
## cluster member and are using the proxy feature of etcd. This will cause loops,
## because the proxy will be forwarding requests to itself until its resources
## (memory, file descriptors) are eventually depleted.
## default: "http://localhost:2379,http://localhost:4001"
# ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://localhost:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://10.2.55.11:2379"

##### -discovery
## Discovery URL used to bootstrap the cluster.
## default: none
# ETCD_DISCOVERY

##### -discovery-srv
## DNS srv domain used to bootstrap the cluster.
## default: none
# ETCD_DISCOVERY_SRV

##### -discovery-fallback
## Expected behavior ("exit" or "proxy") when discovery services fails.
## default: "proxy"
# ETCD_DISCOVERY_FALLBACK="proxy"

##### -discovery-proxy
## HTTP proxy to use for traffic to discovery service.
## default: none
# ETCD_DISCOVERY_PROXY

### Proxy Flags

##### -proxy
## Proxy mode setting ("off", "readonly" or "on").
## default: "off"
# ETCD_PROXY="on"

##### -proxy-failure-wait
## Time (in milliseconds) an endpoint will be held in a failed state before being
## reconsidered for proxied requests.
## default: 5000
# ETCD_PROXY_FAILURE_WAIT="5000"

##### -proxy-refresh-interval
## Time (in milliseconds) of the endpoints refresh interval.
## default: 30000
# ETCD_PROXY_REFRESH_INTERVAL="30000"

##### -proxy-dial-timeout
## Time (in milliseconds) for a dial to timeout or 0 to disable the timeout
## default: 1000
# ETCD_PROXY_DIAL_TIMEOUT="1000"

##### -proxy-write-timeout
## Time (in milliseconds) for a write to timeout or 0 to disable the timeout.
## default: 5000
# ETCD_PROXY_WRITE_TIMEOUT="5000"

##### -proxy-read-timeout
## Time (in milliseconds) for a read to timeout or 0 to disable the timeout.
## Don't change this value if you use watches because they are using long polling requests.
## default: 0
# ETCD_PROXY_READ_TIMEOUT="0"

### Security Flags

##### -ca-file [DEPRECATED]
## Path to the client server TLS CA file.
## default: none
# ETCD_CA_FILE=""

##### -cert-file
## Path to the client server TLS cert file.
## default: none
# ETCD_CERT_FILE=""

##### -key-file
## Path to the client server TLS key file.
## default: none
# ETCD_KEY_FILE=""

##### -client-cert-auth
## Enable client cert authentication.
## default: false
# ETCD_CLIENT_CERT_AUTH

##### -trusted-ca-file
## Path to the client server TLS trusted CA key file.
## default: none
# ETCD_TRUSTED_CA_FILE

##### -peer-ca-file [DEPRECATED]
## Path to the peer server TLS CA file. `-peer-ca-file ca.crt` could be replaced
## by `-peer-trusted-ca-file ca.crt -peer-client-cert-auth` and etcd will perform the same.
## default: none
# ETCD_PEER_CA_FILE

##### -peer-cert-file
## Path to the peer server TLS cert file.
## default: none
# ETCD_PEER_CERT_FILE

##### -peer-key-file
## Path to the peer server TLS key file.
## default: none
# ETCD_PEER_KEY_FILE

##### -peer-client-cert-auth
## Enable peer client cert authentication.
## default: false
# ETCD_PEER_CLIENT_CERT_AUTH

##### -peer-trusted-ca-file
## Path to the peer server TLS trusted CA file.
## default: none
# ETCD_PEER_TRUSTED_CA_FILE

### Logging Flags
##### -debug
## Drop the default log level to DEBUG for all subpackages.
## default: false (INFO for all packages)
# ETCD_DEBUG

##### -log-package-levels
## Set individual etcd subpackages to specific log levels.
## An example being `etcdserver=WARNING,security=DEBUG`
## default: none (INFO for all packages)
# ETCD_LOG_PACKAGE_LEVELS


#### Daemon parameters:
# DAEMON_ARGS=""

Enabling and starting the etcd service

$ sudo systemctl enable etcd
Synchronizing state of etcd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable etcd
Created symlink /etc/systemd/system/etcd2.service → /lib/systemd/system/etcd.service.
$ sudo systemctl start etcd
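With the daemon running, the client URL configured above can be exercised (a sketch; the Ubuntu etcd package ships etcdctl, and the v3 API is selected via an environment variable):

```shell
# Point etcdctl at the client URL from /etc/default/etcd
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=http://10.2.55.11:2379

# Cluster membership and endpoint health
etcdctl member list
etcdctl endpoint health

# Round-trip a test key through the store
etcdctl put healthcheck ok
etcdctl get healthcheck
```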

Troubleshooting

I edited /etc/netplan/50-cloud-init.yaml but no IP is assigned

If sudo netplan apply finds an error, it reports the offending line number. It is YAML, so watch the indentation. In my case it was a spelling mistake: I had written address, but addresses is correct.

Next time

Next time we will install Keystone on the controller node.
