Automation Ansible

Sharing Our Ansible Experience

Nilsson 2021/03/15 09:00:00

Environment

CentOS 7.7/k8s 1.18.3/cri-o/ansible
master01/k9s/helm elite-erp-ap3
master02/worker01 elite-erp-ap1
master03/worker02 elite-erp-ap2

Structure Overview

playbooks: Playbooks are Ansible's scripts, and they are far more powerful than a traditional shell script. Through pre-written playbooks, each Managed Node carries out the specified plays and tasks.

templates: With variables and templates defined in advance, we can use them to dynamically generate remote shell scripts, configuration files, and so on.

inventory: the word itself means a detailed catalogue or list. Here we can treat it as a host list: through it we define each Managed Node's alias, IP address, connection details, and group membership.

├── inventory
│   ├── group_vars
│   │   └── all.yaml
│   └── hosts
├── playbooks
│   ├── crio.yaml
│   ├── flannel.yaml
│   ├── helm-install.yaml
│   ├── install-all.yaml
│   ├── install-haproxy.yaml
│   ├── install-keepalived.yaml
│   ├── install-new-worker-node.yaml
│   ├── k9s-install.yaml
│   ├── kubeadm-init-master.yaml
│   ├── kubeadm-join-masters.yaml
│   ├── #kubeadm-join-workers.yaml#
│   ├── kubeadm-join-workers.yaml
│   ├── kubeadm-prerequisite.yaml
│   ├── kubernetes-dashboard.yaml
│   ├── repos.yaml
│   ├── roles
│   │   ├── crio
│   │   │   ├── tasks
│   │   │   │   └── main.yaml
│   │   │   └── templates
│   │   │       ├── crio.conf.j2
│   │   │       ├── kubelet.j2
│   │   │       └── registries.conf.j2
│   │   ├── flannel
│   │   │   ├── tasks
│   │   │   │   └── main.yaml
│   │   │   └── templates
│   │   │       └── kube-flannel.yaml.j2
│   │   ├── helm-install
│   │   │   ├── tasks
│   │   │   │   └── main.yaml
│   │   │   └── templates
│   │   │       └── helm-rbac.yaml
│   │   ├── install-haproxy
│   │   │   ├── tasks
│   │   │   │   └── main.yaml
│   │   │   └── templates
│   │   │       └── haproxy.cfg.j2
│   │   ├── install-keepalived
│   │   │   ├── tasks
│   │   │   │   └── main.yaml
│   │   │   └── templates
│   │   │       └── keepalived.conf.j2
│   │   ├── k9s-install
│   │   │   ├── tasks
│   │   │   │   └── main.yaml
│   │   │   └── templates
│   │   │       └── helm-rbac.yaml
│   │   ├── kubeadm-init-master
│   │   │   ├── tasks
│   │   │   │   └── main.yaml
│   │   │   └── templates
│   │   │       └── kubeadm-config.yaml.j2
│   │   ├── kubeadm-join-masters
│   │   │   └── tasks
│   │   │       ├── #main.yaml#
│   │   │       └── main.yaml
│   │   ├── kubeadm-join-workers
│   │   │   └── tasks
│   │   │       └── main.yaml
│   │   ├── kubeadm-prerequisite
│   │   │   ├── tasks
│   │   │   │   └── main.yaml
│   │   │   └── templates
│   │   │       ├── chrony.conf
│   │   │       └── kubelet
│   │   ├── kubernetes-dashboard
│   │   │   ├── tasks
│   │   │   │   └── main.yaml
│   │   │   └── templates
│   │   │       ├── admin-user-binding.yaml
│   │   │       ├── dashboard-user.yaml
│   │   │       └── kubernetes-dashboard.yaml.j2
│   │   ├── repos
│   │   │   └── tasks
│   │   │       └── main.yaml
│   │   └── update-hosts
│   │       ├── tasks
│   │       │   └── main.yaml
│   │       └── templates
│   │           └── hosts.j2
│   └── update-hosts.yaml

Examples

inventory:

inventory/group_vars/all.yaml

This is where we define the host information for each of our k8s nodes, along with the values of other variables. The benefit of writing it this way is that the variable names stay fixed while their values can be adjusted flexibly.

k8s_version: 1.18.3

imageRepository: k8s.gcr.io

keepalived_VIP: 10.20.30.178
keepalived_interface: eth0

master01: elite-erp-ap3
master01_ip: 10.20.30.168

master02: elite-erp-ap1
master02_ip: 10.20.30.165

master03: elite-erp-ap2
master03_ip: 10.20.30.166

workers01: elite-erp-ap1
workers01_ip: 10.20.30.165

workers02: elite-erp-ap2
workers02_ip: 10.20.30.166
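To sketch how these values get consumed (a hypothetical task, not one from this repository), a role can reference k8s_version directly, for example when pinning a package version:

```yaml
# Hypothetical task: group_vars values such as k8s_version are visible to
# every role, so a version bump in all.yaml propagates everywhere it is used.
- name: Install a pinned kubeadm version
  yum:
    name: "kubeadm-{{ k8s_version }}"
    state: present
```

Changing k8s_version in inventory/group_vars/all.yaml would then update every task that interpolates it, without touching the roles themselves.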

inventory/hosts

This file defines each host's role within our Kubernetes cluster, along with related parameters.

 [k8s-master-primary]
 elite-erp-ap3 keepalived_state=MASTER keepalived_priority=100

 [k8s-master-replicas]
 elite-erp-ap1 keepalived_state=BACKUP keepalived_priority=50
 elite-erp-ap2 keepalived_state=BACKUP keepalived_priority=50

 [k8s-masters:children]
 k8s-master-primary
 k8s-master-replicas

 [k8s-workers]
 elite-erp-ap[1:2]

 [k8s-nodes:children]
 k8s-masters
 k8s-workers

 [new-worker-node]
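The per-host variables set here (keepalived_state, keepalived_priority) become ordinary variables inside any play targeting these hosts, and the range pattern elite-erp-ap[1:2] expands to elite-erp-ap1 and elite-erp-ap2. A minimal sketch of using those host vars (our own illustration, not part of the repository):

```yaml
# Hypothetical play: host vars from inventory/hosts resolve per host,
# so each master reports its own keepalived role and priority.
- hosts: k8s-masters
  tasks:
    - name: Show this node's keepalived settings
      debug:
        msg: "{{ inventory_hostname }}: {{ keepalived_state }} (priority {{ keepalived_priority }})"
```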

playbooks

EXAMPLE 1: playbooks/crio.yaml

This playbook installs crio. The hosts it runs against are passed in through the variable "{{ target }}", and the roles entry maps to the work defined under playbooks/roles/crio/.

Basically every host will need this playbook, so the target is designed to be passed in as a variable, which gives us more flexibility.

crio.yaml
 - hosts: "{{ target }}"
   become: yes
   roles:
     - crio
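Because the play cannot run with target undefined, it is invoked with something like ansible-playbook -i inventory/hosts playbooks/crio.yaml -e "target=k8s-nodes". One optional refinement (our own suggestion, not in the original playbook) is to give the variable a default:

```yaml
# Hypothetical variant: fall back to the k8s-nodes group when no
# -e "target=..." is supplied on the command line.
- hosts: "{{ target | default('k8s-nodes') }}"
  become: yes
  roles:
    - crio
```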

EXAMPLE 2: playbooks/kubeadm-init-master.yaml

This playbook runs the kubeadm init of the cluster. The hosts it runs against are fixed to k8s-master-primary, and the roles entry maps to the work defined under playbooks/roles/kubeadm-init-master/.

This playbook pins down which host performs the Kubernetes primary-master role, so we need to have the whole cluster architecture thought through when we first design the inventory.

kubeadm-init-master.yaml
 - hosts: k8s-master-primary
   become: yes
   roles:
     - kubeadm-init-master

EXAMPLE 3: playbooks/install-all.yaml

This playbook is not written like the ones above; instead it uses import to pull a series of finished playbooks into the install-all workflow.

When writing Ansible we usually start with individual playbooks, then finally combine several of them into a single workflow; just as a play never has only one scene, many shots and roles perform together and are edited into one complete production.

install-all.yaml
 - import_playbook: update-hosts.yaml
 - import_playbook: repos.yaml
 - import_playbook: crio.yaml
 - import_playbook: install-haproxy.yaml
 - import_playbook: install-keepalived.yaml
 - import_playbook: kubeadm-prerequisite.yaml
 - import_playbook: kubeadm-init-master.yaml
 - import_playbook: flannel.yaml
 - import_playbook: kubeadm-join-masters.yaml
 - import_playbook: kubeadm-join-workers.yaml
 - import_playbook: kubernetes-dashboard.yaml
 - import_playbook: helm-install.yaml
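import_playbook is resolved statically at parse time, so the imported plays run strictly in the order listed. If a step should be skippable, a condition can still be attached; a hedged sketch (the install_dashboard variable is our own invention):

```yaml
# Hypothetical: gate one imported playbook behind a variable so that
# running with -e install_dashboard=false skips the dashboard step.
- import_playbook: kubernetes-dashboard.yaml
  when: install_dashboard | default(true) | bool
```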

roles:

playbooks/roles/crio/tasks/main.yaml

This defines all the tasks for installing crio:
1. Copy crio from the Ansible host's sourceDIR to the specified host(s) and extract it into destDIR.
2. Copy the prepared kubelet template (kubelet.j2) to /etc/default/kubelet on the specified host(s).
3. Configure the image repo crio pulls from.
4. Start cri-o.

main.yaml
 - name: untar crio.tar
   unarchive:
     src: /opt/k8s-playbooks/source/crio.tar
     dest: /

 
 - name: config kubelet
   template:
     src: kubelet.j2
     dest: /etc/default/kubelet

 - name: change pause image url in /etc/crio.conf
   template:
     src: crio.conf.j2
     dest: /etc/crio/crio.conf

 - name: start cri-o
   systemd:
     state: started
     name: cri-o
     enabled: yes
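One thing worth noting: as written, the unarchive task re-extracts crio.tar on every run. A possible tweak (our assumption; /usr/local/bin/crio is a guess at where the binary lands) uses the module's creates argument to make re-runs cheap:

```yaml
# Hypothetical idempotence guard: skip extraction when the marker file
# already exists on the target host.
- name: untar crio.tar
  unarchive:
    src: /opt/k8s-playbooks/source/crio.tar
    dest: /
    creates: /usr/local/bin/crio
```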

kubeadm-init-master

Installs the Kubernetes master step by step.

 - name: "Create kubeadm init config yaml"
   template:
     src: kubeadm-config.yaml.j2
     dest: /tmp/kubeadm-config.yaml
     mode: 0644

 - name: restart haproxy and keepalived
   shell: "systemctl restart \"{{ item }}\"; sleep 10"
   with_items:
     - haproxy
     - keepalived

 - name: wait for port 8443 become LISTEN state
   wait_for:
     port: 8443
     delay: 10
     timeout: 30

 - name: Kubeadm init
   shell: kubeadm init --cri-socket=/var/run/crio/crio.sock --config=/tmp/kubeadm-config.yaml --upload-certs --v=5 > /tmp/kubeadm.log
   register: rslt
   ignore_errors: yes

 - name: Store init output
   action: copy content="{{ rslt.stdout }}" dest="/etc/kubernetes/kubeadm-init.stdout"

 - name: Create .kube folder
   file:
     path: "/root/.kube"
     state: directory
     owner: "root"

 - name: Copy admin.conf to .kube folder
   copy:
     src: /etc/kubernetes/admin.conf
     dest: "/root/.kube/config"
     owner: "root"
     remote_src: yes

 - name: "Fetching Kubernetes Master PKI files from primary master"
   fetch:
     src: /etc/kubernetes/pki/{{item}}
     dest: /tmp/kubeadm-ha/pki/{{item}}
     flat: yes
   with_items:
     - ca.crt
     - ca.key
     - sa.key
     - sa.pub
     - front-proxy-ca.crt
     - front-proxy-ca.key

 - name: "Fetching Kubernetes Master ETCD files from primary master"
   fetch:
     src: /etc/kubernetes/pki/etcd/{{item}}
     dest: /tmp/kubeadm-ha/pki/etcd/{{item}}
     flat: yes
   with_items:
     - ca.crt
     - ca.key

 - name: "Fetching Kubernetes Master Admin files from primary master"
   fetch:
     src: /etc/kubernetes/{{item}}
     dest: /tmp/kubeadm-ha/{{item}}
     flat: yes
   with_items:
     - admin.conf
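After init, the replica masters and workers still need a join command. This role fetches the PKI files used by the joining masters; for workers, a typical pattern (a sketch of the usual approach, not necessarily what kubeadm-join-workers/tasks/main.yaml does) is to generate the token on the primary and reuse it in later plays:

```yaml
# Hypothetical task on the primary master: print a fresh join command and
# register it so later plays can hand it to the worker nodes.
- name: Get kubeadm join command
  shell: kubeadm token create --print-join-command
  register: kubeadm_join_cmd
```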

Walkthrough

We will run a few playbooks without disrupting the services already in operation, so everyone can get a feel for the difference between automated deployment with Ansible and doing each step by hand as before.

Testing connectivity

ansible -i inventory/hosts all -m ping

This command uses the ping module to check connectivity between our target hosts and the Ansible jump host. Receiving pong back means the connection between Ansible and the target hosts is open. Next we use firstplaybook.yaml to do the same thing.

[nilsson@nilsson offline-ansible-kubernetes-kubeadm-ha-master]$ cat playbooks/firstplaybook.yaml
---
 - name: "Get ping response"
   hosts: all
   tasks:
   - action: ping
     register: hello
   - debug: msg="{{ hello.ping }}"

playbooks/kubeadm-prerequisite.yaml

Example tasks:

- name: Remove swapfile from /etc/fstab
  mount:
    name: swap
    fstype: swap
    state: absent

- name: Turn swap off
  shell: swapoff -a

- name: disable firewalld
  systemd:
    name: firewalld
    state: stopped
    enabled: no

- name: Set Enforce
  command: setenforce 0
  ignore_errors: True

- name: copy chronyd config
  template:
    src: chrony.conf
    dest: /etc/chrony.conf

- name: Start chronyd
  systemd:
    name: chronyd
    state: started
    enabled: true

- name: Install k8s packages
  become: yes
  yum:
    name: "{{ packages }}"
    enablerepo: k8s-repo
    state: present
  vars:
    packages:
    - kubeadm
    - kubectl
    - kubelet

- name: Add vm swappiness
  lineinfile:
    path: /etc/sysctl.d/k8s.conf
    line: 'vm.swappiness = 0'
    state: present
    create: yes

- name: Add vm overcommit_memory
  lineinfile:
    path: /etc/sysctl.d/k8s.conf
    line: 'vm.overcommit_memory = 1'
    state: present
    create: yes

- name: Load br_netfilter module
  modprobe:
    name: br_netfilter
    state: present
  register: br_netfilter

- name: Add netbridge config ip4
  lineinfile:
    path: /etc/sysctl.d/k8s.conf
    line: 'net.bridge.bridge-nf-call-iptables = 1'
    state: present
    create: yes

- name: Add net.ipv4.ip_forward
  lineinfile:
    path: /etc/sysctl.d/k8s.conf
    line: 'net.ipv4.ip_forward = 1'
    state: present
    create: yes

- name: Increase net ipv4 tcp_max_syn_backlog
  lineinfile:
    path: /etc/sysctl.d/k8s.conf
    line: 'net.ipv4.tcp_max_syn_backlog = 2621440'
    state: present
    create: yes

- name: update sysctl
  command: sysctl --system

- name: copy kubelet config
  template:
    src: kubelet
    dest: /etc/sysconfig/kubelet

- name: Start kubelet
  systemd:
    name: kubelet
    state: started
    enabled: true
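The four lineinfile tasks above write sysctl keys one line at a time, and a separate command task then applies them. A more compact equivalent (an alternative we suggest, not what the repository uses) is the sysctl module, which sets, persists, and reloads a key in one task:

```yaml
# Hypothetical alternative to lineinfile + "sysctl --system": set the key,
# persist it to the k8s.conf drop-in, and reload immediately.
- name: Enable bridged IPv4 traffic to iptables chains
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: '1'
    sysctl_file: /etc/sysctl.d/k8s.conf
    state: present
    reload: yes
```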

Because Ansible operates from a jump host, it needs to establish SSH connections. We can pass the become password with --extra-vars "ansible_sudo_pass=yourPassword" or with --ask-become-pass; it can also be supplied through an encrypted file.

Additional Notes


templates:
├── roles
│   ├── crio
│   │   ├── tasks
│   │   │   └── main.yaml
│   │   └── templates
│   │       ├── crio.conf.j2
│   │       ├── kubelet.j2
│   │       └── registries.conf.j2

templates: stores templates that have already been written. Taking kubelet.j2 as an example: in this file we have defined the arguments needed to start kubelet, and it uses .j2 as its extension, so its content is as follows:

KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true" --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m

In the YAML files we use many modules:

become: makes the task attempt to escalate privileges (sudo).
fetch: pulls files from the remote host(s) back to the Ansible control node.
unarchive: copies a local archive to the specified remote directory and extracts it there.
systemd: the service manager on current GNU/Linux systems; with name, state, and enabled we specify which system service to control, start it (started), and enable or disable it.
lineinfile: edits a remote file (e.g. a config file): given a path and the line in question, it makes the corresponding change.
shell: runs a command on the remote host.
command: similar to shell above, but without going through a shell.
modprobe: loads the required Linux kernel modules.

Nilsson
Reads and writes articles without limits... because I'm not that good at expressing myself =.=