Deploying an ELK Logging Stack on Kubernetes

This article demonstrates how to install an ELK logging stack (Elasticsearch, Filebeat, Kibana) in a Kubernetes cluster to centralize the handling of logs from the cluster's nodes and containers.

Create an NFS Directory

Create a shared directory:

sudo mkdir /var/nfs/elasticsearch -p

Change the directory's owner and permissions:

sudo chown nobody:nogroup /var/nfs/elasticsearch
sudo chmod 777 /var/nfs/elasticsearch

Configure NFS:

sudo vi /etc/exports

Add the following line, which grants read-write access with synchronous writes and no subtree checking to clients in the given subnet:

/var/nfs/elasticsearch    192.168.11.0/24(rw,sync,no_subtree_check)

Note: replace the subnet above with the one your cluster nodes actually use.

After saving, run:

sudo exportfs -arvf
sudo systemctl restart nfs-kernel-server

List the exports:

showmount -e
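
To confirm the export actually works before handing it to Kubernetes, you can mount it temporarily from another machine (a quick sanity check; 192.168.11.16 is the NFS server address used in the PersistentVolume below, substitute your own):

sudo mount -t nfs 192.168.11.16:/var/nfs/elasticsearch /mnt
touch /mnt/write-test && ls -l /mnt
sudo umount /mnt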

Deploy Elasticsearch

Create a file named elasticsearch.yaml with the following content:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elasticsearch
subjects:
- kind: ServiceAccount
  name: elasticsearch
  namespace: ns-monitor
roleRef:
  kind: ClusterRole
  name: elasticsearch
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "elasticsearch-data-pv"
  labels:
    name: elasticsearch-data-pv
    release: stable
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /var/nfs/elasticsearch
    server: 192.168.11.16

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-pvc
  namespace: ns-monitor
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      name: elasticsearch-data-pv
      release: stable

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: ns-monitor
  labels:
    k8s-app: elasticsearch
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: elasticsearch:7.5.0
        name: elasticsearch
        resources:
          limits:
            cpu: 2
            memory: 4Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: "discovery.type"
          value: "single-node"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx2g"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data-volume
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: elasticsearch-data-volume
        persistentVolumeClaim:
          claimName: elasticsearch-data-pvc

---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: ns-monitor
spec:
  clusterIP: None
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch
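
A couple of notes before applying. The ClusterRoleBinding at the top references a ServiceAccount and a ClusterRole named elasticsearch that the file never defines; the StatefulSet runs under the namespace's default ServiceAccount, so that binding grants nothing and could be dropped. More importantly, everything targets the ns-monitor namespace, which the manifest does not create; if it does not exist in your cluster yet, create it first:

kubectl create namespace ns-monitor

Then apply the manifest: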
kubectl apply -f elasticsearch.yaml

Verify:

kubectl get pods -n ns-monitor elasticsearch-0
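
Once the pod reports Running, you can also talk to Elasticsearch directly. For example, port-forward the pod and query the standard cluster-health endpoint (run the curl from a second terminal):

kubectl -n ns-monitor port-forward elasticsearch-0 9200:9200

curl http://localhost:9200/_cluster/health?pretty

A status of green or yellow means the node is serving requests; on a single-node cluster, indices created with replicas will leave the status yellow, which is expected here.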

Deploy Kibana

Create a file named kibana.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: ns-monitor
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.5.0
        resources:
          limits:
            cpu: 1
            memory: 500Mi
          requests:
            cpu: 0.5
            memory: 200Mi
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch-0.elasticsearch.ns-monitor:9200
        - name: I18N_LOCALE
          value: zh-CN
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: ns-monitor
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30601
  selector:
    k8s-app: kibana
kubectl apply -f kibana.yaml

Verify:

kubectl get pods -n ns-monitor
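
The NodePort service should be in place as well, with port 5601 mapped to 30601:

kubectl get svc -n ns-monitor kibana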

Deploy Filebeat

Create a file named filebeat.yaml with the following content:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: ns-monitor
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: ns-monitor
  labels:
    k8s-app: filebeat

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: ns-monitor
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: ns-monitor
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: ns-monitor
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.5.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-0.elasticsearch.ns-monitor
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # The data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        emptyDir: {}
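
One caveat before applying: the docker input in the filebeat-inputs ConfigMap tails JSON log files under /var/lib/docker/containers, which exist only on nodes running the Docker engine. On a containerd-based cluster, a minimal sketch of an equivalent kubernetes.yml would use Filebeat's container input and the kubelet's /var/log/containers symlinks instead (the DaemonSet's hostPath mount would have to change to match):

- type: container
  paths:
  - /var/log/containers/*.log
  processors:
  - add_kubernetes_metadata:
      in_cluster: true

With the manifest saved, apply it: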
kubectl apply -f filebeat.yaml

Verify:

kubectl get pods -n ns-monitor
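
Once the DaemonSet pods are running, Filebeat should start writing daily indices named filebeat-7.5.0-<date> (its default index naming). You can confirm through the same port-forward used earlier:

curl 'http://localhost:9200/_cat/indices?v'

An index whose name starts with filebeat- should appear within a minute or two.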

Access Kibana

Open in a browser:

http://<node-ip>:30601/

Create an index pattern named "filebeat-*" (in Kibana 7.5 this lives under Management > Index Patterns).
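
With the index pattern created, the Discover page should show container logs enriched by the add_kubernetes_metadata processor. Assuming that enrichment is working, you can narrow the view with a KQL filter on the standard Kubernetes fields, for example:

kubernetes.namespace : "ns-monitor"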
