...
1. Services
1.1 How to Receive Traffic from Outside the Cluster
...
Exposes the Service on a cluster-internal IP.
The Service is reachable only from within the cluster; this is the default Service type.
6/00-svc-clusterip.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
```
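A quick way to see the ClusterIP behavior is to apply the manifest and call the Service from inside the cluster. A minimal sketch (the file path follows the naming above; replace the placeholder with the CLUSTER-IP your cluster actually reports):

```bash
kubectl apply -f 6/00-svc-clusterip.yaml
kubectl get svc nginx-service

# A ClusterIP is reachable only from nodes or pods inside the cluster:
curl http://<CLUSTER-IP>:80
```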
...
Exposes the Service on each node's IP at a static port (the NodePort).
A ClusterIP Service, to which the NodePort Service routes, is created automatically. By requesting NodeIP:NodePort, you can connect to the NodePort Service from outside the cluster.
6/01-svc-nodeport.yaml
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    # By default, and for convenience, `targetPort` is set to the same value as `port`.
    - port: 80
      targetPort: 80
      # Optional field.
      # By default, and for convenience, the Kubernetes control plane allocates a port
      # from a range (default: 30000-32767).
      nodePort: 30007
```
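Once applied, the Service should be reachable on any node's IP at the fixed port. A minimal check, assuming the manifest above (replace <NODE-IP> with the address of any node):

```bash
kubectl apply -f 6/01-svc-nodeport.yaml
kubectl get svc my-service-nodeport

# From outside the cluster:
curl http://<NODE-IP>:30007
```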
(3) LoadBalancer
Exposes the Service externally using a cloud provider's load balancer.
Because the Service gets a cluster-external IP, it can be reached from outside the cluster.
NodePort and ClusterIP Services, to which the external load balancer routes, are created automatically.
In a private cloud environment, you can use MetalLB to provide LoadBalancer-type Services.
6/02-svc-loadbalancer.yaml
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx2-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8088
    targetPort: 80
```
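After applying, the load-balancer implementation (the cloud provider, or MetalLB as set up below) assigns the external IP. A sketch, assuming the manifest above:

```bash
kubectl apply -f 6/02-svc-loadbalancer.yaml

# EXTERNAL-IP stays <pending> until a load-balancer implementation assigns one:
kubectl get svc nginx2-service

# Once assigned (replace <EXTERNAL-IP> with the reported address):
curl http://<EXTERNAL-IP>:8088
```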
...
Returns a CNAME record with the value of the externalName field, mapping the Service to the contents of that field.
This Service type lets you refer to an external service by name from within the cluster.
Inside the cluster, the external DNS name becomes reachable through the internal Service name.
6/03-svc-externalname.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: db.example.com
```
This Service is defined with the name my-database, but DNS queries for it are answered with a CNAME record for db.example.com. Applications inside the cluster can therefore use the name my-database to reach the external db.example.com.
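The CNAME mapping can be verified with a DNS lookup from inside the cluster. A sketch, assuming a pod that has DNS utilities installed and that the Service lives in the default namespace:

```bash
# From inside any pod with nslookup available:
nslookup my-database.default.svc.cluster.local
# The answer should contain a CNAME pointing at db.example.com.
```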
...
```bash
[root@m-k8s vagrant]# kubectl create ns metallb-system
```
(2) Install MetalLB
6/04-metallb-install.txt
```bash
[root@m-k8s vagrant]# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml
namespace/metallb-system configured
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
```
...
Specify the range of VIPs to be exposed externally.
For the address range, enter the IPs that will actually be used as External IPs (they must be allocatable on the node network).
6/05-metallb-ipaddresspool.yaml
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.150-192.168.1.200
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
```
...
Apply the IPAddressPool and L2Advertisement resources defined above.

```bash
kubectl apply -f 6/05-metallb-ipaddresspool.yaml
```
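To confirm that the pool was registered, list the custom resources (the resource kinds come from the MetalLB CRDs installed above):

```bash
kubectl get ipaddresspool,l2advertisement -n metallb-system
```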
...
Install the ingress controller so that Ingress can be used in a private environment.
6/06-ingress-nginx-install.txt
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
```
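Before creating Ingress resources, it is worth confirming that the controller pods are running and that the controller Service obtained an external IP (from MetalLB in this private setup):

```bash
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx ingress-nginx-controller
```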
...
For testing, create two pods and two ClusterIP-type Services.
The Deployments below use the echo-server image instead of nginx so that the name of the responding pod can be checked.
6/07-ingress-backend.yml
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1-deployment
spec:
  selector:
    matchLabels:
      app: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: my-echo
        image: jmalloc/echo-server
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-clusterip
  labels:
    name: nginx-service-clusterip
spec:
  type: ClusterIP
  ports:
  - port: 80          # Cluster IP port
    targetPort: 8080  # Application port
    protocol: TCP
    name: http
  selector:
    app: nginx1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2-deployment
spec:
  selector:
    matchLabels:
      app: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: my-echo
        image: jmalloc/echo-server
---
apiVersion: v1
kind: Service
metadata:
  name: nginx2-service-clusterip
  labels:
    name: nginx2-service-clusterip
spec:
  type: ClusterIP
  ports:
  - port: 80          # Cluster IP port
    targetPort: 8080  # Application port
    protocol: TCP
    name: http
  selector:
    app: nginx2
```
...
```bash
[root@m-k8s vagrant]# kubectl apply -f 6/07-ingress-backend.yml
deployment.apps/nginx1-deployment created
service/nginx-service-clusterip created
deployment.apps/nginx2-deployment created
service/nginx2-service-clusterip created
```
...
Confirm that the pods and Services were created, then send requests to each Service and check the name of the pod that answers.
```bash
kubectl get svc
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
nginx-service-clusterip    ClusterIP   10.105.151.217   <none>        80/TCP    20s
nginx2-service-clusterip   ClusterIP   10.106.195.22    <none>        80/TCP    20s

kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
nginx1-deployment-545749bf4d-h7qfx   1/1     Running   0          29s
nginx2-deployment-56d6f87fc9-9m7h2   1/1     Running   0          29s

[root@m-k8s vagrant]# curl 10.105.151.217
Request served by nginx1-deployment-545749bf4d-h7qfx

GET / HTTP/1.1

Host: 10.105.151.217
Accept: */*
User-Agent: curl/7.29.0

[root@m-k8s vagrant]# curl 10.106.195.22
Request served by nginx2-deployment-56d6f87fc9-9m7h2

GET / HTTP/1.1

Host: 10.106.195.22
Accept: */*
User-Agent: curl/7.29.0
```
...
The ingress inspects L7 information on packets arriving at the ingress Service, namely the domain name (Host header), and forwards traffic accordingly.
If the domain is a.com, it routes to nginx-service-clusterip.
If the domain is b.com, it routes to nginx2-service-clusterip.
6/08-ingress.yaml
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: nginx
  rules:
  - host: "a.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-service-clusterip
            port:
              number: 80
  - host: "b.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx2-service-clusterip
            port:
              number: 80
```
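For curl a.com and curl b.com to work from a test machine, both names must resolve to the ingress controller's external IP. A sketch, assuming <INGRESS-EXTERNAL-IP> is the EXTERNAL-IP of the ingress-nginx-controller Service:

```bash
# Run on the machine you will curl from:
echo "<INGRESS-EXTERNAL-IP> a.com b.com" >> /etc/hosts
```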
(8) Create the Ingress policy using 6/08-ingress.yaml.
```bash
[root@m-k8s ~]# kubectl create -f 6/08-ingress.yaml
ingress.networking.k8s.io/ingress created
```
...
```bash
$ curl a.com
Hostname: nginx1-deployment-545749bf4d-h7qfx
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=10.233.235.153
method=GET
real path=/
query=
request_version=1.1
request_uri=http://a.com:8080/
Request Headers:
accept=*/*
host=a.com
user-agent=curl/7.71.1
x-forwarded-for=192.168.9.237
x-forwarded-host=a.com
x-forwarded-port=80
x-forwarded-proto=http
x-real-ip=192.168.9.237
x-request-id=6b0169dcff0fd35fa780b600967dffb1
x-scheme=http
Request Body:
-no body in request-
$ curl b.com
Hostname: nginx2-deployment-56d6f87fc9-9m7h2
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=10.233.235.153
method=GET
real path=/
query=
request_version=1.1
request_uri=http://b.com:8080/
Request Headers:
accept=*/*
host=b.com
user-agent=curl/7.71.1
x-forwarded-for=192.168.9.237
x-forwarded-host=b.com
x-forwarded-port=80
x-forwarded-proto=http
x-real-ip=192.168.9.237
x-request-id=2a6b6ffa72efe6b80cae87dcaa51db98
x-scheme=http
Request Body:
-no body in request-
```
Now call the nginx ingress by IP directly.
Since there is no domain information at L7, no rule matches and the 404 page is returned.
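The same split can be reproduced without /etc/hosts entries by setting the Host header explicitly; omitting it shows the 404 described above. A sketch (replace the placeholder with the controller's external IP):

```bash
# Matched by the a.com rule -> nginx-service-clusterip:
curl -H "Host: a.com" http://<INGRESS-EXTERNAL-IP>/

# No Host rule matches -> the controller's default backend returns 404:
curl http://<INGRESS-EXTERNAL-IP>/
```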
...
```bash
curl a.com/a
Hostname: nginx1-deployment-545749bf4d-mgnt8

Pod Information:
        -no pod information available-

Server values:
        server_version=nginx: 1.13.3 - lua: 10008

Request Information:
        client_address=10.233.228.72
        method=GET
        real path=/a
        query=
        request_version=1.1
        request_uri=http://a.com:8080/a

Request Headers:
        accept=*/*
        host=a.com
        user-agent=curl/7.71.1
        x-forwarded-for=192.168.9.38
        x-forwarded-host=a.com
        x-forwarded-port=80
        x-forwarded-proto=http
        x-real-ip=192.168.9.38
        x-request-id=6c98f42fba35104849f57ce30a57b2c3
        x-scheme=http

Request Body:
        -no body in request-

curl a.com/b
Hostname: nginx2-deployment-56d6f87fc9-55gsg

Pod Information:
        -no pod information available-

Server values:
        server_version=nginx: 1.13.3 - lua: 10008

Request Information:
        client_address=10.233.228.72
        method=GET
        real path=/b
        query=
        request_version=1.1
        request_uri=http://a.com:8080/b

Request Headers:
        accept=*/*
        host=a.com
        user-agent=curl/7.71.1
        x-forwarded-for=192.168.9.38
        x-forwarded-host=a.com
        x-forwarded-port=80
        x-forwarded-proto=http
        x-real-ip=192.168.9.38
        x-request-id=b5c8a4dfef21d5acc50763232a7f02c1
        x-scheme=http

Request Body:
        -no body in request-
```
...
The features of Ingress can be of great help in an MSA architecture.
If you make good use of Ingress, a web server can
...
serve each page from a different Deployment or pod group, distributing resources efficiently and deploying and managing services effectively.
...
3. Pod Network
...
Traffic delivery differs depending on the CNI type and configuration.
This section explains how to check how traffic is actually delivered.
Note: node names and roles used in this document:
sung-ubuntu01 - Control Plane #1
sung-ubuntu02 - Control Plane #2
sung-ubuntu03 - Control Plane #3
sung-ubuntu04 - Worker Node #1
sung-ubuntu05 - Worker Node #2
...
Check the network configuration of a single-container Pod.
...
6/09-pod-network-ubuntu.yaml
...
```yaml
apiVersion: v1
kind: Pod
metadata:
name: ubuntu-test
spec:
containers:
- name: ubuntu
image: ubuntu:20.04
command: ["/bin/sleep", "3650d"]
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /cache
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
restartPolicy: Always
dnsConfig:
nameservers:
- 8.8.8.8
```
Let's look at how the pod network is put together.
```bash
root@sung-ubuntu01:~/tmp# kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
ubuntu-test   1/1     Running   0          6m20s   10.233.99.1   sung-ubuntu04   <none>           <none>

### Attach to the pod
root@sung-ubuntu01:~/tmp# kubectl exec -it ubuntu-test -- bash
root@ubuntu-test:/#
root@ubuntu-test:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=39.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=38.1 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=38.8 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=54 time=39.3 ms

# apt update
# apt install -y net-tools iputils-ping

# Check the container's network interfaces
root@ubuntu-test:/# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        inet 10.233.99.1  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 06:55:84:5a:ac:6b  txqueuelen 0  (Ethernet)
        RX packets 5718  bytes 24026416 (24.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3690  bytes 250168 (250.1 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# tunl0: IPIP tunnel interface; Calico uses it to carry pod traffic between nodes
tunl0: flags=128<NOARP>  mtu 1480
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# * loopback IP: a logical interface that refers to the host itself

# Check the node network
root@sung-ubuntu04:~# ifconfig -a
...
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
        inet 10.233.99.0  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 60  bytes 8528 (8.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 66  bytes 4476 (4.4 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# * Why is the MTU 1480? IPIP tunneling is in use, and an encapsulated packet is
#   larger than the original, so the MTU must be lowered. 1480 keeps the
#   encapsulated packet from exceeding 1500 when carried in an Ethernet frame.
```
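On this Calico setup, each pod's eth0 is one end of a veth pair whose peer (a caliXXXX interface) lives on the node. One way to match them up is by interface index; a sketch, reusing the ubuntu-test pod above:

```bash
# Inside the pod: /sys/class/net/eth0/iflink holds the ifindex of the peer veth on the node.
kubectl exec ubuntu-test -- cat /sys/class/net/eth0/iflink

# On the node (sung-ubuntu04): find the interface with that index, e.g. for index 7:
ip -o link | grep '^7:'
```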
3.3 Multi-Container Pods
Check the network configuration of a multi-container Pod.
...
6/10-pod-network-multicon.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
name: multi-container
spec:
containers:
- name: ubuntu
image: ubuntu:20.04
command: ["/bin/sleep", "3650d"]
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /cache
name: cache-volume
- name: nginx
image: nginx
ports:
- containerPort: 80
volumes:
- name: cache-volume
emptyDir: {}
restartPolicy: Always
dnsConfig:
nameservers:
- 8.8.8.8
```

```bash
root@sung-ubuntu01:~/tmp# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
multi-container 2/2 Running 0 25m 10.233.78.3 sung-ubuntu05 <none> <none>
ubuntu-test 1/1 Running 0 57m 10.233.99.1 sung-ubuntu04 <none> <none>
```
Let's look at the network from inside each container.
```bash
# Attach to the ubuntu container
root@sung-ubuntu01:~/tmp# kubectl exec -it multi-container -c ubuntu -- bash
### Inside the pod
# apt update
# apt install -y net-tools iputils-ping
root@multi-container:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1480
inet 10.233.78.3 netmask 255.255.255.255 broadcast 0.0.0.0
ether ce:de:b3:90:c1:a7 txqueuelen 0 (Ethernet)
RX packets 5206 bytes 23989810 (23.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3160 bytes 213900 (213.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# Attach to the nginx container
root@sung-ubuntu01:~/tmp# kubectl exec -it multi-container -c nginx -- bash
### Run inside the pod
# apt update
# apt install -y net-tools iputils-ping
root@multi-container:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1480
inet 10.233.78.3 netmask 255.255.255.255 broadcast 0.0.0.0
ether ce:de:b3:90:c1:a7 txqueuelen 0 (Ethernet)
RX packets 6287 bytes 33013014 (31.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3935 bytes 267591 (261.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
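Because both containers share the pod's network namespace (note the identical eth0 and IP above), the nginx container's port 80 is reachable from the ubuntu container over localhost. A quick sketch (curl is not in the base image, so it must be installed first):

```bash
root@sung-ubuntu01:~/tmp# kubectl exec -it multi-container -c ubuntu -- bash
# apt update && apt install -y curl
# curl -s http://localhost:80 | head -n 4   # nginx welcome page, served by the other container
```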
...
Check the node network.
```bash
root@sung-ubuntu05:~# ifconfig -a
calib4cfe5eb958: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 3935  bytes 267591 (267.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6287  bytes 33013014 (33.0 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:6a:17:c5:80  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.110.104  netmask 255.255.0.0  broadcast 192.168.255.255
        inet6 fe80::f816:3eff:fe54:bc4  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:54:0b:c4  txqueuelen 1000  (Ethernet)
        RX packets 1353299  bytes 1304887824 (1.3 GB)
        RX errors 0  dropped 88603  overruns 0  frame 0
        TX packets 191206  bytes 20789350 (20.7 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

kube-ipvs0: flags=130<BROADCAST,NOARP>  mtu 1500
        inet 10.233.0.1  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 66:2d:b3:6c:50:9a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 162061  bytes 22298211 (22.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 162061  bytes 22298211 (22.2 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

nodelocaldns: flags=130<BROADCAST,NOARP>  mtu 1500
        inet 169.254.25.10  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 16:84:53:46:fe:65  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480   # tunnel interface
        inet 10.233.78.0  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 69  bytes 9380 (9.3 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 76  bytes 5125 (5.1 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
...
Check the containers that make up the multi-container pod.
```bash
root@sung-ubuntu05:~# docker ps | grep multi-container
64c1938850a2   nginx                  "/docker-entrypoint.…"   26 minutes ago   Up 25 minutes   k8s_nginx_multi-container_default_1d0e0776-18b1-4c7f-b05f-20b8c54fb230_0
b4c4045ac777   ubuntu                 "/bin/sleep 3650d"       26 minutes ago   Up 26 minutes   k8s_ubuntu_multi-container_default_1d0e0776-18b1-4c7f-b05f-20b8c54fb230_0
1eaedb9c9d55   k8s.gcr.io/pause:3.5   "/pause"                 27 minutes ago   Up 26 minutes   k8s_POD_multi-container_default_1d0e0776-18b1-4c7f-b05f-20b8c54fb230_0
```
Note (Pause Container): the pause container (k8s.gcr.io/pause:3.5 in the output above) holds the pod's network namespace. The application containers join its namespaces, which is why every container in the pod shares the same IP and interfaces.
...
CoreDNS load-balances DNS responses in round-robin fashion (the loadbalance plugin below).
Default CoreDNS configuration:
6/11-coredns-cm.yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```
...
```bash
kubectl -n kube-system edit configmap coredns
```
(2) Add an external DNS server
...
Add a destination for a specific domain
Add a hosts {} block.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            172.16.0.3 webservice1.com
            fallthrough
        }
    }
```
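After CoreDNS reloads the ConfigMap (the reload plugin picks up changes automatically), the name should resolve to the configured address from inside the cluster. A sketch, reusing the ubuntu-test pod from earlier (dnsutils is not in the base image, so install it first):

```bash
kubectl exec -it ubuntu-test -- bash
# apt update && apt install -y dnsutils
# nslookup webservice1.com    # expected answer: 172.16.0.3
```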
...