...
1.1 Ways to receive traffic from outside the cluster
- NodePort
- LoadBalancer
- Ingress
1.2 Service concept
A Pod can serve users directly, but if the Pod fails there is no way to guarantee availability, so a Service is used in front of Pods to provide a stable, highly available endpoint.
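The Service-to-Pod relationship described above can be sketched as a small model (illustrative only — the real matching is done by Kubernetes' endpoints controller; the Pod data below is hypothetical):

```python
# Illustrative sketch of how a Service's label selector picks endpoint Pods.
# This is NOT the Kubernetes implementation, just a model of the rule:
# a Pod is an endpoint if it is Running and its labels contain every
# key/value pair of the Service's selector (equality-based matching).

def select_endpoints(pods, selector):
    """Return IPs of running pods whose labels satisfy the selector."""
    return [
        pod["ip"]
        for pod in pods
        if pod["phase"] == "Running"
        and all(pod["labels"].get(k) == v for k, v in selector.items())
    ]

# Hypothetical pod inventory
pods = [
    {"ip": "172.16.103.134", "phase": "Running",
     "labels": {"app.kubernetes.io/name": "proxy"}},
    {"ip": "172.16.132.6", "phase": "Running",
     "labels": {"app.kubernetes.io/name": "other"}},
    {"ip": "172.16.221.133", "phase": "Pending",
     "labels": {"app.kubernetes.io/name": "proxy"}},
]

print(select_endpoints(pods, {"app.kubernetes.io/name": "proxy"}))
# → ['172.16.103.134']
```

Because this selection is continuously re-evaluated, a failed or rescheduled Pod simply drops out of (or rejoins) the endpoint list — which is how the Service preserves availability while individual Pods come and go.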
...
Code block |
---|
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc |
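Note the `targetPort: http-web-svc` above: a `targetPort` may be either a number or the name of a container port. A rough sketch of that lookup (hypothetical helper, not the actual kube-proxy code):

```python
# Sketch: resolving a Service targetPort against a container's port list.
# A numeric targetPort is used as-is; a string targetPort is looked up
# among the container's named ports (like "http-web-svc" above).

def resolve_target_port(target_port, container_ports):
    if isinstance(target_port, int):
        return target_port
    for p in container_ports:
        if p.get("name") == target_port:
            return p["containerPort"]
    raise ValueError(f"no container port named {target_port!r}")

container_ports = [{"containerPort": 80, "name": "http-web-svc"}]
print(resolve_target_port("http-web-svc", container_ports))  # → 80
print(resolve_target_port(8080, container_ports))            # → 8080
```

Named ports let you renumber a container port without touching the Service.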
Check the created Service and call its ClusterIP:
Code block |
---|
root@cp-k8s:~/2024_k8s/edu/6# k get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP   3h16m
nginx-service   ClusterIP   10.103.98.179   <none>        80/TCP    99s
root@cp-k8s:~/2024_k8s/edu/6# curl 10.103.98.179
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html> |
(2) NodePort
Exposes the Service on each node's IP at a static port.
A ClusterIP Service, to which the NodePort Service routes, is created automatically; by requesting NodeIP:NodePort you can reach the NodePort Service from outside the cluster.
6/01-svc-nodeport.yaml
Code block |
---|
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    # By default and for convenience, `targetPort` is set to the same value as `port`.
  - port: 80
    targetPort: 80
    # Optional field.
    # By default and for convenience, the Kubernetes control plane allocates
    # a port from a range (default: 30000-32767).
    nodePort: 30007 |
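The NodePort contract above can be sketched quickly (the 30000-32767 default range comes from the Kubernetes docs; the helper below is illustrative, not a real API):

```python
# Sketch: a nodePort must fall inside the default allocation range, and the
# Service is then reachable at <NodeIP>:<nodePort> on EVERY node, regardless
# of which node actually runs the backing Pods.
NODE_PORT_RANGE = range(30000, 32768)  # 30000-32767 inclusive

def node_urls(node_ips, node_port):
    """Return the URLs at which a NodePort Service answers (hypothetical helper)."""
    if node_port not in NODE_PORT_RANGE:
        raise ValueError(f"nodePort {node_port} outside 30000-32767")
    return [f"http://{ip}:{node_port}" for ip in node_ips]

nodes = ["192.168.1.10", "192.168.1.101", "192.168.1.102"]
print(node_urls(nodes, 30007))
```

This is why the later `curl` tests succeed against any node IP with the same port 30007.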
(3) LoadBalancer
Exposes the Service externally using a cloud provider's load balancer.
The Service gets a cluster-external IP, so it can be reached from outside the cluster.
NodePort and ClusterIP Services, to which the external load balancer routes, are created automatically.
In a private cloud environment, MetalLB lets you use LoadBalancer-type Services.
6/02-svc-loadbalancer.yaml
Code block |
---|
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx2-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8088
    targetPort: 80 |
(4) ExternalName
Maps the Service to the contents of the externalName field by returning a CNAME record with that value.
A Service type that lets applications inside the cluster refer to an external service by name.
Inside the cluster, the external DNS name becomes reachable through an internal Service name.
6/03-svc-externalname.yaml
Code block |
---|
apiVersion: v1
kind: Service
metadata:
name: my-database
spec:
type: ExternalName
externalName: db.example.com |
This Service is named my-database, but DNS queries for it are answered with a CNAME for db.example.com. Applications inside the cluster can therefore reach the external db.example.com by using the name my-database.
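Conceptually, the cluster DNS simply answers with a CNAME, as this toy resolver shows (illustrative only — real resolution is done by CoreDNS; no proxying or IP allocation happens for ExternalName Services):

```python
# Toy model of ExternalName resolution: a query for the Service name is
# answered with a CNAME pointing at spec.externalName.
SERVICES = {
    "my-database": {"type": "ExternalName", "externalName": "db.example.com"},
}

def resolve(name):
    """Return a (record_type, value) pair for a Service name lookup."""
    svc = SERVICES.get(name)
    if svc and svc["type"] == "ExternalName":
        return ("CNAME", svc["externalName"])
    return ("NXDOMAIN", None)

print(resolve("my-database"))  # → ('CNAME', 'db.example.com')
```

Because only DNS is involved, traffic goes straight from the Pod to db.example.com; the cluster never sees or load-balances it.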
1.3 Installing MetalLB (a load balancer for private environments)
(1) Create the metallb namespace
Code block |
---|
[root@m-k8s vagrant]# kubectl create ns metallb-system |
(2) Install MetalLB
6/04-metallb-install.txt
Code block |
---|
[root@m-k8s vagrant]# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml
namespace/metallb-system configured
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created |
(3) Verify the installation
Code block |
---|
[root@m-k8s vagrant]# kubectl get all -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-5567fb94fd-mn6jg   1/1     Running   0          2m4s
pod/speaker-2pxpd                 1/1     Running   0          2m3s
pod/speaker-lpnmf                 1/1     Running   0          2m3s
pod/speaker-q8hvp                 1/1     Running   0          2m3s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/webhook-service   ClusterIP   10.108.47.177   <none>        443/TCP   2m4s

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   3         3         3       3            3           kubernetes.io/os=linux   2m4s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           2m4s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-5567fb94fd   1         1         1       2m4s

[root@m-k8s vagrant]# kubectl get pod -n metallb-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
controller-5567fb94fd-mn6jg   1/1     Running   0          2m13s   172.16.221.164   w1-k8s   <none>           <none>
speaker-2pxpd                 1/1     Running   0          2m12s   192.168.1.102    w2-k8s   <none>           <none>
speaker-lpnmf                 1/1     Running   0          2m12s   192.168.1.10     m-k8s    <none>           <none>
speaker-q8hvp                 1/1     Running   0          2m12s   192.168.1.101    w1-k8s   <none>           <none> |
The my-service-nodeport Service from 1.2 (2) answers on port 30007 on the node itself:
Code block |
---|
root@cp-k8s:~/2024_k8s/edu# curl localhost:30007
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; |
It is reachable on every other node's IP as well:
Code block |
---|
root@cp-k8s:~/2024_k8s/edu# cat /etc/hosts
127.0.0.1 localhost
192.168.1.10 cp-k8s
192.168.1.101 w1-k8s
192.168.1.102 w2-k8s
192.168.1.103 w3-k8s
root@cp-k8s:~/2024_k8s/edu# curl 192.168.1.102:30007
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p> |
...
...
(4) Create an IPAddressPool
Defines the range of VIPs to expose externally.
Use an address range that can actually serve as External IPs (the addresses must be assignable on the node network).
6/05-metallb-ipaddresspool.yaml
Code block |
---|
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool
namespace: metallb-system
spec:
addresses:
- 192.168.1.150-192.168.1.200
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: first-pool-advertisement
namespace: metallb-system
spec:
ipAddressPools:
- first-pool |
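Conceptually, MetalLB hands out External IPs from the range defined above. A toy allocator (not MetalLB's actual controller logic, just the idea):

```python
# Toy model: expand the pool range "192.168.1.150-192.168.1.200" and hand out
# the first free address to a new LoadBalancer Service.
import ipaddress

def pool_ips(spec):
    """Expand an 'a.b.c.d-a.b.c.e' range string into a list of IPs."""
    first, last = (ipaddress.IPv4Address(s) for s in spec.split("-"))
    return [str(ipaddress.IPv4Address(i)) for i in range(int(first), int(last) + 1)]

def allocate(pool, assigned):
    """Return the first pool IP not yet assigned (hypothetical helper)."""
    for ip in pool:
        if ip not in assigned:
            return ip
    raise RuntimeError("pool exhausted")

pool = pool_ips("192.168.1.150-192.168.1.200")
print(len(pool))              # → 51
print(allocate(pool, set()))  # → 192.168.1.150
```

The L2Advertisement then makes the speaker Pods answer ARP for whichever of these addresses got assigned, so the surrounding network can reach it.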
Create the resources:
Code block |
---|
kubectl apply -f 05-metallb-ipaddresspool.yaml
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/first-pool-advertisement created |
Code block |
---|
root@cp-k8s:~/2024_k8s/edu/6# k get IPAddressPool -n metallb-system
NAME         AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
first-pool   true          false             ["192.168.1.150-192.168.1.200"] |
Info |
---|
Create a Service of type LoadBalancer (6/02-svc-loadbalancer.yaml) and check whether an External IP is assigned. |
Code block |
---|
root@cp-k8s:~/2024_k8s/edu/6# k get svc
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
kubernetes            ClusterIP      10.96.0.1        <none>         443/TCP          3h22m
my-service-nodeport   NodePort       10.100.38.132    <none>         80:30007/TCP     5m1s
nginx-service         ClusterIP      10.103.98.179    <none>         80/TCP           8m1s
nginx2-service        LoadBalancer   10.100.170.165   192.168.1.11   8088:30843/TCP   7s |
...
2. Ingress
...
Code block |
---|
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1-deployment
spec:
  selector:
    matchLabels:
      app: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: my-echo
        image: jmalloc/echo-server
---
apiVersion: v1
kind: Service
metadata:
  name: nginx1-service-clusterip
  labels:
    name: nginx1-service-clusterip
spec:
  type: ClusterIP
  ports:
  - port: 80          # Cluster IP port
    targetPort: 8080  # Application port
    protocol: TCP
    name: http
  selector:
    app: nginx1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2-deployment
spec:
  selector:
    matchLabels:
      app: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: my-echo
        image: jmalloc/echo-server
---
apiVersion: v1
kind: Service
metadata:
  name: nginx2-service-clusterip
  labels:
    name: nginx2-service-clusterip
spec:
  type: ClusterIP
  ports:
  - port: 80          # Cluster IP port
    targetPort: 8080  # Application port
    protocol: TCP
    name: http
  selector:
    app: nginx2 |
...
Code block |
---|
kubectl get svc
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
nginx1-service-clusterip   ClusterIP   10.105.151.217   <none>        80/TCP    20s
nginx2-service-clusterip   ClusterIP   10.106.195.22    <none>        80/TCP    20s
kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
nginx1-deployment-545749bf4d-h7qfx   1/1     Running   0          29s
nginx2-deployment-56d6f87fc9-9m7h2   1/1     Running   0          29s
[root@m-k8s vagrant]# curl 10.105.151.217
Request served by nginx1-deployment-8458b98748-75hlx

GET / HTTP/1.1

Host: 10.105.151.217
Accept: */*
User-Agent: curl/7.29.0
[root@m-k8s vagrant]# curl 10.106.195.22
Request served by nginx2-deployment-767fbbfc95-g42jr

GET / HTTP/1.1

Host: 10.106.195.22
Accept: */*
User-Agent: curl/7.29.0 |
...
Code block |
---|
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: nginx
  rules:
  - host: "a.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx1-service-clusterip
            port:
              number: 80
  - host: "b.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx2-service-clusterip
            port:
              number: 80 |
...
Code block |
---|
[root@m-k8s vagrant]# k get ing
NAME      CLASS   HOSTS         ADDRESS         PORTS   AGE
ingress   nginx   a.com,b.com   192.168.1.150   80      5m23s |
(10) From outside the cluster, edit the hosts file so that a.com and b.com point at the LB external IP, then connect and confirm that L7 load balancing and routing work as expected.
...
Code block |
---|
cat /etc/hosts | grep com
192.168.1.150 a.com
192.168.1.150 b.com |
Now call a.com and b.com from your laptop.
Code block |
---|
$ curl a.com
Request served by nginx1-deployment-98c84c874-hfb7t

GET / HTTP/1.1

Host: a.com
Accept: */*
User-Agent: curl/8.7.1
X-Forwarded-For: 192.168.1.1
X-Forwarded-Host: a.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.1.1
X-Request-Id: 36bbde3cd8e429b0139f6cb71041e293
X-Scheme: http

$ curl b.com
Request served by nginx2-deployment-6cb7564d4f-5gkph

GET / HTTP/1.1

Host: b.com
Accept: */*
User-Agent: curl/8.7.1
X-Forwarded-For: 192.168.1.1
X-Forwarded-Host: b.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.1.1
X-Request-Id: ef73f4cc0d95cb6ea3678abdae8eceec
X-Scheme: http |
Now call the ingress controller directly by its IP.
Since the request carries no Host matching any rule, no backend is selected and the default 404 page is returned.
Code block |
---|
curl 192.168.1.150
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
|
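The controller's decision just demonstrated can be modeled as a lookup on the Host header: known hosts map to a backend Service, anything else (such as a raw IP) falls through to the default 404 backend. An illustrative sketch, not the nginx ingress controller's real code:

```python
# Toy model of host-based L7 routing: the rules mirror the Ingress above.
RULES = {
    "a.com": "nginx1-service-clusterip",
    "b.com": "nginx2-service-clusterip",
}

def route(host):
    """Pick a backend Service for a request's Host header; default to 404."""
    return RULES.get(host, "404 Not Found")

print(route("a.com"))          # → nginx1-service-clusterip
print(route("b.com"))          # → nginx2-service-clusterip
print(route("192.168.1.150"))  # → 404 Not Found
```

This is exactly why `curl 192.168.1.150` above returns 404: the Host header is the IP itself, which matches no rule.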
When configuring an Ingress you can also split traffic by path (or subdomain), as below.
6/08-ingress2.yaml
Code block |
---|
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress2
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: a.com
    http:
      paths:
      - pathType: Prefix
        path: /a
        backend:
          service:
            name: nginx1-service-clusterip
            port:
              number: 80
      - pathType: Prefix
        path: /b
        backend:
          service:
            name: nginx2-service-clusterip
            port:
              number: 80 |
코드 블럭 |
---|
kubectl apply -f 08-ingress2.yaml |
With this configuration, packets arriving at a.com/a are routed to nginx1-service-clusterip,
and packets arriving at a.com/b are routed to nginx2-service-clusterip.
Code block |
---|
$ curl a.com/a
Request served by nginx1-deployment-98c84c874-hfb7t

GET /a HTTP/1.1

Host: a.com
Accept: */*
User-Agent: curl/8.7.1
X-Forwarded-For: 192.168.1.1
X-Forwarded-Host: a.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.1.1
X-Request-Id: 72079ecd9d108f28bc735853731298f8
X-Scheme: http

$ curl a.com/b
Request served by nginx2-deployment-6cb7564d4f-5gkph

GET /b HTTP/1.1

Host: a.com
Accept: */*
User-Agent: curl/8.7.1
X-Forwarded-For: 192.168.1.1
X-Forwarded-Host: a.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.1.1
X-Request-Id: d7ee6e9f70b8caa258e727fb1fa942ee
X-Scheme: http |
Info |
---|
Ingress is a great help in MSA architectures: by serving different pages from different Deployments or Pod groups, you can distribute resources efficiently and place and manage services independently. |
Clean up the Pods for the next exercise.
Code block |
---|
# k delete -f 08-ingress2.yaml
ingress.networking.k8s.io "ingress2" deleted
# k delete -f 08-ingress.yaml
ingress.networking.k8s.io "ingress" deleted
# k delete -f 07-ingress-backend.yaml
deployment.apps "nginx1-deployment" deleted
service "nginx-service-clusterip" deleted
deployment.apps "nginx2-deployment" deleted
service "nginx2-service-clusterip" deleted |
...
3. Pod networking
Note) node names and roles used in this document
Traffic forwarding differs depending on the CNI and how it is configured.
This section shows how to check how traffic is actually forwarded.
192.168.1.10 cp-k8s
192.168.1.101 w1-k8s
192.168.1.102 w2-k8s
192.168.1.103 w3-k8s
3.1 Pod network
...
The interface names differ from the ones you see when Pod networking is explained with Docker.
Docker's role
Reference 1) https://www.docker.com/products/container-runtime/
3.2 Single-container Pod
Check the network configuration of a single-container Pod.
6/09-pod-network-ubuntu.yaml
Code block |
---|
apiVersion: v1
kind: Pod
metadata:
name: ubuntu-test
spec:
containers:
- name: ubuntu
image: ubuntu:20.04
command: ["/bin/sleep", "3650d"]
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /cache
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
restartPolicy: Always
dnsConfig:
nameservers:
- 8.8.8.8 |
Let's look at the Pod's network configuration.
Code block |
---|
root@cp-k8s# kubectl get pod -o wide
NAME          READY   STATUS    RESTARTS   AGE    IP               NODE     NOMINATED NODE   READINESS GATES
ubuntu-test   1/1     Running   0          150m   172.16.103.134   w2-k8s   <none>           <none>

### attach to the POD
root@cp-k8s# kubectl exec -it ubuntu-test -- bash
# apt update
# apt install -y net-tools iputils-ping
root@ubuntu-test:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=39.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=38.1 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=38.8 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=54 time=39.3 ms

**# check the container's network interfaces**
root@ubuntu-test:/# ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        inet 172.16.103.134  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 36:d7:04:b0:6a:0b  txqueuelen 1000  (Ethernet)
        RX packets 9667  bytes 32353517 (32.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8976  bytes 492421 (492.4 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0 |
Check the node network: on the node running the Pod, Calico's IPIP tunnel interface (tunl0) is visible.
Code block |
---|
root@sung-ubuntu04:~# ifconfig -a
...
tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
        inet 172.16.103.128  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 111  bytes 15872 (15.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 91  bytes 14596 (14.5 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0 |
* loopback IP: a logical interface that refers to the host itself.
Why is the MTU 1480? With an IPIP tunnel the encapsulated packet is larger than the original, so the MTU must be lowered; 1480 keeps the encapsulated packet from exceeding the 1500-byte Ethernet limit.
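The MTU arithmetic is simple enough to verify (20 bytes is the size of the outer IPv4 header that IPIP encapsulation prepends):

```python
# Why Calico's IPIP tunnel interfaces show mtu 1480:
# the tunnel wraps each pod packet in an extra outer IPv4 header (20 bytes),
# and the resulting frame must still fit the physical 1500-byte Ethernet MTU.
ETHERNET_MTU = 1500
OUTER_IPV4_HEADER = 20  # minimal IPv4 header, no options

tunnel_mtu = ETHERNET_MTU - OUTER_IPV4_HEADER
print(tunnel_mtu)  # → 1480
```

Other encapsulations shrink the MTU differently (e.g. VXLAN adds a larger header), which is why the usable pod MTU always depends on the CNI mode in use.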
3.3 Multi-container Pod
Check the network configuration of a Pod with multiple containers.
6/10-pod-network-multicon.yaml
Code block |
---|
apiVersion: v1
kind: Pod
metadata:
  name: multi-container
spec:
  containers:
  - name: ubuntu
    image: ubuntu:20.04
    command: ["/bin/sleep", "3650d"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  volumes:
  - name: cache-volume
    emptyDir: {}
  restartPolicy: Always
  dnsConfig:
    nameservers:
    - 8.8.8.8 |
Code block |
---|
root@cp-k8s:~# k get pod -o wide
NAME                                      READY   STATUS    RESTARTS       AGE    IP               NODE     NOMINATED NODE   READINESS GATES
multi-container                           2/2     Running   0              15m    172.16.132.6     w3-k8s   <none>           <none>
nfs-client-provisioner-5cf87f6995-vg6fq   1/1     Running   28 (10m ago)   7h4m   172.16.221.133   w1-k8s   <none>           <none>
ubuntu-test                               1/1     Running   0              36m    172.16.103.134   w2-k8s   <none>           <none> |
Let's look at the network flow from inside the containers. Both containers of the Pod share one network namespace, so they see the same eth0 interface and IP.
Code block |
---|
**# attach to the ubuntu container**
root@cp-k8s:~# kubectl exec -it multi-container -c ubuntu -- bash
### inside the POD
# apt update
# apt install -y net-tools iputils-ping
root@multi-container:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        inet 172.16.132.6  netmask 255.255.255.255  broadcast 0.0.0.0
        ether da:6c:e7:bc:8b:f9  txqueuelen 1000  (Ethernet)
        ...

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        ...

**# attach to the nginx container**
root@cp-k8s:~# kubectl exec -it multi-container -c nginx -- bash
### run inside the POD
# apt update
# apt install -y net-tools iputils-ping
root@multi-container:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        inet 172.16.132.6  netmask 255.255.255.255  broadcast 0.0.0.0
        ether da:6c:e7:bc:8b:f9  txqueuelen 1000  (Ethernet)
        RX packets 590  bytes 9590376 (9.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 355  bytes 21258 (20.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0 |
Both containers report the same IP (172.16.132.6) and MAC (da:6c:e7:bc:8b:f9): containers in one Pod share the Pod's network namespace.
Check the node network: on w3-k8s, each Pod gets a cali* veth interface on the node side.
Code block |
---|
root@w3-k8s:~# ifconfig -a
cali9794446aa53: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        ether ee:ee:ee:ee:ee:ee  txqueuelen 1000  (Ethernet)
        ...

calib4cfe5eb958: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        ether ee:ee:ee:ee:ee:ee  txqueuelen 1000  (Ethernet)
        RX packets 3885  bytes 218205 (218.2 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5122  bytes 41619671 (41.6 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        RX packets 196682  bytes 287889664 (287.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13578  bytes 11005482 (11.0 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.103  netmask 255.255.255.0  broadcast 192.168.1.255
        ether 08:00:27:3f:6d:d5  txqueuelen 1000  (Ethernet)
        RX packets 189996  bytes 54057708 (54.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 171448  bytes 22285772 (22.2 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
        inet 172.16.132.0  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 129  bytes 24080 (24.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 137  bytes 24502 (24.5 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0 |
Check the node network
Code block |
---|
root@sung-ubuntu05:~# ifconfig -a
calib4cfe5eb958: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1480
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 3935  bytes 267591 (267.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6287  bytes 33013014 (33.0 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:6a:17:c5:80  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.110.104  netmask 255.255.0.0  broadcast 192.168.255.255
        inet6 fe80::f816:3eff:fe54:bc4  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:54:0b:c4  txqueuelen 1000  (Ethernet)
        RX packets 1353299  bytes 1304887824 (1.3 GB)
        RX errors 0  dropped 88603  overruns 0  frame 0
        TX packets 191206  bytes 20789350 (20.7 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

kube-ipvs0: flags=130<BROADCAST,NOARP>  mtu 1500
        inet 10.233.0.1  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 66:2d:b3:6c:50:9a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 162061  bytes 22298211 (22.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 162061  bytes 22298211 (22.2 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

nodelocaldns: flags=130<BROADCAST,NOARP>  mtu 1500
        inet 169.254.25.10  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 16:84:53:46:fe:65  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480          # tunnel interface
        inet 10.233.78.0  netmask 255.255.255.255
        tunnel  txqueuelen 1000  (IPIP Tunnel)
        RX packets 69  bytes 9380 (9.3 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 76  bytes 5125 (5.1 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0 |
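A side note on the `mtu 1480` shown for `tunl0` (and for the Pod's `eth0`): Calico's IPIP mode wraps every pod-to-pod packet in an extra 20-byte IPv4 header, so the tunnel MTU is the physical interface MTU minus that overhead. A quick sanity check, assuming the typical 1500-byte `ens3` MTU seen above:

```shell
# IP-in-IP (IPIP) encapsulation prepends one extra 20-byte IPv4 header,
# so the tunnel MTU = physical MTU - IPIP header size.
ETH_MTU=1500   # ens3 MTU on the node
IPIP_HDR=20    # one IPv4 header
TUNL_MTU=$((ETH_MTU - IPIP_HDR))
echo "tunl0 mtu: $TUNL_MTU"   # prints "tunl0 mtu: 1480"
```

This is also why packets from a Pod already start out at MTU 1480: it avoids fragmentation once the node adds the outer IPIP header.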
Check the containers that belong to multi-container
Code block |
---|
root@sung-ubuntu05:~# docker ps | grep multi-container
64c1938850a2   nginx                  "/docker-entrypoint.…"   26 minutes ago   Up 25 minutes   k8s_nginx_multi-container_default_1d0e0776-18b1-4c7f-b05f-20b8c54fb230_0
b4c4045ac777   ubuntu                 "/bin/sleep 3650d"       26 minutes ago   Up 26 minutes   k8s_ubuntu_multi-container_default_1d0e0776-18b1-4c7f-b05f-20b8c54fb230_0
1eaedb9c9d55   k8s.gcr.io/pause:3.5   "/pause"                 27 minutes ago   Up 26 minutes   k8s_POD_multi-container_default_1d0e0776-18b1-4c7f-b05f-20b8c54fb230_0 |
Info |
---|
Pause Container
|
Info |
---|
Linux namespace
|
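The Linux namespace note above is the mechanism behind everything in this section: processes are "in the same Pod network" exactly when their `/proc/<pid>/ns/net` links point at the same namespace object (in Kubernetes, the one held open by the pause container). A local illustration that needs no cluster:

```shell
# A process's network namespace is exposed as a symlink under /proc.
# Two processes launched in the same namespace (like the containers of a
# single Pod) resolve to the same net:[inode] identifier.
a=$(readlink /proc/self/ns/net)
b=$(sh -c 'readlink /proc/self/ns/net')
echo "$a"
[ "$a" = "$b" ] && echo "same network namespace"
```

Inside a running Pod you can make the same comparison across containers, e.g. `kubectl exec <pod> -c <container> -- readlink /proc/1/ns/net` for each container.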
3.4 Pod-to-Pod Communication
Check the route path between Pods
Code block |
---|
root@sung-ubuntu01:~/tmp# kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
multi-container   2/2     Running   0          25m   10.233.78.3   sung-ubuntu05   <none>           <none>
ubuntu-test       1/1     Running   0          57m   10.233.99.1   sung-ubuntu04   <none>           <none>

# ubuntu-test
root@ubuntu-test:/# apt install traceroute
root@ubuntu-test:/# traceroute 10.233.78.3
traceroute to 10.233.78.3 (10.233.78.3), 30 hops max, 60 byte packets
 1  192.168.110.103 (192.168.110.103)  0.230 ms  0.032 ms  0.028 ms    # sung-ubuntu04 ens3
 2  10.233.78.0 (10.233.78.0)  1.169 ms  0.990 ms  0.928 ms    # sung-ubuntu05 tunl0
 3  10.233.78.3 (10.233.78.3)  1.096 ms  1.111 ms  1.087 ms    # multi-container IP |
Check the node route table
Code block |
---|
root@sung-ubuntu04:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    100    0        0 ens3
10.233.78.0     sung-ubuntu05.c 255.255.255.0   UG    0      0        0 tunl0
10.233.91.0     sung-ubuntu02.c 255.255.255.0   UG    0      0        0 tunl0
10.233.95.0     sung-ubuntu01.c 255.255.255.0   UG    0      0        0 tunl0
10.233.99.0     0.0.0.0         255.255.255.0   U     0      0        0 *
10.233.99.1     0.0.0.0         255.255.255.255 UH    0      0        0 calie3df4d89b13
10.233.99.2     0.0.0.0         255.255.255.255 UH    0      0        0 calia85a668c715
10.233.112.0    sung-ubuntu03.c 255.255.255.0   UG    0      0        0 tunl0
169.254.169.254 192.168.51.1    255.255.255.255 UGH   100    0        0 ens3
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 ens3

root@sung-ubuntu04:~# cat /etc/hosts
127.0.0.1 localhost
192.168.110.100 sung-ubuntu01.cluster.local sung-ubuntu01
192.168.110.101 sung-ubuntu02.cluster.local sung-ubuntu02
192.168.110.102 sung-ubuntu03.cluster.local sung-ubuntu03
192.168.110.103 sung-ubuntu04.cluster.local sung-ubuntu04
192.168.110.104 sung-ubuntu05.cluster.local sung-ubuntu05 |
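Reading the table above: each `10.233.x.0/24` row pointing at another node over `tunl0` is that node's pod CIDR, while the `/32` rows on `cali*` interfaces are pods running locally on this node; the kernel always picks the longest matching prefix. A small sketch that pulls just those pod routes out of a table snippet (the rows are copied from the sung-ubuntu04 output above):

```shell
# Filter pod-network routes: tunl0 = the pod CIDR of a remote node,
# cali* = a single pod local to this node.
routes='10.233.78.0 sung-ubuntu05.c 255.255.255.0 UG tunl0
10.233.99.1 0.0.0.0 255.255.255.255 UH calie3df4d89b13
192.168.0.0 0.0.0.0 255.255.0.0 U ens3'
echo "$routes" | awk '$NF == "tunl0" || $NF ~ /^cali/ { print $1, "->", $NF }'
# prints:
#   10.233.78.0 -> tunl0
#   10.233.99.1 -> calie3df4d89b13
```

This matches the traceroute above: traffic to 10.233.78.3 first matches the 10.233.78.0/24 row and is sent into `tunl0` toward sung-ubuntu05.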
References
https://kubernetes.io/ko/docs/concepts/cluster-administration/networking/
...