https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/
TLS paths tested:
- from worker node: no-cert-check TLS → simple TLS
- through gateway: no-cert-check TLS → no-cert-check TLS
https://istio.io/latest/docs/ops/configuration/traffic-management/tls-configuration/
Sidecar traffic has a variety of associated connections. Let’s break them down one at a time.

Sidecar proxy network connections
Inbound: by default the sidecar accepts both mTLS and plaintext traffic (PERMISSIVE mode). The mode can alternatively be configured to STRICT, where traffic must be mTLS, or DISABLE, where traffic must be plaintext. The mTLS mode is configured using a PeerAuthentication resource.
Outbound: TLS settings are configured via the trafficPolicy of a DestinationRule resource. A mode setting of DISABLE will send plaintext, while SIMPLE, MUTUAL, and ISTIO_MUTUAL will originate a TLS connection.
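A minimal sketch of the two resources mentioned above (the namespace and hostname are placeholders, not from the source):

```yaml
# Inbound side: require mTLS for every workload in a namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: demo                  # placeholder namespace
spec:
  mtls:
    mode: STRICT                   # or PERMISSIVE / DISABLE
---
# Outbound side: originate Istio mTLS when calling a mesh service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demo-mtls
  namespace: demo
spec:
  host: myservice.demo.svc.cluster.local   # placeholder host
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL           # or DISABLE / SIMPLE / MUTUAL
```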
https://istio.io/v1.4/docs/tasks/security/authentication/mtls-migration/
Ensure that your cluster is in PERMISSIVE mode before migrating to mutual TLS.
In PERMISSIVE mode, the Envoy sidecar relies on the ALPN value istio to decide whether to terminate the mutual TLS traffic. If your workloads (without Envoy sidecar) have enabled mutual TLS directly to the services with Envoy sidecars, enabling PERMISSIVE mode may cause these connections to fail.
The old-school official SPIFFE method:
https://blog.envoyproxy.io/securing-the-service-mesh-with-spire-0-3-abb45cd79810

A workload is a single piece of software, deployed with a particular configuration for a single purpose; it may comprise multiple running instances of software, all of which perform the same task. The term “workload” may encompass a range of different definitions of a software system, including:
A SPIFFE ID is a string that uniquely and specifically identifies a workload. SPIFFE IDs may also be assigned to intermediate systems that a workload runs on (such as a group of virtual machines). For example, spiffe://acme.com/billing/payments is a valid SPIFFE ID.
x-forwarded-client-cert (XFCC) is a proxy header which indicates certificate information of part or all of the clients or proxies that a request has flowed through, on its way from the client to the server. A proxy may choose to sanitize/append/forward the XFCC header before proxying the request.
The XFCC header value is a comma (",") separated string. Each substring is an XFCC element, which holds information added by a single proxy. A proxy can append the current client certificate information as an XFCC element, to the end of the request’s XFCC header after a comma.
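For illustration, a single-element XFCC value using the keys Envoy defines (By, Hash, Subject, URI); the SPIFFE IDs are made up and the hash is truncated:

```
x-forwarded-client-cert: By=spiffe://cluster.local/ns/demo/sa/server;Hash=5d64...;Subject="CN=client.demo";URI=spiffe://cluster.local/ns/demo/sa/client
```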
https://istio.io/latest/docs/ops/common-problems/network-issues/#double-tls
When configuring Istio to perform TLS origination, you need to make sure that the application sends plaintext requests to the sidecar, which will then originate the TLS.
TLS Origination
TLS origination occurs when an Istio proxy (sidecar or egress gateway) is configured to accept unencrypted internal HTTP connections, encrypt the requests, and then forward them to HTTPS servers that are secured using simple or mutual TLS. This is the opposite of TLS termination where an ingress proxy accepts incoming TLS connections, decrypts the TLS, and passes unencrypted requests on to internal mesh services.
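A sketch of egress TLS origination under the pattern the Istio task describes (the external host is a placeholder): the app sends plain HTTP on port 80, a VirtualService redirects it to port 443, and the DestinationRule tells the proxy to originate simple TLS on that port.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-site
spec:
  hosts:
  - example.com                    # placeholder external host
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https-port
    protocol: HTTPS
  resolution: DNS
---
# Redirect plaintext port-80 traffic to the target's port 443.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: rewrite-port
spec:
  hosts:
  - example.com
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: example.com
        port:
          number: 443
---
# Originate simple TLS for connections going to port 443.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: originate-tls
spec:
  host: example.com
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE
```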
When CPU manager is enabled with the “static” policy, it manages a shared pool of CPUs. Initially this shared pool contains all the CPUs in the compute node. When a container with integer CPU request in a Guaranteed pod is created by the Kubelet, CPUs for that container are removed from the shared pool and assigned exclusively for the lifetime of the container. Other containers are migrated off these exclusively allocated CPUs.
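To be eligible for exclusive CPUs under the static policy, a pod must be in the Guaranteed QoS class (limits equal requests) with an integer CPU count; roughly (names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-app                       # placeholder name
spec:
  containers:
  - name: app
    image: registry.example/app:latest   # placeholder image
    resources:
      requests:
        cpu: "2"                         # integer CPU request → exclusive CPUs
        memory: 1Gi
      limits:
        cpu: "2"                         # limits == requests → Guaranteed QoS
        memory: 1Gi
```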
- Guarantee that a minimum number of NUMA nodes satisfies the pod's memory needs: Offer guaranteed memory (and hugepages) allocation over a minimum number of NUMA nodes for containers (within a pod).
- Long term, run all containers of a pod on as few NUMA nodes as possible: Guaranteeing the affinity of memory and hugepages to the same NUMA node for the whole group of containers (within a pod). This is a long-term goal which will be achieved along with PR #1752 and the implementation of hintprovider.GetPodLevelTopologyHints() API in the Memory Manager.
Your Kubernetes server must be at or later than version v1.21. To check the version, enter kubectl version.
To align memory resources with other requested resources in a Pod Spec:
Starting from v1.22, the Memory Manager is enabled by default through MemoryManager feature gate.
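The Memory Manager policy is set in the kubelet configuration; a sketch, with an illustrative per-NUMA-node reservation (sizes are assumptions, not from the source):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
memoryManagerPolicy: Static       # default is None (no pinning)
reservedMemory:
- numaNode: 0
  limits:
    memory: 1Gi                   # illustrative system reservation
```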
Topology Manager provides two distinct knobs: scope and policy.
The scope defines the granularity at which you would like resource alignment to be performed (e.g. at the pod or container level). And the policy defines the actual strategy used to carry out the alignment (e.g. best-effort, restricted, single-numa-node, etc.).
The Topology Manager can deal with the alignment of resources in a couple of distinct scopes:
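Both knobs map to kubelet configuration fields, e.g.:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
topologyManagerScope: pod                  # "container" (default) or "pod"
topologyManagerPolicy: single-numa-node    # none | best-effort | restricted | single-numa-node
```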
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container
https://towardsdatascience.com/the-easiest-way-to-debug-kubernetes-workloads-ff2ff5e3cc75
kubectl debug -it some-app --image=busybox --share-processes --copy-to=some-app-debug