{"id":6193,"date":"2019-03-29T23:00:02","date_gmt":"2019-03-29T23:00:02","guid":{"rendered":"http:\/\/howk.de\/w1\/blog-kube-proxy-subtleties-debugging-an-intermittent-connection-reset\/"},"modified":"2019-03-29T23:00:02","modified_gmt":"2019-03-29T23:00:02","slug":"blog-kube-proxy-subtleties-debugging-an-intermittent-connection-reset","status":"publish","type":"post","link":"https:\/\/howk.de\/?p=6193","title":{"rendered":"Blog: kube-proxy Subtleties: Debugging an Intermittent Connection Reset"},"content":{"rendered":"<p><strong>Author:<\/strong> <a href=\"mailto:ygui@google.com\" target=\"_blank\">Yongkun Gui<\/a>, Google<\/p>\n<p>I recently came across a bug that causes intermittent connection resets. After<br \/>\nsome digging, I found it was caused by a subtle combination of several different<br \/>\nnetwork subsystems. It helped me understand Kubernetes networking better, and I<br \/>\nthink it\u2019s worthwhile to share with a wider audience who are interested in the same<br \/>\ntopic.<\/p>\n<h2 id=\"the-symptom\">The symptom<\/h2>\n<p>We received a user report claiming they were getting connection resets while using a<br \/>\nKubernetes service of type ClusterIP to serve large files to pods running in the<br \/>\nsame cluster. Initial debugging of the cluster did not yield anything<br \/>\ninteresting: network connectivity was fine and downloading the files did not hit<br \/>\nany issues. However, when we ran the workload in parallel across many clients,<br \/>\nwe were able to reproduce the problem. Adding to the mystery was the fact that<br \/>\nthe problem could not be reproduced when the workload was run using VMs without<br \/>\nKubernetes. 
The problem, which could be easily reproduced by <a href=\"https:\/\/github.com\/tcarmet\/k8s-connection-reset\" target=\"_blank\">a simple<br \/>\napp<\/a>, clearly has something to<br \/>\ndo with Kubernetes networking, but what?<\/p>\n<h2 id=\"kubernetes-networking-basics\">Kubernetes networking basics<\/h2>\n<p>Before digging into this problem, let\u2019s talk a little bit about some basics of<br \/>\nKubernetes networking, as Kubernetes handles network traffic from a pod very<br \/>\ndifferently depending on the destination.<\/p>\n<h3 id=\"pod-to-pod\">Pod-to-Pod<\/h3>\n<p>In Kubernetes, every pod has its own IP address. The benefit is that the<br \/>\napplications running inside pods can use their canonical ports instead of<br \/>\nbeing remapped to different random ports. Pods have L3 connectivity to each<br \/>\nother: they can ping each other, and send TCP or UDP packets to each other.<br \/>\n<a href=\"https:\/\/github.com\/containernetworking\/cni\" target=\"_blank\">CNI<\/a> is the standard that solves<br \/>\nthis problem for containers running on different hosts, and there are many<br \/>\nplugins that implement it.<\/p>\n<h3 id=\"pod-to-external\">Pod-to-external<\/h3>\n<p>For traffic that goes from a pod to an external address, Kubernetes simply uses<br \/>\n<a href=\"https:\/\/en.wikipedia.org\/wiki\/Network_address_translation\" target=\"_blank\">SNAT<\/a>, replacing<br \/>\nthe pod\u2019s internal source IP:port with the host\u2019s IP:port. When the return<br \/>\npacket comes back to the host, it rewrites the destination back to the pod\u2019s<br \/>\nIP:port and forwards it to the original pod. The whole process is transparent<br \/>\nto the pod, which is unaware of the address translation.<\/p>\n<h3 id=\"pod-to-service\">Pod-to-Service<\/h3>\n<p>Pods are mortal, but users want reliable services. 
So Kubernetes has a concept called &ldquo;service&rdquo;, which is<br \/>\nsimply an L4 load balancer in front of pods. There are several different types<br \/>\nof services; the most basic type is called ClusterIP. This type of service has a<br \/>\nunique VIP address that is only routable inside the cluster.<\/p>\n<p>The component in Kubernetes that implements this feature is called kube-proxy.<br \/>\nIt sits on every node, and programs complicated iptables rules to do all kinds<br \/>\nof filtering and NAT between pods and services. If you go to a Kubernetes node<br \/>\nand type <code>iptables-save<\/code>, you\u2019ll see the rules that are inserted by Kubernetes<br \/>\nor other programs. The most important chains are <code>KUBE-SERVICES<\/code>, <code>KUBE-SVC-*<\/code><br \/>\nand <code>KUBE-SEP-*<\/code>.<\/p>\n<ul>\n<li><code>KUBE-SERVICES<\/code> is the entry point for service packets. It matches the<br \/>\ndestination IP:port and dispatches the packet to the corresponding<br \/>\n<code>KUBE-SVC-*<\/code> chain.<\/li>\n<li>A <code>KUBE-SVC-*<\/code> chain acts as a load balancer, distributing packets equally<br \/>\nacross its <code>KUBE-SEP-*<\/code> chains. Every <code>KUBE-SVC-*<\/code> chain has as many<br \/>\n<code>KUBE-SEP-*<\/code> chains as there are endpoints behind it.<\/li>\n<li>A <code>KUBE-SEP-*<\/code> chain represents a Service EndPoint. It simply does DNAT,<br \/>\nreplacing the service IP:port with the pod endpoint&rsquo;s IP:port.<\/li>\n<\/ul>\n<p>For DNAT, conntrack kicks in and tracks the connection state using a state<br \/>\nmachine. The state is needed because conntrack has to remember the destination<br \/>\naddress it changed, so it can change it back when the return packet arrives.<br \/>\nIptables rules can also rely on the conntrack state (ctstate) to decide the<br \/>\nfate of a packet. 
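<\/p>\n<p>To make conntrack\u2019s role concrete, here is a tiny Python model of the DNAT bookkeeping described above. It is purely illustrative (not real kube-proxy or kernel code), and all addresses are made up: the table remembers each translation so the reply can be rewritten back, and an untracked reply keeps the pod\u2019s address.<\/p>\n<pre><code class=\"language-python\"># Illustrative toy model of conntrack\u2019s DNAT bookkeeping (not real kernel code).\nSERVICE_ENDPOINTS = {(\"192.168.0.2\", 80): [(\"10.0.1.2\", 80)]}\n\nconntrack_table = {}  # (client, backend) -&gt; original service address\n\ndef dnat_outgoing(client, service):\n    # KUBE-SEP-*-style DNAT: rewrite the service address to a pod endpoint.\n    backend = SERVICE_ENDPOINTS[service][0]  # a real balancer picks one\n    conntrack_table[(client, backend)] = service  # remember it for the reply\n    return backend\n\ndef rewrite_reply(client, src):\n    # Reverse translation: restore the service address on the return packet.\n    # If conntrack no longer tracks the connection, the source stays the pod IP!\n    return conntrack_table.get((client, src), src)\n\nclient = (\"10.0.0.2\", 34567)\nbackend = dnat_outgoing(client, (\"192.168.0.2\", 80))   # packet 1 -&gt; packet 2\nassert rewrite_reply(client, backend) == (\"192.168.0.2\", 80)  # packet 3 -&gt; 4\n<\/code><\/pre>\n<p>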
These four conntrack states are especially important:<\/p>\n<ul>\n<li><em>NEW<\/em>: conntrack knows nothing about this packet, which happens when the SYN<br \/>\npacket is received.<\/li>\n<li><em>ESTABLISHED<\/em>: conntrack knows the packet belongs to an established connection,<br \/>\nwhich happens after the handshake completes.<\/li>\n<li><em>RELATED<\/em>: the packet doesn\u2019t belong to any connection itself, but it is affiliated<br \/>\nwith another connection, which is especially useful for protocols like FTP.<\/li>\n<li><em>INVALID<\/em>: something is wrong with the packet, and conntrack doesn\u2019t know how<br \/>\nto deal with it. This state plays a central role in this Kubernetes issue.<\/li>\n<\/ul>\n<p>Here is a diagram of how a TCP connection works between a pod and a service. The<br \/>\nsequence of events is:<\/p>\n<ul>\n<li>The client pod on the left-hand side sends a packet to a<br \/>\nservice: 192.168.0.2:80<\/li>\n<li>The packet goes through the iptables rules on the client<br \/>\nnode, and the destination is changed to the server pod IP, 10.0.1.2:80<\/li>\n<li>The server pod handles the packet and sends back a packet with destination 10.0.0.2<\/li>\n<li>The packet goes back through the client node; conntrack recognizes the packet and rewrites the source<br \/>\naddress back to 192.168.0.2:80<\/li>\n<li>The client pod receives the response packet<\/li>\n<\/ul>\n<figure>\n<img decoding=\"async\" src=\"https:\/\/kubernetes.io\/images\/blog\/2019-03-26-kube-proxy-subtleties-debugging-an-intermittent-connection-resets\/good-packet-flow.png\" alt=\"Good packet flow\" width=\"100%\" \/><figcaption>\n<p>Good packet flow<\/p>\n<\/figcaption><\/figure>\n<h2 id=\"what-caused-the-connection-reset\">What caused the connection reset?<\/h2>\n<p>Enough of the background; what really went wrong and caused the unexpected<br \/>\nconnection reset?<\/p>\n<p>As the diagram below shows, the problem is packet 3: when conntrack cannot<br \/>\nrecognize a returning packet, it marks the packet as <em>INVALID<\/em>. 
The most common<br \/>\nreasons are that conntrack has run out of capacity and cannot keep track of the<br \/>\nconnection, or that the packet falls outside the TCP window. For packets<br \/>\nthat conntrack has marked as <em>INVALID<\/em>, there is no<br \/>\niptables rule to drop them, so they are forwarded to the client pod with the source IP<br \/>\naddress not rewritten (as shown in packet 4)! The client pod doesn\u2019t recognize this<br \/>\npacket because it has a different source IP: the pod IP, not the service IP. As<br \/>\na result, the client pod says, &ldquo;Wait a second, I don&rsquo;t recall a connection to<br \/>\nthis IP ever existing, why does this dude keep sending packets to me?&rdquo; Basically,<br \/>\nthe client simply sends an RST packet to the server pod IP, which<br \/>\nis packet 5. Unfortunately, this is a totally legit pod-to-pod packet, which can<br \/>\nbe delivered to the server pod. The server pod doesn\u2019t know about the address translations<br \/>\nthat happened on the client side. From its view, packet 5 is a totally legit<br \/>\npacket, like packets 2 and 3. All the server pod knows is, &ldquo;Well, the client pod doesn\u2019t<br \/>\nwant to talk to me, so let\u2019s close the connection!&rdquo; Boom! Of course, for<br \/>\nall of this to happen, the RST packet has to be legit too, with the right TCP<br \/>\nsequence number, etc. But when it is, both parties agree to close the<br \/>\nconnection.<\/p>\n<figure>\n<img decoding=\"async\" src=\"https:\/\/kubernetes.io\/images\/blog\/2019-03-26-kube-proxy-subtleties-debugging-an-intermittent-connection-resets\/connection-reset-packet-flow.png\" alt=\"Connection reset packet flow\" width=\"100%\" \/><figcaption>\n<p>Connection reset packet flow<\/p>\n<\/figcaption><\/figure>\n<h2 id=\"how-to-address-it\">How to address it?<\/h2>\n<p>Once we understand the root cause, the fix is not hard. 
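<\/p>\n<p>The reset path can be condensed into a similar toy model (again illustrative Python, not real kernel code, with made-up addresses): when the reply is no longer tracked, its source stays the pod IP, the client finds no matching connection for it, and it answers with an RST.<\/p>\n<pre><code class=\"language-python\"># Illustrative toy model of the reset path (not real kernel code).\nSERVICE = (\"192.168.0.2\", 80)\nBACKEND = (\"10.0.1.2\", 80)\n\nclient_connections = {SERVICE}  # the client only knows the service address\n\ndef deliver_reply(src, tracked):\n    # Packet 3 reaches the client node; conntrack rewrites the source\n    # only if the connection is still tracked (i.e. not INVALID).\n    return SERVICE if tracked else src\n\ndef client_receive(src):\n    # Packet 4: a reply from an unknown source is answered with an RST (packet 5).\n    return \"ACK\" if src in client_connections else \"RST\"\n\nassert client_receive(deliver_reply(BACKEND, tracked=True)) == \"ACK\"\nassert client_receive(deliver_reply(BACKEND, tracked=False)) == \"RST\"\n<\/code><\/pre>\n<p>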
There are at least two<br \/>\nways to address it:<\/p>\n<ul>\n<li>Make conntrack more liberal and stop marking such packets as<br \/>\n<em>INVALID<\/em>. In Linux, you can do this with <code>echo 1 &gt;<br \/>\n\/proc\/sys\/net\/ipv4\/netfilter\/ip_conntrack_tcp_be_liberal<\/code>.<\/li>\n<li>Specifically add an iptables rule to drop packets that are marked<br \/>\n<em>INVALID<\/em>, so they never reach the client pod and cause harm.<\/li>\n<\/ul>\n<p>The fix is drafted (<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/74840\" target=\"_blank\">https:\/\/github.com\/kubernetes\/kubernetes\/pull\/74840<\/a>), but<br \/>\nunfortunately it didn\u2019t make the v1.14 release window. However, for users<br \/>\naffected by this bug, there is a way to mitigate the problem by deploying<br \/>\nthe following DaemonSet in your cluster. It runs a privileged startup script on<br \/>\nevery node that applies the first mitigation.<\/p>\n<div class=\"highlight\">\n<pre style=\"background-color:#f8f8f8\"><code class=\"language-yaml\" data-lang=\"yaml\">apiVersion: extensions\/v1beta1\nkind: DaemonSet\nmetadata:\n  name: startup-script\n  labels:\n    app: startup-script\nspec:\n  template:\n    metadata:\n      labels:\n        app: startup-script\n    spec:\n      hostPID: true\n      containers:\n      - name: startup-script\n        image: gcr.io\/google-containers\/startup-script:v1\n        imagePullPolicy: IfNotPresent\n        securityContext:\n          privileged: true\n        env:\n        - name: STARTUP_SCRIPT\n          value: |\n            #! \/bin\/bash\n            echo 1 &gt; \/proc\/sys\/net\/ipv4\/netfilter\/ip_conntrack_tcp_be_liberal\n            echo done<\/code><\/pre>\n<\/div>\n<h2 id=\"summary\">Summary<\/h2>\n<p>Obviously, the bug has existed almost forever. I am surprised that it<br \/>\nwasn\u2019t noticed until recently. 
I believe the reasons could be: (1) this<br \/>\nhappens more on a congested server serving large payloads, which might not be a<br \/>\ncommon use case; (2) the application layer handles retries and is tolerant of<br \/>\nthis kind of reset. Regardless of how fast Kubernetes has been growing,<br \/>\nit\u2019s still a young project. There is no secret other than listening closely to<br \/>\ncustomers\u2019 feedback, taking nothing for granted, and digging deep; that is how we can<br \/>\nmake it the best platform to run applications.<\/p>\n<p>Special thanks to <a href=\"https:\/\/github.com\/bowei\" target=\"_blank\">bowei<\/a> for consulting on both the<br \/>\ndebugging process and this blog, and to <a href=\"https:\/\/github.com\/tcarmet\" target=\"_blank\">tcarmet<\/a> for<br \/>\nreporting the issue and providing a reproduction.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Yongkun Gui, Google I recently came across a bug that causes intermittent connection resets. After some digging, I found it was caused by a subtle combination of several different network subsystems. 
It helped me understand Kubernetes networking better, and I think it\u2019s worthwhile to share with a wider audience who are interested in the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13],"tags":[],"_links":{"self":[{"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/posts\/6193"}],"collection":[{"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6193"}],"version-history":[{"count":0,"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/posts\/6193\/revisions"}],"wp:attachment":[{"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6193"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6193"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6193"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}