{"id":6321,"date":"2019-04-04T16:00:03","date_gmt":"2019-04-04T16:00:03","guid":{"rendered":"http:\/\/howk.de\/w1\/blog-kubernetes-1-14-local-persistent-volumes-ga\/"},"modified":"2019-04-04T16:00:03","modified_gmt":"2019-04-04T16:00:03","slug":"blog-kubernetes-1-14-local-persistent-volumes-ga","status":"publish","type":"post","link":"https:\/\/howk.de\/?p=6321","title":{"rendered":"Blog: Kubernetes 1.14: Local Persistent Volumes GA"},"content":{"rendered":"<p><strong>Authors<\/strong>: Michelle Au (Google), Matt Schallert (Uber), Celina Ward (Uber)<\/p>\n<p>The <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/#local\" target=\"_blank\">Local Persistent Volumes<\/a><br \/>\nfeature has been promoted to GA in Kubernetes 1.14.<br \/>\nIt was first introduced as alpha in Kubernetes 1.7, and then<br \/>\n<a href=\"https:\/\/kubernetes.io\/blog\/2018\/04\/13\/local-persistent-volumes-beta\/\" target=\"_blank\">beta<\/a> in Kubernetes<br \/>\n1.10. The GA milestone indicates that Kubernetes users may depend on the feature<br \/>\nand its API for production use. GA features are protected by the Kubernetes<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/using-api\/deprecation-policy\/\" target=\"_blank\">deprecation<br \/>\npolicy<\/a>.<\/p>\n<h2 id=\"what-is-a-local-persistent-volume\">What is a Local Persistent Volume?<\/h2>\n<p>A local persistent volume represents a local disk directly-attached to a single<br \/>\nKubernetes Node.<\/p>\n<p>Kubernetes provides a powerful volume plugin system that enables Kubernetes<br \/>\nworkloads to use a <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/#types-of-volumes\" target=\"_blank\">wide<br \/>\nvariety<\/a><br \/>\nof block and file storage to persist data. Most<br \/>\nof these plugins enable remote storage &ndash; these remote storage systems persist<br \/>\ndata independent of the Kubernetes node where the data originated. 
Remote<br \/>\nstorage usually cannot offer the consistent high-performance guarantees of<br \/>\nlocal directly-attached storage. With the Local Persistent Volume plugin,<br \/>\nKubernetes workloads can now consume high-performance local storage using the<br \/>\nsame volume APIs that app developers have become accustomed to.<\/p>\n<h2 id=\"how-is-it-different-from-a-hostpath-volume\">How is it different from a HostPath Volume?<\/h2>\n<p>To better understand the benefits of a Local Persistent Volume, it is useful to<br \/>\ncompare it to a <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/#hostpath\" target=\"_blank\">HostPath volume<\/a>.<br \/>\nHostPath volumes mount a file or directory from<br \/>\nthe host node\u2019s filesystem into a Pod. Similarly, a Local Persistent Volume<br \/>\nmounts a local disk or partition into a Pod.<\/p>\n<p>The biggest difference is that the Kubernetes scheduler understands which node a<br \/>\nLocal Persistent Volume belongs to. With HostPath volumes, a pod referencing a<br \/>\nHostPath volume may be moved by the scheduler to a different node, resulting in<br \/>\ndata loss. But with Local Persistent Volumes, the Kubernetes scheduler ensures<br \/>\nthat a pod using a Local Persistent Volume is always scheduled to the same node.<\/p>\n<p>While HostPath volumes may be referenced via a Persistent Volume Claim (PVC) or<br \/>\ndirectly inline in a pod definition, Local Persistent Volumes can only be<br \/>\nreferenced via a PVC. 
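<\/p>\n<p>As a concrete sketch, an administrator-created local PersistentVolume carries a <code>nodeAffinity<\/code> constraint that tells the scheduler which node the disk lives on. The object name, disk path, and node name below are illustrative values, not from a real cluster:<\/p>\n<pre><code>apiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: example-local-pv\nspec:\n  capacity:\n    storage: 100Gi\n  accessModes:\n  - ReadWriteOnce\n  persistentVolumeReclaimPolicy: Delete\n  storageClassName: local-storage\n  local:\n    path: \/mnt\/disks\/ssd1\n  nodeAffinity:\n    required:\n      nodeSelectorTerms:\n      - matchExpressions:\n        - key: kubernetes.io\/hostname\n          operator: In\n          values:\n          - example-node\n<\/code><\/pre>\n<p>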
This provides additional security benefits since<br \/>\nPersistent Volume objects are managed by the administrator, preventing Pods from<br \/>\nbeing able to access any path on the host.<\/p>\n<p>Additional benefits include support for formatting of block devices during<br \/>\nmount, and volume ownership using fsGroup.<\/p>\n<h2 id=\"what-s-new-with-ga\">What&rsquo;s New With GA?<\/h2>\n<p>Since 1.10, we have mainly focused on improving stability and scalability of the<br \/>\nfeature so that it is production-ready.<\/p>\n<p>The only major feature addition is the ability to specify a raw block device and<br \/>\nhave Kubernetes automatically format and mount the filesystem. This reduces the<br \/>\nprevious burden of having to format and mount devices before giving them to<br \/>\nKubernetes.<\/p>\n<h2 id=\"limitations-of-ga\">Limitations of GA<\/h2>\n<p>At GA, Local Persistent Volumes do not support <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/dynamic-provisioning\/\" target=\"_blank\">dynamic volume<br \/>\nprovisioning<\/a>.<br \/>\nHowever, there is an <a href=\"https:\/\/github.com\/kubernetes-sigs\/sig-storage-local-static-provisioner\" target=\"_blank\">external<br \/>\ncontroller<\/a><br \/>\navailable to help manage the local<br \/>\nPersistentVolume lifecycle for individual disks on your nodes. This includes<br \/>\ncreating the PersistentVolume objects, and cleaning up and reusing disks once they<br \/>\nhave been released by the application.<\/p>\n<h2 id=\"how-to-use-a-local-persistent-volume\">How to Use a Local Persistent Volume?<\/h2>\n<p>Workloads can request a local persistent volume using the same<br \/>\nPersistentVolumeClaim interface as remote storage backends. 
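<\/p>\n<p>For illustration, such a claim looks the same as one for remote storage; only the StorageClass differs. The claim name below is a made-up example:<\/p>\n<pre><code>kind: PersistentVolumeClaim\napiVersion: v1\nmetadata:\n  name: example-local-claim\nspec:\n  accessModes:\n  - ReadWriteOnce\n  resources:\n    requests:\n      storage: 100Gi\n  storageClassName: local-storage\n<\/code><\/pre>\n<p>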
This makes it easy<br \/>\nto swap out the storage backend across clusters, clouds, and on-prem<br \/>\nenvironments.<\/p>\n<p>First, a StorageClass should be created that sets <code>volumeBindingMode: WaitForFirstConsumer<\/code> to enable <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/storage-classes\/#volume-binding-mode\" target=\"_blank\">volume topology-aware<br \/>\nscheduling<\/a>.<br \/>\nThis mode instructs Kubernetes to wait to bind a PVC until a Pod using it is scheduled.<\/p>\n<pre><code>kind: StorageClass\napiVersion: storage.k8s.io\/v1\nmetadata:\n  name: local-storage\nprovisioner: kubernetes.io\/no-provisioner\nvolumeBindingMode: WaitForFirstConsumer\n<\/code><\/pre>\n<p>Then, the external static provisioner can be <a href=\"https:\/\/github.com\/kubernetes-sigs\/sig-storage-local-static-provisioner#user-guide\" target=\"_blank\">configured and<br \/>\nrun<\/a> to create PVs<br \/>\nfor all the local disks on your nodes.<\/p>\n<pre><code>$ kubectl get pv\nNAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE\nlocal-pv-27c0f084   368Gi      RWO            Delete           Available           local-storage            8s\nlocal-pv-3796b049   368Gi      RWO            Delete           Available           local-storage            7s\nlocal-pv-3ddecaea   368Gi      RWO            Delete           Available           local-storage            7s\n<\/code><\/pre>\n<p>Afterwards, workloads can start using the PVs by creating a PVC and Pod or a<br \/>\nStatefulSet with volumeClaimTemplates.<\/p>\n<pre><code>apiVersion: apps\/v1\nkind: StatefulSet\nmetadata:\n  name: local-test\nspec:\n  serviceName: &quot;local-service&quot;\n  replicas: 3\n  selector:\n    matchLabels:\n      app: local-test\n  template:\n    metadata:\n      labels:\n        app: local-test\n    spec:\n      containers:\n      - name: test-container\n        image: k8s.gcr.io\/busybox\n        command:\n        - &quot;\/bin\/sh&quot;\n        args:\n        - &quot;-c&quot;\n        - &quot;sleep 100000&quot;\n        volumeMounts:\n        - name: local-vol\n          mountPath: \/usr\/test-pod\n  volumeClaimTemplates:\n  - metadata:\n      name: local-vol\n    spec:\n      accessModes: [ &quot;ReadWriteOnce&quot; ]\n      storageClassName: &quot;local-storage&quot;\n      resources:\n        requests:\n          storage: 368Gi\n<\/code><\/pre>\n<p>Once the StatefulSet is up and running, the PVCs are all bound:<\/p>\n<pre><code>$ kubectl get pvc\nNAME                     STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS    AGE\nlocal-vol-local-test-0   Bound    local-pv-27c0f084   368Gi      RWO            local-storage   3m45s\nlocal-vol-local-test-1   Bound    local-pv-3ddecaea   368Gi      RWO            local-storage   3m40s\nlocal-vol-local-test-2   Bound    local-pv-3796b049   368Gi      RWO            local-storage   3m36s\n<\/code><\/pre>\n<p>When the disk is no longer needed, the PVC can be deleted. The external static provisioner<br \/>\nwill clean up the disk and make the PV available for use again.<\/p>\n<pre><code>$ kubectl patch sts local-test -p '{&quot;spec&quot;:{&quot;replicas&quot;:2}}'\nstatefulset.apps\/local-test patched\n$ kubectl delete pvc local-vol-local-test-2\npersistentvolumeclaim &quot;local-vol-local-test-2&quot; deleted\n$ kubectl get pv\nNAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS    REASON   AGE\nlocal-pv-27c0f084   368Gi      RWO            Delete           Bound       default\/local-vol-local-test-0   local-storage            11m\nlocal-pv-3796b049   368Gi      RWO            Delete           Available                                    local-storage            7s\nlocal-pv-3ddecaea   368Gi      RWO            Delete           Bound       default\/local-vol-local-test-1   local-storage            19m\n<\/code><\/pre>\n<p>You can find full <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/#local\" target=\"_blank\">documentation<\/a><br \/>\nfor the feature on the Kubernetes website.<\/p>\n<h2 id=\"what-are-suitable-use-cases\">What Are Suitable Use Cases?<\/h2>\n<p>The primary benefit of Local Persistent Volumes over remote persistent storage<br \/>\nis performance: local disks usually offer higher IOPS and throughput and lower<br \/>\nlatency compared to remote storage systems.<\/p>\n<p>However, there are important limitations and caveats to consider when using<br \/>\nLocal Persistent Volumes:<\/p>\n<ul>\n<li>Using local storage ties your application to a specific node, making your<br \/>\napplication harder to 
schedule. Applications that use local storage should<br \/>\nspecify a high priority so that lower-priority pods that don\u2019t require local<br \/>\nstorage can be preempted if necessary.<\/li>\n<li>If the node or local volume encounters a failure and becomes inaccessible, then<br \/>\nthe pod using that volume also becomes inaccessible. Manual intervention, external controllers,<br \/>\nor operators may be needed to recover from these situations.<\/li>\n<li>While most remote storage systems implement synchronous replication, most local<br \/>\ndisk offerings do not provide data durability guarantees, meaning that loss of the<br \/>\ndisk or node may result in loss of all the data on that disk.<\/li>\n<\/ul>\n<p>For these reasons, local persistent storage should only be considered for<br \/>\nworkloads that handle data replication and backup at the application layer, thus<br \/>\nmaking the applications resilient to node or data failures and unavailability<br \/>\ndespite the lack of such guarantees at the individual disk level.<\/p>\n<p>Examples of good workloads include software-defined storage systems and<br \/>\nreplicated databases. Other types of applications should continue to use highly<br \/>\navailable, remotely accessible, durable storage.<\/p>\n<h2 id=\"how-uber-uses-local-storage\">How Uber Uses Local Storage<\/h2>\n<p><a href=\"https:\/\/eng.uber.com\/m3\/\" target=\"_blank\">M3<\/a>, Uber\u2019s in-house metrics platform,<br \/>\npiloted Local Persistent Volumes at scale<br \/>\nin an effort to evaluate <a href=\"https:\/\/m3db.io\/\" target=\"_blank\">M3DB<\/a> \u2014<br \/>\nan open-source, distributed timeseries database<br \/>\ncreated by Uber. 
One of M3DB\u2019s notable features is its ability to shard its<br \/>\nmetrics into partitions, replicate them by a factor of three, and then evenly<br \/>\ndisperse the replicas across separate failure domains.<\/p>\n<p>Prior to the pilot with local persistent volumes, M3DB ran exclusively in<br \/>\nUber-managed environments. Over time, internal use cases arose that required the<br \/>\nability to run M3DB in environments with fewer dependencies. So the team began<br \/>\nto explore options. Since M3DB is an open-source project, we wanted to provide the<br \/>\ncommunity with a way to run it as easily as possible, with an open-source<br \/>\nstack, while meeting M3DB\u2019s requirements for high-throughput, low-latency<br \/>\nstorage and the ability to scale out.<\/p>\n<p>The Kubernetes Local Persistent Volume interface, with its high-performance,<br \/>\nlow-latency guarantees, quickly emerged as the perfect abstraction to build on<br \/>\ntop of. With Local Persistent Volumes, individual M3DB instances can comfortably<br \/>\nhandle up to 600k writes per second. This leaves plenty of headroom for spikes<br \/>\non clusters that typically process a few million metrics per second.<\/p>\n<p>Because M3DB also gracefully handles losing a single node or volume, the limited<br \/>\ndata durability guarantees of Local Persistent Volumes are not an issue. 
If a<br \/>\nnode fails, M3DB finds a suitable replacement and the new node begins streaming<br \/>\ndata from its two peers.<\/p>\n<p>Thanks to the Kubernetes scheduler\u2019s intelligent handling of volume topology,<br \/>\nM3DB is able to programmatically disperse its replicas evenly across multiple<br \/>\nlocal persistent volumes in all available cloud zones, or, in the case of<br \/>\non-prem clusters, across all available server racks.<\/p>\n<h2 id=\"uber-s-operational-experience\">Uber&rsquo;s Operational Experience<\/h2>\n<p>As mentioned above, while Local Persistent Volumes provide many benefits, they<br \/>\nalso require careful planning and consideration of constraints before<br \/>\ncommitting to them in production. When thinking about our local volume strategy<br \/>\nfor M3DB, there were a few things Uber had to consider.<\/p>\n<p>For one, we had to take into account the hardware profiles of the nodes in our<br \/>\nKubernetes cluster. For example, how many local disks would each node in the cluster<br \/>\nhave? How would they be partitioned?<\/p>\n<p>The local static provisioner<br \/>\n<a href=\"https:\/\/github.com\/kubernetes-sigs\/sig-storage-local-static-provisioner\/#best-practices\" target=\"_blank\">README<\/a><br \/>\nprovides guidance to help answer these<br \/>\nquestions. It\u2019s best to be able to dedicate a full disk to each local volume<br \/>\n(for IO isolation) and a full partition per volume (for capacity isolation).<br \/>\nThis was easier in our cloud environments, where we could mix and match local<br \/>\ndisks. 
However, if using local volumes on-prem, hardware constraints may be a<br \/>\nlimiting factor depending on the number of disks available and their<br \/>\ncharacteristics.<\/p>\n<p>When first testing local volumes, we wanted to have a thorough understanding of<br \/>\nthe effect<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/disruptions\/\" target=\"_blank\">disruptions<\/a><br \/>\n(voluntary and involuntary) would have on pods using<br \/>\nlocal storage, and so we began testing some failure scenarios. We found that<br \/>\nwhen a local volume becomes unavailable while the node remains available (such<br \/>\nas when performing maintenance on the disk), a pod using the local volume will<br \/>\nbe stuck in a ContainerCreating state until it can mount the volume. If a node<br \/>\nbecomes unavailable, for example if it is removed from the cluster or is<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/safely-drain-node\/\" target=\"_blank\">drained<\/a>,<br \/>\nthen pods using local volumes on that node are stuck in an Unknown or<br \/>\nPending state depending on whether the node was removed gracefully.<\/p>\n<p>Recovering pods from these interim states means deleting the PVC binding<br \/>\nthe pod to its local volume and then deleting the pod so that it can be<br \/>\nrescheduled (or waiting until the node and disk are available again). We took this<br \/>\ninto account when building our <a href=\"https:\/\/github.com\/m3db\/m3db-operator\" target=\"_blank\">operator<\/a><br \/>\nfor M3DB, which makes changes to the<br \/>\ncluster topology when a pod is rescheduled such that the new one gracefully<br \/>\nstreams data from the remaining two peers. 
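<\/p>\n<p>That manual recovery boils down to two commands. The PVC and pod names below are hypothetical, following the StatefulSet naming convention shown earlier:<\/p>\n<pre><code>$ kubectl delete pvc local-vol-local-test-1\n$ kubectl delete pod local-test-1\n<\/code><\/pre>\n<p>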
Eventually we plan to automate the<br \/>\ndeletion and rescheduling process entirely.<\/p>\n<p>Alerts on pod states can help call attention to stuck local volumes, and<br \/>\nworkload-specific controllers or operators can remediate them automatically.<br \/>\nBecause of these constraints, it\u2019s best to exclude nodes with local volumes from<br \/>\nautomatic upgrades or repairs, and in fact some cloud providers explicitly<br \/>\nmention this as a best practice.<\/p>\n<h2 id=\"portability-between-on-prem-and-cloud\">Portability Between On-Prem and Cloud<\/h2>\n<p>Local Volumes played a big role in Uber\u2019s decision to build orchestration for<br \/>\nM3DB using Kubernetes, in part because it is a storage abstraction that works<br \/>\nthe same across on-prem and cloud environments. Remote storage solutions have<br \/>\ndifferent characteristics across cloud providers, and some users may prefer not<br \/>\nto use networked storage at all in their own data centers. On the other hand,<br \/>\nlocal disks are relatively ubiquitous and provide more predictable performance<br \/>\ncharacteristics.<\/p>\n<p>By orchestrating M3DB using local disks in the cloud, where it was easier to get<br \/>\nup and running with Kubernetes, we gained confidence that we could still use our<br \/>\noperator to run M3DB in our on-prem environment without any modifications. As we<br \/>\ncontinue to work on how we\u2019d run Kubernetes on-prem, having solved such an<br \/>\nimportant pending question is a big relief.<\/p>\n<h2 id=\"what-s-next-for-local-persistent-volumes\">What&rsquo;s Next for Local Persistent Volumes?<\/h2>\n<p>As we\u2019ve seen with Uber\u2019s M3DB, local persistent volumes have successfully been<br \/>\nused in production environments. 
As adoption of local persistent volumes<br \/>\ncontinues to increase, SIG Storage continues to seek feedback on ways to<br \/>\nimprove the feature.<\/p>\n<p>One of the most frequent asks has been for a controller that can help with<br \/>\nrecovery from failed nodes or disks, which is currently a manual process (or<br \/>\nsomething that has to be built into an operator). SIG Storage is investigating<br \/>\ncreating a common controller that can be used by workloads with simple and<br \/>\nsimilar recovery processes.<\/p>\n<p>Another popular ask has been to support dynamic provisioning using LVM. This can<br \/>\nsimplify disk management and improve disk utilization. SIG Storage is<br \/>\nevaluating the performance tradeoffs for the viability of this feature.<\/p>\n<h2 id=\"getting-invovled\">Getting Involved<\/h2>\n<p>If you have feedback for this feature or are interested in getting involved with<br \/>\nthe design and development, join the <a href=\"https:\/\/github.com\/kubernetes\/community\/blob\/master\/sig-storage\/README.md\" target=\"_blank\">Kubernetes Storage<br \/>\nSpecial Interest Group<\/a><br \/>\n(SIG). We\u2019re rapidly growing and always welcome new contributors.<\/p>\n<p>Special thanks to all the contributors who helped bring this feature to GA,<br \/>\nincluding Chuqiang Li (lichuqiang), Dhiraj Hedge (dhirajh), Ian Chakeres<br \/>\n(ianchakeres), Jan \u0160afr\u00e1nek (jsafrane), Michelle Au (msau42), Saad Ali<br \/>\n(saad-ali), Yecheng Fu (cofyc) and Yuquan Ren (nickrenren).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Authors: Michelle Au (Google), Matt Schallert (Uber), Celina Ward (Uber) The Local Persistent Volumes feature has been promoted to GA in Kubernetes 1.14. It was first introduced as alpha in Kubernetes 1.7, and then beta in Kubernetes 1.10. 
The GA milestone indicates that Kubernetes users may depend on the feature and its API for production [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v21.9.1 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Blog: Kubernetes 1.14: Local Persistent Volumes GA - Howk IT-Dienstleistungen<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/howk.de\/?p=6321\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Blog: Kubernetes 1.14: Local Persistent Volumes GA - Howk IT-Dienstleistungen\" \/>\n<meta property=\"og:description\" content=\"Authors: Michelle Au (Google), Matt Schallert (Uber), Celina Ward (Uber) The Local Persistent Volumes feature has been promoted to GA in Kubernetes 1.14. It was first introduced as alpha in Kubernetes 1.7, and then beta in Kubernetes 1.10. The GA milestone indicates that Kubernetes users may depend on the feature and its API for production [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/howk.de\/?p=6321\" \/>\n<meta property=\"og:site_name\" content=\"Howk IT-Dienstleistungen\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/howk.de\" \/>\n<meta property=\"article:published_time\" content=\"2019-04-04T16:00:03+00:00\" \/>\n<meta name=\"author\" content=\"admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/howk.de\/?p=6321#article\",\"isPartOf\":{\"@id\":\"https:\/\/howk.de\/?p=6321\"},\"author\":{\"name\":\"admin\",\"@id\":\"https:\/\/howk.de\/#\/schema\/person\/b029bd02d4f35dce869ef54c81a100c5\"},\"headline\":\"Blog: Kubernetes 1.14: Local Persistent Volumes GA\",\"datePublished\":\"2019-04-04T16:00:03+00:00\",\"dateModified\":\"2019-04-04T16:00:03+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/howk.de\/?p=6321\"},\"wordCount\":1949,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/howk.de\/#organization\"},\"articleSection\":[\"Hi Tech\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/howk.de\/?p=6321#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/howk.de\/?p=6321\",\"url\":\"https:\/\/howk.de\/?p=6321\",\"name\":\"Blog: Kubernetes 1.14: Local Persistent Volumes GA - Howk IT-Dienstleistungen\",\"isPartOf\":{\"@id\":\"https:\/\/howk.de\/#website\"},\"datePublished\":\"2019-04-04T16:00:03+00:00\",\"dateModified\":\"2019-04-04T16:00:03+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/howk.de\/?p=6321#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/howk.de\/?p=6321\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/howk.de\/?p=6321#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/howk.de\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Blog: Kubernetes 1.14: Local Persistent Volumes GA\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/howk.de\/#website\",\"url\":\"https:\/\/howk.de\/\",\"name\":\"Howk IT-Dienstleistungen\",\"description\":\"Howk IT Services - Howk 
IT-Dienstleistungen\",\"publisher\":{\"@id\":\"https:\/\/howk.de\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/howk.de\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/howk.de\/#organization\",\"name\":\"HowK\",\"url\":\"https:\/\/howk.de\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/howk.de\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/howk.de\/w1\/wp-content\/uploads\/2013\/12\/howk-logo.png\",\"contentUrl\":\"https:\/\/howk.de\/w1\/wp-content\/uploads\/2013\/12\/howk-logo.png\",\"width\":170,\"height\":170,\"caption\":\"HowK\"},\"image\":{\"@id\":\"https:\/\/howk.de\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/howk.de\",\"http:\/\/de.linkedin.com\/in\/howkde\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/howk.de\/#\/schema\/person\/b029bd02d4f35dce869ef54c81a100c5\",\"name\":\"admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/howk.de\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/b5a20f4d07bca1b73f25cff58a1116c4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/b5a20f4d07bca1b73f25cff58a1116c4?s=96&d=mm&r=g\",\"caption\":\"admin\"},\"url\":\"https:\/\/howk.de\/?author=1\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Blog: Kubernetes 1.14: Local Persistent Volumes GA - Howk IT-Dienstleistungen","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/howk.de\/?p=6321","og_locale":"en_US","og_type":"article","og_title":"Blog: Kubernetes 1.14: Local Persistent Volumes GA - Howk IT-Dienstleistungen","og_description":"Authors: Michelle Au (Google), Matt Schallert (Uber), Celina Ward (Uber) The Local Persistent Volumes feature has been promoted to GA in Kubernetes 1.14. It was first introduced as alpha in Kubernetes 1.7, and then beta in Kubernetes 1.10. The GA milestone indicates that Kubernetes users may depend on the feature and its API for production [&hellip;]","og_url":"https:\/\/howk.de\/?p=6321","og_site_name":"Howk IT-Dienstleistungen","article_publisher":"https:\/\/www.facebook.com\/howk.de","article_published_time":"2019-04-04T16:00:03+00:00","author":"admin","twitter_card":"summary_large_image","twitter_misc":{"Written by":"admin","Est. 
reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/howk.de\/?p=6321#article","isPartOf":{"@id":"https:\/\/howk.de\/?p=6321"},"author":{"name":"admin","@id":"https:\/\/howk.de\/#\/schema\/person\/b029bd02d4f35dce869ef54c81a100c5"},"headline":"Blog: Kubernetes 1.14: Local Persistent Volumes GA","datePublished":"2019-04-04T16:00:03+00:00","dateModified":"2019-04-04T16:00:03+00:00","mainEntityOfPage":{"@id":"https:\/\/howk.de\/?p=6321"},"wordCount":1949,"commentCount":0,"publisher":{"@id":"https:\/\/howk.de\/#organization"},"articleSection":["Hi Tech"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/howk.de\/?p=6321#respond"]}]},{"@type":"WebPage","@id":"https:\/\/howk.de\/?p=6321","url":"https:\/\/howk.de\/?p=6321","name":"Blog: Kubernetes 1.14: Local Persistent Volumes GA - Howk IT-Dienstleistungen","isPartOf":{"@id":"https:\/\/howk.de\/#website"},"datePublished":"2019-04-04T16:00:03+00:00","dateModified":"2019-04-04T16:00:03+00:00","breadcrumb":{"@id":"https:\/\/howk.de\/?p=6321#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/howk.de\/?p=6321"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/howk.de\/?p=6321#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/howk.de\/"},{"@type":"ListItem","position":2,"name":"Blog: Kubernetes 1.14: Local Persistent Volumes GA"}]},{"@type":"WebSite","@id":"https:\/\/howk.de\/#website","url":"https:\/\/howk.de\/","name":"Howk IT-Dienstleistungen","description":"Howk IT Services - Howk IT-Dienstleistungen","publisher":{"@id":"https:\/\/howk.de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/howk.de\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/howk.de\/#organization","name":"HowK","url":"https:\/\/howk.de\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/howk.de\/#\/schema\/logo\/image\/","url":"https:\/\/howk.de\/w1\/wp-content\/uploads\/2013\/12\/howk-logo.png","contentUrl":"https:\/\/howk.de\/w1\/wp-content\/uploads\/2013\/12\/howk-logo.png","width":170,"height":170,"caption":"HowK"},"image":{"@id":"https:\/\/howk.de\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/howk.de","http:\/\/de.linkedin.com\/in\/howkde"]},{"@type":"Person","@id":"https:\/\/howk.de\/#\/schema\/person\/b029bd02d4f35dce869ef54c81a100c5","name":"admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/howk.de\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/b5a20f4d07bca1b73f25cff58a1116c4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/b5a20f4d07bca1b73f25cff58a1116c4?s=96&d=mm&r=g","caption":"admin"},"url":"https:\/\/howk.de\/?author=1"}]}},"_links":{"self":[{"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/posts\/6321"}],"collection":[{"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6321"}],"version-history":[{"count":0,"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/posts\/6321\/revisions"}],"wp:attachment":[{"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6321"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6321"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2
Fv2%2Ftags&post=6321"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}