{"id":6051,"date":"2019-03-22T17:00:02","date_gmt":"2019-03-22T17:00:02","guid":{"rendered":"http:\/\/howk.de\/w1\/blog-kubernetes-end-to-end-testing-for-everyone\/"},"modified":"2019-03-22T17:00:02","modified_gmt":"2019-03-22T17:00:02","slug":"blog-kubernetes-end-to-end-testing-for-everyone","status":"publish","type":"post","link":"https:\/\/howk.de\/?p=6051","title":{"rendered":"Blog: Kubernetes End-to-end Testing for Everyone"},"content":{"rendered":"<p><strong>Author:<\/strong> Patrick Ohly (Intel)<\/p>\n<p>More and more components that used to be part of Kubernetes are now<br \/>\nbeing developed outside of Kubernetes. For example, storage drivers<br \/>\nused to be compiled into Kubernetes binaries, then were moved into<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/community\/blob\/master\/contributors\/devel\/sig-storage\/flexvolume.md\" target=\"_blank\">stand-alone Flexvolume<br \/>\nbinaries<\/a><br \/>\non the host, and now are delivered as <a href=\"https:\/\/github.com\/container-storage-interface\/spec\" target=\"_blank\">Container Storage Interface<br \/>\n(CSI) drivers<\/a><br \/>\nthat get deployed in pods inside the Kubernetes cluster itself.<\/p>\n<p>This poses a challenge for developers who work on such components: how<br \/>\ncan end-to-end (E2E) testing on a Kubernetes cluster be done for such<br \/>\nexternal components? The E2E framework that is used for testing<br \/>\nKubernetes itself has all the necessary functionality. However, trying<br \/>\nto use it outside of Kubernetes was difficult and only possible by<br \/>\ncarefully selecting the right versions of a large number of<br \/>\ndependencies. E2E testing has become a lot simpler in Kubernetes 1.13.<\/p>\n<p>This blog post summarizes the changes that went into Kubernetes<br \/>\n1.13. For CSI driver developers, it will cover the ongoing effort to<br \/>\nalso make the storage tests available for testing of third-party CSI<br \/>\ndrivers. 
How to use them will be shown based on two Intel CSI drivers:<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/intel\/oim\/\" target=\"_blank\">Open Infrastructure Manager (OIM)<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/intel\/pmem-csi\" target=\"_blank\">PMEM-CSI<\/a><\/li>\n<\/ul>\n<p>Testing those drivers was the main motivation behind most of these<br \/>\nenhancements.<\/p>\n<h2 id=\"e2e-overview\">E2E overview<\/h2>\n<p>E2E testing consists of several phases:<\/p>\n<ul>\n<li>Implementing a test suite. This is the main focus of this blog<br \/>\npost. The Kubernetes E2E framework is written in Go. It relies on<br \/>\n<a href=\"https:\/\/onsi.github.io\/ginkgo\/\" target=\"_blank\">Ginkgo<\/a> for managing tests and<br \/>\n<a href=\"http:\/\/onsi.github.io\/gomega\/\" target=\"_blank\">Gomega<\/a> for assertions. These tools<br \/>\nsupport \u201cbehavior driven development\u201d, which describes expected<br \/>\nbehavior in \u201cspecs\u201d. In this blog post, \u201ctest\u201d is used to reference<br \/>\nan individual <code>Ginkgo.It<\/code> spec. Tests interact with the Kubernetes<br \/>\ncluster using<br \/>\n<a href=\"https:\/\/godoc.org\/k8s.io\/client-go\/kubernetes\" target=\"_blank\">client-go<\/a>.<\/li>\n<li>Bringing up a test cluster. Tools like<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/test-infra\/blob\/master\/kubetest\/README.md\" target=\"_blank\">kubetest<\/a><br \/>\ncan help here.<\/li>\n<li>Running an E2E test suite against that cluster. Ginkgo test suites<br \/>\ncan be run with the <code>ginkgo<\/code> tool or as a normal Go test with <code>go<br \/>\ntest<\/code>. Without any parameters, a Kubernetes E2E test suite will<br \/>\nconnect to the default cluster based on environment variables like<br \/>\nKUBECONFIG, exactly like kubectl. 
Kubetest also knows how to run the<br \/>\nKubernetes E2E suite.<\/li>\n<\/ul>\n<h2 id=\"e2e-framework-enhancements-in-kubernetes-1-13\">E2E framework enhancements in Kubernetes 1.13<\/h2>\n<p>All of the following enhancements follow the same basic pattern: they<br \/>\nmake the E2E framework more useful and easier to use outside of<br \/>\nKubernetes, without changing the behavior of the original Kubernetes<br \/>\ne2e.test binary.<\/p>\n<h3 id=\"splitting-out-provider-support\">Splitting out provider support<\/h3>\n<p>The main reason why using the E2E framework from Kubernetes &lt;= 1.12<br \/>\nwas difficult was its dependency on provider-specific SDKs, which<br \/>\npulled in a large number of packages. Just getting it to compile was<br \/>\nnon-trivial.<\/p>\n<p>Many of these packages are only needed for certain tests. For example,<br \/>\ntesting the mounting of a pre-provisioned volume must first provision<br \/>\nsuch a volume the same way as an administrator would, by talking<br \/>\ndirectly to a specific storage backend via some non-Kubernetes API.<\/p>\n<p>There is an effort to <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/issues\/70194\" target=\"_blank\">remove cloud provider-specific<br \/>\ntests<\/a> from<br \/>\ncore Kubernetes. The approach taken in <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/68483\" target=\"_blank\">PR<br \/>\n#68483<\/a> can be<br \/>\nseen as an incremental step towards that goal: instead of ripping out<br \/>\nthe code immediately and breaking all tests that depend on it, all<br \/>\ncloud provider-specific code was moved into optional packages under<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/tree\/release-1.13\/test\/e2e\/framework\/providers\" target=\"_blank\">test\/e2e\/framework\/providers<\/a>. 
The<br \/>\nE2E framework then accesses it via <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/6c1e64b94a3e111199c934c39a0c25bc219ed5f9\/test\/e2e\/framework\/provider.go#L79-L99\" target=\"_blank\">an<br \/>\ninterface<\/a><br \/>\nthat gets implemented separately by each vendor package.<\/p>\n<p>The author of an E2E test suite decides which of these packages get<br \/>\nimported into the test suite. The vendor support is then activated via<br \/>\nthe <code>--provider<\/code> command line flag. The Kubernetes e2e.test binary in<br \/>\n1.13 and 1.14 still contains support for the same providers as in<br \/>\n1.12. It is also okay to include no packages, which means that only<br \/>\nthe generic providers will be available:<\/p>\n<ul>\n<li>\u201cskeleton\u201d: cluster is accessed via the Kubernetes API and nothing<br \/>\nelse<\/li>\n<li>\u201clocal\u201d: like \u201cskeleton\u201d, but in addition the scripts in<br \/>\nkubernetes\/kubernetes\/cluster can retrieve logs via ssh after a test<br \/>\nsuite is run<\/li>\n<\/ul>\n<h3 id=\"external-files\">External files<\/h3>\n<p>Tests may have to read additional files at runtime, like .yaml<br \/>\nmanifests. But the Kubernetes e2e.test binary is supposed to be usable<br \/>\nand entirely stand-alone because that simplifies shipping and running<br \/>\nit. The solution in the Kubernetes build system is to link all files<br \/>\nunder <code>test\/e2e\/testing-manifests<\/code> into the binary with<br \/>\n<a href=\"https:\/\/github.com\/jteeuwen\/go-bindata\" target=\"_blank\">go-bindata<\/a>. The<br \/>\nE2E framework used to have a hard dependency on the output of<br \/>\n<code>go-bindata<\/code>; now <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/69103\" target=\"_blank\">bindata support is<br \/>\noptional<\/a>. 
When<br \/>\naccessing a file via the <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/v1.13.0\/test\/e2e\/framework\/testfiles\/testfiles.go\" target=\"_blank\">testfiles<br \/>\npackage<\/a>,<br \/>\nfiles will be retrieved from different sources:<\/p>\n<ul>\n<li>relative to the directory specified with the <code>--repo-root<\/code> parameter<\/li>\n<li>zero or more bindata chunks<\/li>\n<\/ul>\n<h3 id=\"test-parameters\">Test parameters<\/h3>\n<p>The e2e.test binary takes additional parameters which control test<br \/>\nexecution. In 2016, an effort was started to replace all E2E command<br \/>\nline parameters with a Viper configuration file. But that effort<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/0ed33881dc4355495f623c6f22e7dd0b7632b7c0\/test\/e2e\/framework\/test_context.go#L318-L319\" target=\"_blank\">stalled<\/a>, which left developers without clear guidance on how to handle<br \/>\ntest-specific parameters.<\/p>\n<p>The approach in v1.12 was to add all flags to the central<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/v1.12.0\/test\/e2e\/framework\/test_context.go\" target=\"_blank\">test\/e2e\/framework\/test_context.go<\/a>,<br \/>\nwhich does not work for tests developed independently from the<br \/>\nframework. Since <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/69105\" target=\"_blank\">PR<br \/>\n#69105<\/a> the<br \/>\nrecommendation has been for each test to use the normal <code>flag<\/code> package to<br \/>\ndefine its parameters in its own source code. Flag names must be<br \/>\nhierarchical with dots separating different levels, for example<br \/>\n<code>my.test.parameter<\/code>, and must be unique. Uniqueness is enforced by the<br \/>\n<code>flag<\/code> package, which panics when registering a flag a second time. 
The<br \/>\nnew<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/v1.13.0\/test\/e2e\/framework\/config\/config.go\" target=\"_blank\">config<\/a><br \/>\npackage simplifies the definition of multiple options, which are<br \/>\nstored in a single struct.<\/p>\n<p>To summarize, this is how parameters are handled now:<\/p>\n<ul>\n<li>The init code in test packages defines tests and parameters. The<br \/>\nactual parameter <em>values<\/em> are not available yet, so test definitions<br \/>\ncannot use them.<\/li>\n<li>The init code of the test suite parses parameters and (optionally)<br \/>\nthe configuration file.<\/li>\n<li>The tests run and can now use the parameter values.<\/li>\n<\/ul>\n<p>However, recently it <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/69105#discussion_r267960062\" target=\"_blank\">was pointed<br \/>\nout<\/a><br \/>\nthat it is desirable, and possible, to not expose test settings as<br \/>\ncommand line flags and to set them only via a configuration file. There<br \/>\nis an <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/issues\/75590\" target=\"_blank\">open bug<\/a> and a<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/75593\" target=\"_blank\">pending PR<\/a><br \/>\nabout this.<\/p>\n<p>Viper support has been enhanced. Like the provider support, it is<br \/>\ncompletely optional. It gets pulled into an e2e.test binary by<br \/>\nimporting the <code>viperconfig<\/code> package and <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/ddf47ac13c1a9483ea035a79cd7c10005ff21a6d\/test\/e2e\/e2e_test.go#L49-L57\" target=\"_blank\">calling<br \/>\nit<\/a><br \/>\nafter parsing the normal command line flags. This has been implemented<br \/>\nso that all variables which can be set via command line flags are also<br \/>\nset when the flag appears in a Viper config file. 
For example, the<br \/>\nKubernetes v1.13 <code>e2e.test<\/code> binary accepts<br \/>\n<code>--viper-config=\/tmp\/my-config.yaml<\/code> and that file will set<br \/>\n<code>my.test.parameter<\/code> to <code>value<\/code> when it has this content:<\/p>\n<pre><code>my:\n  test:\n    parameter: value\n<\/code><\/pre>\n<p>In older Kubernetes releases, that option could only load a file from<br \/>\nthe current directory, the suffix had to be left out, and only a few<br \/>\nparameters could actually be set this way. Beware that one limitation<br \/>\nof Viper still exists: it works by matching config file entries<br \/>\nagainst known flags, without warning about unknown config file entries<br \/>\nand thus leaving typos undetected. A <a href=\"https:\/\/github.com\/kubernetes\/kubeadm\/issues\/1040\" target=\"_blank\">better config file<br \/>\nparser<\/a> for<br \/>\nKubernetes is still work in progress.<\/p>\n<h3 id=\"creating-items-from-yaml-manifests\">Creating items from .yaml manifests<\/h3>\n<p>In Kubernetes 1.12, there was some support for loading individual<br \/>\nitems from a .yaml file, but then creating that item had to be done by<br \/>\nhand-written code. Now the framework has <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/v1.13.0\/test\/e2e\/framework\/create.go\" target=\"_blank\">new<br \/>\nmethods<\/a><br \/>\nfor loading a .yaml file that has multiple items, patching those items<br \/>\n(for example, setting the namespace created for the current test), and<br \/>\ncreating them. This is currently <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/ddf47ac13c1a9483ea035a79cd7c10005ff21a6d\/test\/e2e\/storage\/drivers\/csi.go#L192-L209\" target=\"_blank\">used to deploy CSI<br \/>\ndrivers<\/a> anew for each test from exactly the same .yaml files that are also<br \/>\nused for deployment via kubectl. 
If the CSI driver supports running<br \/>\nunder different names, then tests are completely independent and can<br \/>\nrun in parallel.<\/p>\n<p>However, redeploying a driver slows down test execution and it does<br \/>\nnot cover concurrent operations against the driver. A more realistic<br \/>\ntest scenario is to deploy a driver once when bringing up the test<br \/>\ncluster, then run all tests against that deployment. Eventually the<br \/>\nKubernetes E2E testing will move to that model, once it is clearer how<br \/>\ntest cluster bringup can be extended such that it also includes<br \/>\ninstalling additional entities like CSI drivers.<\/p>\n<h2 id=\"upcoming-enhancements-in-kubernetes-1-14\">Upcoming enhancements in Kubernetes 1.14<\/h2>\n<h3 id=\"reusing-storage-tests\">Reusing storage tests<\/h3>\n<p>Being able to use the framework outside of Kubernetes enables building<br \/>\na custom test suite. But a test suite without tests is still<br \/>\nuseless. Several of the existing tests, in particular for storage, can<br \/>\nalso be applied to out-of-tree components. Thanks to the work done by<br \/>\nMasaki Kimura, <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/tree\/v1.13.0\/test\/e2e\/storage\/testsuites\" target=\"_blank\">storage<br \/>\ntests<\/a><br \/>\nin Kubernetes 1.13 are defined such that they can be instantiated<br \/>\nmultiple times for different drivers.<\/p>\n<p>But history has a habit of repeating itself. As with providers, the<br \/>\npackage defining these tests also pulled in driver definitions for all<br \/>\nin-tree storage backends, which in turn pulled in more additional<br \/>\npackages than were needed. 
This has been<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/70862\" target=\"_blank\">fixed<\/a> for the<br \/>\nupcoming Kubernetes 1.14.<\/p>\n<h3 id=\"skipping-unsupported-tests\">Skipping unsupported tests<\/h3>\n<p>Some of the storage tests depend on features of the cluster (like<br \/>\nrunning on a host that supports XFS) or of the driver (like supporting<br \/>\nblock volumes). These conditions are checked while the test runs,<br \/>\nleading to skipped tests when they are not satisfied. The good thing<br \/>\nis that this records an explanation why the test did not run.<\/p>\n<p>Starting a test is slow, in particular when it must first deploy the<br \/>\nCSI driver, but also in other scenarios. Creating the namespace for a<br \/>\ntest has been measured at 5 seconds on a fast cluster, and it produces<br \/>\na lot of noisy test output. It would have been possible to address<br \/>\nthat by <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/70992\" target=\"_blank\">skipping the definition of unsupported<br \/>\ntests<\/a>, but then<br \/>\nreporting why a test isn\u2019t even part of the test suite becomes<br \/>\ntricky. 
This approach has been dropped in favor of reorganizing the<br \/>\nstorage test suite such that it <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/72434\" target=\"_blank\">first checks<br \/>\nconditions<\/a><br \/>\nbefore doing the more expensive test setup steps.<\/p>\n<h3 id=\"more-readable-test-definitions\">More readable test definitions<\/h3>\n<p>The same PR also rewrites the tests to operate like conventional<br \/>\nGinkgo tests, with test cases and their local variables in <a href=\"https:\/\/github.com\/pohly\/kubernetes\/blob\/ec3655a1d40ced6b1873e627b736aae1cf242477\/test\/e2e\/storage\/testsuites\/provisioning.go#L82\" target=\"_blank\">a single<br \/>\nfunction<\/a>.<\/p>\n<h3 id=\"testing-external-drivers\">Testing external drivers<\/h3>\n<p>Building a custom E2E test suite is still quite a bit of work. The<br \/>\ne2e.test binary that will get distributed in the <a href=\"https:\/\/dl.k8s.io\/v1.14.0\/kubernetes-test.tar.gz\" target=\"_blank\">Kubernetes 1.14 test<br \/>\narchive<\/a> will have<br \/>\nthe <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/pull\/72836\" target=\"_blank\">ability to<br \/>\ntest<\/a> already<br \/>\ninstalled storage drivers without rebuilding the test suite. See this<br \/>\n<a href=\"https:\/\/github.com\/pohly\/kubernetes\/blob\/6644db9914379a4a7b3d3487b41b2010f226e4dc\/test\/e2e\/storage\/external\/README.md\" target=\"_blank\">README<\/a><br \/>\nfor further instructions.<\/p>\n<h2 id=\"e2e-test-suite-howto\">E2E test suite HOWTO<\/h2>\n<h3 id=\"test-suite-initialization\">Test suite initialization<\/h3>\n<p>The first step is to set up the necessary boilerplate code that<br \/>\ndefines the test suite. <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/tree\/v1.13.0\/test\/e2e\" target=\"_blank\">In Kubernetes<br \/>\nE2E<\/a>,<br \/>\nthis is done in the <code>e2e.go<\/code> and <code>e2e_test.go<\/code> files. It could also be<br \/>\ndone in a single <code>e2e_test.go<\/code> file. 
Kubernetes imports all of the<br \/>\nvarious providers, in-tree tests, Viper configuration support, and<br \/>\nbindata file lookup in <code>e2e_test.go<\/code>. <code>e2e.go<\/code> controls the actual<br \/>\nexecution, including some cluster preparations and metrics collection.<\/p>\n<p>A simpler starting point is the set of <code>e2e_[test].go<\/code> files <a href=\"https:\/\/github.com\/intel\/pmem-csi\/tree\/586ae281ac2810cb4da6f1e160cf165c7daf0d80\/test\/e2e\" target=\"_blank\">from<br \/>\nPMEM-CSI<\/a>. That<br \/>\nsuite uses no providers, no Viper, and no bindata, and imports just the<br \/>\nstorage tests.<\/p>\n<p>Like PMEM-CSI, OIM drops all of the extra features, but is a bit more<br \/>\ncomplex because it integrates a custom cluster startup directly into<br \/>\nthe <a href=\"https:\/\/github.com\/intel\/pmem-csi\/blob\/a7b0d66b59771bf615e07fcd3d4f0ba08cfdf90f\/test\/e2e\/e2e.go\" target=\"_blank\">test<br \/>\nsuite<\/a>,<br \/>\nwhich was useful in this case because some additional components have<br \/>\nto run on the host side. By running them directly in the E2E binary,<br \/>\ninteractive debugging with <code>dlv<\/code> becomes easier.<\/p>\n<p>Both CSI drivers follow the Kubernetes example and use the <code>test\/e2e<\/code><br \/>\ndirectory for their test suites, but any other directory and other<br \/>\nfile names would also work.<\/p>\n<h3 id=\"adding-e2e-storage-tests\">Adding E2E storage tests<\/h3>\n<p>Tests are defined by packages that get imported into a test suite. The<br \/>\nonly thing specific to E2E tests is that they instantiate a<br \/>\n<code>framework.Framework<\/code> pointer (usually called <code>f<\/code>) with<br \/>\n<code>framework.NewDefaultFramework<\/code>. This variable gets initialized anew<br \/>\nin a <code>BeforeEach<\/code> for each test and freed in an <code>AfterEach<\/code>. 
It has an<br \/>\n<code>f.ClientSet<\/code> and <code>f.Namespace<\/code> at runtime (and only at runtime!)<br \/>\nwhich can be used by a test.<\/p>\n<p>The <a href=\"https:\/\/github.com\/intel\/pmem-csi\/blob\/586ae281ac2810cb4da6f1e160cf165c7daf0d80\/storage\/csi_volumes.go#L51\" target=\"_blank\">PMEM-CSI storage<br \/>\ntest<\/a><br \/>\nimports the Kubernetes storage test suite and sets up one instance of<br \/>\nthe provisioning tests for a PMEM-CSI driver which must already be<br \/>\ninstalled in the test cluster. The storage test suite changes the<br \/>\nstorage class to run tests with different filesystem types. Because of<br \/>\nthis requirement, the storage class is created from a .yaml file.<\/p>\n<p>Explaining all the various utility methods available in the framework<br \/>\nis out of scope for this blog post. Reading existing tests and the<br \/>\nsource code of the framework is a good way to get started.<\/p>\n<h3 id=\"vendoring\">Vendoring<\/h3>\n<p>Vendoring Kubernetes code is still not trivial, even after eliminating<br \/>\nmany of the unnecessary dependencies. <code>k8s.io\/kubernetes<\/code> is not meant<br \/>\nto be included in other projects and does not define its dependencies<br \/>\nin a way that is understood by tools like <code>dep<\/code>. The other <code>k8s.io<\/code><br \/>\npackages are meant to be included, but <a href=\"\/\/github.com\/kubernetes\/kubernetes\/issues\/72638\" target=\"_blank\">don\u2019t follow semantic<br \/>\nversioning<br \/>\nyet<\/a> or don\u2019t<br \/>\ntag any releases (<code>k8s.io\/kube-openapi<\/code>, <code>k8s.io\/utils<\/code>).<\/p>\n<p>PMEM-CSI uses <a href=\"https:\/\/golang.github.io\/dep\/\" target=\"_blank\">dep<\/a>. Its<br \/>\n<a href=\"https:\/\/github.com\/intel\/pmem-csi\/blob\/0ad8251c064b1010c91e7fc1dd423b95d5594bba\/Gopkg.toml\" target=\"_blank\">Gopkg.toml<\/a><br \/>\nfile is a good starting point. 
It enables pruning (not enabled in dep<br \/>\nby default) and locks certain projects onto versions that are<br \/>\ncompatible with the Kubernetes version that is used. When <code>dep<\/code><br \/>\ndoesn\u2019t pick a compatible version, checking Kubernetes\u2019<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/Godeps\/Godeps.json\" target=\"_blank\">Godeps.json<\/a><br \/>\nhelps to determine which revision might be the right one.<\/p>\n<h3 id=\"compiling-and-running-the-test-suite\">Compiling and running the test suite<\/h3>\n<p><code>go test .\/test\/e2e -args -help<\/code> is the fastest way to test that the<br \/>\ntest suite compiles.<\/p>\n<p>Once it does compile and a cluster has been set up, the command <code>go<br \/>\ntest -timeout=0 -v .\/test\/e2e -ginkgo.v<\/code> runs all tests. In order to<br \/>\nrun tests in parallel, use the <code>ginkgo -p .\/test\/e2e<\/code> command instead.<\/p>\n<h2 id=\"getting-involved\">Getting involved<\/h2>\n<p>The Kubernetes E2E framework is owned by the testing-commons<br \/>\nsub-project in<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/community\/tree\/master\/sig-testing\" target=\"_blank\">SIG-testing<\/a>. 
See<br \/>\nthat page for contact information.<\/p>\n<p>There are various tasks that could be worked on, including but not<br \/>\nlimited to:<\/p>\n<ul>\n<li>Moving test\/e2e\/framework into a staging repo and restructuring it<br \/>\nso that it is more modular<br \/>\n(<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/issues\/74352\" target=\"_blank\">#74352<\/a>).<\/li>\n<li>Simplifying <code>e2e.go<\/code> by moving more of its code into<br \/>\n<code>test\/e2e\/framework<\/code><br \/>\n(<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/issues\/74353\" target=\"_blank\">#74353<\/a>).<\/li>\n<li>Removing provider-specific code from the Kubernetes E2E test suite<br \/>\n(<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/issues\/70194\" target=\"_blank\">#70194<\/a>).<\/li>\n<\/ul>\n<p>Special thanks to the reviewers of this article:<\/p>\n<ul>\n<li>Olev Kartau (<a href=\"https:\/\/github.com\/okartau\" target=\"_blank\">https:\/\/github.com\/okartau<\/a>)<\/li>\n<li>Mary Camp (<a href=\"https:\/\/github.com\/MCamp859\" target=\"_blank\">https:\/\/github.com\/MCamp859<\/a>)<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Author: Patrick Ohly (Intel) More and more components that used to be part of Kubernetes are now being developed outside of Kubernetes. 
For example, storage drivers used to be compiled into Kubernetes binaries, then were moved into stand-alone Flexvolume binaries on the host, and now are delivered as Container Storage Interface (CSI) drivers that get [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13],"tags":[],"_links":{"self":[{"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/posts\/6051"}],"collection":[{"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6051"}],"version-history":[{"count":0,"href":"https:\/\/howk.de\/index.php?rest_route=\/wp\/v2\/posts\/6051\/revisions"}],"wp:attachment":[{"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6051"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6051"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/howk.de\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6051"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}