Autonomy, a word that holds so much promise yet hides so much complexity behind the scenes.
From an operational and cloud-native standpoint, companies like Tesla, Intel (Mobileye), and Waymo generally feel out of reach, in a league of their own.
Now, my party trick for bridging any gap in knowledge and experience, in any vertical, is obviously Open Source. After several months with k3s, I feel a little more confident about the path an organization should take to reach autonomous edge computing (gaining autonomous k8s cluster operation and auto-scaling in the process).
Quite a while ago, when the sun was shining and you could actually see people’s smiles, I was approached about drones applying Machine Learning (ML) and Deep Learning (DL) for autonomous operations.
One could use k3s to run cloud-native ML operations on GPU-enabled edge devices, using boards like the Nvidia Jetson (ARM) or the up-board (x86).
Think about harvester drones, like the awesome fruit-picking drones by Tevel.
Or a whole fleet of delivery drones, disrupting shipping and on-demand delivery of packages, food, supplies, and medicine to remote areas.
These drones need to “understand”, “consider” and “relate” to the world and environment around them if they want to achieve their mission and successfully deliver their goods.

The idea of using k3s on these GPU-enabled boards excited me because it meant changing the way we think about developing software for IoT.
Typically, IoT development involves creating an installable binary, precompiled for the architecture we are deploying to, and then flashing the OS or deploying the binary onto the provisioned operating system.
Using k3s (and hence Kubernetes) on the edge device lets me treat it like any other Kubernetes cluster and apply the same CI/CD tools and practices to it.
This way, we can use the same tools (GitLab, Harbor, Rancher) and practices (CI/CD, container orchestration, DevSecOps) that we use in the data center, and achieve the same development velocity when developing on these devices.
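To make that concrete, here is a minimal sketch of what “treating the edge device like any other cluster” can look like: the same kind of Deployment manifest we would apply to a data-center cluster, applied unchanged to a k3s cluster on an edge board. The names, image, and registry below are hypothetical placeholders, and the GPU limit assumes the Nvidia Kubernetes device plugin is installed on the node:

```yaml
# Hypothetical Deployment for an inference workload on an edge k3s cluster.
# Applied with the same `kubectl apply -f deployment.yaml` we would use
# against any data-center cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-inference            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-inference
  template:
    metadata:
      labels:
        app: drone-inference
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64  # schedule only on the ARM edge nodes
      containers:
        - name: inference
          image: registry.example.com/drone/inference:latest  # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1    # assumes the Nvidia device plugin is running
```

If the device plugin is not installed on the Jetson, drop the `nvidia.com/gpu` limit; the manifest otherwise schedules like any ordinary Deployment.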
This also lets us apply Machine Learning and Deep Learning practices in IoT and edge scenarios (disclaimer: I’m an ML/DL newb).
This question of k3s on edge devices sent me down a rabbit hole and injected some interest into my quarantined days and nights over these past months.
One of the most important things to look at when building a technological platform is owning the architecture. I’ve seen a lot of clients get stuck with proprietary solutions because those solutions were rooted too deep in the infrastructure and architecture. A vendor lock this tight will typically create expensive setup and maintenance costs and will ultimately affect the bottom line.
Proprietary solutions and products are probably the cheaper and faster way to set up a system; however, they should augment your existing architecture, not replace it altogether.
With Open-Source technology and an Open-Source architecture mindset, you can set up your own architecture and make sure that you stay in control of the technological choices your startup will need to make as the company evolves.
Rancher described the potential and power of Kubernetes everywhere here:
Since almost the beginning of programming, the idea of write-once and deploy everywhere, on all platforms, has been an unreachable ideal to minimize development costs for cross-platform applications, drive UI consistency and reduce the security surface area. In programming, the cross-platform languages Java and Python have topped developer utilization charts for decades. Kubernetes provides the next step in that evolution, providing a consistent platform that can be used for development in the cloud, on prem and in edge devices, allowing many modern application languages to be used. Used properly, Kubernetes can simplify and speed up development to get value to customers faster and where they need it. The immense flexibility of Kubernetes is almost overwhelming, and the path to success is mined with craters of failure. In this blog, I will outline an effective approach to the myriad of choices available in the Kubernetes ecosystem to realize the vision of simplified application development and deployment.
Basically, in this and coming posts, we will explore how to reap the benefits of this technology stack, the development tools, and the ML/DL-related practices, and apply them to these smaller form factors.
One of the issues we need to address pretty early on is the existence of multiple architectures when speaking about edge devices and k3s.
In the data center, x86 is the dominant architecture and will account for 99% of anything you work with, but on edge devices ARM is the prominent architecture, with Raspberry Pis and Nvidia Jetsons leading the pack, and boards like the up-board carving out space for x86 on the edge as well.
What this means is that our CI/CD process needs to be versatile enough to build, store, and deploy artifacts across multiple architectures, and for a CI/CD architect there’s a lot to be aware of here.
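As a sketch of what that versatility can look like, here is a hedged example of a GitLab CI job that builds and pushes a single multi-arch image for both x86 and ARM using Docker Buildx with QEMU emulation. The registry URL and image name are placeholders, and the job assumes a Docker-in-Docker capable runner:

```yaml
# Hypothetical .gitlab-ci.yml job: one build producing a multi-arch manifest.
build-multiarch:
  image: docker:24
  services:
    - docker:24-dind
  script:
    # Register QEMU binfmt handlers so an amd64 runner can emulate arm64 builds
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    # Create and select a buildx builder instance
    - docker buildx create --use
    # Build for both architectures and push a single multi-arch image
    - docker buildx build
        --platform linux/amd64,linux/arm64
        -t harbor.example.com/drones/inference:latest
        --push .
```

The same image tag then resolves to the right architecture whether it is pulled by an x86 data-center node or an ARM edge board; the container runtime picks the matching manifest automatically.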
In the next several blog posts, I’ll dive into this journey towards the deep edge using the tools we love to use for the data center — mainly Rancher, Harbor, GitLab CI, Ansible, and k3s.
Click on the following links for a seamless, secure, and supported deployment of the above-mentioned apps on the cloud marketplaces, provided by Hossted:
For Rancher, click here.
For Harbor, click here.
For GitLab, click here.
The original article was featured on Linnovate’s blog.