A communications company with the world’s largest commercial satellite fleet wanted to capitalize on the explosive growth of the Internet of Things (IoT). Anticipating rising demand from connected cars, planes, and boats, its goal was to let third parties tap into its high-availability global network capacity through exposed, managed Application Programming Interfaces (APIs). IoT technologies depend on connectivity without boundaries, beyond the borders of countries or regions. In addition, the company sought a process for quickly provisioning new application services to customers.
However, the organization faced several critical challenges to providing ubiquitous IoT connectivity and application services:
- Disparate, Unconnected Systems: Pulling a single data set required multiple API calls across separate systems – a process as complex as it was unscalable.
- Data Provisioning Delays: The complex, multi-step process for acquiring data delayed data provisioning for customers, increased costs, and risked end-user dissatisfaction.
- No Ability for Continuous Updates: A direct dependency between internal systems and customer-facing managed APIs made it impossible to innovate quickly. Customers expect new updates, features, and API versions; without the ability to deliver them, customer churn was a risk.
- Infrastructure Dependence: With a single data center and on-premises virtual machines, the IT team could not scale capacity during sometimes unpredictable peak periods, yet still paid for costly, unused capacity.
As the pace of business increased, such challenges were no longer sustainable. To meet the demands of the IoT market and gain a competitive edge, the company needed scalability, reduced disruption, and fast data delivery. It had to modernize its infrastructure.
A new, modern API framework for quick data provisioning to customers was needed. The first step was to switch from virtual machines to containers. To manage this environment, IT selected Kubernetes container orchestration and Kublr. Kublr is a pluggable, enterprise-grade Kubernetes platform that automates away the complexity of running containerized applications at scale.
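In a Kubernetes environment like the one described, each containerized API service is declared in a manifest, and the platform keeps the desired number of replicas running. The following is a minimal, hypothetical sketch – the service name, image, and replica count are illustrative assumptions, not the company’s actual configuration:

```yaml
# Hypothetical Deployment manifest for one containerized API service.
# All names, image references, and numbers are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iot-data-api          # assumed service name
spec:
  replicas: 3                 # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: iot-data-api
  template:
    metadata:
      labels:
        app: iot-data-api
    spec:
      containers:
        - name: api
          image: registry.example.com/iot-data-api:1.0  # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:         # lets the scheduler pack nodes efficiently
              cpu: 250m
              memory: 256Mi
```

Because the manifest is declarative, scaling or upgrading the service is a matter of changing the replica count or image tag and reapplying it, rather than reprovisioning virtual machines.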
Having compared the pros and cons of available enterprise Kubernetes solutions, the IT, engineering, quality assurance, and infrastructure operations and management teams found that Kublr checked all the boxes as the most enterprise-ready solution. Furthermore, the company’s high security standards meant that managed services or container-as-a-service (CaaS) offerings were not an option; it needed a platform its own team could run.
Kublr was selected because it provides the critical enterprise components – security, high availability, backup, and disaster recovery – needed to ensure a seamless move to containerization. Kublr eliminates configuration hassles and the need to hire new skill sets or invest in costly professional services. It removes the complexity of Kubernetes management with automated and pre-configured cluster performance monitoring and alerting, event analysis, smart recommendations, troubleshooting, and preventative actions to rectify performance issues.
The same technology stack and deployments used for cluster logging and monitoring can be used on an application level. These normally complex tasks only require a few tweaks, reducing deployment time from months to weeks. That’s because the shared stack is enterprise-grade, built-in tools are ready-to-use and configuration within Kublr is simple.
The newly built API framework was deployed through Kublr and hosted in a Kubernetes cluster on Microsoft Azure. Today, it can pull a dataset quickly with a single API call, through one seamless interface that streamlines and abstracts the disparate systems behind it.
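The “one dataset, one API call” pattern can be illustrated with a small facade sketch. Everything here is hypothetical – the backend names, fields, and merge logic are invented for illustration – but it shows the idea of hiding several backend lookups behind a single entry point:

```python
# Hypothetical sketch of the single-call facade pattern.
# The backends and field names are invented; in the real framework
# these would be calls to separate internal systems.

def fetch_subscription(device_id: str) -> dict:
    # Stand-in for one legacy backend (e.g. a billing system).
    return {"device_id": device_id, "plan": "global-iot"}

def fetch_telemetry(device_id: str) -> dict:
    # Stand-in for another backend (e.g. a network-status system).
    return {"device_id": device_id, "link": "up", "latency_ms": 600}

def get_device_dataset(device_id: str) -> dict:
    """Single entry point: aggregates several backend lookups into one
    dataset, so the caller never sees the disparate systems behind it."""
    record: dict = {}
    record.update(fetch_subscription(device_id))
    record.update(fetch_telemetry(device_id))
    return record
```

A client now makes one call, e.g. `get_device_dataset("dev-42")`, instead of orchestrating several backend requests and merging the results itself.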
A containerized cloud environment checked several critical boxes. Shifting from on-premises infrastructure to the cloud provides a scalable, pay-as-you-go cost model rather than fixed costs. Containers also enable a more lightweight, optimized infrastructure that reduces server capacity and cuts costs.
With a solution that complements its existing business model, while delivering a modern, nimble, startup-like approach, Kublr has enabled IT to rapidly leap into the cloud-native world by running, managing, and monitoring their Kubernetes clusters on a fully production- and enterprise-ready platform.
Managed by Kublr, the new API framework reduced the multiple semi-automated steps for pulling data to one fully automated, single-step process for data access, readying the company to meet IoT demand.
Furthermore, unlike the old system, which lacked a process for quickly provisioning new services to customers, the new API framework – automatically scaled and managed by Kublr – makes delivering new services easy, reducing lead times from months to days.
From an operations perspective, Kublr automates all management and operations functions so that the ops team can focus on other business-critical work. Key benefits include:
- Faster data delivery
- Increased efficiency: all clusters can be viewed and managed from a single interface
- Simplified infrastructure
- Enterprise-grade security
- Cost and resource savings
- Reduced third-party managed services overhead
- Prometheus, Grafana, and the ELK stack with Kibana (pre-configured cluster management components provided out of the box by Kublr)