Introducing the Technologies Set to Redefine Cloud Computing

For many years, enterprise cloud computing has been a careful balance — and sometimes an epic battle — between what's possible and what's practical on the ground. This dichotomy has caused a lot of confusion, which in turn can hold back development. Here is an introduction to the technologies set to redefine cloud computing.


Towards the end of the last decade, there was a sharpening of focus from both business people and cloud service providers alike. Both parties understood they needed one another, but neither side could determine just how that relationship should look.

In 2020, a clearer picture has emerged of what cloud computing will look like in the coming decade.

Understanding these patterns will be the key to success for service and product vendors when planning their business strategies.

A New Generation of Containers


Business owners are beginning to understand the cost-reduction and performance advantages of containerization in the cloud.

Just two years ago, barely half of the owners interviewed in Portworx and Aqua Security's annual Container Adoption Study were looking at adopting containerized computing. As of 2019, 87% of respondents in the study were planning to use containers.

One thing that has been holding back enterprise adoption is the fear of security breaches and the complexity of isolating containers using Linux control groups (cgroups), mandatory access controls, and the like.

To win the trust of enterprises, a new breed of containers has been gaining traction over the last few years. If you haven't already, you're going to start hearing a lot more about Kata Containers over the coming months and years.

Managed by the OpenStack Foundation, the Kata Containers project began back in 2017. The project's overarching aim is to blend the advantages of containerization with those of virtualization, particularly workload isolation.

The Kata Containers project brings together the high performance of Intel's Clear Containers with the flexibility of Hyper's runV platform (runV is a platform-agnostic runtime based on super-lightweight VMs).

With input from Google and Microsoft (among others), Kata Containers is clearly positioning itself as the containerization technology that may drive widespread enterprise adoption.

Kata Containers is designed to be compatible with all major network architectures and hypervisors, enabling workloads to run seamlessly in multi-cloud environments.
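In practice, Kata slots in as an alternative OCI runtime, so each container gets its own lightweight VM while the familiar tooling stays the same. As a minimal sketch — assuming a host with KVM support, Docker, and the Kata packages installed (paths and package names vary by distribution) — the runtime can be registered with Docker and used in place of the default runc:

```shell
# Verify the host can run Kata's lightweight VMs (KVM, CPU virtualization extensions)
kata-runtime check

# Register Kata as an additional runtime in /etc/docker/daemon.json, e.g.:
# { "runtimes": { "kata-runtime": { "path": "/usr/bin/kata-runtime" } } }
sudo systemctl restart docker

# Launch a container inside its own isolated, lightweight VM
docker run --rm --runtime=kata-runtime busybox uname -r
```

If the last command reports a different kernel version than the host, the container is indeed running inside its own guest VM rather than sharing the host kernel — the isolation property that conventional cgroup-based containers lack.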

Improving Microservices at Scale

Twitter and Netflix have already proven that microservice architectures work well at scale, but there remains a lot of complexity behind the scenes. Communication between modular components can be problematic, leading to limited visibility and an ongoing challenge to maintain security and quality of service (QoS).

To solve this challenge, IBM, Google, and Lyft put their collective heads together. The outcome was Istio.

Istio is an open-source 'service mesh' designed to provide a common environment for connecting, securing, monitoring, and scaling distributed microservices. A key benefit of Istio is that it works across both hybrid and multi-cloud environments without any change to application code.

In terms of security, Istio creates a separate, secure communications channel between microservices and end users (and between the microservices themselves). In terms of performance monitoring and troubleshooting, Istio provides an intuitive dashboard and a system-wide view of the entire distributed environment.

This system-wide view enables operators to see not only how individual microservices are performing, but also how they are affecting each other. Problem areas can therefore be pinpointed and remediated quickly.
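The "no change to application code" claim rests on Istio injecting a sidecar proxy next to each workload. As an illustrative sketch — assuming a Kubernetes cluster with Istio already installed — enabling the mesh and enforcing mutual TLS between services takes only a label and a small policy resource:

```shell
# Enable automatic sidecar injection for the default namespace; new pods
# get an Envoy proxy that transparently handles routing, telemetry, and mTLS
kubectl label namespace default istio-injection=enabled

# Require encrypted, mutually authenticated traffic between services
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT
EOF
```

Note that the applications themselves are untouched: the proxies establish the secure channel on their behalf, which is exactly the property that makes the mesh portable across hybrid and multi-cloud deployments.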

Istio is likely to be welcomed with open arms by both developers and operators working with microservice architectures. By simplifying security and troubleshooting while removing roadblocks to scaling, it frees developers to create new applications. As a result, the microservice enterprise model will become more attractive than ever.

The Race to Own the Hybrid Cloud Space

The developments detailed above are aimed at a hybrid and multi-cloud future. Any dreams the major public cloud providers may have had of a public cloud-based 'as-a-service' monopoly have all but evaporated. Above all else, safety and security will be the deciding factors.

A recent Red Hat survey all but confirmed this new reality, revealing that only 4% of businesses see cloud-native as the best path forward. In contrast, 31% of respondents favored hybrid cloud deployments.

Predictably, the likes of Amazon, Microsoft, and Google have reacted by rolling out managed hybrid cloud services. These are likely to gain traction as they continue to blur the boundaries between on-premises and cloud computing.

Microsoft has a clear head start in this area thanks to its well-developed Azure Stack, which is one reason Azure has grown so quickly despite AWS's dominant share of the public cloud market. Azure Stack works with a range of partner vendors such as Dell EMC, Lenovo, and Cisco, but it uses the same pricing model as Microsoft's public cloud.

Amazon's most recent response came via a partnership with private cloud specialist VMware to launch AWS Outposts. Outposts are marketed as a hybrid cloud solution for companies needing low-latency performance at the cloud's edge. They consist of on-premises, single-vendor hardware deployments that are installed, configured, and managed by Amazon technicians. These are then connected, ideally via AWS Direct Connect, to a parent AWS Region.

Google's approach, in its own distinctive style, is slightly different. But as the company claims, its solution is the one that truly solves the multi-cloud challenge.

Solving for the Multi-Cloud Challenge

While Microsoft and Amazon are clearly interested in expanding both their cloud environments and their service offerings to meet clients' needs, Google is positioning itself as the company that can truly free businesses to operate across any combination of private and public clouds. And, as usual, Google has an ace up its sleeve: Kubernetes.

Google's hybrid and multi-cloud solution, Anthos, predictably runs on Google Kubernetes Engine (GKE), but it also includes an on-premises platform (GKE On-Prem), which runs on vSphere. Also included are Istio's service mesh technology (described above), a configuration management platform for enforcing Kubernetes policies, and Stackdriver for monitoring.

With AWS and Azure both supporting Kubernetes, this gives Anthos users the ability to work with either or both public clouds in tandem with their own private clouds – i.e. a genuine, honest-to-goodness hybrid cloud.
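The mechanism behind this is cluster registration: clusters running anywhere can be attached to a central Anthos "hub" and managed side by side. As a rough sketch — the cluster name `my-cluster` and zone are hypothetical, and the exact `gcloud` commands and flags vary by CLI version — registering a GKE cluster looked roughly like this:

```shell
# Fetch credentials for the GKE cluster that will join the Anthos environment
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Register the cluster with the hub so it can be observed and configured
# centrally, alongside on-prem and other-cloud clusters registered the same way
gcloud container hub memberships register my-cluster \
  --gke-cluster us-central1-a/my-cluster \
  --enable-workload-identity
```

Non-GKE clusters (on vSphere, AWS, or Azure) register through the same membership model, which is what lets a single control plane span the whole estate.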

Of course, Google now offers a cloud direct connect (Cloud Interconnect) to ensure high-speed, secure connectivity between on-premises networks and GCP.

But it doesn't stop there. Google has also released Anthos Migrate, a free P2K (physical-to-Kubernetes) migration tool built on Velostrata technology. Anthos Migrate is designed to let GCP users easily modernize existing applications or, perhaps more interestingly, migrate VMs over from other cloud services.

The Ultimate Machine Learning Hotbed

Cloud computing not only allows businesses to provide cheaper, faster, and more scalable services — it also changes the nature and scope of what companies can actually achieve. As cloud technologies become more widespread and ever easier to use, workloads predictably become more ambitious.

Speaking of ambition, many businesses have put artificial intelligence (both creating it and benefiting from it) at the top of their wish lists.

From diagnosing illnesses and pinpointing Earth-like planets to autonomous cars and language translation, machine learning's ability to outperform humans on specific tasks will continue to grow over the coming years.

That said, if you ask Google AI lead Jeff Dean, the current practice of starting from scratch on every project needs to change yesterday. Dean envisions replacing today's atomic, single-purpose ML models with one multi-functional model. This model would be inactive the majority of the time but would build upon previous relevant learning whenever called upon to carry out a new task.

As Dean explained in a recent keynote, this would more closely resemble adult human learning, rather than today's models, which he compares to the lengthy, inefficient process of infant learning.

There are sure to be plenty of challenges on the road ahead, but as the cloud continues to expand and attract more companies, the number of developers rising to meet those challenges will grow in kind.

Still, no one will really know what that future looks like until it's actually here. Just be ready to be awed, excited, and maybe even a little terrified by the sheer scope and scale of what can (and will) be achieved in the 2020s.

Paul Cooney


Paul Cooney is the Founder and President of Shamrock Consulting Group, a leader in technical procurement for telecommunications, data communications, data center, SD-WAN consulting, dark fibre, and cloud direct connect services.
After finding early success with Teligent, Inc. in the late '90s, he took over AT&T's struggling Los Angeles sales force and turned it into one of the best in the country within a few months. In 2008, Paul left AT&T to start Shamrock, which he has grown into an award-winning industry disruptor offering vendor-neutral expertise on thousands of products and services related to cloud, colocation, wide area networking, and telecommunications. Shamrock guarantees the best price on any product from over 250 different service providers.

(Excerpt) Read more Here | 2020-07-10 20:35:00
