When it rains, it pours. In Enterprise IT, it is common for multiple game-changing innovations to hit the street simultaneously. Yet, if the analogy of painting the car while it's traveling down the highway ever applied, it's now. Certainly, you can take a wait-and-see approach to adoption, but given the association of these innovations with greater business agility, you'd run the risk of falling behind your competitors.
Let’s take a look at what each of these innovations means for the enterprise and its impact on the business.
First, let’s explore the synergies among some of these innovations. Each innovation can and does have value on its own; when grouped, however, they can provide powerful solutions that help drive growth and new business models.
- Hybrid Cloud + IoT + AI/ML. IoT produces a lot of exhaust (data) that feeds two primary outcomes: a) immediate analysis resulting in a directive back to the IoT endpoint (the basis for many smartX initiatives) or b) collection and analysis to look for patterns. Either way, the public cloud is likely to offer the most economical solution for IoT services, data storage, and the compute and services supporting machine learning algorithms.
- IoT + Blockchain. Blockchains provide immutable entries stored in a distributed ledger. When combined with machine-driven entries, for example from an IoT sensor, we have non-repudiable evidence. This is great for tracing chain of custody, not just for law enforcement but for perishables, such as meat and plants.
- Containers, DevOps, and agile software development. These form the basis for delivering solutions like those above quickly and economically, allowing the business to realize their value rapidly.
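To make the IoT + Blockchain pairing concrete, here is a minimal sketch of the core idea behind immutable, hash-linked ledger entries. It is not any particular blockchain implementation (there is no distributed consensus here); the sensor names and fields are hypothetical, and the point is only that tampering with an earlier entry breaks every later link.

```python
import hashlib
import json
import time

def make_entry(prev_hash: str, sensor_id: str, reading: dict) -> dict:
    """Create a ledger entry whose hash covers the previous entry's hash."""
    body = {
        "prev_hash": prev_hash,
        "sensor_id": sensor_id,
        "reading": reading,
        "timestamp": time.time(),
    }
    # Hash the entry contents; storing the hash links the next entry to this one.
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

def verify_chain(entries: list) -> bool:
    """Recompute every hash; any tampering with an entry invalidates the chain."""
    for i, entry in enumerate(entries):
        content = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(content, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        if i > 0 and entry["prev_hash"] != entries[i - 1]["hash"]:
            return False
    return True
```

A real blockchain adds distributed replication and a consensus protocol on top of this hash-linking, which is what removes the need for a trusted central party.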
There are businesses already using these technologies to deliver new and innovative solutions, many of which have been promoted in the press and at conferences. While these stories illustrate strong forward momentum, they also tend to foster a belief that these innovations have matured to the point that solutions built on them are not at risk of availability problems. That is far from the case. Indeed, these innovations are far from mainstream.
Let’s explore what adoption means to IT and the business for these various innovations.
Hybrid Cloud

I specifically chose hybrid cloud over public cloud because it represents even greater complexity for enterprise IT than public cloud alone. It requires collaboration and integration between organizations and departments that share a common goal but take very different approaches to achieving it.
First, cloud is about managing and delivering software services, whereas the data center is charged with delivering both infrastructure and software services. However, the complexity and overhead of managing and delivering reliable and available infrastructure overshadows the complexity of software services, resulting in the latter often receiving far less attention in most self-managed environments. When the complexity surrounding delivery of infrastructure is removed, the operations team can focus solely on delivery and consumption of software services.
Security is always an issue, and the ongoing maturation of services from the top cloud providers means the environment is constantly changing. With security in the cloud, there is no room for error, or applications could be compromised. This, in turn, requires that after each update to the security controls around a service, the cloud team (architects, developers, operations, etc.) educate themselves on the implications of the change and assess how it may affect their production environments. Misunderstand one of these updates and the environment could become vulnerable.
Hybrid cloud also often means that the team must retain traditional data center skills while adding skills related to the cloud service provider(s) of choice. This is an often overlooked aspect of assessing cloud costs. Moreover, highly skilled cloud personnel are still difficult to attract and usually demand above-market salaries. You could (and should) upskill your own staff, but you will want a few experts on the team to provide on-the-job training for public cloud, as an unsecured public cloud may lead to compromising situations for the business.
Internet of Things (IoT)

The issue with IoT is that it is not one single thing, but a complex network of physical and mechanical components. In a world that has been moving toward a high degree of virtualization, IoT represents a marked shift back toward data center skills, with an emphasis on device configurations, disconnected states, limitations on the size of data packets being exchanged, and low-memory code footprints. Anyone who was around during the early days of networking DOS PCs will be able to relate to some of the constraints.
As with all things digital, security is a highly complex topic with regard to IoT. There are many layers within an IoT solution that invite compromise: the sensor, the network, the edge, the data endpoint, etc. And because many of the devices participating in an IoT network are resource-constrained, there is only so much security overhead that can be introduced before it impairs the device's purpose.
For many, however, IoT immediately conjures only the analytical aspects of all the data collected from the myriad of devices. Certainly, analyzing the data obtained from the sensor mesh and the edge devices can yield an understanding of how things work in ways that were extremely difficult with coarse-grained telemetry. Consider a manufacturing machine that used to signal issues with a low hum: sensors now reveal that in tandem with the hum there is also a rise in temperature and an increase in vibration. After a few short months of collecting data, there is no need to even wait for the hum; the data will indicate the beginning of a problem.
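The hum-temperature-vibration scenario above can be sketched as a simple rolling-statistics check. The readings, window size, and threshold here are all hypothetical; a real predictive-maintenance model would be far more sophisticated, but the principle of flagging a reading that departs sharply from recent history is the same.

```python
import statistics

def anomaly_flags(readings, window=10, threshold=3.0):
    """Flag readings more than `threshold` standard deviations above
    the rolling mean of the preceding `window` readings."""
    flags = []
    for i, value in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < window:
            flags.append(False)  # not enough history to judge yet
            continue
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        flags.append(stdev > 0 and (value - mean) / stdev > threshold)
    return flags

# Hypothetical temperature telemetry: stable around 70, then a spike.
readings = [70.0, 70.2, 69.9, 70.1, 70.0, 70.3,
            69.8, 70.1, 70.2, 70.0, 70.1, 95.0]
flags = anomaly_flags(readings)  # only the final spike is flagged
```

The payoff described in the text is exactly this: once enough history is collected, the data flags the deviation before a human would notice the hum.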
Of course, the value discussed in the prior paragraph can only be realized if you have the right skilled individuals across the entire information chain: those able to modify or configure endpoint devices to participate in an IoT scenario, the cybersecurity and infosec experts to limit potential issues due to breach or misuse, and the data scientists capable of making sense of the volumes of data being collected. And if you haven't selected the public cloud as the endpoint for your data, you also take on the additional overhead of managing network connectivity and the storage capacity associated with rapidly growing volumes of data.
Artificial Intelligence and Machine Learning (AI/ML)
If you can harness the power of machine learning and AI, you gain insights into your business and industry in a way that was very difficult until recently. While this is seemingly a simple statement, that one word, "harness," is loaded with complexity. First, these technologies are most successful when operating against massive quantities of data.
The more data you have, the more accurate the outcomes. This means it is incumbent upon the business to a) find, aggregate, cleanse, and store the data to support the effort, b) formulate a hypothesis, c) evaluate the output of multiple algorithms to determine which will best support the outcome you are seeking—e.g. predictive, trends, etc.—and d) create a model. This all equates to a lot of legwork. Once your model is complete and your hypothesis proven, the machine will do most of the work from there on out, but getting there requires a lot of human knowledge-engineering effort.
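The a-through-d workflow can be sketched end to end with a toy example. Everything here is hypothetical: the synthetic "temperature vs. failure rate" data stands in for step (a), and two deliberately simple candidate models stand in for the algorithm evaluation of step (c).

```python
import random

random.seed(0)

# (a) Find, cleanse, store: synthetic, hypothetical temperature -> failure-rate data.
data = [(x, 0.5 * x + 3.0 + random.gauss(0, 0.5)) for x in range(40)]
train, test = data[:30], data[30:]

# (b) Hypothesis: failure rate grows roughly linearly with temperature.

# (c) Evaluate multiple candidate algorithms on held-out data.
def fit_mean(rows):
    mean_y = sum(y for _, y in rows) / len(rows)
    return lambda x: mean_y  # naive baseline: always predict the training mean

def fit_linear(rows):
    n = len(rows)
    sx = sum(x for x, _ in rows)
    sy = sum(y for _, y in rows)
    sxx = sum(x * x for x, _ in rows)
    sxy = sum(x * y for x, y in rows)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # least squares
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept

def mse(model, rows):
    return sum((model(x) - y) ** 2 for x, y in rows) / len(rows)

candidates = {"baseline": fit_mean(train), "linear": fit_linear(train)}
errors = {name: mse(model, test) for name, model in candidates.items()}

# (d) Create the model: keep the candidate with the lowest held-out error.
best = min(errors, key=errors.get)
```

Note that the held-out test split is what qualifies the model against data it has never seen, which is the small-scale analogue of validating the model against the real world before trusting it.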
A point of caution: do not make business decisions using the output of your AI/ML models until you have followed every one of these steps and qualified the model's output against the real world at least twice.
Blockchain

Touted as the technology that will “change the world,” blockchain is, outside of cryptocurrencies, still trying to establish firm roots in the business world. There are many issues with blockchain adoption at the moment, the most prevalent being the velocity of change. There is no single standard blockchain technology.
There are multiple technologies, each attempting to provide the foundation for trusted and validated transactional exchange without requiring a centralized party. Buying into a particular technology at this point in the maturity curve will provide insight into the value of blockchain, but it will require constant care and feeding, as well as the potential need to migrate to a completely different network foundation at some point in the future. Hence, don't bet the farm on the approach you choose today.
Additionally, there are still many outstanding non-technical issues that blockchain value is dependent upon, such as the legality of blockchain entries as a form of non-repudiation. That is, can a blockchain be used as evidence in a legal case to demonstrate intent and validation of agreed upon actions? There are also issues related to what effect use of a blockchain may have on various partnering contracts and credit agreements, especially for global companies with GDPR requirements.
Finally, does the blockchain have a network large enough to enforce consensus and deliver its value? Who should host these nodes? Are the public networks sufficient for business, or is there a need for a private network shared among a community with common needs?
Containers, DevOps, & Agile SDLC
I’ve lumped these three innovations together because, unlike the others, they are more technological in nature and carry elements of the “how” more so than the “what.” Still, these three topics receive significant attention far outside the IT organization due to their association with enabling businesses to become more agile. Hence my general disclaimer and word of caution: the technology is only an enabler; it's what you do with it that may be valuable, or may have the opposite effect.
Containers should be the least impactful of these three topics, as they are simply another way to use compute resources. Containers are smaller and more lightweight than virtual machines but still facilitate a level of isolation between what is running inside the container and what is running outside it. The complexity arises when moving processes from bare metal and virtual machines into containers, as containers leverage machine resources differently than those platforms.
While it’s fairly simple to create a container, getting a group of containers to work together reliably can be fraught with challenges. This is why container management systems have become more and more complex over time. With the addition of Kubernetes, a business effectively needs the full breadth of data center operations knowledge within a single team. Of course, public cloud service providers now offer managed container management systems that reduce the need for such a broad set of knowledge, but it’s still incumbent on operations to know how to configure and organize containers from a performance and security perspective.
DevOps and the Agile Software Development Lifecycle (SDLC) really force internal engineering teams to think and act differently if they are transitioning from traditional waterfall development practices. Many businesses have taken the first step of this transition by starting to adopt some Agile SDLC practices. However, because of the need for retraining, hiring, and support of this effort, the interim state many of these businesses occupy has been called “wagile,” meaning some combination of waterfall and agile.
As for DevOps, the metrics have been published regarding the business value of becoming a high-performing software delivery and operations organization. In this age of “software is eating the world,” can your organization afford to ignore DevOps, or, if not ignore it, take years to transition? You will hear stories from businesses that have adopted DevOps and Agile SDLC and made great strides in reducing latency, increasing the number of releases they can make in a given time period, and deploying new capabilities and functions to production at a much faster rate with fewer change failures. Many of these stories are real, but even in these businesses you will still find pockets where there is no adoption and teams still follow a waterfall SDLC that takes ten months to get a single release into production.
Individually, each of these innovations requires trained resources and funding, and each can be difficult to move beyond proof-of-concept to fully operationalized production outcomes. Taken in combination, on top of existing operational pressures, these innovations can rapidly overwhelm even the most adept enterprise IT organization. Even where multi-modal IT exists and these innovations occur outside the path of traditional IT, existing IT knowledge and experience will be required to support them. For example, if you want to analyze purchasing trends for the past five years, you will need the support of the teams responsible for your financial systems.
All this leads to the really big question: how should businesses go about absorbing these innovations? The pragmatic answer is, of course, to introduce those innovations tied to a specific business outcome. However, as stated, waiting to introduce some of these innovations could mean losing ground to the competition. This means you may want to introduce some proof-of-concept projects, especially around AI/ML and Agile SDLC, with IoT and blockchain projects where they make sense for your business.