A multicloud infrastructure strategy can maximize the flexibility of enterprise IT staff, isolate workloads, and increase agility, but there may be overriding circumstances.

Most organizations use both on-prem data centers and cloud-based IaaS services, often employing multiple IaaS platforms. For some, this multicloud reality has come about as part of a steady, one-way migration to the cloud, and they may have intentionally kept their cloud networks distinct as part of that goal. Others may have a business strategy for keeping them distinct, such as providing services for a stand-alone division or a particular geography. Either way, they are almost certainly already tying their on-premises and cloud infrastructure networks together in some way, or are about to be.

Those with limited integration among their networks are often dealing with a patchwork of solutions that evolved haphazardly as cloud systems went from being experimental and isolated, to being developmental and peripheral, to being central and in production. For those planning to bring these networks together, or looking to architect and engineer their current infrastructure more intentionally, there are some fundamental points to consider.

Treat external clouds separately or together?

One model for cloud adoption treats each external cloud as another data center, connected only as an additional WAN destination and otherwise left distinct. That means routing-level connections only, with separate network management and controls for each. The other model allows deeper integration, including tunneling Layer 2 protocols and centralizing control not only between on-prem data centers and the cloud but among and across clouds.

Keeping things separate has virtues:

- Easier network isolation of workloads from each other for security and compliance reasons
- Easier implementation of network policies within each environment, thanks to a more limited scope
- A smaller skill set required of network engineers focused on a single environment

However, it also has significant drawbacks:

- Less agility
- Less portability across environments
- More limited integration options
- Greater complexity in implementing network and security policies across environments, with increased risk of error

Most organizations seem to be following the path of bringing all their environments together, from the network up. Either way, they face a second major consideration: whether to make the environments as similar as possible in terms of what can be done on the networks within them, or to allow them to remain different.

Allow all features or only those common across clouds?

When solutions get deployed across multiple platforms that do not have identical feature sets, IT has long chosen one of two approaches:

- Use each platform separately and take advantage of all the “special sauce” features in each to get the best possible performance from it.
- Add a layer of abstraction between IT workloads and the underlying platforms, giving up the functions not common to them all in order to get maximum consistency and portability.

The great thing about each cloud being a distinct island of functionality, with respect to on-prem data centers and to each other, is that the networking team has less to do in each, and the modes of interaction among the clouds and on-prem data centers are well understood. The terrible thing about each cloud being distinct and different is that each cloud is distinct and different.
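As a purely illustrative, hedged sketch of what “distinct and different” can mean in practice, the fragment below renders a single intent, allowing inbound HTTPS from a corporate range, against three imaginary platforms whose rule models and defaults differ. Every platform name, field, and function here is hypothetical, not any vendor’s real API.

    # Purely illustrative: one intent rendered against three imaginary
    # platforms with different rule models and different defaults.
    # None of these names correspond to a real provider's API.

    INTENT = {"direction": "inbound", "port": 443, "source": "10.0.0.0/8", "action": "allow"}

    def render_cloud_a(intent: dict) -> list:
        # Imaginary platform A: default-deny inbound, so one allow rule suffices.
        return [{"proto": "tcp", "port": intent["port"], "cidr": intent["source"], "allow": True}]

    def render_cloud_b(intent: dict) -> list:
        # Imaginary platform B: default-allow among attached networks, so the
        # allow is implicit and an explicit catch-all deny must follow it.
        return [
            {"match": f"tcp/{intent['port']} from {intent['source']}", "verdict": "permit", "priority": 100},
            {"match": "any", "verdict": "drop", "priority": 65000},
        ]

    def render_on_prem(intent: dict) -> list:
        # Imaginary on-prem firewall: order-sensitive text rules.
        return [f"permit tcp {intent['source']} any eq {intent['port']}", "deny ip any any"]

    for renderer in (render_cloud_a, render_cloud_b, render_on_prem):
        print(renderer.__name__, renderer(INTENT))

Three renderings of one sentence of intent is exactly the kind of divergence described next: it fragments skill sets, and an overlooked default becomes an easy way to open, or silently break, a path.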
IT folks managing these environments develop custom skill sets, and there is less ability to have cross-coverage. As a result, each environment has a shallower bench of support and less resilience at the staff level. When there is turnover, the skill set sought in replacements is more specialized, too.

Application and cybersecurity teams must also understand the differences among the environments in order to allow both the flexible placement of workloads within them and the movement of workloads among them. In the age of containerization and microservices, portability is considered a key virtue. Teams can lose track of basic differences, such as whether an environment defaults to “deny all” or “allow all” on connections among networks, with potentially disastrous results.

For these reasons, some organizations decide instead to minimize differences in the application-facing environments by implementing tools that abstract those differences away. Sometimes adding a layer of consistency, via an overlay or a new standard, enormously amplifies the power of a technology. SQL is a good example of the standards-driven approach, as is TCP/IP. SD-WAN is a great example of an overlay approach, standardizing network functionality atop disparate underlays.

Implementing a standard across all environments allows interoperability, defines a common skill set, and makes it easier to design and deploy applications that leverage those standards. Extensions beyond a standard are possible, as is support for competing standards, so “special sauce” functionality in an environment can still get a look in. Implementations of a standard can also vary, so vendors can compete on performance.

An important and powerful approach to providing a consistent, abstracted platform across environments is to shim up the low spots. That is, rather than hiding functionality from the common catalog of network services or design options when it is not available across all platforms, add the missing functionality to the platforms that lack it. SD-WAN solutions and multicloud network solutions can work this way.

Shimming up the low spots in each platform’s catalog is distinct from simply porting an alien environment into each platform. It keeps each environment as close to its native state as possible, to leverage its strengths and to reduce the amount of one-off development required to fit the standard environment into it. (A brief sketch of this pattern appears below.)

Multicloud networking is either already a reality or in the works for most organizations. In considering the next phase of their network strategy and architecture, they should go back to these fundamental questions and make sure they are clear on how they are answering them and why, so the answers can guide the rest of their decisions.
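To make the shim idea concrete, here is a minimal, purely hypothetical sketch: the platform names, feature lists, and helper functions are invented for illustration, not drawn from any vendor’s API. It shows an abstraction layer that keeps tag-based rule sources in its common catalog and backfills the platforms that lack them, rather than dropping the feature to the lowest common denominator.

    # Purely hypothetical: keep the full service catalog and backfill ("shim")
    # a missing feature per platform instead of shrinking the catalog to the
    # lowest common denominator. No name here is a real provider's API.

    PLATFORM_FEATURES = {
        "cloud_a": {"allow", "deny", "port_range", "cidr_source", "tag_based_source"},
        "cloud_b": {"allow", "deny", "port_range", "cidr_source"},   # lacks tag sources
        "on_prem": {"allow", "deny", "port_range", "cidr_source"},   # lacks tag sources
    }

    # Imaginary inventory lookup: the addresses currently carrying a workload tag.
    TAG_INVENTORY = {"web-tier": ["10.1.0.0/24", "10.2.0.0/24"]}

    def shim_tag_source(rule: dict) -> dict:
        """Emulate tag-based sources by expanding the tag to concrete CIDRs."""
        shimmed = dict(rule)
        tag = shimmed.pop("source_tag")
        shimmed["source_cidrs"] = TAG_INVENTORY.get(tag, [])
        return shimmed

    def render_with_shims(rule: dict) -> dict:
        """Render one rule on every platform, shimming where a feature is missing."""
        rendered = {}
        for platform, supported in PLATFORM_FEATURES.items():
            if "source_tag" in rule and "tag_based_source" not in supported:
                rendered[platform] = shim_tag_source(rule)   # backfilled low spot
            else:
                rendered[platform] = dict(rule)              # native feature
        return rendered

    rule = {"action": "allow", "ports": "443", "source_tag": "web-tier"}
    for platform, rendered_rule in render_with_shims(rule).items():
        print(platform, rendered_rule)

The platform with native tag support keeps it, the others get a functional, if less dynamic, equivalent, and nothing has to be removed from the shared catalog.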