
YES – VM + Containers can be faster than Bare Metal!

Pat Gelsinger, VMware CEO, said during the VMworld keynote that VM-managed containers could be 8% faster than bare metal (@kitcolbert). On the surface, this claim defies logic: bare metal should be the theoretical limit for performance, like the speed of light. While I don't know the specifics of his test case, the claim of improved performance is credible.  Let's explore how.

RackN specializes in bare metal workloads, so let me explain how, in the right cases, containers in VMs can benchmark faster than containers running directly on the host.

The crux of the argument comes down to two factors:

  1. Operating systems degrade when key resources are depleted
  2. CPUs are optimized for virtualization (see NUMA architecture)

Together, these factors conspire to make VMs a design necessity on large bare metal systems.

A system with large RAM and many CPU cores can become saturated with container workloads even in the tens of containers. In these cases, the overhead the operating system incurs managing those resources starts to eat into workload performance. Since typical hypervisor hosts have abundant resources, the risk of oversaturation is very high.

The solution on high-resource hosts is to leverage a hypervisor to partition the resources into multiple operating system instances. That eliminates oversaturation and improves total throughput for the host. We're talking about 10 VMs with 10 containers each instead of 1 host with 100 containers.
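The partitioning argument above can be sketched with a toy model. The quadratic overhead curve below is purely illustrative (an assumption, not a benchmark), but it shows how splitting one saturated kernel into several smaller ones can raise total throughput even before counting hypervisor costs:

```python
def host_throughput(containers, base_rate=100.0, overhead_coeff=1e-5):
    """Toy model: each container contributes base_rate units of work, but the
    OS's scheduling/accounting overhead grows superlinearly with the number
    of containers one kernel must manage (the curve is illustrative only)."""
    overhead = overhead_coeff * containers ** 2
    return containers * base_rate * max(0.0, 1.0 - overhead)

# One operating system managing 100 containers...
flat = host_throughput(100)

# ...versus 10 VMs managing 10 containers each (hypervisor cost ignored here).
partitioned = 10 * host_throughput(10)

print(f"1 host x 100 containers: {flat:.0f}")
print(f"10 VMs x 10 containers:  {partitioned:.0f}")
```

With these assumed numbers the partitioned layout wins; real results depend entirely on the workload and tuning, which is why the 8% claim is credible but not universal.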

In addition to simple partitioning, most CPUs are optimized for virtualization. That means they can run multiple virtualized operating systems on the same host with minimal overhead.  The non-virtualized host does not get to leverage this optimization.

Due to these factors AND with the right tuning, it would be possible to demonstrate improved container performance on hosts that were optimized for running a hypervisor. The same would not hold true for systems that are sized for container workloads only. Since the container-optimized machines are also much cheaper, the potential performance gain is likely not a good ROI.

While bare metal container platforms will eventually close this gap, this counterintuitive optimization reinforces why we expect hypervisors to remain desirable in container deployments for a long time.

RackN Digital Rebar v4 Release

RackN is excited to announce v4 of Digital Rebar.  In this fourth generation, we recognize the project’s evolution from provisioner into a data center automation platform.  While the external APIs and behavior are NOT changing (v4.0 and v3.13 are API compatible), we are making important structural changes that allow for significant enhancements to scale, performance and collaboration.

This important restructuring enables Digital Rebar to expand as a community platform with open content.

RackN is streamlining and inverting our license model (see chart below) by making our previously closed content and plugins open and taking commercial control of our implementation behind the open Digital Rebar APIs.  This change empowers both expanding community and sustaining commercial engagement.

Over the last 2 years, our operator community around Digital Rebar and RackN content has frequently asked for source access to our closed catalog of content and plugins while leaving core development exclusively to RackN.  We believe that providing open, collaborative access is critical for the growth of the platform; consequently, Generation 4 makes these critical components available.

RackN will curate and support foundational content packs and plugins in the open.  These include operating system templates, out-of-band management (IPMI), image deploy, classification, and our server life-cycle (RAID/BIOS) components.  We will be working with the Digital Rebar community to determine the right management model for items in this broad portfolio of content.

The RackN implementation powering the Digital Rebar API will be significantly enhanced in this generation.

We are integrating powerful new capabilities like multi-site management, 10,000+ machine scale, single-sign-on and deeper security directly into the implementation.  These specialized enhancements to the platform enable an even broader range of applications. Since the community has focused on contributing to platform content instead of the core, we believe this is the right place to manage the commercial side of sustaining the platform.

Please note that our basic commercial models and pricing plans are not changing.  We are still maintaining Digital Rebar and RackN binaries free for operators with up to 20 machines.  The change also allows us to enable new classes of use such as non-profit, academic and service provider.

Thank you to the Digital Rebar community!  We are handing you the keys to revolutionize data center operations.  Let's get started!

Generation 4 clarifies the boundary between open and proprietary
| Layer / Component | Generation 3 | Generation 4 |
|---|---|---|
| Content Platforms, Apps, Etc | Mixed | Digital Rebar APLv2 |
| Ops Best Practices | RackN | Digital Rebar APLv2 |
| O/S Image Deployer | RackN | Digital Rebar APLv2 |
| OOB & Firmware | RackN | Digital Rebar APLv2 |
| Utility Workflows | RackN | Digital Rebar APLv2 |
| O/S Install Templates | Mixed | Digital Rebar APLv2 |
| Platform Agents (Runner) | Digital Rebar APLv2 | Digital Rebar APLv2 |
| CLI | Digital Rebar APLv2 | Digital Rebar APLv2 |
| Sledgehammer | Digital Rebar APLv2 | Digital Rebar APLv2 |
| API | Digital Rebar APLv2 | Digital Rebar APLv2 |
| Enterprise Extensions (Multi-Site, SSO, RBAC) | RackN | RackN |
| Web UX | RackN | RackN |
| API Implementation | Digital Rebar APLv2 | RackN |
| Commercial Support | RackN | RackN |

RackN founding member of LF Edge

Today, the Linux Foundation announced LF Edge to help build open source platforms and community around Edge Infrastructure. We’re just at the start of the effort with projects and vendors working in parallel without coordination. We think this LF Edge initiative is important because the Edge is already a patchwork of different data centers, infrastructure and platforms.

RackN has been designing our DEEP Edge federated control around open Digital Rebar because we know that having a shared abstraction at the physical layer makes everything we build above easier to sustain and more resilient.

We're excited to be part of this effort and hope that you'll collaborate with us to build some amazing Edge capabilities.

Note: We're also watching the OpenStack Foundation's Edge work. Hopefully, we'll see collaboration between these groups since there are already overlapping projects.


Podcast – Ian Rae talks Cloud, Innovation, and Updates from Google Next 2018

Joining us this week is Ian Rae, CEO and Founder of CloudOps, who recorded the podcast during the Google Next conference in 2018.

Highlights

  • 1 min 55 sec: Define Cloud from a CloudOps perspective
    • Business Model and an Operations Model
  • 3 min 59 sec: Update from Google Next 2018 event
    • Google is the “Engineer’s Cloud”
    • Google's approach vs Amazon's approach in feature design/release
  • 9 min 55 sec: Early Amazon ~ no easy button
    • Amazon educated the market as industry leader
  • 12 min 04 sec: What is the state of Hybrid? Do we need it?
    • Complexity of systems leads to private, public as well as multiple cloud providers
    • Open source enabled workloads to run on various clouds even if the cloud was not designed to support a type of workload
    • Google’s strategy is around open source in the cloud
  • 14 min 12 sec: IBM visibility in open source and cloud market
    • Didn’t build cloud services (e.g. open a ticket to remap a VLAN)
  • 16 min 40 sec: OpenStack tried to compete on service components
    • Couldn’t compete without Product Managers to guide developers
    • Missed last mile between technology and customer
    • Didn’t want to take on the operational aspects of the customer
  • 19 min 31 sec: Is innovation driven from listening to customers vs developers doing what they think is best?
    • OpenStack is seen as legacy as customers look for Cloud Native Infrastructure
    • OpenStack vs Kubernetes install time significance
  • 22 min 44 sec: Google announcement of GKE for on-premises infrastructure
    • Not really On-premise; more like Platform9 for OpenStack
    • GKE solve end user experience and operational challenges to deliver it
  • 26 min 07 sec: Edge IT replaces what is On-Premises IT
    • Bullish on the future with Edge computing
    • 27 min 27 sec: Who delivers control plane for edge?
      • Recommends Open Source in control plane
  • 28 min 29 sec: Current tech hides the infrastructure problems
    • Someone still has to deal with the physical hardware
  • 30 min 53 sec: Commercial driver for rapid Edge adoption
  • 32 min 20 sec: CloudOps building software / next generation of BSS or OSS for telco
    • Meet the needs of the cloud provider for flexibility in generating services with the ability to change the service backend provider
    • Amazon is the new Win32
  • 38 min 07 sec: Can customers install their own software? Will people buy software anymore?
    • Compare payment models from Salesforce and Slack
    • Google allows customers to run its technology themselves or have Google manage it for them
  • 40 min 43 sec: Wrap-Up

Podcast Guest: Ian Rae, CEO and Founder CloudOps

Ian Rae is the founder and CEO of CloudOps, a cloud computing consulting firm that provides multi-cloud solutions for software companies, enterprises and telecommunications providers. Ian is also the founder of cloud.ca, a Canadian cloud infrastructure-as-a-service (IaaS) provider focused on data residency, privacy and security requirements. He is a partner at Year One Labs, a lean startup incubator, and is the founder of the Centre cloud.ca in Montreal. Prior to cloud, Ian was responsible for engineering at Coradiant, a leader in application performance management.


Podcast – Yves Boudreau on State of Edge Report and Edge vs Cloud

Joining us this week is Yves Boudreau from Ericsson for his 2nd Podcast appearance (1st Podcast) to talk about the new State of the Edge Report and the latest happenings in the Edge community.

Highlights

  • Edge as an accelerant not having to wait until Edge is built completely
  • Opportunity Cost using Edge as is; no time to wait
  • Be Specific when Requesting Services
  • Internet and Networks are Not Unlimited Pipes
  • Interesting Use Cases for Edge – Augmented Reality, Drone, Cars, Batteries
  • Cost savings of where the data processing is done
  • Open Source software communities at the Edge

| Topic | Time (Minutes.Seconds) |
|---|---|
| Intro | 0.00 – 1.22 |
| State of the Edge Report (STE Podcast, https://www.stateoftheedge.com/) | 1.22 – 5.22 |
| Accessible Edge Environments (Bulgaria) | 5.22 – 10.50 |
| Opportunity Cost and Missing Killer App | 10.50 – 12.04 |
| Edge Infrastructure as Cloud Development Paradigm | 12.04 – 14.29 |
| Elasticity Issues b/w Cloud and Edge | 14.29 – 21.45 |
| Innovators Dilemma for Cloud & Telecom | 21.45 – 23.10 |
| Favorite Use Cases for Infrastructure Edge (Hanger Podcast) | 23.10 – 28.55 |
| Data Location and Data Sovereignty | 28.55 – 31.03 |
| Cost for Processing Power in Edge Devices (SWIM.AI Podcast) | 31.03 – 34.49 |
| Free Software / Open Source in Edge | 34.49 – 46.58 |
| Wrap Up | 46.58 – END |

Podcast Guest:  Yves Boudreau, VP Partnership and Ecosystem Strategy

Mr. Boudreau is a 20-year veteran of the Digital, Telecom and Cable TV industries. From modest beginnings at one of the first cable broadband ISPs in Canada to the fast-paced technology hub of Silicon Valley, Yves joined ERICSSON in 2011 as Vice President of Technical Sales Support and most recently accepted a position as VP of Partnerships and Ecosystem Strategy for the ERICSSON Unified Delivery Network. Previously, Mr. Boudreau worked in R&D, Systems Engineering & Business Development for companies such as Com21 Inc., ARRIS Group (Cable), Imagine Communication (Video Compression) and Verivue Inc. (CDN). Yves now resides in Atlanta, Georgia with his wife Josée and 3 children. Mr. Boudreau completed his undergraduate studies in Commerce at Laurentian University and graduate studies in Information Technology Management at Athabasca University. Yves also currently serves on the Board of Directors of the Streaming Video Alliance (www.streamingvideoalliance.org).


December 1 – Weekly Recap of Digital Rebar, RackN, and Industry News

Welcome to the weekly post of the RackN blog recap of all things Digital Rebar, RackN, Edge Computing, and DevOps. If you have any ideas for this recap or would like to contribute content, please contact us at info@rackn.com or tweet RackN (@rackngo).

Items of the Week

Industry News

Edge computing, in the context of IoT, is the idea that you can actually do some of the computational work required by a system close to the endpoints instead of in a cloud or a data center. The intent is to minimize latency, which, according to Renaud, means that it's going to be a hot trend in certain kinds of industrial IoT applications.

Solution providers that have been hit hard by a data center hardware retreat are finding sales and profit growth by living on the edge—the network edge, that is.

DevOps — a term used to refer to the integration of software developers and operations teams — continues to spread like wildfire throughout the open networking ecosystem. The main idea behind DevOps is that by breaking down barriers between these two departments, market applications can be delivered faster with lower costs and better quality. Nevertheless, for all the advantages attached to DevOps, it is still a budding concept since it is primarily concerned with re-aligning the workforce with a variety of tools. The following, therefore, is a list of DevOps trends to keep an eye out for.

Digital Rebar

Our architectural plans for Digital Rebar are beyond big – they are for massive distributed scale. Not up, but out. We are designing for the case where we have common automation content packages distributed over 100,000 stand-alone sites (think 5G cell towers) that are not synchronously managed. In that case, there will be version drift between the endpoints and content. For example, we may need to patch an installation script quickly over a whole fleet but want to upgrade the endpoints more slowly.
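A compatibility gate on content packages is one way to picture that drift management. This sketch is illustrative only; the field names and version scheme are assumptions, not Digital Rebar's actual data model:

```python
# Hypothetical sketch: each content package declares the endpoint versions it
# supports, so a fleet-wide patch rolls out immediately to compatible sites
# while drifted endpoints upgrade on their own, slower schedule.

def parse(version):
    return tuple(int(part) for part in version.split("."))

def compatible(endpoint_version, min_required, max_supported):
    return parse(min_required) <= parse(endpoint_version) <= parse(max_supported)

fleet = {"site-001": "4.0.2", "site-002": "3.13.1", "site-003": "4.1.0"}
patch = {"name": "install-script-fix", "min": "4.0.0", "max": "4.1.0"}

ready = [site for site, v in fleet.items()
         if compatible(v, patch["min"], patch["max"])]
deferred = [site for site in fleet if site not in ready]

print("patch now:", ready)         # compatible endpoints
print("upgrade first:", deferred)  # drifted endpoints held back
```

At 100,000 sites the interesting work is in distributing and reconciling this check asynchronously, but the per-site decision stays this simple.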

Prior Meetup on November 21st Notes

RackN

Yesterday, AWS confirmed that it actually uses physical servers to run its cloud infrastructure and, gasp, no one was surprised.  The actual news about the i3.metal instances by AWS Chief Evangelist Jeff Barr shows that bare metal is being treated as just another AMI-managed instance type (see also Geekwire, Techcrunch, Venture Beat).  For AWS users, there's no drama here because it's an incremental add to processes they already know well.

We are actively looking for feedback from customers and technologists before general availability of both RackN and the Terraform plug-in. It takes just a few minutes to get started and we offer direct engineering engagement on our community slack channel. Get started now by providing your email on our registration page so we can provide you all the necessary links.

L8ist Sh9y Podcast

Podcast Guest: Krishnan Subramanian, Rishidot Research

Founder and Chief Research Advisor, Infrastructure, Application Platforms and DevOps

UPCOMING EVENTS

  • KubeCon + CloudNativeCon : Dec 6 – 8 in Austin, TX

Event plans for the RackN and Digital Rebar team include 2 sessions and the RackN booth. We look forward to seeing you in Austin.

The RackN team is preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events please email info@rackn.com


Deep Thinking & Tech + Great Guests – L8ist Sh9y Podcast

I love great conversations about technology – especially ones where the answer is not very neatly settled into winners and losers (which is ALL of them in IT).  I’m excited that RackN has (re)launched the L8ist Sh9y (aka Latest Shiny) podcast around this exact theme.

Please check out the deep and thoughtful discussion I just had with Mark Thiele (notes) of Apcera, where we covered Mark's thoughts on why public cloud will be under 20% of IT and tackled culture issues head on.

Spoiler: we have David Linthicum coming next, SO SUBSCRIBE.

I've been a guest on some great podcasts (Cloudcast, gcOnDemand, Datanauts, IBM Dojo, HPE, Foodfight) and have deep respect for the critical work they do in the industry.

We feel there's still room for deep discussions specifically around automated IT Operations in cloud, data center and edge; consequently, we're branching out to start including deep interviews in addition to our initial stable of deep technical IT Ops topics like Terraform, Edge Computing, our Gartner SYM review, Kubernetes and, of course, our own Digital Rebar.

Soundcloud Subscription Information


OpenStack’s Big Pivot: our suggestion to drop everything and focus on being a Kubernetes VM management workload

TL;DR: Sometimes paradigm changes demand a rapid response, and I believe unifying OpenStack services under Kubernetes has become such an urgent priority that we must freeze all other work until this effort has been completed.

See Also Rob’s VMblog.com post How is OpenStack so dead AND yet so very alive

By design, OpenStack chose to be unopinionated about operations.

That made sense for a multi-vendor project that was deeply integrated with the physical infrastructure and virtualization technologies.  The cost of that decision has been high for everyone because we did not converge on shared practices that would drive ease of operations, upgrades or tuning.  We ended up with waves of vendors vying to have the fastest, simplest and most open version.

Tragically, install became an area of competition instead of an area of collaboration.

Containers and microservice architecture (as required by Kubernetes and other container schedulers) are providing an opportunity to correct this course.  The community is already moving toward containerized services with significant interest in using Kubernetes as the underlay manager for those services.  I've laid out the arguments for, and challenges ahead of, this approach in other places.

These technical challenges involve tuning the services for cloud native configuration and immutable designs.  They include making sure the project configurations can be injected into containers securely and the infra-service communication can handle container life-cycles.  Adjacent concerns like networking and storage also have to be considered.  These are all solvable problems that can be more quickly resolved if the community acts together to target just one open underlay.
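As a small illustration of the configuration-injection concern, a cloud-native service typically resolves settings from injected environment variables or a mounted volume rather than a file baked into the image. The variable names and path below are hypothetical:

```python
# Minimal sketch of cloud-native configuration resolution: settings arrive
# from the environment or a mounted directory, never from the image itself.
import os

def load_setting(name, mounted_dir="/etc/config", default=None):
    # Environment variables win (typical for env-based injection)...
    if name in os.environ:
        return os.environ[name]
    # ...then fall back to a mounted file (typical for volume-based injection).
    path = os.path.join(mounted_dir, name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    return default

os.environ["DB_HOST"] = "db.example.internal"  # simulate injected config
print(load_setting("DB_HOST"), load_setting("DB_PORT", default="5432"))
```

In a Kubernetes underlay, the environment variables and the mounted directory would be populated from ConfigMaps or Secrets, keeping the container image immutable across its life-cycle.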

The critical fact is that the changes are manageable and unifying the solution makes the project stronger.

Using Kubernetes for OpenStack service management does not eliminate or even solve the challenges of deep integration.  OpenStack already has abstractions that manage vendor heterogeneity and those abstractions are a key value for the project.  Kubernetes solves a different problem: it manages the application services that run OpenStack with a proven, understood pattern.  By adopting this pattern fully, we finally give operators consistent, shared and open upgrade, availability and management tooling.

Having a shared, open operational model would help drive OpenStack faster.

There is a risk to this approach: driving Kubernetes as the underlay for OpenStack will force OpenStack services into a more narrow scope as an infrastructure service (aka IaaS).  This is a good thing in my opinion.   We need multiple abstractions when we build effective IT systems.  

The idea that we can build a universal single abstraction for all uses is a dangerous distraction; instead, we need to build platform layers collaboratively.

While I initially resisted, I have become enthusiastic about this approach.  RackN has been working hard on the upgradable & highly available Kubernetes on Metal prerequisite.  We've also created prototypes of the fully integrated stack.  We believe strongly that this work should be done as a community effort and not within a distro.

My call for a Kubernetes underlay pivot embraces that collaborative approach.  If we can keep these platforms focused on their core value then we can build bridges between what we have and our next innovation.  What do you think?  Is this a good approach?  Contact us if you’d like to work together on making this happen.

See Also Rob’s VMblog.com post How is OpenStack so dead AND yet so very alive to SREs? 


Cybercrime for Profit!? Five reasons why we need to start driving much more dynamic IT Operations

Author's call to action: if you think you already know this is a problem, then why do we keep reliving it?  We're doing our part in the open with Digital Rebar, and we need more help to secure infrastructure using foundational automation.

There's a frustrating cyberattack-driven security awareness cycle in IT Operations.  Exploits and vulnerabilities are neither new nor unexpected; however, there is a new element taking shape that should raise additional alarm.

Cyberattacks are increasingly profit generating and automated.

The fundamental fact of the latest attacks is that patches were available.  The extensive impact we are seeing is caused by IT Operations that relies on end-of-life components and cannot absorb incremental changes.  These practices are based on dangerous obsolete assumptions about perimeter defense and long delivery cycles.

It’s not just new products using CI/CD pipelines and dynamic delivery: we must retrofit all IT infrastructure to be constantly refreshed.

We simply cannot wait because the cybersecurity challenges are accelerating.  What’s changed in the industry?  There is a combination of factors driving these trends:

  1. Profit motive – attacks are not simply about getting information; they are profit centers made simpler by hard-to-trace cryptocurrency.
  2. Shortening windows – we're doing better than ever at finding, publishing and fixing issues in the open.  That cycle assumes that downstream users are also applying the fixes quickly.  Without downstream adoption, the process fails to realize its key benefit.
  3. Automation and machine learning – attackers are using more and more sophisticated automation to find and exploit vulnerabilities.  Expect them to use machine learning to make it even more effective.
  4. No perimeter – our highly interconnected and mobile IT environments eliminate the illusion of a perimeter defense.  This is not just a networking statement: our code bases and service catalogs are built from many outside sources that often have deep access.
  5. Expanding surface area – finally, we're embedding and connecting more devices into our infrastructure every second.  Costs are decreasing while capability increases.  There's no turning back from that, so we should expect an ongoing list of vulnerabilities.

No company has all the answers for cybersecurity; however, it's clear that we cannot solve it at the perimeter while allowing the interior to remain static.

The only workable IT posture starts with a continuously deployed and updated foundation.

Companies typically skip this work because it's very difficult to automate in a cross-infrastructure, reliable way.  I've been working in this space for nearly two decades, and we're only now delivering deep automation that can be applied in generalized ways as part of larger processes.  The good news is that we can finally start discussing real shared industry best practices.
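The drift-detection step at the heart of that constantly refreshed posture can be sketched in a few lines; the package names and versions here are invented for illustration:

```python
# Compare a node's actual package versions against the desired baseline and
# report what needs remediation, instead of assuming the interior is static.

desired = {"openssl": "3.0.13", "kernel": "6.1.90"}

def drift(node_state, baseline):
    """Return {package: (actual, wanted)} for every package out of policy."""
    return {pkg: (node_state.get(pkg), want)
            for pkg, want in baseline.items()
            if node_state.get(pkg) != want}

node = {"openssl": "3.0.11", "kernel": "6.1.90"}
stale = drift(node, desired)
print(stale)
```

A real pipeline would run this check continuously across the fleet and feed the result into automated remediation, which is exactly the kind of foundational automation argued for here.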

Thankfully, with shared practices and tooling, we can get ahead of the attackers.

RackN focuses exclusively on addressing infrastructure automation in an open way.  We are solving this problem from the data center foundations upward.  That allows us to establish security practice that is both completely trusted and constantly refreshed.  It’s definitely not the only thing companies need to do, but that foundation and posture helps drive a better defense.

I don’t pretend to have complete answers to the cyberattacks we are seeing, but I hope they inspire us to more security discipline.  We are on the cusp of a new wave of automated and fast exploits.

Let us know if you are interested in working with RackN to build a more dynamic infrastructure.


If Private Cloud is dead, where did it go? How did it get there? [JOINT POST]

TL;DR: Hybrid killed IT.

I'm a regular participant on BWG Roundtable calls and often extend those discussions one-on-one.  This post collects questions from one of those follow-up meetings, where we explored how data center markets are changing based on new capacity and the impact of cloud.

We both believe in the simple answer, “it’s going to be hybrid.” We both feel that this answer does not capture the real challenges that customers are facing.

So who are we?  Haynes Strader, Jr. comes at this from a real estate perspective via CBRE Data Center Solutions.  Rob Hirschfeld comes at this from an ops and automation perspective via RackN.  We are in very different aspects of the data center market.

Rob: I know that we’re building a lot of data center capacity.  So far, it’s been really hard to move operations to new infrastructure and mobility is a challenge.  Do you see this too?

Haynes: Yes.  Creating a data center network that is both efficient and affordable is challenging. A couple of key data center interconnection providers offer this model, but few companies are in a position to truly leverage the node-cloud-node model, where a company leverages many small data center locations (colo) that all connect to a cloud option for the bulk of their computing requirements. This works well for smaller companies with a spread-out workforce, or brand new companies with no legacy infrastructure, but the Fortune 2000 still have the majority of their compute sitting in-house in owned facilities that weren’t originally designed to serve as data centers. Moving these legacy systems is nearly impossible.

Rob: I see many companies feeling trapped by these facilities and looking to the cloud as an alternative.  You are describing a lot of inertia in that migration.  Is there something that can help improve mobility?

Haynes: Data centers are physical presences to hold virtual environments. The physical aspect can only be optimized when a company truly understands its virtual footprint. IT capacity planning is key to this. System monitoring and usage analytics are critical to make growth and consolidation decisions. Why isn’t this being adopted more quickly? Is it cost? Is it difficulty to implement in complex IT environments? Is it the fear of the unknown?

Rob: I think that it’s technical debt that makes it hard (and scary) to change.  These systems were built manually or assuming that IT could maintain complete control.  That’s really not how cloud-focused operations work.  Is there a middle step between full cloud and legacy?

Haynes: Creating an environment where a company maximizes the use for its owned assets (leveraging sale leasebacks and forward-thinking financing) vs. waiting until end of life and attempting to dispose leads to opportunities to get capital injections early on and move to an OPEX model. This makes the transition to colo much easier, and avoids a large write-down that comes along with most IT transformations. Colocation is an excellent tool if it is properly negotiated because it can provide a flexible environment that can grow or shrink based on your utilization of other services. Sophisticated colo users know when it makes sense to pay top dollar for an environment that requires hyperconnectivity and when to save money for storage and day-to-day compute. They know when to leverage providers for services and when to manage IT tasks in-house. It is a daunting process, but the initial approach is key to getting to that place in the long term.

Rob:  So I’m back to thinking that the challenge for accessing all these colo opportunities is that it’s still way too hard to move operations between facilities and also between facilities and the cloud.  Until we improve mobility, choosing a provider can be a high stakes decision.  What factors do you recommend reviewing?

Haynes: There is an overwhelming number of factors in picking new colos:

  1. Location
  2. Connectivity/Latency
  3. Cloud Connectivity Options
  4. Pricing
  5. Quality of Services
  6. Security
  7. Hazard Risk Mitigation
  8. Comfort with services/provider
  9. Growth potential
  10. Flexibility of spend/portability (this is becoming ever-more important)

Rob: Yikes!  Are there minor operational differences between colos that are causing breaking changes in operations?

Haynes:  We run into this with our clients occasionally, but it is usually because they created two very different environments with different providers. This is a big reason to use a broker. Creating identical terms, pricing models, SLAs and work flows allow for clients to have a lot of leverage when they go to market. A select few of the top cloud providers do a really good job of this. They dominate the markets that they enter because they have a consistent, reliable process that is replicated globally. They also achieve some of the most attractive pricing and terms in the marketplace on a regular basis.

Rob: That makes sense.  Process matters for the operators, and consistent practices make it easier to work with a partner.  Even so, moving can save a lot of money.  Is that savings justified against the risk and interruption?

Haynes: This is the biggest hurdle that our enterprise clients face. The risk of moving is risking an IT leader’s job. How do we do this with minimal risk and maximum upside? Long-term strategic planning is one answer, but in today’s world, IT leadership changes often and strategies go along with that. We don’t have a silver bullet for this one – but are always looking to partner with IT leaders that want to give it a shot and hopefully save a lot of money.

Rob: So is migration practical?

Haynes: Migration makes our clients cringe, but the ones that really try to take it on and make it happen strategically (not once it is too late) regularly reap the benefits of saving their company money and making them heroes to the organization.

Rob: I guess that brings us back to mixing infrastructures.  I know that public clouds have interconnect with colos that make it possible to avoid picking a single vendor.  Are you seeing this too?

Haynes: Hybrid, hybrid, hybrid. No one is the best one-stop shop. We all love 7-11 and it provides a lot of great solutions on the run, but I’m not grocery shopping there. Same reason I don’t run into a Kroger every time I need a bottle of water. Pick the right solution for the right application and workload.

Rob: That makes sense to me, but I see something different in practice.  Teams are too busy keeping the lights on to take advantage of longer-term thinking.  They seem so busy fighting fires that it’s hard to improve.

Haynes:  I TOTALLY agree. I don’t know how to change this. I get it, though. The CEO says, “We need to be in the cloud, yesterday,” and the CIO jumps. Suddenly everyone’s strategic planning is out the window and it is off to the races to find a quick-fix. Like most things, time and planning often reap more productive results.

Thanks for sharing our discussion!  

We’d love to hear your opinions about it.  We both agree that creating multi-site management abstractions could make life easier on IT and relatable to real estate and finance. With all of these organizations working in sync the world would be a better place. The challenge is figuring out how to get there!
