This TFIR video wraps up our series about AWS Outposts. In this short video, we discuss the broader industry impact and why AWS Outposts validates the need for operator-owned, on-premises infrastructure. Want some background? Check out Part 1 and Part 2.
This video is the deep dive into the technology. Here’s the video introduction if you’re not familiar with AWS Outposts. In the follow-up post, we’ll discuss the impact on the industry.
This 30-minute Packet Pushers Day 2 Cloud podcast with Rob Hirschfeld discusses how Edge infrastructure differs from regular data centers, facing both similar and unique challenges.
This is the first interview of our multi-part series about AWS Outposts and its impact on the IT industry. While not the first data center appliance on the market, Outposts represents the most extreme version of tethered, remotely managed infrastructure available because the control plane for the system remains with AWS and the hardware itself is manufactured by AWS.
In the series, we’ll dig deeply into how the technology works and its impact on the broader IT market.
TL;DR: Hybrid killed IT.
I’m a regular participant on BWG Roundtable calls and often extend those discussions one-on-one. This post collects questions from one of those follow-up meetings, where we explored how data center markets are changing based on new capacity and the impact of cloud.
We both hear the simple answer, “it’s going to be hybrid,” but we both feel that this answer does not capture the real challenges that customers are facing.
So who are we? Haynes Strader, Jr. comes at this from a real estate perspective via CBRE Data Center Solutions. Rob Hirschfeld comes at this from an ops and automation perspective via RackN. We are in very different aspects of the data center market.
Rob: I know that we’re building a lot of data center capacity. So far, it’s been really hard to move operations to new infrastructure and mobility is a challenge. Do you see this too?
Haynes: Yes. Creating a data center network that is both efficient and affordable is challenging. A couple of key data center interconnection providers offer this model, but few companies are in a position to truly leverage the node-cloud-node model, where a company leverages many small data center locations (colo) that all connect to a cloud option for the bulk of their computing requirements. This works well for smaller companies with a spread-out workforce, or brand new companies with no legacy infrastructure, but the Fortune 2000 still have the majority of their compute sitting in-house in owned facilities that weren’t originally designed to serve as data centers. Moving these legacy systems is nearly impossible.
Rob: I see many companies feeling trapped by these facilities and looking to the cloud as an alternative. You are describing a lot of inertia in that migration. Is there something that can help improve mobility?
Haynes: Data centers are physical spaces that hold virtual environments. The physical aspect can only be optimized when a company truly understands its virtual footprint, and IT capacity planning is key to this. System monitoring and usage analytics are critical to making growth and consolidation decisions. Why isn’t this being adopted more quickly? Is it cost? Is it the difficulty of implementation in complex IT environments? Is it fear of the unknown?
Rob: I think that it’s technical debt that makes it hard (and scary) to change. These systems were built manually or assuming that IT could maintain complete control. That’s really not how cloud-focused operations work. Is there a middle step between full cloud and legacy?
Haynes: Creating an environment where a company maximizes the use of its owned assets (leveraging sale-leasebacks and forward-thinking financing), rather than waiting until end of life and attempting to dispose of them, opens opportunities to get capital injections early and move to an OPEX model. This makes the transition to colo much easier and avoids the large write-down that comes along with most IT transformations. Colocation is an excellent tool if it is properly negotiated because it can provide a flexible environment that grows or shrinks based on your utilization of other services. Sophisticated colo users know when it makes sense to pay top dollar for an environment that requires hyperconnectivity and when to save money on storage and day-to-day compute. They know when to leverage providers for services and when to manage IT tasks in-house. It is a daunting process, but the initial approach is key to getting there in the long term.
Rob: So I’m back to thinking that the challenge for accessing all these colo opportunities is that it’s still way too hard to move operations between facilities and also between facilities and the cloud. Until we improve mobility, choosing a provider can be a high stakes decision. What factors do you recommend reviewing?
Haynes: There are an overwhelming number of factors in picking a new colo:
- Cloud connectivity options
- Quality of services
- Hazard risk mitigation
- Comfort with the services and provider
- Growth potential
- Flexibility of spend and portability (this is becoming ever more important)
Rob: Yikes! Are there minor operational differences between colos that are causing breaking changes in operations?
Haynes: We run into this with our clients occasionally, but it is usually because they created two very different environments with different providers. This is a big reason to use a broker. Creating identical terms, pricing models, SLAs, and workflows gives clients a lot of leverage when they go to market. A select few of the top cloud providers do a really good job of this. They dominate the markets that they enter because they have a consistent, reliable process that is replicated globally. They also regularly achieve some of the most attractive pricing and terms in the marketplace.
Rob: That makes sense. Process matters for the operators and consistent practices make it easier to work with a partner. Even so, moving can save a lot of money. Is that savings justified against the risk and interruption?
Haynes: This is the biggest hurdle that our enterprise clients face. Moving puts an IT leader’s job at risk. How do we do this with minimal risk and maximum upside? Long-term strategic planning is one answer, but in today’s world, IT leadership changes often and strategies change with it. We don’t have a silver bullet for this one, but we are always looking to partner with IT leaders who want to give it a shot and hopefully save a lot of money.
Rob: So is migration practical?
Haynes: Migration makes our clients cringe, but the ones that really try to take it on and make it happen strategically (not once it is too late) regularly reap the benefits of saving their company money and making them heroes to the organization.
Rob: I guess that brings us back to mixing infrastructures. I know that public clouds have interconnect with colos that make it possible to avoid picking a single vendor. Are you seeing this too?
Haynes: Hybrid, hybrid, hybrid. No one is the best one-stop shop. We all love 7-11 and it provides a lot of great solutions on the run, but I’m not grocery shopping there. Same reason I don’t run into a Kroger every time I need a bottle of water. Pick the right solution for the right application and workload.
Rob: That makes sense to me, but I see something different in practice. Teams are too busy keeping the lights on to take advantage of longer-term thinking. They seem so busy fighting fires that it’s hard to improve.
Haynes: I TOTALLY agree. I don’t know how to change this. I get it, though. The CEO says, “We need to be in the cloud, yesterday,” and the CIO jumps. Suddenly everyone’s strategic planning is out the window and it is off to the races to find a quick-fix. Like most things, time and planning often reap more productive results.
Thanks for sharing our discussion!
We’d love to hear your opinions about it. We both agree that creating multi-site management abstractions could make life easier for IT and more legible to real estate and finance teams. With all of these organizations working in sync, the world would be a better place. The challenge is figuring out how to get there!
Last week, RackN announced our enterprise support for Kubernetes using nothing but upstream Ansible from the project itself. This milestone represents years of work by the RackN founders to keep platforms interoperable via open and shareable operations automation.
That’s why our Digital Rebar approach targets underlay challenges and leverages existing automation tools instead of inventing yet another install path.
This week, we added Install Wizard templates to the DC/OS install automation we built in collaboration with Mesosphere last year. That makes it even easier to run DC/OS on physical infrastructure. Like our Kubernetes work, the Digital Rebar automation uses the same community dcos_install.sh that’s used in the community documentation. The difference is that we’re also driving all the underlay prep and configuration automatically.
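To make the division of labor concrete, here is a minimal, hypothetical shell sketch of the pattern described above: prepare the underlay first, then hand off to the unmodified community installer. The `prep_underlay` and `install_node` function names are illustrative only (Digital Rebar automates these steps; this is not its actual code), and the real `dcos_install.sh` invocation is commented out so the sketch stays self-contained.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of "underlay prep, then community installer" per node.
set -euo pipefail

prep_underlay() {
  # Digital Rebar would handle hardware/OS provisioning here (BIOS, RAID,
  # networking, base OS); this stub only reports that the step ran.
  echo "underlay ready"
}

install_node() {
  local role="$1"   # e.g. master or slave, per the DC/OS advanced install docs
  prep_underlay
  # The unmodified community step (commented out to keep the sketch runnable
  # without a real DC/OS bootstrap node):
  # sudo bash dcos_install.sh "$role"
  echo "would install DC/OS role: $role"
}

install_node master
```

The key design point is that the community script is consumed as-is: the automation only owns everything that happens before it runs.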
If this approach appeals to you, contact RackN and join in the open Day 2 revolution.
Interested in seeing the DC/OS install in action? Here’s a demo video: