
Containers, Private Clouds, GIFEE, and the Underlay Problem

IT Revolution

Gene Kim (@RealGeneKim) posted an exclusive Q&A with Rob Hirschfeld (@zehicle) today on IT Revolution: Rob Hirschfeld on Containers, Private Clouds, GIFEE, and the Remaining “Underlay Problem.”

Questions from the post:

  • Gene Kim: Tell me about the landscape of Docker, OpenStack, Kubernetes, etc. How do they all relate, what’s changed, and who’s winning?
  • GK: I recently saw a tweet that I thought was super funny, saying something along the lines of “friends don’t let friends build private clouds” — obviously, given all your involvement in the OpenStack community for so many years, I know you disagree with that statement. What is it that you think everyone should know about private clouds that tells the other side of the story?
  • GK: We talked about how much you loved the book Site Reliability Engineering: How Google Runs Production Systems by Betsy Beyer, which I also loved. What resonated with you, and how do you think it relates to how we do Ops work in the next decade?
  • GK: Tell me about the work you did with Crowbar, and how that informs the work you’re currently doing with Digital Rebar?

Read the full Q&A here.


Weekly Recap of All Things Site Reliability Engineering (SRE)


Welcome to the first post of the RackN blog recap of all things SRE. If you have any ideas for this recap or would like to contribute content, please contact us at info@rackn.com.

SRE Items of the Week

Things I Learned Managing Site Reliability for Some of the World’s Busiest Gambling Sites by Ian Miell

INTRO TO POST

For several years I managed the 3rd line site reliability operation for many of the world’s busiest gambling sites, working for a little-known company that built and ran the core backend online software for several businesses that each at peak could take tens of millions of pounds in revenue per hour. I left a couple of years ago, so it’s a good time to reflect on what I learned in the process.

In many ways, what we did was similar to what’s now called an SRE function (I’m going to call us SREs, but the acronym didn’t exist at the time). We were on call, had to respond to incidents, made recommendations for re-engineering, provided robust feedback to developers and customer teams, managed escalations and emergency situations, ran monitoring systems, and so on.

The team I joined was around 5 engineers (all former developers and technical leaders), which grew to around 50 engineers of more mixed experience across multiple locations by the time I left.

I’m going to focus here on process and documentation, since I don’t think they’re talked about usefully enough where I do read about them.
_____

2017 trend lines: When DevOps and hybrid collide by Rob Hirschfeld (@zehicle)
IBM Cloud Computing News

INTRO TO POST

What happens when DevOps methods meet hybrid environments? Following are some emerging trends and my commentary on each.

There are two major casualties as the pace of innovation in IT continues to accelerate: manual processes (non-DevOps) and tightly-coupled software stacks (non-hybrid).

We are changing some things much too quickly for developers and operators to keep up using processes that require human intervention in routine activities like integrated testing or deployment. Furthermore, monolithic platforms—our traditional “duck-and-cover” protection from pace of change—are less attractive for numerous reasons, including slower pace, vendor lock-in and lack of choice.

RECENT SRE AND DEVOPS EVENTS

SRECon17 Americas

CloudNativeCon + KubeCon 2017, March 29-30, 2017 in Berlin

IBM InterConnect, March 2017 in Las Vegas, NV

  • Christopher Ferris (IBM CTO, Open Technology) and Rob Hirschfeld, “Open Cloud Architecture: Think You Can Out-Innovate the Best of the Rest” – SLIDES

DevOps Summit

  • “Best Practices in Operating Hybrid Infrastructure that Spans Clouds and the Data Center” – BLOG / SLIDES

UPCOMING MEETUPS & PODCASTS

Continuous Discussions (#c9d9) Episode 66: Scaling Agile and DevOps in the Enterprise – April 11, 2017 at 10am PT. Rob Hirschfeld is a guest on this Electric Cloud podcast.

UPCOMING EVENTS FOR RACKN

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events, please email info@rackn.com.

DockerCon 2017: April 17-20, 2017 in Austin, TX
DevOpsDays Austin: May 4-5, 2017 in Austin, TX
OpenStack Summit: May 8-11, 2017 in Boston, MA
Interop ITX: May 15-19, 2017 in Las Vegas, NV
Open Source IT Summit: Tuesday, May 16, 9:00am-5:00pm – Rob Hirschfeld to speak
Gluecon: May 24-25, 2017 in Denver, CO

  • Surviving Day 2 in Open Source Hybrid Automation – May 23, 2017: Rob Hirschfeld and Greg Althaus

OTHER NEWSLETTERS

SRE Weekly (@SREWeekly) Issue #66


Don’t Balkanize My Installer, Yo!

Last week, RackN announced our enterprise support for Kubernetes using nothing but upstream Ansible from the project itself. This represents years of work by the RackN founders to keep platforms interoperable via open and shareable operations automation.

That’s why our Digital Rebar approach targets underlay challenges and leverages existing automation tools instead of investing in yet another install path.

This week, we added Install Wizard templates to the DC/OS install automation we built in collaboration with Mesosphere last year. That makes it even easier to run DC/OS on physical infrastructure. Like our Kubernetes work, the Digital Rebar automation uses the same community dcos_install.sh that’s used in the community documentation. The difference is that we’re also driving all the underlay prep and configuration automatically.
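
For a concrete sense of the pattern, here is a minimal sketch (illustrative only, not the actual Digital Rebar implementation; the node addresses, roles, and prepare_underlay() steps are hypothetical): automated underlay preparation is paired with the unmodified community dcos_install.sh on each node.

```python
# Minimal sketch only: shows the general pattern, not RackN's actual
# workflow. Addresses, roles, and prepare_underlay() are hypothetical.
import subprocess

NODES = {
    "10.0.0.10": "master",
    "10.0.0.11": "slave",
    "10.0.0.12": "slave",
}

def prepare_underlay(address: str) -> None:
    """Placeholder for underlay prep: BIOS/RAID setup, OS install,
    networking, and the prerequisites DC/OS expects on each node."""
    print(f"underlay ready: {address}")

def install_dcos(address: str, role: str) -> None:
    """Run the unmodified community install script on the node with its
    role, as the DC/OS community documentation describes."""
    subprocess.run(
        ["ssh", address, "sudo", "bash", "dcos_install.sh", role],
        check=True,
    )

if __name__ == "__main__":
    for address, role in NODES.items():
        prepare_underlay(address)
        install_dcos(address, role)
```

The point of the shape is that the community script stays untouched; only the underlay steps around it are automated.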

If this approach appeals to you, contact RackN and join in the open Day 2 revolution.

Interested in seeing the DC/OS install in action? Here’s a demo video:


Are you impatient enough to be an SRE?

Our SRE series continues… At RackN, we see a coming infrastructure explosion in both complexity and scale. Unless our industry radically rethinks operational processes, current backlogs will escalate, and stability, security and sharing will suffer.

SRE-minded teams are very impatient about eliminating manual, routine and non-differentiated work.

I’ve been talking to a lot of people about SRE lately in the context of helping Ops get out of the way while coping with increasing load and complexity. Why are they so impatient? Because ops demand is constantly increasing, there’s no “good enough” when it comes to finding ways to automate tasks and move up the stack. Without consistent improvement in automation, teams will get buried (see my post about Ops Debt).

The core SRE mantra needs to be “Own Ops, don’t be owned by Ops.”

Yet outsourcing ops responsibility to a service is equally problematic for an SRE. They cannot give up responsibility for the integrated system. In fact, that’s one of the basic reasons why Google’s SRE teams went from just “web site reliability” to full-system thinking. Every aspect of the infrastructure stack needs to be considered when looking at system performance and reliability. For example, something deep like SSD drive write behavior or a GPU BIOS could make a critical difference. SREs need to be able to root-cause issues, and black-box infrastructure (a.k.a. cloud) can get in the way.

SRE teams must balance owning the full stack against focusing on what makes their job unique.

That’s why we have been rethinking how SRE teams approach infrastructure. Instead of trying to turn infrastructure into a black-box service, we’ve designed Digital Rebar, a composable Ops platform that embraces and contains heterogeneity with a high degree of transparency and control. This is critical because SREs cannot afford to keep reinventing automation at the bottom of the stack. We must be able to share and leverage best practices for infrastructure provisioning and platform deployment.

Like the hardware that runs it, the foundation automation layer must be commoditized.

That means operators should be able to buy infrastructure (physical and cloud) from any vendor and run it in a consistent way. Instead of days or weeks to get infrastructure running, it should take hours and be fully automated from power-on. We should be able to rehearse on cloud and transfer that automation directly to (and from) physical infrastructure without modification. That practice and pace should be the norm instead of the exception.
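
As a rough sketch of that idea (illustrative only, not a real Digital Rebar API; all class and function names here are hypothetical), the deployment automation codes against a single provider interface, so the same logic can rehearse on cloud and then run unchanged on metal:

```python
# Illustrative sketch, not a real Digital Rebar API; names are hypothetical.
from abc import ABC, abstractmethod
from typing import List

class Provider(ABC):
    @abstractmethod
    def provision(self, count: int) -> List[str]:
        """Return addresses of booted, ready-to-configure machines."""

class CloudProvider(Provider):
    def provision(self, count: int) -> List[str]:
        # Rehearsal environment: a cloud API call would go here.
        return [f"cloud-node-{i}" for i in range(count)]

class MetalProvider(Provider):
    def provision(self, count: int) -> List[str]:
        # Production environment: power-on, PXE boot, and OS install here.
        return [f"rack-node-{i}" for i in range(count)]

def deploy_platform(provider: Provider) -> None:
    # Identical automation regardless of where the machines came from.
    for node in provider.provision(3):
        print(f"configuring {node}")

deploy_platform(CloudProvider())  # rehearse on cloud
deploy_platform(MetalProvider())  # transfer unchanged to physical
```

The design point is that only the provider binding changes; the deployment steps themselves are never forked per environment.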

That’s what we are building at RackN.  Our primary goal is to reuse automation whenever possible.  That was our top design priority for Digital Rebar and it drives our customer engagement models.  If you’d like to hear more, download our SRE white paper.



Shouldn’t we have Standard Automation for Commodity Infrastructure?

Our SRE series continues… At RackN, we see a coming infrastructure explosion in both complexity and scale. Unless our industry radically rethinks operational processes, current backlogs will escalate, and stability, security and sharing will suffer.

An entire chapter of the Google SRE book is dedicated to the benefits of improving data center provisioning via automation; however, the description is abstract, focusing on the importance of validation testing and self-healing. That lack of detail is not surprising: Google’s infrastructure automation is highly specialized and considered a competitive advantage.

Shouldn’t everyone be able to do this?

After all, data centers are built from the same basic components with the same protocols.

Unfortunately, the stack of small (but critical) variations between these components makes it very difficult to build a universal solution. Reasonable variations like hardware configurations, vendor out-of-band management protocols, operating systems, support systems and networking topologies add up quickly. Even Google, with their tremendous SRE talent and time investments, only built a solution for their specific needs.

To handle this variation, SRE teams bake assumptions about their infrastructure directly into their automation. That’s expedient because there’s generally little operational reward for creating generic solutions to specific problems. I see this all the time in data centers where server naming conventions and IP address schemes are the automation glue between tools and processes. While this may be a practical tactic for integration, it is fragile and site-specific.

Hard coding your operational environment into automation has serious downsides.

First, it creates operational debt [reference] just like hard coding values does in regular development. Please don’t mistake this as a call for yak-shaving provisioning scripts into open-ended models! There’s a happy medium where scripts can be robust about infrastructure details like IPs, NIC ordering, system names and operating system behavior without compromising readability and development time.
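
Here is a minimal before/after sketch of that happy medium (hypothetical names, IPs, and keys; illustrative Python rather than real provisioning code): the same step with site assumptions baked in versus isolated in per-site data.

```python
# Before/after sketch; names, IPs, and inventory keys are hypothetical.

# Hard coded: IPs, names, and NIC ordering are baked into the script,
# so every new site means editing (or forking) the code.
def provision_hardcoded() -> None:
    for ip in ("192.168.1.10", "192.168.1.11"):
        print(f"provisioning db-austin via eth0 at {ip}")

# Parameterized: identical logic, with site specifics isolated in data
# that stays readable without turning into an open-ended model.
def provision(site: dict) -> None:
    for node in site["nodes"]:
        print(f"provisioning {node['name']} via {site['boot_nic']} at {node['ip']}")

# In practice this would be loaded from a per-site inventory file;
# inlined here so the sketch runs standalone.
SITE = {
    "boot_nic": "eth0",
    "nodes": [
        {"name": "db-1", "ip": "192.168.1.10"},
        {"name": "db-2", "ip": "192.168.1.11"},
    ],
}

provision_hardcoded()
provision(SITE)
```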

Second, it eliminates reuse, because code that works in one place must be forked (or copied) to be used again. Forking creates a proliferation of sources of truth and technical debt. Unlike a shared script, forked scripts do not benefit from mutual improvements. This is true both for internal use and when external communities advance. I have seen many cases where a company’s decision to fork away from open source code to “adjust it for their needs” caused them to forever lose the benefits accrued in the upstream community.

Consequently, Ops debt accumulates quickly when these infrastructure-specific items are coded into scripts, because you have to touch a lot of code to make even small changes. You also end up with hidden dependencies.

However, until recently, we have not given SRE teams an alternative to site customization.

Of course, the alternative requires some additional investment up front.  Hard coding and forking are faster out of the gate; however, the SRE mandate is to aggressively reduce ongoing maintenance tasks wherever possible.  When core automation is site customized, Ops loses the benefits of reuse both internally and externally.

That’s why we believe SRE teams must work to reuse automation whenever possible.

Digital Rebar was built from our frustration watching the OpenStack community struggle with exactly this lesson. We felt that having a platform for sharing code was essential; however, we also observed that differences between sites made it impossible to share code directly. Our solution was to isolate those changes into composable units. That isolation allowed us to take a system-integration view that did not break when inevitable changes were introduced.
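
A toy sketch of the composable-unit idea (hypothetical names, not the actual Digital Rebar model): the workflow is a sequence of small steps, shared steps stay common to every site, and only the genuinely site-specific unit gets swapped.

```python
# Toy model of composable units; all names are hypothetical.
from typing import Callable, List

Step = Callable[[str], None]

def discover(node: str) -> None:
    print(f"{node}: hardware inventory")        # shared unit

def install_os(node: str) -> None:
    print(f"{node}: base OS install")           # shared unit

def site_a_network(node: str) -> None:
    print(f"{node}: bonded NICs, VLAN 200")     # site-specific unit

def site_b_network(node: str) -> None:
    print(f"{node}: single NIC, flat network")  # site-specific unit

def run_workflow(node: str, steps: List[Step]) -> None:
    for step in steps:
        step(node)

# Both sites share discover/install_os, so improvements to the shared
# units flow to everyone; only the network unit differs per site.
run_workflow("rack1-n1", [discover, install_os, site_a_network])
run_workflow("rack2-n1", [discover, install_os, site_b_network])
```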

If you are interested in breaking out of the script-customization death spiral, then review what the RackN team has done with Digital Rebar.

Even if you don’t use the code, the approach could save your SRE team a lot of heartburn down the road.  Of course, if you do want to use it then just contact us at sre@rackn.com.


The Danger of SRE Backlogs

This post is part of an SRE series grounded in ideas from the Google SRE book. Every Ops team I know is underwater and doesn’t have the time to catch their breath. Why does the load increase and leave Ops behind? It’s because IT is increasingly fragmented and siloed by both new tech and […]

via Spiraling Ops Debt & the SRE coding imperative — Rob Hirschfeld