
Making Server Deployment 10x Faster – the ROI on Immutable Infrastructure

Author’s note: We’re looking for RackN Beta participants who want to help refine next generation deployment capabilities like the one described below.  We have these processes working today – our goal is to make them broadly reusable and standardized.

We’ve been posting [Go CI/CD and Immutable Infrastructure for Edge Computing Management] and podcasting [Discoposse: The Death of Configuration Management, Immutable Deployment Challenges for DevOps] about the concept of immutable infrastructure because it offers simpler and more repeatable operations processes. Delivering a pre-built image with software that’s already installed and mostly configured can greatly simplify deployment (see cloud-init).  It is simpler because all of the “moving parts” of the image can be pre-wired together and tested as a unit.  This model is the default for containers, but it’s also widely used in cloud deployments where it’s easy to push an AMI or VHD to the cloud as a master image.

It takes work and expertise to automate building these immutable images, so it’s important to understand the benefits of simplicity, repeatability and speed.

  • Simplicity: Traditional configuration approaches start from an operating system base and then run configuration scripts to install the application and its prerequisites.  This configuration process requires many steps that are sequence dependent and have external dependencies.  Even small changes can break the whole sequence and prevent deployments.  By delivering an image instead, deploy-time integration and configuration issues are eliminated.
  • Repeatability: Since the deliverable is an image, every environment uses the exact same artifact across dev, test and production (see the sketch after this list).  That consistency reduces error rates and encourages cross-team collaboration because all parties are invested in the provenance of the images.  In fact, immutable images are a great way to ensure that development and operations are at the table because neither team can create a custom environment.
  • Speed: Post-deployment configuration is slow.  If your installation has to pull patches, libraries and other components every time you install it, then you’ll spend a lot of time waiting for downloads.  Believe it or not, the overhead of downloading a full image is small compared to the incremental delays of configuring an application stack.  Even the compromise of pre-staging items and then running local-only configuration still takes a surprisingly long time.
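
To make the repeatability point concrete, here’s a minimal Go sketch of the kind of check an image-based pipeline can run before any deployment: every environment verifies the artifact’s digest against the release manifest, so dev, test and production provably deploy identical bits. The file name, digest and manifest handling are illustrative assumptions, not any specific RackN tooling.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    // verifyImage confirms that a local image file matches the digest recorded
    // in the release manifest before anything is deployed from it.
    func verifyImage(path, wantHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
            return fmt.Errorf("image digest mismatch: got %s, want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        // The file name and digest are placeholders; in practice the expected
        // digest comes from a signed manifest shared by dev, test and production.
        if err := verifyImage("app-golden-v42.img", "c0ffee..."); err != nil {
            log.Fatal(err)
        }
        fmt.Println("verified: every environment deploys this exact artifact")
    }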

These benefits have been relatively easy to realize with Docker containers (it’s built in!) or VM images; however, they are much harder to realize with physical systems.  Containers and VMs provide a consistent abstraction that is missing in hardware.  Variations in networking, storage or even memory can cause image deployments to fail.

But… if we could do image-based deployments to metal, then we’d be able to gain these significant advantages.  We’d also be able to create portability of images between cloud and physical infrastructure.  Between the pure speed of writing images directly to disk (compared to kickstart or preseed) and the elimination of post-provision configuration, immutable metal deploys can be 5x to 10x faster.

Deployment times drop from 30 minutes down to 6 or even 3.  That’s a very big deal.

That’s exactly why RackN has been working to create a standardized, repeatable process for immutable deployments.  We have this process working today with some expert steps required in image creation.  

If this type of process would help your operations team then please contact us and join the RackN Beta Program with advanced extensions for Digital Rebar Provision.

Note: There are risks to this approach as well.  There is no system-wide patch or update mechanism except creating a new image and redeploying.  That means it takes more time to generate and roll an emergency patch to all systems.  Also, even small changes require replacing whole images.  These are both practical concerns; however, they are mitigated by maintaining a robust continuous deployment process where images are constantly refreshed.
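
As a rough sketch of that mitigation, the loop below rebuilds the golden image on a fixed cadence and triggers a rolling redeploy; an emergency patch is simply an out-of-cycle run of the same loop. The buildImage and rollOut functions are hypothetical stand-ins for whatever image builder and deployment tooling is actually in place.

    package main

    import (
        "log"
        "time"
    )

    // buildImage and rollOut are hypothetical hooks for the image builder
    // (e.g. a CI job) and the rolling redeploy of the fleet.
    func buildImage() (string, error) { return "app-golden-v43.img", nil }

    func rollOut(image string) error {
        log.Printf("rolling %s across the fleet", image)
        return nil
    }

    func main() {
        // Rebuild and redeploy on a fixed cadence; an emergency patch is just
        // an out-of-cycle iteration of this same build-and-roll loop.
        for range time.Tick(24 * time.Hour) {
            image, err := buildImage()
            if err != nil {
                log.Printf("build failed, fleet keeps the previous image: %v", err)
                continue
            }
            if err := rollOut(image); err != nil {
                log.Printf("rollout halted: %v", err)
            }
        }
    }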


Fast, Simple, and Open: 10x ROI of Building Infrastructure in Layers

Last week, we released a new white paper: Fast, Simple, and Open: 10x ROI of Building Infrastructure in Layers. This blog highlights that white paper and provides links for additional information.

Executive Summary

RackN allows Enterprises to quickly transform their current physical data centers from basic workflows to cloud-like integrated processes. We turned decades of data center experience into data center provisioning software so simple it only takes 5 minutes to install, and it provides a progressive path to full autonomy. Our critical insight was to deliver automation in a layered way that allows operations teams to quickly adopt the platform into their current processes and incrementally add autonomous and self-service features.

Introduction

This short paper discusses the history and key architectural drivers for the RackN open source component known as Digital Rebar Provision. We describe how we designed independent architecture layers for Provision, Control and Orchestration that smoothly underlay popular tools like Ansible, Terraform, Chef and Puppet. We also discuss how RackN enhances the Digital Rebar Provision scaffolding with downloadable packages and a centralized management interface. Together, Digital Rebar Provision and RackN deliver a non-disruptive, progressive approach to data center automation that can drive a 10x (or higher!) improvement in infrastructure ROI.

Read the Complete White Paper:  9.17 RackN White Paper_Fast, Simple and Open

Get Started with Digital Rebar Provision and RackN today:


October 6 – Weekly Recap of All Things Digital Rebar and RackN

Welcome to the weekly post of the RackN blog recap of all things Digital Rebar, RackN, SRE, and DevOps. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo).

Items of the Week

RackN

RackN Beta Program Launch

Blog Post: Fast, Simple, Open Provisioning – Rethinking Infrastructure with Cloud-Centric Automation

Operating hardware is too hard today. And too expensive.  Let’s fix that.

The problem with physical ops is not that it’s hard, complex or fragile. Okay, it is, and those ARE problems, but they are compounded by the lack of shared management software and practices at this layer.  When the RackN team set out to solve these physical challenges, we knew the software had to be very focused to replace the current Cobbler and Foreman environments. It also had to be flexible and composable for heterogeneous environments or we’d be right back into snowflake custom DevOps.

We’re talking about a platform that finally addresses full lifecycle control at the hardware layer with open software.  That’s complex stuff automated in a reusable way.

Read More

Podcast

To participate in the beta, please email us at beta@rackn.com, add your email on the RackN Beta Program website, or contact us on Twitter at @rackngo.

Digital Rebar 

Next Week – Digital Rebar Community Meetup #2

October 10 at 11:00am PST

Proposed outline agenda:

  • Welcome and recap from v001 meetup
  • demo: Kubernetes deployment via DRP / packet.net
  • demo: Injecting passwords and SSH keys
  • demo: Content Loading – demo and information
  • Weekly or every-other-week meetups? https://www.meetup.com/digitalrebar/polls/1255504/
  • Release planning and features for v3.2.0

More Information at https://www.meetup.com/digitalrebar/events/243490128/

New Digital Rebar Provision Videos:

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events, please email info@rackn.com.

If you are attending any of these events, please reach out to Rob Hirschfeld to set up time to learn more about our solutions or discuss the latest industry trends.

OTHER NEWSLETTERS


Fast, Simple, Open Provisioning – Rethinking Infrastructure with Cloud-Centric Automation

Operating hardware is too hard today. And too expensive.  Let’s fix that.

The problem with physical ops is not that it’s hard, complex or fragile. Okay, it is, and those ARE problems, but they are compounded by the lack of shared management software and practices at this layer.  When the RackN team set out to solve these physical challenges, we knew the software had to be very focused to replace the current Cobbler and Foreman environments. It also had to be flexible and composable for heterogeneous environments or we’d be right back into snowflake custom DevOps.

We’re talking about a platform that finally addresses full lifecycle control at the hardware layer with open software.  That’s complex stuff automated in a reusable way.

Even worse, being both simple and flexible for ops is a design nightmare.

Yet, we think we’ve found the right balance by combining Digital Rebar Provision v3.1 with an online library of extension packages from RackN.  Keeping Digital Rebar Provision lightweight with minimal bootstrapping and configuration makes it simple to operate.  The RackN user interface (UI) makes the service even easier to use, allowing users to pick from a catalog of next steps.

We’re asking for your help to redefine data center economics from these basic building blocks and then join our journey from simple automation to full autonomy.

We are pleased to announce the RackN Beta Program today: your opportunity to evaluate our current solution and work with us to solve your provisioning challenges. To participate in the beta, please email us at beta@rackn.com, add your email on the RackN Beta Program website, or contact us on Twitter at @rackngo.

For more information on the RackN Beta Program, please listen to this podcast:


September 29 – Weekly Recap Of All Things Digital Rebar And RackN

Welcome to the weekly post of the RackN blog recap of all things Digital Rebar, RackN, SRE, and DevOps. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo).

Items of the Week

Digital Rebar Community

The Community held its first Online Meetup on Tuesday to select the final name of the Mascot as well as cover the latest information on the Digital Rebar Provision 3.1 release. As for the Mascot, Cloudia is the official name of our bear!

Additional new DRP v3.1 videos available:

Events Updates

HashiConf 2017 

Messy yet Effective Hybrid Portability – Rob Hirschfeld’s Post on the Event

Last week, I was able to attend the HashiConf 2017 event in my hometown of Austin, Texas.  HashiCorp has a significant following of loyal fans for their platforms and the show reflected their enthusiasm for the HashiCorp clean and functional design aesthetic.  I count the RackN team in that list – we embedded Consul deeply into Digital Rebar v2 and recently announced a cutting-edge bare metal Terraform integration (demo video) with Digital Rebar Provision (v3).

Overall, the show was impressively executed.  It was a comfortable size to connect with attendees, and most of the attendees were users instead of vendors.  The announcements at the show were also notable.  HashiCorp announced enterprise versions of all their popular platforms including Consul, Vault, Nomad and Terraform.  The enterprise versions also include a cross-cutting service, Sentinel, that provides a policy engine to help enforce corporate governance. READ MORE

RackN 

New Product Page on Rackn.com

Have you been to our newly launched product page? If not, click on over now to see the latest on our Data Center Infrastructure provisioning software solution leveraging Digital Rebar Provision 3.1.

Podcast – Challenges of CIOs and Operators for DevOps

Rob Hirschfeld, Co-Founder/CEO of RackN, discusses the challenges of DevOps from the CIO and operator viewpoints and how critical it is for each group to better understand the issues the other faces. Only then can a true DevOps experience take hold.

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events, please email info@rackn.com.

If you are attending any of these events, please reach out to Rob Hirschfeld to set up time to learn more about our solutions or discuss the latest industry trends.

OTHER NEWSLETTERS


September 22 – Weekly Recap of All Things Digital Rebar and RackN

Welcome to the weekly post of the RackN blog recap of all things Digital Rebar, RackN, SRE, and DevOps. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo).

Items of the Week

This week, RackN released a new high-level image highlighting the RackN and Digital Rebar solution and how they operate together to deliver provisioning services. Next week, we will provide further detail into how Digital Rebar operates between RackN Infrastructure Management and the provisioned hardware and VMs.

Digital Rebar Community

Terraform to Metal with Digital Rebar
Data Center Bacon Blog

We’ve built a buttery smooth Terraform provider for bare metal that runs equally well on physical servers, Packet.net servers or VirtualBox VMs. If you like HashiCorp Terraform and want it to own your data center too, then read on.

Deep into the Digital Rebar Provision (DRP) release plan, a customer asked the RackN team to build a Terraform provider for DRP.  They had some very specific requirements that would stress all the new workflows and out-of-band management features in the release: in many ways, this integration is the ultimate proof point for DRP v3.1 because it drives DRP autonomously.

The primary goal was simple: run a data center as a resource pool for Terraform.
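
To illustrate the resource-pool idea, here’s a rough Go sketch (not the actual provider code) of the claim step such a provider has to perform: find a free machine in the provisioner’s inventory, reserve it with the requested profile, and hand the machine’s identity back to Terraform as the resource ID. The endpoint paths and JSON fields below are hypothetical.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Machine is a minimal view of an inventory record; the fields are
    // illustrative rather than the real DRP schema.
    type Machine struct {
        UUID      string `json:"uuid"`
        Available bool   `json:"available"`
    }

    // claimMachine picks a free machine from the provisioner's pool and reserves
    // it with the requested profile. A Terraform provider's create step for a
    // machine resource wraps logic like this and stores the UUID as the
    // resource ID. The URLs and payloads here are hypothetical.
    func claimMachine(endpoint, profile string) (string, error) {
        resp, err := http.Get(endpoint + "/api/machines?available=true")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()

        var pool []Machine
        if err := json.NewDecoder(resp.Body).Decode(&pool); err != nil {
            return "", err
        }
        if len(pool) == 0 {
            return "", fmt.Errorf("no free machines in the pool")
        }

        claim, _ := json.Marshal(map[string]string{"profile": profile})
        _, err = http.Post(endpoint+"/api/machines/"+pool[0].UUID+"/claim",
            "application/json", bytes.NewReader(claim))
        return pool[0].UUID, err
    }

    func main() {
        uuid, err := claimMachine("http://provisioner.example.local:8092", "web-tier")
        if err != nil {
            panic(err)
        }
        fmt.Println("claimed machine", uuid)
    }

The destroy step would do the inverse, releasing the machine back into the pool, so the data center really does behave like an elastic resource from Terraform’s point of view.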

Digital Rebar and Terraform Provisioning Podcast

Digital Rebar v3.1 Product Launch
Product Launch Blog

We’ve made open network provisioning radically simpler.  So simple, you can install in 5 minutes and be provisioning in under 30.  That’s a bold claim, but it’s also an essential deliverable for us to bridge the Ops execution gap in a way that does not disrupt your existing tool chains.

The v3 mantra is about starting simple and allowing users to grow automation incrementally.  RackN has been building advanced automation packages and powerful UX management to support that mission.

Key v3.1 Features:

  • Layered Storage System
  • Content Packaging System
  • Plug-In System
  • Stages, Tasks & Jobs
  • Websocket API for Event Subscription (see the sketch after this list)
  • Embedded UI
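
As a rough illustration of what the Websocket event API enables, the Go sketch below uses the gorilla/websocket client to connect to an event stream and log each message. The endpoint path and subscription message are assumptions for illustration only; the Digital Rebar Provision documentation describes the real interface.

    package main

    import (
        "log"

        "github.com/gorilla/websocket"
    )

    func main() {
        // The endpoint path and subscription format are illustrative; check the
        // Digital Rebar Provision docs for the documented event API.
        conn, _, err := websocket.DefaultDialer.Dial(
            "ws://provisioner.example.local:8092/api/v3/ws", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Ask the server to stream machine lifecycle events (hypothetical message).
        if err := conn.WriteMessage(websocket.TextMessage,
            []byte("register machines.*.*")); err != nil {
            log.Fatal(err)
        }

        // Log every event as it arrives; a real consumer would drive UI updates
        // or downstream automation from these messages.
        for {
            _, msg, err := conn.ReadMessage()
            if err != nil {
                log.Fatal(err)
            }
            log.Printf("event: %s", msg)
        }
    }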

Digital Rebar Provision 3.1 Launch Podcast

First Online Meetup: Sept 26, 2017 at 11:00am PST
Join Meetup Group Here: Meetup Announcement Blog

Topics for Meetup:

  • Welcome
  • Introduction to Digital Rebar Provision (DRP) and RackN
  • Naming the Digital Rebar mascot
  • Discussion on DRP version 3.1 features
  • Feature and roadmap planning for DRP version 3.2
  • Use Github Projects or Trello Board
  • Demo of DRP workload deployment
  • Getting in touch with the Digital Rebar community and RackN
  • Questions and answers period

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events, please email info@rackn.com.

If you are attending any of these events, please reach out to Rob Hirschfeld to set up time to learn more about our solutions or discuss the latest industry trends.

OTHER NEWSLETTERS


Go CI/CD and Immutable Infrastructure for Edge Computing Management

In our last post, we pretty much tore apart the idea of running mini-clouds on the edge because they are not designed to be managed at scale in resource-constrained environments without deep hardware automation.  While I’m a huge advocate of API-driven infrastructure, I don’t believe in a one-size-fits-all API because a good API provides purpose-driven abstractions.

The logical extension is that having deep hardware automation means there’s no need for cloud (aka virtual infrastructure) APIs.  This is exactly what container-focused customers have been telling us at RackN in regular data centers so we’d expect the same to apply for edge infrastructure.

If “cloudification” is not the solution then where should we look for management patterns?  

We believe that software development CI/CD and immutable infrastructure patterns are well suited to edge infrastructure use cases.  We discussed this at a session at the OpenStack OpenDev Edge summit.

Continuous Integration / Continuous Delivery (CI/CD) software pipelines help to manage environments where the risk of making changes is significant by breaking the changes into small, verifiable units.  This is essential for edge because lack of physical access makes it very hard to mitigate problems.  Using CI/CD, especially with A/B testing, allows for controlled rolling distribution of new software.  

For example, in a 10,000 site deployment, the CI/CD infrastructure would continuously roll out updates and patches over the entire system.  Small incremental changes reduce the risk of a major flaw being introduced.  The effect is enhanced when changes are rolled slowly over the entire fleet instead of being rolled out to all sites simultaneously (known as A/B or blue/green testing).  In the rolling deployment scenario, breaking changes can be detected and stopped before they have significant impacts.
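
Here’s a minimal Go sketch of that rolling pattern, assuming hypothetical deploySite and siteHealthy hooks for whatever tooling actually ships the image: sites are updated in small waves, and the rollout halts as soon as a wave fails its health checks.

    package main

    import (
        "fmt"
        "log"
    )

    // deploySite and siteHealthy are hypothetical hooks around whatever mechanism
    // actually ships the new image to a site and verifies it afterwards.
    func deploySite(site, image string) error { return nil }
    func siteHealthy(site string) bool        { return true }

    // rollOut updates sites in small waves and halts on the first failure,
    // limiting the blast radius of a bad image.
    func rollOut(sites []string, image string, waveSize int) error {
        for start := 0; start < len(sites); start += waveSize {
            end := start + waveSize
            if end > len(sites) {
                end = len(sites)
            }
            for _, site := range sites[start:end] {
                if err := deploySite(site, image); err != nil {
                    return fmt.Errorf("deploy to %s failed: %v", site, err)
                }
                if !siteHealthy(site) {
                    return fmt.Errorf("halting rollout: %s unhealthy on %s", site, image)
                }
            }
            log.Printf("sites %d-%d healthy on %s", start+1, end, image)
        }
        return nil
    }

    func main() {
        sites := []string{"site-0001", "site-0002", "site-0003", "site-0004"}
        if err := rollOut(sites, "edge-stack-v7.img", 2); err != nil {
            log.Fatal(err)
        }
    }

The wave size and health checks are where the A/B or blue/green judgment comes in; the sketch only shows the control flow that keeps a breaking change from reaching the whole fleet.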

These processes and the support software systems are already in place for large scale cloud software deployments.  There are likely gaps around physical proximity and heterogeneity; however, the process is there and initial use-case fit seems to be very good.

Immutable Infrastructure is a catch-all term for deployments based on images instead of configuration.  This concept is popular in cloud deployments where teams produce “golden” VM or container images that contain the exact version of software needed and are then provisioned with minimal secondary configuration.  In most cases, the images only need a small file injected (via cloud-init) to complete the process.

In this immutable pattern, images are never updated post-deployment; instead, instances are destroyed and recreated.  It’s a deploy, destroy, repeat process.  At RackN, we’ve been able to adapt Digital Rebar Provision to support this even at the hardware layer, where images are delivered directly to disk and re-provisioning happens on a constant basis, just like a cloud managing VMs.
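
As a rough sketch of what image-to-disk provisioning looks like (device names and paths are illustrative, not DRP specifics): the prebuilt image is streamed straight onto the target disk, one small per-node file is injected afterwards, and re-provisioning is simply running the same steps again with a fresh image.

    package main

    import (
        "io"
        "log"
        "os"
    )

    // writeImage streams a prebuilt system image directly onto the target block
    // device; no package installation or configuration scripts run at this point.
    func writeImage(imagePath, device string) error {
        src, err := os.Open(imagePath)
        if err != nil {
            return err
        }
        defer src.Close()

        dst, err := os.OpenFile(device, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer dst.Close()

        _, err = io.Copy(dst, src)
        return err
    }

    func main() {
        // Paths and device names are illustrative and assume this runs from a
        // provisioning environment that mounts the freshly written filesystem
        // at /mnt/target before the injection step.
        if err := writeImage("golden-v12.img", "/dev/sda"); err != nil {
            log.Fatal(err)
        }
        // Inject the one small per-node file; everything else ships in the image.
        node := []byte("hostname=edge-0042\n")
        if err := os.WriteFile("/mnt/target/etc/node-config", node, 0o600); err != nil {
            log.Fatal(err)
        }
        log.Println("image written; reboot into it and re-provision by repeating these steps")
    }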

The advantage of the immutable pattern is that we create a very repeatable and controlled environment.  Instead of trying to maintain elaborate configurations and bi-directional systems of record, we can simply reset whole environments.  In a CI/CD system, we constantly generate fresh images that are incrementally distributed through the environment.

Immutable Edge Infrastructure would mean building and deploying complete system images for our distributed environment.  Clearly, this requires moving around larger images than just pushing patches; however, these uploads can easily be staged and they provide critical repeatability in management.  The alternative is trying to keep track of which patches have been applied successfully to distributed systems.  Based on personal experience, having an atomic deliverable sounds very attractive.

CI/CD and Immutable patterns are deep and complex subjects that go beyond the scope of a single post; however, they also offer a concrete basis for building manageable data centers.

The takeaway is that we need to be looking first to scale distributed software management patterns to help build robust edge infrastructure platforms. Picking a cloud platform before we’ve figured out these concerns is a waste of time.

Previous 2 Posts on OpenStack Conference:

Post 1 – OpenStack on Edge? 4 Ways Edge is Distinct from Cloud
Post 2 – Edge Infrastructure is Not Just Thousands of Mini Clouds


Edge Infrastructure is Not Just Thousands of Mini Clouds

I left the OpenStack OpenDev Edge Infrastructure conference with a lot of concerns relating to how to manage geographically distributed infrastructure at scale.  We’ve been asking similar questions at RackN as we work to build composable automation that can be shared and reused.  The critical need is to dramatically reduce site-specific customization in a way that still accommodates required variation – this is something we’ve made surprising advances on in Digital Rebar v3.1.

These are very serious issues for companies like AT&T with 1000s of local exchanges, Walmart with 10,000s of in-store server farms or Verizon with 10,000s of coffee shop Wifi zones.  These workloads are not moving into centralized data centers.  In fact, with machine learning and IoT, we are expecting to see more and more distributed computing needs.

Running each site as a mini-cloud is clearly not the right answer.

While we do need the infrastructure to be easily API addressable, adding cloud without fixing the underlying infrastructure management moves us in the wrong direction.  For example, AT&T‘s initial 100+ OpenStack deployments were not field-upgradable, which led to their efforts to deploy OpenStack on Kubernetes; however, that may have simply moved the upgrade problem to a different platform because Kubernetes does not address the physical layer either!

There are multiple challenges here.  First, any scale infrastructure problem must be solved at the physical layer first.  Second, we must have tooling that brings repeatable, automation processes to that layer.  It’s not sufficient to have deep control of a single site: we must be able to reliably distribute automation over thousands of sites with limited operational support and bandwidth.  These requirements are outside the scope of cloud focused tools.

Containers and platforms like Kubernetes have a significant part to play in this story.  I was surprised that they were present only in a minor way at the summit.  The portability and light footprint of these platforms make them a natural fit for edge infrastructure.  I believe that lack of focus comes from the audience believing (incorrectly) that edge applications are not ready for container management.

With hardware layer control (which is required for edge), there is no need for a virtualization layer to provide infrastructure management.  In fact, “cloud” only adds complexity and cost for edge infrastructure when the workloads are containerized.  Our current cloud platforms are not designed to run in small environments and not designed to be managed in a repeatable way at thousands of data centers.  This is a deep architectural gap and not easily patched.

OpenStack sponsoring the edge infrastructure event got the right people in the room but also got in the way of discussing how we should be solving these operational challenges.  How should we be solving them?  In the next post, we’ll talk about management models that we should be borrowing for the edge…

Read 1st Post of 3 from OpenStack OpenDev: OpenStack on Edge? 4 Ways Edge is Distinct from Cloud


Podcast – A Nice Mix of Ansible and Digital Rebar

Rob Hirschfeld, CEO and Co-Founder of RackN, talks with Stephen Spector, HPE Cloud Evangelist, about the recent uptick in Ansible news as well as how Digital Rebar Provision assists Ansible users.

Listen to the 9 minute podcast here:

As this is the launch of the L8ist Sh9y Podcast from RackN, we encourage you to visit our site at https://soundcloud.com/user-410091210 or subscribe to the RSS feed. We will also be publishing on iTunes shortly.


July 28 – Weekly Recap of All Things Site Reliability Engineering (SRE)

Welcome to the weekly post of the RackN blog recap of all things SRE. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet Rob (@zehicle) or RackN (@rackngo).

This week, we launched our new RackN website to provide more information on our solutions and services as well as provide customer examples. Click over to our new site and let us know your thoughts.

SRE Items of the Week

Site Reliability Engineer: Don’t fall victim to the bias blind spot
http://sdtimes.com/site-reliability-engineer-dont-fall-victim-to-the-bias-blind-spot/

To ensure websites and applications deliver consistently excellent speed and availability, some organizations are adopting Google’s Site Reliability Engineering (SRE) model. In this model, a Site Reliability Engineer (SRE) – usually someone with both development and IT Ops experience – institutes clear-cut metrics to determine when a website or application is production-ready from a user performance perspective. This helps reduce friction that often exists between the “dev” and “ops” sides of organizations. More specifically, metrics can eliminate the conflict between developers’ desire to “Ship it!” and operations’ desire to not be paged when they are on-call. If performance thresholds aren’t met, releases cannot move forward. READ MORE

Episode 50 – SRE Revisited plus the Challenge of Ops and more with Rob Hirschfeld
http://podcast.discoposse.com/e/ep-50-sre-revisited-plus-the-challenges-of-ops-and-more-with-rob-hirschfeld-zehicle/

This fun chat expands on what we started talking about in episode 42 (http://podcast.discoposse.com/e/ep-42-spiraling-ops-debt-sre-solutions-and-rackn-chat-with-rob-hirschfeld-zehicle/) as we dive into the challenges and potential solutions for thinking and acting with the SRE approach. Big thanks to Rob Hirschfeld from @RackN for sharing his thoughts and experiences from the field on this very exciting subject. LISTEN HERE

Site Reliability Engineering – Operators and Developers Working Together
http://bit.ly/2u7eSmm 

Rob Hirschfeld, Co-Founder and CEO of RackN, provides his thoughts on how operators are equivalent to developers and work together to accomplish the critical task of keeping the infrastructure running and available amid constant changes in the data center.

Subscribe to our new daily DevOps, SRE, & Operations Newsletter https://paper.li/e-1498071701#/
_____________

UPCOMING EVENTS

Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events, please email info@rackn.com.

OTHER NEWSLETTERS
