
Podcast – Ian Rae talks Cloud, Innovation, and Updates from Google Next 2018

Joining us this week is Ian Rae, CEO and Founder of CloudOps, who recorded this podcast during the Google Next 2018 conference.

Highlights

  • 1 min 55 sec: Define Cloud from a CloudOps perspective
    • Business Model and an Operations Model
  • 3 min 59 sec: Update from Google Next 2018 event
    • Google is the “Engineer’s Cloud”
    • Google’s approach vs Amazon approach in feature design/release
  • 9 min 55 sec: Early Amazon ~ no easy button
    • Amazon educated the market as industry leader
  • 12 min 04 sec: What is the state of Hybrid? Do we need it?
    • Complexity of systems leads to private, public as well as multiple cloud providers
    • Open source enabled workloads to run on various clouds even if the cloud was not designed to support a type of workload
    • Google’s strategy is around open source in the cloud
  • 14 min 12 sec: IBM visibility in open source and cloud market
    • Didn’t build cloud services (e.g. open a ticket to remap a VLAN)
  • 16 min 40 sec: OpenStack tried to compete on service components
    • Couldn’t compete without Product Managers to guide developers
    • Missed last mile between technology and customer
    • Didn’t want to take on the operational aspects of the customer
  • 19 min 31 sec: Is innovation driven from listening to customers vs developers doing what they think is best?
    • OpenStack is seen as legacy as customers look for Cloud Native Infrastructure
    • OpenStack vs Kubernetes install time significance
  • 22 min 44 sec: Google announcement of GKE for on-premises infrastructure
    • Not really on-premises; more like Platform9 for OpenStack
    • GKE solves the end-user experience and the operational challenges of delivering it
  • 26 min 07 sec: Edge IT replaces what is On-Premises IT
    • Bullish on the future with Edge computing
    • 27 min 27 sec: Who delivers the control plane for the edge?
      • Recommends open source in the control plane
  • 28 min 29 sec: Current tech hides the infrastructure problems
    • Someone still has to deal with the physical hardware
  • 30 min 53 sec: Commercial driver for rapid Edge adoption
  • 32 min 20 sec: CloudOps building software / next generation of BSS or OSS for telco
    • Meet the needs of the cloud provider for flexibility in generating services with the ability to change the service backend provider
    • Amazon is the new Win32
  • 38 min 07 sec: Can customers install their own software? Will people buy software anymore?
    • Compare payment models from Salesforce and Slack
    • Google allows customers to run its technology themselves or have Google manage it for them
  • 40 min 43 sec: Wrap-Up

Podcast Guest: Ian Rae, CEO and Founder of CloudOps

Ian Rae is the founder and CEO of CloudOps, a cloud computing consulting firm that provides multi-cloud solutions for software companies, enterprises and telecommunications providers. Ian is also the founder of cloud.ca, a Canadian cloud infrastructure-as-a-service (IaaS) provider focused on data residency, privacy and security requirements. He is a partner at Year One Labs, a lean startup incubator, and is the founder of the Centre cloud.ca in Montreal. Prior to the cloud, Ian was responsible for engineering at Coradiant, a leader in application performance management.


Podcast – Yves Boudreau on State of Edge Report and Edge vs Cloud

Joining us this week is Yves Boudreau from Ericsson for his 2nd Podcast appearance (1st Podcast) to talk about the new State of the Edge Report and the latest happenings in the Edge community.

Highlights

  • Edge as an accelerant: no need to wait until the Edge is completely built
  • Opportunity cost of using the Edge as it is today; no time to wait
  • Be Specific when Requesting Services
  • Internet and Networks are Not Unlimited Pipes
  • Interesting Use Cases for Edge – Augmented Reality, Drone, Cars, Batteries
  • Cost savings of where the data processing is done
  • Open Source software communities at the Edge

Topic (Time in Minutes.Seconds)

Intro: 0.00 – 1.22
State of the Edge Report: 1.22 – 5.22 (STE Podcast) (https://www.stateoftheedge.com/)
Accessible Edge Environments: 5.22 – 10.50 (Bulgaria)
Opportunity Cost and Missing Killer App: 10.50 – 12.04
Edge Infrastructure as Cloud Development Paradigm: 12.04 – 14.29
Elasticity Issues b/w Cloud and Edge: 14.29 – 21.45
Innovators Dilemma for Cloud & Telecom: 21.35 – 23.10
Favorite Use Cases for Infrastructure Edge: 23.10 – 28.55 (Hanger Podcast)
Data Location and Data Sovereignty: 28.55 – 31.03
Cost for Processing Power in Edge Devices: 31.03 – 34.49 (SWIM.AI Podcast)
Free Software / Open Source in Edge: 34.49 – 46.58
Wrap Up: 46.58 – END

Podcast Guest: Yves Boudreau, VP of Partnerships and Ecosystem Strategy, Ericsson

Mr. Boudreau is a 20-year veteran of the digital, telecom and cable TV industries. From modest beginnings at one of the first cable broadband ISPs in Canada to the fast-paced technology hub of Silicon Valley, Yves joined ERICSSON in 2011 as Vice President of Technical Sales Support and most recently accepted a position as VP of Partnerships and Ecosystem Strategy for the ERICSSON Unified Delivery Network. Previously, Mr. Boudreau worked in R&D, Systems Engineering & Business Development for companies such as Com21 Inc., ARRIS Group (Cable), Imagine Communication (Video Compression) and Verivue Inc. (CDN). Yves now resides in Atlanta, Georgia with his wife Josée and 3 children. Mr. Boudreau completed his undergraduate studies in Commerce at Laurentian University and graduate studies in Information Technology Management at Athabasca University. Yves also serves on the Board of Directors of the Streaming Video Alliance (www.streamingvideoalliance.org).


December 1 – Weekly Recap of Digital Rebar, RackN, and Industry News

Welcome to the weekly RackN blog recap of all things Digital Rebar, RackN, Edge Computing, and DevOps. If you have any ideas for this recap or would like to include content, please contact us at info@rackn.com or tweet RackN (@rackngo).

Items of the Week

Industry News

Edge computing, in the context of IoT, is the idea that you can actually do some of the computational work required by a system close to the endpoints instead of in a cloud or a data center. The intent is to minimize latency, which, according to Renaud, means that it’s going to be a hot trend in certain kinds of industrial IoT applications.

Solution providers that have been hit hard by a data center hardware retreat are finding sales and profit growth by living on the edge—the network edge, that is.

DevOps — a term used to refer to the integration of software developers and operations teams — continues to spread like wildfire throughout the open networking ecosystem. The main idea behind DevOps is that by breaking down barriers between these two departments, market applications can be delivered faster with lower costs and better quality. Nevertheless, for all the advantages attached to DevOps, it is still a budding concept since it is primarily concerned with re-aligning the workforce with a variety of tools. The following, therefore, is a list of DevOps trends to keep an eye out for.

Digital Rebar

Our architectural plans for Digital Rebar are beyond big – they are for massive distributed scale. Not up, but out. We are designing for the case where we have common automation content packages distributed over 100,000 stand-alone sites (think 5G cell towers) that are not synchronously managed. In that case, there will be version drift between the endpoints and content. For example, we may need to patch an installation script quickly over a whole fleet but want to upgrade the endpoints more slowly.
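
To make that drift tolerance concrete, here is a minimal, hypothetical sketch in Python (not Digital Rebar's actual implementation): a content package declares the oldest endpoint release it still supports, so a patched script can be pushed across the fleet quickly while endpoint upgrades lag behind.

```python
# Illustrative only: names and version rules are hypothetical, not Digital Rebar's schema.
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    version: tuple          # endpoint release, e.g. (3, 1, 0)

@dataclass
class ContentPack:
    name: str
    version: tuple          # content release, e.g. (3, 4, 3)
    min_endpoint: tuple     # oldest endpoint release this content still supports

def can_apply(pack: ContentPack, endpoint: Endpoint) -> bool:
    """Content may be newer than the endpoint as long as the endpoint meets
    the pack's declared minimum -- that tolerance is what absorbs drift."""
    return endpoint.version >= pack.min_endpoint

fleet = [Endpoint("tower-0001", (3, 1, 0)), Endpoint("tower-0002", (3, 4, 2))]
patched_script = ContentPack("install-fix", (3, 4, 3), min_endpoint=(3, 1, 0))

for ep in fleet:
    action = "push patch now" if can_apply(patched_script, ep) else "defer until endpoint upgrades"
    print(f"{ep.name}: {action}")
```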

Prior Meetup on November 21st Notes

RackN

Yesterday, AWS confirmed that it actually uses physical servers to run its cloud infrastructure and, gasp, no one was surprised.  The actual news, the i3.metal instances announced by AWS Chief Evangelist Jeff Barr, shows that bare metal is being treated as just another AMI-managed instance type (see also Geekwire, Techcrunch, Venture Beat).  For AWS users, there’s no drama here because it’s an incremental add to processes they already know well.

We are actively looking for feedback from customers and technologists before general availability of both RackN and the Terraform plug-in. It takes just a few minutes to get started, and we offer direct engineering engagement on our community Slack channel. Get started now by providing your email on our registration page so we can send you all the necessary links.

L8ist Sh9y Podcast

Podcast Guest: Krishnan Subramanian, Rishidot Research

Founder and Chief Research Advisor, Infrastructure, Application Platforms and DevOps

UPCOMING EVENTS

  • KubeCon + CloudNativeCon: Dec 6 – 8 in Austin, TX

Event plans for the RackN and Digital Rebar team include 2 sessions and the RackN booth. We look forward to seeing you in Austin.

The RackN team is preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events, please email info@rackn.com.


Deep Thinking & Tech + Great Guests – L8ist Sh9y Podcast

I love great conversations about technology – especially ones where the answer is not very neatly settled into winners and losers (which is ALL of them in IT).  I’m excited that RackN has (re)launched the L8ist Sh9y (aka Latest Shiny) podcast around this exact theme.

Please check out the deep and thoughtful discussion I just had with Mark Thiele (notes) of Apcera, where we covered Mark’s thoughts on why public cloud will be under 20% of IT and tackled culture issues head on.

Spoiler: we have David Linthicum coming next, SO SUBSCRIBE.

I’ve been a guest on some great podcasts (Cloudcast, gcOnDemand, Datanauts, IBM Dojo, HPE, Foodfight) and have deep respect for the critical work they do in the industry.

We feel there’s still room for deep discussions specifically around automated IT operations in cloud, data center and edge; consequently, we’re branching out to start including deep interviews in addition to our initial stable of IT Ops deep technical topics like Terraform, Edge Computing, our Gartner SYM review, Kubernetes and, of course, our own Digital Rebar.

Soundcloud Subscription Information


OpenStack’s Big Pivot: our suggestion to drop everything and focus on being a Kubernetes VM management workload

TL;DR: Sometimes paradigm changes demand a rapid response, and I believe unifying OpenStack services under Kubernetes has become such an urgent priority that we must freeze all other work until this effort has been completed.

See Also Rob’s VMblog.com post How is OpenStack so dead AND yet so very alive

By design, OpenStack chose to be unopinionated about operations.

That made sense for a multi-vendor project that was deeply integrated with the physical infrastructure and virtualization technologies.  The cost of that decision has been high for everyone because we did not converge on shared practices that would drive ease of operations, upgrades or tuning.  We ended up with waves of vendors vying to have the fastest, simplest and most open version.

Tragically, install became an area of competition instead of an area of collaboration.

Containers and microservice architecture (as required for Kubernetes and other container schedulers) are providing an opportunity to correct this course.  The community is already moving towards containerized services with significant interest in using Kubernetes as the underlay manager for those services.  I’ve laid out the arguments for, and the challenges ahead of, this approach in other places.

These technical challenges involve tuning the services for cloud native configuration and immutable designs.  They include making sure the project configurations can be injected into containers securely and the infra-service communication can handle container life-cycles.  Adjacent concerns like networking and storage also have to be considered.  These are all solvable problems that can be more quickly resolved if the community acts together to target just one open underlay.
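
As a rough illustration of the “inject configuration, don’t bake it in” requirement, the sketch below shows a containerized service that reads its settings from injected environment variables and fails fast if any are missing. The variable names are hypothetical; a real OpenStack service has far more settings and would typically consume them from mounted Kubernetes Secrets or ConfigMaps.

```python
import os
import sys

REQUIRED = ("DB_URL", "RABBIT_URL")   # hypothetical settings a service expects to be injected

def load_config() -> dict:
    """Read injected settings; refuse to start with a partial configuration."""
    missing = [key for key in REQUIRED if key not in os.environ]
    if missing:
        # Failing fast lets the scheduler restart or reschedule the container
        # instead of letting it run half-configured.
        sys.exit(f"missing injected settings: {', '.join(missing)}")
    return {key.lower(): os.environ[key] for key in REQUIRED}

if __name__ == "__main__":
    config = load_config()
    print("starting service with injected config keys:", sorted(config))
```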

The critical fact is that the changes are manageable and unifying the solution makes the project stronger.

Using Kubernetes for OpenStack service management does not eliminate or even solve the challenges of deep integration.  OpenStack already has abstractions that manage vendor heterogeneity and those abstractions are a key value for the project.  Kubernetes solves a different problem: it manages the application services that run OpenStack with a proven, understood pattern.  By adopting this pattern fully, we finally give operators consistent, shared and open upgrade, availability and management tooling.

Having a shared, open operational model would help drive OpenStack faster.

There is a risk to this approach: driving Kubernetes as the underlay for OpenStack will force OpenStack services into a more narrow scope as an infrastructure service (aka IaaS).  This is a good thing in my opinion.   We need multiple abstractions when we build effective IT systems.  

The idea that we can build a universal single abstraction for all uses is a dangerous distraction; instead, we need to build platform layers collaboratively.

While I initially resisted this approach, I have become enthusiastic about it.  RackN has been working hard on the upgradable & highly available Kubernetes on Metal prerequisite.  We’ve also created prototypes of the fully integrated stack.  We believe strongly that this work should be done as a community effort and not within a distro.

My call for a Kubernetes underlay pivot embraces that collaborative approach.  If we can keep these platforms focused on their core value then we can build bridges between what we have and our next innovation.  What do you think?  Is this a good approach?  Contact us if you’d like to work together on making this happen.

See Also Rob’s VMblog.com post How is OpenStack so dead AND yet so very alive to SREs? 


Cybercrime for Profit!? Five reasons why we need to start driving much more dynamic IT Operations

Author’s call to action: if you think you already know this is a problem, then why do we keep reliving it?  We’re doing our part in the open with Digital Rebar, and we need more help to secure infrastructure using foundational automation.

There’s a frustrating cyberattack-driven security awareness cycle in IT Operations.  Exploits and vulnerabilities are neither new nor unexpected; however, there is a new element taking shape that should raise additional alarm.

Cyberattacks are increasingly profit generating and automated.

The fundamental fact of the latest attacks is that patches were available.  The extensive impact we are seeing is caused by IT Operations that relies on end-of-life components and cannot absorb incremental changes.  These practices are based on dangerous obsolete assumptions about perimeter defense and long delivery cycles.

It’s not just new products using CI/CD pipelines and dynamic delivery: we must retrofit all IT infrastructure to be constantly refreshed.
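
As a toy illustration of what “constantly refreshed” means in practice, the sketch below (hypothetical component names and policy window, not a RackN tool) flags anything whose last refresh has drifted past an allowed age; the same idea extends to patch levels, image versions and configuration drift.

```python
# Illustrative policy check: flag components whose last refresh exceeds the allowed window.
from datetime import date, timedelta

MAX_AGE = timedelta(days=30)   # hypothetical refresh policy
TODAY = date(2017, 7, 1)       # fixed date so the example is reproducible

components = {
    "edge-firmware": date(2017, 6, 20),
    "hypervisor-host-image": date(2016, 11, 2),   # stale: well past the window
    "provisioning-templates": date(2017, 6, 28),
}

for name, last_refresh in components.items():
    age = TODAY - last_refresh
    status = "OK" if age <= MAX_AGE else f"STALE ({age.days} days old)"
    print(f"{name}: {status}")
```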

We simply cannot wait because the cybersecurity challenges are accelerating.  What’s changed in the industry?  There is a combination of factors driving these trends:

  1. Profit motive – attacks are not simply about getting information; they are profit centers made simpler by hard-to-trace cryptocurrency.
  2. Shortening windows – we’re doing a better job than ever of finding, publishing and fixing issues in the open.  That cycle assumes that downstream users are also applying the fixes quickly.  Without downstream adoption, the process fails to realize its key benefit.
  3. Automation and machine learning – attackers are using more and more sophisticated automation to find and exploit vulnerabilities.  Expect them to use machine learning to make it even more effective.
  4. No perimeter – our highly interconnected and mobile IT environments eliminate the illusion of a perimeter defense.  This is not just a networking statement: our code bases and service catalogs are built from many outside sources that often have deep access.
  5. Expanding surface area – finally, we’re embedding and connecting more devices into our infrastructure every second.  Costs are decreasing while capability increases.  There’s no turning back from that, so we should expect an ongoing list of vulnerabilities.

No company has all the answers for cybersecurity; however, it’s clear that we cannot solve cybersecurity at the perimeter while allowing the interior to remain static.

The only workable IT posture starts with a continuously deployed and updated foundation.

Companies typically skip this work because it’s very difficult to automate in a cross-infrastructure and reliable way.  I’ve been working in this space for nearly two decades, and we’re only now delivering deep automation that can be applied in generalized ways as part of larger processes.  The good news is that means we can finally start discussing real shared industry best practices.

Thankfully, with shared practices and tooling, we can get ahead of the attackers.

RackN focuses exclusively on addressing infrastructure automation in an open way.  We are solving this problem from the data center foundations upward.  That allows us to establish a security practice that is both completely trusted and constantly refreshed.  It’s definitely not the only thing companies need to do, but that foundation and posture help drive a better defense.

I don’t pretend to have complete answers to the cyberattacks we are seeing, but I hope they inspire us to more security discipline.  We are on the cusp of a new wave of automated and fast exploits.

Let us know if you are interested in working with RackN to build a more dynamic infrastructure.


If Private Cloud is dead. Where did it go? How did it get there? [JOINT POST]

TL;DR: Hybrid killed IT.

I’m a regular participant on BWG Roundtable calls and often extend those discussions one-on-one.  This post collects questions from one of those follow-up meetings, where we explored how data center markets are changing based on new capacity and also the impact of cloud.

We both believe in the simple answer, “it’s going to be hybrid.” We both feel that this answer does not capture the real challenges that customers are facing.

So who are we?  Haynes Strader, Jr. comes at this from a real estate perspective via CBRE Data Center Solutions.  Rob Hirschfeld comes at this from an ops and automation perspective via RackN.  We are in very different aspects of the data center market.

Rob: I know that we’re building a lot of data center capacity.  So far, it’s been really hard to move operations to new infrastructure and mobility is a challenge.  Do you see this too?

Haynes: Yes.  Creating a data center network that is both efficient and affordable is challenging. A couple of key data center interconnection providers offer this model, but few companies are in a position to truly leverage the node-cloud-node model, where a company leverages many small data center locations (colo) that all connect to a cloud option for the bulk of their computing requirements. This works well for smaller companies with a spread-out workforce, or brand new companies with no legacy infrastructure, but the Fortune 2000 still have the majority of their compute sitting in-house in owned facilities that weren’t originally designed to serve as data centers. Moving these legacy systems is nearly impossible.

Rob: I see many companies feeling trapped by these facilities and looking to the cloud as an alternative.  You are describing a lot of inertia in that migration.  Is there something that can help improve mobility?

Haynes: Data centers are physical presences to hold virtual environments. The physical aspect can only be optimized when a company truly understands its virtual footprint. IT capacity planning is key to this. System monitoring and usage analytics are critical to make growth and consolidation decisions. Why isn’t this being adopted more quickly? Is it cost? Is it difficulty to implement in complex IT environments? Is it the fear of the unknown?

Rob: I think that it’s technical debt that makes it hard (and scary) to change.  These systems were built manually or assuming that IT could maintain complete control.  That’s really not how cloud-focused operations work.  Is there a middle step between full cloud and legacy?

Haynes: Creating an environment where a company maximizes the use for its owned assets (leveraging sale leasebacks and forward-thinking financing) vs. waiting until end of life and attempting to dispose leads to opportunities to get capital injections early on and move to an OPEX model. This makes the transition to colo much easier, and avoids a large write-down that comes along with most IT transformations. Colocation is an excellent tool if it is properly negotiated because it can provide a flexible environment that can grow or shrink based on your utilization of other services. Sophisticated colo users know when it makes sense to pay top dollar for an environment that requires hyperconnectivity and when to save money for storage and day-to-day compute. They know when to leverage providers for services and when to manage IT tasks in-house. It is a daunting process, but the initial approach is key to getting to that place in the long term.

Rob:  So I’m back to thinking that the challenge for accessing all these colo opportunities is that it’s still way too hard to move operations between facilities and also between facilities and the cloud.  Until we improve mobility, choosing a provider can be a high stakes decision.  What factors do you recommend reviewing?

Haynes: There is an overwhelming number of factors in picking new colos:

  1. Location
  2. Connectivity/Latency
  3. Cloud Connectivity Options
  4. Pricing
  5. Quality of Services
  6. Security
  7. Hazard Risk Mitigation
  8. Comfort with services/provider
  9. Growth potential
  10. Flexibility of spend/portability (this is becoming ever-more important)

Rob: Yikes!  Are there minor operational differences between colos that are causing breaking changes in operations?

Haynes:  We run into this with our clients occasionally, but it is usually because they created two very different environments with different providers. This is a big reason to use a broker. Creating identical terms, pricing models, SLAs and workflows allows clients to have a lot of leverage when they go to market. A select few of the top cloud providers do a really good job of this. They dominate the markets that they enter because they have a consistent, reliable process that is replicated globally. They also achieve some of the most attractive pricing and terms in the marketplace on a regular basis.

Rob: That makes sense.  Process matters for the operators and consistent practices make it easier to work with a partner.  Even so, moving can save a lot of money.  Is that savings justified against the risk and interruption?

Haynes: This is the biggest hurdle that our enterprise clients face. The risk of moving is risking an IT leader’s job. How do we do this with minimal risk and maximum upside? Long-term strategic planning is one answer, but in today’s world, IT leadership changes often and strategies go along with that. We don’t have a silver bullet for this one – but are always looking to partner with IT leaders that want to give it a shot and hopefully save a lot of money.

Rob: So is migration practical?

Haynes: Migration makes our clients cringe, but the ones that really try to take it on and make it happen strategically (not once it is too late) regularly reap the benefits of saving their company money and making them heroes to the organization.

Rob: I guess that brings us back to mixing infrastructures.  I know that public clouds have interconnect with colos that make it possible to avoid picking a single vendor.  Are you seeing this too?

Haynes: Hybrid, hybrid, hybrid. No one is the best one-stop shop. We all love 7-11 and it provides a lot of great solutions on the run, but I’m not grocery shopping there. Same reason I don’t run into a Kroger every time I need a bottle of water. Pick the right solution for the right application and workload.

Rob: That makes sense to me, but I see something different in practice.  Teams are too busy keeping the lights on to take advantage of longer-term thinking.  They seem so busy fighting fires that it’s hard to improve.

Haynes:  I TOTALLY agree. I don’t know how to change this. I get it, though. The CEO says, “We need to be in the cloud, yesterday,” and the CIO jumps. Suddenly everyone’s strategic planning is out the window and it is off to the races to find a quick-fix. Like most things, time and planning often reap more productive results.

Thanks for sharing our discussion!  

We’d love to hear your opinions about it.  We both agree that creating multi-site management abstractions could make life easier for IT and more relatable to real estate and finance. With all of these organizations working in sync, the world would be a better place. The challenge is figuring out how to get there!


How about a CaaPuccino? Krish and Rob discuss containers, platforms, hybrid issues around Kubernetes and OpenStack.

CaaPuccino: A frothy mix of containers and platforms.

Check out Krish Subramanian’s (@krishnan) Modern Enterprise podcast (audio here) today for a surprisingly deep and thoughtful discussion about how frothy new technologies are impacting Modern Enterprise IT. Of course, we also take some time to throw some fire bombs at the end. You can use my notes below to jump to your favorite topics.

The key takeaways are that portability is hard and we’re still working out the impact of container architecture.

The benefit of the longer interview is that we really dig into the reasons why portability is hard and discuss ways to improve it. My personal SRE posts and those on the RackN blog describe operational processes that improve portability. These are real concerns for all IT organizations because mixed and hybrid models are a fact of life.

If you are not actively making automation that works against multiple infrastructures then you are building technical debt.

Of course, if you just want the snark, then jump forward to 24:00 minutes in where we talk future of Kubernetes, OpenStack and the inverted intersection of the projects.

Krish, thanks for the great discussion!

Rob’s Podcast Notes (39 minutes)

2:37: Rob intros about Digital Rebar & RackN

4:50: Why our Kubernetes is JUST UPSTREAM

5:35: Where are we going in 5 years > why Rob believes in Hybrid

  • Should not be 1 vendor who owns everything
  • That’s why we work for portability
  • Public cloud vision: you should stop caring about infrastructure
  • Coming to an age when infrastructure can be completely automated
  • Developer rebellion against infrastructure

8:36: Krish believes that Public cloud will be more decentralized

  • Public cloud should be part of everyone’s IT plan
  • It should not be the ONLY thing

9:25: Docker helps create portability; what else creates portability? Will there be a standard?

  • Containers are a huge change, but it’s not just packaging
  • Smaller units of work is important for portability
  • Container schedulers & PaaS are very opinionated, that’s what creates portability
  • Deeper into infrastructure loses portability (RackN helps)
  • Rob predicts that Lambda and Serverless creates portability too

11:38: Are new standards emerging?

  • Some APIs become dominant and act as de facto standards
  • Embedded assumptions break portability – that’s what makes automation fragile
  • Rob explains why we inject configuration to abstract infrastructure
  • RackN works to inject attributes instead of allowing scripts to assume settings (see the sketch after this list)
  • For example, networking assumptions break portability
  • Platforms force people to give up configuration in ways that break portability
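
The sketch below (hypothetical attribute names, not RackN’s actual schema) contrasts a step that hard-codes a networking assumption with one that renders the same step from injected attributes, which is what keeps the automation portable across sites.

```python
# Illustrative attribute injection vs. a hard-coded assumption.
def configure_nic_hardcoded() -> str:
    # Breaks as soon as the target site uses a different subnet or NIC name.
    return "ip addr add 10.0.0.5/24 dev eth0"

def configure_nic(attrs: dict) -> str:
    # Portable: the same step works anywhere the attributes are injected.
    return f"ip addr add {attrs['address']}/{attrs['prefix']} dev {attrs['nic']}"

site_a = {"address": "10.0.0.5", "prefix": 24, "nic": "eth0"}
site_b = {"address": "192.168.10.7", "prefix": 22, "nic": "ens3"}

for site in (site_a, site_b):
    print(configure_nic(site))
```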

14:50: Why did Platform as a Service not take off?

  • Rob defends PaaS – thinks that it has accomplished a lot
  • Challenge of PaaS is that it’s very restrictive by design
  • Calls out Andrew Clay Shafer’s “don’t call it a PaaS” position
  • Containers provide a less restrictive approach with more options.

17:00: What’s the impact on Enterprise? How are developers being impacted?

  • Service Orientation is a very important thing to consider
  • Encapsulation from services is very valuable
  • Companies don’t own all their IT services any more – it’s not monolithic
  • IT Service Orientation aligns with Business Processes
  • Rob says the API economy is a big deal
  • In machine learning, a business’ data may be more valuable than their product

19:30: Services impact?

  • Services have a business imperative
  • We’re not ready for all the impacts of a service orientation
  • Challenge is to mix configuration and services
  • Magic of Digital Rebar is that it can mix orchestration of both

22:00: We are still having issues with the simple cases; how are we going to scale up?

  • Barriers are very low right now

22:30: Will Kubernetes help us solve governance issues?

  • Kubernetes is doing a good job building an ecosystem
  • Smart to focus on just being Kubernetes
  • It will be chaotic as the core is worked out

24:00: Do you think Kubernetes is going in the right direction?

  • Rob is bullish for Kubernetes to be the dominant platform because it’s narrow and specific
  • Google has the right balance of control
  • Kubernetes really is not that complex for what it does
  • Mesos is also good but harder to understand for users
  • Swarm is simple but harder to extend for an ecosystem
  • Kubernetes is a threat to Amazon because it creates portability and ecosystem outside of their platform
  • Rob thinks that Kubernetes could create platform services that compete with AWS services like RDS.
  • It’s likely to level the field, not create a Google advantage

27:00: How does Kubernetes fit into the Digital Rebar picture?

  • We think of Kubernetes as a great infrastructure abstraction that creates portability
  • We believe there’s a missing underlay because Kubernetes cannot abstract the physical infrastructure – that’s what we do.
  • OpenStack deployments broken because every data center is custom and different – vendors create a lot of consulting without solving the problem
  • RackN is creating composability UNDER Kubernetes so that those infrastructure differences do not break operation automation
  • Kubernetes does not have the constructs in the abstraction to solve the infrastructure problem, that’s a different problem that should not be added into the APIs
  • Digital Rebar can also then use the Kubernetes abstractions?

30:20: Can OpenStack really be managed/run on top of Kubernetes? That seems complex!

  • There is a MESS in the message of Kubernetes under OpenStack because it sends the message that Kubernetes is better at managing applications than OpenStack is
  • Since OpenStack is just an application and Kubernetes is a good way to manage applications
  • When OpenStack is already in containers, we can use Kubernetes to do that in a logical way
  • “I’m super impressed with how it’s working” using OpenStack Helm Packs (still needs work)
  • Physical environment still has to be injected into the OpenStack on Kubernetes environment

35:05 Does OpenStack have a future?

  • Yes! But it’s not the big “data center operating system” future that we expected in 2010. Rob thinks it is a good VM management platform.
  • Rob provides the same caution for Kubernetes. It will work where the abstractions add value but data centers are complex hybrid beasts
  • Don’t “square peg a data center round hole” – find the best fit
  • OpenStack should have focused on the things it does well – it has a huge appetite for solving too many problems.

LinuxKit and Three Concerns with Physical Provisioning of Immutable Images

At Dockercon this week, Docker announced an immutable operating system called LinuxKit, which is powered by a Packer-like utility called Moby that RackN CTO Greg Althaus explains in the video below.

For additional conference notes, check out Rob Hirschfeld’s Dockercon retro blog post.

Three Concerns with Immutable O/S on Physical

With a mix of excitement and apprehension, the RackN team has been watching physical deployment of immutable operating systems like CoreOS Container Linux and RancherOS.  Overall, we like the idea of a small locked (aka immutable) in-memory image for servers; however, the concept does not map perfectly to hardware.

Note: if you want to provision these operating systems in a production way, we can help you!

These operating systems work on a “less is more” approach that strips everything out of the images to make them small and secure.  

This is great for cloud-first approaches where VM size has a material impact in cost.  It’s particularly matched for container platforms where VMs are constantly being created and destroyed.  In these cases, the immutable image is easy to update and saves money.

So, why does that not work as well on physical?

First:  HA DHCP?!  It’s not as great a map for physical systems where operating system overhead is pretty minimal.  The model requires orchestrated rebooting of your hardware.  It also means that you need a highly available (HA) PXE Provisioning infrastructure (like we’re building with Digital Rebar).

Second: Configuration. These minimal images rely on having cloud-init-injected configuration.  In a physical environment, there is no way to create cloud-init-like injections without integrating with the kickstart systems (a feature of Digital Rebar Provision).  Further, hardware has a lot more configuration options (like hard drives and network interfaces) than VMs.  That means we need a robust, system-by-system way to manage these configurations.
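
A minimal sketch of that per-machine configuration need, assuming illustrative field names rather than Digital Rebar Provision’s actual template language: each physical machine gets its own rendered, cloud-init-style document based on hardware-specific attributes.

```python
# Illustrative template and fields, not Digital Rebar Provision's template language.
from string import Template

USER_DATA = Template("""\
#cloud-config
hostname: $hostname
write_files:
  - path: /etc/network/interfaces.d/$nic.cfg
    content: |
      iface $nic inet static
        address $address/$prefix
""")

machines = [
    {"hostname": "rack1-node01", "nic": "eno1", "address": "10.10.1.11", "prefix": 24},
    {"hostname": "rack1-node02", "nic": "eno2", "address": "10.10.1.12", "prefix": 24},
]

for machine in machines:
    # Every physical box gets its own rendered document; this per-machine
    # variation is exactly what VM-centric immutable images assume away.
    print(USER_DATA.substitute(machine))
```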

Third:  No SSH.  Yet another problem with these minimal images is that they are supposed to eliminate SSH.   Ideally, their image and configuration provide everything required to run the image without additional administration.  Unfortunately, many applications assume post-boot configuration.  That means that people often re-enable SSH to use tools like Ansible.  If it did not conflict with the very nature of the “do-not-configure-the-server” immutable model, I would suggest that SSH is a perfectly reasonable requirement for operators running physical infrastructure.

In summary, even with those issues, we are excited about the positive impact this immutable approach can have on data center operations.

With tooling like Digital Rebar, it’s possible to manage the issues above.  If this appeals to you, let us know!


Open Source Collaboration: The Power of No

TL;DR: The days of using open software passively from vendors are past, users need to have a voice and opinion about project governance. This post is a joint effort with Rob Hirschfeld, RackN, and Chris Ferris, IBM, based on their IBM Interconnect 2017 “Open Cloud Architecture: Think You Can Out-Innovate the Best of the Rest?” presentation.

It’s a common misconception that open source collaboration means saying YES to all ideas; however, the reality of successful projects is the opposite.

Permissive open source licenses drive a delicate balance for projects. On one hand, projects that adopt permissive licenses should be accepting of contributions to build community and user base. On the other, maintainers need to adopt a narrow focus to ensure project utility and simplicity. If the project’s maintainers are too permissive, the project bloats and wanders without a clear purpose. If they are too restrictive then the project fails to build community.

It is human nature to say yes to all collaborators, but that can frustrate core developers and users.

For that reason, stronger open source projects have a clear, focused, shared vision. Historically, that vision was enforced by a benevolent dictator for life (BDFL); however, recent large projects have used a consensus of project elders to make the task more sustainable. These roles serve a critical need: they say “no” to work that does not align with the project’s mission and vision. The challenge of defining that vision can be a big one, but without a clear vision, it’s impossible for the community to sustain growth because new contributors can dilute the utility of projects. [author’s note: This is especially true of celebrity projects like OpenStack or Kubernetes that attract “shared glory” contributors]

There is tremendous social and commercial pressure driving this vision vs. implementation balance.

The most critical one is the threat of “forking.” Forking is what happens when the code/collaborator base of a project splits into multiple factions and stops working together on a single deliverable. The result is incompatible products with a shared history. While small forks are required to support releases and foster development, diverging community forks can have unpredictable impacts for a project.

Forks are not always bad: they provide a control mechanism for communities.

The fundamental nature of open source projects that adopt a permissive license is what allows forks to become the primary governance tool. The nature of permissive licenses allows anyone to create a new line of development that’s different than the original line. Forks can allow special interests in a code base to focus on their needs. That could be new features or simply stabilization. Many times, a major release version of a project evolves into forks where both old and newer versions have independent communities because of deployment inertia. It can also allow new leadership or governance without having to directly displace an entrenched “owner”.

But forking is expensive because it makes it harder for communities to collaborate.

To us, the antidote for forking is not simply vision but a strong focus on interoperability. Interoperability (or interop) means ensuring that different implementations remain compatible for users. A simplified example would be having automation that works on one OpenStack cloud also work on all the others without modification. Strong interop creates an ecosystem for a project by making users confident that their downstream efforts will not be disrupted by implementation variance or version changes.

Good Interop relieves the pressure of forking.

Interop can only work when a project defines what is expected behavior and creates tests that enforce those standards. That activity forces project contributors to agree on project priorities and scope. Projects that refuse to define interop expectations end up disrupting their user and collaborator base in frustrating ways that lead to forking (Rob’s commentary on the potential Docker fork of 2016).
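
As a toy illustration of what “tests that enforce those standards” can look like, the sketch below asserts the same expected behavior against several hypothetical implementations; any implementation that diverges fails the same shared check.

```python
# Hypothetical endpoints and capabilities; the point is one shared expectation for all implementations.
ENDPOINTS = {
    "vendor-a": {"api_version": "2.1", "supports_resize": True},
    "vendor-b": {"api_version": "2.1", "supports_resize": False},
}

EXPECTED = {"api_version": "2.1", "supports_resize": True}

def interop_failures(capabilities: dict) -> list:
    """Return the expectations this implementation does not meet."""
    return [key for key, value in EXPECTED.items() if capabilities.get(key) != value]

for name, caps in ENDPOINTS.items():
    failures = interop_failures(caps)
    print(name, "PASS" if not failures else f"FAIL: {failures}")
```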

Unfortunately, interop is not generally a developer priority.

In the end, interoperability is a user feature that competes with other features. Sadly, it is often seen as hurting feature development because new features must work to maintain existing interop standards. For that reason, new contributors may see interop demands as an impediment to forward progress; however, it’s a strong driver for user adoption and growth.

The challenge is that those users are typically more focused on their own implementation and less visible to the project leadership. Vendors have similar disincentives to do work that benefits other vendors in the community. These tensions will undermine the health of communities that do not have strong BDFL or Elders leadership. So, who then provides the adult supervision?

Ultimately, users must demand interop and provide commercial preference for vendors that invest in interop.

Open source has definitely had an enormous impact on the software industry; generally, a change for the better. But that change comes at a cost: it demands involvement not just from vendors and individual developers but, ultimately, from consumers and users.

Interop isn’t naturally a vendor priority because it levels the playing field for all vendors; however, vendors do prioritize what their customers want.

Ideally, customer needs translate into new features that have a broad base of consumer interest. Interop ensures that features can be used broadly. Thus interop is an attribute that consumers need not only from vendors but from the open source communities building the software. This alignment then serves as the foundation upon which (increasingly) that vendor software is based.

Customers should be actively and publicly supportive of interop efforts of projects on which their vendor’s offerings depend. If there isn’t such an initiative in those projects, then they should demand one be started through their vendor partners and in the public forums for the project.

Further, if consumers of an open source project sense that it lacks a strong, focused, vision and is wandering off course, they need to get involved and say so, either directly and/or through their vendor partners.

While open source has changed the IT industry, it also has a cost. The days of using software passively from vendors are past; users need to have a voice and an opinion. They need to ensure that their chosen vendors are also supporting the health of the community.

What do you think? Reach out to Rob (@zehicle) and Chris (@christo4ferris) and let us know!

Note: Cross posted on IBM OpenTech site.