
Podcast – Dave Blakey of Snapt on Radically Different ADC

Joining us this week is Dave Blakey, CEO and Co-Founder of Snapt.

About Snapt

Snapt develops high-end solutions for application delivery. We provide load balancing, web acceleration, caching and security for critical services.

Highlights

  • 1 min 28 sec: Introduction of Guest
  • 1 min 59 sec: Overview of Snapt
    • Software solution
  • 3 min 1 sec: New Approach to Firewalls and Load Balancers
    • Driven by customers with micro-services, containers, and dynamic needs
    • Fast scale and massive volume needs
    • Value is in quality of service and visibility into any anomaly
  • 7 min 28 sec: Engaging with DevOps teams for Customer interactions
    • Similar tools across multiple clouds and on-premises drives needs
    • 80% is visibility and 20% is scalability
    • Podcast – Honeycomb Observability
  • 13 min 09 sec: Kubernetes and Istio
    • Use cases remain the same independent of the technology
    • Difference is in the operations not the setup
    • Istio is an API for Snapt to plug into
  • 17 min 29 sec: How do you manage globally delivered application stack?
    • Have to go deep into app services to properly meet demand where needed
    • Immutable deployments?
  • 25 min 24 sec: Eliminate Complexity to Create Operational Opportunity
  • 26 min 29 sec: Corporate Culture Fit in Snapt Team
    • Built Snapt as they needed a product like Snapt
    • Feature and Complexity Creep
  • 28 min 48 sec: Does platform learn?
  • 31 min 20 sec: Lessons about system communication times
    • Lose 25% of audience per 1 second of website load time
  • 34 min 34 sec: Wrap-Up

Podcast Guest: Dave Blakey, CEO and Co-Founder of Snapt.

Dave Blakey founded Snapt in 2012 and currently serves as the company’s CEO.

Snapt now provides load balancing and acceleration to more than 10,000 clients in 50 countries. High-profile clients include NASA, Intel, and various other forward-thinking technology companies.

Today, Dave is a leading open-source software-defined networking thought leader, with deep domain expertise in high-performance (carrier-grade) network systems, management, and security solutions.

He is a passionate advocate for advancing South Africa’s start-up ecosystem and expanding the global presence of the country’s tech hub.


Podcast – James Ferguson talks Kubernetes and the future as an Application Platform

Joining us this week is James Ferguson, Director of Cloud Consulting, JBC Labs.

About JBC Labs

Looking to automate your cloud and on-prem solutions? Need faster CI/CD on AWS, GCP, or Azure? Or perhaps ETL processing that makes your team and your users of information more productive? JBC Labs has provided solutions for over 20 years to companies big and small. Our solutions make you run faster and more fluidly, and provide a spark to drive your innovation. Our team of certified cloud architects and solution providers is ready to help you today.

Highlights
• Overview of JBC Labs’ Jump Box Central Kubernetes Solution
• State of Kubernetes Today
• Concept of Kubernetes as an Application Platform
• Functions as a Service
• Service Mesh and Kubernetes

Topic (Time in Minutes.Seconds)

  • Introduction: 0.00 – 1.24
  • Jump Box Central Name?: 1.24 – 2.06
  • Where are People Looking for Help in Kubernetes?: 2.06 – 4.13
  • What Problem are you Looking to Solve?: 4.13 – 6.15
  • What Else is Needed to Make Kubernetes Successful?: 6.15 – 10.18
  • Sell Services for Kubernetes?: 10.18 – 12.09
  • Kubernetes Delivery Platform for Apps (Hospital Example): 12.09 – 15.37
  • Why Function as a Service?: 15.37 – 19.05
  • Does Variety of Options Slow Down Adoption?: 19.05 – 20.05
  • Is FaaS an App for Kubernetes? (PB & Chocolate): 20.05 – 22.29
  • Role of Service Mesh: 22.29 – 29.06
  • Contact Information: 29.06 – END

James Ferguson, Director of Cloud Consulting, JBC Labs.

James Ferguson has been involved in IT, software development, and business management since 1992. During his career, he created the world’s first mobile-agnostic application for SAP and Oracle in the cloud, featured by Gartner and Forrester. He has founded two companies and led many others. James has helped companies in industries ranging from Real Estate, Oil and Gas, and Utilities to Finance, Insurance, and Marketing. He currently serves as a principal architect and thought leader for Fortune 500 and SMB customers. James can be found on LinkedIn, by email or mobile, or out in the back country hiking.


DC2020: Putting the Data back in the Data Center

For the past two decades, data centers have been more about compute than data, but the machine learning and IoT revolutions are changing that focus for the 2020 Data Center (aka DC2020). My experience at IBM Think 2018 suggests that we should be challenging our compute centric view of a data center; instead, we should be considering the flow and processing of data. Since data is not localized, that reinforces our concept of DC2020 as a distributed and integrated environment.

We have defined data centers by the compute infrastructure stored there. Cloud (especially when equated with virtualized machines) has been an infrastructure-as-a-service (IaaS) story. Even big data “lakes” are primarily compute clusters with distributed storage. This model dominates because data sources are locked in application silos: control of the compute translates directly to control of the data.

What if control of data is being decoupled from applications? Data is becoming its own thing, driven by new technologies like machine learning, IoT, blockchain, and other distributed sources.

In a data-centric model, we are more concerned with the movement of and access to data than with building applications to control it. Think of event-driven (serverless) and microservice platforms that effectively operate on data-in-flight. As function-as-a-service progresses, it will become impossible to know all the ways data is manipulated, because there are no longer boundaries around applications.
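
To make “data-in-flight” concrete, here is a minimal, hypothetical sketch of a serverless-style handler: the platform invokes it once per event and the function transforms data as it passes through, rather than owning it at rest. The event shape and field names are invented for illustration.

```python
# Hypothetical sketch: a serverless-style function operating on data-in-flight.
# The platform invokes handler() once per event; the event schema is invented.
def handler(event):
    # Transform/enrich the event in flight instead of storing it in an app silo.
    fahrenheit = event["value"]
    return {"sensor": event["sensor"], "celsius": round((fahrenheit - 32) * 5 / 9, 2)}

# Simulate the platform driving a small event stream through the function.
stream = [{"sensor": "hvac-1", "value": 72.0}, {"sensor": "hvac-2", "value": 68.5}]
for event in stream:
    print(handler(event))
```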

This data-centric, distributed architecture model will be even more pronounced as processing moves out of data centers and into the edge. IT infrastructure at the edge will be used for handling latency-critical data and for aggregating data for centralization. These operations will not look like traditional application stacks: they will be data-processing microservices and functions.

This data-centric approach relegates infrastructure services to a subordinate role. We should not care about servers or machines except as they support the platforms driving data flows.

I am not abandoning making infrastructure simple and easy – we need to do that more than ever! However, it’s easy to underestimate the coming transformation of application architectures based on advanced data processing and sharing technologies. The amount and sources of data have already grown beyond human comprehension because we still think of applications in a client-server mindset.

We’re only at the start of really embedding connected sensors and devices into our environment. As devices from many sources and vendors proliferate, they also need to coordinate. That means we’re reaching a point where devices will start talking to each other locally instead of via our centralized systems. It’s part of the coming data avalanche.

Current management systems will not survive explosive growth.  We’re entering a phase where control and management paradigms cannot keep up.

As an industry, we are rethinking management automation from declarative (“start this”) to intent-focused (“maintain this”) systems. This is the simplest way to express the difference between OpenStack and Kubernetes. That change is required to create autonomous infrastructure designs; however, it also means we need to change our thinking about infrastructure into something that follows data instead of leading it.
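
A toy sketch of that difference, with hypothetical helper functions standing in for a real provisioning API: the declarative style issues commands once, while the intent style runs a reconciliation loop that continuously pulls actual state toward desired state.

```python
import time

def start_server(name):
    """Stand-in for a real provisioning call ("boot this machine")."""
    print(f"starting {name}")

def running_servers():
    """Stand-in for querying actual state; pretend only web-1 survived."""
    return {"web-1"}

# Declarative ("start this"): issue the commands once, then hope.
for name in ["web-1", "web-2", "web-3"]:
    start_server(name)

# Intent ("maintain this"): reconcile desired vs. actual state forever,
# restarting anything that has drifted or died.
desired = {"web-1", "web-2", "web-3"}
while True:
    for name in desired - running_servers():
        start_server(name)
    time.sleep(5)
```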

That’s exactly what RackN has solved with Digital Rebar Provision. Deeply composable, simple APIs and extensible workflows are essential components for integrated automation in DC2020 to put the data back in the data center.


.IO! .IO! It’s off to a Service Mesh you should go [Gluecon 2017 notes]

TL;DR: If you are containerizing your applications, you need to be aware of this “service mesh” architectural pattern to help manage your services.

Gluecon turned out to be all about a microservice concept called a “service mesh,” which was being promoted by Buoyant with Linkerd and by IBM/Google/Lyft with Istio.  This class of services is a natural evolution of the rush to microservices and something I’ve written about before on TheNewStack in the context of microservice technical architecture.

A service mesh is the result of having a dependency grid of microservices.  Since we’ve decoupled the application internally, we’ve created coupling between the services.  Hard-coding those relationships creates serious failure risks, so we need a service that intermediates the services.  This pattern has been widely socialized with the zipkin graphic from Srdan Srepfler’s microservice anatomy presentation.

IMHO, it’s healthy to find service mesh architecturally scary.

One of the hardest things about scaling software is managing the dependency graph.  This challenge has been unavoidable from the early days of Windows “DLL Hell” to the mixed joy/terror of working with Ruby Gems, Python Pip and Node.js NPM.  We get tremendous acceleration from using external modules and services, but we also pay a price to manage those dependencies.

For microservice and Cloud Native designs, the service mesh is that dependency management price tag.

A service mesh is not just a service injected between services.  Its simplest function is to provide a reverse proxy so that multiple services can be consolidated under a single end-point.  That quickly leads to needing load balancers, discovery and encrypted back-end communication.  From there, we start thinking about circuit-breaker patterns, advanced logging and A/B migrations.  Another important consideration is that service meshes are for internal services, not end-user facing ones; that means layers of load balancers.
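
To make that reverse-proxy role concrete, here is a toy sketch in Python that consolidates two hypothetical internal services under a single end-point; the routes and ports are invented, and a real mesh (Linkerd, Istio) layers on the load balancing, discovery and encryption described above.

```python
# Toy sketch of a service mesh's simplest function: one end-point that
# fans requests out to internal services by path prefix.
# Routes and ports are hypothetical; real meshes add discovery, TLS, retries.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

ROUTES = {
    "/users": "http://127.0.0.1:9001",   # hypothetical user service
    "/orders": "http://127.0.0.1:9002",  # hypothetical order service
}

class MeshEdgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, backend in ROUTES.items():
            if self.path.startswith(prefix):
                # Forward the request and relay the upstream response.
                with urlopen(backend + self.path) as upstream:
                    body, status = upstream.read(), upstream.status
                self.send_response(status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "no route for path")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MeshEdgeHandler).serve_forever()
```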

It’s easy to see how a service mesh becomes a very critical infrastructure component.

If you are working your way through containerization then these may seem like very advanced concepts that you can postpone learning.  That blissful state will not last for long and I highly suggest being aware of the pattern before your development teams start writing their own versions of this complex abstraction layer.  Don’t assume this is a development concern: the service mesh is deeply tied to infrastructure and operations.

The service mesh is one of those tricky dev/ops intersections and should be discussed jointly.

Has your team been working with a service mesh?  We’d love to hear your stories about it!


Faster, Simpler AND Smaller – Immutable Provisioning with Docker Compose!

Nearly 10 TIMES faster system resets – that’s the result of fully enabling a multi-container immutable deployment of Digital Rebar.

I’ve been having a “containers all the way down” month since we launched the Digital Rebar deployment using Docker Compose. I don’t want to imply that we rubbed Docker on the platform and magic happened. The RackN team spent nearly a year building up the Consul integration and service wrappers for our platform before we were ready to fully migrate.

During the Digital Rebar migration, we took our already service-oriented code base and broke it into microservices. Specifically, the Digital Rebar core (the API and engine) now runs in its own container, and each service (DNS, DHCP, Provisioning, Logging, NTP, etc.) also has a dedicated container. Likewise, supporting items like Consul and PostgreSQL are, surprise, managed in dedicated containers too. Altogether, that’s over nine containers, and we continue to partition out services.

We use Docker Compose to coordinate the start-up and Consul to wire everything together. Both play a role, but Consul is the critical glue that allows Digital Rebar components to find each other. These were not random choices. We’ve been using a Docker package for over two years and using Consul service registration as an architectural choice for over a year.

Service registration plays a major role in the functional ops design because we’ve been wrapping datacenter services like DNS with APIs. Consul provides a separation between providing and consuming a service. Our previous design required us to track the running services ourselves. That worked until customers asked for pluggable services (and every customer needs pluggable services as they scale).
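
To illustrate that separation, here is a hedged sketch against Consul’s standard HTTP API (the local agent listens on port 8500 by default); the “dns” service name and port are illustrative, not our actual integration code.

```python
# Sketch of Consul's provider/consumer separation via its HTTP API.
# The "dns" service name and port are illustrative only.
import json
from urllib.request import Request, urlopen

CONSUL = "http://127.0.0.1:8500"

# Provider side: a service container registers itself with the local agent.
payload = json.dumps({"Name": "dns", "Port": 53}).encode()
urlopen(Request(f"{CONSUL}/v1/agent/service/register", data=payload, method="PUT"))

# Consumer side: anything needing DNS asks Consul where it lives,
# without caring which container (or how many) provides it.
with urlopen(f"{CONSUL}/v1/catalog/service/dns") as resp:
    for entry in json.loads(resp.read()):
        print(entry["Address"], entry["ServicePort"])
```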

Besides faster environment resets, there are several additional wins:

  1. More transparent operation – it’s obvious which containers provide each service, and it’s easy to monitor them individually.
  2. Easier distribution of services in the environment – Consul registration tells us where each service runs, so we don’t have to manage that ourselves.
  3. Redundant services become possible – it’s easy to spin up additional services, even on the same system.
  4. Services become pluggable – as long as the service registers and exposes an API, we can replace the implementation.
  5. No concern about which distribution is used – all our containers use an Ubuntu user space, but the host can be anything.
  6. Changes to components are more isolated – changing one service does not require a lot of downloading.

Docker and microservices are not magic, but the benefits are real. Be prepared to make architectural investments to realize the gains.