RackN founding member of LF Edge

Today, the Linux Foundation announced LF Edge to help build open source platforms and community around Edge Infrastructure. We’re just at the start of the effort with projects and vendors working in parallel without coordination. We think this LF Edge initiative is important because the Edge is already a patchwork of different data centers, infrastructure and platforms.

RackN has been designing our DEEP Edge federated control around open Digital Rebar because we know that having a shared abstraction at the physical layer makes everything we build above easier to sustain and more resilient.

We’re excited to be part of this effort and hope that you’ll collaborate with us to build some amazing Edge capabilities.

Note: We’re also watching the OpenStack Foundation’s Edge work. Hopefully, we’ll see collaboration between these groups since there are already overlapping projects.


Getting Edge-y at OpenStack Summit – 5 ways it’s an easy concept with hard delivery

The 2018 Vancouver OpenStack Summit is very focused on IT infrastructure at the Edge. It’s a fitting topic considering the telcos’ embrace of the project; however, building the highly distributed, small-footprint management needed for these environments is very different from OpenStack’s architectural priorities. There is a significant risk that the community’s bias toward its current code base (which still needs work to serve hyper-scale and enterprise data centers) will undermine progress in building suitable Edge IT solutions.

There are five significant ways that Edge is different from a “traditional” data center. We often discuss this on our L8istSh9y podcast, and it’s time to summarize them in a blog post.

IT infrastructure at the Edge is different than “edge” in general. Edge is often used as a superset of Internet of Things (IoT), personal devices (phones) and other emerging smart devices. Our interest here is not the devices but the services one network hop back that support data storage, processing, aggregation and sharing. To scale, these services need to move from homes to controlled environments in shared locations like 5G towers, points of presence (POPs) and regional data centers.

Unlike built-to-purpose edge devices, the edge infrastructure will be built on generic commodity hardware.

Here are five key ways that managing IT infrastructure at the edge is distinct from anything we’ve built so far:

  • Highly Distributed – Even at hyper-scale, we’re used to building cloud platforms in terms of tens of data centers; however, edge infrastructure sites will number in the thousands, even millions! That’s distinct management sites, not servers or cores. Since the sites will not have homogeneous hardware specifications, managing them requires zero-touch management that is vendor neutral, resilient and secure.
  • Low Latency Applications – Latency is the reason why Edge needs to be highly distributed.  Edge applications like AR/VR, autonomous robotics and even voice controls interact with humans (and other apps) in ways that require millisecond response times.  This speed-of-light limitation means that we cannot rely on hyper-scale data centers to consolidate infrastructure; instead, we have to push that infrastructure into the latency range of the users and devices.
  • Decentralized Data – A lot of data comes from all of these interactive edge devices.  In our multi-vendor innovative market, data from each location could end up being sprayed all over the planet.  Shared edge infrastructure provides an opportunity to aggregate this data locally where it can be shared and (maybe?) controlled. This is a very hard technical and business problem to solve.  While it’s easy to inject blockchain as a possible solution, the actual requirements are still evolving.
  • Remote, In-Environment Infrastructure – To make matters even harder, the sites are not traditional raised floor data centers with 24×7 attendants: most will be small, remote and unstaffed sites that require a truck roll for services.  Imagine an IT shed at the base of a vacant lot cell tower behind rusted chain link fences guarded by angry squirrels and monitored by underfunded FCC regulators.
  • Multi-Tenant and Trusted – Edge infrastructure will be a multi-tenant environment because simple economics drives as-a-Service style resource sharing. Unlike buy-on-credit-card public clouds, the participants in the edge will have deeper, trusted relationships with the service providers.  A high degree of trust is required because distributed application and data management must be coordinated between the Edge infrastructure manager and the application authors.  This level of integration requires deeper trust and inspection than current public clouds require.
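The latency point above is ultimately a physics problem, and a back-of-the-envelope calculation makes it concrete. This is a rough sketch with illustrative assumptions (fiber signal speed and distances are ballpark figures, not measurements):

```python
# Light in optical fiber travels at roughly 2/3 the vacuum speed of
# light, i.e. about 200 km per millisecond. These figures are
# illustrative assumptions for a back-of-the-envelope estimate.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay over fiber, ignoring
    routing, queuing and processing overhead (which add even more)."""
    return 2 * distance_km / FIBER_KM_PER_MS

# A regional hyper-scale data center ~1000 km away burns ~10 ms
# before any processing happens; a metro edge site ~20 km away
# burns ~0.2 ms, leaving the latency budget for the application.
for site, km in [("hyper-scale DC", 1000), ("metro edge site", 20)]:
    print(f"{site}: >= {round_trip_ms(km):.1f} ms round trip")
```

Even in this best case, distance alone can consume a millisecond-scale latency budget, which is why the infrastructure has to move closer to the devices.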

These are hard problems!  Solving them requires new thinking and tools that while cloud native in design, are not cloud tools.  We should not expect to lift-and-shift cloud patterns directly into edge because the requirements are fundamentally different.  This next wave of innovation requires building for an even more distributed and automated architecture.

I hope you’re as excited as we are about helping build infrastructure at the edge.  What do you think the challenges are? We’d like to hear from you!


Podcast – Dave Nielsen talks Redis and usage at the Edge

Joining us this week is Dave Nielsen, Head of Ecosystem Programs at Redis Labs. Dave provides background on the Redis project and discusses ideas for using Redis in edge devices.

Highlights

  • Background of Redis project and Redis Labs
  • Redis and Edge Computing
  • Where is the Edge?
  • Raspberry Pi for edge devices? It’s about management
  • Wasteland of IT management at the edge

Topic – Time (Minutes.Seconds)

Introduction – 0.00 – 1.40
What is Redis and Redis Labs? – 1.40 – 6.18
Redis product (in-memory data store) – 6.18 – 6.54
Need to store state of service (queue storage in memory) – 6.54 – 10.40
Using Redis at the edge (Dave’s definition of edge) – 10.40 – 15.01
Data generated at the edge that can be uploaded – 15.01 – 16.55
Redis and other platforms at the edge (Kubernetes, Docker) – 16.55 – 18.01
Does the edge need a platform and a winner? – 18.01 – 21.01
Global distribution to edge sites (Where is the edge?) – 21.01 – 24.55
Difference between CDN and containers (Storage vs Compute) – 24.55 – 26.10
Smaller devices and an intermediary edge hub (Raspberry Pi) – 26.10 – 34.10
Fragmented IoT device market and hubs – 34.10 – 36.44
Pushing updates at massive scale (infra, not data centers) – 36.44 – 40.55
How to get code out to edge devices? (uncharted territory) – 40.55 – 44.00
Wrap Up – 44.00 – END

Podcast Guest: Dave Nielsen, Head of Ecosystem Programs at Redis Labs

Dave works for Redis Labs organizing workshops, hackathons, meetups and other events to help developers learn when and how to use Redis. Dave is also the co-founder and lead organizer of CloudCamp, a community of Cloud Computing enthusiasts in over 100 cities around the world. Dave graduated from Cal Poly San Luis Obispo and has worked in developer relations for 12 years at companies like PayPal, Strikeiron and Platform D. Dave gained modest notoriety when he proposed to his girlfriend in the book “PayPal Hacks.”

Twitter: @davenielsen



Standardize your Operational Chaos for Provisioning Bliss

A common side-effect of rapid growth for any organization is the introduction of complexity and one-off solutions to keep things moving regardless of the long-term impact. Over time, these decisions add up to create a chaotic environment for IT teams who find themselves unable to find an appropriate time to stop and reset.  

IT operations teams also struggle in this environment because management knowledge for these technologies is not often shared appropriately, and it is common to have only one operator capable of supporting a given technology. Obviously, enterprises are at great risk when knowledge is not shared and there is no standard process across a team.

Issue: Infrastructure Management

  • One-Off Operations – Customized operations tooling per service leads to team dysfunction, as operators cannot support each other due to inexperience with unique tools
  • IT Productivity – Data centers struggle to meet business needs with no standard processes or tools; cloud platforms expose this deficiency and drive the business toward shadow IT

Impact: Delivery Times

  • Costly and Slow – Many data centers operate with dated processes and tools, causing significant delays in rolling out new services as well as maintaining existing ones
  • Cross Platform Support – IT teams MUST maintain control over company services by supporting internal data centers as well as cloud deployments from a single platform

RackN Solution: Global Standard

  • Operations Excellence – RackN’s foundational management ensures IT can operate services regardless of platform (e.g. data center, public cloud, etc)
  • Operational Standardization – RackN delivers a single platform for IT to leverage across deployment vehicles as well as ensure IT team efficiency across services

The RackN team is ready to start you on the path to operations excellence:

Take part in the Digital Rebar Community
