Even before the founders of RackN conceived of RackN as a company, we were struggling with what it really takes to support hardware. How do you get physical servers and other data center infrastructure components to fit together as one system?
We wanted to solve this hard problem. However, we found that it isn’t easy to remove the inherent complexity of a data center. When you get down to it, the challenges with physical infrastructure exist because those components are in place to solve other complex systems problems.
Common Data Center Challenges
Keeping firmware updated at the correct version is a challenge. Bootstrapping a server and installing the right firmware is another. Delivering faster and better network interfaces, while maintaining software across generations of those interfaces, is a huge challenge.
Something as seemingly benign as changing the instruction set of a CPU impacts everything built on top of it. Changing the speed of RAM can have profound impacts downstream on something as innocuous as the Python interpreter. These simple operational tasks mask interlocked problems of high complexity. And yet, solving them in a systematic way is absolutely essential.
Before RackN was founded, we recognized that each server is a snowflake. Even a global server vendor must maintain different generations of servers that have numerous models, with each model having unique problems. The tooling used consistently in one generation may not work consistently in the next. This is because each new generation of servers is sure to have innovations that couldn’t have been anticipated by existing tooling.
We founded RackN to address that problem. It didn’t make sense to try to alleviate the problems by simplifying hardware. And we didn’t think it would work to dictate that everybody use the same generation of server, or even the same infrastructure from the same vendor. We felt just the opposite was needed.
We asked ourselves: how do we create consistent, repeatable processes across heterogeneous infrastructure? RackN Digital Rebar is our answer for that. We’ve built a platform that allows consistent, repeatable results across hardware generations, types, and vendors.
Supporting Hardware is Not Simple
Our customers always ask whether RackN can support the hardware and configurations they depend on. But our answer is not a simple yes or no, because supporting hardware is not simple.
Generally, we know the workflows that we build work because we test them. We know the basic boot, provision, and deploy operations should work in most environments. But the key to success is that Digital Rebar is configurable to match each site’s unique environment.
Modularization Is Key to Supporting Hardware
There are many nuanced questions when it comes to supporting hardware:
- Will BIOS be configurable?
- Can out-of-band management protocols work correctly?
It is not practical for RackN to have a lab of every type of hardware and configuration that our customers need, in part because it’s just too much gear. So how does Digital Rebar provide forward support for infrastructure without having to maintain a lab of every configuration in every server?
We modularize our hardware support. This means our system does not depend on required infrastructure elements such as RAID, BIOS, or out-of-band management configurations. It can use those things and can add them in a modular way, but they are not dependencies. Modularizing our support ensures that the system will not fail if one of those components isn’t available or doesn’t work.
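This modular approach can be sketched as optional capability handlers around a core workflow: steps like RAID, BIOS, and out-of-band management are pluggable, and the core still completes if a handler is missing or fails. This is a minimal illustration only; the class and handler names below are hypothetical, not actual Digital Rebar code.

```python
# Hypothetical sketch of modular hardware support: optional steps
# (RAID, BIOS, out-of-band management) are plugins, not dependencies.
# None of these names come from Digital Rebar itself.
from typing import Callable, Dict, List


class Provisioner:
    def __init__(self) -> None:
        # Optional capability handlers; the core workflow runs without them.
        self.handlers: Dict[str, Callable[[], bool]] = {}

    def register(self, capability: str, handler: Callable[[], bool]) -> None:
        self.handlers[capability] = handler

    def provision(self) -> List[str]:
        log = []
        for capability in ("raid", "bios", "oob_management"):
            handler = self.handlers.get(capability)
            if handler is None:
                # Not a dependency: skip and keep going.
                log.append(f"{capability}: skipped (no handler registered)")
                continue
            try:
                ok = handler()
                log.append(f"{capability}: {'configured' if ok else 'failed, continuing'}")
            except Exception as exc:
                # A broken optional component must not fail the system.
                log.append(f"{capability}: error ({exc}), continuing")
        log.append("core provisioning: complete")
        return log


p = Provisioner()
p.register("bios", lambda: True)  # only BIOS support is available on this box
for line in p.provision():
    print(line)
```

Because the optional steps are additive rather than required, the same core workflow runs on a box with full out-of-band management and on one with none at all.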
Next, we rely on vendor tools. This keeps Digital Rebar out of the loop to build and maintain support for things like firmware, out-of-band management, RAID, BIOS, or cloud interfaces. We’ve built software that is designed to depend on the vendor platforms’ own APIs and command lines.
Sometimes relying on vendor tools burns us. A vendor might change a tool we depend on, or the tool might be unreliable or behave strangely. So we work to accommodate that. Even so, we know that vendors understand their equipment better than we ever will.
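Accommodating flaky vendor tooling usually means wrapping it defensively: bounded timeouts, retries on transient failures, and a clear error when the tool truly won’t cooperate. The sketch below shows that pattern under an assumed, invented command name (`vendor-fwtool`); it is not a real vendor CLI or part of Digital Rebar.

```python
# Hypothetical sketch: defensively wrapping a vendor CLI whose behavior
# can change between releases. "vendor-fwtool" is an invented command.
import subprocess


def run_vendor_tool(args, retries=3):
    """Run a vendor command, retrying failures with a bounded timeout,
    and raise a clear error instead of trusting the tool blindly."""
    last_error = None
    for _attempt in range(retries):
        try:
            result = subprocess.run(
                ["vendor-fwtool"] + list(args),
                capture_output=True, text=True, timeout=120, check=True,
            )
            return result.stdout
        except (subprocess.CalledProcessError,
                subprocess.TimeoutExpired,
                OSError) as exc:
            # Tool missing, crashed, or hung: remember the error and retry.
            last_error = exc
    raise RuntimeError(
        f"vendor-fwtool failed after {retries} attempts: {last_error}"
    )
```

The point of the wrapper is that weird vendor behavior surfaces as one well-described failure at the call site, rather than leaking half-finished state into the rest of the workflow.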
We Help Our Customers Be Self-Sufficient
Self-sufficiency is deep in RackN’s DNA. We can’t test every customer scenario, and we know that our customers will always test and check every configuration in their infrastructure. Not only do we expect this of our customers, we go further and advise them to create a resilient development, test, and production process that puts their own systems through their paces.
That ensures that they own the operation of their own infrastructure. Ultimately, even if RackN could test every customer’s exact configuration, our customers would still own the operation of that gear. What we’ve done is eliminate the toil that goes into building all those individual pieces and fitting them together. We’ve made running a data center more manageable and practical.
We Bring the Automation Tools
Digital Rebar is a war chest of best practices, processes, standards, verification steps, and proven integrations. We know when vendor tools are used in a consistent, repeatable way (the way they were designed by the vendors), data center operators find repeatable results across their fleets.
This knowledge, along with our extensive catalogue, has allowed our customers to take Dell, HP, Lenovo, Supermicro, UCS, and Raspberry Pi gear and treat it in a repeatable way that helps them solve their unique, hard infrastructure problems.
When we improve in one place, be it our catalogue or our processes, all of our customers get the benefits because we bring in consistency.
Our customers have their own unique, specialized configurations that solve their organizations’ hard problems. The modularity that we expose in Digital Rebar has translated into remarkable cross-industry standardization that dramatically accelerates our customers’ delivery of infrastructure.