Joining us this week from the KeyBanc Emerging Tech Summit is Syed Zaeem Hosain, CTO and Founder of Aeris.
Aeris is a technology partner with a proven history of helping companies unlock the value of IoT. For more than a decade, we’ve powered critical projects for some of the most demanding customers of IoT services. Aeris strives to fundamentally improve businesses by dramatically reducing costs, accelerating time-to-market, and enabling new revenue streams. Built from the ground up for IoT and globally tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler.
- 0 min 48 sec: Introduction of Guest
- 1 min 27 sec: Edge is already here for Aeris – Mobile Data Presence
- Support customers who have a need for long distance data transport over cellular
- Focused on device connectivity
- Edge devices will have processing power of their own
- 4 min 31 sec: Car as Edge Data Center Issues
- Better to move processing off the car? Cost issue for sending data via cellular
- Tire Pressure System Example
- 5G Cost may not be dramatically lower as people expect
- 7 min 24 sec: Can’t Send All Data Back – Need Local Machine Learning
- Great deal of irrelevant data (e.g. Tire pressure)
- Can send lots of data to train models as well – Airplane example
- 12 min 04 sec: Dave McCrory Podcast on Airplane Use Case / Data Gravity
- Security in Data Gathering Algorithms – Must validate the source of data
- Use of aggregated data to monitor data validity
- 17 min 11 sec: Sharing of Data in Edge Models
- Issues with Security, Ownership, etc of data
- Windshield wipers on cars for weather info
- Source of data – how do sources participate in the money chain?
- 21 min 45 sec: Billing for Pennies is a Problem
- Billing systems are an issue for tracking revenue
- ROI in IoT space is an open issue
- 23 min 36 sec: Blockchain can help here?
- Chain would grow too fast in an IoT world (e.g. 2 billion messages a day)
- Val Blockchain Podcast
- 25 min 21 sec: What is the ROI for adding more devices into IoT model
- Medical sensors (skin monitoring, pressure points in eye for monitoring)
- Human privacy is a massive issue in this space
- 34 min 02 sec: BOOK – Definitive Guide to IoT for Business (Free)
- 35 min 37 sec: Wrap-Up
Podcast Guest: Syed Zaeem Hosain, CTO and Founder of Aeris
Mr. Hosain is responsible for the architecture and future direction of Aeris’ networks and technology strategy. He joined Aeris in 1996 as Vice President, Engineering and is a member of the founding executive team of Aeris. Mr. Hosain has more than 38 years of experience in the semiconductor, computer, and telecommunications industries, including product development, architecture design, and technical management. Prior to joining Aeris, he held senior engineering and management positions at Analog Devices, Cypress Semiconductor, CAD National, and ESS Technology. Mr. Hosain is Chairman of the International Forum on ANSI‐41 Standards Technology (IFAST) and Chairman of the IoT M2M Council (IMC). He holds a Bachelor of Science degree in Computer Science and Engineering from the Massachusetts Institute of Technology, Cambridge, MA.
“I don’t care about the tech – what I really want to hear is how this product fits in our processes and helps our people get more done.”
That was the message my co-founder and I heard from an executive at a major bank last week. For us, it was both déjà vu and a major relief, because we’d just presented at the CableLabs Summer Showcase about the importance of aligning people, process and technology. The executive was pleased with how RackN had achieved that balance.
It wasn’t always that way: focusing on usability and simplicity first over features is scary.
One of the most humbling startup lessons is that making great technology is not about the technology. Showing a 10x (or 100x!) improvement in provisioning speed misses the real problem for IT operators. Happily, we had some great early users who got excited about the vision for simple tooling that we built around Digital Rebar Provision v3. Equally important was a deeply experienced team who insisted on building great tests, docs and support tooling from day 0.
We are thrilled to watch as new users are able to learn, adopt and grow their use of our open technology with minimal help from RackN. Even without the 10x performance components RackN has added, they have been able to achieve significant time and automation improvements in their existing operational processes. That means simpler processes, less IT complexity and more time for solving important problems.
The bank executive wanted the people and process benefits: our job with technology was to enable that first and then get out of the way. It’s a much harder job than “make it faster” but, ultimately, much more rewarding.
The 2018 Vancouver OpenStack Summit is very focused on IT infrastructure at the Edge. It’s a fitting topic considering the telcos’ embrace of the project; however, building the highly distributed, small-footprint management needed for these environments is very different from OpenStack’s architectural priorities. There is a significant risk that the community’s bias towards its current code base (which still has work needed to serve hyper-scale and enterprise data centers) will undermine progress in building suitable Edge IT solutions.
There are five significant ways that Edge is different than “traditional” datacenter. We often discuss this on our L8istSh9y podcast and it’s time to summarize them in a blog post.
IT infrastructure at the Edge is different than “edge” in general. Edge is often used as a superset of Internet of Things (IoT), personal devices (phones) and other emerging smart devices. Our interest here is not the devices but the services that are the next hop back supporting data storage, processing, aggregation and sharing. To scale, these services need to move from homes to controlled environments in shared locations like 5G towers, POP and regional data centers.
Unlike built-to-purpose edge devices, the edge infrastructure will be built on generic commodity hardware.
Here are five key ways that managing IT infrastructure at the edge is distinct from anything we’ve built so far:
- Highly Distributed – Even at hyper-scale, we’re used to building cloud platforms in terms of tens of data centers; however, edge infrastructure sites will number in the thousands, even millions! That’s distinct management sites, not servers or cores. Since the sites will not have homogeneous hardware specifications, the management of these sites requires zero-touch management that is vendor neutral, resilient and secure.
- Low Latency Applications – Latency is the reason why Edge needs to be highly distributed. Edge applications like A/R, V/R, autonomous robotics and even voice controls interact with humans (and other apps) in ways that require millisecond-level response times. This speed of light limitation means that we cannot rely on hyper-scale data centers to consolidate infrastructure; instead, we have to push that infrastructure into the latency range of the users and devices.
- Decentralized Data – A lot of data comes from all of these interactive edge devices. In our multi-vendor innovative market, data from each location could end up being sprayed all over the planet. Shared edge infrastructure provides an opportunity to aggregate this data locally where it can be shared and (maybe?) controlled. This is a very hard technical and business problem to solve. While it’s easy to inject blockchain as a possible solution, the actual requirements are still evolving.
- Remote, In-Environment Infrastructure – To make matters even harder, the sites are not traditional raised floor data centers with 24×7 attendants: most will be small, remote and unstaffed sites that require a truck roll for services. Imagine an IT shed at the base of a vacant lot cell tower behind rusted chain link fences guarded by angry squirrels and monitored by underfunded FCC regulators.
- Multi-Tenant and Trusted – Edge infrastructure will be a multi-tenant environment because simple economics drive as-a-Service style resource sharing. Unlike buy-on-credit-card public clouds, the participants in the edge will have deeper, trusted relationships with the service providers. A high degree of trust is required because distributed application and data management must be coordinated between the Edge infrastructure manager and the application authors. This level of integration requires deeper trust and inspection than current public clouds require.
These are hard problems! Solving them requires new thinking and tools that, while cloud native in design, are not cloud tools. We should not expect to lift-and-shift cloud patterns directly into edge because the requirements are fundamentally different. This next wave of innovation requires building for an even more distributed and automated architecture.
I hope you’re as excited as we are about helping build infrastructure at the edge. What do you think the challenges are? We’d like to hear from you!
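The speed-of-light point above can be made concrete with a back-of-the-envelope calculation. Here is a minimal Python sketch, assuming the commonly cited figure of roughly 200,000 km/s for light in fiber (about two-thirds of c); the numbers are illustrative, not from the original post:

```python
# Speed of light in glass fiber is roughly 2/3 of c, about 200,000 km/s.
SPEED_OF_LIGHT_FIBER_KM_S = 200_000

def max_one_way_distance_km(round_trip_budget_ms: float) -> float:
    """Upper bound on how far away an edge site can be for a given
    round-trip latency budget, ignoring processing and queueing delay
    (so the real limit is even tighter)."""
    one_way_seconds = (round_trip_budget_ms / 1000.0) / 2.0
    return SPEED_OF_LIGHT_FIBER_KM_S * one_way_seconds

# A 5 ms round-trip budget caps the site at 500 km before any
# processing time is spent -- hence sites must number in the thousands.
print(max_one_way_distance_km(5))
```

Even before queueing and compute overhead, a tight latency budget shrinks the serviceable radius of a data center dramatically, which is the core argument for highly distributed edge sites.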
Background: This post was inspired by a multi-cloud session at IBM Think2018, where I am attending as a guest of IBM. Providing hybrid solutions is a priority for IBM, and its customers are clearly looking for multi-cloud options. In this way, IBM has made a choice to support competitive platforms. This post explores why they would do that.
There is considerable angst and hype over the terms multi-cloud and hybrid-cloud. While it would be much simpler if companies could silo into a single platform, innovation and economics requires a multi-party approach. The problem is NOT that we want to have choice and multiple suppliers. The problem is that we are moving so quickly that there is minimal interoperability and minimal efforts to create interoperability.
To drive interoperability, we need a strong commercial incentive to create a neutral ecosystem.
Even something with a clear ANSI standard like SQL has interoperability challenges. It also seems like the software industry has given up on standards in favor of APIs and rapid innovation. The reality on the ground is that technology is fundamentally heterogeneous and changing. For this reason, mono-anything is a myth and hybrid is really the status quo.
If we accept multi-* as the starting point, then we need to invest in portability and avoid platform assumptions when we build automation. Good design is to assume change at multiple points in your stack. Automation itself is a key requirement because it enables rapid iterative build, test and deploy cycles. It is not enough to automate for day 1; the key to working multi-* infrastructure is a continuous deployment pipeline.
Pipelines provide insurance for hybrid infrastructure by exposing issues quickly before they accumulate technical debt.
That means the utility of tools like Terraform, Ansible or Docker is limited by how often you exercise them. Ideally, we’d build abstraction automation layers above these primitives; however, this has proven very difficult in practice. The degrees of variation between environments and the pace of innovation make it impossible to standardize without becoming very restrictive. This may be possible for a single company but is not practical for a vendor trying to support many customers with a single platform.
This means that hybrid, while required in the market, carries an integration tax that needs to be considered.
My objective for discussing Data Center 2020 topics is to find ways to lower that tax and improve the outcome. I’m interested in hearing your opinion about this challenge and if you’ve found ways to solve it.
Counterpoint Addendum: if you are in a position to avoid multi-* deployments (e.g. a start-up) then you should consider that option. There is measurable overhead to heterogeneous automation; however, I’ve found the tipping point away from a mono-stack can be surprisingly low, and committing to a vertical stack does make applications less resilient to innovation.
Coming direct from Cambodia is a rare podcast with Jim Plamondon, who pioneered the API-and-developer-evangelism approach to building software platforms at Microsoft. In this podcast, he talks about the early history of developer evangelism at Apple and Microsoft, the current state of open source, and the coming competition from China, with its roots in the developing world.
- Soviet Agriculture and Technology Market Comparison
- Why NeXT and Apple Failed with Software Industry but iPhone Succeeded
- China Industry Takeover is Coming: Product Price Points
Books referenced in the podcast (links to Amazon, we have no agreement with them based on your click/purchase):
- Game of X v.1: Xbox (Volume 1) by Rusel DeMaria
- Game of X v.2: The Long Road to Xbox (Volume 2) by Rusel DeMaria
Note – If you are easily offended by strong language, please consider skipping this podcast.
Topic Time (Minutes.Seconds)
Introduction 0.0 – 0.33
Creator of Developer Evangelism 0.33 – 4.58
Plamondon Files 4.58 – 5.53
Working with Hostile Community 5.53 – 7.02
Android vs iOS Platform 7.02 – 7.46
Study: Apple vs Windows 7.46 – 9.13
PC Industry – Mostly All Alive 9.13 – 10.00
Open Source has same Struggles 10.00 – 12.21 (Focus on individual not technology)
Cargo Cult & Hype Cycle 12.21 – 16.11 (VR and AI are on version 3; not new at all)
Security Breach 16.11 – 17.01
Back to Hype Cycle 17.01 – 19.03 (Markets find a solution that makes profits)
Latest thoughts on Open Source 19.03 – 23.25 (Zipf’s Law)
Time Buying Strategy 23.25 – 25.07 (e.g. IBM Server response to Amazon S3)
Microsoft Anti-Trust & Apple Mgmt 25.07 – 28.45 (NeXT Failure)
iPhone walled Garden Worked 28.45 – 31.10
Android will defeat iPhone 31.10 – 32.33
Internet Competition dead? 32.33 – 36.07 (Here comes China)
Alibaba moves West 36.07 – 39.45 (Take over 3rd world then US/Europe)
Per Capita Income Averages 39.45 – 43.55 (Own tiny consumer market then move up)
China and Open Source 43.55 – 47.18
Western vs Asian Gov’ts 47.18 – 49.50 (Go learn Mandarin)
Wrap Up 49.50 – END
Podcast Guest: Jim Plamondon
Jim Plamondon is a retired Technology Evangelist, noted for formalizing Microsoft’s Technology Evangelism practices in the 1990’s.
Yesterday, AWS confirmed that it actually uses physical servers to run its cloud infrastructure and, gasp, no one was surprised. The actual news about the i3.metal instances by AWS Chief Evangelist Jeff Barr shows that bare metal is being treated as just another AMI-managed instance type (see also Geekwire, Techcrunch, Venture Beat). For AWS users, there’s no drama here because it’s an incremental add to processes they already know well.
Infrastructure as a Service (IaaS) is fundamentally about automation and API not the type of infrastructure.
Lack of drama is a key principle at RackN: provisioning hardware should be as easy to automate as a virtual machine. The addition of bare metal to the AWS instance types validates two important parts of the AWS cloud automation story. First, having control of the metal is valuable and, second, operators expect image (AMI) based deployments.
There are interesting AWS-specific items to unpack around this bare metal announcement that show otherwise hidden details about AWS infrastructure.
It took Amazon a long time to create this offering because allowing users to access bare metal requires a specialized degree of isolation inside their massive data center. It’s only recently possible in AWS data centers because of their custom hardware and firmware. These changes provide AWS with a hidden control layer under the operating system abstraction. This does not mean everyone needs this hardware – it’s an AWS specific need based on their architecture.
It’s not a surprise that AWS has built cloud-infrastructure-optimized hardware. All the major cloud providers design purpose-built machines with specialized firmware to handle their scale, network, security and management challenges.
The specialized hardware may create challenges for users compared to regular virtualized servers. There are already a few added requirements for AMIs before they can run on the i3.metal instance. Any image-to-metal deployment process requires a degree of matching between the image and the target server. That’s the reason that Digital Rebar defaults to safer (but slower) kickstart and pre-seed processes.
Overall, this bare metal announcement is signifying nothing dramatic and that’s a very good thing.
Automating every layer of a data center should be the expected default. Our mission has been to make metal just another type of automated infrastructure and we’re glad to have AWS finally get on the same page with us.
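The "just another instance type" point is visible in the API itself: launching bare metal on AWS uses the same RunInstances call as any VM, with only the InstanceType string differing. A minimal sketch (the AMI ID is a placeholder; the parameter names are from the EC2 RunInstances API):

```python
def run_instances_params(image_id: str, instance_type: str) -> dict:
    """Build the same RunInstances parameters used for any EC2 VM;
    bare metal is just a different InstanceType string."""
    return {
        "ImageId": image_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }

# With boto3 you would pass these straight through, e.g.:
#   boto3.client("ec2").run_instances(**run_instances_params(...))
params = run_instances_params("ami-0123456789abcdef0", "i3.metal")
print(params["InstanceType"])
```

Nothing in the calling convention reveals that the target is physical hardware, which is exactly the lack of drama the post celebrates.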
Joining this week’s L8ist Sh9y Podcast is Bernard Golden, a long-time tech innovator and visionary and one of the ten most influential people in cloud computing according to Wired.com. Bernard and Rob Hirschfeld discuss the latest blog from Bernard and the impact of Edge Computing and the reality of implementing this concept. We are also introduced to the Container Hotel.
Topic Time (Minutes:Seconds)
Introduction 0:00 – 0:39
Edge Computing Blog 0:39 – 3:35 (Bernard Blog)
Other Non-Control Loop Use Cases 3:35 – 7:10
Environmental Computing / IoT 7:10 – 9:05
Fallacy of Vendor-based Solutions 9:05 – 13:25
How to Manage Edge Hardware 13:25 – 16:00
Container Hotel 16:00 – 16:50
No One Cares about Hardware 16:50 – 23:40
Cloud Extensions – Not Mini Clouds 23:40 – 27:05
Like Cloud, but Own Data Center Can’t Do What I Want 27:05 – 29:55
Wrap-Up 29:55 – END
Podcast Guest: Bernard Golden
Bernard Golden is a long-time tech innovator and visionary. Wired.com named him one of the ten most influential people in cloud computing, and his blog has been listed in over a dozen “best of” lists. He is the author/co-author of five books, including Amazon Web Services for Dummies, the best selling cloud computing book ever.
From 2012 to 2015 Bernard served as an executive at two cloud computing software startups: Enstratius (acquired by Dell, 2013) and ActiveState Software (cloud product line acquired by HPE, 2015).
After leaving ActiveState, Bernard began researching and consulting across a number of new technologies, including machine learning, drones, genomics, and 3D printing. One, however, stood out as the next innovation platform that will transform our society: blockchain.
Rob Hirschfeld, CEO/Co-Founder of RackN speaks with David Linthicum, an internationally known cloud computing and SOA expert and Sr VP at Cloud Technology Partners. Rob and David cover a variety of IT topics in this podcast including a Buck Rodgers quote from David.
Introduction & Ask Podcaster 0:00 – 3:20
Lack of Skillsets in IT 3:20 – 5:43
Accumulation of Technical Debt 5:43 – 10:57
DevOps and Automation 10:57 – 14:08
CI and CD 14:08 – 15:48
When Not to Go CI and CD 15:48 – 18:00
What to pay attention to in cloud? 18:00 – 20:17
How to select the right cloud tech? 20:17 – 23:49
Hybrid is best of breed tech 23:49 – 25:39
Are Containers the silver bullet? 25:39 – 29:14
Serverless vs Containers 29:14 – 33:16
Kubernetes – Meso – Docker Opinion 33:16 – 36:04
Predictions and Trends 36:04 – 37:10
Edge Computing 37:10 – 38:25
Wrap Up – where to find David L. 38:25 – END
Podcast Guest – David Linthicum @DavidLinthicum
Dave Linthicum is Sr. VP at Cloud Technology Partners, and an internationally known cloud computing and SOA expert. He is a sought-after consultant, speaker, and blogger. In his career, Dave has formed or enhanced many of the ideas behind modern distributed computing including EAI, B2B Application Integration, and SOA, approaches and technologies in wide use today. In addition, he is the Editor-in-Chief of SYS-CON’s Virtualization Journal.
For the last 10 years, he has focused on the technology and strategies around cloud computing, including working with several cloud computing startups. His industry experience includes tenure as CTO and CEO of several successful software and cloud computing companies, and upper-level management positions in Fortune 500 companies. In addition, he was an associate professor of computer science for eight years, and continues to lecture at major technical colleges and universities, including University of Virginia and Arizona State University. He keynotes at many leading technology conferences, and has several well-read columns and blogs. Linthicum has authored 10 books, including the ground-breaking “Enterprise Application Integration” and “B2B Application Integration.” You can reach him at firstname.lastname@example.org. Or follow him on Twitter. Or view his profile on LinkedIn.
I love great conversations about technology – especially ones where the answer is not very neatly settled into winners and losers (which is ALL of them in IT). I’m excited that RackN has (re)launched the L8ist Sh9y (aka Latest Shiny) podcast around this exact theme.
Please check out the deep and thoughtful discussion I just had with Mark Thiele (notes) of Apcera, where we covered Mark’s thoughts on why public cloud will be under 20% of IT, and tackled culture issues head on.
Spoiler: we have David Linthicum coming next, SO SUBSCRIBE.
We feel there’s still room for deep discussions specifically around automated IT Operations in cloud, data center and edge; consequently, we’re branching out to start including deep interviews in addition to our initial stable of deep technical IT Ops topics like Terraform, Edge Computing, GartnerSYM review, Kubernetes and, of course, our own Digital Rebar.