Rishidot Research’s Krishnan Subramanian Interviews EDJX CEO John Cowan

Krishnan and John discuss distributed infrastructure, serverless, and the future of edge and cloud.

Yesterday, Krishnan Subramanian of Rishidot Research and EDJX CEO John Cowan sat down to talk about all things distributed infrastructure, serverless, and the future of cloud:

Krish (01:31):

Welcome to the Modern Enterprise Tech Show. This is Krish Subramanian from Rishidot Research, where every week we talk about some of the hot topics in the industry. And today we are going to talk about distributed infrastructure. I have with me John Cowan, CEO of EDJX. He’s going to join us to discuss distributed infrastructure, serverless, and more.

Krish (02:00):
John, welcome to the show. Can you tell us about yourself and about what EDJX is doing, and then let’s get going.

John (02:07):
Sure, absolutely. Thanks for having me, Krish. It’s good to be on the show. I’m John Cowan, CEO and co-founder of EDJX, where we are building the world’s largest distributed cloud services platform at the edge for developers building everything from industrial IoT to urban IoT solutions for the world of connected things.
Krish (02:33):
Awesome. I think this episode is going to be interesting in that context. There has been quite a bit of talk about public clouds, hybrid cloud, multi-cloud, then hybrid multi-cloud, you name it! Any type of cloud becomes a point of discussion. People are talking about public versus private cloud, and I think that’s meaningless. With edge computing coming into the picture, I think we are going beyond giving importance to infrastructure. I’m not saying infrastructure doesn’t matter; infrastructure will be there. It doesn’t matter whether we are taking virtual machines and containers from the cloud, or it’s something running on the edge network, or even spread across multiple operators. If your application needs a certain type of infrastructure, you go with it. You don’t care whether it’s a single cloud provider or it’s distributed everywhere. Just go with what your application needs. That’s my thinking, and it evolved over a period of time. I got attached to what Rob Hirschfeld, CEO of RackN, said: we should not use the terms public cloud, private cloud, multi-cloud, etc. We should rather use the term distributed infrastructure. The infrastructure exists underneath, and cloud is about giving some abstraction and removing the complexities. If that is the case, why can’t we take it all the way up and put an abstraction at the developer level, so that developers can just deploy their applications onto the cloud without having to handle anything underneath? In the 2012-2013 timeframe we talked about PaaS, and now serverless is becoming interesting. I think we have a lot to talk about on distributed infrastructure and how serverless fits in. John, what’s your take on this?

John (04:28):
Well, I think that talking about infrastructure in the context of developers would be like automobile drivers talking about asphalt or pavement. It’s really that kind of analogy, where as the driver I really don’t care about the road, necessarily. Sure, do I care about potholes and distance and all that? Yeah, absolutely I do. But I take for granted, as I should, that there is something underneath my tires (so to speak) as I’m hurtling down the road. That’s what infrastructure needs to become in the conversation around cloud, development, etc. My perspective (our perspective) on where infrastructure is going for developers is really about eliminating friction. We can talk about hybrid cloud or multi-cloud or all these kinds of terms (and for the record, I think hybrid cloud was a marketing term developed by large companies that lost the cloud war to Amazon Web Services). But monikers and marketing terms aside, at the end of the day what it really comes down to is eliminating friction for developers in being able to write, test, and push planet-scale code and planet-scale apps and data. And serverless is THAT. That’s what serverless is, as we view it, for the developer ecosystem. I liken it to the difference between going into a bank to conduct your business versus using online banking. Yes, absolutely you can get in a car, drive five miles to your local bank, wait in line, talk to a human being, get back in your car, and go back home. But all of that adds overhead, or friction, to your day. Why not just log on and do your banking via an internet browser? That’s what serverless means in the context of being able to rapidly write, test, and push code and data to whatever platform you happen to be running on.

Krish (06:35):
Yeah. Before we dig deeply into serverless, I want to talk about the underlying infrastructure. In 2009, I wrote a blog post about a P2P cloud, and you also have similar ideas. Cloud computing as it exists today has a centralized cloud somewhere up there, with the applications running there accessed by multiple devices, whether a desktop, a mobile phone, or some other form factor. So it’s more like a client-server model. My thinking at the time, in 2009, was that mobile phones were gaining a lot of traction and had a lot of compute capability which I think was unused. That’s why I was talking about that P2P kind of model. But now, with IoT and edge computing coming into the mix, I think the way we even think about infrastructure changes, from a more centralized cloud approach to a more distributed, P2P-cloud kind of approach. What is your thinking on this?

John (07:43):
So that’s a really good summary, Krish. The history of computing has been a constant pendulum swinging between centralized and decentralized architectures, right? Mainframes, highly centralized compute-as-a-service, gave way to the client-server era, in which every business of any size, shape, and form could have its own server in its own broom closet, so to speak. And the explosion of the third generation of the internet gave us cloud computing, where any rank-and-file startup could have at its behest the power and scale of Amazon Web Services or Microsoft or Google, et cetera, on a highly centralized basis. The internet of things is the era of the internet in which that pendulum swings back to a decentralized architecture. And that has everything to do with the fact that high-performance computing needs to be located in close proximity to the fourth generation of internet users. Because it’s not you and me, Krish: the fourth generation is going to be about machines consuming resources to do autonomous things, to do automation, predictive analytics, artificial intelligence, virtual reality, augmented reality, the plethora of new apps, if you will, that make up the next generation of the internet. And those users, if you will, not only number in the high billions to trillions over the next 10 years, but they will require compute located in proximity in order to process information in near real time to do the things that we’re going to expect them to do.

John (09:25):
Some folks I’ve talked to have said, “You know what? We can still use public cloud for that, because latency matters, but not that much.” But look, at the end of the day, here’s the difference. The difference is that the applications of the future are not about me going online and making a couple of clicks on Facebook, or a couple of clicks to pay a bill online, or those kinds of things. The fourth generation of the internet is going to be about my car hurtling down the road by itself and needing to tell the difference between a pylon and a pedestrian. Can you really live with that kind of latency at scale, with millions of cars doing that simultaneously across high-density intersections? Can you rely on that latency to not kill people, to put it one way? And the short answer is no. You’re going to need the cloud to exist in very close proximity to where those machines are when they require it.

Krish (10:18):
Absolutely, I totally agree. In fact, latency is definitely a big driver for this kind of distributed infrastructure, but I also think that not everyone wants to store all the data. If I don’t want to store my data beyond just processing it, why am I going to send it all the way to the cloud and pay a lot more, not only for the network bandwidth but also for the storage? Latency is a critical thing in terms of driving the user experience, and from a security point of view it also becomes very important. But at the same time, you don’t want to keep some data forever; you can just process it and dispose of it. In such cases, you also don’t want to move the data all the way to the cloud; you want to take the compute closer to where the data is generated and process it there.

Krish (11:11):
So with this, I totally see that distributed infrastructure is going to be a critical thing in the future, but it brings a lot of complexity. One is operational complexity, and even for developers, deploying applications is going to be a problem. So we need to solve these issues: solve the operational complexity of managing all the infrastructure and, more importantly, make sure we empower developers by giving them the right abstraction to deploy their applications. And future applications are going to be more disposable. It’s not just the infrastructure that’s disposable; even the applications are going to be disposable. So if you have to write small, ephemeral functions that do a particular task at the edge and then probably get rid of the application itself, serverless is probably the right abstraction for that. In this distributed world, how does serverless fit in?

John (12:17):
The important thing to recognize about why serverless is such a fantastic coupling or pairing with edge and IoT is that serverless is by nature distributed. James Thomason, my co-founder, and I dreamt about this a couple of years ago, which is why we started EDJX. We said, “Why is it so complicated? Why is it so complex to build planet-scale applications?” It is extremely difficult for developers. Except for the ones that like to tinker (to use James’ quote from his presentation at Edge Computing World), I don’t know a single developer that wants to have to spend a bunch of time wrangling orchestrators. Every minute that I have to spend thinking about provisioning and orchestration and endpoints and all that kind of stuff is time I’m not spending writing code, which is what drives value, whether for my enterprise IoT project, for my startup, or what have you. So from our perspective, it’s really about taking the complexity out of things for developers, in terms of the velocity at which decentralized applications written in a serverless framework can be pushed to a global network.

Krish (13:40):
Yeah, I know you’re going to talk about an announcement you recently made, but before we go there, I want to understand what kind of use cases having serverless distributed at planet scale actually solves. Can you list some of the use cases, please?

John (14:02):
Yeah. So I’ll describe one without mentioning certain names; I’ll describe a couple of things I think are fascinating. Today, machine learning systems exist to do fascinating things like audio and acoustic analysis for something like a gunshot. We, as an industry, can do fascinating things. As post-mortem analysis, we have partners and customers with technology that can tell the kind of firearm that was fired and its proximal distance from the audio sensor. Really, really insightful analytics on something like a gunshot. But what’s very difficult is applying those learning models in real time, such that the inference can be impactful to whoever needs to be informed about that information.

John (15:01):
For example, for a first responder, a police officer, or a soldier of some kind, telling the difference between a firecracker and an AK-47 can mean the difference between a civilian life or a police officer’s life lost. Those are the kinds of things serverless at the edge enables: you’re able to process information in real time and display the results of that real-time analysis on a vehicle’s heads-up display, a mobile phone, or a tablet of some kind to inform threat response. That’s just one example, but I think those kinds of solutions are going to be important as we move forward as a society, and it’s top of mind for a lot of folks. There are plenty of others in the intelligent traffic system space, thinking about how cars interact with traffic control systems and pedestrians and cyclists and emergency response vehicles. All of that becomes super important as we look to the internet of things, if you will.

Krish (16:21):
Awesome, and that brings us to the announcements you folks made at the Edge Computing World conference this week. I attended James Thomason’s keynote and a demo on the first day. So can you talk about the announcement you made, and probably about how it is important from the point of view of enterprises modernizing themselves to take advantage of the edge?

John (16:49):
Yeah, so we made a few different announcements. We released the first commercial version of our platform. The EDJX network, distributed cloud services at the edge, is a real thing. You and I used to get together and talk about this stuff a few years ago: man, can you imagine if and when this happens? Well, yesterday’s announcement from us was that it’s here, that you can begin to use it. So that was an important announcement. We increased our funding by another $3 million. And we also added a peer of ours, yours and mine, Joe Weinman, to our advisory board. I have a tremendous amount of respect for Joe and have loved collaborating with him at arm’s length over the last decade-plus, as one of the foremost thought leaders in cloud, and now fog computing, or edge computing.

John (17:45):
And so it’s been fun working with him over the last several years, collaborating around different ideas, which culminated in him formally joining our team. That was part of our announcement. The other, subsequent announcement we made was around a concept we call EdjBlock. EdjBlock is the result of a collaboration between our company, EDJX; Virtual Power Systems, which is now led by Dean Nelson, who is also an advisor to EDJX; and ITRenew, a company run by a fantastic thought leader, Ali Fenn. ITRenew basically has the capability of taking infrastructure that is anywhere from one to three years old out of hyperscale data center environments, and they have the ability to remanufacture it into something else.

John (18:39):
And so we got together and said, “Well, if you think about how much infrastructure has to exist at the edge, at scale, to meet processing demands, it is going to be extremely expensive for customers to roll out.” At retail prices for infrastructure, it’s not tenable. But even worse than that, the amount of infrastructure that has to be manufactured to satisfy the needs at the edge can only mean bad things for the planet. So we thought: what if we could put together a fully integrated edge server, which we call EdjBlock, made entirely out of recycled server and storage infrastructure from hyperscalers, that’s plug-and-play, ready to go? For the cost conscious (I’m Canadian, so I can say this): we can do it for less than half the cost of buying net-new infrastructure, with full warranty support.

John (19:35):
We think that’s a pretty big deal because, again, part of the cruft or the friction in building out your IoT project is figuring out the how. First, before I even build an IoT app, how am I going to deliver infrastructure and cloud services? Whether it’s the factory floor, natural gas pipelines in the middle of West Texas, an unmanned wiring cabinet in my commercial real estate building, or my shopping mall: how am I going to actually build that infrastructure and deliver cloud services before I even build my apps? What we’ve done is shorten that gap with EdjBlock. It’s simple: point, click, deploy, plug and play, cloud delivered where you need it, when you need it, for your developers.

Krish (20:22):
Awesome. I’m looking forward to seeing how that takes off, because cost is a very important factor as people think about using edge computing in their organizations, and putting recycled servers in place is quite interesting to me. Let’s see how it shapes up.

John (20:46):
Krish, we have a form factor that’ll run on 110-volt power and requires no active air conditioning. So you know what, I’m going to ship you one, you’re going to plug it into that home office of yours, and you will be part of the distributed network.

Krish (21:00):
Awesome. In fact, that was going to be my follow-up question: I also wanted to ask how low-power we can go with these hyperconverged servers, so you already answered that. If someone wants to find out more information about your company and the platform, how can they reach you?

John (21:21):
EDJX.io is always a good starting point. There’s lots of documentation, lots of things to read and learn about, so reach out to us. We’re happy to connect and get you onto the platform. If you’re a developer, we can help you accelerate your IoT infrastructure project by getting you some EdjBlocks to build around.

Krish (21:47):
That was a great conversation. And did you notice what I did for the background? Coming into the show, I thought I’d put up a beach picture for the background.

John (21:59):
Yeah. You know that I’m Canadian, but my heart is always in the Caribbean, having spent nearly 12 years there. So I appreciate you playing to my strengths there, Krish!

Krish (22:09):
Awesome. Thanks, John. Thanks for being on the show. It was great having you, and I’m looking forward to having you back in the future so we can take stock of how it’s all going.

John (22:22):
Yeah, absolutely. Thanks for having me, Krish. I appreciate it, man.

Krish (22:26):
Yeah. This was a great conversation with John on distributed infrastructure and how serverless can take away some of the complexities that developers deal with on distributed infrastructure. Next week, we are going to continue on serverless. More on that: I talk to the Knative community about what they’re doing. So if you are interested in hearing what is happening in the Knative community, and you also want to learn more about the governance model they announced recently, check out the Modern Enterprise Tech Show, which we always stream at the same time, 1:00 PM Pacific on Wednesdays. It’s going to be interesting. You can also watch the show at live.tv. Till next week, thank you very much.

About EDJX
EDJX is an edge computing platform that makes it easy to write edge and IoT applications using serverless computing, accelerate content delivery, increase the responsiveness of edge applications, and secure edge data at the source. EDJX helps businesses handle the explosive demand for data processing to serve real-world edge computing applications, including industrial IoT, artificial intelligence, augmented reality, and robotics. Led by cloud industry veterans John Cowan and James Thomason, EDJX is a privately held company based in Raleigh, NC. To learn more about EDJX, visit https://edjx.io and follow EDJX on LinkedIn and Twitter.