Picture yourself in the role of CIO at Roblox in 2017.
At that point, the gaming platform and publishing system that launched in 2005 was growing fast, but its underlying technology was aging, consisting of a single data center in Chicago and a handful of third-party partners, including AWS, all running bare-metal (nonvirtualized) servers. At a time when users had precious little patience for outages, your uptime was stuck at two nines, or roughly 99% (five nines, or 99.999%, is considered optimal).
Unbelievably, Roblox was popular in spite of this, but the company’s leadership knew it couldn’t continue with performance like that, especially as it was rapidly gaining in popularity. The company needed to call in the technology cavalry, which is essentially what it did when it hired Dan Williams in 2017.
Williams has a history of solving these kinds of intractable infrastructure issues, with a background that includes a gig at Facebook between 2007 and 2011, where he worked on the technology to help the young social network scale to millions of users. Later, he worked at Dropbox, where he helped build a new internal network, leading the company’s move away from AWS, a major undertaking involving moving more than 500 petabytes of data.
When Roblox approached him in mid-2017, he jumped at the chance to take on another major infrastructure challenge. The company is still in the midst of the transition to a modern tech stack today, but we sat down with Williams to learn how he put it on the road to a cloud-native, microservices-focused system with its own worldwide network of edge data centers.
At its Think Digital conference, IBM and Red Hat today announced a number of new services that center around 5G edge and AI. That focus doesn't come as a surprise, given that edge and AI are two of the fastest-growing businesses in enterprise computing. Virtually every telecom company is now looking at how to best capitalize on the upcoming 5G rollouts, and most forward-looking enterprises are trying to figure out how best to plan for this for their own needs.
As IBM’s recently minted president Jim Whitehurst told me ahead of today’s announcement, he believes that IBM (in combination with Red Hat) is able to offer enterprises a very differentiated service because, unlike the large hyperscale clouds, IBM isn’t interested in locking these companies into a homogeneous cloud.
“Where IBM is competitively differentiated, is around how we think about helping clients on a journey to what we call hybrid cloud,” said Whitehurst, who hasn’t done a lot of media interviews since he took the new role, which still includes managing Red Hat. “Honestly, everybody has hybrid clouds. I wish we had a more differentiated term. One of the things that’s different is how we’re talking about how you think about an application portfolio that, by necessity, you’re going to have in multiple ways. If you’re a large enterprise, you probably have a mainframe running a set of transactional workloads that probably are going to stay there for a long time because there’s not a great alternative. And there’s going to be a set of applications you’re going to want to run in a distributed environment that need to access that data — all the way out to you running a factory floor and you want to make sure that the paint sprayer doesn’t have any defects while it’s painting a door.”
He argues that IBM, at its core, is all about helping enterprises think about how to best run their workloads from a software, hardware and services perspective. “Public clouds are phenomenal, but they are exposing a set of services in a homogeneous way to enterprises,” he noted, while he argues that IBM is trying to weave all of these different pieces together.
Later in our discussion, he argued that the large public clouds essentially force enterprises to fit their workloads to those clouds’ services. “The public clouds do extraordinary things and they’re great partners of ours, but their primary business is creating these homogeneous services, at massive volumes, and saying ‘if your workloads fit into this, we can run it better, faster, cheaper etc.’ And they have obviously expanded out. They’ve added services. They are now saying we can put a box on-premise, but you’re still fitting into their model.”
On the news side, IBM is launching new services to automate business planning, budgeting and forecasting, for example, as well as new AI-driven tools for building and running automation apps that can handle routine tasks either autonomously or with the help of a human counterpart. The company is also launching new tools for call-center automation.
The most important AI announcement is surely Watson AIOps, though, which is meant to help enterprises detect, diagnose and respond to IT anomalies in order to reduce the effects of incidents and outages for a company.
On the telco side, IBM is launching new tools like the Edge Application Manager, for example, to make it easier to enable AI, analytics and IoT workloads on the edge, powered by IBM’s open-source Open Horizon edge computing project. The company is also launching a new Telco Network Cloud manager built on top of Red Hat OpenShift and the ability to also leverage the Red Hat OpenStack Platform (which remains an important platform for telcos and represents a growing business for IBM/Red Hat). In addition, IBM is launching a new dedicated IBM Services team for edge computing and telco cloud to help these customers build out their 5G and edge-enabled solutions.
Telcos are also betting big on a lot of different open-source technologies that often form the core of their 5G and edge deployments. Red Hat was already a major player in this space, but the acquisition has only accelerated this, Whitehurst argued. “Since the acquisition […] telcos have a lot more confidence in IBM’s capabilities to serve them long term and be able to serve them in mission-critical context. But importantly, IBM also has the capability to actually make it real now.”
A lot of the new telco edge and hybrid cloud deployments, he also noted, are built on Red Hat technologies but built by IBM, and neither company on its own could have brought these to fruition in the same way. Red Hat never had the size, breadth and skills to pull off some of these projects, Whitehurst argued.
Whitehurst also argued that part of the Red Hat DNA that he’s bringing to the table now is helping IBM to think more in terms of ecosystems. “The DNA that I think matters a lot that Red Hat brings to the table with IBM — and I think IBM is adopting and we’re running with it — is the importance of ecosystems,” he said. “All of Red Hat’s software is open source. And so really, what you’re bringing to the table is ecosystems.”
It’s maybe no surprise then that the telco initiatives are backed by partners like Cisco, Dell Technologies, Juniper, Intel, Nvidia, Samsung, Packet, Equinix, Hazelcast, Sysdig, Turbonomics, Portworx, Humio, Indra Minsait, EuroTech, Arrow, ADLINK, Acromove, Geniatech, SmartCone, CloudHedge, Altiostar, Metaswitch, F5 Networks and ADVA.
In many ways, Red Hat pioneered the open-source business model and Whitehurst argued that having Red Hat as part of the IBM family means it’s now easier for the company to make the decision to invest even more in open source. “As we accelerate into this hybrid cloud world, we’re going to do our best to leverage open-source technologies to make them real,” he added.
Nvidia today announced its plans to acquire Cumulus Networks, an open-source-centric company that specializes in helping enterprises optimize their data center networking stack. Cumulus offers both its own Linux distribution for network switches, as well as tools for managing network operations. With Cumulus Express, the company also offers a hardware solution in the form of its own data center switch.
The two companies did not announce the price of the acquisition, but chances are we are talking about a considerable amount, given that Cumulus had raised $134 million since it was founded in 2010.
Mountain View-based Cumulus already had a previous partnership with Mellanox, which Nvidia acquired for $6.9 billion. That acquisition closed only a few days ago. As Mellanox’s Amit Katz notes in today’s announcement, the two companies first met in 2013 and they formed a first official partnership in 2016. Cumulus, it’s worth noting, was also an early player in the OpenStack ecosystem.
Having both Cumulus and Mellanox in its stable will give Nvidia virtually all of the tools it needs to help enterprises and cloud providers build out their high-performance computing and AI workloads in their data centers. While you may mostly think about Nvidia because of its graphics cards, the company has a sizable data center group, which delivered close to $1 billion in revenue in the last quarter, up 43 percent from a year ago. In comparison, Nvidia’s revenue from gaming was just under $1.5 billion.
“With Cumulus, NVIDIA can innovate and optimize across the entire networking stack from chips and systems to software including analytics like Cumulus NetQ, delivering great performance and value to customers,” writes Katz. “This open networking platform is extensible and allows enterprise and cloud-scale data centers full control over their operations.”
Cartesiam, a startup that aims to bring machine learning to edge devices powered by microcontrollers, has launched a new tool for developers who want an easier way to build services for these devices. The new NanoEdge AI Studio is the first IDE specifically designed for enabling machine learning and inferencing on Arm Cortex-M microcontrollers, which power billions of devices already.
As Cartesiam GM Marc Dupaquier, who co-founded the company in 2016, told me, the company works very closely with Arm, given that both have a vested interest in having developers create new features for these devices. He noted that while the first wave of IoT was all about sending data to the cloud, that has now shifted and most companies now want to limit the amount of data they send out and do a lot more on the device itself. And that’s pretty much one of the founding theses of Cartesiam. “It’s just absurd to send all this data — which, by the way, also exposes the device from a security standpoint,” he said. “What if we could do it much closer to the device itself?”
The company first bet on Intel’s short-lived Curie SoC platform. That obviously didn’t work out all that well, given that Intel axed support for Curie in 2017. Since then, Cartesiam has focused on the Cortex-M platform, which worked out for the better, given how ubiquitous it has become. Since we’re talking about low-powered microcontrollers, though, it’s worth noting that we’re not talking about face recognition or natural language understanding here. Instead, using machine learning on these devices is more about making objects a little bit smarter and, especially in an industrial use case, detecting abnormalities or figuring out when it’s time to do preventive maintenance.
Today, Cartesiam already works with many large corporations that build Cortex-M-based devices. The NanoEdge Studio makes this development work far easier, though. “Developing a smart object must be simple, rapid and affordable — and today, it is not, so we are trying to change it,” said Dupaquier. But the company isn’t trying to pitch its product to data scientists, he stressed. “Our target is not the data scientists. We are actually not smart enough for that. But we are unbelievably smart for the embedded designer. We will resolve 99% of their problems.” He argues that Cartesiam reduced time to market by a factor of 20 to 50, “because you can get your solution running in days, not in multiple years.”
One nifty feature of the NanoEdge Studio is that it automatically tries to find the best algorithm for a given combination of sensors and use cases, and the libraries it generates are extremely small, using somewhere between 4K and 16K of RAM.
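To make the "anomaly detection on a tiny device" idea concrete, here is a toy sketch of the pattern such generated libraries follow: learn a baseline from normal sensor readings on the device itself, then flag readings that stray too far from it. This is a simplified Python illustration of the general technique, not Cartesiam's actual API (NanoEdge-generated libraries are C code for Cortex-M), and the sensor data is hypothetical.

```python
import statistics

class TinyAnomalyDetector:
    """Toy stand-in for an on-device anomaly-detection library.

    Learns a baseline (mean and standard deviation) from "normal"
    readings, then flags readings more than k standard deviations away.
    """

    def __init__(self, k=3.0):
        self.k = k
        self.mean = 0.0
        self.std = 1.0

    def learn(self, normal_samples):
        # The learning pass happens on the device; no cloud round-trip,
        # so raw sensor data never leaves the microcontroller.
        self.mean = statistics.fmean(normal_samples)
        self.std = statistics.stdev(normal_samples)

    def is_anomaly(self, sample):
        return abs(sample - self.mean) > self.k * self.std

# Hypothetical vibration amplitudes from a healthy motor.
normal = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.03]

det = TinyAnomalyDetector(k=3.0)
det.learn(normal)

print(det.is_anomaly(1.01))  # a reading within the normal band
print(det.is_anomaly(5.0))   # a spike that would warrant maintenance
```

The real value of a tool like NanoEdge Studio is picking and tuning a far better model than this per sensor combination, but the deployment shape is the same: a small learn/detect library running entirely on the microcontroller.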
NanoEdge Studio for both Windows and Linux is now generally available. Pricing starts at €690/month for a single user or €2,490/month for teams.
On Anbox Cloud, Canonical's newly launched platform, Android becomes the guest operating system that runs containerized applications. This opens up a range of use cases, from bespoke enterprise apps to cloud gaming solutions.
The result is similar to what Google does with Android apps on Chrome OS, though the implementation is quite different and is based on the LXD container manager, as well as a number of Canonical projects like Juju and MAAS for provisioning the containers and automating the deployment. “LXD containers are lightweight, resulting in at least twice the container density compared to Android emulation in virtual machines – depending on streaming quality and/or workload complexity,” the company points out in its announcement.
Anbox itself, it’s worth noting, is an open-source project that came out of Canonical and the wider Ubuntu ecosystem. Launched by Canonical engineer Simon Fels in 2017, Anbox runs the full Android system in a container, which in turn allows you to run Android applications on any Linux-based platform.
What’s the point of all of this? Canonical argues that it allows enterprises to offload mobile workloads to the cloud and then stream those applications to their employees’ mobile devices. But Canonical is also betting on 5G to enable more use cases, less because of the added bandwidth and more because of the low latencies it enables.
“Driven by emerging 5G networks and edge computing, millions of users will benefit from access to ultra-rich, on-demand Android applications on a platform of their choice,” said Stephan Fabel, Director of Product at Canonical, in today’s announcement. “Enterprises are now empowered to deliver high performance, high density computing to any device remotely, with reduced power consumption and in an economical manner.”
Outside of the enterprise, one of the use cases that Canonical seems to be focusing on is gaming and game streaming. A server in the cloud is generally more powerful than a smartphone, after all, though that gap is closing.
Canonical also cites app testing as another use case, given that the platform would allow developers to test apps on thousands of Android devices in parallel. Most developers, though, prefer to test their apps on real — not emulated — devices, given the fragmentation of the Android ecosystem.
Anbox Cloud can run in the public cloud, though Canonical is specifically partnering with edge computing specialist Packet to host it on the edge or on-premise. Silicon partners for the project are Ampere and Intel.