Posted on

How Roblox completely transformed its tech stack

And now has full control of its technological destiny

Picture yourself in the role of CIO at Roblox in 2017.

At that point, the gaming platform and publishing system, which launched in 2005, was growing fast, but its underlying technology was aging: a single data center in Chicago plus a collection of third-party partners, including AWS, all running bare-metal (nonvirtualized) servers. At a time when users have precious little patience for outages, your uptime hovered around two nines, or roughly 99% (five nines, or 99.999%, is considered optimal).
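For a sense of what those availability numbers mean in practice, here is a quick back-of-the-envelope sketch (our arithmetic, not a figure from the interview) of how much annual downtime each level of "nines" allows:

```python
# Annual downtime permitted at each availability level ("nines").
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines, availability in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines ({availability * 100:g}% uptime): "
          f"~{downtime_min:,.0f} minutes of downtime per year "
          f"(~{downtime_min / 60:.1f} hours)")
```

Two nines allows roughly 88 hours of outages a year; five nines allows only about five minutes, which is the gap Roblox set out to close.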

Remarkably, Roblox remained popular in spite of this, but the company’s leadership knew performance like that couldn’t continue as the platform kept adding users. The company needed to call in the technology cavalry, which is essentially what it did when it hired Dan Williams in 2017.

Williams has a history of solving these kinds of intractable infrastructure problems. His background includes a stint at Facebook between 2007 and 2011, where he worked on the technology that helped the young social network scale to millions of users. Later, he worked at Dropbox, where he helped build a new internal network and led the company’s move away from AWS, a major undertaking that involved migrating more than 500 petabytes of data.

When Roblox approached him in mid-2017, he jumped at the chance to take on another major infrastructure challenge. While the transition to a modern tech stack is still under way today, we sat down with Williams to learn how he put the company on the road to a cloud-native, microservices-focused system backed by its own worldwide network of edge data centers.

Scoping the problem

Read More

Posted on

Smart Home: On the Rise?

Amid the pandemic, many are wondering if the use of technology is going to continue to rise. In many instances, the answer is yes. Such is the case with smart homes.

A new report points to the value of incorporating smart-home technology. LexisNexis Risk Solutions released an insurance claims study revealing that in-line water shutoff systems correlate with a 96% decrease in water-related claims events.

The study measured changes in the number and severity of water-related home insurance claims for homes equipped with the Flo by Moen Smart Water Shutoff device against a control group of homes without the device in the same geographic area, comparing the year before installation with the year after.

Here is what it found: in the two years before installation, the 2,306 homes that went on to install the Flo device had an average claims severity far greater than that of the control group. One year after installation, claims severity in those homes had dropped by 72%, an indication that smart water shutoff systems are working.

The key takeaway is that water-leak mitigation, and the time and money it saves, could help drive adoption of these smart-home devices, ultimately reducing loss costs, improving the customer experience, and more.

This is in line with other reports that the smart-home market, in general, is on the rise. Mordor Intelligence says the market was valued at $64.6 billion in 2019 and is expected to reach $246.42 billion by 2025, a compound annual growth rate of roughly 25%, even amid a pandemic. The research points to growing demand for security and wireless controls, and further advances in the IoT (Internet of Things) have driven down the prices of sensors and processors, which is expected to fuel automation in the home.
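The implied growth rate can be checked with a little arithmetic; the figures below are Mordor Intelligence’s, the calculation is ours:

```python
# Compound annual growth rate (CAGR) implied by the Mordor Intelligence forecast.
start_value = 64.6    # smart-home market value in 2019, $ billions
end_value = 246.42    # forecast value for 2025, $ billions
years = 2025 - 2019   # six-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 25%
```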

While there is much to consider when it comes to smart-home technologies, research points to a continued rise in the years to come.


Read More

Posted on

Effort to Fund National Research Cloud for AI Advances


A national research cloud to support US efforts to maintain and grow a lead in AI is at the proposal stage in Congress. (GETTY IMAGES)

By AI Trends Staff

A bipartisan group of legislators in the US House and Senate proposed a bill in the first week of June that would direct the federal government to develop a national cloud computing infrastructure for AI research.

This idea originated with a proposal from Stanford University in 2019.

The legislation, introduced by Sens. Rob Portman, R-Ohio, and Martin Heinrich, D-N.M., is called the National Cloud Computing Task Force Act. It would convene a mix of technical experts from academia, industry, and government to plan how the US should build, deploy, govern, and maintain a national research cloud for AI.

“With China focused on toppling the United States’ leadership in AI, we need to redouble our efforts with a sustained commitment to the best and brightest by developing a national research cloud to ensure our technical researchers get the tools they need to succeed,” stated Portman, according to an account in Nextgov. “By democratizing access to computing power we ensure that any American with computer science talent can pursue their good ideas.”

“Artificial Intelligence is likely to be one of the most transformative technologies of all time. If we defer its development to other nations, important ethical, safety, and privacy principles will be at risk, which not only harms the United States, but also the international community as a whole,” stated Sen. Heinrich.

A companion bill was introduced the same week in the House, filed by Reps. Anna Eshoo, D-Calif., and Anthony Gonzalez, R-Ohio.

Original Suggestion for National Research Cloud From Stanford

A project to support a National Research Cloud was suggested by John Etchemendy, co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and Fei-Fei Li, also a co-director of HAI and a computer science professor at Stanford. Etchemendy retired as provost of Stanford in 2017 after 17 years in the position. Li was the director of Stanford’s AI Lab from 2013 to 2018; she served as VP and Chief Scientist of AI/ML at Google Cloud during a sabbatical from January 2017 to September 2018.

In an update published on the HAI blog at Stanford University in March, the authors outlined how the advance of AI by US companies is a direct outgrowth of federally funded university research, furthered by exceptional R&D in the private sector. Then a warning: “Today, the research prowess that’s powered decades of growth and prosperity is at risk.”

The Stanford Institute for Human-Centered Artificial Intelligence (HAI), founded in March 2019, aims to advance AI practice to improve the human condition.

They point to two primary reasons: university researchers lack access to computing power, and meaningful datasets are scarce. These two resources are “prerequisites for advanced AI research,” the authors stated.

Today’s AI, the researchers note, requires massive amounts of computing power, huge volumes of data, and deep expertise to train the gigantic machine learning models underlying the most advanced research. “There is a wide gulf between the few companies that can afford these resources and everyone else,” the authors stated. As an example, Google said it spent $1.5 million in compute costs to train the Meena chatbot announced earlier this year. “Such costs for a single research project are out of reach for most corporations, let alone for academic researchers,” the authors stated.

Meanwhile, the large datasets required to train AI algorithms are mostly controlled by industry or government, hobbling academic researchers, who are important partners in the American research enterprise.

Here is the call: “It is for this reason that we are calling for the creation of a US Government-led task force from academia, government, and industry to establish a National Research Cloud. Support from Congress and the President could have a meaningful impact on American innovation through the creation of such a task force. Indeed, we believe that this could be one of the most strategic research investments the federal government has ever made.”

After HAI launched the initiative last year, the presidents and provosts of 22 universities nationwide signed a joint letter to the President and Congress in support of the effort. But HAI held off on issuing the letter until now, as the nation was absorbed in responding to the pandemic.

The co-directors of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) are John Etchemendy and Fei-Fei Li. He retired as provost in 2017; she is a computer science professor at Stanford. (Drew Kelly for Stanford Institute for Human-Centered Artificial Intelligence)

Eric Schmidt Suggests Building on CloudBank of the NSF

Former Google CEO Eric Schmidt put his support behind a national cloud effort at a hearing of the House Science, Space and Technology committee in late January. The hearing was to consider actions the US could take to maintain and extend its technological leadership in the world.

Now the chair of the Defense Innovation Board and the National Security Commission on Artificial Intelligence, Schmidt spoke about the CloudBank program launched last year by the National Science Foundation to provide public cloud allocations and associated training to support projects.

The committee considered the importance of collaboration between government, industry, and academia in the effort to sustain and grow US competitiveness. Schmidt suggested that CloudBank “could expand into a nation-wide National Research Cloud,” according to a recent account in Meritalk.

“Congress should also explore tax incentives for companies to share data and provide computing capabilities to research institutions, and accelerate efforts to make government datasets more widely available,” Schmidt stated.

Schmidt offered the following recommendations to boost US tech competitiveness:

  • More Federal research and development funding. “For AI, the scale of investment should be multiple times current levels,” he stated, adding, “Simply put, we need to place big bets.”
  • Federal investment in nationwide infrastructure. That should include a secure alternative to 5G network equipment made by China-based Huawei, investing in high-performance computing, and emulating previous national models like the National Nanotechnology Initiative.
  • Boosting public confidence in advanced technology. “If we do not earn the public’s trust in the benefits of new technologies, especially AI, doubts will hold us back,” Schmidt stated.

The committee seemed to support the concept of public-private-academic partnerships to achieve the recommended outcomes.

Read the source articles at Nextgov, on the HAI blog at Stanford University, and in Meritalk.

Read More

Posted on

April: IoT Connectivity Leads to Better Road Safety

As with any relatively new technology, there are risks to its use. Lynch says the first risk to consider is the privacy of individuals using connected vehicles, especially when it comes to location tracking. Another concern is the security of these systems. “Wireless communications opens a vehicle to the world,” he says. “If cybersecurity is not sufficiently robust, some bad people could access the vehicle and jeopardize its safety.”

According to Na Jiao, technology analyst at IDTechEx, self-driving technologies and connected and autonomous vehicles add another layer of vulnerability to cyberattacks. The concern is not only the vehicles themselves, but also the environment in which the vehicles operate. “The threats to autonomous cars can come through any system connected to the vehicle’s sensors, communication applications, processors, control systems, and external inputs from other cars, infrastructure, and mapping and GPS data systems,” she says.

Chris Greer, director of the NIST (National Institute of Standards and Technology) Smart Grid and Cyber-Physical Systems Program, says that by giving vehicles greater and more accurate awareness of their surroundings, technology can indeed produce a roadway environment in which both drivers and driverless vehicles make better decisions, but there is also risk. “The key to managing risk is to ensure that we’re able to measure those risks reliably and accurately,” Greer says.

As long as the industry weighs these risks and continuously looks for ways to mitigate them, it’s safe to assume connectivity will revolutionize road safety. It may even shift the transportation paradigm.

Read More

Posted on

Russia to Loan $217mn to Moldova for Infrastructure Projects

Moldovan President Igor Dodon said on his Facebook page on Friday that his country has signed an agreement to receive a loan of 200 million euros ($217.5 million) from Russia, Gazeta.ru reported.

“Moldovan Ambassador to Russia Andrei Neguta signed a loan agreement with top officials of the Russian Finance Ministry. The document envisages issuing the first installment of the 200-million-euro loan to our country,” the president said, adding that “it was the first concrete financial assistance” that Moldova received from its partners in connection with the current crisis.

“The funds will be directed to Moldova’s state budget, to cover the country’s needs, including infrastructure projects,” he said.

Dodon thanked Russian partners and personally Russian President Vladimir Putin for the aid.

According to Dodon, the text of the agreement, which has yet to be ratified by the Moldovan parliament, says that “the first installment of 100 million euro [will be provided] no later than 30 days since the agreement enters force, <…> the second installment of 100 million euro – no later than October 31, 2020.”

The money will be allocated “for budgetary support goals” for a period of 10 years, with an annual interest rate of 2%.

Moldovan Prime Minister Ion Chicu said that due to the current crisis, triggered by the novel coronavirus outbreak, the country’s state budget deficit is expected to grow to approximately 825 million euro ($897 million), or 7.5% of the country’s GDP.

Read More

Posted on

AIOps is Marching into the Mainstream, Replacing IT Ops


The growth in complexity of IT operations in the era of hybrid cloud computing has given rise to the growth of AIOps, using AI to manage the computing infrastructure. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

Artificial intelligence for IT operations, or AIOps, refers to the application of machine learning and data science to IT operations. AIOps systems monitor the huge volumes of log and performance data typically generated in a large enterprise to gain visibility into dependencies and solve problems.

An AIOps platform should include these three capabilities, suggests a recent report in TechTarget:

Automate routine practices. These include user requests and non-critical IT system alerts. For example, a help desk system can process and fulfill a user request to provision a resource automatically. The system is also able to evaluate alerts and determine which ones require action, and which are based on metrics and supporting data within normal parameters.

Recognize serious issues faster and with greater accuracy than humans. The system should be able to detect behavior out of the norm, especially on critical servers, by processing volumes of data not possible for humans to monitor on their own.

Streamline the interactions between data center groups and teams. AIOps provides each functional IT group with relevant data and perspectives. The AIOps system learns which analysis and monitoring data, drawn from the large pool of resource metrics, to show each group or team.

AIOps is suited to complex IT operations typical of large enterprises, involving hybrid cloud platforms for example. Data comes from multiple sources including log files, metrics, monitoring tools and help desk ticketing systems. Big data technology is used to aggregate and organize the output into a useful form. Analytics techniques are used to interpret the raw data and spot trends and patterns that can identify and isolate problems, including capacity issues.
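As a rough illustration of the kind of baseline-and-deviation analysis described above, here is a minimal sketch; it is not any vendor’s actual implementation, and the metric, window size, and threshold are invented for the example:

```python
from statistics import mean, stdev

def find_anomalies(samples, window=30, threshold=3.0):
    """Flag metric samples that deviate sharply from a rolling baseline.

    `samples` is a list of (timestamp, value) pairs for a single metric,
    e.g. CPU utilization on a critical server. The baseline is the mean and
    standard deviation of the preceding `window` samples; anything more than
    `threshold` standard deviations away is reported as an anomaly.
    """
    anomalies = []
    for i in range(window, len(samples)):
        ts, value = samples[i]
        history = [v for _, v in samples[i - window:i]]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) > threshold * sigma:
            anomalies.append((ts, value, mu))
    return anomalies

# Example: a steady CPU metric that suddenly spikes.
series = [(t, 40 + (t % 3)) for t in range(60)] + [(60, 95)]
for ts, value, baseline in find_anomalies(series):
    print(f"t={ts}: value {value} deviates from baseline ~{baseline:.1f}")
```

A real platform does this across thousands of metrics at once and correlates the resulting alerts, but the underlying idea is the same: learn what normal looks like, then flag deviations.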

Algorithms in the system codify the organization’s IT expertise, business policies and goals, so the platform can deliver the most desirable outcomes or actions. The algorithms are used to prioritize security-related events and teach the platform what application performance decisions are appropriate. These algorithms form the foundation for machine learning; they establish a baseline of normal behaviors and activity, and they can learn and evolve as the environment changes over time.

Automation enables the AIOps tools to take action, triggered by the outcomes of the analytics and machine learning. A tool’s predictive analytics and ML may determine that an application needs more storage, for example. An automated process can then be initiated to add storage in increments consistent with the rules embedded in the algorithms.
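To make that concrete, here is a hedged sketch of the kind of rule-driven remediation described above; the `provision_storage` call is a hypothetical placeholder, not a real cloud or vendor API:

```python
# Sketch of rule-driven remediation: grow a volume in fixed increments when
# predicted usage crosses a policy threshold. `provision_storage` stands in
# for whatever provisioning workflow an AIOps platform would actually invoke.

STORAGE_INCREMENT_GB = 100   # policy: grow in 100 GB steps
USAGE_THRESHOLD = 0.80       # policy: act when predicted usage exceeds 80%
MAX_CAPACITY_GB = 4000       # policy: never grow beyond this cap

def provision_storage(volume_id: str, extra_gb: int) -> None:
    # Placeholder for the platform's real provisioning action.
    print(f"[action] adding {extra_gb} GB to volume {volume_id}")

def remediate(volume_id: str, capacity_gb: int, predicted_usage_gb: float) -> int:
    """Apply the growth policy and return the new capacity."""
    while (predicted_usage_gb / capacity_gb > USAGE_THRESHOLD
           and capacity_gb + STORAGE_INCREMENT_GB <= MAX_CAPACITY_GB):
        provision_storage(volume_id, STORAGE_INCREMENT_GB)
        capacity_gb += STORAGE_INCREMENT_GB
    return capacity_gb

# Example: predictive analytics forecasts 900 GB of usage on a 1 TB volume.
new_capacity = remediate("vol-app-01", capacity_gb=1000, predicted_usage_gb=900)
print(f"new capacity: {new_capacity} GB")
```

The fixed increment and the cap keep the automation inside human-set guardrails; anything beyond them would be escalated to an operator through the dashboards described next.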

Visualization tools deliver dashboards, reports, graphics and other output so that human operators and managers can see the changes and events in the environment. These typically allow human operators to take actions that require decision-making capabilities beyond those of the AIOps software.

The individual technologies underlying AIOps are fairly mature; the field is now poised to enter the next phase, combining them for practical use. The amount of time and effort needed to implement, maintain, and manage an AIOps platform can be substantial, and results can vary.

Gartner Coined AIOps Term

The origin of the term AIOps is credited to Gartner, the analyst firm, which coined it in 2016. In a recent report on How to Get Started With AIOps, Gartner analyst Padraig Byrne explained the origin. “IT operations are challenged by the rapid growth in data volumes generated by IT infrastructure and applications that must be captured, analyzed and acted on,” he stated. “Coupled with the reality that IT operations teams often work in disconnected silos, this makes it challenging to ensure that the most urgent incident at any given time is being addressed.”

Padraig Byrne, Senior Director and Analyst, Gartner

Gartner sees AIOps functionality as partially replacing primary IT operations functions such as availability and performance monitoring, event correlation and analysis, and IT service management and automation. Gartner predicts that the share of large enterprises relying exclusively on AIOps and digital experience monitoring tools to monitor applications and infrastructure will rise from 5% in 2018 to 30% in 2023, and it estimates today’s market size at $300 million to $500 million.

According to Byrne, the long-term impact of AIOps on IT operations will be transformative. “IT leaders are enthusiastic about the promise of applying AI to IT operations, but as with moving a large object, it will be necessary to overcome inertia to build velocity,” stated Byrne. “The good news is that AI capabilities are advancing, and more real solutions are becoming available every day.”

Moogsoft Capitalizing on Trend

Among vendors competing in the AIOps space is Moogsoft, co-founded in 2012 by CEO Phil Tee, previously chairman and CTO of Riversoft, which eventually became part of the IBM Tivoli Netcool suite. He was also the co-founder and CTO of Micromuse, a supplier of network management software.

Phil Tee, CEO and co-founder of Moogsoft

“I have been in the IT operations market for 30 years,” said Tee in an interview with AI Trends. Moogsoft was formed to help companies keep their digital infrastructure available. “Our customers, including Verizon, have billions invested in their infrastructure. The particular nuance today is that the environment is so complex and so large, we need to bring in AI and machine learning as part of the set of tools to help. The complexity is not possible for the human mind to comprehend.”

Fundamentally, the firm’s software, rebranded as AIOps after Gartner coined the term, is an operations tool used by IT professionals responsible for delivering more services and keeping the infrastructure healthy. The software was written from scratch and has 50 patents pending on it, Tee said.

It is a cloud-resident software service with volume-based pricing tied to the number of events being managed. The company has raised approximately $100 million from investors including Goldman Sachs, and it is close to being profitable, Tee said.

Customers include American Airlines, Verizon Media, and GoDaddy. “The market is growing and maturing,” Tee said. “In two or three years, it will have completely supplanted the traditional IT operations base,” served today by vendors such as IBM, BMC, and CA Technologies (now owned by Broadcom).

A case study published by Moogsoft on GoDaddy outlined how the company needed to replace its IT operations infrastructure after making a commitment to Amazon AWS as its primary platform. GoDaddy had been relying on CA Spectrum, a legacy event management system. Customers were experiencing issues before the GoDaddy operational teams were aware of them, not a best practice.

“We came to Moogsoft because we’re experiencing significant business growth and our legacy event management systems couldn’t scale, nor could they accommodate new data sources like AWS,” stated Felix Gorodishter, Principal Architect for Monitoring at GoDaddy. “Moogsoft was initially deployed as a layer on top of these systems, but now it is completely replacing the legacy systems.”

The migration enables GoDaddy to deploy software globally in minutes. The company expects a 50 percent reduction in the mean time to resolve issues, thanks to earlier detection of incidents and fewer support escalations. GoDaddy’s existing monitoring tools and AWS cloud tools have now been integrated with Moogsoft.

Read the source articles and reports at TechTarget, Gartner, and Moogsoft, including Moogsoft’s GoDaddy case study.

Read More

Posted on

Who’s Eyeing Quantum Clouds

There has been a lot of talk of late that the cloud has become the resource to turn to when a company needs large amounts of digital storage, constantly updated SaaS (software-as-a-service), or high-performance computing capabilities. Couple this with the expansion of AI (artificial intelligence) in applications, even smaller …

Read More

Posted on

Finland-Russia Railway Transport Halted Due to Virus Outbreak

Finnish railway operator VR said on Tuesday that passenger train services between Finland and Russia will be temporarily halted amid the coronavirus outbreak, following the two countries’ decision to limit travel across the border, RIA Novosti reports.
“According to Russia’s decision, foreigners won’t be able to get there after 23:00 …

Read More

Posted on

These gorgeous designs guard against flooding

Architect Ruurd Gietema lives in the Netherlands, a country perennially trying to hold back the sea. He said his homeland has paid a price for the high dikes and tall dunes it built to thwart rising waters and prevent flooding. “Protection was a high priority, but landscapes were erased,” Gietema …

Read More

Posted on

Overcoming Hurdles to Autonomous Cities

The technology to power connected cities exists today—and continued growth is predicted. Will all our cities soon be connected? Or do hurdles stand in the way? Perhaps one of the biggest challenges will be overcoming regulatory hurdles that could slow the progress down. Technavio says the autonomous bus market, …

Read More