Fylamynt, a new service that helps businesses automate their cloud workflows, today announced both the official launch of its platform as well as a $6.5 million seed round. The funding round was led by Google’s AI-focused Gradient Ventures fund. Mango Capital and Point72 Ventures also participated.
At first glance, the idea behind Fylamynt may sound familiar. Workflow automation has become a pretty competitive space, after all, and the service helps developers connect their various cloud tools to create repeatable workflows. We’re not talking about your standard IFTTT- or Zapier-like integrations between SaaS products, though. The focus of Fylamynt …
The European Union said today that it wants to work with US counterparts on a common approach to tech governance — including pushing to standardize rules for applications of technologies like AI and pushing big tech to be more responsible for what their platforms amplify.
EU lawmakers are anticipating rebooted transatlantic relations under the incoming administration of president-elect Joe Biden.
The Commission has published a new EU-US agenda with the aim of encouraging what it bills as “global cooperation — based on our common values, interests and global influence” in a number of areas, from tackling the coronavirus pandemic to addressing climate …
AWS today announced a new database product that is clearly meant to go after Microsoft’s SQL Server and make it easier — and cheaper — for SQL Server users to migrate to the AWS cloud. The new service is Babelfish for Aurora PostgreSQL. The tagline AWS CEO Andy Jassy used for this service in his re:Invent keynote today is probably telling: “Stop paying for SQL Server licenses you don’t need.” And to show how serious it is about this, the company is even open-sourcing the tool.
What Babelfish does is provide a translation layer for SQL Server’s proprietary …
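To make the translation-layer idea concrete, here is a toy sketch in Python. This is not AWS's implementation — Babelfish operates at the wire protocol and SQL dialect level inside Aurora — but it illustrates the kind of mapping involved: rewriting a few common T-SQL idioms into their PostgreSQL equivalents.

```python
# Toy illustration of dialect translation, the core idea behind a layer
# like Babelfish. NOT AWS's implementation -- just a sketch mapping a few
# T-SQL constructs onto PostgreSQL syntax.
import re

def tsql_to_postgres(query: str) -> str:
    """Rewrite a handful of common T-SQL idioms into PostgreSQL syntax."""
    # SELECT TOP n ...  ->  SELECT ... LIMIT n
    m = re.match(r"SELECT\s+TOP\s+(\d+)\s+(.*)", query, re.IGNORECASE | re.DOTALL)
    if m:
        query = f"SELECT {m.group(2).rstrip(';')} LIMIT {m.group(1)}"
    # GETDATE() -> NOW();  ISNULL(...) -> COALESCE(...)
    query = re.sub(r"GETDATE\(\)", "NOW()", query, flags=re.IGNORECASE)
    query = re.sub(r"ISNULL\(", "COALESCE(", query, flags=re.IGNORECASE)
    return query

print(tsql_to_postgres("SELECT TOP 5 name FROM users;"))
# -> SELECT name FROM users LIMIT 5
```

A real compatibility layer handles far more — stored procedures, data types, the TDS network protocol — but the principle is the same: the application keeps speaking SQL Server's dialect while PostgreSQL runs underneath.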
3D-rendered faces are a big part of any major movie or game now, but the task of capturing and animating them in a natural way can be a tough one. Disney Research is working on ways to smooth out this process, among them a machine learning tool that makes it much easier to generate and manipulate 3D faces without dipping into the uncanny valley.
Of course this technology has come a long way from the wooden expressions and limited details of earlier days. High-resolution, convincing 3D faces can be animated quickly and well, but the subtleties of human expression are …
URUMQI, China — At the end of a desolate road rimmed by prisons, deep within a complex bristling with cameras, American technology is powering one of the most invasive parts of China’s surveillance state.
The computers inside the complex, known as the Urumqi Cloud Computing Center, are among the world’s most powerful. They can watch more surveillance footage in a day than one person could in a year. They look for faces and patterns of human behavior. They track cars. They monitor phones.
The Chinese government uses these computers to watch untold numbers of people in Xinjiang, a western region of China where Beijing has unleashed a campaign of surveillance and suppression in the name of combating terrorism.
Chips made by Intel and Nvidia, the American semiconductor companies, have powered the complex since it opened in 2016. By 2019, at a time when reports said that Beijing was using advanced technology to imprison and track Xinjiang’s mostly Muslim minorities, new U.S.-made chips helped the complex join the list of the world’s fastest supercomputers. Both Intel and Nvidia say they were unaware of what they called misuse of their technology.
Powerful American technology and its potential misuse cut to the heart of the decisions the Biden administration must face as it tackles the country’s increasingly bitter relationship with China. The Trump administration last year banned the sale of advanced semiconductors and other technology to Chinese companies implicated in national security or human rights issues. A crucial early question for Mr. Biden will be whether to firm up, loosen or rethink those restrictions.
Some figures in the technology industry argue that the ban went too far, cutting off valuable sales of American product with plenty of harmless uses and spurring China to create its own advanced semiconductors. Indeed, China is spending billions of dollars to develop high-end chips.
By contrast, critics of the use of American technology in repressive systems say that buyers exploit workarounds and that the industry and officials should track sales and usage more closely.
Companies often point out that they have little say over where their products end up. The chips in the Urumqi complex, for example, were sold by Intel and Nvidia to Sugon, the Chinese company backing the center. Sugon is an important supplier to Chinese military and security forces, but it also makes computers for ordinary companies.
That argument is not good enough anymore, said Jason Matheny, the founding director of Georgetown University’s Center for Security and Emerging Technology and a former U.S. intelligence official.
“Government and industry need to be more thoughtful now that technologies are advancing to a point where you could be doing real-time surveillance using a single supercomputer on millions of people potentially,” he said.
There is no evidence the sales of Nvidia or Intel chips, which predate the Trump order, broke any laws. Intel said it no longer sells semiconductors for supercomputers to Sugon. Still, both continue to sell chips to the Chinese firm.
The Urumqi complex’s existence and use of U.S. chips are no secret, and there was no shortage of clues that Beijing was using it for surveillance in Xinjiang. Since 2015, when the complex began development, state media and Sugon had boasted of its ties to the police.
In five-year-old marketing materials distributed in China, Nvidia promoted the Urumqi complex’s capabilities and boasted that the “high capacity video surveillance application” there had won customer satisfaction.
Nvidia said that the materials referred to older versions of its products and that video surveillance then was a normal part of the discussion around “smart cities,” an effort in China to use technology to solve urban issues like pollution, traffic and crime. A spokesman for Nvidia said the company had no reason to believe its products would be used “for any improper purpose.”
The spokesman added that Sugon “hasn’t been a significant Nvidia customer” since last year’s ban. He also said that Nvidia had not provided technical assistance for Sugon since then.
A spokesman for Intel, which still sells Sugon lower-end chips, said it would restrict or stop business with any customer that it found had used its products to violate human rights.
Publicity over Intel’s China business appears to have had an impact within the company. One business unit last year drafted ethics guidelines for its technology’s A.I. applications, according to three people familiar with the matter who asked not to be named because Intel had not made the guidelines public.
Sugon said in a statement that the complex was originally aimed at tracking license plates and managing other smart city tasks, but its systems proved ineffective and were switched to other uses. But as recently as September, official Chinese government media described the complex as a center for processing video and images for managing cities.
Advances in technology have given the authorities around the world substantial power to watch and sort people. In China, leaders have pushed technology to an even greater extreme. Artificial intelligence and genetic testing are used to screen people to see whether they are Uighurs, one of Xinjiang’s minority groups. Chinese companies and the authorities claim their systems can detect religious extremism or opposition to the Communist Party.
The Urumqi Cloud Computing Center — also sometimes called the Xinjiang Supercomputing Center — broke onto the list of the world’s fastest computers in 2018, ranking No. 221. In November 2019, new chips helped push its computer to No. 135.
Two data centers run by Chinese security forces sit next door, a way to potentially cut down on lag time, according to experts. Also nearby are six prisons and re-education centers.
When a New York Times reporter tried to visit the center in 2019, he was followed by plainclothes police officers. A guard turned him away.
The official Chinese media and Sugon’s previous statements depict the complex as a surveillance center, among other uses. In August 2017, local officials said that the center would support a Chinese police surveillance project called Sharp Eyes and that it could search 100 million photos in a second. By 2018, according to company disclosures, its computers could connect to 10,000 video feeds and analyze 1,000 simultaneously, using artificial intelligence.
“With the help of cloud computing, big data, deep learning and other technologies, the intelligent video analysis engine can integrate police data and applications from video footage, Wi-Fi hot spots, checkpoint information, and facial recognition analysis to support the operations of different departments” within the Chinese police, Sugon said in a 2018 article posted to an official social media account.
On the occasion of a visit by local Communist Party leaders to the complex that year, it wrote on its website that the computers had “upgraded the thinking from after-the-fact tracking to before-the-fact predictive policing.”
In Xinjiang, predictive policing often serves as shorthand for pre-emptive arrests aimed at behavior deemed disloyal or threatening to the party. That could include a show of Muslim piety, links to family living overseas, owning two phones or not owning a phone at all, according to Uighur testimony and official Chinese policy documents.
Technology helps sort vast amounts of data that humans cannot process, said Jack Poulson, a former Google engineer and founder of the advocacy group Tech Inquiry.
“When you have something approaching a surveillance state, your primary limitation is on your ability to identify events of interest within your feeds,” he said. “The way you scale up your surveillance is through machine learning and large scale A.I.”
The Urumqi complex went into development before reports of abuses in Xinjiang were widespread. By 2019, governments around the world were protesting China’s conduct in Xinjiang. That year, the Sugon computer appeared on the international supercomputing rankings, using Intel Xeon Gold 5118 processors and Nvidia Tesla V100 advanced artificial intelligence chips.
It is not clear how or whether Sugon will obtain chips powerful enough to keep the Urumqi complex on that list. But lesser technology typically used to run harmless tasks can also be used for surveillance and suppression. Customers can also use resellers in other countries or chips made by American companies overseas.
Last year, the police in two Xinjiang counties, Yanqi and Qitai, purchased surveillance systems that ran on lower-level Intel chips, according to government procurement documents. The Kizilsu Kyrgyz Autonomous Prefecture public security bureau in April purchased a computing platform that used servers running less-powerful Intel chips, according to the documents, though the agency had been placed on a Trump administration blacklist last year for its involvement in surveillance.
China’s dependence on American chips has, for now, helped the world push back, said Maya Wang, a China researcher with Human Rights Watch.
“I’m afraid in a few years’ time, Chinese companies and government will find their own way to develop chips and these capabilities,” Ms. Wang said. “Then there will be no way to get a handle on trying to stop these abuses.”
Paul Mozur reported from Urumqi, China, and Don Clark from San Francisco.
You wait ages for foot scanning startups to help with the tricky fit issue that troubles online shoe shopping — and then two come along at once. Launching today in time for Black Friday sprees is Xesto, which, like Neatsy (covered earlier today), uses the iPhone’s TrueDepth camera to generate individual 3D foot models for shoe size recommendations.
The Canadian startup hasn’t always been focused on feet. It has a long-standing research collaboration with the University of Toronto, alma mater of its CEO and co-founder Sophie Howe (its other co-founder and chief scientist, Afiny Akdemir, is also pursuing a Math PhD there) — and was actually founded back in 2015 to explore business ideas in human computer interaction.
But Howe tells us it moved into mobile sizing shortly after the 2017 launch of the iPhone X — which added a 3D depth camera to Apple’s smartphone. Since then Apple has added the sensor to additional iPhone models, pushing it within reach of a larger swathe of iOS users. So you can see why startups are spying a virtual fit opportunity here.
“This summer I had an aha! moment when my boyfriend saw a pair of fancy shoes on a deep discount online and thought they would be a great gift. He couldn’t remember my foot length at the time, and knew I didn’t own that brand so he couldn’t have gone through my closet to find my size,” says Howe. “I realized in that moment shoes as gifts are uncommon because they’re so hard to get correct because of size, and no one likes returning and exchanging gifts. When I’ve bought shoes for him in the past, I’ve had to ruin the surprise by calling him – and I’m not the only one. I realized in talking with friends this was a feature they all wanted without even knowing it… Shoes have such a cult status in wardrobes and it is time to unlock their gifting potential!”
Howe slid into this TechCrunch writer’s DMs with the eye-catching claim that Xesto’s foot-scanning technology is more accurate than Neatsy’s — sending a Xesto scan of her foot compared to Neatsy’s measure of it to back up the boast. (Aka: “We are under 1.5 mm accuracy. We compared against Neatsy right now and they are about 1.5 cm off of the true size of the app,” as she put it.)
Another big difference is Xesto isn’t selling any shoes itself. Nor is it interested in just sneakers; it’s shoe-type agnostic. If you can put it on your feet, it wants to help you find the right fit, is the idea.
Right now the app is focused on the foot scanning process and the resulting 3D foot models — showing shoppers their feet in a 3D point cloud view and a photorealistic view, as well as providing granular foot measurements.
There’s also a neat feature that lets you share your foot scans so, for example, a person who doesn’t have their own depth sensing iPhone could ask to borrow a friend’s to capture and take away scans of their own feet.
Helping people who want to be bought (correctly fitting) shoes as gifts is the main reason they’ve added foot scan sharing, per Howe — who notes shoppers can create and store multiple foot profiles on an account “for ease of group shopping”.
“Xesto is solving two problems: Buying shoes [online] for yourself, and buying shoes for someone else,” she tells TechCrunch. “Problem 1: When you buy shoes online, you might be unfamiliar with your size in the brand or model. If you’ve never bought from a brand before, it is very risky to make a purchase because there is very limited context in selecting your size. With many brands you translate your size yourself.
“Problem 2: People don’t only buy shoes for themselves. We enable gift and family purchasing (within a household or remote!) by sharing profiles.”
Xesto is doing its size predictions based on comparing a user’s (<1.5mm accurate) foot measurements to brands’ official sizing guidelines — with more than 150 shoe brands currently supported.
Howe says it plans to incorporate customer feedback into these predictions — including by analyzing online reviews where people tend to specify if a particular shoe sizes larger or smaller than expected. So it’s hoping to be able to keep honing the model’s accuracy.
“What we do is remove the uncertainty of finding your size by taking your 3D foot dimensions and correlating that to the brand’s sizes (or shoe model, if we have them),” she says. “We use the brands’ size guides and customer feedback to make the size recommendations. We have over 150 brands currently supported and are continuously adding more brands and models. We also recommend if you have extra wide feet you read reviews to see if you need to size up (until we have all that data robustly gathered).”
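The matching step described above — correlating a measured foot against a brand's size guide — can be sketched in a few lines of Python. The size chart below is invented for illustration; it is not Xesto's data or any brand's real guide.

```python
# Hedged sketch of size matching: recommend the smallest size in a brand's
# guide that covers the measured foot length. Chart values are invented.

SIZE_GUIDE_MM = {  # hypothetical brand size guide: size -> max foot length (mm)
    "US 7": 240.0,
    "US 8": 250.0,
    "US 9": 260.0,
    "US 10": 270.0,
}

def recommend_size(foot_length_mm, guide):
    """Return the smallest size whose fit length covers the measured foot."""
    for size, max_len in sorted(guide.items(), key=lambda kv: kv[1]):
        if foot_length_mm <= max_len:
            return size
    return None  # foot longer than any listed size

print(recommend_size(254.3, SIZE_GUIDE_MM))  # -> US 9
```

With sub-1.5mm scan accuracy, the measurement error is well below the typical 8–10mm gap between adjacent sizes, which is what makes a lookup like this trustworthy in the first place.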
Asked about the competitive landscape, given all this foot scanning action, Howe admits there’s a number of approaches trying to help with virtual shoe fit — such as comparative brand sizing recommendations or even foot scanning with pieces of paper. But she argues Xesto has an edge because of the high level of detail of its 3D scans — and on account of its social sharing feature. Aka this is an app to make foot scans you can send your bestie for shopping keepsies.
“What we do that is unique is only use 3D depth data and computer vision to create a 3D scan of the foot with under 1.5mm accuracy (unmatched as far as we’ve seen) in only a few minutes,” she argues. “We don’t ask you any information about your feet, or to use a reference object. We make size recommendations based on your feet alone, then let you share them seamlessly with loved ones. Size sharing is a unique feature we haven’t seen in the sizing space that we’re incredibly excited about (not only because we will get more shoes as gifts :D).”
Xesto’s iOS app is free for shoppers to download. It’s also entirely free to create and share your foot scan in glorious 3D point cloud — and will remain so according to Howe. The team’s monetization plan is focused on building out partnerships with retailers, which is on the slate for 2021.
“Right now we’re not taking any revenue but next year we will be announcing partnerships where we work directly within brands ecosystems,” she says, adding: “[We wanted to offer] the app to customers in time for Black Friday and the holiday shopping season. In 2021, we are launching some exciting initiatives in partnership with brands. But the app will always be free for shoppers!”
Since being founded around five years ago, Howe says Xesto has raised a pre-seed round from angel investors and secured national advanced research grants, as well as taking in some revenue over its lifetime. The team has one patent granted and one pending for their technologies, she adds.
A new Mac-optimized fork of machine learning environment TensorFlow posts some major performance increases. Although a big part of that is that until now the GPU wasn’t used for training tasks (!), M1-based devices see even further gains, suggesting a spate of popular workflow optimizations like this one are incoming.
Announced on both TensorFlow and Apple’s blogs, the improved Mac version shows in the best case more than a 10x improvement in speed for common training tasks.
That’s worth celebrating on its own for anyone who works in ML and finds themselves constantly waiting for their models to bake. But the fact that previous versions of TF only utilized the CPU on Macs and not the powerful parallel processors in the GPU probably limited the pool of people who inflict that problem on themselves in the first place. (Most large-scale ML training is done using cloud computing.)
The change from CPU-only to CPU+GPU could account for a great deal of the improvement, as the benchmarks on an Intel-based Mac Pro show huge gains on the same hardware. Training times once in the 6-8 second range are now measured in fractions of a second.
That’s not to say the M1 isn’t capable, but the new M1 Macs also have new GPUs, meaning the jump from nearly 10 seconds for a task on a 2019 MacBook Pro to less than 2 on a new M1 machine can only be partly attributed to Apple’s fancy first-party silicon.
I contacted Apple for more information, such as a number for an M1 device running non-optimized code (which would elucidate the improvements nicely) but a representative said it does not have those numbers.
At any rate perhaps more important for developers will be the improved battery life and heat management of the M1 devices. Performance bumps are all well and good, but if it made your machine into a hot plate, blasted your fan and made you run for the outlet in under an hour — not so good. Fortunately the M1 seems to be demonstrating remarkable efficiency under load, neither draining its reserves nor heating up too much.
You can probably expect a lot of these “now works better on M1” stories now that the new Macs are out and all the major companies can ship the updates they’ve been sitting on for the last few months.
Google is launching a major redesign of its Google Pay app on both Android and iOS today. Like similar phone-based contactless payment services, Google Pay — or Android Pay as it was known then — started out as a basic replacement for your credit card. Over time, the company added a few more features on top of that but the overall focus never really changed. After about five years in the market, Google Pay now has about 150 million users in 30 countries. With today’s update and redesign, Google is keeping all the core features intact but also taking the service in a new direction with a strong emphasis on helping you manage your personal finances (and maybe get a deal here and there as well).
Google is also partnering with 11 banks to launch a new kind of bank account in 2021. Called Plex, these mobile-first bank accounts will have no monthly fees, overdraft charges or minimum balances. The banks will own the accounts but the Google Pay app will be the main conduit for managing these accounts. The launch partners for this are Citi and Stanford Federal Credit Union.
Image Credits: Google
“What we’re doing in this new Google Pay app, think of it as combining three things into one,” Google director of product management Josh Woodward said as he walked me through a demo of the new app. “The three things are three tabs in the app. One is the ability to pay friends and businesses really fast. The second is to explore offers and rewards, so you can save money at shops. And the third is getting insights about your spending so you can stay on top of your money.”
Paying friends and businesses was obviously always at the core of Google Pay — but the emphasis here has shifted a bit. “You’ll notice that everything in the product is built around your relationships,” Caesar Sengupta, Google’s lead for Payments and Next Billion Users, told me. “It’s not about long lists of transactions or weird numbers. All your engagements pivot around people, groups, and businesses.”
It’s maybe no surprise then that the feature that’s now front and center in the app is P2P payments. You can also still pay and request money through the app as usual, but as part of this overhaul, Google is now making it easier to split restaurant bills with friends, for example, or your rent and utilities with your roommates — and to see who already paid and who is still delinquent. Woodward tells me that Google built this feature after its user research showed that splitting bills remains a major pain point for its users.
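The bookkeeping behind bill splitting is simple enough to sketch. This is not Google's implementation — just a minimal illustration of splitting a total evenly, tracking who has paid, and reporting who is still delinquent; the names and amounts are invented.

```python
# Minimal sketch of bill-splitting bookkeeping: split a total evenly,
# record who has paid, and report who still owes. Data is invented.

def split_bill(total_cents, people, paid):
    """Return {person: cents_owed} for everyone who hasn't settled up."""
    share, remainder = divmod(total_cents, len(people))
    owed = {}
    for i, person in enumerate(people):
        amount = share + (1 if i < remainder else 0)  # spread odd cents
        if person not in paid:
            owed[person] = amount
    return owed

still_owe = split_bill(10000, ["ana", "ben", "caro"], paid={"ana"})
print(still_owe)  # ben and caro still owe their shares
```

Working in integer cents and distributing the remainder explicitly avoids the rounding drift you would get by dividing floating-point dollars — the shares always sum back to the original total.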
In this same view, you can also find a list of companies you have recently transacted with — either by using the Google Pay tap-and-pay feature or because you’ve linked your credit card or bank account with the service. From there, you can see all of your recent transactions with those companies.
Image Credits: Google
Maybe the most important new feature Google is enabling with this update is indeed the ability to connect your bank accounts and credit cards to Google Pay so that it can pull in information about your spending. It’s basically Mint-light inside the Google Pay app. This is what enables the company to offer a lot of the other new features in the app. Google says it is working with “a few different aggregators” to enable this feature, though it didn’t go into details about who its partners are. It’s worth stressing that this, like all of the new features here, is off by default and opt-in.
Image Credits: Google
The basic idea here is similar to that of other personal finance aggregators. At its most basic, it lets you see how much money you spent and how much you still have. But Google is also using its smarts to show you some interesting insights into your spending habits. On Monday, it’ll show you how much you spent on the weekend, for example.
“Think of these almost as like stories in a way,” Woodward said. “You can swipe through them so you can see your large transactions. You can see how much you spent this week compared to a typical week. You can look at how much money you’ve sent to friends and which friends and where you’ve spent money in the month of November, for example.”
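One of the insights Woodward describes — this week's spend versus a typical week — boils down to a per-week aggregation. The sketch below is a generic illustration, not Google's code, and the transactions are invented.

```python
# Rough sketch of the "this week vs. a typical week" insight: bucket
# transactions by ISO week, then compare the current week's total against
# the average of prior weeks. Transaction data is invented.
import datetime as dt
from collections import defaultdict

def weekly_totals(transactions):
    """Sum spend (cents) per ISO (year, week) from (date, cents) pairs."""
    totals = defaultdict(int)
    for date, cents in transactions:
        totals[date.isocalendar()[:2]] += cents  # (year, week) key
    return totals

def vs_typical_week(transactions, current_week):
    totals = weekly_totals(transactions)
    current = totals.pop(current_week, 0)
    typical = sum(totals.values()) / len(totals) if totals else 0
    return current, typical

txns = [(dt.date(2020, 11, 2), 5000), (dt.date(2020, 11, 9), 7000),
        (dt.date(2020, 11, 16), 9000)]
print(vs_typical_week(txns, (2020, 47)))  # -> (9000, 6000.0)
```

ISO week keys sidestep the month-boundary ambiguity that naive "last 7 days" windows suffer from, which matters when the insight is meant to read like a consistent weekly story.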
This also then enables you to easily search for a given transaction using Google’s search capabilities. Since this is Google, that search should work pretty well and in a demo, the team showed me how a search for ‘Turkish’ brought up a transaction at a kebab restaurant, for example, even though it didn’t have ‘Turkish’ in its name. If you regularly take photos of your receipts, you can also now search through these from Google Pay and drill down to specific things you bought — as well as receipts and bills you receive in your Gmail inbox.
Also new inside of Google Pay is the ability to see and virtually clip coupons that are then linked to your credit card, so you don’t need to do anything else beyond using that linked credit card to get extra cashback on a given transaction, for example. If you opt in, these offers can also be personalized.
Image Credits: Google
The team also worked with the Google Lens team to now let you scan products and QR codes to look for potential discounts.
As for the core payments function, Google is also enabling a new capability that will let you use contactless payments at 30,000 gas stations now (often with a discount). The partners for this are Shell, ExxonMobil, Phillips 66, 76 and Conoco.
In addition, you’ll also soon be able to pay for parking in over 400 cities inside the app. Not every city is Portland, after all, and has a Parking Kitty. The first cities to get this feature are Austin, Boston, Minneapolis, and Washington, D.C., with others to follow soon.
It’s one thing to let Google handle your credit card transaction but it’s another to give it all of this — often highly personal — data. As the team emphasized throughout my conversation with them, Google Pay will not sell your data to third parties or even the rest of Google for ad targeting, for example. All of the personalized features are also off by default and the team is doing something new here by letting you turn them on for a three-month trial period. After those three months, you can then decide to keep them on or off.
In the end, whether you want to use the optional features and have Google store all of this data is probably a personal choice and not everybody will be comfortable with it. The rest of the core Google Pay features aren’t changing, after all, so you can still make your NFC payments at the supermarket with your phone just like before.
AI startup RealityEngines.AI changed its name to Abacus.AI in July. At the same time, it announced a $13 million Series A round. Today, only a few months later, it is not changing its name again, but it is announcing a $22 million Series B round, led by Coatue, with Decibel Ventures and Index Partners participating as well. With this, the company, which was co-founded by former AWS and Google exec Bindu Reddy, has now raised a total of $40.3 million.
In addition to the new funding, Abacus.AI is also launching a new product today, which it calls Abacus.AI Deconstructed. Originally, the idea behind RealityEngines/Abacus.AI was to provide its users with a platform that would simplify building AI models by using AI to automatically train and optimize them. That hasn’t changed, but as it turns out, a lot of (potential) customers had already invested in their own workflows for building and training deep learning models but were looking for help in putting them into production and managing them throughout their lifecycle.
“One of the big pain points [businesses] had was, ‘look, I have data scientists and I have my models that I’ve built in-house. My data scientists have built them on laptops, but I don’t know how to push them to production. I don’t know how to maintain and keep models in production.’ I think pretty much every startup now is thinking of that problem,” Reddy said.
Image Credits: Abacus.AI
Since Abacus.AI had already built those tools anyway, the company decided to now also break its service down into three parts that users can adapt without relying on the full platform. That means you can now bring your model to the service and have the company host and monitor the model for you, for example. The service will manage the model in production and, for example, monitor for model drift.
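Model drift monitoring of the kind mentioned above can be illustrated with a toy check: compare a feature's distribution in live traffic against the training data and flag the model when the shift crosses a threshold. This is a generic sketch, not Abacus.AI's method — production services use richer statistics (KS tests, population stability indices, and so on).

```python
# Toy drift check: flag a model when a feature's live mean shifts too far
# from the training mean, measured in training standard deviations.
# A generic sketch, not any vendor's implementation.
import statistics

def drift_score(train_values, live_values):
    """Mean shift of live data, in units of the training std deviation."""
    mu, sigma = statistics.mean(train_values), statistics.stdev(train_values)
    if sigma == 0:
        return 0.0
    return abs(statistics.mean(live_values) - mu) / sigma

def has_drifted(train_values, live_values, threshold=3.0):
    return drift_score(train_values, live_values) > threshold

train = [10.0, 11.0, 9.5, 10.5, 10.0]
print(has_drifted(train, [10.2, 9.9, 10.4]))   # similar traffic
print(has_drifted(train, [25.0, 26.5, 24.0]))  # clearly shifted
```

The point of automating this is the one Reddy raises: a model that was accurate on a data scientist's laptop can quietly degrade in production as the incoming data changes, and someone — or something — has to notice.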
Another area Abacus.AI has long focused on is model explainability and de-biasing, so it’s making that available as a module too, alongside its real-time machine learning feature store that helps organizations create, store and share their machine learning features and deploy them into production.
As for the funding, Reddy tells me the company didn’t really have to raise a new round at this point. After the company announced its first round earlier this year, there was quite a lot of interest from others to also invest. “So we decided that we may as well raise the next round because we were seeing adoption, we felt we were ready product-wise. But we didn’t have a large enough sales team. And raising a little early made sense to build up the sales team,” she said.
Reddy also stressed that unlike some of the company’s competitors, Abacus.AI is trying to build a full-stack self-service solution that can essentially compete with the offerings of the big cloud vendors. That — and the engineering talent to build it — doesn’t come cheap.
Image Credits: Abacus.AI
It’s no surprise then that Abacus.AI plans to use the new funding to increase its R&D team, but it will also increase its go-to-market team from two to ten in the coming months. While the company is betting on a self-service model — and is seeing good traction with small- and medium-sized companies — you still need a sales team to work with large enterprises.
Come January, the company also plans to launch support for more languages and more machine vision use cases.
“We are proud to be leading the Series B investment in Abacus.AI, because we think that Abacus.AI’s unique cloud service now makes state-of-the-art AI easily accessible for organizations of all sizes, including start-ups,” Yanda Erlich, a partner at Coatue Ventures told me. “Abacus.AI’s end-to-end autonomous AI service powered by their Neural Architecture Search invention helps organizations with no ML expertise easily deploy deep learning systems in production.”
Seldon is a U.K. startup that specializes in the rarified world of development tools to optimize machine learning. What does this mean? Well, dear reader, it means that the “AI” that companies are so fond of trumpeting does actually end up working.
It has now raised a £7.1 million Series A round co-led by AlbionVC and Cambridge Innovation Capital. The round also includes significant participation from existing investors Amadeus Capital Partners and Global Brain, with follow-on investment from other existing shareholders. The £7.1 million funding will be used to accelerate R&D and drive commercial expansion, take Seldon Deploy — a new enterprise solution — to market and double the size of the team over the next 18 months.
More accurately, Seldon is a cloud-agnostic machine learning (ML) deployment specialist which works in partnership with industry leaders such as Google, Red Hat, IBM and Amazon Web Services.
Key to its success is that its open-source project Seldon Core has more than 700,000 models deployed to date, drastically reducing friction for users deploying ML models. The startup says its customers are getting productivity gains of as much as 92% as a result of utilizing Seldon’s product portfolio.
Alex Housley, CEO and founder of Seldon, speaking to TechCrunch, explained that companies are using machine learning across thousands of use cases today, “but the model actually only generates real value when it’s actually running inside a real-world application.”
“So what we’ve seen emerge over these last few years are companies that specialize in specific parts of the machine learning pipeline, such as training version control features. And in our case we’re focusing on deployment. So what this means is that organizations can now build a fully bespoke AI platform that suits their needs, so they can gain a competitive advantage,” he said.
In addition, he said Seldon’s open-source model means that companies are not locked in: “They want to avoid lock-in, as well as to use tools from various different vendors. So this kind of intersection between machine learning, DevOps and cloud-native tooling is really accelerating a lot of innovation across enterprise and also within startups and growth-stage companies.”
Nadine Torbey, an investor at AlbionVC, added: “Seldon is at the forefront of the next wave of tech innovation, and the leadership team are true visionaries. Seldon has been able to build an impressive open-source community and add immediate productivity value to some of the world’s leading companies.”
Vin Lingathoti, partner at Cambridge Innovation Capital, said: “Machine learning has rapidly shifted from a nice-to-have to a must-have for enterprises across all industries. Seldon’s open-source platform operationalizes ML model development and accelerates the time-to-market by eliminating the pain points involved in developing, deploying and monitoring machine learning models at scale.”