Recycling robotics company AMP Robotics could raise up to $70M

AMP Robotics, the recycling robotics technology developer backed by investors including Sequoia Capital and Sidewalk Infrastructure Partners, is close to closing on as much as $70 million in new financing, according to multiple sources with knowledge of the company’s plans.

The new financing speaks to AMP Robotics’ continued success in pilot projects and with new partnerships that are rapidly expanding the company’s deployments.

Earlier this month, the company announced its largest purchase order to date for its trash-sorting and recycling robots.

That order, for 24 machine learning-enabled robotic recycling systems with the waste handling company Waste Connections, was a showcase for the efficacy of the company’s recycling technology.

That order comes on the back of a pilot program earlier in the year at a Toronto apartment complex, where tenants could opt in to have their recycling habits monitored by AMP Robotics and reported back to them in an effort to improve their recycling behavior.

The potential benefits of AMP Robotics’ machine learning-enabled robots are undeniable. The company’s technology can sort waste streams in ways that traditional systems never could, and at a cost far lower than most waste handling facilities can manage.

As TechCrunch reported earlier, the tech can tell the difference between high-density polyethylene, polyethylene terephthalate, low-density polyethylene, polypropylene and polystyrene. The robots can also sort for color, clarity, opacity and shapes like lids, tubs, clamshells and cups — the robots can even identify the brands on packaging.
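
AMP hasn’t published its architecture, but to give a sense of the kind of multi-class vision model such sorting implies, here is a minimal sketch that fine-tunes a standard convolutional backbone over the plastic categories named above (the model choice and all details are illustrative assumptions, not AMP’s implementation):

```python
# Illustrative only: AMP hasn't published its model. A minimal image
# classifier over recyclable-material categories using a stock backbone.
import torch
import torch.nn as nn
from torchvision import models

CATEGORIES = ["HDPE", "PET", "LDPE", "PP", "PS"]  # plastics named in the article

model = models.resnet18(weights=None)  # standard backbone; pretraining optional
model.fc = nn.Linear(model.fc.in_features, len(CATEGORIES))
model.eval()

def classify(frame: torch.Tensor) -> str:
    """frame: a (3, 224, 224) normalized image of one object on the belt."""
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))  # add batch dimension
    return CATEGORIES[logits.argmax(dim=1).item()]
```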

AMP’s robots have already been deployed in North America, Asia and Europe, with recent installations in Spain and across the U.S. in California, Colorado, Florida, Minnesota, Michigan, New York, Texas, Virginia and Wisconsin.

At the beginning of the year, AMP Robotics worked with its investor Sidewalk Labs on a pilot program that provided residents of a single apartment building representing 250 units in Toronto with detailed information about their recycling habits. Sidewalk Labs transported the waste to a Canada Fibers material recovery facility, where trash was sorted by both Canada Fibers employees and AMP Robotics’ machines.

Once the waste was categorized, sorted and recorded, Sidewalk communicated with residents of the building about how they were doing in their recycling efforts.

It was only last November that the Denver-based AMP Robotics raised a $16 million round from Sequoia Capital and others to finance the early commercialization of its technology.

As TechCrunch reported at the time, recycling businesses used to be able to rely on China to buy up any waste stream (no matter the quality of the material). However, about two years ago, China decided it would no longer serve as the world’s garbage dump and put strict standards in place for the kinds of raw materials it would be willing to receive from other countries.

The result has been higher costs at recycling facilities, which are now required to sort their garbage more effectively. At the time, low unemployment rates put the squeeze on labor availability at the facilities where trash was sorted. Over the past year, the COVID-19 pandemic has put even more pressure on those recycling and waste handling facilities, even though their staff have been deemed “essential workers.”

Given the economic reality, recyclers are turning to AMP’s technology, a combination of computer vision, machine learning and robotic automation, to improve efficiencies at their facilities.

And the power of AMP’s technology to identify waste products in a stream has other benefits, according to chief executive Matanya Horowitz.

“We can identify… whether it’s a Coke or Pepsi can or a Starbucks cup,” Horowitz told TechCrunch last year. “So that people can help design their product for circularity… we’re building out our reporting capabilities and that, to them, is something that is of high interest.”

AMP Robotics declined to comment for this article.

Abacus.AI raises another $22M and launches new AI modules

AI startup RealityEngines.AI changed its name to Abacus.AI in July. At the same time, it announced a $13 million Series A round. Today, only a few months later, it is not changing its name again, but it is announcing a $22 million Series B round, led by Coatue, with Decibel Ventures and Index Partners participating as well. With this, the company, which was co-founded by former AWS and Google exec Bindu Reddy, has now raised a total of $40.3 million.

Abacus co-founders Bindu Reddy, Arvind Sundararajan and Siddartha Naidu. Image Credits: Abacus.AI

In addition to the new funding, Abacus.AI is also launching a new product today, which it calls Abacus.AI Deconstructed. Originally, the idea behind RealityEngines/Abacus.AI was to provide its users with a platform that would simplify building AI models by using AI to automatically train and optimize them. That hasn’t changed, but as it turns out, a lot of (potential) customers had already invested in their own workflows for building and training deep learning models but were looking for help in putting them into production and managing them throughout their lifecycle.

“One of the big pain points [businesses] had was, ‘look, I have data scientists and I have my models that I’ve built in-house. My data scientists have built them on laptops, but I don’t know how to push them to production. I don’t know how to maintain and keep models in production.’ I think pretty much every startup now is thinking of that problem,” Reddy said.

Image Credits: Abacus.AI

Since Abacus.AI had already built those tools anyway, the company decided to break its service down into three parts that users can adopt without relying on the full platform. That means you can now bring your own model to the service and have the company host and monitor it for you; the service will manage the model in production and, for example, watch for model drift.
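
Abacus.AI hasn’t said publicly how its drift monitoring works; as a generic illustration of the idea, a minimal sketch might compare the distribution of live prediction scores against a reference window and flag a significant shift (the test and threshold below are assumptions):

```python
# Toy sketch of a common drift check (not Abacus.AI's actual method):
# a two-sample Kolmogorov-Smirnov test on model score distributions.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference_scores, live_scores, alpha=0.01):
    """Flag drift when live scores differ significantly from the
    reference (training-time) distribution."""
    stat, p_value = ks_2samp(reference_scores, live_scores)
    return p_value < alpha

# Example: reference scores from validation, live scores from production.
rng = np.random.default_rng(0)
reference = rng.beta(2, 5, size=5000)   # stand-in for held-out model scores
live = rng.beta(3, 4, size=1000)        # shifted distribution -> drift
print(drift_detected(reference, live))  # True
```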

Another area Abacus.AI has long focused on is model explainability and de-biasing, so it’s making that available as a module as well, alongside its real-time machine learning feature store, which helps organizations create, store and share their machine learning features and deploy them into production.

As for the funding, Reddy tells me the company didn’t really have to raise a new round at this point. After the company announced its first round earlier this year, there was quite a lot of interest from other investors. “So we decided that we may as well raise the next round because we were seeing adoption, we felt we were ready product-wise. But we didn’t have a large enough sales team. And raising a little early made sense to build up the sales team,” she said.

Reddy also stressed that unlike some of the company’s competitors, Abacus.AI is trying to build a full-stack self-service solution that can essentially compete with the offerings of the big cloud vendors. That — and the engineering talent to build it — doesn’t come cheap.

Image Credits: Abacus.AI

It’s no surprise then that Abacus.AI plans to use the new funding to increase its R&D team, but it will also increase its go-to-market team from two to ten in the coming months. While the company is betting on a self-service model — and is seeing good traction with small- and medium-sized companies — you still need a sales team to work with large enterprises.

Come January, the company also plans to launch support for more languages and more machine vision use cases.

“We are proud to be leading the Series B investment in Abacus.AI, because we think that Abacus.AI’s unique cloud service now makes state-of-the-art AI easily accessible for organizations of all sizes, including start-ups,” Yanda Erlich, a partner at Coatue Ventures, told me. “Abacus.AI’s end-to-end autonomous AI service powered by their Neural Architecture Search invention helps organizations with no ML expertise easily deploy deep learning systems in production.”

AI-tool maker Seldon raises £7.1M Series A from AlbionVC and Cambridge Innovation Capital

Seldon is a U.K. startup that specializes in the rarefied world of development tools to optimize machine learning. What does this mean? Well, dear reader, it means that the “AI” companies are so fond of trumpeting does actually end up working.

It has now raised a £7.1 million Series A round co-led by AlbionVC and Cambridge Innovation Capital. The round also includes significant participation from existing investors Amadeus Capital Partners and Global Brain, with follow-on investment from other existing shareholders. The £7.1 million funding will be used to accelerate R&D and drive commercial expansion, take Seldon Deploy — a new enterprise solution — to market and double the size of the team over the next 18 months.

More accurately, Seldon is a cloud-agnostic machine learning (ML) deployment specialist that works in partnership with industry leaders such as Google, Red Hat, IBM and Amazon Web Services.

Key to its success is that its open-source project Seldon Core has more than 700,000 models deployed to date, drastically reducing friction for users deploying ML models. The startup says its customers are getting productivity gains of as much as 92% as a result of utilizing Seldon’s product portfolio.

Alex Housley, CEO and founder of Seldon, told TechCrunch that companies are using machine learning across thousands of use cases today, “but the model actually only generates real value when it’s actually running inside a real-world application.”

“So what we’ve seen emerge over these last few years are companies that specialize in specific parts of the machine learning pipeline, such as training version control features. And in our case we’re focusing on deployment. So what this means is that organizations can now build a fully bespoke AI platform that suits their needs, so they can gain a competitive advantage,” he said.
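
To make the deployment focus concrete: Seldon Core models are commonly wrapped in a small Python class that the framework’s server can expose as a microservice. A minimal sketch in that style follows (simplified for illustration; the exact interface varies by version, and the artifact path is a placeholder):

```python
# MyModel.py -- a minimal model wrapper in the style Seldon Core's Python
# server expects (illustrative; check the version you run for exact details).
import joblib

class MyModel:
    def __init__(self):
        # Load a pre-trained artifact once at startup. Path is a placeholder.
        self.model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # Called per inference request; returns class probabilities.
        return self.model.predict_proba(X)
```

Packaged into a container, a wrapper like this can be served as a REST or gRPC microservice and managed on Kubernetes alongside the rest of a deployment.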

In addition, he said Seldon’s open-source model means that companies are not locked in: “They want to avoid lock-in as well; they want to use tools from various different vendors. So this kind of intersection between machine learning, DevOps and cloud-native tooling is really accelerating a lot of innovation across enterprise and also within startups and growth-stage companies.”

Nadine Torbey, an investor at AlbionVC, added: “Seldon is at the forefront of the next wave of tech innovation, and the leadership team are true visionaries. Seldon has been able to build an impressive open-source community and add immediate productivity value to some of the world’s leading companies.”

Vin Lingathoti, partner at Cambridge Innovation Capital, said: “Machine learning has rapidly shifted from a nice-to-have to a must-have for enterprises across all industries. Seldon’s open-source platform operationalizes ML model development and accelerates the time-to-market by eliminating the pain points involved in developing, deploying and monitoring machine learning models at scale.”

Construction tech startups are poised to shake up a $1.3 trillion industry

In the wake of COVID-19 this spring, construction sites across the nation emptied out alongside neighboring restaurants, retail stores, offices and other commercial establishments. Debates ensued over whether the construction industry’s seven million employees should be considered “essential,” while regulations continued to shift on the operation of job sites. Meanwhile, project demand steadily shrank.

Amidst the chaos, construction firms faced an existential question: How will they survive? This question is as relevant today as it was in April. As one of the least-digitized sectors of our economy, construction is ripe for technology disruption.

Construction is a massive, $1.3 trillion industry in the United States — a complex ecosystem of lenders, owners, developers, architects, general contractors, subcontractors and more. While each construction project has a combination of these key roles, the construction process itself is highly variable depending on the asset type. Roughly 41% of domestic construction value is in residential property, 25% in commercial property and 34% in industrial projects. Because each asset type, and even subassets within these classes, tends to involve a different set of stakeholders and processes, most construction firms specialize in one or a few asset groups.

Regardless of asset type, there are four key challenges across construction projects:

High fragmentation: Beyond the developer, architect, engineer and general contractor, projects could involve hundreds of subcontractors with specialized expertise. As the scope of the project increases, coordination among parties becomes increasingly difficult and decision-making slows.

Poor communication: With so many different parties both in the field and in the office, it is often difficult to relay information from one party to the next. Miscommunication and poor project data account for 48% of all rework on U.S. construction job sites, costing the industry over $31 billion annually, according to FMI research.

Lack of data transparency: Manual data collection and data entry are still common on construction sites. On top of being laborious and error-prone, manual methods capture little real-time data, so decision-making is often based on outdated information.

Skilled labor shortage: The construction workforce is aging faster than the younger population that joins it, resulting in a shortage of labor particularly for skilled trades that may require years of training and certifications. The shortage drives up labor costs across the industry, particularly in the residential sector, which traditionally sees higher attrition due to its more variable project demand.

A construction tech boom

Too many of the key processes involved in managing multimillion-dollar construction projects are carried out on Excel or even with pen and paper. The lack of tech sophistication on construction sites materially contributes to job delays, missed budgets and increased job site safety risk. Technology startups are emerging to help solve these problems.

Here are the main categories in which we’re seeing construction tech startups emerge.

1. Project conception

  • How it works today: During a project’s conception, asset owners and/or developers develop site proposals and may work with lenders to manage the project financing.
  • Key challenges: Processes for managing construction loans are cumbersome and time-intensive today, given the complexity of the loan draw process.
  • How technology can address challenges: Design software such as Spacemaker AI can help developers create site proposals, while construction loan financing software such as Built Technologies and Rabbet is helping lenders and developers manage the draw process more efficiently.

2. Design and engineering

  • How it works today: Developers work with design, architect and engineering teams to turn ideas into blueprints.
  • Key challenges: Because the design and engineering teams are often siloed from the contractors, it’s hard for designers and engineers to know the real-time impact of their decisions on the ultimate cost or timing of the project. Lack of coordination with construction teams can lead to time-consuming changes.
  • How technology can address challenges: Of all the elements of the construction process, the design and engineering process itself is the most technologically sophisticated today, with relatively high adoption of software like Autodesk to help with design documentation, specification development, quality assurance and more. Autodesk is moving downstream to offer a suite of solutions that includes construction management, providing more connectivity between the teams.

Sequoia-backed recycling robot maker AMP Robotics gets its largest purchase order

AMP Robotics, the manufacturer of robotic recycling systems, has received its largest purchase order from Waste Connections, a publicly traded North American waste handling company.

The order, for 24 machine learning-enabled robotic recycling systems, will be used on container, fiber and residue lines across numerous materials recovery facilities, the company said.

The AMP technology can be used to recover plastics, cardboard, paper, cans, cartons and many other containers and packaging types reclaimed for raw material processing.

The tech can tell the difference between high-density polyethylene, polyethylene terephthalate, low-density polyethylene, polypropylene and polystyrene. The robots can also sort for color, clarity, opacity and shapes like lids, tubs, clamshells and cups — the robots can even identify the brands on packaging.

So far, AMP’s robots have been deployed in North America, Asia and Europe, with recent installations in Spain and across the U.S. in California, Colorado, Florida, Minnesota, Michigan, New York, Texas, Virginia and Wisconsin.

In January, before the pandemic began, AMP Robotics worked with its investor Sidewalk Labs on a pilot program that would provide residents of a single apartment building representing 250 units in Toronto with detailed information about their recycling habits.

Working with the building and a waste hauler, Sidewalk Labs would transport the waste to a Canada Fibers material recovery facility, where trash would be sorted by both Canada Fibers employees and AMP Robotics. Once the waste was categorized, sorted and recorded, Sidewalk would communicate with residents of the building about how they’re doing in their recycling efforts.

Sidewalk says that the tips will be communicated through email, an online portal, and signage throughout the building every two weeks over a three-month period.

For residents, it was an opportunity to get a better handle on what they can and can’t recycle, and Sidewalk Labs is betting that the information will help residents improve their habits. Folks who didn’t want their trash monitored and sorted could opt out of the program.

Recyclers like Waste Connections should welcome the commercialization of robots tackling industry problems. Their once-stable business has been turned on its head by trade wars and low unemployment. About two years ago, China decided it would no longer serve as the world’s garbage dump and put strict standards in place for the kinds of raw materials it would be willing to receive from other countries. The result has been higher costs at recycling facilities, which actually are now required to sort their garbage more effectively.

At the same time, low unemployment rates are putting the squeeze on labor availability at facilities where humans are basically required to hand-sort garbage into recyclable materials and trash.

AMP Robotics is backed by Sequoia Capital, BV, Closed Loop Partners, Congruent Ventures and Sidewalk Infrastructure Partners, a spin-out from Alphabet that invests in technologies and new infrastructure projects.

Provizio closes $6.2M seed round for its car safety platform using sensors and AI

Provizio, a combination hardware and software startup with technology to improve car safety, has closed a seed investment round of $6.2 million. Investors include Bobby Hambrick, the founder of Autonomous Stuff; the founders of Movidius; the European Innovation Council (EIC); and ACT Venture Capital.

The startup has a “five-dimensional” sensory platform that — it says — perceives, predicts and prevents car accidents in real time and beyond the line-of-sight. Its “Accident Prevention Technology Platform” combines proprietary vision sensors, machine learning and radar with ultra-long range and foresight capabilities to prevent collisions at high speed and in all weather conditions, says the company. The Provizio team is made up of experts in robotics, AI and vision and radar sensor development.

Barry Lunn, CEO of Provizio, said: “One point three five road deaths to zero drives everything we do at Provizio. We have put together an incredible team that is growing daily. AI is the future of automotive accident prevention and Provizio 5D radars with AI on-the-edge are the first step towards that goal.”

Also involved in Provizio are Dr. Scott Thayer and Prof. Jeff Mishler, formerly of Carnegie Mellon’s robotics program, famous for developing early autonomous technologies for Google/Waymo, Argo, Aurora and Uber.

Cough-scrutinizing AI shows major promise as an early warning system for COVID-19

Asymptomatic spread of COVID-19 is a huge contributor to the pandemic, but of course if there are no symptoms, how can anyone tell they should isolate or get a test? MIT research has found that hidden in the sound of coughs is a pattern that subtly, but reliably, marks a person as likely to be in the early stages of infection. It could make for a much-needed early warning system for the virus.

The sound of one’s cough can be very revealing, as doctors have known for many years. AI models have been built to detect conditions like pneumonia, asthma and even neuromuscular diseases, all of which alter how a person coughs in different ways.

Before the pandemic, researcher Brian Subirana had shown that coughs may even help predict Alzheimer’s — mirroring results from IBM research published just a week ago. More recently, Subirana thought if the AI was capable of telling so much from so little, perhaps COVID-19 might be something it could suss out as well. In fact, he isn’t the first to think so.

He and his team set up a site where people could contribute coughs, and ended up assembling “the largest research cough dataset that we know of.” Thousands of samples were used to train up the AI model, which they document in an open access IEEE journal.

The model seems to have detected subtle patterns in vocal strength, sentiment, lung and respiratory performance, and muscular degradation, to the point where it was able to identify 100% of coughs by asymptomatic COVID-19 carriers and 98.5% of symptomatic ones, with specificities of 83% and 94% respectively, meaning it doesn’t produce large numbers of false positives or negatives.
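
To put those sensitivity and specificity figures in screening terms, here’s a quick worked example; the 10% prevalence below is an illustrative assumption, not a number from the study:

```python
# Converting sensitivity and specificity into the chance that a positive
# result is a true positive. The 10% prevalence is an assumption for the
# sketch, not from the paper.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Asymptomatic carriers: 100% sensitivity, 83% specificity (per the paper).
print(positive_predictive_value(1.00, 0.83, 0.10))   # ~0.40
# Symptomatic: 98.5% sensitivity, 94% specificity.
print(positive_predictive_value(0.985, 0.94, 0.10))  # ~0.65
```

Even strong specificity yields a meaningful share of false positives at low prevalence, which is part of why Subirana frames this as a screening tool rather than a diagnostic.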

“We think this shows that the way you produce sound changes when you have COVID, even if you’re asymptomatic,” said Subirana of the surprising finding. However, he cautioned that although the system was good at detecting non-healthy coughs, it should not be used as a diagnosis tool for people who have symptoms but are unsure of the underlying cause.

I asked Subirana for a bit more clarity on this point.

“The tool is detecting features that allow it to discriminate the subjects that have COVID from the ones that don’t,” he wrote in an email. “Previous research has shown you can pick up other conditions too. One could design a system that would discriminate between many conditions but our focus was on picking out COVID from the rest.”

For the statistics-minded out there, the incredibly high success rate may raise some red flags. Machine learning models are great at a lot of things, but 100% isn’t a number you see a lot, and when you do you start thinking of other ways it might have been produced by accident. No doubt the findings will need to be proven on other data sets and verified by other researchers, but it’s also possible that there’s simply a reliable tell in COVID-induced coughs that a computer listening system can hear quite easily.

The team is collaborating with several hospitals to build a more diverse data set, but is also working with a private company to put together an app to distribute the tool for wider use, if it can get FDA approval.

Deep Science: Alzheimer’s screening, forest-mapping drones, machine learning in space, more

Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers — particularly in but not limited to artificial intelligence — and explain why they matter.

This week: a startup that’s using UAV drones to map forests, a look at how machine learning can map social media networks and predict Alzheimer’s, improvements to computer vision for space-based sensors and other news regarding recent technological advances.

Predicting Alzheimer’s through speech patterns

Machine learning tools are being used to aid diagnosis in many ways, since they’re sensitive to patterns that humans find difficult to detect. IBM researchers have potentially found such patterns in speech that are predictive of the speaker developing Alzheimer’s disease.

The system needs only a couple of minutes of ordinary speech in a clinical setting. The team used a large set of data (the Framingham Heart Study) going back to 1948, allowing patterns of speech to be identified in people who would later develop Alzheimer’s. The accuracy rate is about 71%, or 0.74 area under the curve for those of you who are more statistically informed. That’s far from a sure thing, but current basic tests are barely better than a coin flip in predicting the disease this far ahead of time.

This is very important because the earlier Alzheimer’s can be detected, the better it can be managed. There’s no cure, but there are promising treatments and practices that can delay or mitigate the worst symptoms. A non-invasive, quick test of well people like this one could be a powerful new screening tool and is also, of course, an excellent demonstration of the usefulness of this field of tech.

(Don’t read the paper expecting to find exact symptoms or anything like that — the array of speech features aren’t really the kind of thing you can look out for in everyday life.)

So-cell networks

Making sure your deep learning network generalizes to data outside its training environment is a key part of any serious ML research. But few attempt to set a model loose on data that’s completely foreign to it. Perhaps they should!

Researchers from Uppsala University in Sweden took a model used to identify groups and connections in social media, and applied it (not unmodified, of course) to tissue scans. The tissue had been treated so that the resultant images produced thousands of tiny dots representing mRNA.

Normally the different groups of cells, representing types and areas of tissue, would need to be manually identified and labeled. But the graph neural network, created to identify social groups based on similarities like common interests in a virtual space, proved it could perform a similar task on cells.

“We’re using the latest AI methods — specifically, graph neural networks, developed to analyze social networks — and adapting them to understand biological patterns and successive variation in tissue samples. The cells are comparable to social groupings that can be defined according to the activities they share in their social networks,” said Uppsala’s Carolina Wählby.

It’s an interesting illustration not just of the flexibility of neural networks, but of how structures and architectures repeat at all scales and in all contexts. As without, so within, if you will.
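
The Uppsala model is more elaborate, but the core operation of a graph convolutional network is compact enough to sketch: each node (a user in the social setting, a cell here) updates its features by averaging over its neighbors and applying a learned projection. A toy version, with all shapes and values invented for illustration:

```python
# Minimal sketch of one graph-convolution step (the published model is
# more sophisticated; this just shows the core aggregate-and-project idea).
import numpy as np

def gcn_layer(A, H, W):
    """A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) learned weights. Returns updated node features."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees
    H_next = (A_hat / deg) @ H @ W          # mean-aggregate, then project
    return np.maximum(H_next, 0)            # ReLU

# Toy graph: 4 nodes (cells or users), 3 input features, 2 output features.
rng = np.random.default_rng(1)
A = np.array([[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # (4, 2)
```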

Drones in nature

The vast forests of our national parks and timber farms have countless trees, but you can’t put “countless” on the paperwork. Someone has to make an actual estimate of how well various regions are growing, the density and types of trees, the range of disease or wildfire, and so on. This process is only partly automated, as aerial photography and scans only reveal so much, while on-the-ground observation is detailed but extremely slow and limited.

Treeswift aims to take a middle path by equipping drones with the sensors they need to both navigate and accurately measure the forest. By flying through much faster than a walking person, they can count trees, watch for problems and generally collect a ton of useful data. The company is still very early-stage, having spun out of the University of Pennsylvania and acquired an SBIR grant from the NSF.

“Companies are looking more and more to forest resources to combat climate change but you don’t have a supply of people who are growing to meet that need,” Steven Chen, co-founder and CEO of Treeswift and a doctoral student in Computer and Information Science (CIS) at Penn Engineering, said in a Penn news story. “I want to help make each forester do what they do with greater efficiency. These robots will not replace human jobs. Instead, they’re providing new tools to the people who have the insight and the passion to manage our forests.”

Another area where drones are making lots of interesting moves is underwater. Oceangoing autonomous submersibles are helping map the sea floor, track ice shelves and follow whales. But they all have a bit of an Achilles’ heel in that they need to periodically be picked up, charged and their data retrieved.

Purdue engineering professor Nina Mahmoudian has created a docking system by which submersibles can easily and automatically connect for power and data exchange.

A yellow marine robot (left, underwater) finds its way to a mobile docking station to recharge and upload data before continuing a task. (Purdue University photo/Jared Pike)

The craft needs a special nosecone, which can find and plug into a station that establishes a safe connection. The station can be an autonomous watercraft itself, or a permanent feature somewhere — what matters is that the smaller craft can make a pit stop to recharge and debrief before moving on. If it’s lost (a real danger at sea), its data won’t be lost with it.

You can see the setup in action below:

https://youtu.be/kS0-qc_r0

Sound in theory

Drones may soon become fixtures of city life as well, though we’re probably some ways from the automated private helicopters some seem to think are just around the corner. But living under a drone highway means constant noise — so people are always looking for ways to reduce turbulence and resultant sound from wings and propellers.

It looks like it’s on fire, but that’s turbulence.

Researchers at the King Abdullah University of Science and Technology found a new, more efficient way to simulate the airflow in these situations; fluid dynamics is essentially as complex as you make it, so the trick is to apply your computing power to the right parts of the problem. They were able to render only the flow near the surface of the theoretical aircraft in high resolution, finding that past a certain distance there was little point in knowing exactly what was happening. Improvements to models of reality don’t always need to be better in every way — after all, the results are what matter.

Machine learning in space

Computer vision algorithms have come a long way, and as their efficiency improves they are beginning to be deployed at the edge rather than at data centers. In fact it’s become fairly common for camera-bearing objects like phones and IoT devices to do some local ML work on the image. But in space it’s another story.

Image Credits: Cosine

Performing ML work in space was until fairly recently simply too expensive power-wise to even consider. That’s power that could be used to capture another image, transmit the data to the surface, etc. HyperScout 2 is exploring the possibility of ML work in space, and its satellite has begun applying computer vision techniques immediately to the images it collects before sending them down. (“Here’s a cloud — here’s Portugal — here’s a volcano…”)

For now there’s little practical benefit, but object detection can be combined with other functions easily to create new use cases, from saving power when no objects of interest are present, to passing metadata to other tools that may work better if informed.

In with the old, out with the new

Machine learning models are great at making educated guesses, and in disciplines where there’s a large backlog of unsorted or poorly documented data, it can be very useful to let an AI make a first pass so that graduate students can use their time more productively. The Library of Congress is doing it with old newspapers, and now Carnegie Mellon University’s libraries are getting into the spirit.

CMU’s million-item photo archive is in the process of being digitized, but to make it useful to historians and curious browsers it needs to be organized and tagged — so computer vision algorithms are being put to work grouping similar images, identifying objects and locations, and doing other valuable basic cataloguing tasks.

“Even a partly successful project would greatly improve the collection metadata, and could provide a possible solution for metadata generation if the archives were ever funded to digitize the entire collection,” said CMU’s Matt Lincoln.

A very different project, yet one that seems somehow connected, is this work by a student at the Escola Politécnica da Universidade de Pernambuco in Brazil, who had the bright idea to try sprucing up some old maps with machine learning.

The tool they used takes old line-drawing maps and attempts to create a sort of satellite image based on them using a generative adversarial network; GANs pit a generator against a discriminator, with the generator learning to produce content the discriminator can’t tell apart from the real thing.

Image Credits: Escola Politécnica da Universidade de Pernambuco
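
The map-to-satellite tool is a conditional, image-to-image variant, but the adversarial core is the same everywhere: a generator tries to fool a discriminator that is trained to separate real from generated samples. A bare-bones sketch of one training step, with toy vector dimensions standing in for real image tensors:

```python
# Minimal GAN training step (illustrative; the student's model is a
# conditional image-to-image network, not this toy vector version).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # noise -> fake
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    z = torch.randn(real.size(0), 16)
    fake = G(z)

    # Discriminator: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # Generator: try to get fakes labeled as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    opt_g.step()

train_step(torch.randn(8, 32))  # one update with a toy batch of "real" samples
```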

Well, the results aren’t what you might call completely convincing, but it’s still promising. Such maps are rarely accurate but that doesn’t mean they’re completely abstract — recreating them in the context of modern mapping techniques is a fun idea that might help these locations seem less distant.

Egnyte introduces new features to help deal with security/governance during pandemic

The pandemic has put stress on companies dealing with a workforce that is mostly — and sometimes suddenly — working from home. That has led to rising needs for security and governance tooling, something that Egnyte is looking to meet with new features aimed at helping companies cope with file management during the pandemic.

Egnyte is an enterprise file storage and sharing (EFSS) company, though it has added security services and other tools over the years.

“It’s no surprise that there’s been a rapid shift to remote work, which has I believe led to mass adoption of multiple applications running on multiple clouds, and tied to that has been a nonlinear reaction of exponential growth in data security and governance concerns,” Vineet Jain, co-founder and CEO at Egnyte, explained.

There’s a lot of data at stake.

Egnyte’s announcements today are in part a reaction to the changes that COVID has brought, a mix of net-new features and capabilities that were on its road map, but accelerated to meet the needs of the changing technology landscape.

What’s new?

The company is introducing a new feature called Smart Cache to make sure that the content an individual user accesses most, wherever it lives, will be ready whenever they need it.

“Smart Cache uses machine learning to predict the content most likely to be accessed at any given site, so administrators don’t have to anticipate usage patterns. The elegance of the solution lies in that it is invisible to the end users,” Jain said. The end result of this capability could be lower storage and bandwidth costs, because the system can make this content available in an automated way only when it’s needed.
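
Egnyte hasn’t published Smart Cache’s internals, and the real system reportedly uses machine learning; purely to illustrate the shape of the problem, here is a toy predictive-caching sketch with a hand-rolled recency/frequency score standing in for a learned model. All names and weights are invented:

```python
# Toy predictive caching: rank files by a simple access score and
# pre-stage the top candidates at a site. Not Egnyte's implementation.
from dataclasses import dataclass

@dataclass
class FileStats:
    path: str
    accesses_last_week: int
    hours_since_last_access: float

def access_score(f: FileStats) -> float:
    # Frequency helps, staleness hurts; weights are made up for the sketch.
    return f.accesses_last_week / (1.0 + f.hours_since_last_access)

def files_to_cache(stats: list[FileStats], budget: int) -> list[str]:
    ranked = sorted(stats, key=access_score, reverse=True)
    return [f.path for f in ranked[:budget]]
```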

Another new feature is email scanning and governance. As Jain points out, email is often a company’s largest data store, but it’s also a conduit for phishing attacks and malware. So Egnyte is introducing an email governance tool that keeps an eye on this content, scanning it for known malware and ransomware and blocking files from being put into distribution when it identifies something that could be harmful.

As companies move more files around it’s important that security and governance policies travel with the document, so that policies can be enforced on the file wherever it goes. This was true before COVID-19, but has only become more true as more folks work from home.

Finally, Egnyte is using machine learning for auto-classification of documents to apply policies to documents without humans having to touch them. By identifying the document type automatically, whether it has personally identifying information or it’s a budget or planning document, Egnyte can help customers auto-classify and apply policies about viewing and sharing to protect sensitive materials.
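
Again, this isn’t Egnyte’s implementation; it is a minimal sketch of what auto-classification plus a hard rule for one kind of personally identifying information might look like (the training examples and labels are invented):

```python
# Toy document auto-classifier with a regex override for US SSN-format
# strings; illustrative only.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["FY21 budget forecast by quarter", "employee SSN and home address",
        "Q3 planning roadmap draft", "payroll record 123-45-6789"]
labels = ["budget", "pii", "planning", "pii"]  # tiny invented training set

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(docs, labels)

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US Social Security format

def classify(text: str) -> str:
    if SSN_PATTERN.search(text):
        return "pii"  # hard rule wins over the statistical model
    return classifier.predict([text])[0]

print(classify("2022 budget planning numbers"))
```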

Egnyte is reacting to the market needs as it makes changes to the platform. While the pandemic has pushed this along, these are features that companies with documents spread out across various locations can benefit from regardless of the times.

The company is over $100 million in ARR today, and grew 22% in the first half of 2020. Whether it can accelerate that growth rate in H2 2020 is not yet clear. Regardless, Egnyte is a budding IPO candidate for 2021 if market conditions hold.

New Oxford machine learning-based COVID-19 test can provide results in under 5 minutes

Oxford scientists working out of the school’s Department of Physics have developed a new type of COVID-19 test that can detect SARS-CoV-2 with a high degree of accuracy, directly in samples taken from patients, using a machine learning-based approach. The technique could help sidestep test supply limitations, and it offers advantages when it comes to detecting actual virus particles, rather than antibodies or other signs of the virus’s presence that don’t necessarily correlate to an active, transmissible case.

The test created by the Oxford researchers also offers significant advantages in terms of speed, providing results in under five minutes without any sample preparation required. That means it could be among the technologies that unlock mass testing – a crucial need, not only for getting a handle on the current COVID-19 pandemic, but also for helping us deal with potential future global viral outbreaks. Oxford’s method is well-suited for that, too, since it can potentially be reconfigured relatively easily to detect a number of viral threats.

The technology that makes this possible works by labelling any virus particles found in a sample taken from a patient with short, fluorescent DNA strands that act as markers. A microscope images the sample and the labelled viruses present, and then machine learning software developed by the team takes over, algorithmically identifying each virus by the differences in emitted fluorescent light that result from its distinct physical surface makeup, size and chemical composition.
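
The Oxford team’s actual features and models are described in their paper; as a simplified illustration of the classification stage only, here is a sketch where invented per-particle fluorescence features feed a stock classifier (every number is synthetic):

```python
# Illustrative classification stage: synthetic per-particle features
# standing in for whatever the real pipeline extracts from microscopy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented features: [mean fluorescence intensity, spot diameter (nm),
# signal decay rate] for labelled particles of each virus type.
rng = np.random.default_rng(42)
sars_cov_2 = rng.normal([0.8, 100, 0.3], 0.05, size=(200, 3))
influenza  = rng.normal([0.6, 120, 0.5], 0.05, size=(200, 3))

X = np.vstack([sars_cov_2, influenza])
y = np.array([1] * 200 + [0] * 200)  # 1 = SARS-CoV-2

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(rng.normal([0.8, 100, 0.3], 0.05, size=(1, 3))))  # expected: [1]
```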

This technology, including the sample collection equipment, the microscopic imager and the fluorescence insertion tools, as well as the compute capabilities, can be miniaturized to the point where it can be used just about anywhere, according to the researchers – including “businesses, music venues, airports” and more. The focus now is to create a spinout company for the purposes of commercializing the device in a format that integrates all the components together.

The researchers anticipate being able to form the company and start product development by early next year, with the potential of having a device approved for use and ready for distribution around six months after that. It’s a tight timeline for the development of a new diagnostic device, but timelines have already changed amply in the face of this pandemic, and will continue to do so, as we’re unlikely to see it fade away anytime in the near future.
