Posted on

AI Being Applied in Agriculture to Help Grow Food, Support New Methods

AI is being applied to many areas of agriculture, including vertical farming, where crops are grown vertically-stacked in a controlled environment. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

AI continues to have an impact in agriculture, with efforts underway to help grow food, combat disease and pests, employ drones and other robots with computer vision, and use machine learning to monitor soil nutrient levels.

In Leones, Argentina, a drone with a special camera flies low over 150 acres of wheat, checking each stalk one by one, looking for the beginnings of a fungal infection that could threaten this year’s crop.

The flying robot is powered by a computer vision system incorporating AI supplied by Taranis, a company founded in 2015 in Tel Aviv, Israel by a team of agronomists and AI experts. The company is focused on bringing precision and control to the agriculture industry through a system it refers to as an “agriculture intelligence platform.”

The platform relies on sophisticated computer vision, data science and deep learning algorithms to generate insights aimed at preventing crop yield loss from diseases, insects, weeds and nutrient deficiencies. The Taranis system is monitoring millions of farm acres across the US, Argentina, Brazil, Russia, Ukraine and Australia, the company states. The company has raised some $30 million from investors.

“Today, to increase yields in our lots, it’s essential to have a technology that allows us to make decisions immediately,” Ernesto Agüero, the producer on San Francisco Farm in Argentina, stated in an account in Business Insider.

Elsewhere, a fruit-picking robot named Virgo is using computer vision to decide which tomatoes are ripe and how to pick them gently, so that just the ripe tomatoes are harvested and the rest keep growing. Boston-based startup Root AI developed the robot to assist indoor farmers.

“Indoor growing powered by artificial intelligence is the future,” stated Josh Lessing, co-founder and CEO of Root AI. The company is currently installing systems in commercial greenhouses in Canada.

More indoor farming is happening, with AI heavily engaged. 80 Acres Farms of Cincinnati opened a fully-automated indoor growing facility last year, and currently has seven sites in the US. AI is used to monitor every step of the growing process.

“We can tell when a leaf is developing and if there are any nutrient deficiencies, necrosis, whatever might be happening to the leaf,” stated Mike Zelkind, CEO of 80 Acres. “We can identify pest issues and a variety of other things with vision systems today.” The crops grow faster indoors and have the potential to be more nutrient-dense, he suggests.

A subset of indoor farming is “vertical farming,” the practice of growing crops in vertically-stacked layers, often incorporating a controlled environment which aims to optimize plant growth. It may also use an approach without soil, such as hydroponics, aquaponics and aeroponics.

Austrian Researchers Studying AI in Vertical Farming

Researchers at the University of Applied Sciences Burgenland in Austria are involved in a research project to leverage AI to help make the vertical farming industry viable, according to an account in Hortidaily.

The team has built a small experimental factory, a 2.5 x 3 x 2.5-meter cube, double-walled with light-proof insulation. No sun is needed inside the cube; light and temperature are controlled. Cultivation is based on aeroponics: roots are suspended in the air, and nutrients are delivered via a fine mist, using a fraction of the water required for conventional cultivation and causing the plants to grow faster than they would in soil.

The program, called Agri-Tec 4.0, is run by Markus Tauber, head of the Cloud Computing Engineering program at the university. His team contributes expertise in sensors and sensor networking, and plans to develop algorithms to ensure optimal plant growth.

Markus Tauber, head of the Cloud Computing Engineering program, University of Applied Sciences Burgenland, Austria

The software architecture bases its actions on five points: monitoring, analysis, planning, execution and existing knowledge. In addition to coordinating light, temperature, nutrients and irrigation, the wind must also be continuously adjusted, even though the plants grow inside a dark cube.

“In the case of wind control, we monitor the development of the plant using the sensor and our knowledge. We use image data for this. We derive the information from the thickness and inclination of the stem. From a certain thickness and inclination, more wind is needed again,” Tauber stated.
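The five points above describe a classic monitor-analyze-plan-execute loop over a knowledge base. A minimal sketch of how such a loop could drive wind control from stem measurements follows; the threshold values and function names are hypothetical, since the article does not publish Agri-Tec 4.0’s actual rules, but the structure mirrors the five points.

```python
# "Existing knowledge": assumed thresholds for when a stem needs more wind.
# These numbers are invented for illustration.
KNOWLEDGE = {"max_thickness_mm": 4.0, "max_inclination_deg": 12.0}

def monitor(image_data):
    """Monitoring: derive stem thickness and inclination from image data."""
    return image_data["thickness_mm"], image_data["inclination_deg"]

def analyze(thickness, inclination, knowledge):
    """Analysis: decide whether the plant has outgrown the current airflow."""
    return (thickness >= knowledge["max_thickness_mm"]
            or inclination >= knowledge["max_inclination_deg"])

def plan(needs_more_wind, current_fan_level):
    """Planning: step the fan level up, or hold it steady."""
    return current_fan_level + 1 if needs_more_wind else current_fan_level

def execute(fan_level):
    """Execution: a real system would command the fan controller here."""
    return {"fan_level": fan_level}

def control_step(image_data, current_fan_level, knowledge=KNOWLEDGE):
    """One pass through the monitor-analyze-plan-execute loop."""
    thickness, inclination = monitor(image_data)
    needs_more = analyze(thickness, inclination, knowledge)
    return execute(plan(needs_more, current_fan_level))

# A thick, leaning stem triggers more wind; a young one does not.
print(control_step({"thickness_mm": 4.5, "inclination_deg": 14.0}, 1))
print(control_step({"thickness_mm": 2.0, "inclination_deg": 3.0}, 1))
```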

The system uses an irrigation robot supplied by PhytonIQ Technology of Austria. Co-founder Martin Parapatits cited the worldwide trend to combine vertical farming and AI. “Big players are investing but there is no ready-made solution yet,” he stated.

He seconded the importance of wind control. “Under the influence of wind ventilation or different wavelengths of light, plants can be kept small and bushy or grown tall and slender,” Parapatits stated. “At the same time, the air movement dries out the plants’ surroundings. This reduces the risk of mold and encourages the plant to breathe.”

San Francisco Startup Trace Genomics Studies Soil

Soil is still important for startup Trace Genomics of San Francisco, founded in 2015 to provide soil analysis services using machine learning to assess soil strengths and weaknesses. The goal is to prevent defective crops and optimize the potential to produce healthy crops.

Services are provided in packages which include a pathogen screening based on bacteria and  fungi, and a comprehensive pathogen evaluation, according to an account in emerj.

Co-founders Diane Wu and Poornima Parameswaran met in a laboratory at Stanford University in 2009, following their passions for pathology and genetics. The company has raised over $35 million in funding so far, according to its website.

Trace Genomics was recently named a World Economic Forum Technology Partner, in recognition of its use of deep science and technology to tackle the challenge of soil degradation.

Poornima Parameswaran, Co-founder and Senior Executive, Trace Genomics

“This planet can easily feed 10 billion people, but we need to collaborate across the food and agriculture system to get there,” stated Parameswaran in a press release. “Every stakeholder in food and agriculture – farmers, input manufacturers, retail enterprises, consumer packaged goods companies – needs science-backed soil intelligence to unlock the full potential of the last biological frontier, our living soil. Together, we can discover and implement new and improved agricultural practices and solutions that serve the dual purpose of feeding the planet while preserving our natural resources and positioning agriculture as a solution for climate change.”

Read the source articles in Business Insider, Hortidaily and emerj.

Read More

Posted on

Three Big Tech Players Back Out of Facial Recognition Market

MIT researcher Joy Buolamwini has found racial and gender bias in facial analysis tools that have a hard time recognizing certain faces, especially darker-skinned women. (STEVEN SENNE, ASSOCIATED PRESS)

By John P. Desmond, AI Trends Editor

In the span of 72 hours, both IBM and Amazon backed out of the facial recognition business this week.

It’s a chess match on the geopolitical playing board, with AI ethics and data bias in play.

IBM moved first, closely followed by Amazon.

 (And then two days later Microsoft announced its intention to also exit the market; see below.)

The moves came after demonstrations were held across both the US and the world, in response to police mistreatment of black Americans. Facial recognition software has been called out by privacy and AI ethics groups as having higher error rates for people of color.

New IBM CEO Arvind Krishna stated in a letter to Congress on June 8, “IBM firmly opposes and will not condone uses of any technology, including facial-recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.”

Amazon then announced on Wednesday that it is implementing a one-year moratorium on police use of its Rekognition technology, but it would still allow organizations focused on stopping human trafficking to continue to use the technology.

On its THINKPolicy Blog, IBM posted the full letter from CEO Krishna submitted to Congress. It states in part, “IBM no longer offers general purpose IBM facial recognition or analysis software. … We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Amazon’s blog release stated in part, “We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”

Amazon’s move was seen as “smart PR” in an account in Fast Company, which was skeptical that the 12-month moratorium would result in a significant change. Nicole Ozer, the technology and civil liberties director with the ACLU of Northern California, was quoted as stating, “This surveillance technology’s threat to our civil rights and civil liberties will not disappear in a year. Amazon must fully commit to a blanket moratorium on law enforcement use of face recognition until the dangers can be fully addressed, and it must press Congress and legislatures across the country to do the same.”

Nicole Ozer, technology and civil liberties director, ACLU of Northern California

Ozer went on to argue that Amazon “should also commit to stop selling surveillance systems like Ring that fuel the over-policing of communities of color.” She added “Face recognition technology gives governments the unprecedented power to spy on us wherever we go. It fuels police abuse. This surveillance technology must be stopped.”

A recent account from the Electronic Frontier Foundation warned consumers that video from Ring is in the Amazon cloud and is potentially accessible by Amazon employees and law enforcement agencies with agreements in place with Amazon.

Joy Buolamwini Led Research that Discovered Bias

A piece in AI Trends last year, Facial Recognition Software Facing Challenges, described the work of MIT researcher Joy Buolamwini, who today refers to herself as a “poet of code” and fighter for “algorithmic justice.”

The study from MIT Media Lab researchers in February 2018 found that tools from Microsoft, IBM and China-based Megvii (FACE++) had high error rates when identifying darker-skinned women compared to lighter-skinned men. Buolamwini got the attention of the technology giants, members of Congress and other AI scholars.

“There needs to be a choice,” stated Buolamwini in an account in the Denver Post. “Right now, what’s happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it’s almost too late.”

She caught some flak from the tech giants. Amazon challenged what it called Buolamwini’s “erroneous claims” and said the study confused facial analysis with facial recognition, improperly measuring the former with techniques for evaluating the latter.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” Matt Wood, general manager of artificial intelligence for Amazon’s cloud-computing division, wrote in a January 2019 blog post.

Buolamwini, who has founded a coalition of scholars, activists and others called the Algorithmic Justice League, has blended her scholarly investigations with activism. She has said a major message of her research is that AI systems need to be carefully reviewed and consistently monitored if they’re going to be used on the public. Not just to audit for accuracy, she said, but to ensure face recognition isn’t abused to violate privacy or cause other harms.

“We can’t just leave it to companies alone to do these kinds of checks,” she said.

There is no news of any change in China, where the government has embraced facial recognition and is expanding its use.

Read the source articles in Fortune, CNBC, TechCrunch, Fast Company and AI Trends.

Joy Buolamwini Comments for AI Trends

AI Trends reached out to Joy Buolamwini on these recent developments to get her reaction. She sent this response:

“With IBM’s decision and Amazon’s recent announcement, the efforts of so many civil liberties organizations, activists, shareholders, employees and researchers to end harmful use of facial recognition are gaining even more momentum. Given Amazon’s public dismissals of research showing racial and gender bias in their facial recognition and analysis systems, including research I coauthored with Deborah Raji, this is a welcomed though unexpected announcement.

“Microsoft also needs to take a stand. More importantly our lawmakers need to step up. We cannot rely on self-regulation or hope companies will choose to rein in harmful deployments of the technologies they develop. I reiterate a call for a federal moratorium on all government use of facial recognition technologies. The Algorithmic Justice League recently released a white paper calling for a federal office to set redlines and guidelines for this complex set of technologies which offers a pathway forward. The first step is to press pause, not just company wide, but nationwide.

“I also call on all companies that substantially profit from AI — including IBM, Amazon, Microsoft, Facebook, Google, and Apple — to commit at least 1 million dollars each towards advancing racial justice in the tech industry. The money should go directly as unrestricted gifts to support organizations like the Algorithmic Justice League, Black in AI, and Data for Black Lives that have been leading this work for years. Racial justice requires algorithmic justice.”

From Joy Buolamwini at the Algorithmic Justice League.

Update: Microsoft Also Exits the Market

Later on the same day that Buolamwini sent AI Trends these comments, Thursday, June 11, Microsoft President Brad Smith confirmed in a Washington Post live event that Microsoft would also exit the facial recognition market, according to an account in The Washington Post.

“We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology,” Smith stated, making clear the Microsoft decision is for now a moratorium.

Read More

Posted on

AI Being Employed to Predict Wildfires with Greater Accuracy

As wildfire risk has grown larger and more destructive in recent years, especially in the drought-plagued West, AI is being employed to assess wildfire risks more accurately. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

As wildfire risk has grown larger and more destructive in recent years, especially in the drought-plagued West, AI is being employed to assess wildfire risks more accurately and sound the alarm sooner after a fire breaks out.

For example, the Nature Conservancy is using controlled burning techniques in fire-prone forests in California’s Sierra Nevada. This summer, on a 28,000-acre plot near Lake Tahoe, the group plans to test an AI program designed to assess how well the thinning plan will prevent fires.

“Nothing is going to completely replace the human brain to make decisions, but AI can help us make better decisions across a much larger area,” stated Edward Smith, forest ecology and fire manager at the Nature Conservancy, based in Arlington, Va., in a recent account in The Wall Street Journal.

Edward Smith, forest ecology and fire manager at the Nature Conservancy, Arlington, Va.

The AI program will use satellite imagery of pre- and post-thinning work to make the assessments. This has been made possible by the availability of small satellites taking more photos of forests, and powerful computers needed to process the data.

Microsoft’s launch in 2017 of “AI for Earth,” which allowed the general public access to AI tools that can be used to process data from satellites as a cloud service, gave a boost to the work. Microsoft issues some grants to help defray the cost of the research for certain issues, including wildfire prevention.

“We’re flying blind now,” stated Lucas Joppa, chief environmental officer for Microsoft. “We have no idea what the state of our national forests are in the US, yet I can point you to a coffee shop in a millisecond.”

SilviaTerra LLC of San Francisco began using the Microsoft network about a year ago to build a map of the nation’s forests based on data from satellites, including the Landsat program run by the US Geological Survey and NASA. The firm hopes to map more than 400 million acres of forest for the project, called Basemap, to help guide thinning work.

Lucas Joppa, chief environmental officer for Microsoft

AI is helping the company do with a workforce of 10 what it would take thousands to do without it. “One person can measure maybe 20 acres in a day,” stated Zack Parisa, co-founder of SilviaTerra. “With AI, you can do a whole forest.”

Salo Sciences of San Francisco is developing a product incorporating AI to map areas of highest wildfire risk, based on an analysis of dead and dying trees. The firm is concentrating on California, where an estimated 150 million trees died during a five-year drought earlier in the decade.

“Some of the data that goes into state wildfire risk maps is 15 years old,” stated Dr. Dave Marvin, co-founder and CEO. He formed the company with Christopher Anderson, a graduate student from Stanford University. “We saw we needed to bring together a new framework of how you take satellite imagery and data and more rapidly inform conservation efforts,” stated Dr. Marvin.

In similar work, a team of experts in hydrology, remote sensing and environmental engineering has developed a deep-learning model that maps fuel moisture levels in fine detail across 12 western states, from Colorado, Montana, Texas, and Wyoming to the Pacific Coast, according to a recent press release from Stanford University.

The technique is described in the August 2020 issue of Remote Sensing of Environment. The senior author of the paper, Stanford University ecohydrologist Alexandra Konings, said the new dataset produced by the model could “massively improve fire studies.”

The paper’s lead author, Krishna Rao, a PhD student in Earth system science at Stanford, said the model needs more testing to be used in fire management decisions, but is revealing previously invisible patterns. Over time, it should help to “chart out candidate locations for prescribed burns,” Rao stated.

The Stanford model uses a recurrent neural network, an AI system that can recognize patterns in vast volumes of data. The scientists trained the model using data from the National Fuel Moisture Database, which has compiled some 200,000 measurements since the 1970s using a painstaking manual method. The researchers added measurements of visible light bouncing off Earth, and the return of microwave radar signals, which can penetrate through leafy branches all the way to the ground.
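The shape of that computation, a recurrent cell consuming a time series of optical and radar observations for a site and emitting a moisture estimate, can be sketched as follows. This is an illustrative toy with made-up fixed weights, not the Stanford team’s trained model; the feature names and sizes are assumptions.

```python
import numpy as np

# Toy recurrent network: fixed random weights stand in for a trained model.
rng = np.random.default_rng(0)
n_features, n_hidden = 3, 8          # e.g., [red, near-infrared, radar backscatter]
W_x = rng.normal(scale=0.3, size=(n_hidden, n_features))  # input weights
W_h = rng.normal(scale=0.3, size=(n_hidden, n_hidden))    # recurrent weights
w_out = rng.normal(scale=0.3, size=n_hidden)              # readout weights

def predict_fuel_moisture(series):
    """Run the recurrent cell over a (timesteps, n_features) observation series.

    The hidden state carries information forward in time, which is what
    lets the model relate a site's observation history to its current
    fuel moisture.
    """
    h = np.zeros(n_hidden)
    for x_t in series:
        h = np.tanh(W_x @ x_t + W_h @ h)   # recurrent state update
    return float(w_out @ h)                # scalar moisture estimate

# One year of hypothetical weekly observations for a single site.
series = rng.normal(size=(52, n_features))
print(predict_fuel_moisture(series))
```

In practice the model would be trained against ground-truth measurements such as the National Fuel Moisture Database samples mentioned above, rather than using fixed weights.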

“Now, we are in a position where we can go back and test what we’ve been assuming for so long – the link between weather and live fuel moisture – in different ecosystems of the western United States,” Rao stated.

“One of our big breakthroughs was to look at a newer set of satellites that are using much longer wavelengths, which allows the observations to be sensitive to water much deeper into the forest canopy and be directly representative of the fuel moisture content,” stated researcher Konings.

Read the source article in The Wall Street Journal and read the Stanford University press release.

Read More

Posted on

Maritime Shipping Industry Ripe for AI Disruption

The maritime shipping industry is ripe for disruption by AI, with startups positioning to help established shippers exploit the potential. (GETTY IMAGES)

By AI Trends Staff

The maritime shipping industry is ripe for disruption by AI, with startups positioning to help established shippers exploit the potential.

The industry naturally produces huge amounts of data, and opportunities exist at every step of the supply chain for stakeholders to use AI to augment their operation with positive effects. The convergence of AI and the Internet of Things (IoT) also offers the potential of a more connected intelligence.

This will include predictive analytics (what will happen), prescriptive analytics (what should we do), and adaptive analytics (how should the system adapt to the latest changes), according to an account from Pacific Green Technologies Group. Major cargo shipping companies including Kongsberg, Rolls Royce, Maersk and Wartsila all know that AI is poised to bring exponential change to the field.

Startup Sea Machines of Boston is currently testing its perception and situational awareness technology aboard one of Maersk’s newest Winter Palace ice-class container ships. Several other installations are scheduled.

While the technology would not make the ship autonomous, it is a step toward self-steering vessels. It employs technology similar to self-driving car features, collecting streams of data from a vessel’s environmental surroundings, identifying and tracking potential conflicts, and displaying the knowledge in the wheelhouse.

Sea Machines’ team includes experienced managers from marine construction, salvage, and offshore oil and gas, along with world-class automation engineers and autonomy scientists. The company has raised $12.3 million in funding so far, according to Crunchbase.

Orca AI of Tel Aviv offers a collision avoidance system used in marine navigation. The system applies AI to data provided by vision and other sensors. The company is committed to reducing human errors in maritime shipping through use of intelligent, automated vessels. The system helps the captain and navigation crew get an accurate view of the environment in real time, thus assisting in making decisions.

Major shipping companies are also involved in developing their own AI systems for navigation. Wartsila Guidance Marine, a unit of the shipping company Wartsila, in 2018 launched SceneScan, a system that uses laser position reference sensors to guide navigation in harbors. Tracking information is provided relative to structures within the sensor field of view. The system matches its current observation of the scene against a map generated from previous observations of the scene.
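The core idea of matching a current observation against a map built from previous observations can be illustrated with a naive nearest-neighbor score. This is a toy stand-in, not SceneScan’s proprietary algorithm: a low mean distance between observed points and stored map points indicates a good alignment hypothesis.

```python
import math

def match_score(scan, map_points):
    """Mean nearest-neighbor distance from scan points to the stored map.

    Lower scores mean the current observation agrees better with the
    map; a real scan matcher would search over candidate poses and keep
    the one minimizing a score like this.
    """
    total = 0.0
    for (sx, sy) in scan:
        total += min(math.hypot(sx - mx, sy - my) for (mx, my) in map_points)
    return total / len(scan)

# Hypothetical map of harbor structures from previous observations (2D points).
harbor_map = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]

aligned = [(0.1, 0.0), (1.0, 0.1), (2.0, 0.9)]   # scan close to the map
shifted = [(5.0, 5.0), (6.0, 5.0), (7.0, 5.0)]   # scan far from the map

print(match_score(aligned, harbor_map) < match_score(shifted, harbor_map))  # True
```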

Wartsila Guidance Marine successfully completed sea trials of the SceneScan system in April 2019 aboard the Topaz Citadel, a vessel owned by Topaz Energy and Marine, a leading international offshore support vessel company.

Disruption Likely to Include Job Loss in Marine Industry

Longer term, autonomous shipping is likely to bring disruption to the maritime industry, including significant job loss, suggests a recent report in Sea News.

“Autonomous shipping is the future of the maritime industry. As disruptive as the smartphone, the smart ship will revolutionize the landscape of ship design and operations,” stated Mikael Mäkinen, President, Marine at Rolls-Royce Plc.

Mikael Mäkinen, President, Marine at Rolls-Royce Plc.

The timeline for delivery on the promise of autonomous ships is stretched out. Estimates are that the first remote-controlled, unmanned coastal vessels will not be launched until 2025. Fully-autonomous unmanned coastal vessels are not expected until 2035, according to a report by Nautix of Copenhagen, a company offering marine fleet management software.

The three founders of Nautix started their careers at the Singapore Maritime Academy in 2003. In ensuing years, they gained experience in the maritime industry working as deck officers, engineers, superintendents and software innovation managers.

“We’ve felt the pain of our colleagues being let down by the sub-standard tools they’ve been provided. We want to change the status quo. We have the software expertise and the technical knowledge to make a difference,” states Tarang Valecha, co-founder and CEO of Nautix on the company’s website.

Tarang Valecha, co-founder and CEO of Nautix

Serious challenges remain, not just technical in nature. International guidelines and regulations regarding autonomous ships are not likely to be agreed upon within the next decade. The International Transport Workers Federation (ITF) has suggested remote control vessels will lack the skills, knowledge and experience of professional seafarers, so that in the event of an accident or incident requiring immediate attention, the autonomous vessel could be at risk.

The ITF and the International Federation of Shipmasters’ Associations (IFSMA) are very concerned about job loss. Today the industry employs an estimated 1.6 million people on ships and on land, who carry out 90 percent of world trade. More than 80 percent of seafarers surveyed by these two organizations have anxiety about possible job losses with the advent of AI and automation.

A study from Oxford University estimated that 47% of US jobs in the maritime industry, both low-skilled and high-skilled, could be lost over the next 20 years.

Read the source articles and studies from Pacific Green Technologies Group and in Sea News.

Read More

Posted on

Private Industry, Academia Stepping Up in AI Fight Against Covid-19

Innovative private enterprises and selected academics are employing AI in the fight against the spread of the Covid-19 virus, each hoping for the breakthrough idea. (GETTY IMAGES)
By John P. Desmond, AI Trends Editor
The worldwide Covid-19 spread is a tragic experience that is spurring innovators to try many approaches …

Read More

Posted on

Autonomous Freight Trains Powered by AI Coming

By AI Trends Staff
Driverless trains powered by AI are coming. Driverless train software produced by New York Air Brake was used in a demonstration last summer of a 30-car freight train traveling 48 miles at a research and testing facility owned by the Association of American Railroads, according to a …

Read More

Posted on

AI Employed to Model Progression, Research Drugs, Therapies to Fight Coronavirus

AI is being employed to fight the COVID-19 coronavirus on all fronts, including mapping progression of the virus and finding drugs and therapies to help in treatment. (GETTY IMAGES)
By John P. Desmond, AI Trends Editor
AI is being used on multiple fronts to combat the coronavirus (COVID-19), including for …

Read More

Posted on

Insurance Companies Using AI to Build Safety Systems, Optimize Rates 

Leading companies in the $500 billion/year auto insurance business are employing AI to gain competitive advantage, while startups are using AI to gain a foothold. Credit: Getty Images 

By AI Trends Staff 

Leading companies in the $500 billion/year auto insurance industry are studying which types of ML applications to try in pursuit of a business advantage, while startups are using AI to disrupt the industry.

Safety is a big focus, timely considering that motor-vehicle fatalities peaked at 40,200 in 2016, the highest number recorded in nearly a decade. The estimated healthcare costs to people injured in car crashes totaled over $80 billion. Insurance adjusters who assess auto damage earned an average salary over $63,000 in 2016, according to the Bureau of Labor Statistics.

A look at AI initiatives at four leading insurance companies was recently published in emerj. 

State Farm launched an online competition in 2016 to help develop a system using computer vision to identify distracted drivers. Some 1,440 participated; the company offered $650,000 total in three prize levels. State Farm provided a dataset of photos of drivers from dashboard cameras. The challenge was to classify the perceived behavior of each driver using categories including texting, talking on the phone and operating the radio.

Scores were compiled using a metric ranging from a minimum of zero to a maximum of one; the goal was a score as close to zero as possible, indicating higher accuracy. The winning entry achieved a score of 0.08739 using two neural network models and image classification based on regions of the image, including the bottom right quarter, where the driver’s hand is usually visible.
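The account describes the score only as a zero-to-one, lower-is-better number. One simple score with exactly that shape is the misclassification rate sketched below; the actual competition metric may have been more elaborate, and the behavior labels here are invented for illustration.

```python
# Hypothetical driver-behavior labels of the kind described in the article.
BEHAVIORS = ["safe_driving", "texting", "talking_on_phone", "operating_radio"]

def misclassification_rate(predicted, actual):
    """Fraction of driver images classified incorrectly.

    0.0 means every image was labeled correctly; 1.0 means every image
    was labeled wrong, so lower scores indicate higher accuracy.
    """
    wrong = sum(p != a for p, a in zip(predicted, actual))
    return wrong / len(actual)

actual    = ["texting", "safe_driving", "operating_radio", "safe_driving"]
predicted = ["texting", "safe_driving", "talking_on_phone", "safe_driving"]

print(misclassification_rate(predicted, actual))  # one of four wrong
```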

State Farm has launched a Drive Safe & Save program that provides discounts of up to 30% to motorists who enroll.  

Liberty Mutual in 2017 announced plans to develop apps with AI capability and products aimed at improving driver safety. The company established an innovation incubator, Solaria Labs, to develop an open API developer portal to help the effort along. The company is believed to be working on an app to help drivers involved in a car accident quickly assess the damage to their car using the smartphone camera. A database of thousands of car crash images will be referenced to generate a repair estimate. 

Liberty Mutual has also launched a $150 million venture capital initiative, Liberty Mutual Strategic Ventures (LMSV), to focus on innovative technology and services for the insurance industry. Among the companies receiving investment is Snapsheet, a smartphone application that enables users to receive auto repair bids from local body shops within 24 hours. The company uses AI and machine learning to support its data analysis.  

Many startups see an opportunity to disrupt the auto insurance business; a number were mentioned in a recent account in builtin.  

Insurify, for example, was founded in 2013 as a spinoff of the MIT $100k Pitch competition; its official site launched in 2016. The site enables insurance shoppers to take coverage needs into their own hands. Founder and CEO Snejina Zacharia has led the company to over $25 million in funding and secured $10 billion in insurance coverage to date. The company has achieved a 4.8 out of 5.0 possible rating on ShopperApproved, averaging responses from 2,500 reviews.

INSHUR is aimed at helping rideshare drivers using Uber or Lyft, and limousine drivers, to find competitive rates for auto insurance. Founded in 2016, the company is based in New York City, is backed by Munich Re Digital Partners, and launched in the UK in 2018. INSHUR has signed up over 40,000 drivers. The company supports liability and physical damage policies with minimum limits of insurance as required by the NYC Taxi & Limousine Commission (TLC) for limousines, which is also compatible with requirements for ride sharing services. 

Nauto aims to help commercial fleets avoid collisions using the AI-powered Driver Behavior Learning Platform to reduce driver distraction and prevent collisions. The system includes dual-facing cameras, computer vision and proprietary algorithms to assess how drivers are interacting with vehicles and the road, to pinpoint and prevent risky behavior in real time. Nauto has analyzed billions of data points from over 400 million video miles analyzed with AI; its machine learning algorithms are continuously improving. Commercial fleets using Nauto have avoided an estimated 25,000 collisions, resulting in an estimated $400 million in savings. 

Read the source articles in emerj and builtin. 

Source: AI Trends


10 Emerging Companies in AI

Here are 10 companies harnessing AI to make lives easier and more efficient. (GETTY IMAGES)

Contributed Commentary by Rahim Rasool, Data Science Dojo

The simplest way to think about Artificial Intelligence is in the context of a human: it describes systems that work intelligently and independently, perform autonomously in complex environments, and adapt to those environments by learning. From Siri to self-driving cars, AI has taken the world by storm and has the potential to disrupt nearly every sector one can think of, boosting efficiency in a competitive market. As part of our education program efforts, we prepared a list of 10 emerging AI companies that, in our judgment, are harnessing the true power of AI to make our lives easier and more efficient.


Nuro:

Dave Ferguson and Jiajun Zhu, former principal engineers at Google, founded Nuro in 2016. Nuro is focused on deliveries, specifically ones that are low-speed, local, and last-mile: groceries, laundry, or your food take-out order. Rather than reconfiguring existing autonomous vehicles, it aims to design a new type of vehicle altogether. Nuro launched its electric-powered vehicle, the R1, in 2018 and hopes the R1 will present retailers, both large and small, with an eco-friendly, on-demand delivery alternative whose cost notably does not include paying a driver.


Lemonade:

Who doesn’t like lemonade? Many of us set up stalls in our childhood days to sell it. The name is simple and easy to grasp, and so is this AI startup, which is changing how we handle insurance claims. Lemonade sells renters and homeowners insurance and is a licensed policy carrier itself. It uses a chatbot to collect customer information and work through claims—30% of which are resolved without human involvement. It has managed to attract first-time insurance buyers, and its total pool of customers has reached half a million. Lemonade takes a fixed fee from your premium, uses the remainder to pay claims, and gives unclaimed premiums to a charity designated by the user. The company has achieved a world-record three-second claim settlement.


Scale:

Scale was founded by 22-year-old Alex Wang, an MIT dropout. Scale is a data-labeling platform that takes a first pass at marking up pictures before handing them off to a network of some 30,000 contract workers, who then perform the finishing touches. Scale is primarily setting its sights on the autonomous vehicle market, which is expected to reach $77 billion by 2035. Companies use Scale to turn raw information into human-annotated training data: Scale’s workforce labels their text, audio, images, and videos to power the customers’ AI applications. According to Scale’s website, “Scale uses a combination of high-quality human task work, smart tools, statistical confidence checks, and machine learning to consistently return scalable, precise data.” With this human-in-the-loop strategy, Scale aims to ensure that the data it offers is accurate and free of bias. Some of its clients include big names like Uber, Waymo, and Airbnb.
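Scale’s internal pipeline is not public, but one standard ingredient of a human-in-the-loop labeling workflow is a statistical agreement check over redundant annotations. As a hedged sketch (the function name and the 0.7 threshold are illustrative, not Scale’s actual parameters), a majority-vote consensus with a confidence gate might look like this:

```python
from collections import Counter

def consensus_label(annotations, min_agreement=0.7):
    """Majority-vote consensus over labels from multiple annotators.

    Returns (label, agreement) when agreement meets the threshold,
    otherwise (None, agreement) to flag the item for expert review.
    """
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(annotations)
    if agreement >= min_agreement:
        return label, agreement
    return None, agreement  # route the item back to a human reviewer

# Three annotators agree: the label is accepted with agreement 1.0.
print(consensus_label(["car", "car", "car"]))   # ('car', 1.0)

# Disagreement (2/3 < 0.7): the item is flagged for another pass.
print(consensus_label(["car", "car", "truck"]))
```

Low-agreement items looping back to humans is what keeps the “machine first pass” cheap without letting its mistakes into the training set.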


Verkada:

Carving out space in the security market, Verkada makes cloud-connected security cameras equipped with AI-driven features like object and movement detection. As stated on its website, “Verkada brings the ease of use that consumer security cameras provide, to the levels of scale and protection that businesses require. By building high-end hardware on an intuitive, software platform, modern enterprises can strengthen the safety and productivity of their surveillance operations.” What does this mean? Verkada creates state-of-the-art security cameras that leverage modern software to keep your employees, customers, and business safe. The company’s wide-ranging list of clients includes fitness club Equinox, Vancouver Mall, and more than 500 school districts, which use the cameras for anything from monitoring student safety to tracking food deliveries.


PathAI:

It would be unfair not to include one of the leading companies in the medical field. PathAI aims to advance pathology with machine learning and deep learning techniques that predict the optimal treatment for a given patient based on all the data that can be extracted from that patient. PathAI wants to use AI to make pathologists more accurate in their diagnoses and to help guide the right treatment for diseases like cancer. Working with leading life science companies and researchers, it also aims to use AI to help pathologists better predict how patients will respond to therapy based on the characteristics of their tissue.

Standard Cognition:

We all have envisioned a future where one can walk into a retail store, pick up items, and simply walk out without interacting with any store employee, or without being observed by any human at all. Well, we are almost there. With a valuation of over half a billion dollars, Standard Cognition has opened a pop-up store in San Francisco where shoppers can experience this futuristic technology at work. The company uses overhead cameras to track individuals and items continuously, allowing shoppers to pick up items and check out without scanning, using Standard Cognition’s autonomous checkout system. The company says it works to anonymize customers’ data, so there is none of the product tracking that might chase you around the Internet, as on some other platforms.

Brain Corp.:

“Anything with wheels can be turned into a fully autonomous, self-driving robot using the BrainOS operating system, provided that the speeds are slow and stopping is never a safety concern…” said Eugene Izhikevich, Brain Corp.’s CEO. Brain Corp wants machinery to be run by robotic software. It started with floor-sweepers, considering them a good place to begin because they operate indoors in a controlled environment and address a market ripe for disruption. The CEO believes his company can be as central to the robotic revolution as Microsoft was to personal computers. Brain Corp has raised over $125 million in funding from organizations including Qualcomm and SoftBank.

Blue Hexagon:

Blue Hexagon offers a real-time threat-detection platform built on deep learning. The company claims it can detect and block threats in under a second. In testing by the independent malware lab PCSL, Blue Hexagon earned a perfect score in network threat protection, with 100% detection efficacy, a 0% false-positive rate, and a 125 ms average detection time. The platform addresses the limitations of perimeter defenses such as intrusion detection systems (IDS) and sandboxes, which cannot keep up with the daily onslaught of new malware variants. Its models continuously learn the patterns of attacker connections.
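The figures quoted above—detection efficacy, false-positive rate—are standard detector metrics. As a minimal sketch (not Blue Hexagon’s code; the function name is illustrative), here is how they are computed from a labeled test set, where 1 marks a malicious sample and the detector’s verdict uses the same encoding:

```python
def detection_metrics(y_true, y_pred):
    """Compute detection efficacy (true-positive rate) and false-positive
    rate from ground-truth labels and detector verdicts (1 = malicious)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    positives = sum(y_true)                 # actual malicious samples
    negatives = len(y_true) - positives     # actual benign samples
    return {
        "detection_rate": tp / positives if positives else 0.0,
        "false_positive_rate": fp / negatives if negatives else 0.0,
    }

# Two malicious and two benign samples; one benign sample is misflagged.
print(detection_metrics([1, 1, 0, 0], [1, 1, 0, 1]))
```

A perfect score like PCSL reports corresponds to a detection rate of 1.0 with a false-positive rate of 0.0—both must hold at once, since a detector that flags everything trivially achieves 100% detection.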


Aira:

The fast pace of AI development promises to positively impact many lives, especially in the area of assistive technology. Aira (Artificial Intelligence and remote assistance) is leveraging AI to assist the visually impaired. With big ambitions to soon launch computer-vision-based navigation, Aira helps people with vision problems better navigate the world. It connects real human agents, supported by an AI-powered agent, with users through its app or custom smart glasses. At grocery store chains like Wegmans, for instance, blind and low-vision shoppers can activate the Aira app to connect with professionals who help them move around the store, find what they want, and reach the shortest checkout lines.


ClimaCell:

Weather forecasting is among the areas AI promised to tackle, and ClimaCell is attempting to lead the way. Unlike most forecasting apps, ClimaCell aims to deliver highly accurate street-by-street, minute-by-minute forecasts. It collects data from connected cars, airplanes, drones, and IoT devices and combines it with other meteorological sources to create a more up-to-date and fine-grained view of the weather around you. According to the company, its correct forecast of the severity of a Chicago snowstorm the previous year allowed one airline to better manage its schedules and minimize losses stemming from delays and diversions.

Rahim Rasool is an Associate Data Scientist at Data Science Dojo, where he assists in creating educational content for its data science bootcamp. Rahim holds a bachelor’s degree in electrical engineering from the National University of Sciences and Technology. He has a keen interest in machine learning, astronomy, and history.

Source: AI Trends


Rometty Out as CEO of IBM; Foray into AI in Medicine with Watson Health a Legacy

The success of IBM Watson on the Jeopardy television game show helped to launch the company’s AI in medicine strategy. (WIRED)

By AI Trends Staff

Ginni Rometty has announced she is stepping down as CEO of IBM in April after an eight-year run. Her legacy at IBM is tied to the growth of AI as a business strategy, the success of Watson on Jeopardy, and the move into healthcare with IBM Watson Health and its subsequent retrenchment.

Rometty, 62, will be succeeded by Arvind Krishna, who has been head of IBM’s cloud and cognitive software division, reported The Wall Street Journal. Jim Whitehurst, the chief executive of Red Hat, the open source software company acquired by IBM last year for $33 billion, was appointed president of IBM.

It will be the first time in decades that IBM has had a dual leadership structure at the top. Rometty will continue as board chair through the end of the year, when she will retire after four decades at IBM. She is one of the highest-profile female executives in the technology business, whose leadership is dominated by men.

Ginni Rometty, CEO of IBM

IBM’s share price fell 25% during Rometty’s tenure, versus a 500% rise in Microsoft’s value and a 250% gain in the tech-heavy Nasdaq Composite Index over the same period.

“She made a lot of changes,” stated David Grossman, an analyst at Stifel Financial Corp. “You could argue that she didn’t make enough changes quickly enough, but I think the business has transformed during that period.”

The Red Hat acquisition’s outcome will be important to Rometty’s legacy.

Riding Watson’s Jeopardy Success into Health Care

The day after Watson defeated two human champions in the television game show Jeopardy!, IBM announced Watson was heading into the medical field. IBM would take its ability to understand natural language that it showed off on television, and apply it to medicine. The first commercial offerings would be available in 18 to 24 months, the company promised, according to an account in IEEE Spectrum.

Eight years later, IBM has announced many more efforts to develop AI-powered medical technology and spent billions on acquisitions to assist the effort, but the jury is out on the results.

“Reputationally, I think they’re in some trouble,” stated Robert Wachter, chair of the department of medicine at the University of California, San Francisco, and author of the 2015 book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age (McGraw-Hill).

IBM was the first company to make a major push to bring AI to medicine. The results on Jeopardy! and a posh lower Manhattan headquarters for the AI division, where prospects were wowed with fancy graphics on curved screens, gave the IBM AI salesforce a launching pad. “They came in with marketing first, product second, and got everybody excited,” stated Wachter.  “Then the rubber hit the road. This is an incredibly hard set of problems, and IBM, by being first out, has demonstrated that for everyone else.”

Rometty told an audience of health IT professionals at a 2017 conference that “AI is mainstream, it’s here and it can change almost everything about health care.” She like many saw the potential for AI to help transform the healthcare industry.

Watson had used advances in natural language processing to win at Jeopardy!, with the Watson team applying machine learning to a training dataset of Jeopardy! clues and responses. To enter the healthcare market, IBM tried using text recognition on medical records to build its knowledge base. That proved challenging: unstructured data such as doctors’ notes, full of jargon and shorthand, may account for 80% of a patient’s record.
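Watson’s internal pipeline is not public, but a toy illustration shows why clinical shorthand is hard for text recognition: even a first normalization pass needs a curated jargon dictionary, and real abbreviations are often ambiguous in ways a lookup table cannot resolve. (The dictionary below is invented for this example.)

```python
# Toy illustration: expanding clinical shorthand before further NLP.
# This abbreviation map is hypothetical; real systems need curated,
# context-aware medical vocabularies, since many abbreviations are
# ambiguous (e.g., "MS" can mean multiple sclerosis or mitral stenosis).
ABBREVIATIONS = {
    "pt": "patient",
    "hx": "history",
    "htn": "hypertension",
    "sob": "shortness of breath",
}

def expand_shorthand(note: str) -> str:
    """Replace known abbreviations with their expansions, word by word."""
    words = note.lower().replace(",", " ,").split()
    expanded = [ABBREVIATIONS.get(w, w) for w in words]
    return " ".join(expanded).replace(" ,", ",")

print(expand_shorthand("Pt hx of HTN, SOB on exertion"))
# -> patient history of hypertension, shortness of breath on exertion
```

A dictionary pass like this only scratches the surface; the hard part Watson faced is that shorthand varies by hospital, specialty, and individual physician.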

The effort was to build a diagnostic tool. IBM formed the Watson Health division in 2015. The unit made $4 billion worth of acquisitions. The search continued for the medical business case to justify the investments. Many projects were launched around decision support using large medical data sets. A focus on oncology to personalize cancer treatment for patients looked promising.

Physicians at the University of Texas MD Anderson Cancer Center in Houston worked with IBM to create a tool called Oncology Expert Advisor. MD Anderson got the tool to the test stage in its leukemia department; it never became a commercial product.

The project did not end well: it was cancelled, and a 2016 audit by the University of Texas found the cancer center had spent $62 million on it. The IEEE Spectrum authors said the project revealed “a fundamental mismatch between the promise of machine learning and the reality of medical care,” the gap between what the technology promised and what would actually be useful to today’s doctors.

Watson for Oncology continues to be developed and sold by IBM, and hospitals in India, South Korea, and Thailand have adopted it. A study in India found Watson’s treatment recommendations agreed with those of human physicians 73% of the time. While positive, that still leaves IBM in need of a use case showing Watson helped a patient or saved a hospital money. Those experienced in AI counsel that the system will get better as it learns over time.

“It’s a long haul, but it’s worth it,” stated Mark Kris, a lung specialist at Memorial Sloan Kettering Cancer Center in New York City; he has led the institution’s collaboration with IBM Watson since 2012.

Read the source articles in The Wall Street Journal and IEEE Spectrum.

Source: AI Trends