Russian surveillance tech startup NtechLab nets $13M from sovereign wealth funds

NtechLab, a startup that helps analyze footage captured by Moscow’s 100,000 surveillance cameras, just closed an investment of more than 1 billion rubles ($13 million) to further its global expansion.

The five-year-old company sells software that recognizes faces, silhouettes and actions in videos. It’s able to do so at vast scale and in real time, allowing clients to react promptly to situations. That capability is a key “differentiator” for the company, co-founder Artem Kukharenko told TechCrunch.

“There could be systems which can process, for example, 100 cameras. When there are a lot of cameras in a city, [these systems] connect 100 cameras from one part of the city, then disconnect them and connect another hundred cameras in another part of the city, so it’s not so interesting,” he suggested.

The latest round, financed by Russia’s sovereign wealth fund, the Russian Direct Investment Fund, and an undisclosed sovereign wealth fund from the Middle East, certainly carries more strategic than financial importance. The company broke even last year with revenue reaching $8 million, three times the figure from the previous year, and it expects to finish 2020 at a similar growth pace.

Nonetheless, the new round will enable the startup to develop new capabilities such as automatic detection of aggressive behavior and vehicle recognition as it seeks new customers in its key markets of the Middle East, Southeast Asia and Latin America. City contracts have been a major revenue driver for the firm, but it plans to woo non-government clients, such as those in entertainment, finance, trade and hospitality.

The company currently boasts clients in 30 cities across 15 countries in the Commonwealth of Independent States (CIS) bloc, Middle East, Latin America, Southeast Asia and Europe.

These customers may procure from a variety of hardware vendors featuring different graphics processing units (GPUs) to carry out computer vision tasks. As such, NtechLab needs to ensure it stays in tune with different GPU suppliers. Ten years ago, Nvidia was the go-to solution, recalled Kukharenko, but rivals such as Intel and Huawei have cropped up in recent times.

The Moscow-based startup began life as a consumer app that allowed users to find someone’s online profile by uploading a photo of the person. It later pivoted to video and has since attracted government clients keen to deploy facial recognition in law enforcement. For instance, during the COVID-19 pandemic, the Russian government has used NtechLab’s system to monitor large gatherings and implement access control.

Around the world, authorities have rushed to implement similar forms of public health monitoring and tracking for virus control. While these projects are usually well-meaning, they inspire a much-needed debate around privacy, discrimination, and other consequences brought by the scramble for large-scale data solutions. NtechLab’s view is that when used properly, video surveillance generally does more good than harm.

“If you can monitor people quite [effectively], you don’t need to close all people in the city… The problem is people who don’t respect the laws. When you can monitor these people and [impose] a penalty on them, you can control the situation better,” argued Alexander Kabakov, the other co-founder of the company.

As it expands globally, NtechLab inevitably comes across customers who misuse or abuse its algorithms. While it says it keeps all customer data private and has no control over how its software is used, the company strives to “create a process that can be in compliance with local laws,” said Kukharenko.

“We vet our partners so we can trust them, and we know that they will not use our technology for bad purposes.”


Google’s AI-powered flood alerts now cover all of India and parts of Bangladesh

India, the world’s second most populated nation, sees more than 20% of global flood-related fatalities each year as overrun riverbanks sweep away tens of thousands of homes. Two years ago, Google volunteered to help.

In 2018, the company began its flood forecasting pilot initiative in Patna — the capital of the Indian state of Bihar, which has historically been the most flood-prone region in the nation with over 100 fatalities each year — to provide accurate real-time flood forecasting information to people in the region.

The company’s AI model analyzes historical flood data gleaned from several river basins in different parts of the world to make accurate predictions for any river basin.

For this project, Google has not worked in isolation. Instead, it has collaborated with India’s Central Water Commission, Israel Institute of Technology, and Bar-Ilan University. It also works with the Indian government to improve how New Delhi collects data on water levels. They have installed new, electronic sensors that automatically transmit data to water authorities.

Two years on, and thrilled by the initial results, Google announced on Tuesday that its Flood Forecasting Initiative now covers all of India.

The company also said it has partnered with the Bangladesh Water Development Board to expand the initiative to parts of India’s neighboring nation, which sees more floods than any other country in the world. This is the first time Google is bringing the Flood Forecasting Initiative outside of India.

Alerts for flood forecasting

Part of the job is to deliver this potentially life-changing information to people. In India, the company said it has sent out more than 30 million notifications to date in flood-affected areas. It says its initiative can help better protect more than 200 million people across more than 250,000 square kilometers (96,525 square miles). In Bangladesh, Google’s model is able to cover more than 40 million people, and the company is working to extend this to the whole nation.

“We’re providing people with information about flood depth: when and how much flood waters are likely to rise. And in areas where we can produce depth maps throughout the floodplain, we’re sharing information about depth in the user’s village or area,” wrote Yossi Matias, VP of Engineering and Crisis Response Lead at Google.

But the company says it found there was room for improvement. This year, Google overhauled the way its alerts look and function to make them more accessible. It also added support for Hindi, Bengali and seven other languages, and further localized the messaging in the alerts. It has also rolled out a new forecasting model that doubles the warning time of many of its alerts.

Moving forward, the company said its charitable arm Google.org has started a collaboration with the International Federation of Red Cross and Red Crescent Societies to build local networks and deliver alerts to people who otherwise wouldn’t receive smartphone alerts directly.

“There’s much more work ahead to strengthen the systems that so many vulnerable people rely on—and expand them to reach more people in flood-affected areas. Along with our partners around the world, we will continue developing, maintaining and improving technologies and digital tools to help protect communities and save lives,” wrote Matias.


Adobe tests an AI recommendation tool for headlines and images

Team members at Adobe have built a new way to use artificial intelligence to automatically personalize a blog for different visitors.

This tool was built as part of the Adobe Sneaks program, where employees can create demos to show off new ideas, which are then showcased (virtually, this year) at the Adobe Summit. While the Sneaks start out as demos, Adobe Experience Cloud Senior Director Steve Hammond told me that 60% of Sneaks make it into a live product.

Hyman Chung, a senior product manager for Adobe Experience Cloud, said that this Sneak was designed for content creators and content marketers who are probably seeing more traffic during the coronavirus pandemic (Adobe says that in April, its own blog saw a 30% month-over-month increase), and who may be looking for ways to increase reader engagement while doing less work.

So in the demo, the Experience Cloud can go beyond simple A/B testing and personalization, leveraging the company’s AI technology Adobe Sensei to suggest different headlines, images (which can come from a publisher’s media library or Adobe Stock) and preview blurbs for different audiences.

Image Credits: Adobe

For example, Chung showed me a mocked-up blog for a tourism company, where a single post about traveling to Australia could be presented differently to thrill-seekers, frugal travelers, partygoers and others. Human writers and editors can still edit the previews for each audience segment, and they can also consult a Snippet Quality Score to see the details behind Sensei’s recommendation.

Hammond said the demo illustrates Adobe’s general approach to AI, which is more about applying automation to specific use cases rather than trying to build a broad platform. He also noted that the AI isn’t changing the content itself — just the way the content is promoted on the main site.

“This is leveraging the creativity you’ve got and matching it with content,” he said. “You can streamline and adapt the content to different audiences without changing the content itself.”

From a privacy perspective, Hammond noted that these audience personas are usually based on information that visitors have opted to share with a brand or website.


Disney Research neural face swapping technique can provide photorealistic, high resolution video

A new paper published by Disney Research in partnership with ETH Zurich describes a fully automated, neural network-based method for swapping faces in photos and videos – the first such method to produce megapixel-resolution final results, according to the researchers. That could make it suited for use in film and TV, where high-resolution results are key to ensuring the final product is good enough to reliably convince viewers of its reality.

The researchers specifically intend this tech for use in replacing an existing actor’s performance with a substitute actor’s face, for instance when de-aging or increasing the age of someone, or potentially when portraying an actor who has passed away. They also suggest it could be used for replacing the faces of stunt doubles in cases where the conditions of a scene call for them to be used.

This new method differs from other approaches in a number of ways. One is that any face in the source set can be swapped with any recorded performance, making it possible to re-image actors on demand with relative ease. Another is that it handles contrast and lighting conditions in a compositing step to ensure the actor looks like they were actually present in the same conditions as the scene.

You can check out the results for yourself in the video below (as the researchers point out, the effect is actually much better in moving video than in still images). There’s still a hint of the ‘uncanny valley’ effect going on here, but the researchers acknowledge as much, calling this “a major step toward photo-realistic face swapping that can successfully bridge the uncanny valley” in their paper. Basically, it’s a lot less nightmare fuel than other attempts I’ve seen, especially once you’ve seen the side-by-side comparisons with other techniques in the sample video. And, most notably, it works at much higher resolution, which is key for actual entertainment industry use.

[embedded content]

The examples presented are a very small sample, so it remains to be seen how broadly this can be applied. The subjects used appear to be primarily white, for instance. Also, there’s always the question of the ethical implications of any use of face-swapping technology, especially in video, since it could be used to fabricate credible video or photographic ‘evidence’ of something that didn’t actually happen.

Given, however, that the technology is now in development from multiple quarters, it’s essentially long past the time for debate about the ethics of its development and exploration. Instead, it’s welcome that organizations like Disney Research are following the academic path and sharing the results of their work, so that others concerned about its potential malicious use can determine ways to flag, identify and protect against any bad actors.


Biased AI perpetuates racial injustice

The murder of George Floyd was shocking, but we know that his death was not unique. Too many Black lives have been stolen from their families and communities as a result of historical racism. There are deep and numerous threads woven into racial injustice that plague our country that have come to a head following the recent murders of George Floyd, Ahmaud Arbery and Breonna Taylor.

Just as important as the process underway to admit to and understand the origin of racial discrimination will be our collective determination to forge a more equitable and inclusive path forward. As we commit to address this intolerable and untenable reality, our discussions must include the role of artificial intelligence (AI). While racism has permeated our history, AI now plays a role in creating, exacerbating and hiding these disparities behind the facade of a seemingly neutral, scientific machine. In reality, AI is a mirror that reflects and magnifies the bias in our society.

I had the privilege of working with Deputy Attorney General Sally Yates to introduce implicit bias training to federal law enforcement at the Department of Justice, which I found to be as educational for those working on the curriculum as it was for those participating. Implicit bias is a fact of humanity that both facilitates (e.g., knowing it’s safe to cross the street) and impedes (e.g., false initial impressions based on race or gender) our activities. This phenomenon is now playing out at scale with AI.

As we have learned, law enforcement activities such as predictive policing have too often targeted communities of color, resulting in a disproportionate number of arrests of persons of color. These arrests are logged as data points, aggregated into larger data sets and, in recent years, used to create AI systems. This process creates a feedback loop: predictive policing algorithms concentrate patrols in certain neighborhoods, so officers observe crime only where they are already looking, which skews the data and thus future recommendations. Likewise, arrests made during the current protests will become data points in future data sets used to build AI systems.
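
This feedback loop can be illustrated with a toy simulation (all numbers below are hypothetical, chosen only to show the mechanism): two neighborhoods with identical true crime rates, but patrols allocated according to past recorded arrests. Because arrests are only recorded where police patrol, the initial skew in the data never corrects itself.

```python
# Toy model of the predictive-policing feedback loop.
# All figures are hypothetical; the point is the mechanism.

TRUE_CRIME_RATE = [0.05, 0.05]   # both neighborhoods are identical
patrol_share = [0.60, 0.40]      # initial skew from historical data
POPULATION = 10_000

for year in range(25):
    # Police only observe crime where they patrol, so recorded
    # arrests scale with patrol presence, not with true crime.
    arrests = [patrol_share[i] * TRUE_CRIME_RATE[i] * POPULATION
               for i in range(2)]
    # The "predictive" step: next year's patrols follow the data.
    total = sum(arrests)
    patrol_share = [a / total for a in arrests]

# After 25 iterations the skew has not corrected itself: the data
# keeps "confirming" that neighborhood 0 has more crime, even though
# the true rates are identical.
print(patrol_share)  # -> approximately [0.6, 0.4]
```

Even though both neighborhoods have the same true crime rate, the recorded data perpetually shows a 60/40 split, and the algorithm treats its own patrol allocation as evidence.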

This feedback loop of bias within AI plays out throughout the criminal justice system and our society at large, such as determining how long to sentence a defendant, whether to approve an application for a home loan or whether to schedule an interview with a job candidate. In short, many AI programs are built on and propagate bias in decisions that will determine an individual and their family’s financial security and opportunities, or lack thereof — often without the user even knowing their role in perpetuating bias.

This dangerous and unjust loop did not create all of the racial disparities under protest, but it reinforced and normalized them under the protected cover of a black box.

This is all happening against the backdrop of a historic pandemic, which is disproportionately impacting persons of color. Not only have communities of color been most at risk to contract COVID-19, they have been most likely to lose jobs and economic security at a time when unemployment rates have skyrocketed. Biased AI is further compounding the discrimination in this realm as well.

This issue has solutions: diversity of ideas and experience in the creation of AI. However, despite years of promises to increase diversity — particularly in gender and race, from those in tech who seem able to remedy other intractable issues (from putting computers in our pockets and connecting with machines outside the earth to directing our movements over GPS) — recently released reports show that at Google and Microsoft, the share of technical employees who are Black or Latinx rose by less than a percentage point since 2014. The share of Black technical workers at Apple has not changed from 6%, which is at least reported, as opposed to Amazon, which does not report tech workforce demographics.

In the meantime, ethics should be part of computer science-related education and employment in the tech space. AI teams should be trained on anti-discrimination laws and implicit bias, with emphasis on the negative impacts on protected classes and the real human costs of getting this wrong. Companies need to do better at incorporating diverse perspectives into the creation of their AI, and they need the government to be a partner, establishing clear expectations and guardrails.

There have been bills to ensure oversight and accountability for biased data, and the FTC recently issued thoughtful guidance holding companies responsible for understanding the data underlying AI, as well as its implications, and for providing consumers with transparent and explainable outcomes. And in light of the crucial role that federal support is playing and our accelerated use of AI, one of the most important solutions is to require recipients of federal relief funding who employ AI technologies for critical uses to provide assurance of compliance with existing laws. Such an effort was started recently by several members of Congress to safeguard protected persons and classes — and should be enacted.

We all must do our part to end the cycles of bias and discrimination. We owe it to those whose lives have been taken or altered due to racism to look within ourselves, our communities and our organizations to ensure change. As we increasingly rely on AI, we must be vigilant to ensure these programs are helping to solve problems of racial injustice, rather than perpetuate and magnify them.


Shielding Frontline Health Workers with AI

Illustration: © IoT For All

We are living through an unprecedented crisis. During the COVID-19 pandemic, healthcare workers have emerged as frontline heroes, working overtime to protect our communities from the spread of novel coronavirus. But they aren’t immune to the anxious, uncertain atmosphere the pandemic has fostered nor, indeed, the coronavirus itself.

We need to protect the first responders and hospital staff who put their wellbeing on the line to support their communities during a crisis. To my mind, that means using every tool at our disposal to the fullest — with AI chief among those at hand.

Creative Solution

There’s little doubt that the current situation demands a creative solution. The United States has become the center of the global pandemic; as of April 16th, the US had confirmed 644,188 cases and endured 28,579 deaths. Despite efforts to flatten the curve through regional shutdowns and stay-at-home orders, hospitals across the country have been all but overwhelmed by incoming cases. The impact on provider morale has, according to reporting from NPR, been similarly problematic.

“Nearly a month into the declared pandemic, some health care workers say they’re exhausted and burning out from the stress of treating a stream of critically ill patients in an increasingly overstretched health care system,” NPR reporters Will Stone and Leila Fadel recently wrote. “Many are questioning how long they can risk their own health […] In many hospitals, the pandemic has transformed emergency rooms and upended protocols and precautions that workers previously took for granted.”

Hospitals are doing all they can to keep their caregivers safe and protected, but their resources are stretched far too thin. According to reports, some hospitals in high-infection areas like New York City can only afford to give healthcare workers one N95 mask every five days. Used masks are collected, disinfected, and returned on a cycle between uses. But some frontline workers worry that, given the highly contagious nature of the disease, they may not be adequately protected.

“It can be disheartening to have that feeling of uncertainty that you are not going to be protected,” Sophia Rago, an ER nurse based in St. Louis, told reporters for NPR.

We need to shield our frontline workers as much as possible. The obvious solution would be to increase stores of personal protective equipment (PPE) and N95 masks; however, given that we face a national shortfall and harsh state-to-state bidding wars over the gear, that fix seems unlikely. What we can do to at least lessen the risk of patient-to-provider transmission is to invest in AI-powered solutions that can automate some healthcare protocols and limit the need for close contact.

“Traditional processes — those that rely on people to function in the critical path of signal processing — are constrained by the rate at which we can train, organize, and deploy human labor. Moreover, traditional processes deliver decreasing returns as they scale,” a team of digital health researchers recently wrote in an article for the Harvard Business Review.

“Digital systems can be scaled up without such constraints, at virtually infinite rates. The only theoretical bottlenecks are computing power and storage capacity — and we have plenty of both. Digital systems can keep pace with exponential growth.”

These AI-powered, digitally-facilitated solutions generally fall into two broad categories: disease containment and patient management.

Assessing AI’s Ability to Limit Disease Transmission

When it comes to limiting disease spread, the aim is to use AI tools to allocate human resources better while still protecting patients and staff. Take the screening system that was recently deployed at Tampa General Hospital in Florida, for example. This AI framework was designed by the autonomous care startup Care.ai and intended to facilitate early identification and interception of infected people before they come into contact with others. According to a report from the Wall Street Journal, the Care.ai tool taps into entryway cameras and conducts a facial thermal scan. If the system flags any feverish symptoms such as sweat or discoloration, it can notify healthcare staff and prompt immediate intervention.

Other technology companies––Microsoft, for one––have rolled out similar remote diagnostic and alert tools in facilities across the globe. Their unique capabilities vary, but their purposes are the same: to prevent the spread of infection and provide support to overworked personnel.

As representatives for Microsoft shared in a recent press release, “[AI technology] not only improves the efficiency of epidemic prevention, but it also reduces the work burden of frontline personnel so that limited human resources can be used more effectively.”

In these resource-strapped times, the aid is undoubtedly needed.

AI’s Applications for Diagnostics and Patient Management

Fighting a pandemic is a task that requires speed. Now more than ever, providers must be able to accurately and quickly identify infected patients so that they can trace and hopefully contain the viral spread. But doing so isn’t an easy order.

To borrow a quote from Forbes contributor Wendy Singer, “Analyzing test results nowadays requires skilled technicians and a lot of precious time, as much as a few days. But in our current reality, healthcare systems need to analyze thousands of results instantly, and to expose as few lab workers as possible to the virus.”

We don’t have that kind of time––and we can’t put our lab workers at undue risk. Thankfully, cutting-edge AI technologies may provide a solution. With AI, hospitals can automate some steps of the testing process, cutting down on the time and effort needed to process test results. These capabilities aren’t just hypothetical; in the weeks since the start of the pandemic, the health tech startup Diagnostics.ai has provided laboratories in the US and UK with a diagnostic tool that streamlines the testing process by automating DNA analysis.

However, the applications of AI diagnostics aren’t limited to testing alone. Some have also used artificial intelligence to support population management in overstretched hospitals. One Israeli medical-device developer, EarlySense, recently developed an AI-powered sensor that can identify which patients will most likely face complications like sepsis and respiratory failure within six to eight hours. This can give a hospital the information it needs to best allocate limited resources and staff attention.

No AI innovation — no matter how brilliant or helpful — will fix our resource shortfall. There is no question that healthcare providers need more PPE and support, or that they need it immediately. However, the benefits that AI brings to screening and patient-management efforts are evident. It seems reasonable that we at least consider the weight the deployment of such tools could lift from our exhausted front-liners’ shoulders.


Innovating during COVID-19: A Story of Collaboration

Connected World’s Peggy Smedley recently sat down for a webcast with Eddy Van Steyvoort, VP of the automotive and on-road business line at IGW/VCST (part of BMT Group); Kevin Wrenn, EVP of products at PTC; and Filip Bossuyt, CEO of Ad Ultima, for a discussion about innovating in a time of COVID-19: a story of collaboration.

Van Steyvoort shares that the smart factory project, which started in 2017, began in silos, and the company quickly realized it needed to think in an end-to-end scenario. He says it recognized it had to change its systems, its organization, and its way of thinking toward a more end-to-end focus to improve efficiency, reliability, quality, and the way it supports customers. The question became how to change, and which tools to use. It decided to bring in PTC and Ad Ultima for support.

“PTC’s PLM Software was known already in the BMT Group and that was a very, very, very strong asset and also a very strong signal from the beginning that we had already the relation, which was already there,” Van Steyvoort says. “We could build on that relation. That was the reason why we established a total plan as partners, and not let’s say as a customer supplier, but as partners,” he adds.

Then the COVID-19 coronavirus pandemic hit. Van Steyvoort says the automotive industry has been shaken by the coronavirus, but the company didn’t want to stall the project’s strong momentum and decided not to change its long-term strategy.

He says the company now knows what AR (augmented reality) is and what it can bring during COVID-19, explaining that it can support people locally from a global perspective and show them how to do things. One of the lessons learned during this time is that the company needs to invest even more in augmented reality tools.

Ad Ultima’s Bossuyt adds it is helping VCST to think end-to-end and to realize its digital transformation. “Becoming digital is a challenge today because you have to do it end-to-end. You cannot do it for only a part of your business.”

Adding to the conversation, PTC’s Wrenn says PTC can help with openness. “We are open on multiple dimensions. Our technology is open. It enables people to do digital transformation, as Eddy was talking about, connections all the way from engineering, all the way to the factory floor, and even out to their customers. We are also open from a partnership standpoint. Ad Ultima is a really important partner of PTC’s and likewise of VCST. So we are used to working in these environments both from a technology standpoint and a partnership standpoint.”

When the COVID-19 pandemic first hit, PTC’s first response was to reach out to its customers and partners to make sure they could work from home. Wrenn says the technology is built for remote work, so users don’t have to be physically on site to operate it. “It was much more important for us to figure out how our customers could create business continuity, and at the same time we were doing it for ourselves.”

In all of this, each individual learned something very important. Van Steyvoort says it is important to create a very strong sense of urgency from the very start and to keep communicating throughout the whole organization that this is a future-based strategy. “Instead of focusing on the change, focus on the alternative of doing nothing, because doing nothing means you will lose the game.” Also, don’t be afraid to express hopes and fears.

Ad Ultima’s Bossuyt notes the most important thing is the power of the network: working together with different partners where there is a lot of trust and all the stakeholders are aligned has created very good results. PTC’s Wrenn adds that in the new normal after COVID-19, people will think differently about the kinds of projects they pursue, because digitalization is going to be a requirement.

Going forward, the next steps for VCST are to link the CAD (computer-aided design) information to the PLM (product lifecycle management) system, feed it through visualization in ThingWorx, and make the whole picture a completely integrated solution for the future. As Van Steyvoort says, “The sky is the limit. The technology is not the limit anymore.”


Smart Home: On the Rise?

Amid the pandemic, many are wondering if the use of technology is going to continue to rise. In many instances, the answer is yes. Such is the case with smart homes.

A new report points to the importance of incorporating smart-home technology. LexisNexis Risk Solutions released an insurance claims study revealing that in-line water shutoff systems correlate with a 96% decrease in water claims events.

The study measured the changes in the number and severity of water-related home insurance claims with the Flo by Moen Smart Water Shutoff device against an uninstalled control group of homes in the same geolocation one year before and after installation.

Here is what it found: in the two years prior to installation, the 2,306 Flo homes studied had an average claims severity far greater than that of the control group. The study also found a corresponding 72% decrease in claims severity one year after installation of the device, indicating that smart water shutoff systems are working.

The key takeaway here is that water leak mitigation and the time and money saved could help drive adoption of these smart home devices, ultimately reducing loss costs, improving the customer experience, and more.

This is in line with other reports that the smart homes market, in general, is on the rise. Mordor Intelligence says the market was valued at $64.6 billion in 2019 and is expected to reach $246.42 billion by 2025, a forecasted 25% growth rate, even amid a pandemic. The research shows there is a greater need for security and wireless controls. Further advancements in the IoT (Internet of Things) have resulted in price drops of sensors and processors, which are expected to fuel automation in the home.
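
As a quick arithmetic check, using only the figures cited above, growth from $64.6 billion in 2019 to a projected $246.42 billion in 2025 does indeed imply a compound annual growth rate of about 25%:

```python
# Implied compound annual growth rate (CAGR) for the smart-home
# market: $64.6B in 2019 growing to a projected $246.42B in 2025.
start_value = 64.6    # billions of dollars, 2019
end_value = 246.42    # billions of dollars, 2025 (projected)
years = 2025 - 2019   # 6-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 25.0%
```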

While there is much to consider when it comes to smart-home technologies, research points to a continued rise in the years to come.


AI’s Man Behind the Curtain

As the world grows increasingly connected, concern regarding the influence of artificial intelligence (AI) has been bubbling to the surface, shaping perceptions among industries big and small along with the general populace. Spurred on by sensationalized media predictions of AI taking over human decision-making and silver-screen tales of robot revolutions, there is a fear of allowing AI or its cousin, the Internet of Things (IoT), into our lives. Here is AI’s man behind the curtain.

One of the biggest sticking points is the popular – yet mistaken – notion that AI will cost people their jobs. In truth, the situation is just the opposite. The real future of AI isn’t one where people are replaced, but where man and machine work in tandem to cover one another’s weaknesses. AI isn’t a job thief – it’s a job creator.

How do we know? Simple – it’s all happened many times before. Technology hype cycles have traced very similar lines in the past, from industrialization to the internet. In each case, many people were certain the new technology would put people out of work. Of course, this never came to pass; technology always ended up creating a net gain in employment.

Learning from History

Consider the automated teller machine, or ATM. The fear lay in the name – people worried that the introduction of these devices would render human tellers obsolete. However, the reality turned out to be the exact opposite: the streamlined service of ATMs allowed banks to open more branches than ever before, which meant – you guessed it – far more tellers were employed than before ATMs were introduced.

John Hawksworth, the chief economist at PwC, said in a 2018 analysis that AI and robots, much like the inventions of the steam engine and computers before them, will displace jobs yet simultaneously generate large productivity gains. This spike in productivity brings prices down, raises real income, and thus creates demand for more workers. The firm predicts that in a post-AI world, some sectors will see job creation soar by as much as 20 percent.

To take this analysis further, the World Economic Forum predicted in a 2018 study that the complete integration of artificial intelligence would displace 75 million jobs – but, critically, would also result in the creation of 133 million new jobs. The net gain of 58 million jobs is clear, just as it has been with many other technological innovations throughout history.

Also, it should be noted that some business problems will always necessitate the human touch, as even the most advanced AI or the most well-connected IoT device will come up short.

People and Machines in Tandem

The “conversational guidance” software now being rolled out to call centers around the U.S. is a good example of what AI’s future could look like. These programs use speech recognition AI to measure cues from both sides of a phone call, advising representatives on how to maximize customer satisfaction.

Speech recognition accomplishes this feat by indicating whether the rep is talking too slowly or too quickly, taking too harsh a tone, sounding tired or bored, and so on. It can also pick up on whether a customer is getting frustrated and guide the rep to respond with empathy. Some firms incorporating the technology have already reported increases in customer satisfaction rates as high as 13 percent.
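One of these cues, speaking rate, is simple to compute once a speech recognizer has produced timestamped transcript segments. The sketch below is illustrative only – it is not any vendor’s actual API, and the 120–180 words-per-minute comfort band is an assumed threshold, not an industry standard:

```python
# Illustrative sketch: flagging a call-center rep's speaking rate from a
# timestamped transcript segment, the kind of cue "conversational guidance"
# software surfaces in real time. Thresholds are assumptions.

def speaking_rate_wpm(word_count: int, duration_seconds: float) -> float:
    """Words per minute over a transcript segment."""
    return word_count / (duration_seconds / 60.0)

def rate_advice(wpm: float, low: float = 120.0, high: float = 180.0) -> str:
    """Map a measured rate to the guidance shown to the rep."""
    if wpm < low:
        return "speaking too slowly"
    if wpm > high:
        return "speaking too quickly"
    return "pace OK"

# A 30-second segment containing 110 words works out to 220 WPM.
wpm = speaking_rate_wpm(110, 30.0)
print(rate_advice(wpm))  # -> "speaking too quickly"
```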

AI can be seen playing a similar supportive role in areas such as design – from industrial 3D products to graphics and website user experience. Utilizing big data and speedy calculations, AI can advise on subtle design elements that will make customers more likely to engage with a product, platform, or service. For example, it may determine that a web designer would be better served to place a button on the top right of a page, rather than the bottom left, since doing so leads to higher engagement and conversion rates.

It’s not all hard data and calculations though – there’s plenty of room for fun and creativity in this new space, too. Few likely know this better than the founders of “AI Cocktails,” a creative AI toolkit that uses neural networks to create inspirational new drinks born of AI and human collaboration.

Using hundreds of traditional cocktail recipes as its base knowledge, the AI outputs a list of wild combinations that – with a little tweaking from a human bartender – can surprise and delight the taste buds. Who would have guessed that rum, wine, and vanilla ice cream would make such a scrumptious combo?

As we’re seeing now, something truly special emerges when human ingenuity combines with statistical insights that AI can provide us. If we continue to engage and integrate, a slew of longstanding societal challenges – energy waste, traffic, disease, and much more – could be tackled by this newly formed dynamic duo of man and machine.

The notion that technology is a looming threat to labor fails to consider its role as a job creator. Already, businesses are beginning to incorporate AI and IoT to make their services and products more innovative, efficient, profitable, and safe.

The future of AI is in fact already here, and those who have moved past their misconceptions and misgivings to embrace it will reap massive benefits.

Image credit: Pexels

Barry Po

A veteran of both startups and enterprise business, Barry has led global product teams operating in over 80 countries and has held leadership roles at some of the world’s most valuable brands. Prior to Universal mCloud, he was head of product, marketing, and business development at NGRAIN, where he played a key role in taking the business to a successful exit. Barry graduated from the University of British Columbia with a Ph.D. in Computer Science in 2005.
His accomplishments have been recognized through numerous awards. He held an NSERC Industrial R&D Fellowship, is a two-time winner of the annual Communications Award from the B.C. Advanced Systems Institute (now the B.C. Innovation Council), and was a nominee for the Governor-General’s Gold Medal. In the press, he has appeared in Popular Mechanics, Inc Magazine, and Singularity Hub.
Barry is active in the Vancouver high-tech community as a guest speaker, a mentor to budding entrepreneurs and innovators, and a member of the Dean’s Advisory Board in the Faculty of Communications, Art, and Technology at Simon Fraser University.


AI-Driven Video Analytics for the Retail Industry

Illustration: © IoT For All

Artificial intelligence (AI) is closely tied to data science, which aims to extract business value from arrays of information. That value can include better forecasting, knowledge of regularities, informed decision-making, cost reduction, and more. In other words, artificial intelligence operates on massive arrays of information, analyzes incoming data, and develops adaptive solutions based on it.

In the modern world, the retail industry is rapidly expanding its application of artificial intelligence across all possible work processes. Applying analytics can undoubtedly improve a wide range of operations in the grocery industry, and with AI, the largest supermarket chains are achieving very ambitious aims:

  • improving and expanding customer service capabilities,
  • automating supply chain planning and orders delivery,
  • reducing product waste,
  • sharpening the management of out-of-stock and over-stock (grocery stock out), and
  • enhancing demand forecasting. 

The AI solution ecosystem is extensive and able to satisfy the needs of most grocery retailers, from large chains to the smallest businesses. During the current quarantine, grocery analytics has become a real “savior” for managing stock-out conditions. With intelligent data-driven approaches, supermarkets can process large amounts of information, accurately forecast consumer demand and inventory supply, and generate accurate pricing and purchasing recommendations. As a result, grocery retailers can not only stay afloat but continue to generate profits even through the most critical situations, such as the coronavirus pandemic. Evidently, all companies now require an immediate action plan in response to COVID-19.

A New Level of Video Surveillance

As a rule, most grocery stores have a continuous video surveillance system. Previously, such systems were installed only for security purposes: controlling the safety of products and preventing theft. But now, artificial intelligence video analysis is able to monitor the behavior of customers from the moment they enter the store until payment. How does it work, and why do stores need it?

Large grocery chains like Amazon and Walmart use high-tech cameras that perform automatic object identification, sometimes paired with RFID tags. Similar computer-vision systems are used in autonomous vehicles to monitor passenger behavior and process visual information via a computer. In a grocery store, though, the primary goal of video analytics is to determine which items are in high demand, which products buyers most often return to the shelves, and so on. Moreover, cameras recognize faces and estimate heights, weights, ages, and other physical characteristics of customers. Based on all the obtained data, the AI then identifies the most popular products for specific consumer groups and offers options for changing the pricing policy. A computer automates all of these processes without human intervention.

Preventing Grocery Stock-out and Shrinkage

Artificial intelligence in the retail industry is capable of solving problems that people cannot cope with. Experts note that no one can physically review all the surveillance footage – there is not enough time, and human vision is far from perfect. Video analytics for grocery stores handles such tasks well. For example, connecting cameras to the store’s automated warehouse system and equipping shelves with sensors can uncover gaps in inventory records and trigger investigations. Grocery store data analytics can also monitor stock levels and signal when replenishment is needed. Facial recognition technology, as described above, is capable of comparing the faces of people against lists of criminals (or wanted individuals) and warning security.
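The reconciliation step described above can be sketched in a few lines. This is a hedged illustration, not a real store system: the SKU names, counts, and tolerance threshold are all hypothetical, standing in for camera-derived shelf counts and the warehouse system’s records:

```python
# Sketch: compare camera-observed shelf counts with inventory records and
# flag large gaps for replenishment or shrinkage investigation.

def flag_discrepancies(shelf_counts: dict, inventory_records: dict,
                       tolerance: int = 2) -> list:
    """Return (sku, recorded, observed) for products whose observed count
    diverges from the recorded count by more than `tolerance` units."""
    flagged = []
    for sku, recorded in inventory_records.items():
        observed = shelf_counts.get(sku, 0)
        if abs(recorded - observed) > tolerance:
            flagged.append((sku, recorded, observed))
    return flagged

shelf = {"milk-1l": 3, "bread-white": 0}      # from video analytics
records = {"milk-1l": 4, "bread-white": 12}   # from the warehouse system
print(flag_discrepancies(shelf, records))     # bread-white: 12 recorded, 0 seen
```

In practice the same check would run continuously, so an empty shelf is flagged minutes after it happens rather than at the next manual stocktake.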

Advancing Traffic Flows and Store Layout

Data collected about customer behavior helps supermarket managers optimize store layout. Moreover, the software can design the most “optimal” layout and test it, generating a better overall customer experience and an increase in the store’s monthly profit.

Data can be collected on the number of people who enter a store and the amount of time they spend shopping. Based on this data, artificial intelligence can predict crowd sizes and how long people wait in line, which helps improve customer service and reduce staff costs during “calm” hours. In other words, AI can draw up optimal store management plans for various hours of the day with maximum benefit for the business. For example, it can:

  • develop traffic flows
  • optimize display placement and floor planning
  • improve strategic staff distribution
  • draw correlations between dwell time and purchasing
  • predict products for individual shopping groups
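The queue-length prediction mentioned above reduces, in its simplest form, to a throughput calculation in the spirit of Little’s law (L = λW). The sketch below uses assumed figures for lane count and service rate; it is a minimal illustration, not a production staffing model:

```python
# Minimal sketch: expected checkout wait as queue length divided by total
# service rate across open lanes (Little's-law-style approximation).

def expected_wait_minutes(queue_length: int, checkouts_open: int,
                          customers_per_checkout_per_min: float) -> float:
    """Approximate minutes a newly arriving customer waits in line."""
    service_rate = checkouts_open * customers_per_checkout_per_min
    return queue_length / service_rate

# 12 people in line, 3 open lanes, each serving 0.5 customers per minute.
wait = expected_wait_minutes(12, 3, 0.5)
print(f"Expected wait: {wait:.0f} minutes")  # -> 8 minutes
```

Fed with hourly entry counts from the cameras, the same arithmetic can be inverted to answer the staffing question: how many lanes must be open to keep the expected wait under a target.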

Enhancing Customer Experience

Every business should know as much as possible about its audience to offer the best possible service. AI-powered video intelligence software gives grocery stores detailed demographic data along with an analysis of shopping habits. This information provides stores with rich opportunities to increase profits. By knowing their customers, store managers can maximize the shopping experience, creating conditions tailored to customers’ preferences. Furthermore, AI for grocery stores can help produce more accurate demand forecasting models for the given target market.

In addition to working with the target audience, managers can pass the data obtained from video analytics to the marketing department. By exploring other audiences, marketers can develop strategies to attract new customers through relevant advertising, promotions, and sales. Additionally, stores can create separate display cases (e.g., vegan or gluten-free products) for smaller shopping groups, satisfying their needs.

Among all existing artificial intelligence technologies for grocery stores, video content analytics provides maximum support across almost all activities: merchandising, marketing, advertising, and layout strategies. By optimizing these processes, stores not only reduce losses but also gain the opportunity to expand their business and increase profits. The main goal is not only to satisfy customers but to strengthen customer retention.
