I Scored 3rd at the Azure AI Hackathon With an IoT Smart Water Meter

Illustration: © IoT For All
My smart water meter.

I recently had the pleasure of participating in the Microsoft Azure AI Hackathon, where close to 1,000 participants hacked together projects using Azure’s Cognitive Services.

In this blog post, I’ll cover how I used the Azure Anomaly Detector API and an IoT Starter Kit – featuring a Raspberry Pi and an IoT SIM card – to create an award-winning smart water meter that measures drastic changes in water levels.

Materials You’ll Need

  • IoT Starter Kit
  • A Soracom Account
  • An Azure Account
  • Sample code
  • (Optional) An enclosure for the project

First off, let’s cover what we’ll be doing with the above components. The main goal is to collect readings from the starter kit’s ultrasonic sensor and send them via HTTP over cellular to Soracom Beam.

Next, we’ll configure Beam to encrypt that message into HTTPS, and then send it along to our Azure Anomaly Detector endpoint. Azure will return a response to Beam noting whether or not an anomaly was detected, and Beam will return that response to the IoT smart water meter so that it can then take an action based on the outcome.

A timestamp and distance reading are taken from the ultrasonic sensor every minute and stored in a CSV file on the device. Once enough data points are stored (the Anomaly Detector API currently requires a minimum of 12), the data is sent to the Anomaly Detector API to analyze the latest value against the previous ones.
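For reference, the request body the Anomaly Detector API expects is a JSON time series plus a granularity field. Here’s a minimal sketch of turning the buffered CSV rows into that payload — the two-column `timestamp,distance` CSV layout is my assumption, though the `series`/`granularity` field names follow the public Anomaly Detector v1.0 request format:

```python
import csv
import io

def build_payload(csv_text, granularity="minutely", minimum=12):
    """Turn buffered 'timestamp,distance' CSV rows into an Anomaly
    Detector request body, or return None while we have fewer than
    the API's minimum of 12 data points."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    if len(rows) < minimum:
        return None
    series = [{"timestamp": ts, "value": float(dist)} for ts, dist in rows]
    return {"series": series, "granularity": granularity}

# Twelve one-minute readings is just enough to call the API:
sample = "\n".join(f"2020-01-01T00:{m:02d}:00Z,42.{m}" for m in range(12))
payload = build_payload(sample)
```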

When an uncommon event occurs, the Anomaly Detector API will return a message to the device letting it know that an anomaly has been detected. This message can then be used to trigger any type of action that you can think of. Currently, it lights up a red LED attached to the Raspberry Pi.
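The decision logic on the device can stay tiny. In this sketch, `isAnomaly` is the flag the Anomaly Detector “latest point” response carries; the LED callback stands in for the real GPIO call (on the Pi it would be something like RPi.GPIO driving the red LED’s pin):

```python
def handle_response(result, led_on):
    """Inspect a decoded Anomaly Detector response and fire the
    alert action when the latest reading is anomalous."""
    if result.get("isAnomaly"):
        led_on()  # on the Pi: GPIO.output(RED_LED_PIN, GPIO.HIGH)
        return True
    return False

# Record the action instead of toggling real hardware:
events = []
handle_response({"isAnomaly": True}, led_on=lambda: events.append("red LED on"))
handle_response({"isAnomaly": False}, led_on=lambda: events.append("red LED on"))
```

Keeping the GPIO call behind a callback also makes it trivial to swap the LED for an alert or webhook later.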

Here are four simple steps for how I built this intelligent IoT water meter – and how you can build your own!

Step 1: Set up the Azure Anomaly Detector API

Create your Anomaly Detector endpoint within the Azure Portal. Currently, a free pricing tier is available (woohoo!). Once created, please take note of the key and endpoint, which are listed under the “Quick Start” section of the resource.

We’ll need to plug these values into our Soracom Beam configuration in the next step. Here’s a link to the Anomaly Detector API Reference.

Step 2: Set up Soracom Beam

Next, head over to the Soracom User Console and configure Soracom Beam so that it’s ready to convert HTTP posts from the device into HTTPS posts for the Anomaly Detector API endpoint.
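On the device side, that means the script POSTs plain HTTP to Beam’s entry point, `beam.soracom.io:8888`, and lets Beam re-encrypt and forward the request to Azure. Here’s a sketch that only builds the request, with no network traffic — the root path is my assumption, since Beam’s console maps it to the configured Azure endpoint:

```python
import json
from urllib.request import Request

def beam_request(payload, path="/"):
    """Build (but don't send) the plain-HTTP POST a device makes to
    Soracom Beam; Beam upgrades it to HTTPS toward Azure."""
    return Request(
        "http://beam.soracom.io:8888" + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = beam_request({"series": [], "granularity": "minutely"})
```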

Step 3: Build the Soracom Starter Kit

Now that we have the cellular IoT and cloud plumbing all taken care of, let’s build the device. Open your Soracom Starter Kit, and walk through the following guide for getting it set up.

Keep going until you’ve completed the “Setup the ultrasonic range finder” section.

Step 4: Collect and Transmit Sensor Data to Microsoft Azure

With the device successfully built and ready to transmit data, we’ll need to load the Python scripts that handle collecting the data and sending it via HTTP to Soracom Beam.

SSH back into the Raspberry Pi and follow the next steps to download the code, unzip it and run it:

1) From within your SSH session to the Raspberry Pi, download a zip of this repository to the current directory:


2) Unzip the file. You should now have a folder called “Water-Level-Detector-Master” which includes all of the Python scripts in this repository:


3) Finally, we’ll clean up by deleting the .zip file, as we’ve already extracted the contents:


4) To start the project, `cd` into the “Water-Level-Detector-Master” folder and run the following command:

python 60

The 60 at the end lets the script know to take a sensor reading every 60 seconds. This is because we have our Azure Anomaly Detector API granularity set to “minutely”. It also supports “hourly”, but then we’d need to wait at least 12 hours to see if we broke anything.
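The interval handling itself is only a couple of lines. Here’s a trimmed-down sketch — the script and argument names are placeholders, and the real script also buffers readings to CSV and posts them to Beam:

```python
import time

def read_interval(argv, default=60):
    """The first command-line argument is the number of seconds
    between sensor readings, e.g. 60 for 'minutely' granularity."""
    return int(argv[1]) if len(argv) > 1 else default

def run(read_sensor, interval, iterations):
    """Poll the sensor every `interval` seconds; `read_sensor` stands
    in for the ultrasonic range finder call on the Pi."""
    taken = []
    for i in range(iterations):
        taken.append(read_sensor())
        if i < iterations - 1:
            time.sleep(interval)
    return taken

# Demo with a fake sensor and a zero-second interval:
readings = run(lambda: 42.0, read_interval(["demo.py", "0"]), iterations=2)
```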

Why I Used Soracom Beam

If we were to send our sensor data from the Raspberry Pi directly to Azure, we’d have to consider a couple of extra factors. First, we’d have to store both our endpoint and credentials directly on the device.

This means that if we were to ever rotate keys or change the endpoint, we’d have to reach back down into the IoT smart water meter to make the change. Secondly, we’d have to send the data over HTTPS since it would need to traverse the public internet in order to reach Azure. This means extra overhead on the device as well as an increase in bandwidth costs due to the extra header size.

Beam lives within the Soracom platform and off of the public internet, so our smart water meter can send plain HTTP because the data transmission is already secured through SIM authentication.

The Soracom User Console lets us manage our Azure endpoint and credentials so that the device can focus on sending data to Soracom and is unaffected when future changes to endpoints and credentials are made. These burdens may not be too significant on this single IoT smart water meter but can add up significantly at scale.

Taking This Cellular IoT Project Further

Congratulations! Now that you have the IoT smart water meter sending data to Microsoft Azure and responding to the results, here are some ideas on what you can do to improve upon this cellular IoT project:

  • Modify what the Python script does when it detects an anomaly. It could send an alert, trigger some other API or maybe interact with other sensors, motors or relays that you’ll want to wire up to the Raspberry Pi.
  • Improve the sample code by submitting a pull request to the GitHub repository.
  • Build an enclosure for the project. Warm up the 3D printer? Hollow out an old Furby? (remember to send me photos!)

Come find me over on Twitter @Roycodes.

Written by Roy Kincaid
Source: IoT For All

CES2020: The Rise of AI and Personalized Wellness

CES, the largest tech event of the year, is no stranger to the extremely cool, strange, repetitive or revolutionary when it comes to technology. Although the show boasts thousands of different types of technologies and products, certain themes and trends are pervasive throughout the week. 

After putting in about 18.5 miles in less than three days, I reflected on the sensory overload and everything I had experienced. Many of the conversations I had during the conference revolved around personalized health, connected vehicle ecosystems, smart cities and artificial intelligence (AI). While more than a few companies exhibiting at the show were attempting to be the next Peloton or claiming their ear pods rival Apple’s AirPods, I was grateful not to have to endure too many of those conversations.

Smart City Concepts

With 5G rolling out and the IoT industry maturing, smart cities are the inevitable next move to take advantage of all IoT has to offer. At CES, there was no shortage of smart city concepts to experience. From miniature models that included autonomous cars and helicopters to vehicles that deliver groceries, companies have invested a lot of time and money into building the next generation of automation for our everyday lives. The one concept that CES really drove home was that the future of tech is all connected. Smart cities don’t exist without AI or without connected “things” and autonomous vehicles.

As our infrastructure ages, it becomes all too important for tech companies and their partners to understand how to build, secure and launch a connected future. Smart cities will rely on IoT sensors to understand water and energy consumption, traffic patterns and more. How we understand, control and initiate change based on the data collected in these smart cities will have a direct reflection on whether or not smart cities can be both a sustainable and practical way of life. 

Toyota brought to life its proposed prototype “Woven City” at the conference this year, and the concept behind its booth was inspiring. With a circular fabric setup displaying live-action examples of how the city of the future will work, Toyota immersed visitors in the Woven City through sound, video and a 360-degree experience.

The city will be built as a fully connected ecosystem powered by hydrogen fuel cells at the base of Mt. Fuji in Japan by 2021. This smart city is being hailed as a “living laboratory” where residents and researchers will utilize the from-scratch infrastructure to test and develop numerous technologies including robotics, smart homes and autonomy. Toyota is only one of several companies taking a techno-utopian approach to their plans for the city of the future. 

According to the Danish architect behind the city, Bjarke Ingels, “…connected, autonomous, emission-free and shared mobility solutions are bound to unleash a world of opportunities for new forms of urban life. With the breadth of technologies and industries that we have been able to access and collaborate with from the Toyota ecosystem of companies, we believe we have a unique opportunity to explore new forms of urbanity with the Woven City that could pave new paths for other cities to explore.”

As Toyota takes a step into the future, so too do other tech companies. Sprint, for example, will be utilizing its True Mobile 5G and Curiosity™ IoT in areas across the United States, including Greenville, SC and Arizona State University in Tempe, AZ.

The combination of Sprint Curiosity™ IoT with advanced network deployment has set the stage for building a truly smart city. Sprint and their partners are developing and deploying connected vehicles, autonomous services/machines and other smart technologies in conditions that reflect what future smart cities will look like. This allows researchers and developers to operate, navigate and react in real-time with real-world scenarios – preparing us for the city of the future. 

The Next Step in Mobility and Autonomous Vehicles

One of, if not the biggest, draws of CES is the automotive section. Everything from flying taxis to augmented reality cars and the latest models is on display at the event. I had the great pleasure of speaking with several experts in the autonomous vehicle industry, including BlackBerry and RTI.

During CES, BlackBerry announced two partnerships, including an advanced driver assistance system (ADAS) and autonomous vehicle platform that will integrate BlackBerry QNX’s real-time operating system with Renovo’s intelligent automotive data platform. Renovo and QNX are jointly developing safety-critical data management tools for connected and autonomous vehicles, with a plan to scale safety systems in new cars. BlackBerry’s QNX is already in 150 million cars on the road today.

I spoke with Kaivan Karimi, Senior Vice President and Co-Head of BlackBerry Technology Solutions about the importance of native and secure technology and data collection in our connected and autonomous vehicles. With technology now embedded in cars before they hit the lots, Karimi expressed how vehicles are becoming a vital component of the infrastructure of smart cities.

As we put the groundwork in now for how cities will look in the future, he also noted the importance of building infrastructure based on the data that these vehicles are collecting from Renovo’s data management system and AI pipeline. BlackBerry’s focus on safe and secure technology combined with Renovo’s data capabilities is only one example of how partnerships between private companies, the government, public entities and citizens of the world are necessary for being able to manage connected car data in a safe, secure and private way.

In addition to BlackBerry, Real-Time Innovations (RTI), an IIoT connectivity company, is working on the future of autonomous driving.

Bob Leigh, Senior Market Development Director, Autonomous Systems at RTI shared with me that RTI believes “that the advancement of autonomous driving will be transformative to industry and society. Right now, automotive and tech companies are grappling with the complexity of the new technology, how to bring it to market, and what business models will ultimately be successful. At CES this year, we saw [that the industry] is much more specific in how they are tackling the challenge; differentiating their technology between advanced ADAS, Level 2+ and Level 4 Autonomy levels. We think this is a sign of the maturing market and the industry as a whole becoming more confident in how they will deliver their first commercial products. At CES 2020 it was clear the exact future of autonomous cars may still be unclear, but there was much more confidence in the path to making this technology real.” 

Personalized Wellness

Human behavior is a peculiar thing. Whether it’s a daily skincare routine, morning yoga or meditation, we are creatures of habit. Technology is advancing the way we personalize our health in those habits. Any marketer will tell you that human connection is the number one way to convince users to buy. If you can find a way to meet consumers where they are and solve their pain points, buyers will be more likely to choose your product. A company’s ethos as well as how it approaches customer satisfaction is of utmost importance as we saturate the market with new solutions, cool tech and products. 

Neutrogena relaunched its NEUTROGENA Skin360™ app this year to democratize skin health information. I spoke with the team, including Global Communications Lead of Beauty and Baby at Johnson & Johnson, Michelle Dionne, who explained and walked me through the app. Skin360™ utilizes advanced skin imaging, behavior coaching and artificial intelligence to empower consumers with actionable, personalized steps to help achieve their skin health goals. 

The original app that launched in 2018 required a skin scanning tool. So why did they relaunch in 2019? The team at Neutrogena put their customers first. They took into consideration valuable insight from consumers who sought personalized recommendations, science-backed information, expert opinions, skincare product tracking and how routine care affects our facial skin health over time. 

The team also added the Neutrogena AI Assistant (NAIA). NAIA is a personal skincare coach that builds a relationship with each user through in-app and text messaging. NAIA uses AI and behavior change techniques to determine each individual’s skincare personality, current approach to care and current routine. Once you’ve added your information to the app and completed a 180-degree selfie analysis, the app will give you a score for wrinkles, fine lines, dark under-eye circles, dark spots and smoothness.

NAIA then helps users identify and build a personal 8-week skincare goal and routine based on the skin scores and a self-assessment of sleep, exercise, stress levels, external factors, etc. that is monitored and supported through coaching. This allows users to personalize their routine and place importance on various skin attributes such as moisture and tone.

In addition to continuing to accept user feedback and iterate on their app and AI technology, Neutrogena is combining their 360 app with MaskiD, a micro 3D printed facemask that is custom to face shape and structure, formulated with concern-specific ingredients on different areas of your face. Although they won’t be available until later this year, be on the lookout for these masks as they will be both personalized and affordable. Side note: I’ve used the app several times already since being introduced to it last week.

This year at CES, Panasonic also took into consideration how consumers are placing increased attention on their physical and mental health states with the launch of their ‘Human Insight Technology’. 

With Panasonic’s human insight technology, users are provided with data to make recommendations to improve an individual’s experience in the home.

Human insight technology uses non-invasive sensors and imaging to capture and interpret data based on human habits and behaviors. Through analysis of physical stress data, Panasonic was able to design products and environments optimized for typical human movements and physiology. At CES, participants could see the technology in action through an interactive yoga studio using the Yoga Synchro Visualizer.

Your face and body are scanned, and the technology prompts you to follow commands. The cameras and sensors recognize human motion and give users multiple scores, including pose, fatigue, stability, flow and stress. The best part? You’re able to see a physical representation of the changes taking place in your body while performing your yoga routine.

AI Home Ecosystems

Among the crowded convention center floors and waves of beautiful displays, you were more likely than not to run into companies that are incorporating AI assistants and technology into their products in some way. The smart home industry, in particular, is embedding AI into its ecosystems.

For example, Sharp has a vision of People-Oriented IoT according to Executive Vice President and Head of AIoT Business Strategy Office, Bob Ishida. With over 150 products in 10 categories, Sharp is rolling out products that meet lifestyle and culture needs. Sharp is only one of many companies that showcased AIoT and 8K solutions that “will explore new possibilities for computers to offer innovative experiences to both business users and individual consumers around the world.”

LG is another example of a company using AI to improve the home ecosystem. LG ThinQ artificial intelligence, revealed in 2019, was on full display, along with LG’s slogan for AI: “anywhere is home.” From kitchen appliances to washing machines and personal wardrobes, all of LG’s appliances are using AI to shape the consumer experience. Washers are learning how users like certain types of clothing washed, and air conditioners are adjusting automatically to your comfort settings.

IoT Takeover

As I walked the convention floor with little spare time, I was curious about the prevalence of IoT at CES. Although I had to explain more than a handful of times what IoT is and how it works (simple explanation of IoT), even those that didn’t know it by name were utilizing some element or elements tied deeply to the IoT industry. 

From sensors to AI, 5G and the future of mobility, CES 2020 made a few things clear: partnerships are necessary for how we will build a connected future; personalized wellness is becoming a need-to-have instead of a nice-to-have; and AI is becoming less of a buzzword and more of an actuality.

Source: IoT For All

Security Vulnerabilities of Amazon Echo and Other Virtual Assistants


Virtual assistants occupy a very special niche in the global IoT ecosystem. They make the concepts of artificial intelligence and machine learning migrate from the realm of cutting-edge tech toward something close at hand and affordable. The speech recognition and synthesis features built into these devices emulate the feel of real-world communication, thereby bridging the emotional gap between the user and the soulless machine.

Furthermore, many of them can become a pivotal part of home automation due to their ability to control other connected devices such as smart locks, lights, thermostats, TVs and more. The list of benefits goes on and on, so it comes as no surprise that Amazon Echo, Google Home and other awesome voice assistants have already made millions of homes more “intelligent” and keep creating ripples in the market.

Mass production of these smart appliances has considerably lowered the price tag, but there is a flip side to mainstream adoption. Manufacturers might prioritize business goals over security in an attempt to outperform their competitors. This can be a slippery slope. A few security loopholes recently discovered in popular virtual assistants speak volumes about the potential risks.

Amazon Echo and Kindle Devices Exposed to a Wi-Fi Protocol Flaw

Ever heard of KRACK? No, it’s not an awkward misspelling. It’s a term coined by two Belgian researchers in 2017, denoting a series of weaknesses in the WPA2 protocol dubbed the “Key Reinstallation Attack.” The issue revolves around an imperfection in the four-way handshake, a technique used to exchange authentication data and encrypt the traffic in modern wireless networks.

In October 2019, analysts from the ESET Smart Home Research Team discovered that a plethora of Internet-enabled gadgets, including virtual assistants, continue to be susceptible to this bug, even though it’s been two years since the experts originally spread the word about their findings. Moreover, it turns out that this issue isn’t restricted to low-end products from lesser-known brands. According to ESET, millions of Amazon Echo 1st generation smart speakers and Amazon Kindle 8th generation e-readers are at risk as well.

To be precise, the above-mentioned devices by Amazon are exposed to two KRACK vulnerabilities cataloged as CVE-2017-13077 and CVE-2017-13078. The former allows an attacker to reinstall the pairwise encryption key in the course of the four-way handshake, and the latter makes it possible to alter the group temporal key along the way. In plain words, this type of unauthorized access can give a cyber intruder the green light to do the following:

  • Decrypt all information submitted by the user
  • Perform a DoS attack by replaying old data packets
  • Wreak havoc with network communication
  • Forge data packets
  • Steal the victim’s credentials

It’s worth mentioning that an attacker needs to be within radio range to take advantage of these flaws, and yet such a crude implementation of Wi-Fi security definitely shouldn’t be the case with devices as popular as Echo and Kindle. Thankfully, Amazon rolled out a patch for these flaws in early 2019 in response to the researchers’ report. It came with a new release of wpa_supplicant, an application tasked with proper authentication to a wireless network. Although the fix should have already arrived on the unprotected smart speakers, it’s a good idea for users to confirm that their current firmware version is up to date. As an additional protection step, consider connecting all your IoT devices through a VPN router.

Yandex Station’s Sound Activation Leaking Wi-Fi Passwords

Yandex, a major Russian technology company, stepped into the voice assistant industry by introducing its own smart speaker, Yandex Station, in late May 2018. The device ships with a Russian-speaking virtual assistant, Alice, on board and boasts a decent set of voice-based features. Among other things, it can play requested music via the vendor’s proprietary multimedia service, order pizza, run web searches, provide weather information and cast videos to TV. This seems like a commendable initiative overall, but with the caveat that the initial device setup may expose the user’s Wi-Fi credentials to an attacker.

The process of the first-time activation relies on an audio token generated by the Yandex smartphone application. It should be played in close proximity to the speaker. This R2D2-style earcon conveys the authentication details for the wireless network and the provider’s services. Technically, it’s a portion of the user’s sensitive data converted to sound according to a predefined algorithm. Yandex Station instantly decodes it and configures itself to become a part of the wireless home network.

A security enthusiast named Sergey Krupnik, who goes by the alias Krupnikas, analyzed this activation process and found a way to extract secret credentials from the “magical” audio message. He tried a number of different passwords and scrutinized the deviations in the frequencies and other parameters of the resulting sounds. This allowed the researcher to identify the specific place in the signal that holds data about the Wi-Fi network’s SSID and password. He also determined a method to retrieve these details in hexadecimal format and easily convert them back to plaintext.
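That last conversion step is trivial once you have the hex: Python’s `bytes.fromhex` does it in one line. The SSID and password below are made-up stand-ins, hex-encoded the way the researcher describes recovering them from the audio token:

```python
# Made-up example credentials in hexadecimal form:
ssid_hex = "486f6d654e6574"
password_hex = "70403535776f7264"

# Hex -> bytes -> plaintext:
ssid = bytes.fromhex(ssid_hex).decode("utf-8")          # "HomeNet"
password = bytes.fromhex(password_hex).decode("utf-8")  # "p@55word"
```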

Obviously, the likelihood of a privacy violation is minimal in this case because the attacker has to be nearby to record the message. One way or another, the analyst let the manufacturer know about his findings in May 2019 but has not received a response since. It appears that the wow effect matters more to the vendor than the security of the smart speaker setup process.

Dodgy Apps on Alexa and Google Home Can Snoop on Users

In theory, voice apps for Amazon Alexa (so-called “skills”) and Google Home (referred to as “actions”) can take the user experience to a whole new level. In practice, they may be a mixed blessing due to eavesdropping behind one’s back.

Analysts from SRLabs, a German hacking research firm, recently made a newsmaking discovery. They found that a few extra characters surreptitiously added to a voice app’s code can turn it into a cyber spy. A booby-trapped “skill” or “action” can listen to the unsuspecting user’s conversations while pretending to be inactive. The app may also execute an attacker’s command to request the victim’s passwords under the guise of authorizing an important security update.

To demonstrate this exploitation vector, the researchers created a few benign voice applications that passed the initial security review procedures of Amazon and Google. Then, they modified the code of these apps to make them spy on users.

For instance, one of the tweaks in the experimental Alexa “skill” was an unpronounceable character string “�. ” (U+D801, dot, space) concatenated to a speech prompt. This way, the application can continue its session while remaining silent as if it were disabled. By inserting the above string multiple times, the developer can prolong this misleading silence. Meanwhile, the app is listening to the victim and sending the recorded conversations to its author’s server.
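You can reproduce the shape of that trick in a few lines of Python. U+D801 is a lone surrogate: `chr()` will happily build the string, but it isn’t valid Unicode text, which is part of why a speech synthesizer has nothing to say for it. (The “Goodbye.” prompt below is illustrative, not SRLabs’ actual code.)

```python
# The sequence SRLabs describes: U+D801 (a lone surrogate), a dot, a space.
silent = chr(0xD801) + ". "

# Lone surrogates are not encodable text, so UTF-8 rejects the string:
try:
    silent.encode("utf-8")
    encodable = True
except UnicodeEncodeError:
    encodable = False

# Repeating the sequence stretches the fake silence after a spoken prompt:
prompt = "Goodbye." + silent * 10
```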

Things were similarly disconcerting with the test “action” for Google Home. SRLabs analysts faked the app’s inactivity by appending its code with a specific Speech Synthesis Markup Language (SSML) element or a series of Unicode characters that cannot be pronounced. With these changes in place, the speaker generates a “Bye” message to make the user think that the application has been turned off while its session actually continues in silent mode.

The researchers also demonstrated a password phishing attack, where a malicious voice app tries to hoodwink the user into disclosing his or her credentials. The bait is a phony security update allegedly available for the device. The application instructs the victim to say, “Start update” and then pronounce their password, which goes to the attacker.


It might appear that attack scenarios with the above-mentioned security flaws at their core are mostly theoretical at this point. The first two vulnerabilities can only be exploited if a malefactor is nearby, and the third hack is a proof of concept. However, none of these restrictions is an obstacle for a well-motivated attacker, and similar attack scenarios were described several years ago.

What about the countermeasures? First and foremost, vendors need to release security updates of their devices’ firmware on a regular basis. This is what Amazon did to address the KRACK bug highlighted above, and it worked. Also, voice apps should be subject to mandatory review every time the developers change their code. And lastly, brands should maintain a reasonable balance between the coolness of their virtual assistants’ features and the security of these devices.

Written by David Balaban, Privacy PC
Source: IoT For All

How the Integration of IoT and 5G Is Set to Shape the Construction Industry


The Internet of Things (IoT) is transforming the way the construction industry does business, allowing companies to become faster, smarter, safer, and more efficient.

New Construction Technology

The industry has entered a new phase of digitalization through IoT and other enabling technologies. IoT devices are no longer just simple sensors; they are evolving into advanced computers capable of new and demanding applications in construction projects such as remote operation, supply replenishment, construction tool and equipment tracking, equipment servicing and repair, remote usage monitoring, augmented reality (AR), building information modelling (BIM), predictive maintenance, progress monitoring, construction safety, and quality monitoring. Clearly, there’s a growing need for bandwidth. Given that the adoption of IoT in the industry is projected to keep rising, the future success of deployments will rely on innovations in connectivity, like 5G.

Early adopters in the industry are already using IoT solutions, albeit in a fragmented approach. There’s no developed ecosystem for all-round integrated business decision support, mainly due to a lack of fast, reliable and robust connectivity options that can handle communication from distributed locations. Construction sites are usually in remote areas, while company headquarters are located in cities. Constant communication is required between the various stakeholders within the site, and from the site office to the home office as well as the offices of other partners involved in the projects. Currently, it’s challenging to find a network that’s cost-efficient, feature-rich and capable of supporting multiple services.

One of the biggest innovations within 5G is support for IoT use in construction in all its forms: providing high-speed data access, addressing mission criticality, and making it possible to connect constrained devices. In construction, project delivery depends on efficient data collection, capture and analysis/evaluation – all of which require reliable connectivity. 5G offers the possibility of real-time data processing, so decisions can be made almost instantly and issues rectified quickly.

The most popular key differentiator of 5G is bandwidth, i.e., data transfer rates of up to 10 Gbps. However, according to Ericsson, there are three different ways to think about IoT in a 5G-enabled world.

  1. Broadband IoT: Enables high volume and high-speed data transfer.
  2. Critical IoT: For mission-critical applications that demand ultra-low latency and high reliability.
  3. Massive IoT: To connect a large number of devices.

Massive connectivity targets low-complexity, narrow-bandwidth devices that infrequently send or receive small volumes of data, like concrete maturity monitoring sensors, GPS and RFID tags. The devices can sit in challenging radio conditions, like enclosures, and therefore require coverage extension capabilities and, usually, battery power. Additionally, an IoT network involves a very large number of “things,” which sets it apart from a typical computer network: as the number of nodes grows, so does the complexity of the network.

Broadband connectivity enables large volumes of data transfer, extreme data rates and low latencies for devices with significantly larger bandwidths than massive IoT devices. Broadband IoT connectivity is also capable of enhancing signal coverage per base station and extending device battery life if requirements on data rate and latency are not stringent. Broadband IoT is vital for the majority of the mobile equipment use cases that require high data rates and low latency, such as construction equipment telematics, fleet management, sensor sharing, and basic safety.

Mission-critical connectivity enables super-low-latency communication. It aims to deliver messages with strictly bounded low latencies, even in heavily loaded cellular networks. In IoT ecosystems, the sensors, actuators and other gadgets depend on the responsiveness of the system or network to work effectively. High latency means delayed responsiveness, and with that comes the inability of things to function to their full capacity. Some IoT systems are designed to respond in case of emergencies, and delayed responsiveness can result in loss of life or property, for example in the case of autonomous vehicles.

Possible 5G Use Cases in Construction

Real-time automation: Real-time automation is one of the most popular segments of construction applications. It consists of autonomous applications like robotic masons, welders, and cranes that leverage data from sensors in real time to trigger specific actions. It’s often used in mission-critical applications, where latency, availability, reliability, and security are of key importance.

Given that construction sites are complex and constantly evolving environments, teams can rely on 5G to understand activities on worksites in real time and to perform remote or autonomous construction operations. Combined with high communication speeds, this will give those working in construction almost instantaneous access to data-intensive edge and cloud applications, enabling multiple users to interact with each other in real time, and remotely.

While reliability and trust are key considerations in all IoT applications, they’re of utmost importance in mission-critical applications such as the predictability of data delivery to robots.

Monitoring, tracking, and surveillance: Self-driving vehicles are gaining prominence at construction sites, drawing on data collected and fused from a vast array of sensors, including concrete maturity, structural health, waste management, location, weather, GPS and IP cameras. With the advent of 5G, this information will become indispensable as companies and cities overlay other technologies, such as artificial intelligence and machine learning, onto real-time data outputs and revolutionize how to work safely and efficiently.

5G will be crucial in monitoring the health, location, status, and specifications of assets of all kinds, including the following:

  • Site machinery, to ensure operational ability, availability, and remote or autonomous construction operations.
  • Site components, to ensure coordination with the project, enabling real-time reaction to changes and updates.
  • Worker safety gear: with 5G, sensors in smart vests, helmets, and shoes can more effectively track individuals’ safety compliance.

Supply chain optimization: The construction job site has a lot of repetitive activities, and hunting for materials is a constant challenge. Autonomous vehicles, RFIDs, computer vision, BLE (Bluetooth low energy), or other digital tools can be used to help address such issues. If materials can arrive on demand, it will greatly improve productivity.

Real-time information on the order status of materials or various components manufactured offsite is important to ensure a project is running on time. This will benefit project managers, principal construction contractors, and tradespeople.

Multi-trade prefabrication, including utilization of cyber-physical assistance systems, advanced building information modeling (BIM) and design-to-fabrication technologies has a direct impact on improving quality and reducing time spent at the worksite. This requires real-time collaboration, and 5G’s broadband IoT is a possible solution.

Enhanced video services: In terms of video capture, 5G will also help organizations inexpensively deploy technology to quickly capture, organize and analyze massive volumes of video information. This reduces the need for some teams to even visit the construction site. Further, this kind of real-time, rich, visual information can provide reassurance to the owner as well as an on-demand transparent view of the project at any particular moment in time.

Drones are already being employed to take 4K video footage, and 5G will enable real-time video sharing and analytics. Construction status and reporting can now rely on computer visualization to understand the work and automatically update progress on the project. Computers can take care of 80 percent of field engineers’ repetitive work, freeing these knowledge workers to resolve problems rather than physically verify work status.

Another example is the ability to deploy subject matter experts directly to the workplace through augmented and mixed reality, regardless of physical location. With a BIM model and 5G capability, it’s possible to have it instantly available and enable rich video content to provide an even greater level of visualization on the job site or about the site.

Hazard and maintenance sensing: Visual data can help us identify hazards instantly and proactively intervene to reduce accidents and injuries. Videos rather than static images can help streamline inspections, punch lists, audits, safety audits, as-builts, and even compliance. Static images leave us reactive; in a proactive scenario enabled by 5G, data capture is automated, continuous across various sources, and analyzed in real time. AI and machine learning then become a predictive analytical engine that reports potential areas of risk before issues arise.
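To make that proactive scenario concrete, here is a minimal sketch of flagging a stream of sensor readings for anomalies in real time. The window size, threshold, and sample values are invented for illustration; a production system would use a trained model rather than this simple rolling z-score.

```python
from collections import deque
from statistics import mean, pstdev

def make_hazard_monitor(window=20, threshold=3.0):
    """Return a checker that flags readings deviating sharply from recent history."""
    history = deque(maxlen=window)

    def check(reading):
        flagged = False
        # Only judge once we have some context, and guard against zero spread.
        if len(history) >= 5:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(reading - mu) > threshold * sigma:
                flagged = True
        history.append(reading)
        return flagged

    return check

check = make_hazard_monitor()
readings = [20.0, 20.5, 19.8, 20.2, 20.1, 20.3, 55.0]  # last value is a spike
alerts = [check(r) for r in readings]
```

Only the final spike is flagged; the steady readings before it pass through silently.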

Fostering collaboration: We’re seeing an increasing number of joint ventures as construction projects become complex. Sharing knowledge is now important, not just internally but with peers. Often, we are trying to solve the same problems with the same set of resources (design, vendors, and trade partners). 5G makes this process much easier.

Caveats Regarding 5G in Construction

Just like any other new construction technology, 5G has to be adopted strategically. There are several caveats that companies need to consider, such as the following:

Standardization: While 5G will help to increase collection, capture, and analysis of data, there are many organizations and projects today that don’t have a strategy around the standardization of project delivery. This can reduce the potential benefits of 5G but also impact the safety, quality, completion time and budget of a project.

Security: Given the massive number of connected devices enabled by 5G, there’s an increased need for rigor around updating and following security standards.

To realize the full potential of 5G, construction businesses need a tailored implementation strategy. As a general approach, the following steps may prove useful:

  • Clearly define the problems you want to solve and identify value creation drivers.
  • Make sure you are solving a problem that matters, i.e., one that is supported by a compelling ROI.
  • Choose a credible partner to help you decide on the ecosystem, channel model, and business model to pursue.
  • Build internal capabilities to deploy the solution and to secure technical enablers.
  • Implement the solution and allow its capabilities to evolve. Continuously improve as you experiment and learn until the solution can be deployed to scale.

Although 5G is still in its early days of deployment, fast progress is being made in the development and testing of the technologies, and the standardization process is expected to be completed in 2020 with 3GPP release 16.


When it comes to IoT, 5G’s capabilities open up a seemingly infinite number of new use cases. Data collected at the edge can be understood and acted on in near real time. High bandwidth and low-latency times ensure more data than ever can be quickly and easily collected and analyzed, overlaying increased intelligence into every device at the edge. The integration of 5G and IoT can help AEC organizations to improve productivity, safety, and compliance.

Written by Farai Mazhandu

Source: IoT For All
Continue reading How the Integration of IoT and 5G Is Set to Shape the Construction Industry

Posted on

Swarm Bot Series Part Two: Real Use Cases for Swarm Robotics Applications

Illustration: © IoT For All

Swarm intelligence goes far beyond what current IoT applications are offering the world in the way of futuristic improvements and gee-whiz prototypes seen in many of today’s more progressive cities and homes. 

As you’re about to see, swarm bots are perfect for certain types of business and industry applications, making the world safer, better, and easier in an exciting number of ways. 

Tackling Dangerous Tasks

Swarm-bots can be used to tackle dangerous tasks to reduce or eliminate the risk for humans. Based on the level of danger, there’s potential for loss of robot individuals, necessitating a focus on fault tolerance of the swarm. Some examples include:

  • Search and rescue
  • Cleanup of toxic spills
  • Demining 

When High Flexibility and Scalability Are Needed

There are tasks where it’s difficult or impossible to estimate at the beginning the number of resources needed to accomplish the task. For example, if you’re allocating resources for managing an oil spill or leak, you cannot foresee the oil output or temporal evolution. This makes resource allocation doubly difficult. 

Flexibility and scalability should be the focus of the swarm-bots deployed to handle such tasks. You can add or remove robots as needed to give the right amount of resources according to the evolving requirements of the job. Some applications include tracking, cleaning, and specific search and rescue scenarios.

Unstructured Tasks/Environments

Swarm robotics may be useful when it’s necessary to accomplish tasks within very large or informal environments. In these cases, you don’t have the infrastructure to control the robots, such as a global localized system or a communication network.

Swarm-bots fit the bill because of their ability to work autonomously without any infrastructure or centralized control system. Examples include:

  • Extraterrestrial or underwater excursions
  • Demining
  • Surveillance
  • Search and rescue missions

Dynamic Environments

Certain environments may change rapidly over time, such as after natural disasters like hurricanes or earthquakes. Buildings may collapse, altering the original layout of the environment and creating unforeseen hazards. Here, swarm robots that are customized for high levels of flexibility will be needed.

Examples include:

  • Search and rescue
  • Patrolling
  • Disaster recovery tasks 

How Swarm Robots Get Their Power

It only makes sense to have swarm robots powered by batteries. Experts recommend lithium polymer batteries because they provide high current output and energy density, and they are lightweight and flexible in format. However, these batteries can be dangerous if not treated properly, so they must be protected from over- and under-voltage, overcurrent, and overheating. 

For a large robot swarm, it’s impractical to design them to be manually recharged. Instead, a module can be written to teach the robot to find a docking station for recharging when it runs low on power. 

Consider that finding and docking at a charging station is a higher-level task than the swarm-bots typically do. Therefore, apart from the module, charging stations should be designed to be as intuitive as possible. For example, the inclusion of high-speed, two-directional communication between the computer and bot during recharging would be useful.

Part of the battery system includes charging and discharging management to ensure battery health and safety. The discharge management circuits must be installed in the swarm bot for obvious reasons. However, the charging management circuitry may be placed outside the bot (in the docking station’s computer system). 

There are upsides to external deployments, such as more straightforward bot design, which makes them cheaper to acquire. But if the aim is to maintain a bot’s autonomy within the swarm, then it must be equipped with all necessary tools to enable proper functionality.

Charging is central to a bot’s functionality, so it makes sense to have both charging and discharging management systems within the bot.
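The recharge behavior described above can be sketched as a simple state machine: work until the battery runs low, seek the dock, charge, then resume. This is a toy model with made-up thresholds and drain/charge rates, not any particular robot's firmware.

```python
class SwarmBot:
    """Toy sketch of a swarm bot's recharge cycle as a state machine."""

    LOW, FULL = 20, 95  # illustrative battery thresholds, in percent

    def __init__(self, battery=100):
        self.battery = battery
        self.state = "WORKING"

    def tick(self):
        if self.state == "WORKING":
            self.battery -= 5          # each work cycle drains the battery
            if self.battery <= self.LOW:
                self.state = "SEEKING_DOCK"
        elif self.state == "SEEKING_DOCK":
            self.state = "CHARGING"    # assume the dock is found in one step
        elif self.state == "CHARGING":
            self.battery = min(100, self.battery + 25)
            if self.battery >= self.FULL:
                self.state = "WORKING"
        return self.state
```

Running a bot that starts at 30 percent shows it drain, dock, charge, and return to work without any external intervention, which is the autonomy property the swarm needs.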

How Swarm Robots Connect to the Cloud – Swarm Computing

For swarm robotics, you have autonomous micro-machines that need to communicate with each other and with the cloud when necessary. This evolution that brings together cloud computing with swarm robotics is called swarm computing, and it’s still in its infancy.

Swarm computing brings together cloud principles with network principles, to give rise to higher functionality and flexibility of swarms or IoT ecosystems. It focuses on increasing data sharing and mobility, as well as allowing temporary control of devices connected to the cloud.

The most visible advantage of investing in cloud robotics will be the ability to delegate more difficult tasks to higher-intelligence agents in the cloud. Cloud cooperation should enable swarm robots to, for instance, connect to the more intelligent bots in the cloud when meeting more difficult challenges. There should be real-time data processing, support, and response, whether by humans or robots, to inform the autonomous robots’ actions/responses.

Right now, we still need lots more extensive research to determine how to operate, manage, and deploy highly distributed cloud services. This will demand high levels of innovation and automation but will be essential given the proliferation of IoT and swarm intelligence. 

Real-World Applications of Swarm Robotics

Even though the combination of swarm robotics and IoT is a relatively new field, only a few years old, different organizations are already diving headfirst into it. Below are some examples of companies and organizations using swarm robotics to power various aspects of their operations. 

DOD Micro-drones for Military Use

The military application of swarm robotics is perhaps the most significant of all. The US Department of Defense has already demonstrated one of the largest micro-drone swarms at China Lake, California. The swarm showed advanced swarm intelligence, such as decision-making, self-healing, and adaptive formation flying.

Perdix drones, as they are called, work as a collective organism, sharing a distributed brain that enables them to adapt to each other and make decisions to benefit the entire swarm. Without a leader, the swarm adapts gracefully to drones leaving or entering the team.

Ideally, the Pentagon hopes to use these small, cost-effective, and autonomous drones to accomplish the same things they used large, expensive drones to do. However, they were keen to mention that drones will not replace humans in the future battlefield. Instead, they would equip humans with information to make better decisions faster. 

Wyss Institute’s RoboBees

Inspired by biological phenomena, Wyss Institute researchers are developing RoboBees prototypes, which can perform various disaster relief and agriculture-related tasks. A RoboBee is very small, half the size of a paper clip, and weighing 0.1 grams or less. Its flight is powered by “artificial muscles,” which are materials that contract when exposed to voltage.

Some RoboBee models can swim underwater or fly, as well as “perching” on surfaces using static electricity. Researchers wanted to create micro-aerial, autonomous vehicles that could achieve self-directed flight and work coordinately when in large groups. 

RoboBees can be used to assess infrastructural damage after a natural disaster or act of terrorism, as well as locate victims for smart rescue efforts.

Cost-Effective Modular Robots

A research team at the Department of Mechanical Engineering at the University of Toronto developed a modular robot called mROBerTO. Modular robots are bots that can autonomously change shape and perform different functions. 

Such abilities are essential to swarm robotics research, where thousands of small bots are needed to test out behavior algorithms and functionalities. For these miniature robots to be cost-effective, it would be necessary to make sure each unit costs as little as possible. Otherwise, the cost of research would be prohibitive, holding back the advancement of the field. 

Enter mROBerTO, a modular robot that can be made from commonly available and affordable materials. mROBerTO can be used for a variety of applications calling for miniature swarm robots, although its primary purpose was to give swarm robotics researchers cheap physical tools to test out swarm behavior algorithms.

These robots are designed to enable researchers to change hardware parts and test out different algorithms, shapes, and functions using the same robot skeleton. The modular millirobots are made such that changing one section/module doesn’t affect the functionality of the other sections. 

Future of Swarm Robotics Research

There are hundreds of possibilities for swarm robotics research across many different fields. Theoretically, the technology will be useful in areas where human intervention would be impossible (e.g., nanomedicine) or too dangerous (e.g., search and rescue, nuclear reactors, chemical plants, mines, etc.).

However, this will only be possible if researchers find ways to build robot swarms cost-effectively. This is the line of thinking the makers of mROBerTO above adopted (although at $60 each, a swarm of 100 robots still costs $6,000, which remains high). 

Similarly, most of the current robotics research has been carried out in controlled lab environments, which do not mimic real-world constraints. It will become crucial, then, to find ways to take swarm robotics out of the lab and into the real world, particularly since most applications will require high flexibility of the swarms in rapidly changing environments and without external intervention.

A joint team of researchers from the University of West England and the University of Bristol is currently working on techniques to develop autonomous discovery of suitable swarm strategies when swarms are deployed in real-life situations. 

Future research in this area will involve using dynamic environments to test swarm robot responses and determine the designs that will be better suited to real-world applications.

Swarm Robots: How Much Is All This Going to Cost?

This is a complex question because the cost of a robot swarm depends on so many factors, like: 

  • Complexity 
  • Number per swarm
  • Level of autonomy 
  • Level of flexibility/adaptability needed 

For example, the University of Colorado wanted to acquire swarm robots called Droplets, which were self-charging and worked in groups of 100 or 1000. The estimated cost was $10,000 for just 100 Droplets in 2014.

It’s easy to understand why, despite its value, swarm robotics research is still out of reach for many businesses. Typically, the people who make these robots are expensive – they are computer scientists, computer engineers, or electronic engineers with advanced degrees. The parts that make these robots – motors, cameras, sensors, etc. – are also expensive.

Final Thoughts

Swarm robotics is set to become one of the most significant technological advancements we’ll see this century. Its applications, particularly in disaster recovery and management, are endless and powerfully significant. 

Of course, swarm robotics research is still in its infancy for many applications, subject to different challenges you’ve learned about here. In time, as better technology becomes cheaper and more accessible, we will see swarm robotics becoming part and parcel of business operations and decision making — for the greater good.

Source: IoT For All
Continue reading Swarm Bot Series Part Two: Real Use Cases for Swarm Robotics Applications

Posted on

How Can a Digital Twin Create a Seamless Workplace for Employees?

Illustration: © IoT For All

How can a digital twin create a seamless workplace for employees? Before we answer that question, let’s first define what we mean by a digital twin.

Defining “Digital Twin”

Not simply a digital mock-up of the physical environment, a digital twin is the contextual model of an entire organization and its operation. It’s the data from your subsystems and the real-time interaction between your people, process and connected things.

Digital twins solve the challenges of real-time data processing by bringing together data from IT and OT systems, IoT sensors and third-party data in a contextual representation of your built environment. They allow you to analyze the complexity of your built environment across the entire portfolio, take immediate action to optimize conditions, and track and improve the state of your built environment over time.

Four Ways Digital Twins Can Create a Seamless Workplace

1. Digital Twins Help Create Dynamic Spaces

The way we work is changing. If you look at industry news headlines, you’ll see articles about the rise of shared working spaces, flexible work hours and remote working. Yes, Millennials and Gen Z’ers are big drivers of this change, but the conversation isn’t limited to the younger generations. In fact, in today’s economy, there is a multigenerational global talent war in many industries — like tech, finance and telecom — where workers of all ages and demographics are asking for flexible working arrangements, remote work and a more holistic perspective on productivity in exchange for their loyalty. Moreover, in our increasingly connected world, employees want their office environments to be as smart as their homes, cars and digital communities, with the ability to create a personalized experience.

However, many traditional office buildings still operate in the “dark” with limited use of modern technology and little ability for employees to interact dynamically with their environment. Not only does this make for an experience marked by friction, it leads to inefficiencies in building performance and to missing out on the “wow” factor — something that helps attract in-demand employees.

A digital twin can transform an outdated workplace into one that’s dynamic, modern and seamless. By bringing together information and data from a variety of different sources and producing a contextual model, a digital twin can be used to optimize conditions and enable employees to interact with their spaces. For instance, if a group of employees needs to work collaboratively on a certain project, digital twin technology can be leveraged to match the group with a space in the building that offers the right features; they can book it and enjoy control over the conditions of the room.

2. Digital Twins Provide Missing Insights About Building Usage

Do you truly know how employees are using your building? For instance, are there empty boardrooms with lights left on for hours at a time? Are there times when an entire floor of employees leaves for a team retreat or conference? Are there certain days of the year when specific teams or companies are working round-the-clock to meet deadlines?

How people interact with their office environment directly impacts resource use and needs for services like cleaning and security. By having better insight into how and when people use a space, you can find ways to scale resources up or down or enable a more flexible use of the space. A digital twin uncovers these missing insights by providing 360 dashboards, floor plans, analytics and other tools that offer information about real-time building use. A digital twin also helps to predict future states and to optimize conditions, resulting in better outcomes for everyone, including reduced costs for owners and a seamless experience for employees.
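As a toy illustration of the kind of insight described above, the sketch below finds rooms whose lights are on even though the latest sensor reading shows them as unoccupied; the room names and event format are invented, and a real digital twin would consume live telemetry rather than a list of tuples.

```python
def rooms_wasting_energy(events):
    """Given (room, occupied, lights_on) readings in time order,
    return rooms whose most recent reading is lit but empty."""
    latest = {}
    for room, occupied, lights_on in events:
        latest[room] = (occupied, lights_on)   # later readings overwrite earlier ones
    return sorted(room for room, (occ, lit) in latest.items() if lit and not occ)

events = [
    ("boardroom-2", True,  True),   # meeting in progress
    ("boardroom-2", False, True),   # everyone left, lights still on
    ("floor-3",     False, False),
    ("boardroom-1", False, True),
]
```

Feeding these events in produces the two boardrooms left lit and empty, exactly the kind of actionable item a facilities dashboard would surface.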

3. Digital Twins Supply Data to Make Business Decisions

Now, more than ever, building owners and operators are looking to data to make business decisions. Data is all around us, yet harnessing data in a way that allows us to operationalize it has been a challenge. Digital twins change this. By providing a holistic view of a building, they unlock data that was previously hidden and test how this data can result in positive outcomes for both the bottom line and employees.

With access to actionable data, digital twins can help owners/operators make better decisions about a range of topics and economic drivers. For instance, third-party service contracts can be better negotiated and deployed when informed by data from a digital twin. Rather than use a static model for making decisions on services like cleaning or maintenance, owners/operators can employ a demand-based model that matches supply with changing needs. This makes for a smarter approach, resulting in better working conditions for employees and reduced costs for building owners.

4. Digital Twins Improve Employee Experience

How many times have you been too hot or too cold in your office building? How many times have you visited the washroom and found no hand towels? How many times have you reported a light out and waited days or even weeks before it was changed? Communication in a large office building is difficult. With literally hundreds of occupants, dozens of bathrooms and thousands of lights, building managers get endless requests and struggle to triage and action them all in an efficient manner. This is just one contributing factor that can make for a clunky employee experience.

A digital twin changes this by bringing together data from connected devices and sensors to provide building managers with real-time information about changing conditions. Is a light out? An IoT sensor will alert the building manager. Is an employee feeling too cold? Using a connected smartphone app, the employee can change their environmental settings. Digital twins also help determine the most efficient path to action requests and allow dynamic two-way communication between building staff and employees.

In conclusion, digital twins are transforming the built environment and creating seamless employee experiences. With digital twin implementation on the rise, new use cases are being developed every day, and the technology is helping owners future-proof their assets and attract in-demand employees by offering a superior experience.

Source: IoT For All
Continue reading How Can a Digital Twin Create a Seamless Workplace for Employees?

Posted on

5 Ways AI Can Make IoT More Intelligent

Illustration: © IoT For All

The ability to connect billions of devices that exchange data without any third-party interaction is what makes the Internet of Things (IoT) one of today’s most intriguing topics. Since human analysis of such a wide array of data is impractical, to say the least, it was only a matter of time until AI was put to use for a more efficient IoT.

Microsoft’s AI expert Rashmi Misra recently revealed in a podcast on the implementation of AI in IoT that Microsoft is interested in improving the value of data gathered by IoT devices. This data will provide businesses with solutions for the development of more efficient products and services that fulfill customers’ expectations.

Seeing Microsoft taking an interest in coupling IoT with AI is a clear signal that there’s real potential in combining the two latest technologies. Let’s take a look at how artificial intelligence can improve IoT.

Airline Crew Management

Boeing is a titan in the airline industry. As such, it pays close attention to its resource allocation strategy. Managing an airline crew is a challenging task, as you need to team up skilled, rested, motivated, and available crew members for each flight. Boeing’s subsidiary Jeppesen designed AI-based software that uses IoT data to assign crew members to duty.

The benefit of the Jeppesen Crew Rostering platform is that it provides seamless crew management, allowing clients to cut expenses while always fielding the most effective crew. The same principle could apply to almost any industry with shift-based staffing.

Tesla’s Self-driving Cars

Although the practical use of AI in combination with IoT is a relatively new concept, the most advanced companies are already working hard on leveraging big data for the development of better products. Tesla is doing an amazing job of utilizing a series of sensors, radars, cameras, and GPS to provide a safe autonomous driving experience. For now, the combination of AI and IoT allows Tesla cars to learn and adapt to various traffic situations, which should eventually enable a fully automatic self-driving vehicle.

The best part is that all Tesla cars will be able to interact with each other to even further improve the performance of each unit.

Smart Thermostat

Managing the temperature in your home via a smartphone or other IoT control devices is becoming more common. The integration of artificial intelligence allows the production of devices that can learn from user experience and perform better.

Nest Labs created a next-generation, self-learning thermostat that can manage home cooling and heating on its own. The device takes a few weeks to learn the user’s preferences and schedule, enabling it to operate autonomously. The thermostat gathers data such as preferred temperature settings for each part of the day throughout the year and adapts the temperature accordingly.
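The learning loop can be sketched in a few lines. This is a simplified illustration, not Nest's actual algorithm: it merely averages the user's manual settings per hour of day and replays them as the schedule.

```python
from collections import defaultdict

class LearningThermostat:
    """Toy self-learning thermostat: averages the user's manual
    settings per hour of day, then reproduces that schedule."""

    def __init__(self):
        self.samples = defaultdict(list)

    def record(self, hour, temperature):
        """Log a manual adjustment the user made at this hour."""
        self.samples[hour].append(temperature)

    def setpoint(self, hour, default=20.0):
        """Return the learned target for this hour, or a fallback default."""
        temps = self.samples.get(hour)
        return sum(temps) / len(temps) if temps else default
```

After the user nudges the 7 a.m. temperature a couple of times, the thermostat settles on their average preference for that hour while hours with no history fall back to the default.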

Transport and Logistics

For now, IoT gives logistics and transport companies real-time information that supports better decision-making and provides data that prevents losses. With the integration of AI, business owners, especially companies with large fleets and busy schedules, could harvest and analyze data faster and generate more useful feedback.

AI implementation could lead to more effective routes, cut other operational expenses, and delegate work automatically, without human interaction. If we put self-driving vehicles into the equation, we could see a transport company fully operated by the collaborative efforts of AI and IoT.
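As a toy example of route planning, the greedy nearest-neighbor heuristic below orders delivery stops by proximity. Real fleet optimizers use far more sophisticated solvers and live traffic data; the stop names and coordinates here are invented.

```python
def nearest_neighbor_route(depot, stops):
    """Order stops greedily by straight-line distance from the current position."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    route, current, remaining = [], depot, dict(stops)
    while remaining:
        # Pick the closest unvisited stop, then move there.
        name = min(remaining, key=lambda n: dist(current, remaining[n]))
        route.append(name)
        current = remaining.pop(name)
    return route

stops = {"A": (5, 0), "B": (1, 1), "C": (6, 1)}
route = nearest_neighbor_route((0, 0), stops)
```

From the depot at the origin, the heuristic visits B first, then A, then C, shortening the drive compared with visiting the stops in arbitrary order.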

Preventing the Extinction of Species

We often take nature for granted and forget that inspiration and resources for technological progress are most commonly found in nature. To prevent the extinction of species that die out because of our negligence or direct influence, environmentalists are using all sorts of tools and practices.

WildTrack is a noninvasive method of wildlife tracking that uses machine-learning algorithms to collect and analyze data gathered from images and other noninvasively harvested information, helping us understand and keep track of endangered species. Researchers are optimistic about the potential of this technology for preserving wildlife, especially species that can’t be easily tagged or won’t stand still while someone tries to place a collar on their neck.


These are just some of the ways in which we can use AI to analyze big data gathered through IoT devices and to improve the capabilities of every smart device. We could soon live in fully automated homes that adjust to our needs and habits. The possibilities are virtually endless, and it’s up to us to make the best of what we have and to make the world a better place for us all.

Written by Donna James
Source: IoT For All
Continue reading 5 Ways AI Can Make IoT More Intelligent