Building a lawn monitor and weed detection solution with AWS machine learning and IoT services

For new home buyers, a common challenge is understanding how to manage their lawn effectively. Now imagine if you're a farmer and have to do this for many acres of land. As a farmer, some of the challenges you'd typically face include the when (when is the right time to water), the where (where exactly to water or add fertilizer), and the how (how to handle weeds). A research study conducted by the Weed Science Society of America (WSSA) estimated that combined losses in corn and soybean crops due to uncontrolled weeds would total $43 billion annually. For more information, see WSSA Calculates Billions in Potential Economic Losses from Uncontrolled Weeds on the WSSA website.

What if you could use technology and the latest advancements in the field of computer vision and machine learning (ML) to solve this problem?

This is a complex problem to solve, and many enterprises are working on a solution. This post covers how to get started and build a solution using an AWS Starter Kit. The solution has two components:

  • Weed detection using image classification and AWS DeepLens
  • Near real-time monitoring of lawn conditions (soil moisture level, fertility level, and sunlight) using AWS IoT

Prerequisites

To implement this solution, you must have the following prerequisites:

Detecting weeds using image classification

As herbicide resistance becomes increasingly common, weed control is gaining importance in agriculture. It's important to detect weeds early and take preventive action to enhance farm productivity. This is now possible with AWS DeepLens, the world's first deep-learning-enabled video camera for developers. With AWS DeepLens, you can get started in minutes with a fully programmable video camera, tutorials, code, and pre-trained models designed to expand deep learning skills. It also lets you learn and explore the latest artificial intelligence (AI) tools and technology for developing computer vision applications based on a deep learning model.

With AWS DeepLens, you can classify images as weeds or grass in real time. You can also connect AWS DeepLens to a microcontroller to trigger a spray to kill the weeds.

This post demonstrates how to detect weeds using the image classification algorithm in Amazon SageMaker. Amazon SageMaker is a fully managed ML service. With Amazon SageMaker, you can quickly and easily build and train ML models, and directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter notebook for easy access to your data sources for exploration and analysis, so you don’t have to manage servers. This post demonstrates how to train a model in the fully managed and on-demand training infrastructure of Amazon SageMaker. In addition, it also demonstrates how to deploy the trained model on AWS DeepLens.

The following architecture diagram gives a high-level overview of the weed detection solution:

This solution uses the image classification algorithm in Amazon SageMaker in transfer learning mode to fine-tune a pre-trained model (trained on ImageNet data) to classify a new dataset. Transfer learning allows you to train deep networks with significantly less data than if you had to train a model from scratch. With transfer learning, you are essentially transferring the knowledge that a model has learned from a previous task to your current task. The idea is that the two tasks are not disjoint, and you can reuse the network parameters the model has learned through its extensive training without having to do that training yourself.

The architecture includes the following steps:

  1. Download the dataset of your images consisting of weeds and grass to your local computer. Organize the images into two distinct folders on your local computer.
  2. Create two .lst files using the RecordIO tool im2rec: one for the training portion of the dataset (75%) and one for testing (25%). For more information, see Create a Dataset Using RecordIO on the MXNet website.
  3. Generate both .rec files from the .lst files and copy both .rec files (training and validation images) to an Amazon S3 bucket.
  4. Train your model using the Amazon SageMaker image classification algorithm in transfer learning mode.
  5. After you train the model, the training job uploads model artifacts to your S3 bucket. You can then deploy the trained model on AWS DeepLens for real-time inference at the edge.

The following sections describe the detailed implementation of the solution.

Understanding the dataset

The Open Sprayer images dataset on the Kaggle website includes pictures of broad-leaved docks (weeds) and pictures of the land without broad-leaved docks (grass). The dataset comes with 1,306 images of weeds and 5,391 images of grass, with a typical size of about 256 pixels by 256 pixels. The dataset provides a recommended train/validate split. The following image shows a collection of weed and grass images.
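Before creating listing files, it is worth sanity-checking the download by counting images per class folder. The following is a minimal sketch; the folder names weeds/ and grass/ are assumptions about how you organized the two folders in step 1:

```python
from pathlib import Path

IMAGE_SUFFIXES = {'.jpg', '.jpeg', '.png'}

def count_images(root):
    """Return {class_folder_name: image_count} for each subfolder of root."""
    return {
        d.name: sum(1 for f in d.iterdir() if f.suffix.lower() in IMAGE_SUFFIXES)
        for d in Path(root).iterdir() if d.is_dir()
    }
```

With the Open Sprayer dataset organized this way, you would expect roughly 1,306 images under weeds/ and 5,391 under grass/.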

Preparing the image dataset

This post demonstrates how to use the RecordIO file format for image classification using pipe mode. In pipe mode, your training job streams data directly from Amazon S3. With pipe mode, you reduce the size of the Amazon EBS volumes for your training instances. For more information, see Using pipe input mode for Amazon SageMaker algorithms. You can use the image classification algorithm with either file or pipe input modes. Even though pipe mode is recommended for large datasets, file mode is still useful for small files that fit in memory and where the algorithm has a large number of epochs.

MXNet provides a tool called im2rec to create RecordIO files for your datasets. To use the tool, you provide listing files that describe the set of images. For more information about im2rec, see the im2rec GitHub repo.

To prepare your image dataset, complete the following steps using a local Python interpreter or through a Jupyter notebook on Amazon SageMaker. You execute these steps from the path where you’ve stored your image dataset.

  1. Generate listing files using im2rec.py, passing your listing-file prefix and the image root folder as positional arguments (shown here as the placeholders <prefix> and <image_root>). See the following code:
    python3 im2rec.py --list --recursive --train-ratio .75 --test-ratio .25 <prefix> <image_root>

  2. Use the im2rec utility to create the RecordIO files by entering the following code (again with your own prefix and image root):
    python3 im2rec.py --num-thread 2 --resize 50 --quality 80 <prefix> <image_root>

    im2rec takes the following parameters:

    • list – Creates an image list by traversing the root folder and writes the result to a .lst file.
    • recursive – Recursively walks through subdirectories and assigns a unique label to images in each folder.
    • train-ratio – Ratio of images to use for training.
    • test-ratio – Ratio of images to use for testing.
    • num-thread – Number of threads to use for encoding. The greater this value, the faster the processing.
    • resize – Resize the shorter edge of an image to the new size; original images are packed by default.
    • quality – JPEG quality for encoding, 1–100; or PNG compression for encoding, 1–9 (default: 95).

For more information, see MXNet made simple: Image RecordIO with im2rec and Data Loading.

Implementing the code

To demonstrate each step of the solution, this post uses the Jupyter notebook from the GitHub repo.

Setting up your S3 bucket

Create an S3 bucket to upload the training and validation files in the RecordIO format as well as model artifacts. Use the default_bucket() session parameter from the SageMaker Python SDK. For instructions on creating an S3 bucket manually, see Creating a bucket.

bucket = sagemaker.Session().default_bucket()

The RecordIO files serve as an input to the image classification algorithm. The training data should be inside a subdirectory called train and validation data should be inside a subdirectory called validation:

def upload_to_s3(channel, file):
    s3 = boto3.resource('s3')
    data = open (file, "rb")
    key = channel + '/' + file
    s3.Bucket(bucket).put_object(Key=key, Body=data)

s3_train_key = "image-classification-transfer-learning/train"
s3_validation_key = "image-classification-transfer-learning/validation"
s3_train = 's3://{}/{}/'.format(bucket, s3_train_key)
s3_validation = 's3://{}/{}/'.format(bucket, s3_validation_key)

upload_to_s3(s3_train_key, 'Data_train.rec')
upload_to_s3(s3_validation_key, 'Data_test.rec')

For more information, see the GitHub repo.

Training the image classification model using the Amazon SageMaker built-in algorithm

With the images available in Amazon S3, you are ready to train the model. A few hyperparameters are of particular interest:

  • Number of classes and training samples
  • Batch size, epochs, image size, and pre-trained model

For more information, see Image Classification Hyperparameters.

The Amazon SageMaker image classification algorithm requires you to train models on a GPU instance type, such as ml.p2.xlarge. Set the hyperparameters to the following values:

  • num_layers = 18. Number of layers (depth) for the network. This post uses 18, but you can use other values, such as 50 and 152.
  • image_shape = 3,224,224. Input image dimensions, num_channels, height, and width for the network. It should be no larger than the actual image size. The number of channels should be the same as the actual image. This post uses image dimensions of 3, 224, 224, matching the ImageNet data the pre-trained model was trained on.
  • num_training_samples = 4520. Number of training examples in the input dataset.
  • num_classes = 2. Number of output classes for the new dataset. This post uses 2 because there are two object categories: weeds and grass.
  • mini_batch_size = 128. Number of training samples used for each mini-batch.
  • epochs = 10. Number of training epochs.
  • learning_rate = 0.001. Learning rate for training.
  • top_k = 2. Reports the top-k accuracy during training. This parameter has to be greater than 1 because the top-1 training accuracy is the same as the regular training accuracy that has already been reported.
  • use_pretrained_model = 1. Set to 1 to use a pre-trained model for transfer learning.
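The hyperparameters above can be collected into a plain dict before being passed to the training job. Note that the low-level CreateTrainingJob API expects hyperparameter values as strings; the dict itself is just a sketch:

```python
# Hyperparameters for the Amazon SageMaker image classification algorithm,
# using the values discussed above (as strings, per the CreateTrainingJob API).
hyperparameters = {
    "num_layers": "18",
    "image_shape": "3,224,224",
    "num_training_samples": "4520",
    "num_classes": "2",
    "mini_batch_size": "128",
    "epochs": "10",
    "learning_rate": "0.001",
    "top_k": "2",
    "use_pretrained_model": "1",
}
```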

After you upload the dataset to Amazon S3 and set the hyperparameters, you can start the training with the Amazon SageMaker CreateTrainingJob API. Because you’re using the RecordIO format for training, you specify both train and validation channels as values for the InputDataConfig parameter of the CreateTrainingJob request. You specify one RecordIO (.rec) file in the train channel and one RecordIO file in the validation channel. Set the content type for both channels to application/x-recordio. See the following code:

"InputDataConfig": [
        {
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": s3_train,
                    "S3DataDistributionType": "FullyReplicated"
                }
            },
            "ContentType": "application/x-recordio",
            "CompressionType": "None"
        },
        {
            "ChannelName": "validation",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": s3_validation,
                    "S3DataDistributionType": "FullyReplicated"
                }
            },
            "ContentType": "application/x-recordio",
            "CompressionType": "None"
        }
    ]
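The InputDataConfig above is one field of the larger CreateTrainingJob request. The following helper sketches how the full request body might be assembled; the field names follow the CreateTrainingJob API, but the volume size and max runtime shown are assumptions, not values from this post:

```python
def build_training_params(job_name, role_arn, image_uri, s3_output_path,
                          hyperparameters, input_data_config):
    """Assemble a request body for the SageMaker CreateTrainingJob API."""
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,
            # Pipe mode streams the RecordIO files directly from Amazon S3
            "TrainingInputMode": "Pipe",
        },
        "OutputDataConfig": {"S3OutputPath": s3_output_path},
        "ResourceConfig": {
            "InstanceCount": 1,
            "InstanceType": "ml.p2.xlarge",  # GPU instance, as required
            "VolumeSizeInGB": 50,            # assumed value
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 360000},  # assumed value
        # The API expects hyperparameter values as strings
        "HyperParameters": {k: str(v) for k, v in hyperparameters.items()},
        "InputDataConfig": input_data_config,
    }
```

You would pass the resulting dict to `sagemaker_client.create_training_job(**params)`.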

When the training job is complete, you should see the following message:

Training job ended with status: Completed

The output model is stored in the output path specified by training_params['OutputDataConfig']:

"OutputDataConfig": {
        "S3OutputPath": 's3://{}/{}/output'.format(bucket, job_name_prefix)
}

For the full code for model training, see the GitHub repo.

Deploying the model for real-time inference

You now want to use the model to perform inference. For this post, that means predicting the images as weeds or grass.

This section involves the following steps:

  1. Create a model for the training output.
  2. Host the model for real-time inference. Create an inference endpoint and perform real-time inference. This consists of the following steps:
    1. Create a configuration that defines an endpoint.
    2. Use the configuration to create an inference endpoint.
    3. Perform inference on some input data using the endpoint.

Creating a model

Create a SageMaker Model from the training output using the following Python code:

model_name="deeplens-image-classification-model"
info = sage.describe_training_job(TrainingJobName=job_name)
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_data)

hosting_image = get_image_uri(boto3.Session().region_name, 'image-classification')

primary_container = {
    'Image': hosting_image,
    'ModelDataUrl': model_data,
}

create_model_response = sage.create_model(
    ModelName = model_name,
    ExecutionRoleArn = role,
    PrimaryContainer = primary_container)

print(create_model_response['ModelArn'])

Real-time inference

You can now host the model with an endpoint and perform real-time inference.

Creating an endpoint configuration

Create an endpoint configuration that Amazon SageMaker hosting services use to deploy models. See the following code:

endpoint_config_name = job_name_prefix + '-epc-' + timestamp
endpoint_config_response = sage.create_endpoint_config(
    EndpointConfigName = endpoint_config_name,
    ProductionVariants=[{
        'InstanceType':'ml.m4.xlarge',
        'InitialInstanceCount':1,
        'ModelName':model_name,
        'VariantName':'AllTraffic'}])

Creating an endpoint

Create the endpoint that serves up the model by specifying the name and configuration you defined previously. The result is an endpoint that you can validate and incorporate into production applications. This takes approximately 9–11 minutes to complete on an m4.xlarge instance. See the following code:

endpoint_name = job_name_prefix + '-ep-' + timestamp
print('Endpoint name: {}'.format(endpoint_name))

endpoint_params = {
    'EndpointName': endpoint_name,
    'EndpointConfigName': endpoint_config_name,
}
endpoint_response = sagemaker.create_endpoint(**endpoint_params)

Wait for the endpoint to come in service and confirm its status with the following code:

response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = response['EndpointStatus']
print('EndpointStatus = {}'.format(status))
# wait until the status has changed
sagemaker.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)
# print the status of the endpoint
endpoint_response = sagemaker.describe_endpoint(EndpointName=endpoint_name)
status = endpoint_response['EndpointStatus']
print('Endpoint creation ended with EndpointStatus = {}'.format(status))

if status != 'InService':
    raise Exception('Endpoint creation failed.')

You can confirm the endpoint configuration and status on the Endpoints tab in the Amazon SageMaker console.

You can now create a runtime object from which you can invoke the endpoint.

Performing inference

To validate the model, retrieve the endpoint created in the previous operations from the client library and use it to generate classifications from the trained model. The test below uses a sample image of a weed. See the following code:

runtime = boto3.Session().client(service_name='runtime.sagemaker')
file_name = 'rsz_photo-1.jpg'
# test image
from IPython.display import Image
Image(file_name)

import json
import numpy as np
with open(file_name, 'rb') as f:
    payload = f.read()
    payload = bytearray(payload)
response = runtime.invoke_endpoint(EndpointName=endpoint_name, 
                                   ContentType='application/x-image', 
                                   Body=payload)
result = response['Body'].read()
# result will be in json format and convert it to ndarray
result = json.loads(result)
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
object_categories = ['weed','grass']
print("Result: label - " + object_categories[index] + ", probability - " + str(result[index]))

Result: label - weed, probability - 0.9998767375946045

Running your model at the edge using AWS DeepLens

AWS DeepLens lets you experiment with deep learning at the edge, which gives you an easy way to deploy trained models and use Python code to come up with interesting applications. For your weed identifier, you could mount an AWS DeepLens device on a wire overlooking your lawn. The device feeds cropped images of detected weeds and crops to Amazon S3. It could even trigger a text to your mobile phone when it detects weeds.

An AWS DeepLens project consists of a trained model and an AWS Lambda function. The function uses AWS IoT Greengrass on AWS DeepLens to perform the following tasks:

  • Capture the image from a video stream
  • Perform an inference using that image against the deployed ML model
  • Provide the results to AWS IoT and the output video stream

AWS IoT Greengrass lets you execute Lambda functions locally, which reduces the complexity of developing embedded software. For more information, see Create and Publish an AWS DeepLens Inference Lambda Function.

When you use a custom image classification model produced by Amazon SageMaker, there is an additional step in your AWS DeepLens inference Lambda function. The inference function needs to call MXNet’s model optimizer before performing any inference using your model. To optimize and load the model, see the following code:

model_path = '/opt/awscam/artifacts/image-classification.xml'
error, model_path = mo.optimize(model_name, 224, 224, aux_inputs={'--epoch': 10})
model = awscam.Model(model_path, {'GPU': 1})

Performing model inference on AWS DeepLens

Model inference from your Lambda function is very similar to the previous steps for invoking a model using an Amazon SageMaker hosted endpoint. The following Python code finds weeds and grass in a frame that the AWS DeepLens video camera provides:

frame_resize = cv2.resize(frame, (512, 512))

# Run the image through the inference engine and parse the results using
# the parser API. It is possible to take the output of doInference
# and do the parsing manually, but for common model types such as this
# image classification model, a simple API is provided.
parsed_inference_results = model.parseResult(
                                 model_type,
                                 model.doInference(frame_resize))

For a complete inference Lambda function to use on AWS DeepLens with this image classification model, see the GitHub repo. The Lambda function uses the AWS IoT Greengrass SDK to publish text-based output to an IoT topic in JSON format, which you can view on the AWS IoT console.
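The JSON message that the inference function publishes can be sketched as a small helper. The field names here are illustrative assumptions, not the exact schema used by the repo, and the actual publish call via the Greengrass SDK client is omitted:

```python
import json

def format_inference_message(label, probability):
    """Build the JSON payload published to the IoT topic for one inference.

    In the Lambda function, the result would be sent with something like
    iot_client.publish(topic=..., payload=format_inference_message(...)).
    """
    return json.dumps({"label": label, "probability": round(float(probability), 4)})
```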

The following picture shows the sample setup:

Near real-time monitoring of the farm conditions

The right amount of water and nutrients are key to improving farm productivity. One of the ways you can achieve that is through real-time monitoring of the field, which helps you save costs on water and fertilizer consumption. The information collected from real-time monitoring can help you identify when and where to water or fertilize the soil.

To monitor the soil conditions, this post uses four soil sensors on a 2-foot-by-1-foot plant bed. These sensors detect the current moisture level and fertility of the soil, the light received, and the temperature of the surroundings. A Bluetooth-enabled Raspberry Pi polls the sensors, collects this information, and sends it to AWS IoT Core. (For actual production over larger distances, you should use a longer-range protocol such as LoRa.) An IoT rule collects the data and forwards it to Amazon Kinesis Data Analytics for real-time analysis. When the moisture level for a specific part of the farm falls below a threshold value, the system triggers an alert notification. You can also use this information to automate control of the sprinkler system.

The system also has built-in logic to poll the local weather conditions. You can use this information to decide when and if to turn the sprinklers on.

Architecture

The following diagram illustrates the architecture of this workflow.

The architecture includes the following high-level steps:

  • Raspberry Pi acts as an edge gateway device and collects information from soil moisture sensors via Bluetooth.
  • After authentication, Raspberry Pi subscribes to an IoT topic and starts publishing messages to it.
  • Using the IoT rule engine, data moves to Amazon Kinesis Data Firehose, which has two destinations: Amazon S3 for persistent storage and Kinesis Data Analytics for real-time processing.
  • The output of Kinesis Data Analytics is a Lambda function, which triggers SNS alerts when the moisture level falls below the threshold value.
  • Finally, you use the Amazon S3 data to build an Amazon QuickSight dashboard.

To prepare this architecture, you need a Raspberry Pi and soil moisture sensors.

Setting up your IoT device

To set up a Raspberry Pi to generate the data, complete the following steps:

  1. Configure the Raspberry Pi and connect it to AWS IoT. For more information, see Using the AWS IoT SDKs on a Raspberry Pi.
  2. Download the latest version of the AWS IoT Python SDK on the Raspberry Pi over SSH (or a PuTTY command prompt), using pip3. See the following code:
    pip3 install AWSIoTPythonSDK

    Use Python 3.4 or higher because the btlewrap library used for the miflora sensors only works with Python 3.4 or above.

  3. Download the miflora sensor library to talk to sensors with the following code:
    pip3 install miflora

    The code also automatically downloads the btlewrap library required for miflora. For more information, see the GitHub repo.

  4. Download the farmbot.py script from GitHub and update the following parameters:
    • clientID – IoT and IoT shadow client ID (from step 1)
    • configureEndpoint – IoT and IoT shadow endpoint from your AWS Management Console
    • configureCredentials – IoT and IoT shadow credentials location on Raspberry Pi (from step 1)
    • Sensors – Update the MAC address values of the sensors
  5. SSH into the Raspberry Pi and start the farmbot.py script. The script polls the soil moisture sensors configured in the previous step. See the following code:
    python3 farmbot.py
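The message published for each sensor poll can be sketched as follows. The field names match the columns consumed later by the Kinesis Data Analytics SQL (Moisture, Temperature, Name, Conductivity, Light, Battery, DateTime), but the exact schema of farmbot.py is an assumption:

```python
import json
import time

def sensor_payload(name, moisture, temperature, conductivity, light, battery):
    """Build the JSON message published to the IoT topic for one sensor poll."""
    return json.dumps({
        "Name": name,
        "Moisture": moisture,          # percent
        "Temperature": temperature,    # degrees Celsius
        "Conductivity": conductivity,  # uS/cm, a proxy for soil fertility
        "Light": light,                # lux
        "Battery": battery,            # percent
        "DateTime": time.strftime("%Y-%m-%d %H:%M:%S"),
    })
```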

Creating a Firehose delivery stream with Lambda transformation

This post uses an Amazon Kinesis Data Firehose delivery stream to stream the data. You use the data from the Firehose delivery stream as input for Kinesis Data Analytics and also store it in Amazon S3 for building an Amazon QuickSight dashboard. To create your delivery stream, complete the following steps:

  1. On the Kinesis Data Firehose console, choose Create delivery stream.
  2. Create a Firehose delivery stream with the name IoT-Source-Stream.
  3. Select the default source, Direct PUT or other sources, and leave Server-side encryption disabled (the default).
  4. Choose Next.
  5. For Transform source records for AWS Lambda, select Enabled.
  6. Choose Create new.
  7. Using this Lambda function, add a newline character to the end of each record sent from the delivery stream to Amazon S3. For more information, see the GitHub repo.

This is required to create a visualization in Amazon QuickSight. See the following code:

'use strict';
console.log('Loading function');

exports.handler = (event, context, callback) => {
  
    /* Process the list of records and transform them */
    /* The following must be the schema of the returning record 
      Otherwise you will get processing-failed exceptions
      {recordId: , result: 'Ok/Processing/Failed', data:  } 
    */ 
    const output = event.records.map((record) => ({
        /* Append a base64-encoded newline ("Cg==") to each record's data */
        recordId: record.recordId,
        result: 'Ok',
        data: record.data + "Cg==",
    }));
    console.log(`Processing completed.  Successful records ${output.length}.`);
    callback(null, { records: output });
};
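The `record.data + "Cg=="` line works because "Cg==" is the Base64 encoding of a newline character. When the record's own Base64 carries no padding (its decoded length is a multiple of 3), the concatenation decodes cleanly to the original bytes plus a newline; a quick check in Python (used here since the rest of the post's code is Python):

```python
import base64

# A 15-byte payload (a multiple of 3) encodes to Base64 with no '=' padding,
# so appending "Cg==" and decoding yields the original bytes plus "\n".
record_data = base64.b64encode(b'{"Moisture":21}').decode()
assert not record_data.endswith("=")
decoded = base64.b64decode(record_data + "Cg==")
```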

  8. For Convert record format, select Disabled.
  9. Choose Next.
  10. For Destination, choose Amazon S3.
  11. Create a new S3 bucket named -kinesis.
  12. Keep all other fields default.
  13. Choose Next.
  14. Under S3 buffer conditions, for Buffer interval, enter 60 seconds.
  15. For Buffer size, enter 1 MB.
  16. Choose Create an IAM role. Kinesis Data Firehose uses this role to access your bucket.
  17. Keep all other fields default.
  18. Choose Next.
  19. Review all the fields and choose Create delivery stream.

Setting up AWS IoT to forward data to the delivery stream

To forward data to your delivery stream, complete the following steps:

  1. On the AWS IoT Core console, choose Act.
  2. Choose Create rule.
  3. Create a new AWS IoT rule with the following field values:
     Name: IoT_to_Firehose
     Attribute: *
     Rule query statement: SELECT * FROM 'farmbot/#'
     Add action: Send a message to an Amazon Kinesis Data Firehose stream (choose IoT-Source-Stream from the Stream name drop-down)
     Separator: \n (newline)

  4. For the IAM role, choose Create new role.

The console creates an IAM role with the appropriate permissions for AWS IoT to access Kinesis Data Firehose. If you prefer to use an existing role, select the role from the drop-down menu and choose Update role. This adds the required permissions to your selected IAM role.

Creating the destination for the analytics application

To create the destination for the analytics application, complete the following steps:

  1. On the Amazon SNS console, create a topic and subscribe to it via email or text. For more information, see steps 1 and 2 of Getting Started with Amazon SNS.
  2. Copy the ARN for the SNS topic in a text file.

Creating an analytics application to process the data

To create your analytics application, complete the following steps:

  1. On the Amazon Kinesis console, choose Data Analytics.
  2. Choose Create application.
  3. For your application name, enter Farmbot-application.
  4. For your runtime, select SQL and choose Create application.
  5. For the source, choose Connect streaming data and select Choose source.
  6. Choose Kinesis Data Firehose delivery stream.
  7. Choose IoT-Source-Stream.
  8. Leave Record pre-processing with AWS Lambda disabled.
  9. Under Access Permission, let the console create and update an IAM role to use with Kinesis Data Analytics.

If you already have an appropriate IAM role that Kinesis Data Analytics can assume, choose that role from the drop-down menu.

  10. Choose Discover Schema.
  11. Wait for the application to show results and then choose Save and continue.

If you have configured everything correctly, you see a table with the sensor names, parameters, and timestamp. See the following screenshot.

  12. For real-time analytics processing, choose Go to SQL editor.
  13. Enter the following parameters:
  • SOURCE_SQL_STREAM_001 – Contains name and sensor parameters with value and timestamp from the incoming stream.
  • INTERMEDIATE_SQL_STREAM – Contains all records with a Moisture value below 25, filtered down to fewer parameters (for this post, Moisture and Conductivity).
  • DESTINATION_SQL_HHR_STREAM – Performs functions on the aggregate row over a 2-minute sliding window for a specified column. It detects if a sensor is constantly reporting low moisture level for 2 minutes.

See the following example code:

-- Select the seven columns of IoT data to send from the source stream to the destination
SELECT STREAM "Moisture","Temperature","Name","Conductivity","Light","Battery","DateTime" FROM "SOURCE_SQL_STREAM_001";


CREATE OR REPLACE STREAM "INTERMEDIATE_SQL_STREAM" (Moisture INT, Conductivity INT, Name VARCHAR(16));
-- Create pump to insert into output
CREATE OR REPLACE PUMP "STREAM_PUMP_001" AS INSERT INTO "INTERMEDIATE_SQL_STREAM"

-- Select all columns from source stream
SELECT STREAM "Moisture","Conductivity","Name"
FROM "SOURCE_SQL_STREAM_001"
-- LIKE compares a string to a string pattern (_ matches all char, % matches substring)
-- SIMILAR TO compares string to a regex, may use ESCAPE
WHERE "Moisture" < 25;


-- ** Aggregate (COUNT, AVG, etc.) + Sliding time window **
-- Performs a function on the aggregate rows over a 2-minute sliding window for a specified column.
--          .----------.   .----------.   .----------.
--          |  SOURCE  |   |  INSERT  |   |  DESTIN. |
-- Source-->|  STREAM  |-->| & SELECT |-->|  STREAM  |-->Destination
--          |          |   |  (PUMP)  |   |          |
--          '----------'   '----------'   '----------'

CREATE OR REPLACE STREAM "DESTINATION_SQL_HHR_STREAM" (Name VARCHAR(16), high_count INTEGER);
-- Create a pump that continuously selects from the intermediate stream and
-- performs an aggregate count grouped by sensor name over a 2-minute sliding window
CREATE OR REPLACE PUMP "STREAM_PUMP_002" AS INSERT INTO "DESTINATION_SQL_HHR_STREAM"
SELECT STREAM *
FROM (
      SELECT STREAM 
             Name, 
             COUNT(*) OVER TWO_MINUTE_SLIDING_WINDOW AS high_count
      FROM "INTERMEDIATE_SQL_STREAM"
      WINDOW TWO_MINUTE_SLIDING_WINDOW AS (
        PARTITION BY Name
        RANGE INTERVAL '2' MINUTE PRECEDING)
) AS a
WHERE (high_count > 3);


On the Real-time analytics tab, you can see the results of the SQL query. See the following screenshot.
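For clarity, the alert logic that DESTINATION_SQL_HHR_STREAM implements can be sketched in plain Python: flag a sensor once more than 3 low-moisture readings fall inside a trailing 2-minute window. This is a sketch of the logic only, not code from the solution:

```python
from collections import deque

def low_moisture_alert(readings, threshold=25, window_seconds=120, min_count=4):
    """Return True once min_count readings below threshold fall within the
    trailing window (high_count > 3 in the SQL means at least 4 readings).

    readings: iterable of (timestamp_seconds, moisture) for one sensor, in time order.
    """
    window = deque()  # timestamps of recent low-moisture readings
    for ts, moisture in readings:
        if moisture < threshold:
            window.append(ts)
        # Drop low readings that have fallen out of the trailing window
        while window and ts - window[0] > window_seconds:
            window.popleft()
        if len(window) >= min_count:
            return True
    return False
```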

Connecting the destination for the analytics application

To connect the destination for your analytics application, complete the following steps:

  1. For Destination, select AWS Lambda Function.
  2. Create a new Lambda function and choose the Lambda blueprint.
  3. In the search bar, enter SNS.
  4. Choose kinesis-analytics-output-sns.
  5. Enter the function name.
  6. Select the output format as JSON.
  7. For the execution role, select Create a new role with basic Lambda permissions.

The Lambda function processes the results of the application and sends the results to the SNS topic you created.

  8. Modify the following function code and add the topic ARN you recorded:
    import base64
    import json
    import boto3

    snsClient = boto3.client('sns')
    print('Loading function')

    def lambda_handler(event, context):
        print('Number of records {}.'.format(len(event['records'])))
        for record in event['records']:
            # Kinesis data is base64 encoded, so decode it here
            payload = json.loads(base64.b64decode(record['data']))
            print(payload)
            response = snsClient.publish(
                TopicArn='arn:aws:sns:::',
                Message='Based on current soil moisture level you need to water your lawn! ',
                Subject='Lawn maintenance reminder ',
                MessageStructure='string',
                MessageAttributes={
                    'String': {
                        'DataType': 'String',
                        'StringValue': 'New records have been processed.'
                    }
                }
            )
        return 'Successfully processed {} records.'.format(len(event['records']))

  9. Choose Create function.
  10. On the Kinesis Data Analytics console, under In-application stream, select Existing in-application stream.
  11. From the drop-down menu, choose DESTINATION_SQL_HHR_STREAM with JSON output format.
  12. Under Access permission, select Create / update an IAM role to use with Kinesis Data Analytics.

If you already have an appropriate IAM role that Kinesis Data Analytics can assume, choose that role from the drop-down menu.

Connecting Amazon QuickSight for data visualization

To build an Amazon QuickSight dashboard, you need to ingest the JSON raw data from Kinesis Data Firehose. Complete the following steps:

  1. On the console, choose QuickSight.

If you are using Amazon QuickSight for the first time, you must create a new account.

  2. After you log in, choose New analysis.
  3. Choose New data set.
  4. From the available sources, choose S3.
  5. Enter a data source name.
  6. Upload the manifest file.

The manifest file provides the location of the S3 bucket and the format of data in Amazon S3. For more information, see Supported Formats for Amazon S3 Manifest Files. See the following example code:

{
    "fileLocations": [
        {"URIPrefixes": ["https://s3.amazonaws.com//data/////"]}
    ],
    "globalUploadSettings": {
        "format": "JSON"
    }
}

Amazon QuickSight imports and parses the data.

  1. To transform and format the ingested data, choose Edit/Preview data.
  2. After you are done formatting, choose Save and visualize data.

To build an analysis on the visualization screen, complete the following steps:

  1. For Visual type, choose Line Chart.
  2. Drag the DateTime field to the X-axis field well.
  3. Drag Light to the Value field well.
  4. Drag Name to the Color field well.

This builds a dashboard that shows the amount of light over a period of time for the four sensor locations.

  1. Change the DateTime aggregate option from DAY to MINUTE.
  2. Change Light from Sum to Average.
  3. Choose the format visual option to change the X-axis, Y-axis, or legend properties.

The following graph shows the average light over time. You can similarly build one for moisture, temperature, and soil fertility level.

Conclusion

This post showed how to use AWS DeepLens and the built-in image classification algorithm in Amazon SageMaker to detect weeds from grass based on a publicly available dataset and build a real-time lawn monitoring system using Amazon Kinesis and AWS IoT. You can also visualize data using Amazon QuickSight. You can clone and extend this example for your own use cases.


About the Authors

Ravi Gupta is an Enterprise Solutions Architect at Amazon Web Services. He is a passionate technology enthusiast who enjoys working with customers and helping them build innovative solutions. His core areas of focus are IoT, analytics, and machine learning. In his spare time, Ravi enjoys spending time with his family and photography.

Shayon Sanyal is a Senior Data Architect (Data Lake) with Global Financial Services at Amazon Web Services. He enables AWS customers and partners to solve complex and challenging AI/ML, IoT, and Data Analytics problems and helps them realize their vision of a data-driven enterprise. In his spare time, Shayon enjoys traveling, running and hiking.

Read More

Posted on

Why Mastering Delegation is Crucial in the Gig Economy

In my own life, I’ve seen massive shifts in delegation since I began to engage regularly with freelancers. I get more done, and the projects benefit from the contributions of experts. I also get to spend more time with my family, doing things we all enjoy.

I also spend a lot of my time writing and teaching on the Gig Mindset. Both writing and teaching require a great deal of communication. In addition to speaking at conferences, I deliver a weekly newsletter to more than 60,000 people and record regular podcasts.

I call this approach the Gig Mindset.

The Gig Mindset involves making my network of freelancers my first port of call when I have something to accomplish. Every element of the Gig Mindset takes practice. By far, the hardest, in my experience, is delegation.

Delegation requires you to let others run with your ideas.

It’s difficult because, to delegate successfully, you need to be willing to give up control. It takes courage to change your mindset, trust in people with diverse backgrounds, and radically reinvent how you work and live.

To truly engage with what is possible in the Gig Economy, however, delegation is essential. Let’s break down what I mean by delegation in a Gig Mindset context, and why it’s such an important skill.

Giving up Control

What is delegation? When I use the term, I’m not simply saying that you tell a freelancer what you want to be done. Imagine you’ve ordered an Uber. The app allows you to plot your journey and dial in the exact spot to be dropped off. When you get in the car, you could ride the entire way in silence. Maybe, as you approach your destination, you offer a few bits of guidance for those last few blocks. Think about the trust you just placed in your driver.

Could you have sat down in the passenger seat, app out, playing navigator the entire time? Sure. You could also just drive yourself if you need to have that much control. Delegation means stepping back from the driver’s seat and trusting your freelancer to follow directions and ask questions if they get stuck.

Right now, I have people who do web research and gather data to support my arguments and narratives around a variety of topics. I have an expert who does motion graphics. I have another editor for videos. Sometimes I need graphs and charts based on the data I’ve sourced to support articles and newsletters.

All of these tasks represent someone I’ve delegated to. Someone I’ve trusted to run with my instructions. This is the type of delegation that will allow you to thrive in the Gig Economy.

Conferring Authority on Your Team

The reason delegation is difficult is that it requires you to assign both responsibility and authority to the freelancers with whom you work.

Responsibility is easy. When you hire someone full time, you are giving them responsibility. It’s part of their job description. When you delegate to someone, you are assigning them authority. They can make decisions based on your instructions and your intent. You are trusting them to make the right choices in pursuit of a shared goal, for which you are ultimately accountable. For so many people I’ve met, that is the scariest thing imaginable.

I can’t emphasize enough how hard that idea was for me and still is for people who are beginning to work without a shared context. Human beings have difficulty seeing how delegation can be a blessing.

If we’re honest with ourselves, it looks like a threat. We all tell ourselves that we’re the only person who can do a given task. If we don’t do it personally, it just won’t get done. Or it won’t get done right.

Anyone who has managed a team of people knows that delegating won’t lead to the same end result as doing something yourself. But I’ll bet you also have experience of getting things done through delegation.

No matter the task, you and your team pulled through. And that diversity of thought made the project better. Different doesn’t equal worse. Working with a wide group of people adds new voices and perspectives and helps find new solutions to a variety of challenges.

When I engage with freelancers and bring together a wider team, I gain knowledge. My life experience is limited to my gender, my race, the neighborhood I grew up in, and the schools I attended, as well as the companies and industries where I worked.

What looks “right” to me is fixed and rigid. Adding in the perspectives of people from around the world teaches me how to connect on a whole new level. It makes market research better, product designs better, and forces me to improve my management and communication skills.

The Gig Mindset is not a shallow pond. You can’t just dip in your toes, play around, and then go back to your old lifestyle. In fact, you have to come to this with a little faith, the belief that this will work. You have to lean into it, dive into the deep end with the expectation that—for just a moment—you will be completely underwater.

How to Communicate Effectively

The number one challenge, the number one place where people struggle, is communication. I’ve seen it from thousands of people. They struggle with how to communicate their expectations to someone who may not have a shared context; to give up control and trust.

The “control” problem isn’t exclusive to business relationships. Control affects millions of intimate relationships too. Couples counseling is a huge industry in the United States because of the “control” issue.

When people begin delegating, they have to articulate those tasks. They have to provide specific instructions and then just walk away. For a lot of people, this is new. It’s easy to sit in a meeting and just talk, but far more difficult to write a descriptive project brief to delegate.

It’s understandable. Your tasks are so innate to you. If you closed your eyes, you could picture every detail. Now you have to work with someone who doesn’t share that context, and you have to place all your hopes and expectations into them. It’s a real learning process.

Delegation isn’t just saying, “Go do this.” It’s building expectations, setting timelines, and really engaging with these experts. To delegate is to get your vision on paper with examples of things that inspire you.

Effective delegation is inviting the other individual to provide guidance to you on how they work and feedback on your thinking. It’s trusting that they are professionals and want to deliver the very best. Most of all, it’s about having an open and curious mind throughout the process.

Delegation in Action

Let’s use an example to illustrate how delegation works, and how it doesn’t. You need to cater a working lunch for a group of ten. Now, if you were to ask a virtual assistant to find a place to eat, you’d get back a pretty bland response. Maybe something on their list would fit your needs, but it would be a roll of the dice. What about dietary restrictions or allergies? In this case, you’ve provided too little information and context to expect a good result.

So, you go back to the freelancer, but you ask a more detailed question: “I’d like somewhere to order lunch. It needs to be within fifteen miles of my office, my boss prefers Italian, and it needs to be vegan-friendly. Also, we are capped at $30 a person.”

You’ve provided the same request but with context. You want something specific, but not so specific that the request is redundant. If I engaged a virtual assistant and said, “I’d like to eat at McDonald’s tonight,” I’ve wasted our time and my money.

The sweet spot lies in providing enough information to your freelancer for them to come back with specific recommendations that meet your needs, but not so much that their input is redundant. To delegate effectively, you need to know what you want, create a brief, then trust an expert to fulfill that brief.

A Delegation Revolution

To excel at delegation, you need to be clear about what you gain. It is all about your relationship with time. You have to go back and look at all the tasks for this project. What are the trade-offs? What has to go? No matter what you do in life, your time is finite.

Whether you work in the mailroom or the top-floor corner office, you have the same number of hours in a day. You can’t do everything you want. You can’t even do all the tasks you need to do, at least not alone. So, you need to start looking at your life and selecting those items you can delegate out. What can you give up, relinquish all control of, so you can have more time and space?

Radical delegation is about practice. Start delegating with small tasks, which leads you to more complex tasks. Do a couple of projects in the virtual system. Engage with a virtual assistant on one of the platforms and practice giving detailed instructions. You’re not writing pages and pages of notes, just a few bulleted guidelines.

Giving up control is hard. It takes time. But it gets easier as you build your trusted network of freelancers. The goal is to find your tribe. After a while, you will see that your value isn’t the control. Your value comes with the exponential opportunities you create by engaging with these experts.

When you empower your employees to use the Gig Mindset, you add a force multiplier to your team. Each person becomes an engine of activity, bringing in expertise that you couldn’t have expected before.

Do you have a special project you want to do at work? Or a family activity that keeps being pushed back? Or a trip to visit family and friends? Find a freelancer, brief them thoroughly, then stand back and allow them to do their job.

For more advice on mastering delegation, you can find Gig Mindset on Amazon.

Paul Estes

Paul Estes is an unstoppable advocate for the gig economy who is dedicated to creating opportunity for everyone, reskilling by doing, and bringing diversity to our work. After twenty years of driving innovation in Big Tech (Dell, Microsoft, Amazon), Paul transitioned into working as an independent, remote freelancer. He shares his insights from main stages as a keynote speaker and offers his thoughts and advice through articles on LinkedIn. By engaging with freelancers, Paul gets exponentially more done at work and has more time for his wife and two daughters. He’s the author of the best-selling book, Gig Mindset: Reclaim Your Time, Reinvent Your Career, and Ride the Next Wave of Disruption.

Read More

Posted on

Make a Plan Before Building Your Mobile App

With the rise of smartphones, demand for mobile app development companies and mobile app developers has risen as well. The market advantages of a mobile app should convince every organization to seek out professionals who can build apps to fulfill its requirements.

The mobile market is divided into two forms of businesses.

The first kind of company already has an app and continuously innovates and upgrades it to support its buyers. This company works to enhance its business and grab the attention of new buyers.

The second kind of company, which may or may not be a startup, is new to the market and ready to showcase its business. This company is ripe for reaching audiences through mobile apps.

Here are the essential things to consider before developing a mobile app.

  • Perform thorough research.

    Initially, you need to do thorough market research on apps that offer services similar to yours. The research will give you a clear idea of which features succeed and which ones customers never use.

    Go through customer reviews of similar apps and find out what extra features customers want. Look for complaints, and make your application work better and run more smoothly.

  • Make sure that it does not gobble up data.

    Mobile data is expensive for users, so if you want to increase your app’s popularity, minimize its data consumption. Also, some apps need a continuous internet connection to function while others work offline, so decide which kind of app you will build.

  • Plan to offer something unique to the customer.

    There are millions of apps on both the Android and iOS platforms. If you want to develop something that people use, design unique features that set your app apart from the others.

  • Use animations.

    Animations are an attractive feature, but they take time to load, and slow load times give users a poor impression of your app.

    So use animations to significantly improve the overall user experience and give your customers the fun they’re looking for, but stay away from slow load times no matter what, and make sure your app works correctly without crashing.

  • Plan monetizing your app.

    While developing a mobile app, it is essential to think carefully about how you will price it. Many mobile app development companies hire developers on a contract basis; be genuine when setting remuneration.

  • Recognize your app users.

    Before launching the app, it is wise to identify your app users. Once you are okay with your target audience, this will help you to know what exactly they want from your app. Your target audience also helps to narrow down the features you’ll include in your first launch of the app. Study and understand your target audience.

  • Choose the expert development team in one platform.

    Choose a mobile app development company that is expert in developing mobile apps across various domains on both platforms, iOS and Android. Check all of your developer’s past work: were their apps successful, and did those apps gain popularity in the mobile app industry?

  • Always think about the right marketing strategies.

    Marketing strategies are essential. Your app may have several unique features and still fail to gain market traction. To get good market results, start marketing campaigns two or three weeks (or even months) before launching the app.

  • Test the app thoroughly before launching.

    Before launching the app, be sure that all the bugs are fixed. Bugs can cost you users and the goodwill of your customers, so make sure every bug is resolved.

  • Write a catchy description.

    An excellent app description helps users find all the information about the app. The Play Store also allows the developer to add a few lines of story about the app. A developer will not describe your app as well as a writer, so write some fun details about your app, and make an eye-catching graphic to grab visitors’ attention.

  • Plan the development in your budget.

    App development is not free. This is the time to spend a lump sum on your app developers to give shape to your business idea. Development involves multiple stages, so plan your budget to cover all of them.

Conclusion

This list is just a quick overview of the app-building process. Your users will ultimately determine the success of your investment in mobile app development. Make sure your app is one that will be recognized and loved by your audience.

Image Credit: cottonbro; Pexels

Jyothi Amarthi

Digital Marketing Analyst at Krify Software Technology – A leading web and mobile app development company.

Read More

Posted on

AI’s Man Behind the Curtain

As the world grows increasingly connected, growing concern regarding the influence of artificial intelligence (AI) has been bubbling to the surface, affecting perceptions by industries big and small along with the general populace. Spurred on by sensationalized media predictions of AI taking over human decision-making and silver-screen tales of robot revolutions, there is a fear of allowing AI or its cousin, the Internet of Things (IoT), into our lives. Here is AI’s man behind the curtain.

One of the biggest sticking points is the popular – yet mistaken – notion that AI will cost people their jobs. In truth, the situation is just the opposite. The real future of AI isn’t one where people are replaced, but where man and machine work in tandem to cover one another’s weaknesses. AI isn’t a job thief – it’s a job creator.

How do we know? Simple – it’s all happened many times before. Industrial hype cycles have traced very similar lines in the past, from industrialization to the internet. In all cases, many people were certain these new technologies would put people out of work. Of course, this never came to pass, and technology always ended up creating a net gain in job creation.

Learning from History

Consider the automated teller machine, or ATM. The fear lay in the name: people were worried the introduction of these devices would render human tellers obsolete. However, the reality turned out to be the exact opposite. The streamlined service of ATMs allowed banks to open more branches than ever before, which meant – you guessed it – far more tellers were employed than before the introduction of ATMs.

John Hawksworth, the chief economist at PwC, said in a 2018 analysis that AI and robots, much like the inventions of the steam engine and computers before them, will displace jobs yet simultaneously generate large productivity gains. This spike in productivity brings prices down, raises real income, and thus creates demand for more workers. The firm predicts that in a post-AI world, some sectors will see job creation soar by as much as 20 percent.

To take this analysis further, the World Economic Forum predicted in a 2018 study that the complete integration of artificial intelligence would displace 75 million jobs – but, critically, would also result in the creation of 133 million new jobs. The net gain is clear just as it has been with many other technological innovations throughout history.

Also, it should be noted that some business problems will always necessitate the human touch, as even the most advanced AI or the most well-connected IoT device will come up short.

People and Machines in Tandem

The “conversational guidance” software that’s now being rolled out to call centers around the U.S. is a good example of what AI’s future could look like. These programs use speech recognition AI to measure cues from both sides of a phone call, advising representatives on how to maximize customer satisfaction.

Speech recognition accomplishes this feat by indicating if the rep is talking too slow or fast, is taking too coarse of a tone, sounds tired or bored, and so on. The automatic speech recognition can also pick up on whether a customer is getting frustrated and guide the rep to empathize. Some firms incorporating the technology have already reported an increase in customer satisfaction rates as high as 13 percent.

AI can be seen playing a similar supportive role in areas such as design – from industrial 3D products to graphics and website user experience. Utilizing big data and speedy calculations, AI can advise on subtle design elements that will make customers more likely to engage with a product, platform, or service. For example, it may determine that a web designer would be better served to place a button on the top right of a page, rather than the bottom left, since doing so leads to higher engagement and conversion rates.

It’s not all hard data and calculations though – there’s plenty of room for fun and creativity in this new space, too. Few likely know this better than the founders of “AI Cocktails,” a creative AI toolkit that uses neural networks to create inspirational new drinks born of AI and human collaboration.

Using hundreds of traditional cocktail recipes as its base knowledge, the AI outputs a list of wild combinations that – with a little tweaking from a human bartender – can surprise and delight the taste buds. Who would have guessed that rum, wine, and vanilla ice cream would make such a scrumptious combo?

As we’re seeing now, something truly special emerges when human ingenuity combines with statistical insights that AI can provide us. If we continue to engage and integrate, a slew of longstanding societal challenges – energy waste, traffic, disease, and much more – could be tackled by this newly formed dynamic duo of man and machine.

The notion that technology is a looming threat to labor fails to consider its role as a job creator. Already, businesses are beginning to incorporate AI and IoT to make their services and products more innovative, efficient, profitable, and safe.

The future of AI is in fact already here, and those who have moved past their misconceptions and misgivings to embrace it will reap massive benefits.

Image credit: Pexels

Barry Po

A veteran of both startups and enterprise business, Barry has led global product teams operating in over 80 countries and has held leadership roles at some of the world’s most valuable brands. Prior to Universal mCloud, he was head of product, marketing, and business development at NGRAIN, where he played a key role in taking the business to a successful exit. Barry graduated from the University of British Columbia with a Ph.D. in Computer Science in 2005.
His accomplishments have been recognized through numerous awards. He held an NSERC Industrial R&D Fellowship, is a two-time winner of the annual Communications Award from the B.C. Advanced Systems Institute (now the B.C. Innovation Council), and was a nominee for the Governor-General’s Gold Medal. In the press, he has appeared in Popular Mechanics, Inc Magazine, and Singularity Hub.
Barry is active in the Vancouver high-tech community as a guest speaker, mentors budding entrepreneurs and innovators, and serves on the Dean’s Advisory Board in the Faculty of Communications, Art, and Technology at Simon Fraser University.

Read More

Posted on

Why organizations need IoT data scientists on their team

Data scientists increasingly play mission-critical roles in IT, as big data continues to surge and predictive systems grow more essential to organizations. However, data scientists aren’t a one-size-fits-all proposition, and different skill sets work with different IT domains. Now, the IT domain that most needs a data scientist is IoT.

IoT architecture differs from conventional IT and cloud architecture because of its broad distribution of devices and networking intricacies.  The differences in the kind of data on the edge and how it must be processed are just as important as the architecture. Data processing is where IoT data scientists can significantly influence both the quality and use of data.

The challenges of working with data at the edge can be well-mitigated by an IoT data scientist who has the right skills for the job. Organizations that invest in an IoT data scientist will see improvements in several areas. The IoT data scientist’s diverse knowledge base takes pressure and time off IT during project deployment and testing. IT can reconsider and adapt decisions about data management and algorithm application on an ongoing, accelerated basis, rather than wait until the system is ready for formal testing. An IoT data scientist can also develop a complete understanding of the IoT system’s behavior, both operationally and in the abstract potential of its output.

Edge data creates unique challenges

IoT data scientists must understand the differences in the processing and management of data on the edge, where IoT happens, versus traditional infrastructure. The following four aspects show the contrasts:

Preprocessing of data. Data doesn’t flow out of IoT in tidy, well-formatted records, as it does in conventional systems. IoT data is often sparse or incomplete, subject to the whims of the environment and the state of the machine producing it, and it varies under changing conditions. The data is frequently temporal and time-sensitive. IoT data scientists can apply deep learning to spot conditional shifts in data patterns, make predictive assessments of data quality, and fill in the gaps as needed.
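To make the gap-filling idea concrete, here is a minimal sketch (my own illustration, with hypothetical timestamps and soil-moisture values) that uses pandas to put a sparse, irregularly timed sensor feed onto a regular grid and interpolate the missing slots:

```python
import pandas as pd

# Hypothetical sparse, irregularly timed soil-moisture readings
readings = pd.DataFrame(
    {"moisture": [31.0, 30.2, 28.9]},
    index=pd.to_datetime(
        ["2024-05-01 00:00", "2024-05-01 00:07", "2024-05-01 00:20"]
    ),
)

# Resample onto a regular 5-minute grid; empty slots become NaN
regular = readings.resample("5min").mean()

# Fill the gaps by time-weighted linear interpolation
filled = regular.interpolate(method="time")

print(filled)
```

Time-weighted linear interpolation is just a simple baseline; in practice the fill strategy would be chosen per signal, and a learned model could replace it for signals with strong conditional patterns.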

Sensor fusion. Increasingly, the state of a machine or a process depends on many IoT sensor inputs. The challenge is to integrate the data from disparate devices meaningfully to boost the quality and mitigate the uncertainty of individual results. Data scientists often must customize data integration, which requires specialized methodology to achieve and validate.
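One standard fusion recipe, shown here as a generic sketch rather than anything prescribed in this post, is inverse-variance weighting: each sensor's estimate contributes in proportion to how trustworthy it is, and the fused estimate ends up less uncertain than any single input. The sensor values below are hypothetical.

```python
# Inverse-variance weighted fusion of redundant sensor estimates.
def fuse(measurements):
    """measurements: list of (value, variance) pairs from independent sensors."""
    weights = [1.0 / var for _, var in measurements]
    fused_value = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)  # smaller than any single sensor's variance
    return fused_value, fused_variance

# Three temperature sensors observing the same process with different noise levels
value, variance = fuse([(21.8, 0.5), (22.4, 1.0), (21.9, 0.25)])
print(value, variance)
```

This assumes independent, unbiased sensors; real deployments often need bias calibration or a Kalman filter, which is where the customization and validation work mentioned above comes in.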

Deep learning and AI on the edge. Many IoT applications need AI, but also have a real-time component, such as facial recognition. In such scenarios, the AI application must learn in real time, as there’s no room for the latency created from round trips of data to and from a cloud. Deep learning must occur where IoT data is created in edge computing nodes.

Real-time processes. Another major consideration is the need to aggregate and correlate IoT data for real-time processes, such as fleet management. IoT data is often unstructured and must be tagged and correctly synchronized in real time for proper use because time windows fluctuate, and some applications require instantaneous best-guess corrections.
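As a sketch of what that tagging and synchronization can look like (a toy example with hypothetical fleet events, not a production stream processor), a tumbling window groups raw readings into fixed time buckets keyed by window start and sensor:

```python
from collections import defaultdict

WINDOW_SECONDS = 60

def tumbling_windows(events):
    """Group (timestamp_seconds, sensor_id, value) events into fixed windows,
    tagging each aggregate with its window start and sensor id."""
    buckets = defaultdict(list)
    for ts, sensor_id, value in events:
        window_start = ts - (ts % WINDOW_SECONDS)
        buckets[(window_start, sensor_id)].append(value)
    # Emit one averaged, tagged record per (window, sensor) pair
    return {
        key: sum(vals) / len(vals)
        for key, vals in sorted(buckets.items())
    }

# Hypothetical fleet telemetry: two sensors reporting at irregular times
events = [(5, "gps-1", 10.0), (42, "gps-1", 12.0), (65, "gps-1", 20.0), (30, "fuel-1", 80.0)]
print(tumbling_windows(events))
```

A stream processor such as Kinesis Data Analytics applies the same windowing idea continuously; the extra difficulty in real systems is late-arriving events, which fluctuating time windows must accommodate.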

The skills an IoT data scientist must have

All data scientists should be well-versed in machine learning and deep learning, but IoT data scientists also require different skills from traditional data scientists.

An understanding of signal processing. Data streaming into enterprise processes via IoT channels should be treated as a plethora of signals flowing into a battlefield command center. The timing and relative strength of the signals are crucial to making sense of the intelligence they can convey. Competence with signal processing mathematics, as well as information theory, is a major advantage in making sense of IoT data.

Knowledge of gateway layers. Between the edge and the enterprise there should always be a gateway layer where security, routing and often data aggregation take place. A strong foundation in how this layer works, the ways it can be configured and the hardware and software options available will be a plus for any data scientist who must match the handling of the data to the actual hardware.

Edge analytics. IoT data scientists should understand how edge analytics differs from cloud analytics because organizations increasingly require real-time response and low latency in IoT applications. Cloud analytics is seldom time-sensitive and doesn’t always require granular inputs, while IoT analytics does.

An understanding of blockchain. Blockchain is a necessary skill at the edge as its use continues to grow. IT experts creatively apply blockchain to heighten security beyond enterprise firewalls and audit transactions in decentralized field environments.

Personal skills. IoT data scientists must be cross-disciplinary by nature and capable of multitasking. More than that, however, they should be curious and innovative, and willing to learn new things when the task requires it. It’s also a plus if they work well with a wide range of people in other disciplines.

Read More

Posted on

AI-Driven Video Analytics for the Retail Industry

Illustration: © IoT For All

Artificial intelligence (AI) is directly correlated with data science, which aims to extract business value from an array of information. This value can consist of expanded forecasting capabilities, knowledge of regularities, informed decision-making, cost reduction, and more. In other words, artificial intelligence operates on massive arrays of information, analyzes incoming data, and develops adaptive solutions based on it.

In the modern world, the retail industry is rapidly increasing the application of artificial intelligence in all possible work processes. Thus, leveraging opportunities by applying analytics can undoubtedly improve a wide range of operations in the grocery industry. With AI, the largest supermarket chains are achieving very ambitious aims: 

  • improving and expanding customer service capabilities,
  • automating supply chain planning and orders delivery,
  • reducing product waste,
  • sharpening the management of out-of-stock and over-stock (grocery stock out), and
  • enhancing demand forecasting. 

The AI solution ecosystem is extensive and able to satisfy most needs of all grocery retailers (from large chains to the smallest businesses). As of now – during the quarantine, online grocery analytics has become a real “savior” in terms of managing stock-out conditions. With intelligent data-driven approaches, supermarkets can process a large amount of information, accurately forecast consumer demand and supply inventory, and generate the most accurate pricing and purchasing recommendations. As a result, grocery retailers will not only stay afloat, but will continue to generate profits even throughout the most critical situations, like during the coronavirus pandemic. With that being said, it is evident that all companies now require an immediate action plan in response to COVID-19. 

A New Level of Video Surveillance

As a rule, most grocery stores have a continuous video surveillance system. Previously, such systems were installed only for security purposes: controlling the safety of products and preventing theft. But now, artificial intelligence video analysis is able to monitor the behavior of customers from the moment they enter the store until payment. How does it work, and why do stores need it?

Large grocery chains like Amazon and Walmart use high-tech cameras together with radio-frequency identification (RFID) tags to identify objects automatically. Similar systems are used in autonomous vehicles to monitor passenger behavior and process visual information via a computer. But the primary goal of grocery store video analytics is to determine which items are in high demand, which products buyers most often return to the shelves, and so on. Cameras can also recognize faces and estimate the height, weight, age, and other physical characteristics of customers. Based on all the obtained data, the AI then identifies the most popular products among specific consumer groups and suggests changes to pricing policy. A computer automates all of these processes without human intervention.

Preventing Grocery Stock-out and Shrinkage

Artificial intelligence in the retail industry is capable of solving problems that people cannot cope with. Experts note that a person physically cannot watch all of the surveillance footage; there is not enough time, and human vision is imperfect. But this is no longer necessary. Video analytics for grocery stores copes with such tasks well. For example, connecting cameras to the store’s automated warehouse system and equipping shelves with sensors can uncover gaps in inventory records and trigger investigations. Grocery store data analytics can also monitor stock and signal when replenishment is needed. Facial recognition technology, as described above, is capable of comparing shoppers’ faces against lists of criminals or wanted individuals and warning security.

Advancing Traffic Flows and Store Layout

Data collected about customer behavior helps supermarket managers optimize store layout. Moreover, the computer program can design the most “optimal” layout and test it, generating an overall better customer experience and an increase in the store’s monthly profit figure. 

Data can be collected about the number of people who enter a store and the amount of time they spend shopping. Based on this data, artificial intelligence can predict crowd sizes and the length of time people wait in line. This helps improve customer service and reduce staff costs during “calm” hours. In other words, AI is able to draw up optimal store management plans for various hours of the day with maximum benefit for the business. For example:

  • develop traffic flows
  • optimize display placement and floor planning
  • improve strategic staff distribution
  • draw correlations between dwell time and purchasing
  • predict products for individual shopping groups

Enhancing Customer Experience

Every business should know as much as possible about its audience to offer the best possible service. AI in grocery stores using video intelligence software provides detailed demographic data along with an in-depth analysis of shopping habits. This information offers stores enormous opportunities to increase profits. By knowing their customers, store managers can maximize the client shopping experience, creating favorable conditions tailored to customers’ preferences. Furthermore, AI for grocery stores can help produce the most accurate demand forecasting models for the given target market.

In addition to working with the target audience, managers can pass the data obtained from video analytics to the marketing department. By exploring other audiences, marketers can develop strategies to attract new customers through relevant advertising, promotions, and sales. Additionally, stores can create separate display cases (e.g., vegan or gluten-free products) for smaller shopping groups, satisfying their needs.

Among all existing artificial intelligence technologies for grocery stores, video content analytics provides maximum support across almost all activities: merchandising, marketing, advertising, and layout strategy. By optimizing these processes, stores not only cut losses but also gain the opportunity to expand their business by increasing profits. The main goal is not only to satisfy customers but also to strengthen customer retention.

Read More

Posted on

How to Turn Customer Satisfaction Survey Results into Action

The concept of customer satisfaction has prevailed since the beginning of commerce; in other words, it is certainly not a new idea. However, the way feedback is gathered to improve customer satisfaction has changed completely. Here is how to turn your customer satisfaction survey results into action.

Many businesses conduct customer satisfaction surveys; sadly, very few of them are actually capable of turning the results into real action. As a result, time, resources, and workforce go to waste. But here is the thing: it is not that tough to turn those survey results into an action plan.

Every business, regardless of its size, market focus, and type, can benefit from customer satisfaction surveys conducted to identify customer needs and expectations.

Surveys reveal actionable insights such as:

  • Market demand for your brand’s products and services.
  • Whether your potential customers prefer basic or value-added offerings.
  • Areas where you need to improve your products and services.
  • Changes required in your current service delivery business model.

Once you have this data, it is time to make action plans, work on them, and turn them into fruitful results. To that end, let’s look at how to develop an action plan based on the results of your customer satisfaction surveys.

Analyze the Results

This is the first step in leveraging the results of customer satisfaction surveys. Analysis provides strategic as well as tactical road maps for implementation. By analyzing the results, you can enhance or modify your existing products or services to meet market needs.

Additionally, you can develop new products or services that reflect the demands of your customers. Customer experience surveys also help you identify and cultivate new target markets based on the patterns you uncover. Align your decision-making with users’ purchase behavior, preferences, and characteristics.

Comprehend the Experience of Your Customers from Their Perspective

Many companies form their own perceptions of how customers experience their products and services. In trying to improve one thing, they often sacrifice another, which again hampers customer satisfaction.

For example, some companies that receive negative feedback about high prices cut their prices drastically and end up underinvesting in training for their other teams and sales force. Ultimately, this results in poor customer support.

The catch here is to never compromise on providing a good customer experience. Think from the perspective of your customers, and then make your business plans.

Unleash the Value of Satisfied Customers

If your survey results show a good number of satisfied customers, don’t just sit back. Use those happy customers to strengthen customer satisfaction further and monetize it: turn the results of customer experience surveys into something tangible, and then use it in your marketing.

For instance, if the survey shows that 85% of your customers would recommend your brand to their friends, share that figure in your marketing campaigns. You can also ask those customers for testimonials and use them in your marketing campaigns, company brochures, and, most importantly, on your website.

Identify the Reasons for Issues

When you get negative feedback about something, the first step is to fix it as early as possible. Once it is fixed, dig into the root causes behind the issue so that the same thing does not happen again.

Once you identify the actual issues, implement plans to overcome them with new and innovative methods. When you address problems at the root level, you can expect real improvements in your customers’ overall satisfaction.

Convert Unsatisfied Customers into Potential Resources

The way you handle negative feedback says a lot about your business. Apologizing is good, but you have to go the extra mile to win those customers back. It is indeed possible to convert unsatisfied customers into potential resources. It might seem tough, but it is a great opportunity to re-establish your credibility with those customers.

Because unhappy and angry customers tend to be more honest in their feedback, their input can become a marketing asset. Listen to them patiently, and send an email immediately with more information about the inconvenience caused. You can also add variations such as a coupon code, an apology letter from your CEO, or a follow-up survey asking for more details about the issue.

Final Words

A customer satisfaction survey is vital for every business these days, given the intense competition in the market. You cannot afford loopholes through which you lose customers. Instead, build a solid implementation strategy that leverages your survey results to win back unhappy customers and grow the number of happy ones.

Vinod Janapala

Product Marketing Manager

Vinod is Senior Product Marketing Manager at piHappiness – Customer Feedback App & Survey Software. piHappiness is a top customer feedback software designed to collect customer feedback on Web, iPad & Android tablets. Vinod is keen on such topics as marketing, SaaS challenges, and Personal Growth.

Read More

Posted on

How Exercise Can Help You Sleep Better

Most of us know exercise can improve muscle strength, heart health, and energy levels, but regular physical activity can also reduce insomnia and increase deep sleep. Research shows that just 30 minutes of moderate aerobic exercise, such as walking, swimming, or jogging, can help you fall asleep faster and experience more deep sleep.

What’s more, better sleep can follow just one day of exercise and will continue to improve with long-term training. Below, we outline precisely how physical activity impacts the body and why it is conducive to a good night’s sleep.

How Can Regular Exercise Affect Your Sleep?

Including exercise in your daily routine can improve your mental and physical health, and it also offers the following sleep-related benefits.

Fall Asleep Faster

One link between sleep and exercise involves body temperature. Your body temperature fluctuates slightly throughout the day—it tends to be higher in the afternoon when you’re alert and lower in the evening when you prepare for sleep.

When you exercise, your core body temperature and alertness increase. As your temperature gradually lowers throughout the rest of the day, you naturally become tired and fall asleep quickly once it is time for bed.

Whether you work out first thing in the morning or in the afternoon, you will still experience this benefit. However, exercising right before bed can increase alertness and inhibit sleep, so it is best to work out at least 2 hours before bedtime.

Increase Deep Sleep

Exercise increases slow brain waves, total sleep time, and REM sleep, which leads to deeper, more restorative rest. When you spend more time in deep sleep (stage 3 and REM), you experience more healing: slow delta waves clean the brain, important information is stored in long-term memory, and human growth hormone (HGH) works to repair and rebuild muscles. Shortened sleep periods deprive your body of these essential benefits.

The increased adenosine production during physical activity seems to be one of the reasons exercise can improve deep sleep. Adenosine is a vital component of a natural sleep-wake cycle. When adenosine builds up in the body, it gradually causes slower cell activity that leads to drowsiness—helping us drop into deep sleep faster.

Improve Sleep Duration

Exercise requires energy: when you work out, you expend more energy and naturally require more sleep to feel rejuvenated. Evidence shows that after 16 weeks of moderate aerobic activity, sleep duration increases by up to 2 hours.

To experience longer, better quality sleep, a regular exercise routine is vital; however, your workout does not need to be intense to experience these benefits.

Studies show that participants enjoyed longer sleep regardless of the type or intensity of activity. Consistency did make a difference, however, and most sleepers see gradual improvement between 4 and 16 weeks of regular exercise.

Alleviate Stress and Anxiety

Stress often triggers insomnia and frequent sleep disruption. When you try to get to sleep while plagued with excessive anxiety, you will likely toss and turn while your mind races with worry.

The flood of cortisol (the stress hormone) in the body keeps the heart racing and the brain active, preventing sleep and relaxation. Unfortunately, this is very common and often creates a vicious cycle: stress can inhibit sleep, but sleep deprivation can aggravate anxiety and make it difficult to handle everyday issues.

Exercise can help break this cycle through the release of endorphins. Endorphins stimulate opioid receptors that minimize pain and increase feelings of well-being.

As you work out, endorphins are released, gradually lowering cortisol and adrenaline levels and regulating mood. Stretching exercises can also help relax the nervous system and lower blood pressure, both of which improve emotional stability and stress management.

Maintain Circadian Rhythm

We each have a circadian rhythm that is linked to the rise and set of the sun. Sunlight exposure inhibits melatonin (the sleep hormone) production during the day and keeps you alert and focused. When the sun sets, and light decreases, melatonin production increases and you become tired. This cycle sets your internal clock and regulates the time of day you often feel sleepy versus awake. When this cycle is not balanced, it can affect our sleep, metabolism, and immune function.

When you exercise in the morning or afternoon, you are helping your body establish this internal clock. Exercise raises your body’s core temperature and promotes wakefulness, creating the shift in your schedule and keeping the sleep-wake cycle intact.

Relieve Chronic Sleep Disorders

Exercise may also help improve symptoms of sleep apnea and Restless Leg Syndrome. Weight loss can alleviate snoring and obstructive breathing, but evidence suggests that moderate exercise can reduce symptoms of sleep apnea even before weight loss begins.

Experts think this may be due to increased oxygen consumption during exercise and improved heart and lung function, both of which can make breathing more comfortable during sleep.

Moderate exercise can also decrease pain associated with Restless Leg Syndrome. Physical activity increases blood flow to the legs. Plus, the dopamine production that comes with working out can reduce pain and discomfort.

How Much Exercise Do You Need For Better Sleep?

The amount of exercise that is right for you will depend on your age, current activity level, and overall health, but most healthy adults should aim for at least 30 minutes of moderate activity five days a week.

According to the National Institutes of Health and the American Heart Association, children should get at least 60 minutes of physical activity every day. Adults should get at least 150 minutes of moderate exercise or 75 minutes of intense activity every week.

Adults over 65 should also aim for 150 minutes per week but should be mindful of physical limitations. Walking is a low-risk activity for older adults and offers the same benefits as most moderate aerobic workouts.

When Should I Exercise?

Exercise affects everyone differently, so it is important to listen to your body. Some may find that exercising too close to bedtime increases heart rate and causes the mind to become more active—making it difficult to sleep. But others may find that working out in the evening promotes more relaxation. It depends on what works best for you.

In general, working out in the morning or afternoon tends to offer the most benefits in terms of sleep promotion. Since core body temperature rises during exercise, you are more likely to feel energized after working out. Therefore, the body needs time (at least 2 hours) to cool off before trying to sleep.

Other Ways to Improve Sleep

In addition to regular exercise, maintaining proper sleep hygiene is essential to getting better rest. Below, we offer 5 tips that will set you up for perfect sleep.

Sleep On a Supportive Mattress

It can be difficult to find restful sleep on an old, broken-down mattress. When you rest on a bed with indentations, your body overcompensates for the lack of support by shifting into awkward positions. These uncomfortable positions can throw the spine out of alignment and increase tension, causing you to wake with sore, achy muscles and stiff joints.

Memory foam mattresses contour to the curves of the body to provide pressure-free support and pain relief. However, memory foam varies significantly in terms of quality and breathability. To find the best memory foam mattress on the market, look for one that won’t trap heat and also has a responsiveness that will help you feel more lifted on the mattress, rather than sunk.

To find the best mattress for your preferred sleep position, you will want to consider firmness. For those that prefer side sleeping, a soft to medium firmness level will cushion sensitive hips and shoulders while keeping the spine neutral.

Those who prefer back and stomach sleeping may want to opt for something on the firmer side—medium to firm. This firmness will prevent the hips and shoulders from sinking too far down and keep weight evenly distributed.

Create a Relaxing Sleep Space

In addition to an uncomfortable mattress, trying to fall asleep in a hot, stuffy, or cluttered bedroom can prove impossible. Your sleep space should promote relaxation and help alleviate stress and tension from your day. Consider some of our tips below for creating the ideal bedroom:

  • Keep it cool: Set your thermostat between 67 and 70 degrees. This temperature will prevent any sleep disruptions due to nighttime sweats or overheating. Also, be sure to use cotton or linen sheets to increase breathability and comfort.
  • Keep it organized: Clutter, such as paperwork or laundry, can cause unneeded stress right before bed. Do your best to keep your bedroom organized and remove any stress triggers.
  • Keep it quiet: If you sleep with a partner who snores, you may want to consider an adjustable base. These advanced bed frames allow you to slightly elevate the head with the click of a button. This slight lift opens airways and reduces snoring so you can both sleep comfortably. If you don’t have an adjustable base, you can also try a wedge pillow.
  • Keep it dark: If unwanted light comes into your bedroom, either early in the morning or before bed, consider using blackout curtains or an eye mask. The darkness will help increase melatonin production and encourage sleep.

Establish a Bedtime Routine

Creating a set bedtime and nightly routine can help you mentally and physically prepare for sleep. By maintaining a consistent bedtime, you will gradually reinforce your internal clock—making it easier to fall asleep each night. When you perform the same nighttime routine before bed, you are also signaling the brain that it is time to relax.

To set yourself up for a comfortable sleep, you may want to change into breathable, cozy pajamas, wash your face with warm water, brush your teeth, and perform a relaxing activity before trying to get some shut-eye.

Reduce Blue Light Exposure

As we mentioned above, our circadian rhythm is influenced by sunlight. Darkness triggers melatonin production, causing us to become tired. When we expose ourselves to the blue light from electronic screens before bed, the brain can be tricked into thinking it’s still daytime, inhibiting melatonin production and keeping us from sleep.

To prevent wakefulness and disruption to our internal clock, experts suggest avoiding our electronic screens at least 2 hours before bed.

Alleviate Stress Before Bed

Stress and sleep are inextricably linked. When sleep-deprived, we have less control over our breathing and blood pressure—forcing us to react to everyday stressors in unhealthy ways. But stress can also cause wakefulness and racing thoughts.

To prevent this, you can perform a relaxing, stress-reducing activity before bed. Consider taking a warm bath or shower, reading, journaling, or doing a breathing exercise. These activities can help you release tension and anxiety so you can fall asleep peacefully.

Americans are missing out on valuable hours of sleep; about a third of us get less than the recommended 7 hours per night. Whether that’s due to stress, insomnia, or intentionally putting off rest, sleep deprivation can be dangerous.

Sleep deprivation reduces our focus, decision-making skills, and even simple hand-eye coordination, and it can lead to serious health complications over time. Therefore, we must do what we can to find better sleep.

Regular exercise can help you maintain good health, manage stress, and find the rest you need to be your best.

Brad Anderson

Editor In Chief at ReadWrite

Brad is the editor overseeing contributed content at ReadWrite.com. He previously worked as an editor at PayPal and Crunchbase. You can reach him at brad at readwrite.com.

Read More

Posted on

How to Enhance your Omnichannel Strategy with SMS Messaging

Businesses are always trying to figure out the best ways to scale, especially with the myriad of technological tools we now enjoy in the 21st century. Here is how to enhance your omnichannel strategy with SMS messaging.

One such way to scale has been rethinking the way we marry digital and traditional forms of marketing. From a simple multichannel strategy that often would echo the same message on separate and unconnected channels, we’ve shifted to a more customer-centric, experience-boosting approach: omnichannel marketing.

As a refresher, allow me to quote Shopify’s definition of omnichannel marketing:

“Omnichannel [marketing] revolves around your customer and creates a single customer experience across your brand by unifying sales and marketing that accounts for the spillover between channels.”

So instead of thinking of marketing channels as separate entities repeating the same message, we’re forced to look at all channels and see how they bring about a unified customer experience. But why is it essential to implement an omnichannel strategy in 2020 and beyond?

Brand benefits for omnichannel marketing.

Let’s quickly discuss a few reasons why your brand can benefit from having an omnichannel marketing approach. According to surveys and studies, here’s what the data says in support of omnichannel:

  • Customers exposed to 3 or more of a brand’s marketing channels purchase 250% more often than customers exposed to a single channel.
  • The engagement rate for brands using multiple channels can reach 18.96%, compared to a 5.4% engagement rate on a single channel.
  • Customer retention rates are 90% higher for omnichannel versus single-channel.
  • Average order value rose by 13% for customers exposed to multiple channels compared to those who were not.

Purchase and engagement rates go up with an omnichannel approach. (Image source)

Which channels should businesses use for their omnichannel strategy?

Knowing what we now know about the benefits of omnichannel, you might be wondering which channels to actually use. The answer is simple: whichever make sense for your brand.

Most brands employ a mix of email marketing, social media, landing pages, ecommerce stores, and some offline channels like in-store experiences or activations. A combination of three or more of these is tried and true, but there’s another channel we recommend that is quite underutilized and can deliver powerful results: SMS messaging.

Why should businesses capitalize on SMS Messaging?

90% of customers prefer texting over other forms of communication, like phone calls and email. And for a good reason: texting is convenient, faster, and often more personal. Open rates for a text message go as high as 98%, compared to the average email open rate of about 20%.

The average person takes about 90 seconds to respond to a text but up to 90 minutes or more to respond to an email. Furthermore, 75% of customers themselves have reported that they’d be happy to receive personalized offers through text, and 65% of marketers claim that SMS marketing has been “very effective” for their businesses.

Conversions are also better with SMS marketing. Studies have found that consumers redeem coupons received through text ten times more often than any other kind of coupon. Text messages also earn a response rate roughly 209% higher than phone calls or email: about 30% of customers respond to messages, and 50% of responders go on to make a purchase.

And while the data tells us SMS messaging should be a crucial part of our omnichannel strategy, not many businesses are using it. Some barriers to starting include not knowing how to use this ubiquitous channel properly. Marketers may also lack knowledge on how to get started. If you aren’t sure how to make the most of SMS as part of your omnichannel strategy, read on.

In these next sections, we discuss different ways SMS messaging can enhance your omnichannel strategy and deliver better customer experiences. And later on, we show you some of the best practices to keep in mind when you decide to send texts to your customers.

5 ways to enhance an omnichannel strategy with SMS Messaging

Just like every other channel in your omnichannel strategy, SMS can play a significant role in delivering a seamless, personalized customer experience. Here are five ways to use text messaging that give your marketing campaigns a more powerful punch.

Use a CMS that supports SMS messaging integrations.

First and foremost, you’ll need to integrate your website builder and content management system (CMS) with an SMS marketing plugin or software. The right SMS marketing solution for you makes it easy to manage SMS campaigns, keep track of customers, and automatically personalize messages with customer names and data.

The best reason for integrating your CMS with these tools is easy monitoring of your SMS campaign results. You can track landing page views, conversion rates, and more to gather insights or tweak campaigns as needed. Most SMS marketing plugins and software also come equipped with autoresponders: when a customer replies with a specific keyword, they instantly receive a response with the details they requested. Sending texts often comes at no extra charge, but check with your chosen SMS service provider for any limits on your plan.

CMS platforms like WordPress can integrate with several SMS marketing tools, allowing you to manage campaigns from one dashboard.
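To make the autoresponder idea concrete, here is a minimal sketch of a keyword-to-reply mapping. The keywords, replies, URL, and function name are all hypothetical and not tied to any particular SMS provider's API; a real integration would wire this logic into your provider's inbound-message webhook.

```python
# Minimal autoresponder sketch: an inbound text containing a registered
# keyword triggers a canned reply. Keywords and replies are illustrative.

KEYWORD_REPLIES = {
    "DEALS": "Here are this week's offers: https://example.com/deals",
    "STOP": "You have been unsubscribed. Reply START to rejoin.",
    "HELP": "Reply DEALS for offers or STOP to unsubscribe.",
}

def auto_reply(inbound_text):
    """Return the canned reply for the first recognized keyword, or None."""
    for word in inbound_text.upper().split():
        if word in KEYWORD_REPLIES:
            return KEYWORD_REPLIES[word]
    return None
```

Matching case-insensitively and word-by-word keeps the responder forgiving of how customers actually type ("please send deals" still triggers the DEALS reply).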

Click and Collect

One effective use case for retailers using SMS marketing is Click and Collect. Click and Collect lets customers purchase and check out items from your retail store online, then pick them up in person. Many customers use this shopping option to save on shipping and handling fees; others opt for it when they aren’t sure they can be home when deliveries arrive.

To enhance the Click and Collect customer experience, here are some ways you can use SMS messaging in this use case:

  • Send a text to customers to confirm an order has been placed.
  • Notify customers through text that their order is ready for pickup.
  • Remind customers that their order is waiting if it hasn’t been picked up after some time.
  • Allow customers to extend their collection period by texting a keyword.

Helpful messages arriving on your customers’ phones save everyone time.

An order-ready message sent to a customer’s cell phone might read:

Your order is here — Order Number 30745123. Please bring this message with you to pick up along with your payment card and photo ID. We will keep your order for 7 days. Thank you for your order from The Coolest Place on Earth.
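A message like the one above can be assembled automatically from order data. The sketch below shows one way to do that; the function name, parameters, and 7-day hold window are assumptions for illustration, not part of any specific platform.

```python
# Sketch: build a Click and Collect pickup text from order data.
# The wording mirrors the sample message above; hold_days is an assumption.

def pickup_notification(order_number, store_name, hold_days=7):
    return (
        f"Your order is here - Order Number {order_number}. "
        f"Please bring this message with you to pick up, along with "
        f"your payment card and photo ID. "
        f"We will keep your order for {hold_days} days. "
        f"Thank you for your order from {store_name}."
    )
```

Templating the message this way keeps the copy consistent while letting the order number, store name, and hold period vary per customer.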

Transactional reminders

After a customer makes or attempts a transaction with your online store, you can use SMS messaging to enhance the experience or increase conversions.

Here are examples of SMS transactional reminders you can send customers:

  • Abandoned cart retargeting. When a user abandons their cart online, remind them that they’ve left items before checkout. A message can convert users who abandoned their carts because of a distraction, and you can also incentivize completing checkout with exclusive, time-limited discounts.
  • The item on your wishlist is on sale. If something a user saved to their wishlist suddenly goes on sale, an instant text notification can encourage them to check out and make a purchase.
  • Delivery status. Send text updates about order confirmation, estimated delivery dates, and when items are on their way or arriving the same day.
  • Payment reminders. If customers have yet to make a payment on their orders or invoices, send a helpful reminder that they’ll see right away.

Text offers and discounts can encourage shoppers to complete their purchases and even increase the average order value.

Use text messages for your customer service.

90% of customers prefer to text over a call or email, making SMS messaging a top channel for customer service. By allowing customers to contact you via SMS with any concerns, queries, or requests, you can provide nearly instant replies.

Use robust SMS marketing software that can integrate with your other tools, so customers receive your information in real time.

SMS marketing can also improve customer service in other ways: sending personalized birthday greetings, offering exclusive deals for loyal users, or running an onboarding sequence for new customers.

Send important announcements (e.g., delays, outages).

If anything such as a delivery delay or service outage occurs, you can notify customers instantly by sending an SMS campaign.

Because people are more likely to check their phones and read text messages than open an email, you ensure that your customers stay informed. A text message also helps lessen anxiety and complaints.

Inform customers about any delays related to your product, such as flight delays for travelers.

Hiccups and bumps on the road will always happen, but as a business, it’s your job to keep customers in the loop – especially if they’re affected by any service interruption or delay. By keeping your customer updated with instant text messages (compared to emails or website banners that customers may never open), it’s easier to provide a seamless customer experience.

Even if your business encounters a roadblock, customers are far more likely to be forgiving if they’re notified right away.

Best practices when using SMS Messaging

After seeing a few ways you can use SMS marketing, keep in mind these best practices as you create your campaigns:

Make sure your website is mobile-friendly

With SMS marketing, you can send links and multimedia messages (MMS) to provide a more dynamic messaging experience.

So if your campaigns link to your website or landing page, it’s incredibly important to make it easy for customers to view pages, offers, and sign-up forms on mobile.

You can use MMS and in-text links for special promotions and CTAs.

Mobile-friendliness also extends to simple themes and visuals that adapt to the screen. It should be easy for customers to locate important buttons, forms, and other calls to action on your landing page.

Send the message at the time most convenient to your leads

Timing is crucial for SMS marketing. Imagine sending customers a discount code at 4 in the morning; you can probably expect unsubscribes or negative brand experiences.

90% of people will read a text message within the first 3 minutes of receiving it. So if you have any offers and promotions, you might want to send these campaigns during times when customers are on a break or are at home.

Transactional messages like order confirmations can be sent instantly (even if a customer completes a purchase at, say, midnight). Still, reminder or update texts are best sent during convenient times, such as regular waking hours.
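The send-timing rule above can be sketched in code. This is a minimal illustration, not a production scheduler; the 9 a.m. to 8 p.m. window is an assumed example of "regular waking hours," and `can_send_now` is a hypothetical helper name.

```python
from datetime import time

# Assumed quiet-hours window for promotional texts (illustrative only).
QUIET_START = time(20, 0)  # hold promotional texts after 8 p.m. local time
QUIET_END = time(9, 0)     # ... until 9 a.m. local time

def can_send_now(local_time, transactional=False):
    """Transactional texts (e.g., order confirmations) go out immediately,
    even at midnight; promotional texts are held during quiet hours."""
    if transactional:
        return True
    return QUIET_END <= local_time < QUIET_START
```

In practice the check would use each subscriber's local time zone, since a convenient hour in one region is the middle of the night in another.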

Add value while creating awareness

It can be easy to fall into the trap of using SMS marketing purely for promotions. But if all you send are generic mass promotional messages, customers may sour on your brand.

Even though space for SMS messaging is limited, you can still add value for customers in your text campaigns. This applies to both promotional and non-promotional messages.

For example, using the transactional history of your users, you can send highly personalized and targeted offers that show you understand their preferences and needs.

Companies can also use text messaging to enhance purchase experiences, such as providing high-value follow-ups post-purchase.

Follow-ups can be highly valuable in your omnichannel SMS strategy.

Ensure your message is clear and consistent.

Be sure your brand voice is consistent in your messages. And because you only have a limited number of characters per SMS, keep your message short and sweet.

Consider using MMS and dynamic elements to supplement your text. Don’t send long blocks of texts that are difficult to skim.

The key to effective SMS marketing is sending short messages.
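One concrete reason to keep texts short: carriers split long texts into billed segments. As a rough sketch, a plain GSM-7 message fits 160 characters in one SMS, and longer texts are split into concatenated segments of 153 characters each (Unicode texts, such as those with emoji, drop to 70/67 characters, which this sketch ignores). The helper name below is illustrative.

```python
import math

def sms_segments(text):
    """Estimate billed segments for a plain GSM-7 text message.

    One SMS carries 160 characters; concatenated messages carry
    153 characters per segment (header overhead uses the rest).
    """
    n = len(text)
    if n <= 160:
        return 1
    return math.ceil(n / 153)
```

Running campaign copy through a check like this before sending helps catch messages that will silently double in cost and arrive as awkward multi-part texts.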

Cover all privacy issues and concerns.

As data privacy concerns are on the rise – where consumers may feel that marketing messages can be borderline intrusive – be very transparent about the way you handle their personal data, such as their phone numbers.

Display disclosures and trust signals that assure consumers that the messages they send to and receive from your brand are encrypted and secure. (Read privacycanada.net and the EU's GDPR if you really want to protect your customers' data.)

For example, when a new user makes a purchase on your site, you may display a disclaimer that their phone number will be used to get in touch regarding delivery updates. Build trust from the get-go, so customers feel at ease receiving your SMS campaigns.

Use in moderation — don’t overdo messaging.

While you can enjoy higher open and engagement rates using SMS marketing, don't overdo it. Customers will likely be annoyed if they receive, say, daily messages from your company, or worse, multiple messages throughout the day.

Keep text campaigns regular, about once per week or less. Reserve your SMS campaigns for outstanding offers and promos, and keep follow-ups and reminders few and far between to avoid looking spammy. Often, it's enough to send one campaign announcing your promotion, then one or two reminders that are considerably spaced out, such as one during the middle of your promotion and another toward the end.
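The once-per-week guideline above amounts to a simple frequency cap per subscriber. A minimal sketch, assuming a hypothetical `should_send` gate checked before each promotional campaign:

```python
from datetime import datetime, timedelta

# Illustrative cap matching the once-per-week guideline in the text.
MIN_GAP = timedelta(days=7)

def should_send(last_sent, now):
    """Allow a new promotional text only if at least MIN_GAP has passed
    since the subscriber's last promotional message (None = never sent)."""
    if last_sent is None:
        return True
    return now - last_sent >= MIN_GAP
```

Transactional messages such as order confirmations would bypass a cap like this, since customers expect them regardless of recent promotions.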

Focus on creating conversations

Maximize the power of SMS marketing by having a conversational approach. Because SMS can feel very personal to customers, it’s important for messages to be as highly targeted as possible.

Use customers’ first names whenever possible, and don’t be afraid to end messages with questions or CTAs. Craft every message as though you were speaking to each customer individually, and they will appreciate your SMS efforts.

Track and monitor your results

Last but not least, every channel in your omnichannel strategy ought to be monitored – especially SMS. Review your open rates, conversions, as well as other KPIs you set at the beginning of your campaign.

Tweak your strategy as you go, primarily based on real data and insights from your tactics. After all, the more you experiment with SMS marketing, the better you get.

Takeaway

To provide a complete omnichannel marketing strategy, consider adding SMS to your media mix. With unparalleled engagement rates, this simple channel can help streamline your customer experiences and boost loyalty along the way. Use the tips in this article to help you create your own SMS campaigns, and start seeing your omnichannel strategy create valuable online and offline experiences.

Image Credit: Maxim Ilyahov, Unsplash

Kevin Payne

Kevin Payne is a content marketing consultant who helps software companies build marketing funnels and implement content marketing campaigns to increase their inbound leads.



Break down cellular IoT connectivity options

With the multitude of unlicensed and cellular IoT connectivity options, tech experts starting their IoT initiatives might get confused with the array of acronyms and network types. IT experts must understand the available connectivity options, which work best for different uses and why they might not always use cellular IoT standards.

Tech experts might have to consider using different networks for their IoT devices because of the device capabilities available. Combining different connectivity methods can also lead to interoperability issues that make managing IoT devices and data complex. IT experts must also consider how distributed IoT technology varies in its inherent security and data management. The continued development of connectivity options works to address these issues, and 5G developments could make some headway to bring IoT devices under one network.

The common connectivity options for IoT fall on a spectrum of different throughputs and use cases. Standards organizations also continue to develop and improve common cellular IoT connectivity options to try to simplify the current complex state of IoT connectivity.

“If you bought anything new today, your choices would be, ‘What flavor of 4G am I going to choose?’ Or, ‘I may choose to go with an unlicensed technology, like LoRaWAN,'” said Brian Partridge, vice president of infrastructure, DevOps and IoT at 451 Research.

On the lower end of throughput and battery power, organizations use LPWAN options including narrowband IoT (NB-IoT) and LoRaWAN. On the higher throughput end of the spectrum, organizations use 4G and 5G. Selecting the wrong connectivity technology can affect the costs, coverage, flexibility and longevity of an IoT network.

“5G today is about enhanced mobile broadband, which are really fat pipes. And then NB-IoT sits on the other end of that spectrum, which is a really, really tiny pipe, cheaper and better battery life,” Partridge said.

In order to decide which cellular connectivity option is best for an IoT project, IT pros need to weigh device requirements and project goals.

What are the LPWAN connectivity options?

Organizations that need their devices to work in the field with a long range, extended battery life and lower costs would use LPWAN connectivity options, said Samuel Ropert, head of smart verticals and IoT practice at iDate DigiWorld, in the webinar titled “Technology Innovation for Optimized Global IoT Connectivity from LPWAN to 5G technologies.”

By 2025, the LPWAN IoT market will have more than 1.7 billion objects, a 35% compound annual growth rate (CAGR) from 2020 to 2025, according to iBasis, a communication services provider based in Lexington, Mass. LPWAN connectivity breaks down into two categories: unlicensed and licensed frequencies.

Unlicensed technologies can serve some organizations better than cellular options because they are cheaper, but they often perform slower than licensed standard technology. Approximately 25 different LPWAN technologies use unlicensed frequencies, with Sigfox and LoRaWAN the most widely used, Ropert said in the webinar. Each technology has different advantages. For example, Sigfox is a proprietary public network designed for IoT with roaming activated, while LoRaWAN serves as a private network.

Organizations only have two LPWAN cellular IoT connectivity options: NB-IoT and LTE-M. These two low-power options are the newest members of the 4G family and will carry over into 5G in 3GPP’s release 17. The cellular connectivity options cost more than unlicensed versions because organizations must have a data plan with a public network provider.

“What we see is more demand for LTE-M in our customers, but, over time, we believe it will start transitioning to NB-IoT because there are more networks that are deploying NB-IoT,” said iBasis CTO Ajay Joseph. “The stuff we see in the field is LTE-M is a lot easier to work with.”

NB-IoT and LTE-M are standardized and offer credibility to industrial IoT applications but currently do not include all of the features organizations might want, such as roaming. 3GPP is working on roaming for NB-IoT now, and LTE-M roaming already works, Joseph said. Both NB-IoT and LTE-M use low power, but LTE-M has a higher throughput than NB-IoT and does not preserve battery life to the same degree, he said. If organizations need longer battery life, then NB-IoT is the better choice. For example, NB-IoT could be used for low-powered devices that monitor the level of garbage in trash cans.

“In terms of distribution, NB-IoT is expected to be the main [LPWAN] technology and by far,” Ropert said.

As of January 2020, mobile network operators had deployed three times as many NB-IoT network devices as LTE-M devices, according to GSMA. Both NB-IoT and LTE-M will see 37% expected growth CAGR from 2020 to 2025, according to iBasis.

5G promises greater support on one network

Cellular 4G provides organizations with connectivity for IoT deployments that need greater data throughput. 5G also offers this advantage, but at a higher magnitude and scale and gives critical communication lower latency, Joseph said in the webinar. Providers are still rolling out 5G networks and it is only available in select places.

“The big advantage of cellular is that it is licensed. It’s running over the licensed spectrum, which means no interference, a higher-quality connection, less dropped connections,” Partridge said.


The main differences between 4G and 5G for IoT are the data rate and latency. 4G provides 100 Mbps and has a typical latency of 10 milliseconds (ms). 5G provides 20 Gbps and sees about 1 ms of latency. These key differences make each option better for different uses. Organizations using 4G for IoT deployments would switch to 5G for lower latency, which is vital for applications that require real-time communication and analysis, such as vehicle-to-vehicle communication, Joseph said.
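The trade-offs described so far can be reduced to a toy decision rule: latency-critical applications need 5G, fat-pipe applications are served by 4G, and low-power field devices choose between NB-IoT (longest battery life) and LTE-M (higher throughput). The function below is an illustrative simplification of the article's guidance, not an official selection procedure, and `pick_network` is a hypothetical name.

```python
def pick_network(needs_long_battery, needs_low_latency, needs_high_throughput):
    """Suggest a cellular IoT connectivity option from coarse requirements,
    encoding the trade-offs discussed in the article."""
    if needs_low_latency:       # ~1 ms latency, e.g. vehicle-to-vehicle
        return "5G"
    if needs_high_throughput:   # fat pipes; ~100 Mbps at ~10 ms latency
        return "4G"
    if needs_long_battery:      # tiny pipe, cheapest, best battery life
        return "NB-IoT"
    return "LTE-M"              # low power, higher throughput than NB-IoT
```

A real evaluation would also weigh coverage, roaming support, module cost, and network longevity, which the article notes can all be affected by choosing the wrong technology.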

Today, a factory making cars or industrial machines would have robots, sensors and connected machines using different networks, including a wired network using industrial protocols, some Wi-Fi and a lot of Ethernet, said Partridge.

“In other words, it’s a big fragmented mess of a network,” Partridge said. “The promise of 5G is to be able to replace that mess with a single deployment of 5G to support all the connection requirements in the business.”

The wireless aspect of cellular networks offers cost savings and flexibility. Organizations can change the factory floor layout without worrying about physical network cables and use the connection for real-time analysis of data from devices, such as high-definition cameras or sensors on pallets in the supply chain.

New technology with 5G will further shape future of connectivity

Continued 5G releases improve on cellular IoT and push connectivity options to serve more IoT use cases under one network. By 2023, organizations will have 5G network options for high bandwidth, NB-IoT, and new radio (NR) Light, Partridge said. As part of release 17 of 5G, 3GPP will introduce NR Light, a new radio technology that will support use cases NB-IoT does not cover, on the same network. NR Light will offer higher data throughput and lower latency, which organizations can use for IoT devices such as security cameras or wearables.

“The most important takeaway is that the standards are bringing forward the LTE, LTE IoT-friendly variants,” Partridge said. “An investment in a narrowband IoT solution today is not going to go out the window in two years because of 5G.”

The question remains, who is going to deliver 5G? In the U.S., the Federal Communications Commission is rolling out Citizens Broadband Radio Services (CBRS) to give enterprises access to licensed spectrum repatriated from radar systems used by the Navy, Partridge said. CBRS is going to support 5G. Organizations can get CBRS cheap and deploy their own 5G private network, or an operator can use CBRS to run a private network on behalf of a customer.

“This CBRS spectrum is that Goldilocks spectrum, which is mid-band 3.5 gigahertz,” Partridge said. “What that means is it’s the spectrum that has the best combination of performance and propagation, such that if you deploy it, you’re going to get really strong performance. Things like a leaf won’t take the network down. It’s going to penetrate walls and get to the devices.”
