Amazon Web Services Feed
Field Notes: Powering the Connected Vehicle with Amazon Alexa
Alexa has improved the in-home experience and has the potential to greatly enhance the in-car experience. This blog is a continuation of my previous blog: Field Notes: Implementing a Digital Shadow of a Connected Vehicle with AWS IoT. Multiple OEMs (Original Equipment Manufacturers) showcased this capability during CES 2020. Use cases include: a person sitting in the rear seat can play a song, control the HVAC (heating, ventilation, and air conditioning), or pay for gas or coffee, all using Alexa. In this blog, I cover how you create a connected vehicle using Alexa to initiate a command, such as 'Alexa, open my trunk'.
Solution Architecture
“Alexa, open my trunk”
The preceding architecture shows the message flow in the following example:
- A user of a connected vehicle wants to open their trunk using an Alexa voice command. Alexa identifies the right intent based on utterances and invokes a Lambda function. The Lambda function updates the device shadow with (desired: { "trunk": "open" }).
- The vehicle TCU has registered the callback function shadowRegisterDeltaCallback(), which subscribes to the device shadow's delta topics. Whenever there is a difference between the desired and reported state, the registered callback is invoked with the delta payload, so the update performed in step 1 is received in the delta callback.
- Now the vehicle must act on the desired state; in this case, it acts on the trunk status change. After performing the required action, the vehicle TCU updates the device shadow with the reported state (reported: { "trunk": "open" }).
- The web/mobile app is subscribed to the topic $aws/things/tcu/shadow/update/accepted. Therefore, as soon as the vehicle TCU updates the shadow, the web/mobile app receives the update and synchronizes the UI state.
As part of the previous blog, we implemented steps 2, 3, and 4. Let's implement step 1 and incorporate it into the solution.
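Before wiring up Alexa, it helps to see the shadow delta mechanism from steps 1 and 2 in isolation. The following is a minimal, self-contained sketch (plain Python, no AWS calls) that approximates how AWS IoT derives the delta document the TCU's callback receives when desired and reported states diverge:

```python
import json

def compute_delta(desired: dict, reported: dict) -> dict:
    """Return the fields where desired differs from reported,
    mirroring how AWS IoT builds a shadow delta document."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# Step 1: the Alexa-invoked Lambda function sets the desired state.
desired = {"trunk": "open"}
# The vehicle last reported the trunk closed.
reported = {"trunk": "close"}

# Step 2: the TCU's delta callback would receive only the difference.
print(json.dumps(compute_delta(desired, reported)))
```

Once the TCU reports {"trunk": "open"}, the delta becomes empty and no further callback fires, which is what keeps the device and the cloud in sync.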
The source code (vehicle-command) of this blog is available in this code repository.
The Alexa voice command requires implementation in three key areas:
- Configure Alexa – which listens to utterances, identifies the right intent, and invokes a Lambda function.
- Set up the Lambda function – which interprets the command and invokes the AWS IoT Core device shadow API.
- Handle the command at the vehicle TCU and app – the vehicle TCU must register shadowRegisterDeltaCallback() so that any update to the device shadow triggers a callback message; the vehicle then performs the actual command and synchronizes the state with the web/mobile app.
Let's 'open the trunk' using an Alexa voice command. First, set up the environment:
- Open AWS Cloud9 IDE created in an earlier lab and run the following command:
Set up permanent credentials. Note: Alexa doesn't work with temporary credentials; configure the ASK command line interface (CLI) with permanent credentials.
- Open the Cloud9 preferences by choosing AWS Cloud9 > Preferences, or by clicking the "gear" icon in the upper-right corner of the Cloud9 window
- Select “AWS Settings”
- Disable “AWS managed temporary credentials”
$ aws configure
- Enter the access key and secret access key of a user that has the required permissions
- Use us-east-1 as the region. The configuration is stored in ~/.aws/config
Verify that everything worked by examining the file ~/.aws/credentials. It should resemble the following:
[default]
aws_access_key_id = <access_key>
aws_secret_access_key = <secret_key>
aws_session_token=
Remove the aws_session_token line from the credentials file.
Next, install the ASK CLI:
$ npm install ask-cli --global
Initialize the ASK CLI by issuing the following command. This initializes the ASK CLI with a profile associated with your Amazon developer credentials.
$ ask configure --no-browser
Confirm that you are linking your AWS account with Alexa:
Do you want to link your AWS account in order to host your Alexa skills? Yes
# At the end, the output should look as follows:
------------------------- Initialization Complete -------------------------
Here is the summary for the profile setup:
ASK Profile: default
AWS Profile: default
Vendor ID: MXXXXXXXXXX
As part of the previous blog, you have already cloned the following Git repository in the AWS Cloud9 IDE. It has baseline code to jump-start your implementation.
$ git clone
Configure Alexa Skills
You can use the Alexa Developer console GUI, but we configure the skill programmatically so it can be done at scale and supports versioning.
1. Open connected-vehicle-lab/vehicle-command/skill-package/skill.json. Two locales, en-US and en-IN, are defined in the base code for the Alexa command. Let's add the en-GB locale in the JSON file under "manifest"/"publishingInformation"/"locales". Similarly, you can add a locale for your preferred language:
"en-GB": {
    "name": "vehicle-command",
    "summary": "Control Vehicle using voice command",
    "description": "Allow you to control vehicle using voice command",
    "examplePhrases": [ "Alexa open genie", "ask genie to lower window", "window up" ],
    "keywords": []
}
If you are inserting it into the middle of the locales object, make sure it is separated by a comma.
2. Let's create a copy of the model connected-vehicle-lab/vehicle-command/skill-package/interactionModels/custom/en-US.json, rename it to en-GB.json, and add our intent.
- We have "invocationName": "genie". Here, we are using "genie" as the command to invoke our Alexa skill. You can change it if needed.
- The key elements in this JSON file are intents, slots, sample utterances, and slot types. Let's define the slot type t_action_type with the values 'open', 'close', 'lock', and 'unlock' under "types": [].
{
    "name": "t_action_type",
    "values": [
        { "name": { "value": "unlock" } },
        { "name": { "value": "lock" } },
        { "name": { "value": "close" } },
        { "name": { "value": "open" } }
    ]
}
- Let's add an intent named 'TrunkCommandIntent' under "intents": [] and define sample utterances such as 'lock my trunk' and 'open trunk'. We are using slot types to simplify the utterances and understand the operation requested by the user.
{
    "name": "TrunkCommandIntent",
    "slots": [
        { "name": "t_action", "type": "t_action_type" }
    ],
    "samples": [
        "{t_action} trunk",
        "trunk {t_action}",
        "{t_action} my trunk"
    ]
}
- Now add the same intent, slots, slot type, and sample utterances to the other locale files (en-US.json and en-IN.json) as well.
3. Let's add the response messages in languageString.js (available at /connected-vehicle-lab/vehicle-command/lambda/custom):
TRUNK_OPEN: 'Trunk Open',
TRUNK_CLOSE: 'Trunk Close'
If you are inserting them into the middle of the list, make sure they are separated by a comma.
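The slot type defined above does the heavy lifting of interpreting what the user asked for. As a rough, self-contained illustration (plain Python, not the actual Alexa NLU, which also handles synonyms and fuzziness), resolving a spoken phrase against the t_action_type values looks like this:

```python
# Values from the t_action_type slot type defined in the interaction model.
T_ACTION_TYPE = ["unlock", "lock", "close", "open"]

def resolve_slot(utterance: str, slot_values=T_ACTION_TYPE):
    """Return the first slot value found in the utterance, or None.
    This approximates how a sample like '{t_action} my trunk'
    captures the action word as the t_action slot."""
    words = utterance.lower().split()
    for value in slot_values:
        if value in words:
            return value
    return None

print(resolve_slot("open my trunk"))     # open
print(resolve_slot("unlock my trunk"))   # unlock
```

Because the action word is a slot, one intent with a handful of samples covers all four operations instead of needing a separate intent per operation.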
Set up the Lambda function
1. Add a Lambda function that is invoked by Alexa. This Lambda function handles the intent and invokes the IoT Core Device Shadow API to execute the actual command: trunk open/unlock or lock/close.
- Open /connected-vehicle-lab/vehicle-command/lambda/custom/index.js and add our TrunkCommandIntent handler:
const TrunkCommandIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'TrunkCommandIntent';
    },
    handle(handlerInput) {
        var t_action_value = handlerInput.requestEnvelope.request.intent.slots.t_action.value;
        console.log(t_action_value);
        var speakOutput;
        const obj = "trunk";
        // 'open' and 'unlock' both open the trunk; 'close' and 'lock' close it
        if (t_action_value == "unlock" || t_action_value == "open") {
            updateDeviceShadow(obj, "open");
            speakOutput = handlerInput.t('TRUNK_OPEN');
        } else {
            updateDeviceShadow(obj, "close");
            speakOutput = handlerInput.t('TRUNK_CLOSE');
        }
        console.log(speakOutput);
        return handlerInput.responseBuilder
            .speak(speakOutput)
            //.reprompt('add a reprompt if you want to keep the session open for the user to respond')
            .getResponse();
    }
};
- The updateDeviceShadow("vehicle_part", "command") function actually invokes the IoT Core Device Shadow API:
function updateDeviceShadow(obj, command) {
    shadowMessage.state.desired[obj] = command;
    var iotdata = new AWS.IotData({ endpoint: ioT_EndPoint });
    var params = {
        payload: JSON.stringify(shadowMessage), /* required */
        thingName: deviceName                   /* required */
    };
    iotdata.updateThingShadow(params, function(err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else console.log(data);
        // reset the shadow
        shadowMessage.state.desired = {};
    });
}
2. Update the value of ioT_EndPoint with your custom endpoint from AWS IoT Core > Settings > Custom endpoint.
3. Add TrunkCommandIntent to the request handlers:
exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
        WindowCommandIntentHandler,
        DoorCommandIntentHandler,
        TrunkCommandIntentHandler,
4. Deploy the Alexa skill:
$ cd ~/environment/connected-vehicle-lab/vehicle-command
$ ask deploy
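As an aside, the same shadow update that updateDeviceShadow performs in Node.js can be issued from Python with boto3's iot-data client. The sketch below is illustrative, not part of the lab: it separates building the payload (which runs anywhere) from the network call, and the endpoint and thing name are placeholders you would replace with values from your own account:

```python
import json

DEVICE_NAME = "tcu"  # placeholder: the thing name used in the previous blog
IOT_ENDPOINT = "https://<your-iot-endpoint>.amazonaws.com"  # placeholder

def build_shadow_payload(part: str, command: str) -> str:
    """Build the desired-state document the shadow API expects,
    e.g. {"state": {"desired": {"trunk": "open"}}}."""
    return json.dumps({"state": {"desired": {part: command}}})

def update_device_shadow(part: str, command: str):
    """Send the desired state to AWS IoT Core (requires credentials
    and network access, so it is not exercised here)."""
    import boto3
    client = boto3.client("iot-data", endpoint_url=IOT_ENDPOINT)
    return client.update_thing_shadow(
        thingName=DEVICE_NAME,
        payload=build_shadow_payload(part, command),
    )

print(build_shadow_payload("trunk", "open"))
```

Keeping payload construction separate from the network call makes the shadow document easy to unit test without touching AWS.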
Handle the Command at the Vehicle TCU and App
For more detail on this section, refer to part 1 of this blog: Field Notes: Implementing a Digital Shadow of a Connected Vehicle with AWS IoT.
@ Vehicle TCU – tcuShadowRead.py has a trunk_handle() function to receive messages from the device shadow:
def trunk_handle(status):
    if status is not None:
        shadowClient.reportedShadowMessage['state']['reported']['trunk'] = status
        print('Perform action on trunk status change : ' + str(status))
@ Web app – demo-car/js/websocket.js has a handleTrunkCommand function that receives a callback message as soon as any update happens on the device shadow:
// this function will be called by onMessageArrive
function handleTrunkCommand(trunkStatus) {
    obj = document.getElementsByClassName("action trunk")[0];
    obj.checked = trunkStatus == "open" ? true : false;
    console.log(obj.getAttribute("data-text") + " : " + obj.checked);
}
demo-car/js/demo-car.js has a handleTrunkCommand function to handle UI input and invoke the IoT Core Device Gateway API to update the desired state:
// this function will be called when the user clicks on the trunk checkbox
handleTrunkCommand: function(obj) {
    obj.checked
        ? demoCar.shadowMessage.state.desired.trunk = "open"
        : demoCar.shadowMessage.state.desired.trunk = "close";
    console.log(obj.getAttribute("data-text") + " : " + demoCar.shadowMessage.state.desired.trunk);
    demoCar.accessIoTDevice();
},
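The TCU-side and app-side handlers above form a round trip through the device shadow. The following sketch condenses that round trip into plain, testable Python (no MQTT connection; field names follow the snippets above): the TCU records the reported state, and the app derives its checkbox state from the resulting update/accepted document.

```python
import json

# TCU side: record the trunk status in the reported state, as
# tcuShadowRead.py does before publishing back to the device shadow.
reported_shadow_message = {"state": {"reported": {"trunk": None}}}

def trunk_handle(status):
    if status is not None:
        reported_shadow_message["state"]["reported"]["trunk"] = status

# App side: parse the update/accepted document and decide whether the
# trunk checkbox should be checked, as websocket.js does.
def trunk_checked_from_shadow(message: str) -> bool:
    status = json.loads(message).get("state", {}).get("reported", {}).get("trunk")
    return status == "open"

trunk_handle("open")
# This document is what $aws/things/tcu/shadow/update/accepted would carry.
accepted = json.dumps(reported_shadow_message)
print(trunk_checked_from_shadow(accepted))
```

This separation is the point of the shadow pattern: the app never talks to the vehicle directly, it only reacts to reported-state documents.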
Use Alexa skill to invoke a command
Let's test our command, 'Alexa, open my trunk'. We can use the command line and execute:
$ ask dialog --locale "en-GB"
Using the Alexa GUI provides an interesting visualization, as shown in the following screenshot.
- Open the Alexa GUI, select the 'vehicle-command' skill, and select the Test tab. Allow developer.amazon.com to use your microphone when prompted.
- Open the demo.html web app side by side with the Alexa GUI to check that the actual operation happens at the vehicle TCU and the status is synchronized with the virtual car model.
- Now test the Alexa skill. You can use a voice command as well; say or type 'ask genie'.
Clean Up
What a fun exploration this has been! Now clean up the AWS resources created for this and the previous post to avoid incurring future AWS service costs. Resources created by the CDK can be deleted by deleting the stack on the CloudFormation console. Resources created manually need to be deleted individually.
Conclusion
In this blog post, I showed how you can enable voice commands for a connected vehicle and enhance the in-vehicle user experience. You can also extend this solution to use cases such as 'Alexa, open my garage'. The AWS IoT Core Device Shadow API does all the heavy lifting in this case: any update to the device shadow allows both the device and the user application to act. The Alexa skill acts as an interface to capture the user command and invoke the Lambda function.
Because these are all serverless services, this implementation can scale without any change to the application, and you only pay when someone invokes a command. Creating an engaging, high-quality interaction with Alexa in the vehicle is critical. You can refer to the Alexa Automotive documentation for an Alexa Built-in automotive experience.
You can read more on this topic in my previous blog: Field Notes: Implementing a Digital Shadow of a Connected Vehicle with AWS IoT.
Also, check out the Automotive issue of the AWS Architecture Monthly Magazine.
Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.