[Premise] Another video comparing voice control using a Moto 360 versus the Amazon Echo

Motorola Premise


Video Description:
This video compares various methods I’m using to control my home with my voice. Both methods use the same scripting on my home automation server. You can see in the video that Google Now wins, but the Amazon Echo is faster today than it was last night. I also noticed the Android Alexa app was down last night after I posted the video. Maybe Amazon was updating things?

I still like the Amazon Echo since it works from 40 feet away (even in the next room) and has much better noise cancellation, using a decent low-noise DSP chip in conjunction with a 7-microphone array!

If I get enough subscribers, I plan to make more organized videos in the future, along with details of how to set all this up in your home. I’ll also post the KODI Premise module for you to play with after testing is complete.

To see more home automation videos, please see my channel here:

Here’s the hardware I’m using to do this:
1. Amazon's Echo (aka Alexa)
2. KODI running on an nVidia Shield AndroidTV device
3. A free home automation program called Motorola Premise Home Control running on a Windows 7 server
4. Z-wave lighting system connected to my Premise Home Control server.
5. Moto 360 watch running Android Wear
6. Nexus 6 smartphone

And here's some more of the software:
7. A KODI module I've written for Premise allowing full two-way IP-based functionality (including library importing) and IR too (so you can use the native Netflix 4K app that comes on the nVidia Shield without picking up another remote).
8. A very versatile SpeechParser module I've written for Premise that takes a generic command phrase, performs some action, and forms a natural-language response. Both Alexa and Google Now send commands to this SpeechParser module.
9. A new Amazon Echo skill I'm calling "Premise" that is in testing under my developer account. It uses an Intent called “Premise” to pass whatever is said after “Alexa ask Premise to” to my home automation server.
10. A free tiered Amazon Web Services (AWS) account to send Alexa commands to my home automation server over HTTPS. The same AWS lambda function also reads back an HTTP response of what actions took place that is sent from my home automation server (via the SpeechParser module).
11. Tasker and Autovoice apps running on my Nexus 6 phone.

12. Additional apps used to avoid needing an intent word (as Alexa requires): Xposed Framework for Android 5.1 and the Google Now API:

Some additional geeky details if you were to set this up:
Everything you see is done in a very generic fashion. No individual phrases were programmed for what you see in the video, I’m too lazy for that!

I’ve written code (a Premise SpeechParser module) for my home automation system that actually interprets the sentence, using nested regular expressions to find which property state, property value, device type, and room location you are trying to control based on the command you say. In this manner, the command phrases are NOT order dependent (unlike most other options out there, including Amazon’s), and they leverage the object-based structure of Premise to recursively find a match within my home for whatever command is issued.
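To illustrate the order-independence idea, here is a toy sketch (in JavaScript, since that's what the Tasker side uses; the actual module runs inside Premise). The word lists are placeholders; in the real module the vocabulary comes from the Premise object tree itself.

```javascript
// Toy sketch of order-independent command parsing: each category is matched
// anywhere in the phrase with its own regex, so word order doesn't matter.
// These word lists are illustrative assumptions, not the real vocabulary.
const ROOMS = ["living room", "kitchen", "bedroom"];
const DEVICE_TYPES = ["light", "lamp", "thermostat"];
const STATES = ["on", "off", "dim"];

// Return the first list entry found anywhere in the phrase.
function findMatch(phrase, words) {
  for (const word of words) {
    if (new RegExp("\\b" + word + "\\b", "i").test(phrase)) return word;
  }
  return null;
}

// Extract room, device type, and state regardless of phrase order.
function parseCommand(phrase) {
  return {
    room: findMatch(phrase, ROOMS),
    deviceType: findMatch(phrase, DEVICE_TYPES),
    state: findMatch(phrase, STATES),
  };
}
```

Because each category is searched independently, "turn on the living room lamp" and "living room lamp on" parse to the same result.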

Once extracted from the command phrase, the device type and room location are used to examine all devices under a particular location (e.g. a room) that match a particular device type (e.g. a light). Once a match is found (e.g. the table lamp in the living room), the properties under that object are compared using recursion to find the best match for the command sentence, and the new value is set.
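The recursive walk could be sketched like this. The tree shape and property names below are assumptions for illustration; Premise's real object schema is richer.

```javascript
// Minimal sketch of recursively walking a home object tree to set a
// property on every device of a given type under a given location.
// The tree layout and property names are illustrative assumptions.
const home = {
  name: "home",
  children: [
    {
      name: "living room",
      children: [
        { name: "table lamp", type: "light", properties: { power: false } },
        { name: "thermostat", type: "thermostat", properties: { setpoint: 70 } },
      ],
    },
  ],
};

// Depth-first search: once we descend into the named room, every matching
// device type below it gets the new property value. Returns the match count.
function setDevices(node, room, deviceType, prop, value, inRoom = false) {
  let count = 0;
  const here = inRoom || node.name === room;
  if (here && node.type === deviceType) {
    node.properties[prop] = value;
    count++;
  }
  for (const child of node.children || []) {
    count += setDevices(child, room, deviceType, prop, value, here);
  }
  return count;
}
```

So `setDevices(home, "living room", "light", "power", true)` would flip the table lamp on while leaving the thermostat untouched.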

The queries in the Part 2 video I posted with the Amazon Echo also work in a similar manner, but instead of setting a property value, they grab the value and return a response to the query.

In Tasker I have a task that executes a JavaScript snippet I’ve written, which takes the phrase passed by Autovoice and sends it over HTTPS to my Premise home automation server. My server then returns a response if successful. If my server returns “I did not understand”, the Google Now app is left running in the foreground to complete navigation as usual. This way you can still say “OK Google, navigate to McDonald’s” and the Google Now app will stay open!
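The decision logic in that Tasker task boils down to something like the standalone sketch below. This is NOT the actual script: in the real task the phrase arrives from Autovoice and the HTTPS call goes to the Premise server, so the server call is stubbed out here as a parameter.

```javascript
// Standalone sketch of the Tasker task's fallthrough logic. The real task
// gets the phrase from Autovoice and POSTs it over HTTPS; sendToServer is a
// stand-in for that network call.
function handlePhrase(phrase, sendToServer) {
  const reply = sendToServer(phrase); // e.g. HTTPS request to the Premise server
  if (reply === "I did not understand") {
    // Home automation didn't claim the phrase: leave Google Now in the
    // foreground so things like "navigate to McDonald's" still work.
    return { handled: false, speak: null };
  }
  // The server handled it; speak back its natural-language response.
  return { handled: true, speak: reply };
}
```

The key design point is that the server's "I did not understand" sentinel doubles as the signal to fall through to normal Google Now behavior.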

To be clear, using the Xposed Framework and the Google Now/Search API requires an unlocked and rooted Android device! You can use AutoWear if you want to shake your watch and then say a command without rooting. For me this isn’t as elegant as a direct hook that instantly captures the “OK Google” phrase.

If you can’t root, you could use Autovoice and Tasker and enable the Autovoice Google Now service (which will only work on your phone, not your Android Wear device).
A special thanks to MohammadAG for posting the Google Search API and to iHelp for maintaining it.    
ETC - very, very nice! I'm still getting back into the swing, now that my Winchester Mystery house project is over...