Voice-activated HA via Tasker

I played around with the Kinect SDK about a year or so ago. The thing I didn't like (maybe that's changed?) was that all the recognition phrases had to be pre-canned, i.e. if you don't have the phrase "patio light on" set up in your app, it won't recognize it. I prefer Android's speech recognition, which passes along the entire spoken phrase (it doesn't have to match some pre-programmed set of phrases in order to be recognized) - you just need to parse out the command (a couple of examples of what I've done: http://goo.gl/xjxEd and http://goo.gl/Z9upD ). The other problem is being in earshot - this comes in handy when I'm BBQ'ing in the backyard or working on the car in the garage.
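
To show what "parse out the command" from a free-form phrase might look like, here's a minimal Python sketch. The device names and actions are hypothetical examples, not the poster's actual Tasker setup:

```python
# Hypothetical lookup tables mapping spoken keywords to device/action IDs.
DEVICES = {"patio light": "PatioLight", "porch light": "PorchLight"}
ACTIONS = {"on": "On", "off": "Off"}

def parse_command(phrase: str):
    """Scan the whole recognized phrase for a known device and action."""
    phrase = phrase.lower()
    device = next((v for k, v in DEVICES.items() if k in phrase), None)
    action = next((v for k, v in ACTIONS.items() if k in phrase), None)
    return (device, action) if device and action else None

print(parse_command("could you turn the patio light on please"))
# -> ('PatioLight', 'On')
```

Since the recognizer hands over the whole phrase, filler words ("could you", "please") don't matter - only the keywords do.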
 
I agree that out of the box it is a bit constrained, although I have added a "What can I say?" command that speaks the available commands, and you also have the option of defining multiple ways to say the same thing ("Patio light on" could be triggered by "Patio Light On", "Turn the Patio Light on", "Turn on the Patio Light", etc.).
 
What I want to look at doing when I have a bit more time, though, is defining a vocabulary of words that will appear in most commands, such as:
ALARM NAME
ON
OFF
PATIO LIGHT
MUSIC
FRONT DOOR
PORCH
ARM
STAY
AWAY
 
etc.
 
Kinect should be able to recognize those individual words, and as they are recognized the program would string them together; if enough key words are strung together to form a command within, say, a 5 second time frame, the command would execute. So during the course of picking up sound the Kinect might hear "AMY please go ahead and turn the PATIO LIGHT to the ON position please". Having received the command control phrase "AMY", an object "PATIO LIGHT", and an action "ON" within a defined period of time, the system would know to execute the command. Perhaps 5 seconds or so after recognizing one of those required words (command control/object/action), the system would clear the variable where the word was stored. I like being able to just speak to the house rather than grabbing my tablet or phone and speaking into that.
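
A rough sketch of that keyword-window logic, in Python rather than an actual Kinect/CQC implementation (the trigger word "AMY" and the 5 second window come from the description above; everything else is illustrative):

```python
import time

TRIGGERS = {"AMY"}
OBJECTS = {"PATIO LIGHT", "FRONT DOOR", "PORCH", "MUSIC", "ALARM"}
ACTIONS = {"ON", "OFF", "ARM", "STAY", "AWAY"}
WINDOW_SECS = 5.0

heard = {}  # slot name -> (word, timestamp)

def on_word_recognized(word: str):
    """Call this for each individual word/phrase the recognizer reports."""
    now = time.monotonic()
    # Clear any slot whose word is older than the window, as described above.
    for slot in list(heard):
        if now - heard[slot][1] > WINDOW_SECS:
            del heard[slot]
    if word in TRIGGERS:
        heard["trigger"] = (word, now)
    elif word in OBJECTS:
        heard["object"] = (word, now)
    elif word in ACTIONS:
        heard["action"] = (word, now)
    # Execute once trigger, object, and action all arrived within the window.
    if {"trigger", "object", "action"} <= heard.keys():
        execute(heard["object"][0], heard["action"][0])
        heard.clear()

def execute(obj: str, action: str):
    print(f"Executing: {obj} -> {action}")

# e.g. "AMY please go ahead and turn the PATIO LIGHT to the ON position"
for w in ["AMY", "PATIO LIGHT", "ON"]:
    on_word_recognized(w)
```

The per-slot timestamps mean a stray "ON" heard in conversation simply expires if no trigger word and object show up around it.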
 
In order to make this easier, we've added a new HTTP-based trigger driver. So, instead of having to use the web server and then indirectly invoke an action, you can now directly train CQC to invoke actions via HTTP GET commands. You just train it to respond to a specific URL and invoke a particular action in response. Any query parameters in the URL are passed to the action, so you can parameterize the actions you invoke.
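
So from the client side, firing a trained action is just an HTTP GET. A minimal sketch - the host, port, path, and parameter names here are made up for illustration; the actual URL is whatever you train the driver on:

```python
from urllib.request import urlopen
from urllib.parse import urlencode

# Hypothetical trained URL; query parameters get passed to the action.
params = urlencode({"device": "PatioLight", "action": "On"})
url = f"http://cqc-server:8080/trigger/lights?{params}"
with urlopen(url) as resp:
    print(resp.status)
```

Anything that can issue a GET (Tasker, a browser bookmark, a shell script) can then drive the action directly.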
 