Voice Activated Home Automation

ChrisCicc said:
Actually you can make SiriProxy work remotely with simple VPN. I do it all the time.

There is nothing simple about a VPN to "regular" users :)

I think anyone in your target market and certainly anyone using SiriProxy knows how to set up a VPN.
 
elvisimprsntr said:
I think anyone in your target market and certainly anyone using SiriProxy knows how to set up a VPN.
 
Enthusiasts and some others? Yes. The rest of our target market? No. Our Kinect voice control is about as simple as it gets: all someone has to do is install a program and enter the credentials for their local install of CastleOS (selected during initial configuration). The rest is automatic (or optional). This is one of the major differentiating features of CastleOS.
 
ChrisCicc said:
There is nothing simple about a VPN to "regular" users :)
 
There is nothing simple about a VPN for network engineers either!  :)   It should be a four-letter acronym: VPfN....
 
elvisimprsntr said:
Awesome!   Yes, please share on GH.  You can post to GH using the old-school command line method, or you can simply download the native app for your OS distribution, which makes it 1000X easier to manage your GH repos.  The easiest way is to clone an existing repo and tweak from there.
 
I also encourage you to edit Plamoni's SiriProxy plugins page to add links to your repo and demo videos.   
 
Done!  It's available on GitHub at https://github.com/mghan/siriproxy-m1
 
Mike
 
Nice work, guys! I started customizing the ISY plugins to suit my needs and realized that setting up new rules would be easier if the ISY REST API were abstracted a little, so I don't have to clutter the plugin code with all the technical REST stuff. So I'm working on a library for the ISY-99i, something like what Mike did for the M1. Hopefully others will find it useful when I'm finished.
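To give a feel for the direction, here's a rough sketch of the wrapper. The class name, address, and credentials are just placeholders; it assumes the ISY's documented /rest/nodes/<address>/cmd/DON and DOF endpoints with HTTP basic auth:

require 'net/http'

class ISY99i
  def initialize(host, user, pass)
    @host, @user, @pass = host, user, pass
  end

  # One GET against the ISY's REST interface, with HTTP basic auth
  def get(path)
    http = Net::HTTP.new(@host)
    req = Net::HTTP::Get.new(path)
    req.basic_auth(@user, @pass)
    http.request(req)
  end

  # Insteon addresses contain spaces, e.g. "16 A4 1 1"
  def on(address)
    get("/rest/nodes/#{address.gsub(' ', '%20')}/cmd/DON")
  end

  def off(address)
    get("/rest/nodes/#{address.gsub(' ', '%20')}/cmd/DOF")
  end
end

# isy = ISY99i.new('192.168.1.10', 'admin', 'password')
# isy.on('16 A4 1 1')

That way the plugin code just calls isy.on(...) and never touches the HTTP details.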
 
I wanted to do a complete rewrite of the ISY plugin over Xmas but was distracted by other priorities. I think some of the plugins I wrote completely from scratch are a little better. At some point I want to make the ISY completely self-aware.
 
elvisimprsntr said:
I wanted to do a complete rewrite of the ISY plugin over Xmas but was distracted by other priorities. I think some of the plugins I wrote completely from scratch are a little better. At some point I want to make the ISY completely self-aware.
 
I finally got around to rewriting hoopty3's original ISY/Elk SiriProxy plugin completely from scratch to make it easier for others to use, more code-efficient, and more portable.   It's not self-aware yet, but the changes are one step closer to making that possible.    Probably my best Ruby work to date, but I am still learning.
 
https://github.com/elvisimprsntr/siriproxy-isy99i
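For anyone who hasn't looked inside a SiriProxy plugin, the pattern is just a regex hook on the recognized speech. A stripped-down sketch (the phrase and plugin name are placeholders; wire the body to whatever controller you use):

require 'cora'
require 'siri_objects'

class SiriProxy::Plugin::Lights < SiriProxy::Plugin
  # Fires when Siri hears e.g. "turn on the porch light"
  listen_for /turn (on|off) the (.+)/i do |state, device|
    # send the command to your controller here (ISY, Elk, etc.)
    say "Turning #{state} the #{device}."
    request_completed  # tell SiriProxy the request is handled
  end
end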
 
That plugin, my Raspberry Pi SD card image, and a $35 USD Raspberry Pi make HA voice control about as easy and inexpensive as anyone can make it.  http://sourceforge.net/projects/siriproxyrpi/
 
 
Enjoy!
 
Elvis
 
I've had the house on X10 for years. I once created a Nintendo DS program that connected to my PC and let me control the lights from the DS touch screen.  When I got an Android tablet in December, I wondered whether its voice recognition could control my lights.  I found the mochad software and installed it on a Raspberry Pi to work with a CM19a transceiver I already had.  Editing the sample CGI files included with mochad gave me browser-based control from the tablet.  Then I installed a Siri clone called EVA, which has a bookmark feature you can set to push a URL to a browser.  So I can say "Eva" "Office on" or "Eva" "Office off" and it sends the URL to the browser.  It's more of a novelty, but I can talk to the Android tablet and control my lights.
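For anyone curious, the guts of it are tiny: mochad just listens on a TCP socket for one-line commands. A minimal sketch (Ruby here purely for illustration; the house/unit code is an example, and "rf" is used because the CM19a is an RF-only interface):

require 'socket'

# mochad listens on TCP port 1099; "rf a1 on" keys the CM19a
def x10(command)
  sock = TCPSocket.new('localhost', 1099)
  sock.puts(command)
  sock.close
end

x10('rf a1 on')   # "Office on"
x10('rf a1 off')  # "Office off"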
 
Our system has an IM interface on it like many others.  For the past 4 or 5 days, I've been playing around with SL4A ( http://goo.gl/D0Qym ) and Python on my Note II using Google's voice recognition.  From my phone, the text of the recognized speech is IM'd to our HA server.  On the server, I have some scripting in place (which was already there for the IM interface) to do some basic natural-language processing of the text and try to extract the device and command.  If it finds a valid device and command, it performs the operation and IMs back a confirmation of the action; if it can't figure it out, it IMs back a negative response.  The script on my phone then reads the IM'd response back via TTS.  Right now it's limited to turning all the lights, appliance modules, and relays on and off.
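The device/command extraction is nothing fancy; conceptually it's something like this (sketched in Ruby for illustration only — the device list and replies are made up, and the real script runs in our HA server's own scripting environment):

DEVICES = ['office light', 'kitchen light', 'porch light']

def handle(text)
  device  = DEVICES.find { |d| text.downcase.include?(d) }
  command = text[/\b(on|off)\b/i]
  if device && command
    # perform the operation on the HA server here
    "OK, turning the #{device} #{command.downcase}."
  else
    "Sorry, I couldn't find a device and command in: #{text}"
  end
end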
 
This works really well. I take advantage of Google's excellent voice recognition.  The IM backbone allows me to voice-control anywhere I have a data connection (no futzing with VPN/SSH) and lets me do all the text processing, device filtering, etc. on the HA server, so I've got a lot of flexibility.  I'm also looking at incorporating NLTK ( http://goo.gl/pJimR ) to allow even more flexibility in extracting commands and recognizing normal conversational speech. At some point I'll post a video, but right now it's still a bit raw.
 