Frustrated with home automation software

Sure, I was just pointing out that it's not some kind of 'trademarky' special technology that's required. It just falls out of the design of automation systems, and any of them can do it. Any such scheme is ultimately an illusion that the devices are really talking to each other.
 
Actually allowing devices to do so in an unsupervised way (unsupervised by the automation controller) is not really even desirable, IMO. You want the automation controller to be the one that, under user-controlled circumstances, mediates between them. That is, such inter-device communication should take the form of the user configuring the automation system to react to X happening in device A by causing Y to happen in device B, not one device directly (well, indirectly) sending commands to another device. Direct device-to-device control has all of the 'spread out setup' downsides of a non-centralized controller scenario, while still requiring the centralized controller. Keeping it in the automation controller in the form I describe above means that the specific devices involved don't matter, since they are not involved in control; they are only controlled (and notify the automation system of changes in their states.)
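The mediation pattern described above can be sketched in a few lines. This is a hypothetical illustration (the `Device`, `Controller`, and rule names are all invented, not any real product's API): devices only report state changes and accept commands, while the controller owns all the "when X in A, do Y in B" rules.

```python
# Minimal sketch of centralized mediation: devices never command each other;
# they only notify the controller, which applies user-configured rules.
# All class and rule names here are invented for illustration.

class Device:
    def __init__(self, name, controller):
        self.name = name
        self.controller = controller
        self.state = {}

    def report(self, field, value):
        # Devices only notify the controller of state changes.
        self.state[field] = value
        self.controller.on_event(self.name, field, value)

class Controller:
    def __init__(self):
        self.rules = []   # (source device, field, value, action)

    def add_rule(self, source, field, value, action):
        self.rules.append((source, field, value, action))

    def on_event(self, source, field, value):
        for s, f, v, action in self.rules:
            if (s, f, v) == (source, field, value):
                action()

# Usage: "when motion sensor A trips, turn on lamp B"
ctrl = Controller()
sensor = Device("motion_A", ctrl)
lamp = Device("lamp_B", ctrl)
ctrl.add_rule("motion_A", "motion", True,
              lambda: lamp.state.update(power="on"))
sensor.report("motion", True)
print(lamp.state["power"])  # on
```

Because the rule lives in the controller, swapping out either device only means re-pointing one rule, which is the "devices don't matter" property claimed above.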
 
If all communication goes through your central hub, that simplifies protocol and setup issues for sure, but it can potentially introduce latency (how much that matters depends on how fast the communications are), and it also creates a single point of failure that can take down the whole system. Without dependence on the central hub, if that hub fails, your light switch can still talk to your lamp even if they are the only two devices left. If you mean the central hub should be in control of setting up and monitoring those connections, while still allowing the devices to communicate with each other directly, sure.

With regard to standardizing protocols, there is a rush towards IP-based protocols because they are easy. You can cram network-awareness into any old device. But that converts it into a device which offers an attack surface on your local network and (if it has updateable firmware) which can be made to run malware -- as is already the case with routers, network-attached storage, etc. Use a non-IP protocol and your devices are effectively protected behind your one bridge device (which speaks both IP and your automation protocol). In case of an attack or security flaw, only that one bridge device needs to be updated or replaced. We've already seen that updates for critical communication devices like cellphones often dwindle away to nothing in a year or two; we certainly can't expect manufacturers to consistently roll out security updates to our light bulbs. Directly exposing those devices to the network is just flat-out dangerous.
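The bridge idea above boils down to one process that translates a constrained set of verbs between the IP side and the non-IP bus, so the attack surface ends at the bridge. A rough sketch, with an entirely invented frame format and verb set:

```python
# Hypothetical bridge pattern: only this one process speaks IP; devices behind
# it see a small, validated command set. The frame format [unit, verb, level]
# and the verb names are invented for illustration.

ALLOWED = {"on", "off", "dim"}   # the only verbs the bridge will translate

def translate(ip_request):
    """Turn an IP-side request dict into a non-IP bus frame, or reject it."""
    cmd = ip_request.get("cmd")
    if cmd not in ALLOWED:
        raise ValueError("rejected: %r" % cmd)   # attack surface ends here
    level = int(ip_request.get("level", 100))
    level = max(0, min(100, level))              # clamp untrusted input
    verbs = {"on": 1, "off": 2, "dim": 3}
    return [int(ip_request["unit"]), verbs[cmd], level]

print(translate({"unit": 7, "cmd": "dim", "level": 40}))  # [7, 3, 40]
```

Anything the bridge doesn't explicitly translate never reaches the devices, which is why only the bridge needs patching when a flaw turns up.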
 
jdale said:
With regard to standardizing protocols, there is a rush towards IP-based protocols because they are easy. …
 
In its current state, IP cannot do everything for automation. That's why groups like Thread exist. Even the poster child of IP-based devices doesn't rely on IP networking for mission-critical functionality (smoke detectors talking to each other); it relies on mesh. (Someone here earlier was asking why mesh was relevant; here's a perfect example.)
 
jdale said:
If all communication goes through your central hub, that simplifies protocol and setup issues for sure, but it potentially can introduce latency … and it also creates a single point of failure that can take down the whole system. …
 
The only way for devices on different protocols to talk is if there is a data broker. The question is whether that data broker is local or in the cloud. At least if it's local, when something goes wrong it only affects one house. When something goes wrong in the cloud with cloud-based systems (as often happens), everyone has issues. Also, latency isn't a factor for local systems (at least it shouldn't be), but it is a factor for cloud systems.
 
If that local data broker goes down, how much comfort do I get from knowing that no one else's house is working either? Not much.

Putting it in the cloud does, however, add a vulnerability to internet outages and additional privacy concerns, so I still agree that's worse.
 
The whole point is moot anyway, if you are talking about real automation. Yeah, you can have devices talk to each other, but it will only be in the most simplistic way. If you want real automation, you need the central controller, and it can provide the translation between devices easily enough anyway. As to lights, in any realistic setup, every load is going to have a local, physical switch anyway.
 
As to latency, a little is involved of course, but at the data rates that things like low-power mesh or power-line devices communicate, it's not going to be a lot in a good system. Of course part of that will depend on how good the devices are with their integration protocol. Some of them don't take into account that, if the device can support 255 loads but provides a serial protocol at 9600 baud and has to be polled, this is going to make it impossible to provide low-latency response to changes. Good devices provide fast, async notification of changes, which keeps latency to a minimum.
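The 9600-baud polling scenario is easy to put numbers on. The frame sizes below (a 4-byte poll plus a 4-byte reply per device) are assumptions for illustration, as is 8N1 framing (10 bits on the wire per byte); the real protocol could differ, but the order of magnitude is the point:

```python
# Back-of-envelope latency for polling 255 loads over a 9600-baud serial link.
# Frame sizes and 8N1 framing are assumptions for illustration only.

BAUD = 9600
BITS_PER_BYTE = 10          # 8 data bits + start + stop (8N1 framing)
BYTES_PER_POLL = 4 + 4      # assumed request + response per device
LOADS = 255

bytes_per_second = BAUD / BITS_PER_BYTE            # 960 bytes/s
seconds_per_poll = BYTES_PER_POLL / bytes_per_second
# Worst case: a switch changes state just after its poll, so you wait a
# full cycle through all the other loads before hearing about it.
worst_case = LOADS * seconds_per_poll

print("per poll: %.1f ms" % (seconds_per_poll * 1000))   # ~8.3 ms
print("worst case: %.2f s" % worst_case)                 # ~2.12 s
```

A two-second worst case between flipping a switch and the controller noticing is exactly the kind of sluggishness async change notifications avoid, since the device speaks up immediately instead of waiting its turn.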
 
And the other thing to keep in mind is that, in almost all cases, the light system is going to be a single-vendor system, or a single-technology system which multiple vendors support (Z-Wave, UPB, etc...) So, if you really want extra-controller inter-device communications, that doesn't require an all-seeing, all-knowing common protocol anyway, since they will inherently all talk to each other.
 
And, as soon as you need that switch or pad button to do something besides lights when it's flipped or pressed, and do more advanced control, you'll still need the central controller, with its far more capable user logic creation system and its ability to deal with devices outside the protocol the switch or pad supports (the reality for a very long time.)
 
With my Insteon system I have many options to do these controls.

Devices can talk directly to each other, talk to a central controller that decides whether the end device gets tripped or not, or talk directly to each other with a central controller deciding what level of brightness it will operate at, whether it will operate at all, or how long the end device will stay on.

Scenes can be set up in the ISY994i controller that are preset into the devices, avoiding much of the logic and protocol latencies. A motion sensor, linked directly to a lamp module, triggers the lamp on at a preset brightness level, based on the time of day or other logic decided by the ISY994i central controller. Undetectable latency plus "home automation" control logic.

Now the motion sensor (MS) can time out and turn the lamp off, and/or the ISY994i can turn the light off based on complex logic factors.

If the central controller drops out of the picture, things can still mostly work without it, just slower to turn off, possibly wasting more energy. Of course text messaging and email notifications will not be sent from the end devices, but the basic system can still work, provided the user set it up that way.
 
Dean Roddey said:
The whole point is moot anyway, if you are talking about real automation. …
I can't see any flaw in your argument, and yet MQTT, which is a publish/subscribe model, seems to be gathering momentum of late, especially with respect to the Internet of Things. MQTT seems to de-emphasize the importance of centralized control, or else perhaps I'm missing the point of it.
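For anyone unfamiliar with the model: MQTT decouples senders and receivers through named topics on a broker, so a publisher never addresses a subscriber directly. Here's a minimal in-process sketch of that pattern (not real MQTT, which adds a network broker, QoS levels, retained messages, and so on; the usual Python client is paho-mqtt):

```python
# In-process sketch of the publish/subscribe pattern MQTT uses. Real MQTT is
# a network protocol with a standalone broker; this only shows the decoupling.

class Broker:
    def __init__(self):
        self.subs = {}          # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Publisher and subscriber never know about each other, only the topic.
        for cb in self.subs.get(topic, []):
            cb(payload)

broker = Broker()
received = []
broker.subscribe("home/livingroom/light", received.append)
broker.publish("home/livingroom/light", "on")
print(received)  # ['on']
```

Note that even here a central coordination point still exists: the broker. MQTT de-emphasizes centralized *logic*, not centralized message routing, which is arguably consistent with the argument above.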
 
LarrylLix said:
With my Insteon system I have many options to do these controls. …
"..a single vendor system, or a single technology system which multiple vendors support (Z-Wave, UPB, etc...).."
So you never expect anyone to migrate from one system to another, like from X-10 to Z-Wave? Or do you expect them to replace all their lights at once?
-jim
 
Perhaps you have been misinformed regarding Insteon.
The ISY994i controller supports Z-Wave, Insteon, Zigbee, IR, and X10 via built-in cards. With its Ethernet access, it controls almost every lighting and gadget hub without further hardware additions. It also sports a bi-directional REST interface for hooking in mobile gadgets, loggers, Sonos, Philips, TCP. Insteon equipment is made by about a dozen manufacturers and is typically cheaper than most others, except for obsolete X10 modules.
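As a rough illustration of that REST interface: as I recall the ISY documentation, device commands look like `GET /rest/nodes/<address>/cmd/DON` (device on, with an optional brightness level) -- treat the exact paths as unverified and check them against the actual ISY manual. The host name and node address below are placeholders:

```python
# Hedged sketch of building an ISY994i REST command URL. The /rest/nodes/.../
# cmd/DON path reflects ISY docs as I recall them; verify before relying on it.
from urllib.parse import quote

def isy_command_url(host, node_address, command, level=None):
    """Build the REST URL for a device command (e.g. DON = device on)."""
    url = "http://%s/rest/nodes/%s/cmd/%s" % (host, quote(node_address), command)
    if level is not None:
        url += "/%d" % level    # DON accepts an optional on-level
    return url

print(isy_command_url("isy.local", "12 A3 B4 1", "DON", 128))
# http://isy.local/rest/nodes/12%20A3%20B4%201/cmd/DON/128
```

In practice you'd issue that URL with an HTTP GET (the ISY uses basic auth), which is what makes it easy to hook in loggers and mobile gadgets without extra hardware.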
 
I still use a few X10 modules, motion sensors, and keypads, but am migrating to Insteon and Philips using the ISY994i controller. Many ISY994i users run multiple other protocols together in a coordinated HA system with the ISY controller. Most Insteon modules can accept direct controls from X10 initiators without the use of any smart controller. I do not know why I would migrate further over the next few decades. Constant reports from other systems' users demonstrate the Insteon system to be one of the best so far.
 
Scanned this thread as I'm mostly AWOL on HA forums (after 6 years of getting my system set up, I'm in the "enjoy it" stage except for the occasional upgrades). It's a bit of a non sequitur, but it goes to the point of frustration.
 
I made the mistake of using a single box for all my needs: HA, HT (SageTV), Plex, Picasa/Dropbox/GDrive, NAS (I think 27TB raw capacity, and I'm running out). My thinking was that power bills in CA are exorbitant ($0.40/kWh incremental), so minimize the number of 24x7 boxes.
 
But that means the same OS for all of those. I did a massive upgrade of my server so I could set up Plex to transcode multiple HD streams on the fly to the recipient box (Roku or tablet or phone), so I upgraded from WHS to Win7. Problem is that it broke my morning TTS announcements via CQC. Something about the service not running under the right account. I've tried for HOURS to fix it, but I can't figure out how to set up Win7 the right way.
 
Moral: to reduce frustration, keep your HA system standalone. I'm contemplating getting an Intel NUC or something small for CQC, and putting WHS back on that. But it's hard to justify spending incremental time to build another box and move all the crap over to a new server, just so I can get TTS working. I have a lot of non-HA demands on my time and don't feel like piddling with this.
 
@IVB, if you already have a beefy computer, you should consider running multiple virtual machines. Then you can run Win 7, WHS, Linux, and whatever all on the same box. You get snapshot backups for quick recovery from failed software upgrades or a virus. You get hardware abstraction, so if you have a computer failure you can quickly switch to running the virtual machines on any random computer you happen to have. You can even configure redundancy so your VMs switch to another computer when you have a critical component failure. You can also easily spin up extra VMs in a sandbox to test new software or configurations of your HA.
 
Yeah, true. But that's still more stuff to learn; I've never done VMs before. Given that I'm playing 5 games of soccer a week, 2 kids, a new dog, a wife, and oh - a job, it's all but impossible to find more than 1-2 hours/week to expand the HA system. The main reason I carved out time to dig into my VRUSB issues is that my dad just moved in with us and he's forgetting to lock the door when he goes out, so the wife made me :)
 
IVB said:
Scanned this thread as i'm mostly AWOL on HA forums (after 6 years of getting my system setup, i'm in the "enjoy it" stage except for the occasional upgrades). …
 
As of Vista and Win7, regular TTS commands stopped working from a service, presumably because the OS just doesn't set up the connection to the audio outputs for a service anymore. So any TTS commands you run from the background (the event server, mostly) you won't hear anymore. So we created a speech driver that handles the conversion of text to audio itself and spools that audio out via an audio output the way an audio player would, effectively providing the TTS functionality ourselves to get around this limitation. Too bad they did that, but you gotta adapt and move forward.
 
It also has some useful features that the old standard background TTS didn't. Since it's now essentially an audio player, you can mute and pause it and adjust the volume. And it allows you to set up CML macros that it will call before speech begins, after speech ends, and for each discrete speech operation in between, which is very useful for various types of 'save state, broadcast, replace state' operations.
 
You can still do regular TTS commands from the foreground, i.e. from within your user interfaces, and that's more convenient in those situations than using the speech server. The speech server is more for global type speech for the most part, which is how TTS commands in the background were really being used anyway.
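The "speech driver as audio player" idea above can be sketched generically: text goes through a pluggable synthesizer, and hooks fire before and after playback (e.g. to duck the whole-house audio and then restore it). Everything here is an invented illustration of the pattern, not the actual CQC driver or its API:

```python
# Sketch of a background speech server: pluggable text-to-audio rendering,
# with pre/post hooks and mute support, which plain service-side TTS lacked.
# All names are invented for illustration.

class SpeechServer:
    def __init__(self, synth, on_start=None, on_end=None):
        self.synth = synth            # callable: text -> audio (stubbed here)
        self.on_start = on_start      # e.g. save state and duck the audio
        self.on_end = on_end          # e.g. restore the previous source
        self.muted = False
        self.played = []

    def speak(self, *texts):
        if self.on_start:
            self.on_start()
        for text in texts:
            if not self.muted:        # muting is possible because we own playback
                self.played.append(self.synth(text))
        if self.on_end:
            self.on_end()

events = []
srv = SpeechServer(synth=lambda t: "audio<%s>" % t,
                   on_start=lambda: events.append("duck"),
                   on_end=lambda: events.append("restore"))
srv.speak("Good morning", "Door is unlocked")
print(events)          # ['duck', 'restore']
print(srv.played)      # ['audio<Good morning>', 'audio<Door is unlocked>']
```

In a real implementation the synth stub would be an actual TTS engine rendering to a wave buffer, and `played` would be an audio output stream, but the hook sequencing is the part that makes the 'save state, broadcast, restore' behavior possible.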
 