Distributed home automation systems

upstatemike

Senior Member
So just how many PCs will be required for a fully functional distributed system? Maybe the discussions on allocating closet space have all been underestimating what will be needed in the future!
 
upstatemike said:
So just how many PCs will be required for a fully functional distributed system? Maybe the discussions on allocating closet space have all been underestimating what will be needed in the future!
That's a user preference. One PC should run about all you need to automate a home today, unless you run some really CPU-intensive process like video processing, in which case you may choose two PCs. Personally, "distributed system" is more of a buzzword, and as power prices increase the buzzword will lose a lot of its luster. There are situations where this is handy though. The fact that I can hear all of my home PCs' speech while at work is really a nice benefit.
 
Network distribution is way more than that. With CQC, you can sit at any computer in the network and manage your CQC system. You can draw interfaces, write macros, manage images, load/unload drivers and so forth. So a fully distributed system isn't just about the 'runtime' aspects, it's also about the 'management time' aspects. It is also, though, the foundation of a system where you have a touch screen in every room (on the wall or a tablet) that has local smarts, not just a web browser type thing. You need a network distributed system to do that.

Ask any of our users what it means to them and you'll get a feel for how much more than a 'nicety' it is. Once you've used a system like that, you'll never want to go back. It's just like having a standalone computer vs. having a computer on a network, which allows you to share drives, access remote printers, send e-mail and so forth. It multiplies the power of an automation system in the same sort of way.

Building a fully network distributed system is very hard and unless you started off that way, it's even harder. We knew we were going that way from day one and built it from the ground up. It will be quite painful to add it post-facto, as other programs like ML are also trying to do.
 
Dean Roddey said:
...It is though also the foundation of a system where you have a touch screen in every room (on the wall or a tablet), that has local smarts, not just a web browser type thing. You need a network distributed system to do that...

...Once you've used a system like that, you'll never want to go back...
I guess I didn't realize how many of the folks here have a touch screen in every room. I suddenly feel very inadequate.
 
It's not necessarily EVERY room, but many rooms. The issue is the same. It's not unheard of though for high end systems to have a touch screen on the wall in every important room (i.e. not the bathrooms, but kitchen, and all bedrooms and common rooms), plus some number of wireless tablets like in the living room and/or theater. In some rooms (like an office) it might just be a standard PC on the desk. But even if it's just two or three clients, you need a networked system for that if you want local smarts in the clients.
 
The bathroom isn't that important from an automation standpoint. Some people might put a touch screen in there, though you'd need a water/moisture-proof one. But it's probably not terribly common. In uber-high end systems though, anything goes.
 
I guess it's what you get used to. Being a web developer, I don't think about network distributed software, since that goes without saying when developing web applications. I guess that's why more and more software is getting web-a-fied these days.
 
Not to get too technical here, but the web is (mostly) a straight client/server request/response paradigm, and is also usually not connection oriented, i.e. you connect, you get one page of stuff, then you disconnect. In a system like CQC, it's really distributed, in that each box can run various components of the system to distribute it around the network as required. So each box is often constantly talking to each other box, or some number of them, and constantly managing things behind the scenes to keep all the juices flowing.
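
To make the contrast concrete, here is a rough sketch (hypothetical, not actual CQC code) of the two models in Python: a web-style one-shot request versus a connection that stays open so one box can keep pushing status to another.

```python
# Hypothetical sketch, not CQC code: one-shot web fetch vs. a persistent link.
import socket
import urllib.request

# Web style: connect, get one page of stuff, disconnect.
def web_style_fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# Distributed style: the connection stays up, and the peer keeps pushing
# status updates, which we handle as they arrive.
def listen_for_updates(host: str, port: int) -> None:
    with socket.create_connection((host, port)) as conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break  # peer dropped; a real system would reconnect and resync
            print("update from peer:", data.decode(errors="replace"))
```

The point is just that in the second model each box maintains live conversations with its peers all the time, which is where most of the hidden work in a distributed system goes.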

It's a much more complex scenario. You might press a button on machine A which runs an action there, which writes to a field in a device driver on machine B, which sends out a change event that's seen by machines C and D, which run triggered events that change some device status that's seen on the screen back on machine A. So if you have an automation closet, you can have a server in there providing the main server functionality plus controlling any devices in there, and a separate HTPC in the theater that's controlling all the theater equipment. Together they create a seamless automation system where each machine can see all of the devices controlled by any other, and you can tweak a user interface while in the theater even though it's stored on the main server.
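
As a purely illustrative sketch (this isn't CQC's real protocol or API), that button-press chain can be modeled with a little publish/subscribe bus, with each 'machine' reacting to the field change:

```python
# Hypothetical illustration of the flow above, simulated in one process
# with a tiny event bus standing in for the network.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, field: str, handler: Callable[[str, str], None]) -> None:
        self._subs[field].append(handler)

    def publish(self, field: str, value: str) -> None:
        for handler in self._subs[field]:
            handler(field, value)

bus = EventBus()

# Machine B hosts the device driver; writing its field emits a change event.
def machine_b_write_field(field: str, value: str) -> None:
    print(f"[B] driver field {field} <- {value}")
    bus.publish(field, value)

# Machines C and D run triggered events when they see the change, and
# machine A's screen watches the same field so its display stays current.
def machine_c_triggered(field, value): print(f"[C] triggered event: {field}={value}")
def machine_d_triggered(field, value): print(f"[D] triggered event: {field}={value}")
def machine_a_screen(field, value): print(f"[A] screen now shows {field}={value}")

for handler in (machine_c_triggered, machine_d_triggered, machine_a_screen):
    bus.subscribe("Theater.Lights", handler)

# The button press on machine A kicks the whole chain off.
machine_b_write_field("Theater.Lights", "Off")
```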
 
That's a viable scheme as well. Network distributed doesn't mean that they have to be a long distance away from each other. You are just moving the boxes to the closet and pushing their video/touch interface out to the rooms. It's exactly the same conceptually. You still somehow need multiple clients if you want to have multiple, independent touch screens. It doesn't really matter if they are in the closet or in the individual rooms. They still need to be networked in order for the scheme to work. I've suggested this setup before as well, and it does have some benefits if the cost isn't prohibitive.
 
Dean,

I assume Upstate Mike was in his usual kiddingly-serious mode (or is it seriously-kidding), but is there any value in having the processing power of a set of blade servers? For HA, I mean. For instance, is there any application that would distribute the workload of transcoding a movie?

Or, do you mean, having different processors streaming video/sound to different remote media clients?

The first one seems like a stretch, but the second one seems perfectly viable. Could the distributed system send the next task to the least loaded server? Like single-threaded load leveling?

Sorry if I went too far OT, but this has taken an interesting turn.
 
The issue always comes down to the fact that you want to have X number of touch screens. You really have like three gross level options:

1. You somehow get a single machine to animate all those screens (i.e. no smarts in the screens, since one machine is driving them all.)

2. You use something like RDP so that you have one machine on the back end, but effectively that machine is running X number of user sessions.

3. You have X smart clients that run their own OS.

#1 would be something like the Rad-IO system (the little 240x160 pixel ones.) You have one machine, and the output from the video is split out and shown on up to 8 screens.

#2 is something like the UTMA devices, where the devices are just dumb displays and all the actual work is done on the back end on a single machine, by having RDP sessions running on the back end. Just using something like an Airpanel as an RDP client would be basically the same thing.

#3 means each client is a separate computer. That could be that you have a computer in each room (and it could be an all-in-one touch screen and computer that's designed for in-wall mounting), or you could put them all in a closet and just run wires to the room for the video/touch interfaces. These two scenarios are exactly the same ('topologically'), they just differ in where the client machines live.


#1 is interesting, but there's not really much built-in support for such things. The Rad-IO guys really work best if you have a dedicated machine just to run the Rad-IO, because you cannot run another video card at the same time. You can plug a local display in, but you are limited to the 640x480 resolution of the Rad-IO video. It would be the 'lightest' of all the options, but unless someone comes up with some highly specialized hardware to support it better, it's kind of limited.

#2 has its pros and cons. The big con is that you need a fairly mondo-sized machine if you want to run a number of clients, because all the work is done on the back end. And if the box is also going to be a media server and automation server, it starts getting kind of iffy. So you may end up with one moderate and one big machine. The performance will not be as good as separate clients, but it's probably not bad.

#3 is the most flexible and highest performance. The big downside, if you put the machines in each room, is maintenance. You end up walking around the house to make changes, and the machines are potentially open to abuse by those psycho little bra... I mean the kids. Using a blade server is a way to have this solution while still maintaining a centralized system, so that the only thing exposed in the room is a screen. Everything else is in the closet.

It might not even be the most expensive. If you look at something like Lifeware, which uses the in-wall type touch screens, those guys are around $3K a pop. If you wanted to have 8 of those, that would be $24K before you even bought their product and other accoutrements. You could do a blade server with 8 blades (plus a bigger one for the main server) plus standard touch screens for a lot less than that, I think.


Of course, if you can handle just having a standard PC in each room, that could be inexpensive depending on how you go about it. A $700 mini-ITX system could drive a $650 15" touch screen in each room. It's just not as 'clean and professional' as having a magic touch screen on the wall.
 
Oh, in terms of media, they would all be streaming off the same server I'm sure. So they would have just minimal discs, or might even be set up to boot off an image on the server, since they would exist solely to drive a screen. So you could buy the lowest end blades available for the client blades, and just buy one or two more manly ones to use as the real servers.

One interesting side effect of such a thing is that, since all of them are right there, you could make the server a dual-homed system (two network cards), and the server and the client blades could have a 'private' network for streaming media and automation data, while the main machine is on the broader network via the second NIC. So you won't have the overhead of media streaming on your main network, and weirdness on your main network won't interfere with your media/automation network. That would provide a lot of control over the network access by the clients as well.
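
As a rough illustration of the dual-homed idea (the address and port below are made-up examples, not anything CQC-specific), the services meant for the client blades would bind only to the private NIC's address, so that traffic never touches the main house network:

```python
# Hypothetical sketch: bind the media/automation service to the private NIC only.
import socket

PRIVATE_NIC_IP = "10.10.10.1"  # example address for the closet-only network
MEDIA_PORT = 9500              # example port

def serve_private_only() -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Binding to the private NIC's address (rather than 0.0.0.0) means only
    # the client blades on that network can reach this service at all.
    srv.bind((PRIVATE_NIC_IP, MEDIA_PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        print("blade connected from", addr)
        conn.close()
```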
 