Fastest rule-processing for automation?

There's nothing corner case about it. If you want to pick one little thing, yeah, maybe so. But if you look across the breadth of functionality, everyone has a clear justification for what they want to do. If you can't do it, they'll go give their money to someone who can. That's the bottom line.
 
Of course sometimes you just say it can't be done. We dropped any attempt to poll Z-Wave in our V2 Z-Wave driver. It's just not practical. We do a long period 'just are you alive' type poll, but that's it. Otherwise we require async reporting if you want status for a module. Otherwise, it's write only.
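For illustration, here's a minimal sketch of that approach in Python. Names and the interval are invented, not the actual driver code: async reports are the only source of status, and a node only gets a liveness poll after it has been silent for a long period.

```python
import time

# Hypothetical sketch of "long-period liveness poll plus async reporting".
# The class name and interval are invented for illustration only.

LIVENESS_INTERVAL = 3600.0  # assume: poll a silent node once an hour


class NodeState:
    def __init__(self, node_id):
        self.node_id = node_id
        self.last_seen = 0.0   # timestamp of the last traffic from the node
        self.status = None     # filled in only by async reports

    def on_async_report(self, status, now=None):
        # Async reports are the sole source of status updates.
        self.status = status
        self.last_seen = now if now is not None else time.time()

    def needs_liveness_poll(self, now=None):
        # Only poll if we've heard nothing for a whole interval.
        now = now if now is not None else time.time()
        return (now - self.last_seen) >= LIVENESS_INTERVAL


node = NodeState(12)
node.on_async_report("on", now=1000.0)
print(node.needs_liveness_poll(now=2000.0))   # False: heard from it recently
print(node.needs_liveness_poll(now=5000.0))   # True: silent for over an hour
```

Everything else stays write-only: commands go out, but no status is requested.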
 
But, you can't really do that too much, and I can tell you that a lot of Z-Wave users weren't totally happy about that. Maybe enough that some of them went elsewhere, and also maybe enough that some new users will go elsewhere.
 
I'm no pro and this may be a naive thing to say but you guys haven't mentioned something that seems apparent to me.
 
Doesn't the old analogy of a roadway apply here? If you want to go faster you get a bigger engine, and if you run into too much traffic you add lanes to the highway. Or look at Intel's approach to the problem. The PC began with the AT data bus, then ATA and PCI and so on, and they started putting multiple processors on the motherboard and then multiple cores on a single processor. Sometimes you just need a bigger boat.
 
It is interesting to talk about designing for greatest efficiency, but at some point you are limited by the fact that we live in a physical world. Even bits and bytes are physical, just electrons traveling on a wire or signals through the air, and they occupy physical space and require time to travel.
 
Why don't we see Elk or HAI offering different models with faster processors and multiple data buses? Probably because they consider their systems capable as-is. With all of the different wired and wireless devices available on the market, new devices appearing every day, and the lack of any communication protocol standard, it is impossible for everything to work together smoothly. Without any standards it is up to the installer to choose and plan carefully and avoid the bottlenecks that bring it all to its knees. And as far as developing a standard with all of your wonderful ideas: it is not going to happen, with technological advances coming as quickly as they are and with so many companies popping up and trying to grab their piece of the pie.
 
Mike.
 
Well, with wireless there's not necessarily more room (bandwidth, channels, protocol limits) to add lanes.  Wired, sure, but that's not necessarily a viable solution for mass market uptake.
 
What vendors considered "capable as-is" was also true for horse livery and carriages.  Diversity certainly does breed advances; it would be naive to think otherwise.  Now, not all of the advances succeed, of course.  Steam cars didn't quite make it, nor did electric (but that's circling back around).  But now, more than ever, there's quite a lot of overlap and interaction possible with HA devices.
 
I'm absolutely not focused at all on an installer's perspective.  It might be fair to say some of them have been part of the problem.
 
wkearney99 said:
Well, with wireless there's not necessarily more room (bandwidth, channels, protocol limits) to add lanes.  Wired, sure, but that's not necessarily a viable solution for mass market uptake.
 
What vendors considered "capable as-is" was also true for horse livery and carriages.  Diversity certainly does breed advances; it would be naive to think otherwise.  Now, not all of the advances succeed, of course.  Steam cars didn't quite make it, nor did electric (but that's circling back around).  But now, more than ever, there's quite a lot of overlap and interaction possible with HA devices.
 
I'm absolutely not focused at all on an installer's perspective.  It might be fair to say some of them have been part of the problem.
 
In a system with both polling and asynchronous triggers occurring isn't there always going to be the chance of two things occurring in the same time and space regardless of the processing procedures?
 
mikefamig said:
In a system with both polling and asynchronous triggers occurring isn't there always going to be the chance of two things occurring in the same time and space regardless of the processing procedures?
 
Absolutely.  How likely and at what times, during what kind of activities, are factors to consider for work-arounds, if they're necessary at all.
 
wkearney99 said:
Absolutely.  How likely and at what times, during what kind of activities, are factors to consider for work-arounds, if they're necessary at all.
 
The more devices and triggers that exist in the system the more likely it is that two things trigger at the same time. I would say that it could be very likely in a large system and in some cases must be planned for.
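To put rough numbers on that intuition, here's a back-of-the-envelope, birthday-problem-style estimate. The event rates and window size are made up for illustration; the point is just how fast the collision chance grows with the number of triggers.

```python
import math

# Rough approximation: with n independent triggers per hour and a
# processing window of w seconds, estimate the chance that at least two
# fire inside the same window. (Poisson/birthday-style approximation;
# all numbers here are invented for illustration.)


def collision_probability(events_per_hour, window_seconds):
    n = events_per_hour
    slots = 3600.0 / window_seconds          # windows per hour
    # P(no collision) ~ exp(-n*(n-1) / (2*slots))
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * slots))


print(round(collision_probability(10, 1.0), 3))    # 0.012 - small system
print(round(collision_probability(100, 1.0), 3))   # 0.747 - large system
```

So a tenfold increase in triggers takes the per-hour collision chance from about 1% to roughly 75%, which supports the point that large systems must plan for it.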
 
My simple point is that two heads are better than one. If someone throws two balls at you it is better to have two hands to catch them with. It's just where my mind went after reading your comments and something that I wanted to add to the conversation.
 
Mike.
 
wkearney99 said:
I'm absolutely not focused at all on an installer's perspective.  It might be fair to say some of them have been part of the problem.
I think that the installer's perspective is very relevant to this conversation. In an automated home the house is just a big container for the system and all of its components. The Elk panel or the HAI panel or the UPB components are all just pieces of a large system, and the installer is the design engineer of that system. There is no system until the parts list is developed and the installation complete.
 
EDIT
 
The topic of conversation is "fastest rule processing for automation," and that process will be different in each installation, depending on everything that is under the roof, which will be determined by the installer. I don't mean to get off topic, but I don't think there is, or can be, any such thing as the one fastest process.
 
Mike.
 
Here I push Homeseer software to its limits.  Basically the Homeseer software core is referred to as the event engine.  I am a bit analog, as personally I feel every layer of software that exists between the hardware pieces will slow things down.  IE: Homeseer here is connected to one USB hub via one cable.  The USB hub talks to 5 USB devices and 16 serial devices (21 hardware devices total).  Among those 21 hardware devices are automation controllers which talk X10, Z-Wave, Insteon, and UPB.  There is much more.  All do come into play together, with defined variables and schedules driven by the event engine.  I am into timing, so I use debug logging, visual and speech events to watch.  While old now, I started doing a hub-and-spoke thing with it years ago, adding irrigation remotely, weather remotely, satellite mapping remotely, and it worked for me.
 
Using Homeseer 3 now... baby steps, I am doing a sort of hub-and-spoke thing using the network.  IE: the RPi2 with a Z-Wave GPIO card can run and utilize the network to remotely connect to the mothership.  I have now, though, moved a dependency for Z-Wave to the in-home network.  It is another layer here which can slow down the event processing and is less resilient than a direct connect.  The Alexa and Kinect (and Amazon Echo device) functions run on a quad-core Baytrail computer running Windows 10 and utilize the transport (network) to communicate to HS3.  Irrigation software is running on a Seagate Dockstar with direct analog connections and virtual connects to the mothership.  The dependencies on the network can and do cause a slowdown in transactional processing, much of it related to the mothership software multiprocessing multiple channels (threads) of information going through the network transport.  I can move this a bit closer to the mothership by running these two devices on a subset Windows 10 VB on the Ubuntu server if I want.  (Personally here I always did like the original Motorola 68XX CPU over the original Intel 8086 type of architecture; but that is me.)
 
My wife really does like the older Omnitouch serially connected touchscreens, which use a simple 200 MHz CPU, over my customized (too many buttons) Homeseer touch screens today.
 
In between here we have the RPi2 and the Almond Plus devices.  The Almond Plus device, simple as it is, functions as a router, firewall, access point, touchscreen, and Zigbee and Z-Wave hub in a tiny package based on an ARM CPU.
 
On the other end here I am playing with a micro router that runs at 400 MHz with a very simple OS called OpenWRT, which is based on simple communications using Lua.  It has an RTC, two network ports, wireless transport and a few GPIO pins.  There is not much play room, and so far the timing of whatever I do with it is quick.  Even smaller now is an Arduino Nano, which provides a very simple subset automation function.
 
Years ago I would have to justify and test software on enterprise global networks.  This involved watching transactions occur in software (rule processing), network transport timing and what the client saw.  I would see issues in every piece of the transport and software which always slowed down rule processing.  That was the way it was.
 
Much simpler and very functional for me is the OmniPro 2, as its rule set and event engine, while primitive, do the job in a timely manner.  With a software piece to it the OPII is a controller on steroids, dependent only on a serial or network connection for functions.
 
Relating to fastest rule processing for automation: the OS, kernel, and software connectivity to whatever piece of hardware is utilized for automation really do slow things down, and that is the way it is.  The more layers, the slower it is.  Pushing the transactions and event processing (including TTS and VR) to the cloud is cheap and functional.  That said, software does let you play with everything related to automation.
 
Back in the 80s and 90s I ran an X10 HA system with one of the fastest processors out there, a 40 MHz AMD 486 with an 8-bit data bus, and about 40 X10 devices. The control program HC-2000 had 1-second triggers that I used for many timekeeping routines, in order to tell if MS units detected motion repetitively, or which way the person traveled, at a better time resolution than one-minute intervals.
 
Now fast forward to present day with our CPUs. Even the cheap old CPU controller chip in the ISY994, at its modest single-core 0.6-0.8 GHz (not megahertz anymore), probably with at least a 16-bit, possibly 32-bit, data bus, has no problem handling the most complex processing an HA controller needs without bogging down. There are a lot faster CPUs being used for HA out there than that.
 
Again, it's not the CPU processing that creates delays, it's the protocol comms.  Having two single directional channels on different frequencies for each direction of traffic could help eliminate some comm confusion by reducing a few overheads.
 
For a well designed software based system like ours, two things happening at once aren't really much of an issue. Everything is handled in a multi-threaded way. Communications are interrupt driven at the system level. You have one thread that monitors the communications channel and handles send/receive as quickly as it can be done. When it gets stuff to deal with, it hands it off to other worker threads and goes back immediately to serving the comm channel. That's a fairly standard architecture for server based systems these days. It maximizes throughput and minimizes CPU usage at the same time.
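A minimal sketch of that hand-off pattern, using Python's standard `queue` and `threading` modules as stand-ins for the real comm channel and worker pool (not any vendor's actual code):

```python
import queue
import threading

# One thread owns the comm channel and does nothing but read messages
# and hand them to a queue; a pool of worker threads pulls from the
# queue and does the slow processing.

work_q = queue.Queue()
results = []
results_lock = threading.Lock()


def comm_thread(incoming):
    # Stand-in for the thread servicing the real comm channel.
    for msg in incoming:
        work_q.put(msg)      # hand off immediately, then go back to reading
    work_q.put(None)         # sentinel: no more traffic


def worker():
    while True:
        msg = work_q.get()
        if msg is None:
            work_q.put(None)   # let the other workers see the sentinel too
            break
        with results_lock:
            results.append(msg.upper())   # pretend this is the slow work


workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
comm_thread(["status", "ack", "motion"])
for w in workers:
    w.join()
print(sorted(results))   # ['ACK', 'MOTION', 'STATUS']
```

The comm thread never blocks on the slow work, which is what keeps the channel serviced no matter how many things happen at once.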
 
It's a little different for a device driver (the above is more the structure of a server), but not by a lot. If the device handles asyncs, the driver just always listens for incoming msgs. When it has to send an outgoing command, it also processes asyncs while listening for the reply. So they get handled as soon as they happen. They don't have to wait for the current exchange to complete or anything.
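A toy sketch of that driver loop (the names and message shapes are invented): while waiting for a command reply, anything that isn't the reply gets dispatched as an async immediately rather than being queued behind the exchange.

```python
# Sketch of the driver pattern above (purely illustrative): the read
# loop keeps running while a command exchange is in flight, and any
# async notification seen along the way is handled right then.


def wait_for_reply(read_message, expected_reply, on_async, max_reads=100):
    """Read until the expected reply arrives, handling asyncs on the way."""
    for _ in range(max_reads):
        msg = read_message()
        if msg is None:
            continue
        if msg["type"] == expected_reply:
            return msg            # the reply to our outgoing command
        on_async(msg)             # async report: dispatch it immediately
    raise TimeoutError("no reply")


# Simulated traffic: two asyncs arrive before the actual reply.
traffic = iter([
    {"type": "async", "unit": 5, "state": "on"},
    {"type": "async", "unit": 7, "state": "off"},
    {"type": "cmd_reply", "status": "ok"},
])
handled = []
reply = wait_for_reply(lambda: next(traffic, None), "cmd_reply", handled.append)
print(reply["status"], len(handled))   # ok 2
```

Both asyncs were handled before the exchange even finished, which is the whole point.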
 
@Dean,
 
Personally that is why I tinker with new automation stuff running in Wintel or Linux.  You can do anything with any type of controller.  The tiny Intel quad core Baytrail with 2Gb of memory does OK for little bitty automation but you can push it to its limits fast with Windows 10 on it.
 
Pushing the RPi2 to more than 2-3 controllers / many devices and multitasking does take it to the brink very quickly with its limited CPU and RAM. 
 
Same as the embedded devices known as the ISY994, Almond +, Vera edge et al. (or even the multitude of hubs out there today).  Same as the OPII.  It is limited by its CPU / firmware.  What it does though it does well and fast.
 
I am impressed with what these embedded in firmware automation devices can do; but they are very limited.
 
I am not dinging the Ubuntu 14.04 box  / Haswell CPU / 16Gb of memory and 32/64 bit Wintel OS's for automation concurrent OS play here.  IE: Ubuntu and Wintel are talking to different hardware with direct connects or indirect network connects and all plays just fine today.  (multitasking to the nth degree). 
 
I play much with Homeseer touchscreen designer.  I am not really good at the flow and colors of what it is I put on the screens.  I do run one liner scripts with the buttons.   I run HSDesigner on a virtual Wintel server and can get to it today from any wintel / ubuntu device on the network.  I just made one change to one variable for one screen on up to 15 HSTouch devices and can do that right now in less than 5 minutes.  I cannot get my wife to use the screens as she prefers the simplicity of the Omnitouch screens today.  She is afraid of breaking the house (those demon seed thoughts).
 
I have played much with Android OS and touch stuff.  I had to root the Android OS and cook much of it to get it to where I wanted it to be.  That said even today much of the base Android OS is very mobile like; well even the tablet stuff is.  I don't have to do anything to the Wintel OS touchscreen stuff as it works right out of the box. 
 
I use VLC today via one liners embedded in buttons to stream radio channels and HD live TV or HD CCTV stuff and it works fine for me.  Here is an example of an audio radio button.  You never see it running at all.
 
For video it is just a defined box embedded in the HStouch client application.  Microsoft SAPI with TTS/VR runs fine concurrent with the touchscreen application.  Going to a faster touchscreen CPU I can now run Kinect, Alexa and have a nice video interface.  I would like to see an AI character though that lives in the automation server.
 
vlc2.exe" --intf dummy-quiet hxxp://www.bbc.co.uk/worldservice/meta/live/mp3/eneuk.pls
 
Earlier I did mention how the base kernel, OS, and software add a slowdown to rule processing.  That though is relative to what is considered slow and how the software is written.  IE: programming and testing the Arduino Nano took less than 30 seconds running the Arduino software and then talking to the device via a serial port on a Wintel PC.  I have done the same, except for programming the Arduino Nano on the microrouter using minicom.
 
Oh there's a lot to be said for a host operating environment that allows for a lot of options.  I'm tempted to say "and it's not good" yet that would be exaggerating a bit.  But a lot of stuff has certainly got a lot of cruft piled onto it.  What with dead-end drivers or only last gen support.  
 
It's one thing to have a flexible framework, it's another to have lashed together spaghetti.  We've all suffered through enough of that.  Not pointing any fingers here, metaphorically or otherwise.
 
There's a lot of fresh momentum and development building behind IoT concepts and interaction with networked resources as a fundamental part of the process.  Not just an endpoint orphaned at the other end of an RS-232 line (or emulated on ethernet) without any real integration within the framework.  Simple things like reflection, JSON, REST and the like go a long way toward making for new kinds of automation.  Lots of things have always "been possible" but often failed to meet market expectations.
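As a small illustration of why JSON/REST-style endpoints lower the integration bar (the device name and payload here are invented, not any real device's API):

```python
import json

# A networked device that speaks JSON is trivial to integrate compared
# with parsing a proprietary byte stream off an RS-232 line. This
# payload is a made-up example of what such a device might return.

payload = json.loads("""
{
  "device": "hallway-dimmer",
  "state": {"power": "on", "level": 72},
  "reachable": true
}
""")

# A rule engine can act on named fields directly, no byte-framing,
# checksums, or vendor-specific decoding needed.
if payload["reachable"] and payload["state"]["power"] == "on":
    print(payload["device"], "at", payload["state"]["level"], "%")
```

Self-describing fields mean the integration work shifts from decoding the wire format to writing the actual automation logic.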
 
I am not knocking the new methodologies and development relating to the use of automation in the cloud.
 
My spaghetti concoction, evolved from 1990's automation, does give me a choice today of whether or not I want to utilize external resources, and I do, and it works for me.  IE: I can run Alexa, ask a question and then shut it off whenever I please today using the Kinect device and a silent but effective hand gesture if I want, versus VR.  (Guessing this is on demand.)
 
I have 3 Almond Plus ARM based little touchscreen hubs here.  These devices are a combination touchscreen, automation hub, router, firewall, access point and switch.  I did enjoy watching (on my phone) the automation processing on the Amazon host happen as fast as the little touchscreen.  It is thrilling to watch, but I am now bored, as it didn't do enough for me relating to my automation tinkering.
 
The new is promoting a return to some very simple and fast (not fastest) rule processing, except for one dependency which is not soup yet, mostly assumed.
 
Most of the problems I see, and I think the basis of most of the arguments here, are incomplete/unreliable protocols. Reliability in engineering/protocol design has well known solutions, but most of the time these are not employed. This is most likely because they feel they must make tradeoffs due to physical constraints of environments like powerline, but I suspect some of it has to do with the foolish idea that they have to re-invent the wheel to ensure they can patent it and try and corner the market.

Reliability standards in protocols took decades to develop and can be very complex. Reliability in engineering is a whole discipline unto itself. Developing a new protocol that meets the reliability of standardized common protocols is cost prohibitive, time consuming and beyond the skill set of most developers. If they want to bring a product to market in a timely/cost effective manner they should just use one of the numerous standards based protocols and drop the foolish requirement of needing to corner the market with patentable proprietary protocols.
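For anyone unfamiliar, the most basic of those well-known reliability mechanisms is a stop-and-wait ACK/retry loop with sequence numbers. A toy Python sketch, purely illustrative and not modeled on any real HA protocol:

```python
# Toy stop-and-wait reliability: send a sequenced frame, wait for the
# matching ack, retransmit on silence. All names here are invented.


def send_reliable(send, recv_ack, seq, payload, max_retries=3):
    """Send payload with sequence number seq, retrying until acked."""
    for attempt in range(1, max_retries + 1):
        send({"seq": seq, "payload": payload})
        ack = recv_ack()
        if ack is not None and ack.get("ack") == seq:
            return attempt        # delivered on this attempt
    raise ConnectionError("gave up after %d tries" % max_retries)


# Simulate a lossy link: the first ack is lost, the second gets through.
sent = []
acks = iter([None, {"ack": 1}])
attempts = send_reliable(sent.append, lambda: next(acks, None), 1, "light on")
print(attempts, len(sent))   # 2 2
```

Decades of standardized protocols already encode exactly this kind of machinery (plus flow control, windowing, and so on), which is the argument for reusing them instead of reinventing them.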
 
+1 on wheel re-invention and attempts to lock in the market.  Everyone wants to make a buck; can't argue against that.  But in doing so they bite off quite a lot more than they can chew and often choke on their failure.  I mean, I get it.  Investors and fanatical founders want to believe they can 'own it all'.  None of them have succeeded, and the market has arguably suffered for their hubris.
 
But at the same time, as you rightly point out, reliability and complexity are non-trivial problems that aren't always obvious.  And very hard to work back into an effort if they weren't factored in at the start.  Buckets of lipstick won't dress up a pig.
 