Fastest rule-processing for automation?

wkearney99 said:
Oh there's a lot to be said for a host operating environment that allows for a lot of options.  I'm tempted to say "and it's not good", yet that would be exaggerating a bit.  But a lot of stuff has certainly got a lot of cruft piled onto it, what with dead-end drivers or only last-gen support.
 
It's one thing to have a flexible framework, it's another to have lashed together spaghetti.  We've all suffered through enough of that.  Not pointing any fingers here, metaphorically or otherwise.
 
There's a lot of fresh momentum and development building behind IoT concepts and interaction with networked resources as a fundamental part of the process.  Not just an endpoint orphaned at the other end of an RS-232 line (or emulated on Ethernet) without any real integration within the framework.  Simple things like reflection, JSON, REST and the like go a long way toward making for new kinds of automation.  Lots of things have always "been possible" but often failed to meet market expectations.
 
IMO, you have it completely backwards. IoT is not at all about tight integration; it's about islands of stuff, created by people who are only thinking about their own little universes, that something may or may not be able to bring together reliably, most of which is far more oriented toward control through their own standalone phone app.
 
A device with a well-designed serial protocol is infinitely more about integration than most of the IoT stuff. REST is not really a good protocol for device control. It's inherently defined as a client/server (i.e. polling-only) protocol, so adding async support means doing things outside the definition of the standard, which makes it non-standard by nature.
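To put that in concrete terms, REST-style device access boils down to a loop like this rough sketch (the device URL and JSON field names are made up for illustration):

```python
# Rough sketch of REST-style device access: the client has to keep asking.
# The device URL and JSON field names here are hypothetical.
import time
import requests

DEVICE_URL = "http://192.168.1.50/api/status"    # made-up endpoint

last_state = None
while True:
    resp = requests.get(DEVICE_URL, timeout=2)   # client -> server, every time
    state = resp.json().get("power")             # e.g. "on" / "off"
    if state != last_state:                      # we only notice changes when we ask
        print("state changed to", state)
        last_state = state
    time.sleep(1)                                # poll interval = floor on staleness
```

Anything event-driven (webhooks, long polling, SSE, websockets) has to be bolted on top of that, which is the "outside the standard" part.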
 
Buckets of lipstick won't dress up a pig.
 
+1
 
Many folks cannot walk and chew gum at the same time....that is the nature of the human bean...
 
IoT and REST are often about being able to make significantly lighter-weight interactions: being able to use reflection to query for results rather than depending on a much more rigid configuration.  Yes, there are plenty of downsides to either approach, but stepping up to being able to use JSON, XML, HTTP and node.js types of interaction opens up a wealth of connectivity options.  It raises the bar on what the devices have to be able to do if they're going to provide for network queries.  Once upon a time that was a pretty tall order; now it's pretty trivial to implement at much lower cost than ever before.
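As a rough sketch of what I mean by reflection (the /api/describe endpoint and the JSON shape are invented here; real devices vary wildly), one GET is enough for a controller to build its own picture of the device instead of shipping a rigid per-model config:

```python
# Sketch of the "reflection" idea: ask the device to describe itself rather
# than hard-coding its capabilities. The endpoint and JSON shape are invented.
import requests

desc = requests.get("http://192.168.1.50/api/describe", timeout=2).json()

print("Device:", desc.get("model", "unknown"))
for cap in desc.get("capabilities", []):
    # e.g. {"name": "volume", "type": "int", "min": 0, "max": 98}
    print("  can control:", cap["name"], "type:", cap.get("type", "?"))
```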
 
As for being polling-only: no.  Interfaces like REST can certainly be used for polling, but there's nothing about them that would prevent more being done using the same techniques, without having to rise to the level of much heavier-weight protocols (WSDL, etc.).
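For example, long polling gets you async-ish behavior with nothing heavier than the same HTTP stack.  A rough sketch, assuming a hypothetical /api/events endpoint that holds the request open until something actually changes:

```python
# Sketch of getting async-ish behavior out of plain HTTP via long polling.
# The /api/events endpoint is hypothetical: the device holds the request open
# and only answers when something changes (or the timeout hits).
import requests

while True:
    try:
        resp = requests.get("http://192.168.1.50/api/events", timeout=60)
        for event in resp.json():        # e.g. [{"field": "power", "value": "on"}]
            print("async update:", event)
    except requests.exceptions.Timeout:
        continue                         # nothing happened; just re-arm the request
```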
 
Yes, of course, some degree of configuration is going to be beneficial.  Having everything be dynamic can be a huge waste of resources, but there's no reason a middle ground can't be sought, especially if the elements involved understand from the start how to interact on a dynamic basis.  That way, alterations to the system have the potential to be less tedious to manage over time.
 
It has to be said that many of the protocols in devices, though bad, are not bad due to a desire to own the world. It's more likely some combination of:
 
1. Lack of money, so they give the job to some Jr. engineer who hasn't ever done it before
2. They add it at the end, where it's too late to make changes that would make the protocol better, so the protocol has to work around the internal limitations of the device. 
3. There aren't many really standardized, good, full-featured protocols, and those that do exist are often too complicated to be worth implementing in a simple device. It's harder for the manufacturer, and it'll limit the number of other vendors who will support it.
4. There are way too many factional 'standards' that folks sometimes throw together, and then if you want to support the device you figure out you have to somehow support an alphabet soup of standards, all of which have thick RFCs you have to read to understand them.
 
#1 and #2 are clearly the most common, and have nothing to do with what technologies are used, or really even the unbearable lightness of writing protocols.
 
At the other extreme, it's also becoming somewhat more common these days that I read a spec and it's like: this device uses the OOA3, MAPF43, XiiAF, QICAT, PuAW-88, and Goober protocols. And all the freaking thing really needs is a fairly simple protocol, which I wouldn't mind in the slightest if it was home grown. It would be easier for me and just as reliable, given that the device doesn't require anything that complicated. You look around for some example code, and quite likely none of them really implement any of those protocols; they just throw in the least bit of ad hoc code possible in order to talk to the device, because no one wants to implement all that stuff.
 
wkearney99 said:
IoT and REST are often about being able to make significantly lighter-weight interactions: being able to use reflection to query for results rather than depending on a much more rigid configuration.  Yes, there are plenty of downsides to either approach, but stepping up to being able to use JSON, XML, HTTP and node.js types of interaction opens up a wealth of connectivity options.  It raises the bar on what the devices have to be able to do if they're going to provide for network queries.  Once upon a time that was a pretty tall order; now it's pretty trivial to implement at much lower cost than ever before.
 
Lots of non-IoT devices have used JSON and XML for a while. That's nothing new, and nothing that hasn't been understood for some time. OTOH, for a simple device, XML is fairly heavyweight. What people often call XML isn't really XML; it's just text in XML form, and they don't really implement actual XML, which is a pretty heavy standard. I've implemented two XML parsers, so I know how much work, code bulk and processing overhead is involved. They are really just using a simple text format that happens to be parsable by XML parsers.
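To illustrate the "text in XML form" point, this is roughly the level most of those device protocols actually operate at (the message and tag names are invented), versus everything a conformant XML parser has to cope with:

```python
# What a lot of "XML" device protocols amount to in practice: a fixed, flat
# message you can pick apart with trivial string handling. The message and tag
# names are invented. A conformant XML parser (encodings, entities, namespaces,
# CDATA, DTDs...) is an order of magnitude more code than this.
msg = "<status><power>on</power><volume>42</volume></status>"

def field(doc: str, tag: str) -> str:
    start = doc.index(f"<{tag}>") + len(tag) + 2
    end = doc.index(f"</{tag}>")
    return doc[start:end]

print(field(msg, "power"))    # -> on
print(field(msg, "volume"))   # -> 42
```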
 
But anyway, don't confuse the envelope and the content. This is a never-ending source of confusion for folks: syntax vs. semantics. There are many types of envelopes you can deliver the letter in. What's important is the content of the letter. No envelope makes the letter better. There's no substantial benefit of HTTP over a simple text based format really. In fact, HTTP introduces complexities that aren't there in the simple text based format, just as IP based devices introduce lots of complexities that aren't there in serial devices.
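As a concrete illustration (the command syntax and URL are invented), here is the same "letter" in two envelopes; the content is identical, HTTP just wraps more framing around it:

```python
# Envelope vs. content: the same "letter" (a power-on command) in two
# envelopes. The command syntax and URL are invented for illustration.

# Over a serial port or raw socket it's just a line of text:
serial_form = b"PWR ON\r\n"

# Over HTTP the content is the same; there's just more framing around it:
http_form = (
    b"POST /api/command HTTP/1.1\r\n"
    b"Host: 192.168.1.50\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: 15\r\n"
    b"\r\n"
    b'{"power": "on"}'
)
```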
 
Dean Roddey said:
There's no substantial benefit of HTTP over a simple text based format really. In fact, HTTP introduces complexities that aren't there in the simple text based format, just as IP based devices introduce lots of complexities that aren't there in serial devices.
 
The biggest benefit is disconnecting from direct serial connections.  Greater versatility in item placement, more ubiquitous connectivity options (which serial cable?  what baud rate?  tip/ring or ring/tip for 3.5mm sockets?  etc.), as opposed to "get the device onto the IP network and use these HTTP calls".   It's indeed not without added complications.  Serial is nice and simple in that it's one connection only, with no conflicts between multiple clients (e.g. Denon AVRs have a pretty crappy HTTP stack).
 
Envelopes that make the content of the letter ACCESSIBLE in more versatile ways certainly DO stand to make USE of the letter BETTER.   Just about anything with a network interface that can run code is capable, out of the box, of making an HTTP call and processing the results.  The same is not true of serial protocols.  If both devices can make an IP connection on their own, with no dodgy/inconsistent converters, that's a definite WIN for 'accessing the letter', to follow your analogy.
 
In a world of real automation, with a real automation controller, all devices, no matter how they are connected, are available via any other means, since the automation controller becomes the translator. The IoT world wants IP-based stuff because none of them are about integration; they are about accessing things individually via separate apps on phones. That's not integration, that's remote controls on the phone. Integration requires actual integration, which you get with real automation controllers. It allows you to have the simpler local connections and still access the devices via any sorts of devices you want to use. So it's better all around, and it means that the device vendor can use a connection mechanism that suits its budget and requirements.
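In code terms, the translator role is just a driver layer.  A minimal sketch, with invented class names, method names and protocol strings, of what a rule sees regardless of whether the box behind it is serial or IP:

```python
# Sketch of the "controller as translator" idea: each device gets a driver that
# hides how it is physically connected, and rules only ever see the common
# interface. Class names, method names and protocol strings are all invented.
from abc import ABC, abstractmethod

import requests

class DeviceDriver(ABC):
    @abstractmethod
    def set_power(self, on: bool) -> None: ...

class SerialAvrDriver(DeviceDriver):
    def __init__(self, port):                    # e.g. a pyserial Serial object
        self.port = port
    def set_power(self, on: bool) -> None:
        self.port.write(b"PWRON\r" if on else b"PWROFF\r")

class HttpThermostatDriver(DeviceDriver):
    def __init__(self, base_url: str):
        self.base_url = base_url
    def set_power(self, on: bool) -> None:
        requests.post(f"{self.base_url}/power", json={"on": on}, timeout=2)

def all_off(drivers):
    """A rule doesn't know or care what's behind each driver."""
    for d in drivers:
        d.set_power(False)
```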
 
And of course much of the reason IoT devices have to be IP based is that many of them want to be cloud based. They don't want to sell a device, and may not even be that interested in selling a device. They want to sell customers to other companies, and that requires that they have a cloud based system. They can sell the device cheap and sell the information to make their real money. And, even if they know that's not likely to actually happen because it's such a fragmented world and they'll never get that many customers, it makes it more likely they'll get bought by a larger company than if they just sold an automation device. 
 
Call me cynical, but I think that's not an uncommon strategy these days.
 
"Real automation", ah, no posturing language there, is there?  
 
Requirements vary, and clearly "real automation" has not gained a lot of mass-market uptake.  Meanwhile, "remote controls on the phone" are selling in quite considerable numbers.  How profitable one is vs. the other is certainly up for debate, as are the motivations of the interested parties.  Yet more cheap devices certainly help build momentum.  Sure, there's a lot "wrong" with the approach, especially when it's seen from an unfamiliar perspective.  But that doesn't mean it's of no value or that the market shouldn't entertain following it.  Times change, products evolve and sometimes progress gets made.  Even in the face of cynicism, fatigue or petty protectionism, squabbling and simple jealousy.
 
Being a latecomer to this discussion I may not be answering the right question, but the "fastest rule processing" in any automation will be based on some kind of ASIC, not on high-level software, and certainly not on cloud-centric protocols. Most sensors and devices are built with a minimum of integrated circuit logic to perform the basic device-specific rule processing. The lowest-latency way to integrate such devices is a direct wire connection to another ASIC, like HAI or Elk for example. The less translation you need to do, the better the performance. Speed, flexibility and cost are the usual decision triangle. Since HAI firmware is not as flexible as software-based controllers, one can outsource less latency-critical functions to a higher-level controller such as CQC. The only way I see this changing is with the introduction of inexpensive IP-based FPGAs that could be installed in both the client device and the server integrator (analogous to the dreaded serial ports), with well-defined and simple message logic. It is happening already, with the $5 Raspberry Pi, but we are still not there yet. Being inexpensive is the key to wide adoption.
 
I think ultimately the question isn't 'fastest' literally. It's more what's fast enough that humans don't get bothered by the delays. Once you get beyond that point, it's sort of time and money spent without any real payoff, at least in the context of this discussion I think.
 
wkearney99 said:
"Real automation", ah, no posturing language there, is there?  
 
Not really. Automation almost by definition is mostly about integration. Without that, there's not really a lot of payoff, at least not enough to get people to spend real money. A world of 'app per device' isn't integrated. That's a world of minimal investment, low margins, minimal barriers to entry for ever lower-cost vendors, and so on.
 
In that world, the only ones who likely will make out are the ones who are:
 
1. Going the 'sell your customers' route
2. Some who do some of it purely because it's a requirement for them to have the right check boxes, but they never make any actual money off of it. They just do it to sell something else.
 
None of that is going to actually get anywhere near the 'smart home' that everyone talks about wrt the IoT world. A smart home is a home full of highly integrated systems. And if the smart home isn't the goal, then there's an awful lot of talk about nothing out there, because it's a constantly used buzzword.
 
Dean Roddey said:
I think ultimately the question isn't 'fastest' literally. It's more what's fast enough that humans don't get bothered by the delays. 
 
Exactly.  And as yet there hasn't been much discussion as to which systems and approaches are fast enough not to be annoying.  Hence starting the thread.
 
Of course another thing that many folks don't consider these days is that, if you want really fast back-end automation response times, you probably don't want to be running Plex on the same machine, doing on-the-fly transcoding of three streams of HD content. There's a trend these days to buy one big machine and run a bunch of virtual machines on it, some of which may be doing pretty heavy-duty stuff. Even if they aren't completely chewing up all of the CPU, there are other system capacity issues involved, such as I/O, that might start slowing things down.
 
For me personally, I'd prefer to have the automation system itself on its own dedicated machine. It doesn't have to be a particularly powerful one if it's dedicated, and media processing can stay on its own box.
 
Just to continue this thread as it is entertaining... faster to go for the meat of the ultimate automation console (your brain): skip the audio VR, gesturing, keyboard or touch... the human brain is still much faster and simpler than a similarly sized computer today.
 
Per an earlier posting, it is now known how to genetically grow inert electrical devices in a plant.  It is simple right now.  Imagine creating synthetic, computerized, remote-controlled mitochondria, or replacing / replicating Purkinje fibers in vivo with synthetics and optionally remote controlling them.
 
A little bit of this and that computer magic and you can jump start stem cells to make a duplicate pancreas that you can keep and remote control as a hot spare should yours be failing.
 
We are really much closer today than in June of 2012 to this stuff.
 
Implanting your mobile phone under your skin
Thursday 28 Jun 2012 1:47 pm
 
While it may sound like something out of Iron Man, implanting your smartphone under your skin may become as common as having laser eye surgery in the future, according to a leading scientist.   Moving on from the likes of pacemakers and stent implants, research by Autodesk, a California-based software company, has turned its attention to how traditional user interfaces could also work in the human body.  Giving hope to the idea of an unforgettable mobile phone, researchers embedded touch sensors, LEDs, speakers and vibration motors under the skin of a cadaver’s arm.
 
‘Our work explores the future possibilities of implanting interactive components underneath the skin, which would enable people to directly interact with their implants,’ said Christian Holz, who worked alongside Tovi Grossman and George Fitzmaurice at Autodesk. ‘We discovered that traditional interactive components can work through skin and the metrics collected from this study can inform the future design of interactive implants.’ In collaboration with Professor Anne Agur at the Department of Anatomy at the University of Toronto, the study used artificial skin to attach small implants to participants’ arms to see how they felt walking around with interactive devices. It also tested the effects of skin on traditional controls, such as buttons and microphones, the quality of light, sound and vibration, and communicated with and even charged devices through Bluetooth connections.
 
While implanting devices under the skin is certainly nothing new – more than three million people already have pacemakers – this technology could see those who have medical implants interact with them directly, eliminating the need for regular check-ups and even surgery.  ‘It would be impossible to predict what the future holds, but one immediate application could be to allow them to directly interact with and monitor their devices,’ said Mr Holz. ‘For example, notify the user of a low battery by a slight tingling underneath the skin or recharging the battery through skin, thereby avoiding the need for surgery.’  He added: ‘Implanted devices, along with the information they store, always travel with the user. There is no need for the user to manually attach them. The user can never forget or lose them either. This makes implanted devices available to them at all times.’
 
There are concerns, however, that the technology could be exploited, and that someone could hack into a person’s body.
 
 
In 2002, Kevin Warwick, professor of cybernetics at the University of Reading, was the first human to have the Utah Array/BrainGate implant inserted into his arm to link his nervous system to a computer. He said: ‘Once you’ve got implants into the nervous system and the brain, then the big issue is that potentially someone is going to hack into your nervous system and send signals you don’t want. ‘Having said that, we’ve seen with the Parkinson’s stimulators only a very small percentage of people have side effects, even when it is changing how your brain functions. There are certainly no reports that I’m aware of of people hacking into other people’s brains even though there are thousands of people with such implants.’ Mr Holz said: ‘It would be important to carefully consider the privacy and ethical issues. In particular, data transmission protocols would require careful consideration to ensure security. Storage of any identity information would require legal and ethical debate. We hope our work can serve as a starting point for such discussions.’
 
For Prof Warwick, the benefits far outweigh the risks, with thought communication and networking some of the potential uses in the future.‘I think it will enable a lot of people, particularly people who are classified as disabled because they are paralysed,’ he said. ‘This is often just a break in the nervous system, which could be enabled through technology like this.
‘I don’t see any problems with enhancing the human brain by extending your nervous system across a network, so your body doesn’t have to be where your brain is. It can be wherever the network takes you. We don’t need people travelling to distant planets – you could simply think and control something there.’
 
As for implanting phones and other devices more directly into the nervous system, he said: ‘I think it would open up much faster, much more efficient means of communication and there would be enormous commercial opportunities. ‘Within a decade you could find that it is something that you can have very easily, and ethically people wouldn’t worry about it because it would be something that everyone has to have – a bit like laser eye surgery. Fifteen years ago if someone wanted to blast lasers into your eyes you would have said they were crazy, but now you have to have it.  ‘From a scientific point of view you never really know when this is going to happen. It is like a commercial tsunami – it just goes forward and completely changes the world.
 
pete_c said:
It is a win win for the cell phone makers and cellular transport providers and folks that sleep with their cell phones.
 
The embedded smartphone the size of your fingernail, managed by nerve impulses, is right around the corner; meanwhile you can sleep with your watch on.  (I used to do this many years ago.)  I like the idea of thought texting and hearing voices in my head that tell me the news of the day.
 
My cousin still sleeps with her iPhone but doesn't talk to Siri anymore with a preference to talking to Alexa.
 
Lots of things have screens without being phones, so there are plenty of ways to present interfaces without being stuck with proprietary solutions.  Personally I get the cell phone out of my possession as soon as possible when I get home.  I don't want to be attached to it.  Thankfully with stuff like Android I can mirror my apps across a number of devices, or even swap the logged-in persona on any of them.  So I don't have to run for it just to get text messages (not that I would anyway).  Our choice for 'around the house' tablets has been the 7" Nexus 7 2013, and they've really worked out nicely.

Gizmos like the Echo are certainly showing there's more than one way to interact with technology.  We've had one since last year and it was immediately accessible and usable by everyone in the house.  Slowly but surely more functionality is being added to it, both by Amazon and by 3rd parties.  One could say it's not doing anything "you couldn't do before", but that would be disingenuous.  Because before, you typically had to manually invoke some kind of recognizer.  Or the mic wasn't very effective for anything other than being right in front of it.  Or, worse still, the device had no dev path and no automation plans (e.g. Kinect on an Xbox console, so much lost potential...).  Still, being cloud connected doesn't make it ideal for all situations or customers.
 
What things like the Echo and IFTTT demonstrate is that there are a lot of ways people want to 'stitch together' many different things.  These devices having accessible network interfaces presents a lot of new ways for people to experiment.  There are lumps along the way, of course, but it's fun to see all of the new energy being put into them.
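For the curious, the stitching can be as small as this sketch: a tiny local endpoint that a cloud-side trigger (an IFTTT webhook, say) can poke, with the path and the trigger_scene() helper as placeholders:

```python
# Minimal sketch of the "stitching" idea: a tiny local HTTP endpoint that a
# cloud-side trigger (an IFTTT webhook, an Echo skill, etc.) can poke to fire
# a local action. The path and trigger_scene() helper are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

def trigger_scene(name: str) -> None:
    print("would run local scene:", name)        # stand-in for the real action

class Hook(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path.startswith("/hook/goodnight"):
            trigger_scene("goodnight")
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8085), Hook).serve_forever()
```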
 
Here's an interesting take to consider: what will show more growth in the near term?  Cloud-oriented gizmos getting better at offline interaction, or old-school "automation" gaining more connectivity?  I'm guessing a lot more of the former and not as much of the latter.  But at some point it's going to require local equipment to be doing some portion of the integration, otherwise the speed is going to suck.  Which circles back to the point of the thread.
 