What is wrong with CQC?

I have a lot to say on this subject, but will keep this brief --
 
Keep in mind I am a HUGE fan of the IoT, primarily for the simple user experience and flexibility.
 
If I were to start today, and design a product from the ground up, I would use something like CQC's logic engine in the backend, and then pour the majority of my time into developing solid support for the array of new devices like Sonos, Nest, EcoBee, August, Caseta Smart Bridge, Hue, Harmony Hub, Tivo IP, Plex, and all of the major players that support IP control like Denon, Yamaha, Pioneer, Sony, LG, Sharp, etc.  Then I would worry about making it all pretty and adding new features - and while that happens, iOS and Android support for IVs needs to be native and FREE.  
 
The future isn't going to be us here on these boards.  The future is the now 25-year-old law grad playing with Sonos and Hue, who ends up at 35 with a wife and one kid, living in his 800 sq ft luxury condo in the middle of Manhattan.  That guy isn't going to run any wires anywhere, will want to use the cool toys, and will use some simple product to integrate them all together.  The best example of this is Crestron Pyng, which has essentially been designed for this hypothetical 35-year-old lawyer.
 
I use that guy as an example because I started with HA when I was about 25, am now 35, with my little townhouse, wife and 1 kid, and absolutely no time to futz around and make things work.  You can watch me rant about this over on the CQC boards while I waste what little free time I have fighting with Z-Wave.  :)
 
jkmonroe said:
I use that guy as an example because I started with HA when I was about 25, am now 35, with my little townhouse, wife and 1 kid, and absolutely no time to futz around and make things work.
It seems to be:
< 35 = Lack of Money
35 to 55 = Lack of Time
> 55 = Enough Time and Money  :wacko:
 
David, have you made use of the auto-gen system? You'd get a lot of the RA2 functionality supported nicely with minimal effort using that, though not all yet.
 
jkmonroe said:
 
If I were to start today, and design a product from the ground up, I would use something like CQC's logic engine in the backend, and then pour the majority of my time into developing solid support for the array of new devices like Sonos, Nest, EcoBee, August, Caseta Smart Bridge, Hue, Harmony Hub, Tivo IP, Plex, and all of the major players that support IP control like Denon, Yamaha, Pioneer, Sony, LG, Sharp, etc.  Then I would worry about making it all pretty and adding new features - and while that happens, iOS and Android support for IVs needs to be native and FREE.  
 
I agree with almost everything except the part about the IV having to be free. The 3rd party iOS app is too slow, and the Android app doesn't display gradients correctly on my Android devices. The future for DIY home automation, in my opinion, is not going to run on Windows; it will run on small Linux hubs with a nice GUI on the iOS or Android device of your choice. When I go to my local Best Buy I don't see people buying desktop computers anymore; they are buying phones and tablets, and rarely a laptop. Home automation apps like Command Fusion can put an excellent GUI on top of a low-powered hub.
 
I haven't, Dean... I'm 42 and right in the middle of Ano's graph where no time is available. :)  But now that you have figured out the RadioRa2 connectivity bug, I will be more interested in playing with that, though frankly I would also want the thermostats to be added to the auto-gen.  Have the RadioRa2 thermostats been added to the auto-gen?  Thanks.
 
Yes, thermostat support was added to the auto-gen stuff in 4.6, so that's already there. It wouldn't take any time to set it up and have a nice set of user interfaces.
 
On the issue of what it's running on: it doesn't matter what it's running on. No one cares. If it's just a box you buy, they don't care what it runs on. With the pricing for Windows 10, there's not really much of a cost issue with Windows on a small box anymore either. It's a networked product, so it can run on a single small box with separate clients, or a single Windows box with a built-in GUI, or on a server with various types of clients.
 
There are already lots of hubs with multiple radios built in for less than $100, and that is the future. There is nothing magical about Windows 10 that is going to make everyone want to dump Linux and build on Windows when they already have many years of developing for Linux. CQC may be great as a Windows app, but you still have to buy a PC and then install it, and if you want to add support for Z-Wave or ZigBee or whatever you have to buy something else for your CQC box. The cheap hubs are ready to go.
 
I am not saying CQC is not a great product; it is just not something I see the average person wanting to set up. Going out and buying a bunch of automated lights like the Philips Hue or a Nest thermostat and controlling it with a phone or tablet is something I can see the average person doing.
 
I was assuming the option of a hub, or a simple single-machine system, or a full-blown networked automation system. CQC can support all of those things. We have some folks currently looking at the hub thing. Now that Windows 10 is going to be available on very small systems for a low price, it will make all of the difference. There are probably a lot more Windows developers out there, and vastly better development tools under Windows as well. So I imagine you will probably see more Windows-based small systems coming up. Though, in many cases, you may not even know that's what it is, since the OS isn't necessarily exposed.
 
Having a single product that can span all of that range will be a fairly powerful thing. If others end up not doing a CQC based hub, then we will look at it ourselves.
 
And we will definitely be looking at simpler packaging of limited-function versions of CQC as well.
 
Dean, aren't you going to face a massive rewrite (or at least a massive recompile) if you expect to run CQC on Win10 on a low-power ARM device? I think it will be a much easier task for the .NET apps, as I'm assuming MS will provide an appropriate .NET framework, but with your entire stack being in C++ it would seem that you're going to have a much harder time supporting a non-x86 architecture.
 
Just curious,
Terry
 
No, which is why I kept repeating over and over in this thread that I write as much code as possible myself, basically 98% or thereabouts. So we effectively have our own complete soup-to-nuts object framework, with the system stuff encapsulated at the very lowest level. So there's hardly any conditional code in the system other than in those few places where we are dealing with external file formats that have big/little endian issues, and those are also well encapsulated and, to the degree possible without actually trying it for real, those things have already been dealt with. We don't even expose the C++ runtime headers in our code, because we don't use any of that stuff. Everything is in terms of our own class interfaces.
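Just to make that concrete, here's roughly the kind of thing I mean (the class and method names here are made up for illustration, not the actual CQC ones.) The header exposes only our own types, and the only file that ever sees an OS header is the per-platform implementation behind it:

// TKrnlFile.hpp -- no OS or C++ runtime headers exposed here
class TKrnlFile
{
    public :
        TKrnlFile() : m_pHandle(nullptr) {}
        ~TKrnlFile();

        bool bOpen(const char* const pszPath);

    private :
        void* m_pHandle;   // Opaque; the real OS handle hides behind this
};

// TKrnlFile_Win32.cpp -- only this file knows it's Windows underneath
#include <windows.h>
#include "TKrnlFile.hpp"

TKrnlFile::~TKrnlFile()
{
    if (m_pHandle && (m_pHandle != INVALID_HANDLE_VALUE))
        ::CloseHandle(m_pHandle);
}

bool TKrnlFile::bOpen(const char* const pszPath)
{
    HANDLE hTmp = ::CreateFileA
    (
        pszPath, GENERIC_READ, FILE_SHARE_READ, nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr
    );
    if (hTmp == INVALID_HANDLE_VALUE)
        return false;
    m_pHandle = hTmp;
    return true;
}

Nothing above that layer ever includes windows.h, so a port means reimplementing the .cpp side for the new platform and the rest of the code never knows the difference.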
 
Windows is Windows, so the API is the same, and we use only the lowest level, simplest Windows APIs, because we do as much as possible ourselves. Those APIs are going to be the same, and they are fully encapsulated down at the virtual kernel level, where any tricks that need to be played can be done in secret and nothing else will be affected.
 
So basically it's going to be finding those places where I didn't quite get the big/little endian handling right, and there aren't even many candidates.
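To show the sort of thing I mean by "well encapsulated" (again, a rough sketch with invented names, not the actual code), the endian handling comes down to a couple of small helpers that the format-reading code calls, so nothing above them has to care what the local byte order is:

#include <cstdint>
#include <cstring>

namespace KrnlEndian
{
    // True if the machine we were compiled for stores the low byte first
    inline bool bLittleEndian()
    {
        const uint16_t u2Test = 0x0102;
        uint8_t u1First;
        std::memcpy(&u1First, &u2Test, 1);
        return (u1First == 0x02);
    }

    // Read a 32 bit value stored little endian in an external format,
    // swapping only if the local machine happens to be big endian
    inline uint32_t c4FromLittle(const uint8_t* const pBuf)
    {
        uint32_t c4Val;
        std::memcpy(&c4Val, pBuf, sizeof(c4Val));
        if (!bLittleEndian())
        {
            c4Val =   (c4Val >> 24)
                    | ((c4Val >> 8) & 0x0000FF00)
                    | ((c4Val << 8) & 0x00FF0000)
                    |  (c4Val << 24);
        }
        return c4Val;
    }
}

The places that read external formats go through helpers like that, and those are the only candidates that have to be checked when moving to a different architecture.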
 
It certainly will require a compile for each platform, but that's not a biggie. That's all nicely automated, and we have our own build tools that make it easy to support multiple targets, though it's been a long time since we have. Of course on these really small systems, the actual development will take place on Windows and the code will be downloaded to the device for debugging and deployment. I'm not sure if any of these small boards have emulators like Android or iOS do, which would be nice but not a requirement. Presumably the VC++ debugger just debugs remotely against the image running on the board.
 
We could even, given a good Linux person, simultaneously support Windows and Linux (on the back end) with basically just a compile per target platform (though a couple of things, like the WMA codec, would probably just end up being stubbed out on Linux and not do anything.) The virtual kernel layer is split into headers that provide the interface and per-platform directories for the actual implementation .cpp files of a given platform. The build tool is designed to support this multi-platform scheme. It hasn't been used in a long time, but there are in fact the remains of a Linux support layer from long ago. Each platform just has to implement the functionality of our virtual kernel interfaces, and since nothing else is exposed outside of that kernel, everything else is unaffected. As long as we deal with that small number of big/little endian places correctly in the higher level code, then we are good.
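Roughly speaking (directory and file names invented for the example, not the real tree), the layout being described looks something like this: one shared header per kernel class and one implementation file per platform, with the build tool selecting the right directory for the target.

// CQCKernel/TKrnlThread.hpp -- platform independent interface, shared by all targets
class TKrnlThread
{
    public :
        static void Sleep(const unsigned int c4Millis);
};

// CQCKernel/Win32/TKrnlThread_Win32.cpp
#include <windows.h>
#include "TKrnlThread.hpp"

void TKrnlThread::Sleep(const unsigned int c4Millis)
{
    ::Sleep(c4Millis);
}

// CQCKernel/Linux/TKrnlThread_Linux.cpp
#include <time.h>
#include "TKrnlThread.hpp"

void TKrnlThread::Sleep(const unsigned int c4Millis)
{
    timespec tsWait;
    tsWait.tv_sec  = c4Millis / 1000;
    tsWait.tv_nsec = long(c4Millis % 1000) * 1000000L;
    ::nanosleep(&tsWait, nullptr);
}

A Linux port is then a matter of filling in the Linux directory with implementations of those same interfaces; none of the code that sits on top of the kernel layer has to change.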
 
And CQC is actually very lightweight. If you look at it on even a small standard system, most of the time it uses almost no measurable CPU, a bit more if you are processing audio or maybe running a lot of drivers. But it's extremely asynchronous and event driven and uses few resources, particularly by modern standards.
 
Dean Roddey said:
...
I would also point out that, if you plopped that image above in front of someone who knew nothing about Premise, and possibly little about automation, he wouldn't know what in the world it was or have a clue how to do anything useful with it.
 
I agree. Which is why I found the video tutorials to be very helpful. Several WebEx presentations were recorded and posted on their website for prospective clients to download and review. Nowadays, posting to YouTube or Vimeo would be more advantageous.
 
The WebEx sessions were interesting because the audience members chimed in to ask questions, which made the presentations feel less scripted. The sessions introduced concepts using PowerPoint slides and then demonstrated the concepts in the product (then returned to the slides). It's the trusty old "Tell 'em, show 'em, tell 'em again".
 
In 2004, Damon Deen presented a 90 minute session called "System Topology and Controlling Devices". The final half-hour demonstrates Premise Builder and is what convinced me that this was what I wanted to use to automate my home. For the curious, I made a 35 minute "highlights reel" and posted it on Vimeo. By the end of the presentation, the product was much less of a mystery to me.
 
We have lots of video tutorials as well, but it seems to be the consensus here that that's way too much to expect from anyone and if you have to actually watch a video or read some documentation in order to understand it, the product is doomed.
 
Dean Roddey said:
We have lots of video tutorials as well, but it seems to be the consensus here that that's way too much to expect from anyone and if you have to actually watch a video or read some documentation in order to understand it, the product is doomed.
 
I don't think it is quite that simple; folks (at least some) do read docs and watch videos.
 
But there are some programs that just seem to be easy to learn and some that are hard. And while the nature of the task is part of the ease/difficulty, it is also very much the UI that can guide (or fail to guide) a person to the desired goal.
 
Dean Roddey said:
We have lots of video tutorials as well, but it seems to be the consensus here that that's way too much to expect from anyone and if you have to actually watch a video or read some documentation in order to understand it, the product is doomed.
I wish more documentation existed. What exists is sparse, and not always updated when the product is. 
 
I think videos are fine for an overview. If you are using a new product or a new part of CQC, a 15 minute video is fine. But please don't assume a video is a substitute for documentation.  It is not.  So many times I had a question about a function call or some other usage, and it was maddening to try to find the answer.  First you had to figure out the correct documentation. Was it the "Action Guide" or the "Language Reference" or the "Event Guide" or the "Driver Guide" or the "Interface Guide"?  Those names don't just jump out at me to explain what is what.  These things need better organization. Maybe make a master index that tells you which guide covers which calls.
 
So eventually I find the correct guide, but it's not easy to get my question answered. There aren't a whole lot of examples, and CQC syntax is certainly not like any language I have worked with.  For function calls, often each parameter is not explained well, and there was almost never example code for each call.
 
Yes, sometimes I did have to resort to watching the video again, but that was very painful, especially when I knew most of it and was only looking for the one key bit of information I needed to answer my question.  I may be impatient, but I just don't have the time to watch a 45 minute video to answer a simple question.
 
I could also post a question to the forum, and everyone is great, but it would still often take a day or two to get a response. Dean would sometimes try to answer, but often his answers were at his level and over my head, so it would sometimes take a few back-and-forths to fully get my question answered. Plus I sometimes didn't feel right posting, because my questions were never so complex that I should have needed to take them to the forum.
 
So in my opinion, documentation, or the lack thereof, is a BIG problem for CQC. Making it even worse is that, unlike other home automation programs that use VB or .NET or Java or some other common core technology, CQC is all unique, so it's not like you can go to Amazon and buy a book to fill in the missing pieces.  I can tell you it can be very frustrating.
 