Anyone interested in discussing building DIY Home Automation software and hardware?

....else it will be tough to make money as HA is still fairly niche, and likely to stay niche until it is easier for the masses and more functional (fragmented standards being a major drawback today).
 
Yes, I agree, and it is very easy even though we have fragmented standards... it is changing quickly (in dog years)
 
I'm new to CocoonTech, having introduced myself a couple of weeks back in the 'welcome' section and shared some of the work I have done to develop what I consider a unique home automation system. I didn't get a lot of interest there, so I'm trying this forum instead.
 
Welcome to CocoonTech, deandob.  Folks are friendly here.  Enjoy.  Your OP has stirred my interest.
 
I personally like doing things from scratch depending on what scratch is.
 
Thanks for the warm welcome! I hope I can be of benefit to the community and likewise also learn from it.
 
In that spirit, as there is interest in how to use a modern UI for home automation front ends, I'll use this post to explain how to write a widget-based HTML5 UI. As an example, let's look at one of my favorite widgets from my system, the dial widget (you can see it prominently in the screenshots in earlier posts and below). First, a word on HTML5 as a front-end platform. HTML is by far the predominant platform for user interfaces, by orders of magnitude, as the web is based on it, and with so many toolsets, libraries and tutorials available it is an obvious choice for a HA UI. Before HTML5, roughly pre-2009, it was quite difficult to build rich UI applications: HTML wasn't functional enough, other UI frameworks like Flash and Silverlight were the popular choices, JavaScript was slow, and differing browser compatibility was a serious problem. Those days are gone. You can implement almost anything with HTML5, browsers (even IE) take standards compatibility much more seriously (although not perfectly), and JavaScript speed is vastly improved; it is still not as fast as native code, but for 99% of applications there is no practical difference.
 
Using HTML as a rich UI requires a number of different toolsets: HTML provides the layout description for the page, CSS provides the style (eg. color, animation), SVG provides vector graphics support if needed, and JavaScript manipulates the CSS/HTML entities and applies logic to make the page smart. So it isn't the easiest stack to learn, especially when used as an application UI, as its roots are in page layout and markup (displaying web pages). However, its ubiquity helps, as there is so much support for HTML (eg. helper tools like jQuery) that building web pages keeps getting easier. The web community is developing UI frameworks like AngularJS which make it easier to build applications by using design patterns like MVVM to connect a dynamic HTML5 front end with the back end. The details of MVC/MVVM design are out of scope for this post, but if you are interested in building your own HTML5 UI I suggest you take a closer look at Angular or a similar web UI framework like jQuery Mobile, as it will help you get started (even though I chose not to use a web framework, as I wanted access to the full power of the browser without the constraints a framework brings).
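To make the division of labour among those toolsets concrete, here is a tiny illustration (not code from my system; the tempDisplay element and showTemperature function are invented for the example) of JavaScript driving the HTML content and the CSS style:

var display = document.getElementById("tempDisplay"); // hypothetical element in the page HTML

function showTemperature(value) {
    display.textContent = value.toFixed(1) + " C"; // JavaScript updates the HTML content
    display.style.color = value > 30 ? "red" : ""; // ...and drives the CSS style
}

showTemperature(24.6);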
 
OK, back to the widget. On loading, the dashboard and graphical designer I have implemented grab all the HTML widgets in the widgets directory and install them dynamically as objects inline in the dashboard HTML. Here is the relevant code from the dashboard; it adds each widget as an object to the widget toolbox so that, when laying out a dashboard screen, the user can drag a new widget onto the design surface.

var objWidget = document.createElement("object");
objWidget.type = "text/html";
objWidget.data = "widgets/" + widgetTemplates[widgetNum]; // location of the widget HTML file
var title = document.createElement("p");
title.innerHTML = "<span>" + widgetNameType[0] + "</span><br /><br />";
TBContainer.appendChild(objWidget); // Add to toolbox Div
TBContainer.appendChild(title); // Add the name label under the widget (assumed; not shown in the original excerpt)


Here is the HTML code for the widget. Although the widget is fully formed HTML, CSS and JavaScript, it isn't meant to be used standalone as a web page.
 

<!DOCTYPE html>
<html lang="en">
<head>
<title>Dial Widget</title>
</head>
<body id="body">
<style>
body {
overflow: hidden;
}
</style>

<span id="TBtooltip" data-default="Displays current and average channel values" />
<span id="attrib0" data-type="channel" data-name="Source" data-default="" />
<span id="attrib1" data-type="channel" data-name="Average" data-default="" />
<span id="attrib2" data-type="input" data-name="Range" data-default="100" />
<span id="ontop" data-default="true" />

<div id="group">
<svg id="widget" width="100" height="80" style="position: absolute; left: 0px; top: 0px; z-index:4">
<style type="text/css">
text {
font-family: "Helvetica Neue", Arial, Helvetica, sans-serif;
font-weight: normal;
font-style: normal;
font-size: 20px;
text-align: center;
pointer-events: none;
}
</style>

<g id="svgGroup" style="position: absolute; left: 0px; top: 0px;">
<g id=" noScale">
</g>
<g id="scale">
<path id="seg1" fill="none" stroke="rgb(0, 134, 0)" d="M13.46,66.27 A40,40,0,0,1,10.1,52.79" stroke-width="20" />
<path id="seg2" fill="none" stroke="rgb(50, 134, 0)" d="M10.01,50.7 A40,40,0,0,1,12.18,36.98" stroke-width="20" />
<path id="seg3" fill="none" stroke="rgb(100, 134, 0)" d="M12.91,35.02 A40,40,0,0,1,20.27,23.23" stroke-width="20" />
<path id="seg4" fill="none" stroke="rgb(150, 134, 0)" d="M21.72,21.72 A40,40,0,0,1,33.1,13.75" stroke-width="20" />
<path id="seg5" fill="none" stroke="rgb(200, 134, 0)" d="M35.02,12.91 A40,40,0,0,1,48.6,10.02" stroke-width="20" />
<path id="seg6" fill="none" stroke="rgb(255, 134, 0)" d="M50.7,10.01 A40,40,0,0,1,64.33,12.66" stroke-width="20" />
<path id="seg7" fill="none" stroke="rgb(255, 100, 0)" d="M66.27,13.46 A40,40,0,0,1,77.79,21.23" stroke-width="20" />
<path id="seg8" fill="none" stroke="rgb(255, 70, 0)" d="M79.25,22.72 A40,40,0,0,1,86.82,34.37" stroke-width="20" />
<path id="seg9" fill="none" stroke="rgb(255, 35, 0)" d="M87.59,36.32 A40,40,0,0,1,90,50" stroke-width="20" />
<path id="seg10" fill="none" stroke="rgb(255, 0, 0)" d="M89.95,52.09 A40,40,0,0,1,86.82,65.63" stroke-width="20" />
<polyline id="avg" points="44,0 56,0 50,8 44,0" fill="rgb(20, 20, 230)" stroke-width="0" style="display: none" transform="rotate(-112, 50, 50)"><title id="avgtool">Average: 0</title></polyline>
</g>
</g>
</svg>
<svg id="needle" style="position: absolute; left: 0px; top: 0px; transform-Origin: 50% 62.5%; z-index:2">
<path id="svgNeedle" fill="rgb(100, 100, 100)" stroke="rgb(255, 255, 255)" stroke-width="1" d="M24.39,54.51 A26,26,0,1,1,27.6,63 l-20,4 Z" />
</svg>
<svg id="text" style="position: absolute; left: 0px; top: 0px; z-index: 3">
<text id="numVal" x="36" y="58" fill="rgb(255, 255, 255)">0.0</text>
</svg>
</div>

<script src="../widgetFramework.js"></script>
<script> ......... WIDGET JAVASCRIPT GOES HERE</script>
</body>
</html>


What you see here is:
 
  • The <span> entries are not exposed in the UI; they describe the widget's semantics to the UI framework, much as the ini files I posted about earlier describe the semantics of a device driver. The first span holds the text for the tooltip popup shown when the mouse hovers over the widget (managed by the dashboard using a Bootstrap function). The second and third spans describe the two automation channels this widget subscribes to (the framework uses a publish/subscribe model for events): the channel the dial needle responds to (eg. the instantaneous value), and a secondary indicator that moves around the edge of the dial (eg. to represent an average value). The last two span descriptors store the settings for the dial range (eg. 0 - 100, for scaling the dial value) and whether the widget should be rendered in the foreground or background (eg. a container widget should be in the background). The UI framework exposes these settings when you right-click the widget in design mode, and the editing behavior is defined by the 'data-type' (eg. a channel list so the user can select a channel as a feed from the server, or an input prompting the user to enter a value). The settings are stored in a database, and the UI framework customises the widget from them when loading at startup (see the sketch after this list for how the spans could be collected). This is a very powerful approach, as you can easily describe a rich set of widget-specific semantics to the UI framework while the HTML/CSS/JavaScript brings the widget alive.
     
  • The rest of the HTML is embedded SVG, a vector graphics language modern browsers use to draw complicated shapes. The first section describes the font for the number display in the middle of the dial. The second part, with the 'path' commands, draws the outer segments of the dial, the inner needle, the outer secondary indicator and the center (see the arc sketch after this list). Any vector graphics program can be used to do the drawing: you just save the drawing in SVG format and cut/paste the relevant SVG commands into the widget template. I use the open source Inkscape, but you could also use a sophisticated tool like Adobe Illustrator for complex drawings, or SVG-edit for simple ones. I like SVG as you can create visually pleasing and sophisticated widgets, but you could also use the HTML5 canvas commands for simpler graphics without the added complexity of SVG.
     
  • The final script tag brings in the common UI framework functionality via a separate JavaScript file that all widgets share, making it easy for the widget to communicate with the framework.
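As mentioned in the first bullet, here is a sketch of how a framework could collect the attrib spans into the _attribs array the widget JavaScript reads. This is illustrative only; my actual loader does more, such as merging the saved settings from the database:

var _attribs = [];
var spans = document.querySelectorAll("span[id^='attrib']"); // attrib0, attrib1, attrib2...
for (var i = 0; i < spans.length; i++) {
    _attribs.push({
        type: spans[i].getAttribute("data-type"), // 'channel' or 'input' - sets the property editor behavior
        name: spans[i].getAttribute("data-name"), // label shown when editing the widget
        value: spans[i].getAttribute("data-default") // replaced by the saved setting at load time
    });
}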
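And the 'A' (arc) path commands that draw the dial segments look cryptic but are simple trigonometry. I drew mine in Inkscape rather than computing them, but here is a sketch of the underlying math (the arcPath function is invented for illustration):

function arcPath(cx, cy, r, startDeg, endDeg) { // arc of radius r centered on (cx, cy)
    var toRad = Math.PI / 180;
    var x1 = cx + r * Math.cos(startDeg * toRad); // arc start point
    var y1 = cy + r * Math.sin(startDeg * toRad);
    var x2 = cx + r * Math.cos(endDeg * toRad); // arc end point
    var y2 = cy + r * Math.sin(endDeg * toRad);
    return "M" + x1.toFixed(2) + "," + y1.toFixed(2) +
        " A" + r + "," + r + ",0,0,1," + x2.toFixed(2) + "," + y2.toFixed(2);
}

// arcPath(50, 50, 40, 156, 176) reproduces the seg1 path above (to rounding)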
 
Here is the JavaScript that makes the widget come alive and interact with the user and the automation framework:
 

var needleID = document.getElementById("needle"); // the SVG whose rotation swings the needle
var svgNeedle = document.getElementById("svgNeedle"); // the needle path (scaled in scale())
var svgText = document.getElementById("text"); // the SVG holding the value text
var numID = document.getElementById("numVal"); // the text element displaying the value
var avgID = document.getElementById("avg"); // the average marker polyline
var oldVal = 0; // previous needle value, used to time the transition
var range; // dial range, set from the widget settings in widgetStart

// Called from framework when widget starts
function widgetStart(param) { // widget specific startup
    range = parseInt(_attribs[2].value);
    if (_attribs[1].value !== "") avgID.style.setProperty("display", "inline"); // show the average marker only if its channel is set
    // Hide the SVG used to display the widget in the toolbox for proper drag/drop (can only drag the id=widget SVG element) & put dial face in background
    return true;
}

function startDesign() { // called when entering design mode
}

function endDesign() { // called when leaving design mode
}

function startEdit() { // called when editing starts
}

function endEdit(param0) { // called when editing finishes
    if (_attribs[1].value !== "") avgID.style.setProperty("display", "inline");
    else avgID.style.setProperty("display", "none"); // only display the average marker if its channel is set
}

function scale(scaleX, scaleY) { // manage scaling
    svgText.setAttribute("transform", "scale(" + scaleX + "," + scaleY + ")"); // scale value text
    svgNeedle.setAttribute("transform", "scale(" + scaleX + "," + scaleY + ")"); // scale needle
}

// Called from framework for incoming channel events
function feed(channel, scope, data) {
    var numeric = parseFloat(data);
    if (isNaN(numeric)) return;
    if (channel === _attribs[0].value.split("/")[2]) return rotateDial(numeric); // primary channel drives the needle
    if (channel === _attribs[1].value.split("/")[2]) return setAvg(numeric); // secondary channel drives the average marker
}

// Called from framework for initial channel status
function ini(channel, scope, data) {
    return feed(channel, scope, data);
}

// Set the average indicator
function setAvg(avgVal) {
    document.getElementById("avgtool").textContent = "Average: " + avgVal;
    if (avgVal > range * 1.05) avgVal = range * 1.05; // allow a little overrun
    if (avgVal < range * -0.05) avgVal = range * -0.05; // allow a little underrun
    var angle = parseInt(avgVal * 227 / range - 114); // map the value onto the dial's sweep
    avgID.setAttribute('transform', 'rotate(' + angle.toString() + ' 50 50)');
}

// Rotate the needle from the old value to the new
function rotateDial(newVal) {
    var textVal = newVal;
    newVal = Math.abs(newVal);
    if (newVal > range * 1.05) newVal = range * 1.05; // allow a little overrun
    if (newVal < range * -0.05) newVal = range * -0.05; // allow a little underrun
    if (range > 10) { // format the displayed value to suit the range
        numID.textContent = Math.round(textVal);
    } else {
        numID.textContent = Math.round(textVal * 10) / 10;
    }
    numID.setAttribute("x", (document.getElementById("widget").clientWidth / 2 - numID.getBBox().width / 2)); // center the number
    needleID.style.setProperty('transition', 'transform ' + Math.abs(newVal - oldVal) * 2 / range + 's cubic-bezier(0.680, -0.550, 0.265, 1.550)'); // longer swings animate longer, with overshoot
    needleID.style.setProperty('transform', 'rotate(' + newVal * 223 / range + 'deg)');
    oldVal = newVal;
}


 
The JavaScript for this particular widget is a little easier to read than the HTML/SVG.
  • The variable initialization at the top of the code gets references to the HTML elements we will be interacting with (eg. to spin the needle). 
     
  • The widgetStart function is called by the framework when the widget is first loaded into the dashboard, and all the initialization code goes here. The _attribs[] array is provided by the UI framework and stores the settings for this particular widget instance (eg. if no secondary channel was set when the widget was configured in design mode, the secondary value indicator isn't displayed).
     
  • The empty functions below widgetStart are optional and are called when the dashboard enters or exits design mode, and when editing of the widget properties in the designer starts or finishes. This lets the widget behave differently at design time, which can be very useful for more sophisticated widgets (eg. adding design-time functionality on top of the standard design features all widgets inherit from the framework). In this example, endEdit hides the secondary indicator marker on the dial if the secondary channel isn't set, a small but nice touch that needs only a line of code.
     
  • The scale function handles how the widget's HTML/SVG entities are scaled when the user edits a widget and drags one of the scale 'handles' to expand or contract the widget. The framework automatically scales simple widgets, but sometimes the scaling logic needs to differ, and you can implement that here. In this widget, a standard CSS transform is applied to the needle and the text.
     
  • The feed function is called by the framework when there is an event the widget has subscribed to. Here we route the incoming message depending on whether it is for the primary channel (the dial needle) or the secondary channel (the average indicator); a hypothetical sketch of the dispatch plumbing follows this list.
     
  • The ini function is called when the widget is first instantiated. The automation framework maintains the state of all events, so when the browser first launches, the framework sends the current channel state to the widget via this function. It is similar to the feed function (which receives all live events) but is kept separate in case you want to do some initial work on the state data. Here we don't, so we simply call feed to display the initial value.
     
  • The last two functions are specific to the dial widget and animate the needles. Here we take advantage of the power of CSS by using a bezier curve to vary the acceleration of the needle so it feels like a real dial needle: it accelerates quickly from 0, and if it has to swing a long way it overshoots the mark and smoothly returns to the true value. Small touches like this bring the dashboard to life. For example, I have a power dashboard screen with 6 of these dials (showing solar power, $$ saved per day, power per phase etc.), and it looks great when you select the page and all the dials come to life. And only 1 line of code was needed; that is the power of modern platforms like HTML5.
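As flagged in the feed bullet above, here is a hypothetical sketch of how a dashboard could route published channel events to the widget functions. My framework's actual transport and message format differ, so the URL and JSON shape here are assumptions:

var socket = new WebSocket("ws://myserver:8080/events"); // hypothetical event stream from the hub
socket.onmessage = function (event) {
    var msg = JSON.parse(event.data); // e.g. {"channel": "power", "scope": "house", "data": "1250"}
    feed(msg.channel, msg.scope, msg.data); // the real dashboard forwards this into each widget <object>'s window
};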
 
Below is a screenshot of the actual widget, from the power dashboard screen: the dial shows current power use, and the blue marker on the outside shows the average power for the day (ie. I was using more power than average at that moment). 
dial.JPG
 
So I hope you like my widget and that the above is informative enough to show how flexible and powerful technologies like HTML5 are for home automation, and to get you excited about trying something similar yourself! The dial widget is a simple example and not difficult to build (most of the code above came from a template). There really is so much possible with a UI framework like this for desktops, tablets and phones. In future posts I'd like to describe the UI designer as well as a more sophisticated widget, the time series graph, which uses the excellent D3 JavaScript libraries (eg. zoom and pan with finger gestures).
 
I'm happy to answer any questions about HTML development in general or my solution specifically and I'm considering making a demo site where you can play with the dashboard, WYSIWYG designer and widgets.
 
Here is an example of editing the dial widget in the graphical designer. Click the image below to see the details; the thumbnail is too small.  
 
Here you can see the drag handles for scaling the widget (the small grey boxes on the widget's edit outline at the bottom-right, bottom-middle and right-middle), and how the framework exposes the widget properties for editing (the source channel, in this example the daily rain amount, the channel for the average indicator, and the dial range input).  
 
The widget doesn't need to deal with any of the design-time semantics; the framework does all the work (eg. dragging the widget to a new spot on the screen). Also clearly visible on the left-hand side of the screen is the widget toolbox, which you can scroll through to pick a widget to drag onto the design surface.  
 
Just below the toolbox are icons for creating and deleting dashboard screens (each screen shows up as a new tab), as well as for saving screen changes.  
 
design surface1.jpg
 
deandob said:
On a different topic, there is a new breed of automation solutions coming on the market with the 'internet of things' hype, but most of them are not as functional as the pre-IoT solutions and have an emphasis on UI (especially mobile). I notice that many of these new HA solutions are cloud based, which I think is a fundamental flaw: the latency of communicating with the cloud causes small delays (especially noticeable when sending commands, like switching on a light), and more importantly, if the cloud or the network is down, the automation is down. IMO a local hub that handles both the device interfacing and the automation logic is a better design; still use the cloud, but to augment the hub (eg. use WhatsApp for mobile notifications). What do others think about these new systems (like SmartThings or Ninja Sphere)?
Up until Amazon Echo came on the scene, I was skeptical about why anyone would want the cloud to be a part of their home automation.  Up until Echo, the tradeoff seemed to be higher latency for lower cost, but it just didn't make much sense to me, except maybe as a cheap stepping stone for new users wanting to test the water before diving in.  After all, why would anyone care about saving a paltry few dollars on the main hub if it seriously undermined the performance of the much greater hundreds/thousands of dollars they were spending on light switches and other home automation enabled devices?
 
With Echo, though, it's possible we're at the beginning of a whole new ballgame.  If it turns out the main touchpoint of your home automation interaction is with a cloud based entity, then maybe you won't feel the performance penalty as much from having some other control elements in the cloud as well.  There would, ideally, still be a partitioning, optimized for performance,  between cloud services and local automation, but it's maybe too early to be fully confident where those lines will get drawn.
 
The Echo is an interesting device; I have been reading the thread on this forum. It has three aspects that make it unique: excellent speech recognition, recognized phrases sent to the cloud for interpretation, and decent audio quality, all in a cute package. The teardown of the Echo shows it is well engineered, and reports of the audio and speech recognition look to be market leading. It is an example of the new breed of automation systems coming onto the market riding the IoT wave. I'm thinking of getting one to play with, for the speech recognition alone.
 
It is a good example of a cloud device/service that can augment a home automation hub. Its main function is to interact with you via voice and relay commands to the cloud for processing. Useful, but not the complete picture. How do you interface with your home devices? How do you apply logic to events? If you want voice as the main interface, then the approach the CastleOS guy is taking will give you the best of both worlds, as it integrates voice with automation functionality. For the Amazon Echo you will need to use services like IFTTT, plus some method of connecting your devices to/from the cloud (likely needing your internet modem/router to be configured for access from the cloud). So when Dean Roddey in a post on the last page mentions the complexity of relying on all these different services stitched together impacting reliability, I agree with him for these types of scenarios (I don't agree if he means that using development platform toolsets like HTML5 or .NET impacts reliability).
 
But Amazon isn't the only one who can do this. Apple have Siri, Google have their voice cloud and Microsoft have Cortana. As I live in the Microsoft ecosystem, I am quite excited about the new Windows 10 for IoT, a cut-down version of Windows (like Windows CE or Windows Embedded) designed to run on devices like the Raspberry Pi, which will enable Echo-type scenarios but with toolsets available to the DIY community. I have a Pi sitting next to me at the moment running a pre-release of Windows 10 IoT, and I have deployed Universal Windows applications on it that also run on my desktop and phone with the same code. As the Windows API set will be available on the IoT version, I will (eventually) be able to run Cortana for speech recognition as well as media services for audio and video, just like on a full desktop machine, using a $35 piece of hardware the size of a credit card. I have been testing Cortana on my desktop, amazed that the speech recognition is very quick and accurate even though it uses a cloud service. I have a design ready for a small wall-mounted Raspberry Pi with a 5" screen the size of a light switch, connected and powered over Ethernet cable run inside the walls; it includes a class D amplifier and a USB micro webcam, so this wall-mounted screen has full audio, video and touch/speech input, similar to a phone, as well as inputs/outputs for local sensors like motion detectors.
 
I have an Echo on order and should be receiving it mid-July.  I let my first two invitations expire because I thought that if what Echo was doing was mainly just speech recognition and a fixed handful of staged tricks, then Echo would be only minimally interesting.  However, I've lately been getting the vibe that Echo is not just that but actually more like IBM's Watson, which would make Echo far more interesting.  I haven't heard the same degree of positive buzz regarding the Microsoft, Apple, or Google offerings.  Are they just as good?
 
Can you post a photo of the screen-camera thing you just mentioned?  Getting all that to fit into a typical switch box would help keep things tidy.  Perhaps it could also do gesture recognition, maybe by having the video piped somewhere it could be processed in near real-time?  Is that the reason for having the camera there?  
 
I previously had HomeSeer running on a Pi and some other SBCs, but six months ago, for roughly the same money, I purchased a Bay Trail J1800 motherboard with an integrated Intel CPU, and it runs circles around all of them.  I haven't tried a Pi 2, but I expect it would beat that as well.  Of primary significance: all the Linux flavors seem to work out of the box on the Intel platform, but (so far anyway) the same isn't true for the ARM platforms.
 
I've heard that Windows 10 IoT is missing the Windows 10 GUI.  Is that true?  Without the GUI, whatever's left had better be damn good or it's gonna flop.  I do run Windows on plenty of other computers, but on none of them has it ever been completely 100% rock solid--not like running Debian Linux is.  For that reason, I think Linux is probably the better choice for home automation.  Windows always starts out fine when you first install it, but it inevitably gets sluggish with time.  Maybe it's bloating from updates.  Who knows?  That just doesn't seem to happen with Linux.  This was discussed in one of the threads on the HomeSeer forum, and the only folks who reported having no trouble running Windows for home automation are the ones who keep their automation computer isolated and never install any Windows updates.
 
Microsoft, Google and Apple currently have similar solutions on the phone and as cloud services; Amazon is the first to package it up into a form factor suitable for home use. The Echo has promise if they open up the API and expand the services available to its voice recognition. Definitely worth watching.
 
Regarding prototypes: I have an older working prototype with speech recognition, but it uses a different architecture (PIC based, using RS485, with audio but no video). The new version using the Pi is in bits at the moment: I have the mini USB camera, the Pi and the screen module (RGB24 DPI parallel interface), and some of the software prototyped on the desktop, but nothing is together yet, although the Power over Ethernet module is completed. I'm waiting for Windows 10 IoT to be released (August) before working any more on this project. No plans for gesture control (at the moment), with voice and touchscreen being the main input interfaces. The camera is for a video intercom.
 
Your Intel motherboard is good value at $60 and definitely faster than the ARM boards (with better software support, as you mention). However, the Pi 2 has a significantly smaller form factor.
 
Windows IoT is missing the desktop GUI, but it's not meant to be a desktop. You can deploy Windows apps to it and the UI of the app will run on the display, which is exactly what you want for an IoT device; no need for a desktop. It is really for developers: you can develop a full Windows application and run it on the Raspberry Pi with all the Windows APIs and functions available (well, almost all). You would never need a desktop on an IoT device; running your app is sufficient.
 
What makes Windows slower over time is registry bloat, which happens if you are constantly installing/uninstalling apps and features. I usually rebuild my Windows machines after 18 months or 2 years and they do run better afterwards, although this is less of a problem with newer versions of Windows. I try to stay away from Linux; I find the commands hard to remember and hard to use, although the shells are pretty good. Not that there is anything wrong with Linux, I just haven't invested the time to be an expert, and I find myself frustrated when something doesn't work, and annoyed that when you want to make changes or install something you often end up compiling it, and it invariably fails and is tough to work out what went wrong. Windows is just easier to use, albeit I'm more familiar with Windows (and its quirks). Agreed that for a PC with a dedicated function like a media center or automation hub, don't do updates (although you will miss out on security fixes; still low risk).
 
mdonovan said:
I've been using a hybrid approach to automation. I decided on an architecture where I write an interface to a device that runs as a service and broadcasts data using MQTT. Within this interface, there is an object that talks to the hardware and provides the data to a wrapper that handles the broadcasting of MQTT messages. This approach of splitting the interface and the protocol saved me a lot of work when I converted from xAP to MQTT. Then I create a driver for the home automation software (right now I'm using Elve) that reads the MQTT messages and provides the data to the software. This keeps everything separate: I can just create a small driver for whatever software I'm using, and I don't have to redo all the device interfacing.
 
I have always been interested in coding for home automation, but I was not really interested in writing a complete, comprehensive home automation package, so I experiment with different products. Elve comes the closest to what I've always wanted in HA software, but it's unsupported and no longer updated. The touch screen functionality is great and not very difficult to do, and the price is right (free), so I can't really complain about the lack of support.
 
For what it's worth...
 
Matt
Sounds interesting.  Have you written up anything with a bit more detail about what you did and how you did it?  Maybe it would be a blueprint of sorts for how to do this.  If Elve didn't exist, what home automation software would have been your next pick for building on?
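To illustrate, here is a minimal sketch of the interface/protocol split Matt describes, in JavaScript with the mqtt.js client (his actual interfaces are Windows services, and the TempSensor object, broker address and topic name here are hypothetical):

var mqtt = require("mqtt"); // the mqtt.js client (npm install mqtt)
var client = mqtt.connect("mqtt://localhost"); // broker address is an assumption

// Hardware-facing object: knows how to read the device, nothing about MQTT.
function TempSensor() {
    this.read = function () {
        return 22.5; // placeholder - talk to the real hardware here
    };
}

// Protocol wrapper: knows how to broadcast, nothing about the hardware.
function publishLoop(sensor) {
    setInterval(function () {
        client.publish("home/sensors/temp", String(sensor.read()));
    }, 5000); // broadcast every 5 seconds
}

client.on("connect", function () {
    publishLoop(new TempSensor());
});

Swapping the protocol (as with his xAP to MQTT conversion) then only touches the wrapper, not the hardware object.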
 
deandob said:
Your Intel motherboard is good value at $60 and definitely faster than the ARM boards (with better software support, as you mention). However, the Pi 2 has a significantly smaller form factor.
I got mine from Newegg around Christmastime for something like $33 with free shipping.  When the Pi first launched, there was a much greater price spread between the original Pi and the cheapest Intel motherboard+CPU.  For headless operation it seems to work fine.  The asking price tends to bounce around a lot, so if you're in no hurry you can simply wait for the price to fall again, as that happens fairly often.  Debian plus HomeSeer Pro3 occupy only about 350K of memory in total, so 1GB of memory is plenty for that combination.  So, figure roughly an extra $10 for the 1GB of memory if you don't already have some lying around.  
 
Thanks for explaining Windows 10 IoT.  I think I get it now: it's basically a delivery vehicle for Microsoft software app developers.  Unless the software I want to use has been recompiled for it by the owner of the source code, it sounds like I'd be SOL on using it.  In that sense, from an end-user perspective, it's not really as versatile as what I would have hoped.  Meh, it is what it is.
 
Well, it's not something like the old Windows RT or CE, which were completely different platforms. It's really Win32. So, for client/server products like CQC, the back end (which has no GUI anyway) would not be hampered by the lack of a GUI, and (modulo some higher level features that may have been left out and would have to be conditionally compiled out) it shouldn't represent a gigantic effort to run there. If it were RT/CE, it would be a flat out rewrite.
 
But CQC only uses the lowest level OS features it can, so I think it will fare reasonably well. But I'm not sure how stripped down it is. Is DirectShow supported? The audio part of it is necessary for our headless audio player. I'm not sure. Worst case, we would have to segregate a few things so that they could simply not be installed in that environment.
 
As Dean mentions, a HA hub that already runs on Win32 should be able to run on Win IoT, although some of the APIs aren't supported (eg. for enterprise apps). There is a list of supported APIs but I can't find it at the moment.
 
For the more casual 'maker' it will be easy to develop an application in Visual Studio and, with a click, deploy and run it on a device like a Raspberry Pi, especially as Microsoft is promoting its WinRT APIs for universal applications (the same application will run on desktop, phone and IoT device), so you can build and test on a PC and then run it on a Pi (even though the Pi is ARM based).
 
Note that devices like the Raspberry Pi with Windows IoT won't be perfect for all IoT applications: you do not get microsecond resolution on the GPIOs for timing-sensitive activities like driving the pins on a sensor chip, as it is not a real-time operating system. Sometimes a more basic device like an Arduino is more fit for purpose (and cheaper).
 
Here is a project that highlights the benefits of running Windows on these small form factor devices: two Raspberry Pis used to translate on the fly from English to Chinese over a network (each Pi drives a headset). Pretty cool!
https://www.hackster.io/5310/speech-translator
 
As long as we're on this topic: will regular Windows device drivers (e.g. for a USB device or sensor) automatically work under Windows 10 IoT, or will they, as I'm assuming, also require recompilation by whoever owns the original driver source code?  There are a lot of legacy devices (e.g. multimeter readings from a DMM that has a USB interface, RFXCOM readings, etc.) that it would be nice to access from Windows 10 IoT, and for which there may not exist a Linux device driver.  However, I'm guessing that in most cases, if the device is no longer being actively manufactured, the odds of getting the driver recompiled are close to nil.
 
IIRC, Intel's Edison is x86 code compatible, so if there were a Windows 10 IoT for Edison (?), it might be better in terms of driver re-use than running on the Pi (?) [and if not, was there ever any true benefit to be had from making Edison x86 code compatible?].  Anyone happen to know?  All by itself, this might make the Edison seem like more than just an oddball product.
 
You should be fine on an x86 device, as Win IoT uses all the Windows infrastructure, including device detection and updating your machine with the right device driver a la normal Windows (I'm guessing a bit here as I haven't tried it). But the only board that Win IoT x86 supports at this point is the Intel MinnowBoard. For ARM (Raspberry Pi 2) you will need the right device driver; Microsoft is working on having a (small) list of the more popular USB drivers ready when the OS is released, and support will improve after release. The Intel Edison, AFAIK, is not supported (at least in the short term).
 
See here for more info on Win IoT https://dev.windows.com/en-us/iot
 
At the moment it looks as though the cheapest (and easiest?) IoT wireless network may be sensor-equipped ESP8266 MQTT clients which connect to a free MQTT broker in the cloud.  
 
Here's an example of using an MQTT ESP8266 driven OLED display:
https://www.youtube.com/watch?v=x9_1w9EPb-s
 
and here's an example (using an Arduino, but based on the preceding YouTube video I presume it might just as easily have been an ESP8266?) of publishing/subscribing to a cloud broker:
https://www.youtube.com/watch?v=pn9tPRRNX50
 
From there, I presume it wouldn't be too hard to have any value published to a cloud broker also automatically graphed in real time for free using Xively or the like (I've been using plot.ly) and/or automatically stored in a MySQL database.
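For example, a minimal subscriber along those lines might look like this (a sketch using the mqtt.js client and the public test.mosquitto.org broker; the topic is hypothetical):

var mqtt = require("mqtt"); // the mqtt.js client (npm install mqtt)
var client = mqtt.connect("mqtt://test.mosquitto.org"); // free public test broker

client.on("connect", function () {
    client.subscribe("home/sensors/#"); // all topics under home/sensors
});

client.on("message", function (topic, message) {
    // hand the value to a charting service or INSERT it into MySQL here
    console.log(topic + ": " + message.toString());
});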
 
I suppose if (?)  there's a free cloud IFTTT that could monitor the MQTT broker, you'd even have the possibility of some rudimentary automation.
 
ESP8266s cost around $3 and consist of WiFi plus a small programmable microcontroller and some GPIO pins.  Different versions have sprung up due to its popularity.  You can use the Arduino IDE to program them, which I think helped popularize the chip.
 
I'm not sure how it might all hook up with regular home automation--maybe that's where the OP might fit into this?--but at least for quick and dirty DIY wireless sensors, I can see the appeal.  I think we'll eventually see ultra low power narrowband variations of this for long range battery powered (or energy harvested) IoT devices.  Some chips for that have been for sale on Digi-Key for a while now.  Crammed in a nutshell, that's what I understand to be the form the IoT will take.  Once we're swimming in a highly sensored environment, I can imagine the option will exist for at least some automation to happen automatically (based on recurring patterns) and without explicit programming.   
 
Is all that about right, or have I left something out? 
 
I just installed Windows 10 in a VM and am setting up VS2015 with it now so I can use W10 IoT on my RPi2.  Curious to see how well IoT runs on the Pi.  I may look at adding a small touch screen to the Pi for an automation interface, but honestly, with a phone and a tablet always handy, I'm not seeing the draw of dedicated automation panels much anymore.
 
I would be up for helping where I could on a HA software package based on MS tools.  I've done a bit using the HAI API to create a Windows service that performs SQL logging of all activity.  I'm not a developer by trade, but I do quite a bit of VB.NET and Delphi for miscellaneous apps.
 