M1XRF2G and collisions

vc1234

Active Member
Hi,

I am thinking of using an Elk with some wireless, probably GE, sensors. I've read a lot about the M1XRF2G's capabilities, supervision, and so on, but one issue keeps bothering me.

Let's say we have a dozen wireless "crystal" transmitters. There is some probability of more than one sensor trying to transmit at the same time, which might lead to corrupted data packets and the loss of, say, a window-break notification.

How does a wireless receiver handle such a situation, if at all?

Thanks

vc
 
Interesting question. I would suppose, but certainly don't know, that the probability of a collision in a meaningful circumstance is minuscule.

I would ask in addition, what measures are in the design that prevent wireless Denial of Service attacks (e.g., packet floods, spoofing, jamming, etc.)?
 
Interesting question. I would suppose, but certainly don't know, that the probability of a collision in a meaningful circumstance is minuscule.

I would ask in addition, what measures are in the design that prevent wireless Denial of Service attacks (e.g., packet floods, spoofing, jamming, etc.)?

Imagine two motion detectors located in the same room, for redundancy or whatever, firing at the same time. Granted, the collision probability would be lower than with Ethernet traffic, where the collision-handling mechanism is a critical protocol feature, but the nature of the problem is the same as with any shared medium.

Do we rely on luck when using wireless security sensors, even supervised ones? That would be odd...

vc
 
Do we rely on luck when using wireless security sensors, even supervised ones? That would be odd...

You could call it luck or even magic... but probability and statistics might make it sound better. When communications go in only one direction, math is about all that's left to rely on. With TCP/IP or similar protocols, the data flows in both directions, which makes it possible for the receiver to confirm reception. When there is no means by which the receiver can confirm reception, the transmitter has little choice but to keep transmitting.

Two transmitters that collide will continue to collide in subsequent transmissions unless one or both of them change the interval at which they transmit. Of course, they can't both make the same changes either. By making "random" changes, eventually they will stop colliding. The same is true when even more transmitters collide. Keep adding transmitters and there will come a point where so many have been added that one will always collide with another. Before reaching this point, an analysis of the timing strategy will tell you the probability of the message getting through. That probability, together with the average number of transmissions per time unit, will tell you how long to expect to wait for the message to get through.

For a wireless (transmit-only) sensor to be supervised, all you are asking is that the receiver hears from it every now and then. This might be measured in hours. When a sensor goes into an alarmed state, it can start talking up a storm (at random intervals) to make sure the message gets through quickly.
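
To make that concrete, here is a toy simulation (my own sketch; the 100 ms packet and the 200-800 ms random wait are invented numbers, not any vendor's timing spec). Two transmitters fire at the same instant, guaranteeing a first collision, then repeat at randomized intervals until one packet gets through cleanly:

```python
import random

PACKET_MS = 100          # assumed packet duration
GAP_RANGE = (200, 800)   # assumed random wait between repeats (ms)

def first_clean_packet(n_tx=2, horizon_ms=60_000):
    """Time (ms) at which the receiver first hears a packet with no overlap."""
    # Build each transmitter's packet intervals; all start together (worst case).
    schedules = []
    for _ in range(n_tx):
        t, bursts = 0.0, []
        while t < horizon_ms:
            bursts.append((t, t + PACKET_MS))
            t += PACKET_MS + random.uniform(*GAP_RANGE)
        schedules.append(bursts)
    best = None
    for i, bursts in enumerate(schedules):
        others = [p for j, s in enumerate(schedules) if j != i for p in s]
        for start, end in bursts:
            # a packet is "clean" if no other transmitter's packet overlaps it
            if all(o_end <= start or o_start >= end for o_start, o_end in others):
                best = end if best is None else min(best, end)
                break
    return best

random.seed(1)
trials = [first_clean_packet() for _ in range(2000)]
heard = [t for t in trials if t is not None]
print(f"message got through in {len(heard) / len(trials):.1%} of trials,")
print(f"on average {sum(heard) / len(heard):.0f} ms after the event")
```

With these made-up numbers the message essentially always gets through, typically well under a second after the event. The point is that the randomized gaps, not luck, are doing the work.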
 
Do we rely on luck when using wireless security sensors, even supervised ones? That would be odd...

When there is no means by which the receiver can confirm reception, the transmitter has little choice but to keep transmitting. Two transmitters that collide will continue to collide in subsequent transmissions unless one or both of them change the interval at which they transmit. Of course, they can't both make the same changes either. By making "random" changes, eventually they will stop colliding.

Ah, but what you've just described is what Ethernet does with its collision detection and random back-offs. With wireless security sensors, supervised or otherwise, no collision detection is happening (I may be wrong on that; after all, it was my original question!). The security transmitters apparently do not know whether a collision occurred, so they cannot implement Ethernet-style random back-offs/retransmissions. If that's indeed the case, the situation is a bit worrisome, since there's clearly a potential to lose a piece of security alarm keep-alive/supervision information.
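
(For reference, the Ethernet mechanism I mean is truncated binary exponential backoff. A simplified sketch, using the classic 10 Mb/s parameters:)

```python
import random

SLOT_US = 51.2   # one slot time: 512 bit times at 10 Mb/s

def backoff_us(collisions):
    """Truncated binary exponential backoff (802.3, simplified).

    After the n-th collision in a row, a station waits a random number
    of slot times drawn uniformly from 0 .. 2**min(n, 10) - 1.
    """
    k = min(collisions, 10)
    return random.randrange(2 ** k) * SLOT_US

random.seed(7)
print([round(backoff_us(n), 1) for n in (1, 2, 3)])  # three sample waits, in us
```

The crucial ingredient Ethernet has and these sensors apparently lack is the detection step: the station knows a collision happened and reacts to it.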

From reading other forums, I gather that collision-related information loss has occurred in real life with, for example, Oregon Scientific sensors, which are rather chatty but not extraordinarily so. While losing a couple of temperature readings is merely frustrating, losing security information may have more serious implications.
 
Ah, but what you've just described is what Ethernet does with its collision detection and random back-offs. With wireless security sensors, supervised or otherwise, no collision detection is happening (I may be wrong on that; after all, it was my original question!). The security transmitters apparently do not know whether a collision occurred, so they cannot implement Ethernet-style random back-offs/retransmissions. If that's indeed the case, the situation is a bit worrisome, since there's clearly a potential to lose a piece of security alarm keep-alive/supervision information.

It's pretty close. The key difference is that when the transmitter can't know there was a collision, it has no choice but to assume that there was (or risk not getting the message through). I haven't looked at many wireless sensors but, like you, I don't expect them to have receivers, so I don't expect them to know about collisions either. This, however, does not imply that they can't implement a random retransmission scheme. Since they are transmitting repeatedly anyway, they just need to mix up the time between transmissions.

I can see where it might sound worrisome if all we knew was that a packet had only (say) a 25% chance of being heard correctly. The piece that's missing from the story is how often the packet is transmitted. Imagine if the packet took 100 ms to transmit but was transmitted twice each second. It would only take a few seconds to get to the point where the packet had about a 99% chance of being heard. With more seconds, you can start adding 9's to the right of the decimal point.

Now, strictly speaking, there is some chance a door could be opened and closed again before the sensor transmitted enough packets to get one through. The sensor could be smart enough to know this, and even if the door is closed again, it might keep transmitting for, say, 10 seconds, or however long the engineer decided was needed to get enough 9's to the right of the decimal point to feel good about sleeping at night.
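
The arithmetic behind that is just independent tries. With the numbers above (a 25% chance per packet, two packets per second):

```python
# Chance the message has been heard after repeated independent tries,
# assuming each packet has only a 25% chance of arriving cleanly.
p_hear = 0.25
for seconds in (1, 2, 4, 8, 16):
    n = 2 * seconds                    # two packets per second
    print(f"{seconds:>2} s: P(heard) = {1 - (1 - p_hear) ** n:.4%}")
```

After eight seconds (16 packets) you are at roughly 99%, and every second after that adds more 9's.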

So with a good design, about all that would concern me with RF sensors is noise. Jamming the signal is likely very easy. So if you are looking for something to worry about....
 
The piece that's missing from the story is how often the packet is transmitted...
Right, we need some numbers here, from someone who knows. In a system of a dozen sensors, as suggested, various design assumptions can easily lead to calculated loss rates of 25% or greater (although that extreme strikes me as unlikely) or as low as insignificant fractions of a percent (likely).
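
For what it's worth, here is the kind of back-of-the-envelope calculation I mean (the 100 ms packet time is my guess, not anyone's spec). The same dozen sensors give wildly different answers depending on the traffic assumption:

```python
import math

PACKET_S = 0.1   # guessed packet duration (100 ms)

# Supervision only: 12 sensors, one heartbeat per hour each.  Treating the
# other 11 as Poisson arrivals, a packet is lost if another one starts
# within one packet time on either side of it (pure-ALOHA style).
rate_others = 11 / 3600
p_lost = 1 - math.exp(-rate_others * 2 * PACKET_S)
print(f"supervision: P(a heartbeat collides) ~ {p_lost:.4%}")       # ~0.06%

# Worst case: all 12 sensors alarm inside the same one-second window.
p_overlap = 2 * PACKET_S / 1.0          # chance one other packet overlaps ours
p_clean = (1 - p_overlap) ** 11
print(f"burst alarm: P(a single packet survives) ~ {p_clean:.1%}")  # ~8.6%
```

Fractions of a percent in the quiet case, dreadful in the burst case, which is exactly why the retransmission strategy matters.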

So with a good design, about all that would concern me with RF sensors is noise. Jamming the signal is likely very easy. So if you are looking for something to worry about....
Probably the more significant issue.
 
I am also curious about what a more expensive wireless sensor line, such as the GE NX650 sensors, would give you in comparison to the cheaper W80032RF/DS-10 option.

I do realize that the radio presumably has much better range and quality, but other than that, what else?

They appear fundamentally the same with respect to collision handling (none) and, stretching it a bit, supervision.

Take the X10 DS-10, for example. It reports a heartbeat every hour, plus low-battery status. A GE NX650 sends a "supervisory report", whatever that is, every hour. Looks pretty similar.

Parenthetically, there is a frustrating lack of information as to what exactly that supervisory report might contain, and likewise about other "crystal" sensor features. Knowing in exactly what respects they might be superior to the budget X10 solutions would make the design decision much easier. If anyone has any info and could point me in the right direction, it would be much appreciated.

Thanks.

vc
 
I think you are making the assumption that the physical-layer access is Ethernet-style CSMA/CD. These systems are likely using CDMA or FDMA, which allows multiple nodes to transmit at the same time (probably far more than you can have sensors), so no collision detection is required.

There is probably some error detection, or even correction, and bad packets are just thrown away. Most likely the receiver needs to lose several before it flags the sensor as lost. Also, a lost sensor is not the same as an open sensor, a low battery, etc.

I think one critical difference between real security sensors and the X10 stuff is the coding. X10 sensors aren't individually coded, so you could intercept and recreate the signal. The security sensors have unique IDs that must be entered into the supervisory unit, and that code itself is never actually transmitted, so the signal can be compared against the code to determine whether it came from the actual device.
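
Speculating about the receiver side, the supervision and ID-checking logic might look something like this sketch (the hourly interval, the three-strike timeout, and the ID list are my inventions for illustration, not any panel's actual firmware):

```python
import time

SUPERVISION_INTERVAL_S = 3600   # assume sensors check in hourly
MISSED_BEFORE_LOST = 3          # tolerate a few collided/corrupted heartbeats

enrolled = {0x1A2B3C, 0x4D5E6F}                   # IDs learned into the panel
last_heard = {sid: time.time() for sid in enrolled}

def report_trouble(sensor_id, reason):
    print(f"TROUBLE sensor {sensor_id:06X}: {reason}")

def on_packet(sensor_id, crc_ok):
    """Handle one received RF packet."""
    if not crc_ok:
        return          # error detection: bad packets are simply dropped
    if sensor_id not in enrolled:
        return          # not one of our coded sensors -- ignore it
    last_heard[sensor_id] = time.time()

def check_supervision():
    """Run periodically: flag sensors that missed several check-ins."""
    now = time.time()
    for sid, seen in last_heard.items():
        if now - seen > MISSED_BEFORE_LOST * SUPERVISION_INTERVAL_S:
            report_trouble(sid, "supervision lost")
```

Note how a lost packet here and there costs nothing; trouble is only raised after hours of silence.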
 
The GE Transmitters send 8 packets of alarm data with random spacing between the packets. When System Depot was in the same building as ELK, several hundred transmitters were on the shelf all transmitting their hourly supervisory signal. We never experienced any loss of signals from transmitters coming into our test M1 systems.

A GE keyfob transmitter only sends two rounds of data, so if you push the button and it does not work, push the button again.

A fire transmitter sends 16 rounds of data.
 
I think you are making the assumption that the physical-layer access is Ethernet-style CSMA/CD.

I am not sure how you've come to that conclusion, as nowhere do I make such an assumption. To the contrary, I suspect there is no collision detection and would just like to clarify the whole issue!


These systems are likely using CDMA or FDMA, which allows multiple nodes to transmit at the same time (probably far more than you can have sensors), so no collision detection is required.

I doubt it very much because: (a) CDMA implies spread spectrum, and the FCC allows spread-spectrum operation only in the 902-928 MHz, 2400-2483.5 MHz, and 5725-5850 MHz bands; (b) FDMA, which is dramatically different from CDMA, implies a set of channels operating on different frequencies with static channel assignment to different transmitters. As we know, the M1XRF2G operates on 319.5 MHz, which means neither (b) nor, especially, (a) is possible.
 
The GE Transmitters send 8 packets of alarm data with random spacing between the packets.

Well, it's similar to what an X10 radio transmitter does: they send, I believe, five packets in a row, although I am not sure about the random spacing. Even so, as I mentioned earlier, folks observed some data loss with Oregon sensors and RFXCOM.

When System Depot was in the same building as ELK, several hundred transmitters were on the shelf all transmitting their hourly supervisory signal. We never experienced any loss of signals from transmitters coming into our test M1 systems.

That's interesting, but since the traffic was so light, you might have just gotten lucky ;) Seriously, perhaps I am exaggerating the potential problem: in a typical security application, sensors would fire sequentially while, say, a burglary is in progress, so the packets would be naturally staggered. Interestingly, supervisory traffic may cause some harm here, because with 300 sensors supervisory packets would be going out almost every second. Knowing what randomization time frame GE uses and how long one packet transmission takes would let one evaluate the probability of total data loss (all 8 packets). Do you happen to know?
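
In case it helps pin down what I'm asking: here is the calculation with placeholder numbers (the packet time and the background rate are my guesses; GE's real figures are exactly the missing piece):

```python
import math

PACKET_S = 0.1    # guessed air time of one packet
BG_RATE = 1.0     # guessed background packets per second (busy supervision)

# A packet is lost if any background packet starts within one packet
# time on either side of it (Poisson arrivals assumed).
p_lost_one = 1 - math.exp(-BG_RATE * 2 * PACKET_S)

# If the randomization spreads the 8 packets far enough apart for the
# losses to be independent, losing the alarm means losing all 8.
p_lost_all = p_lost_one ** 8
print(f"P(one packet lost) ~ {p_lost_one:.2%}")   # ~18%
print(f"P(all 8 lost)      ~ {p_lost_all:.2e}")   # ~1e-06
```

Even with a packet on the air every second, losing all eight would be a roughly one-in-a-million event, if, and that's the question, the randomization window really does make the losses independent.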

Thanks.

vc
 
The current one-way RF transmitters with supervisory signals used by the security industry are a transmit-and-pray system, but they tend to be very robust and reliable in operation. Some are UL listed for fire applications.

Unfortunately, in the past many of the mass-market security systems have demanded low cost over functionality.

There is a glimmer of hope for the future with two-way Zigbee. A two-way Zigbee security mesh network on a premises will offer a solution to RF clash.
 
The current one-way RF transmitters with supervisory signals used by the security industry are a transmit-and-pray system, but they tend to be very robust and reliable in operation. Some are UL listed for fire applications.

Unfortunately, in the past many of the mass-market security systems have demanded low cost over functionality.

There is a glimmer of hope for the future with two-way Zigbee. A two-way Zigbee security mesh network on a premises will offer a solution to RF clash.

Thank you. That clarifies the issue.

You've mentioned Zigbee, but doesn't Z-Wave also offer the same two-way communication ability, which can also be used in security apps?
 
Z-Wave is a good system for lighting and automation control. It also offers mesh networking. My only issue is the ability to have security devices route only through battery-backed-up repeaters. If AC fails, you do not want the security system to stop. I have not checked in a while to see whether Zensys has addressed this issue for security.
 