E1.31 bandwidth and DMX refresh rate

ptone
So I've recently received a couple test units of stageape.com RGB LED par lights.

These are pretty low end gear, but the price is right.

I'm controlling them through OLA and an EthCon DR4.

In the software (custom Python) I was initially using a refresh rate of 40fps, as that is my understanding of the fastest a full 512-slot universe can refresh. However, when I dim these LED par lights at that rate, the step change is VERY obvious. As near as I understand it, this is in part a function of the cheaper controller in the light: it is not doing any effective PWM dimming between steps.

So, with my single test light, I tried increasing the framerate - going as high as 400fps.

At somewhere close to 200fps, the dim became smooth as silk. But let's use a framerate 4x what I started with, 160fps - it was still good. By my basic understanding, that would roughly limit me to 128 channels of DMX per universe at that rate. With the EthCon DR4 and 4 universes, this still gives me a full 512 channels.
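Here's the back-of-envelope behind that (just holding the total slot rate constant and ignoring per-frame overhead, so it's only approximate):

```python
# Rough trade of channels vs. refresh rate, holding the DMX slot
# budget constant. Ignores per-frame overhead (break, MAB, start
# code), so real limits are a bit higher than this suggests.
BUDGET = 512 * 40                 # ~20480 slots/sec: full universe at 40fps
for fps in (40, 160, 200, 400):
    print(f"{fps:3} fps -> ~{BUDGET // fps} channels per universe")
# 160 fps -> ~128 channels, which matches where my sweet spot landed
```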

My question is about overdriving. What are the refresh limits of the EthCon? It seems like there would be one limit on bandwidth (only so many bits per second) and another on refresh rate (regardless of frame size).

How would one know if the EthCon is being overdriven? Would it just drop packets, or buffer them? I'm guessing that's what those other columns in the slave stats are about.

Also - now that I might use all of the universes, I might be running some longer stretches of Cat5. Are there any line length limits on the Cat5 runs from the EthCon? I'm then converting to 3-pin XLR before going direct into some DMX fixtures (not a dimmer controller).

Any wisdom here appreciated.

-Preston
 
When testing your DR4, log into it (via HTTP) and check the statistics to see how many overrun errors you are getting.

I've only started playing around with mine and some sACN code I wrote (in Delphi), but found the DR4 showed slave overrun errors when sending a full universe to it at 100Hz. I dropped the rate back and all seemed OK.
 
OK - on doing some more testing.

The DR4 is hitting overruns with a full 512 at a refresh rate as low as 60Hz.

However, with fewer channels, this can be driven much higher. With only a few channels, I've gone as high as 500Hz and still not had overruns.

My sweet spot seems to be 128 channels at 160Hz.

I'm not sure how the master part of the device will handle 4 universes of 128 channels at 160Hz.

I'm also seeing a missed signal with some regularity - I'm not sure what the refresh rate of the keep-alive is; in my code it was 0.5 sec.

What I was seeing was that on a dim, a light would get stuck at the last dim level, and would only go to zero with the next keep-alive - so there would be this 0.5 second delay before going to black. I'm thinking this could be a missed packet in the EthCon, as I think its keep-alive is more frequent than 2Hz.

When I simply pull out the logic in my program that only sends the DMX if it has changed - and send a full frame on every pass through the loop - the responsiveness is much better.
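For reference, the loop now looks roughly like this (a simplified sketch using OLA's Python ClientWrapper; the universe number, channel count, and tick rate are from my test setup, and the real code fills `frame` in from the effects engine):

```python
# Simplified send loop: push the whole frame every tick instead of
# only sending on change. Universe/channels/tick match my test rig.
import array
from ola.ClientWrapper import ClientWrapper

UNIVERSE = 1
CHANNELS = 128
TICK_MS = 6                                # ~160Hz (AddEvent takes whole ms)

wrapper = ClientWrapper()
frame = array.array('B', [0] * CHANNELS)   # effects code updates this elsewhere

def on_sent(status):
    if not status.Succeeded():
        wrapper.Stop()

def send_frame():
    wrapper.AddEvent(TICK_MS, send_frame)  # reschedule first to keep cadence
    # No 'did anything change?' test - always send the full frame.
    wrapper.Client().SendDmx(UNIVERSE, frame, on_sent)

wrapper.AddEvent(TICK_MS, send_frame)
wrapper.Run()
```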

I'm still seeing some step-down flicker when going from a low level to off on lights with a higher DMX start address. I'm assuming it has something to do with the cheaper circuits in the LED lights splitting their effort between watching the DMX signal for their byte and doing PWM dimming of the LEDs. Lower-addressed lights get their byte and get to work, while higher-addressed fixtures have to watch the DMX stream longer before spending time on dimming, so the dimming is more stepped. Does this sound right to others?

This flicker is less, but still there at the higher refresh rates - hence the push to go > 100Hz.

-Preston
 
Preston et al -

Everything you are seeing and/or reporting seems in line with the realities of the real world and the constraints of the MANY levels of protocols and coprocessors in the chain of communication.

In most cases you will need to look at the aggregate data stream (frames-per-second x bytes per frame) for each slave, plus the total throughput for the unit, to see if the real world matches your hopes and expectations.

Without setting 'rules' I have tried to give as much flexibility in the ECG design as possible, and had an original design target that I believe I have exceeded, and even far exceeded with the ECG-M32MX and DR4 upgrade to a 32-bit processor. BUT it is still not the ultimate answer for every situation. YMMV, and there will be other software and hardware options available from us and others that may be better suited to a specific task or combination of hardware.

--- sorry for the length of the next section(s) - some background to see how we got here ---

First, a little background on goals and expectations (some not met) as we rapidly progressed down this product development line.

In early March 2010 we saw what we thought was a need for an Ethernet-based communication interface to replace/supplement USB dongles for Renard and DMX connectivity to Vixen. We set an initial target of handling 8 full 512-slot universes at a 25ms update rate (40fps). We were already working with PIC 18F 10Base-T technology for other projects and thought there might just be enough horsepower to squeeze that through. We first worked on a new plugin for Vixen. We investigated several protocols and settled on the newly ratified E1.31 protocol. We worked for about 45 days and could not get a reliable data stream that could handle 4 universes. 3 were fine, but 4 pushed the poor little 8-bit processor over its limit.

So around April 15th we scrapped the hardware design and rapidly built up a working prototype using a PIC24HJ256GP206A with an ENC624J600 Ethernet co-processor and up to eight separate PIC24HJ64GP202 slave co-processors, so we could burst the universe packet to each slave and have it handle the actual serial transfer, along with the differences between Renard and DMX and the future implementation of a robust RDM protocol capability. From this design the ECG-DMXRen8 w/M24H was born. It was designed to be DIY for most of the product, to allow for those people who love the smell of solder flux in their hair. Since we were designing for 8 slaves with 1 full universe per slave, I set a target of 12 full universes of 512 slots every 25ms and tested that for days on end without dropping a packet. I wanted the extra headroom in case future tweaks of the code introduced some additional latency.

During final testing of the DMXRen8 I could see that although some people might like a DIY version, many more would like an assembled and tested 'plug-n-play' solution. I had also discovered that I could practically drop-in replace the PIC24HJ256GP206A with a PIC32MX340F512H. The cost/size/design tradeoffs, in my opinion, also pointed to a 4-slave version being a good mix. So we now had a master processor with 3-4 times the processing power, twice the flash, and four times the RAM, and we were only trying to drive 4 slaves across a shorter and cleaner signal path. Little did I know what I had 'allowed'. So we progressed as rapidly as possible with the ECG-DR4 and designed enclosures and packaging with more of a consumer feel. We also had to revamp the software to support the two different architectures from one base, to ease continued development and not orphan the DMXRen8 early adopters. To keep this from being a bombshell to prospective buyers we announced the ECG-DR4 the day before formally putting the DMXRen8 on sale, so people could decide if they wanted to wait for the newer model. We also announced that ALL M24H units which had been shipped would get a free M32MX upgrade as soon as they were available, to bring the DMXRen8 up to a similar master processor design.

Then while working with Phil & Tabor on some ideas for their TPxxxx product line we discussed that the standard DMX data rate of 250Kbps was robust but perhaps antiquated in its targets. WE AGREE WITH STANDARDS!!! They are the glue that allows for interoperability between all these products, but we also feel that additional enhancements for a specific purpose, as an optional feature, are sometimes the best way to achieve growth even if digressing from the standards. So I proposed a hyperDMX mode that is optional in the ECG product line. ANY DMX receiver that wants to implement this optional mode can increase its throughput down a single RS-485 line to higher speeds and interleave semi-standard multiple universes within the same data stream. This is in its infancy and I am waiting for feedback as more people start to play with it. The reason I bring it up is that we have tested 16 full universes at 25ms of throughput (4 full universes per slave) with no apparent packet loss. So the 32-bit processor and extra speed necessary for 16+ full universes is there and should not present a bottleneck.

--- end of background - now on to the flow and technical spec ---

So now for a review:

An ECG is an Ethernet Controller Gateway. Being a gateway it has to be concerned with the data formats and transmission speeds of two different mediums, plus any latencies introduced by its own hardware and software architecture. Its primary input (although future units will be bi-directional) is 100Base-T Ethernet. Its primary output (at this time) is 250Kbps DMX RS485 transmissions, for interoperability with most common DMX equipment. It also supports the Renard RS485 protocol and its new hyperDMX, and will include RDM as an input protocol. Future units in the pro line will also act as mergers/routers/separators/repeaters, so they will be more bi-directional on the RS485 and Ethernet.

In MOST cases the most restrictive bottleneck is the 250Kbps DMX signal. One thing that I like seeing Preston do is divorcing himself from the assumption that a universe MUST be 512 slots. The specification allows for different sizes of universes (1-512 slots) but DOES recommend that for a given physical medium the same size be used during the entire session. Then they go on to 'break' this recommendation with the introduction of RDM, which uses different sized packets. But BEWARE!! There may be very simple, inexpensive DMX receivers that will operate erratically if given anything but a full-sized universe.

With that said, Preston's math seems proper. If doing a full universe it is impossible to drive at much more than 40fps (technically 44fps, but I always would like to see a little wiggle room for the other latencies that I will discuss below). If you reduce the size of the universe then you can, theoretically, increase the fps. BUT it is not 100% linear. There are some fixed parts of the packet - the MTBF, MAB, start code, etc. - that do not shrink or go away as you reduce the size of the frame. For 128 or more slots it is still a small percentage, but if you go lower (64 slots) don't expect an 8:1 ratio. I can dig up the numbers, or you can find the DMX packet format on the net and see these items.
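To make that concrete, here is the packet math as a quick sketch (using the spec minimum break and MAB times and no inter-slot marks - real transmitters often use longer values, so treat these as best-case ceilings):

```python
# Best-case DMX refresh math. Assumes spec-minimum break (88us) and
# MAB (8us) with zero inter-slot mark time; real transmitters are
# usually slower, so these are ceilings, not promises.
BIT_US = 1e6 / 250_000        # 4us per bit at 250Kbps
SLOT_US = 11 * BIT_US         # 1 start + 8 data + 2 stop bits = 44us per slot
BREAK_US, MAB_US = 88.0, 8.0

def max_fps(slots):
    frame_us = BREAK_US + MAB_US + (slots + 1) * SLOT_US  # +1 for start code
    return 1e6 / frame_us

for slots in (512, 128, 64, 8):
    print(f"{slots:3} slots -> {max_fps(slots):7.1f} fps max")
# 512 ->   44.1  (the 'technically 44fps' above)
# 128 ->  173.2  (so Preston's 160Hz at 128 slots fits)
#  64 ->  338.3  (NOT 8x the 512-slot rate)
#   8 -> 2032.5  (the 'ridiculous' case below)
```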

But to answer Preston's question: I don't believe the overheads involved before this level will take away more than 10% of total throughput unless you try to go to ridiculous levels like an 8-slot universe at 2000fps (even if the math said you could).

What are the overheads, where are they, and how big are they? Sorry, I don't know 'exactly', but let's at least define them:

1) The application:

a) Vixen works in lock step on a clock, but it doesn't seem to clock very accurately IMHO. More a limit of Windows than Vixen. At the end of a session the Vixen plugin can report total packet transmissions, so you can trace that through the counters in the ECG and verify packet transit.
b) LSP seems to be able to do some VERY smooth transitions, but in early tests it was found to sometimes be sending 1000s of fps out the E1.31, which did overdrive the ECG. I believe that has been fixed and/or is tunable. The LSP plugin includes the same packet counters for testing.
c) OLA, MagicQ, etc. - sorry, no knowledge. But for ALL of these, the use of Wireshark or some Ethernet monitor to really see the packet bursts and timings is an important testing procedure.

2) The operating system: every operating system will have some variation in its hardware/software TCP/IP stack, so it could add unknown latencies.

3) Unicast vs. multicast - I don't want to start this argument again. I allow for both as a sender AND receiver. It is IMHO required by the spec as a receiver and is IMHO better as a transmitter. You must realize that the ECG is still just an embedded microcontroller and its Ethernet capabilities are not as robust as a Pentium running at 2GHz. So if I can reduce the load that it sees and has to filter, I believe it is worth the extra work. v1.1 and v1.3.0 are simple promiscuous multicast receivers and MUST examine every multicast packet to see if it should process it. In a large system with 16 ECG-DR4s running MANY pixels this could push the processor over its limit or start to cause some packet loss. Again, testing and checking the packet filter counters that we display on the web interface can help to identify these problems. v1.3.1 (in final test) adds the use of the hash table filters that the ENC624J600 provides, but they are limited and will still allow a lot of traffic through. These will be displayed as "NMPkt" ("Not My Packet") counters in the v1.3.1 display.

4) Physical environment (network) - when driving a lot of lights in a high frame rate environment, I believe that mixing this traffic with your teenager watching their online videos would be a mistake. So I think the show network and/or computer should be totally isolated, or have dual Ethernet cards, or a router, or something robust to isolate traffic. This is all UDP; if a packet doesn't make it, it gets dropped on the floor.

5) Physical environment (switches, routers, etc.) - don't forget unicast vs. multicast; a lot of cheap switches don't handle it well, and with our current simple promiscuous and filtered multicast it could add latency. Also think about wire-speed vs. store-and-forward switches/routers. I don't think that will add much latency, but a millisecond here or there starts to add up if you are trying to do 100fps updates.

6) Now for the ECG layers -

a) The ENC624J600 is a coprocessor that 'watches' the Ethernet data and performs the actual reception and transmission from some internal buffers. These buffers are only about 28KBytes and are shared by reception and transmission. The ENC624J600 does include filters that allow it to drop things on the floor that it should discard, but this filtering is simple and it must still store a lot of packets in the receive buffers for the master controller to sift through. In v1.1 and v1.3.0 it allows all unicast, broadcast, and ALL multicast. v1.3.1 adds a multicast hash table filter that only includes 64 entries, so it can still let a lot of traffic through if the hash table calculation for multiple universes matches. The new counters and the hash table values are available in status screens to help reduce this load if needed. So there is some unknown latency in this buffering.

b) The PIC32MX340F512H or M24H is the master processor. It runs in a fairly simple main polling loop dictated by Microchip's TCP/IP stack. Besides a simple tick counter it uses no interrupts or DMA. It polls around looking for work to do, either processing incoming data or transmitting outgoing data. All communication with the ENC624J600 co-processor is in blocking mode over an 8-bit parallel bus at high speed. The UDP data for the E1.31 protocol is given top priority after all needed TCP/IP stack processing is complete. It then, in blocking mode, sends the universe to the appropriate slave(s) through a shared SPI connection. This connection can take up to 1ms to send a full universe but will take less time (95% linear) for smaller universes. This is probably a limiting factor in the current implementation, but should allow 16 full universes every 25ms with no packet loss, which is our target (see the quick sanity check after this list).

c) The PIC24HJ64GP202 slave processors - each slave accepts/replies to communication over its SPI connection from the master via interrupt-driven routines, and then processes all transmissions out the RS485 in a tight main polling loop that tracks transmission state. The keep-alives for DMX and Renard come into play when no new universe data has been received. The keep-alive actions were added as fixed items in v1.3.0 and are user-tunable in v1.3.1.
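As a quick sanity check on that SPI budget (the ~1ms-per-universe figure in item b is an estimate, so this is just the arithmetic, not a measurement):

```python
# Arithmetic check: blocking SPI time vs. the 25ms (40fps) window,
# assuming ~1ms per full universe over SPI as estimated above.
SPI_MS_PER_UNIVERSE = 1.0
WINDOW_MS = 25.0                # one 40fps refresh interval
universes = 16                  # hyperDMX test case: 4 per slave x 4 slaves
used = universes * SPI_MS_PER_UNIVERSE
print(f"{universes} universes -> ~{used:.0f}ms of {WINDOW_MS:.0f}ms "
      f"({100 * used / WINDOW_MS:.0f}% of the window)")   # -> 64%
```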

All of the code is written in C and is compiled with only simple optimization level 1, since that is all that is available in the free or student version. Replacing some of the polled routines with interrupt- or DMA-driven routines could squeeze out some more processing speed if needed in the future.

For our newer pro and consumer versions we are working on POE, total power isolation, faster processors, more channels, and, believe it or not, fewer channels. We think that a small dongle-sized unit with a single DMX line can be used for a lot of simple retrofit use. Plus with only one output the processor should be sitting there twiddling its thumbs most of the time. So expect an ECGpro-DMX1, ECGpro-DMX2, ECG-DR2, ECGpro-DMX4, ECGpro-DMX8 coming soon. We may also depart from the ECG and add E1.31 direct pixel controllers in the same footprints. The pro versions will be available with Neutrik XLR-3 or XLR-5 pinouts for easy connection, as well as RJ-45 as allowed by the specification for 'permanent' installations. Plus a full complement of firmware upgrades for merge/split/route/repeat of DMX slots.

-Ed
 
Thanks Ed for that comprehensive reply. Knowing more of the EthConGateway background is helpful.

My DR4 showed slave overrun errors when I really hammered it with the simple sACN test code I wrote. Can you elaborate on what "slave overrun errors" are, please?
 
j1sys said:
Preston et al -

Everything you are seeing and/or reporting seems in line with the realities of the real world and the constraints of the MANY levels of protocols and coprocessors in the chain of communication.

In most cases you will need to look at the aggregate data stream (frames-per-second x bytes per frame) for each slave, plus the total throughput for the unit, to see if the real world matches your hopes and expectations.

Without setting 'rules' I have tried to give as much flexibility in the ECG design as possible, and had an original design target that I believe I have exceeded, and even far exceeded with the ECG-M32MX and DR4 upgrade to a 32-bit processor. BUT it is still not the ultimate answer for every situation. YMMV, and there will be other software and hardware options available from us and others that may be better suited to a specific task or combination of hardware.

What, you mean I shouldn't expect this to be able to handle 1MB packets at 500kHz?? Really?

Seriously - you should be very proud of this sweet gizmo. I'm quite pleased with it.

j1sys said:
So I proposed a hyperDMX mode that is optional in the ECG product line. ANY DMX receiver that wants to implement this optional mode can increase its throughput down a single RS-485 line to higher speeds and interleave semi-standard multiple universes within the same data stream. This is in its infancy and I am waiting for feedback as more people start to play with it.

It would be interesting to see how hyperDMX compares to RJ's new PixelNet protocol - I bet there are some similarities.

Is there a spec for hyperDMX published somewhere, or is it available on request?


j1sys said:
With that said, Preston's math seems proper. If doing a full universe it is impossible to drive at much more than 40fps (technically 44fps, but I always would like to see a little wiggle room for the other latencies that I will discuss below). If you reduce the size of the universe then you can, theoretically, increase the fps. BUT it is not 100% linear.

I knew it wouldn't be linear - but close enough. I figured if someone really wanted to, they could calculate all the possibilities and graph them out as a reference tool - but I was just seeking some practical ranges to work with.

j1sys said:
c) OLA, MagicQ, etc. - sorry, no knowledge. But for ALL of these, the use of Wireshark or some Ethernet monitor to really see the packet bursts and timings is an important testing procedure.

A couple of quick notes re OLA, which is what I'm working with. It's capable of generating packets at a high frequency, but one feature it has is that you can throttle a port at the library config level. So if you have multiple output ports (E1.31, USB dongle, etc.) with different universes mapped to different ports, you can send all your data at the highest frequency and the library will throttle each port appropriately.

j1sys said:
4) Physical environment (network) - when driving a lot of lights in a high frame rate environment, I believe that mixing this traffic with your teenager watching their online videos would be a mistake. So I think the show network and/or computer should be totally isolated, or have dual Ethernet cards, or a router, or something robust to isolate traffic. This is all UDP; if a packet doesn't make it, it gets dropped on the floor.

I'm not arguing FOR mixing this traffic, but will point out that at a high frame rate, if one packet gets dropped, it won't be long before another comes along and sets things straight...

j1sys said:
6) Now for the ECG layers -

Excellent information - thanks.

j1sys said:
For our newer pro and consumer versions we are working on POE, total power isolation, faster processors, more channels, and, believe it or not, fewer channels. We think that a small dongle-sized unit with a single DMX line can be used for a lot of simple retrofit use. Plus with only one output the processor should be sitting there twiddling its thumbs most of the time. So expect an ECGpro-DMX1, ECGpro-DMX2, ECG-DR2, ECGpro-DMX4, ECGpro-DMX8 coming soon. We may also depart from the ECG and add E1.31 direct pixel controllers in the same footprints.

A single DMX dongle form factor with POE would also be a great way to localize the DMX signal to a string of fixtures, letting the Ethernet carry the lighting control over the greater distances. Very cool, especially if the pricing is low enough to get four 1xDMX dongles for a reasonable premium over one DR4. Part of my newb assumption here is that DMX over Cat5 doesn't do well over long runs - how would it compare to Ethernet? Either way, DMX over Cat5 does not go over a switch ;-)

I'm still a little confused as to why these particular fixtures flicker so badly on dims at 40fps at a channel as low as 6 - oh well.

I have two areas where my software hits issues trying to go too fast. First, I'm calculating dim values, colors, etc. of lights in close to real time, and if I have too many calculations (lights x effects, etc.) I'll hit a CPU bottleneck where the light values won't be updated in time for the next frame. In that case I'll have an option to preprocess the show into raw DMX data and/or a Vixen file. Second, depending on channel counts, I can overdrive the EthCon. I'm considering having a dynamic framerate that adjusts itself based on these factors. For the former, going as fast as possible will be the name of the game, with a warning if the speed drops to a perceptible rate. For the latter, I've thought about spinning off a thread that polls the SlaveStats page at 1-2Hz, scrapes the overrun data, and scales the framerate up or down accordingly (see the sketch below). I'm not sure what portion of the EthCon processor would be burdened by repeatedly serving that page over HTTP. This is a low-priority feature, more just an interesting "autotune" idea for finding the best framerate.
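Here's roughly what I have in mind for that autotune thread (very much a sketch; the stats URL and the overrun parsing are placeholders, since I haven't studied the DR4's actual SlaveStats markup):

```python
# Sketch of the framerate 'autotune' idea: poll the EthCon's stats
# page and back the framerate off when overrun counters grow.
# STATS_URL and the regex are placeholders, not the DR4's real markup.
import re
import time
import urllib.request

STATS_URL = "http://192.168.1.50/slavestats"   # placeholder address/path

def read_overruns():
    html = urllib.request.urlopen(STATS_URL, timeout=1).read().decode()
    return sum(int(n) for n in re.findall(r"overrun\D*(\d+)", html, re.I))

def autotune(get_fps, set_fps, poll_s=1.0):
    last = read_overruns()
    while True:
        time.sleep(poll_s)
        now = read_overruns()
        if now > last:
            set_fps(max(40, get_fps() * 0.8))    # overruns grew: back off 20%
        else:
            set_fps(min(400, get_fps() * 1.05))  # clean interval: creep up 5%
        last = now
```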

Thanks again for the great response.

-Preston
 
ptone said:
Part of my newb assumption here is that DMX over Cat5 doesn't do well over long runs - how would it compare to Ethernet?
DMX over Cat5 should work over much greater distances than Ethernet. I don't know exactly what the limit is, but I'd guess many hundreds of metres. Ethernet has a 100 metre maximum cable segment as far as I know.
 
j1sys said:
For our newer pro and consumer versions we are working on POE, total power isolation, faster processors, more channels, and, believe it or not, fewer channels. We think that a small dongle-sized unit with a single DMX line can be used for a lot of simple retrofit use. Plus with only one output the processor should be sitting there twiddling its thumbs most of the time. So expect an ECGpro-DMX1, ECGpro-DMX2, ECG-DR2, ECGpro-DMX4, ECGpro-DMX8 coming soon. We may also depart from the ECG and add E1.31 direct pixel controllers in the same footprints. The pro versions will be available with Neutrik XLR-3 or XLR-5 pinouts for easy connection, as well as RJ-45 as allowed by the specification for 'permanent' installations. Plus a full complement of firmware upgrades for merge/split/route/repeat of DMX slots.

Ed, any thoughts on having a DR1, DR2, DR4, etc. with a built-in 802.11n wireless network card? It could really help support the growing channel counts everyone seems to be doing. Thanks, Chris
 
I won't say we won't consider it, but IMHO wireless and high slot counts and refresh rates don't mix. This is simple UDP at high update rates and doesn't necessarily fit the wireless footprint. Also, again IMHO, you need power distributed to these points, and distributing Ethernet signals alongside seems logical to me. One reason we are looking at POE is to eliminate the need for separate power for the ECG. We may even look at POE driving small strings of pixels, thus doing everything over one cable.

-Ed
 