
How to get correct MTU size for LTE Cat-1 UDP packets

I am trying to stream a series of 64-byte data packets over UDP very quickly using a cellular XBee. The XBee is combining my 64-byte packets into a larger packet, which is fine; however, the data portion of the packet is 1500 bytes, which is larger than the 1472 bytes allowed on a network with an MTU of 1500 (28 bytes are reserved for the IP and UDP headers). As a result, my Linux kernel is outputting a bunch of errors that look like 'UDP, bad length 1500 > 1472' and is dropping the packets. Has anyone run into this before? I think I need to lower the MTU on the XBee, but I don't see a way to do this directly from the XBee firmware. Perhaps go into Bypass mode and use the Telit AT commands to change the MTU to something like 1400? But would that value get saved?
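(For reference, the 1472-byte limit comes straight from the IPv4 + UDP header overhead; a quick sketch of the arithmetic:)

```python
# Maximum UDP payload on a 1500-byte-MTU link: the 28 "reserved" bytes
# are the IPv4 header (20 bytes, no options) plus the UDP header (8 bytes).
MTU = 1500
IPV4_HEADER = 20
UDP_HEADER = 8

max_payload = MTU - IPV4_HEADER - UDP_HEADER
print(max_payload)  # 1472
```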

Any help would be appreciated.
asked Sep 25, 2019 in XBee Cellular by JRDavisuf New to the Community (0 points)


1 Answer

Are you using transparent mode, API mode, or MicroPython?

It sounds like you are using transparent mode, since you say the XBee is combining the packets into a single larger packet. If that is the case, I would suggest you use API mode instead if possible, since API mode lets you control the size of each UDP packet precisely. Transparent mode is really designed as a serial-line replacement tool; I assume that is what you are using because, if the ATRO "packetization timeout" is not observed between your 64-byte packets, the XBee treats the input as a single datagram.
answered Sep 26, 2019 by tckr Veteran of the Digi Community (404 points)
Yes, sorry, I should have mentioned that I'm trying to do this the easy way :) ... aka transparent mode.

I am observing the ATRO time. In fact, just now for testing, I put in a 5 ms delay (smaller times didn't seem to affect things much) before putting each packet on the UART. At 115200 baud, this should be well above 3 character times (< 1 ms), and my packets are still being combined, although now they average about 1000 bytes, which does work around my > 1500-byte problem reasonably well.

To get the UDP packet to be the actual size of the data packet I'm putting on the UART, I need to drop my sample rate to 8 Hz.

So at this point, I'm not sure that ATRO is actually doing anything at all.

I'll look into API mode; however, I'm not entirely clear on where the packet combining is happening, since ATRO doesn't seem to be adhered to. If it's actually the cellular chip that's combining the packets, I could see API mode combining them too.
I'm not sure why you're observing the ATRO behavior you are reporting, but the only things that influence how data is grouped together in transparent mode are:
- ATRO (silence time);
- ATTD (text delimiter; should be set to 0 in your case, which is the default);
- reaching the maximum packet size when no silence time is seen, which for UDP in our system is 1500 bytes; or
- entering command mode with +++, at which point we send the pending "packet". (Of course, respecting the guard times for the +++ escape sequence will likely trigger the RO timeout anyway.)

Using API mode will completely prevent the packet combining. When using transparent mode, to send, let's say, "hello" to 11.22.33.44 port 0x5678, you need to configure:
- ATAP 0
- ATIP 0
- ATDL 11.22.33.44
- ATDE 5678
- ATCN
then send in "hello" and wait at least the ATRO silence time.
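As a sketch (my own helper, not anything from Digi's libraries), the command sequence above can be generated programmatically; note that ATDE takes the destination port in hex:

```python
# Build the command-mode byte sequences for the transparent-mode setup above.
# Assumes default guard times: enter command mode with 1 s of silence,
# then b"+++", then 1 s of silence, before writing these to the UART.
def transparent_setup_cmds(dest_ip, dest_port):
    return [
        b"ATAP 0\r",                          # transparent mode
        b"ATIP 0\r",                          # IP protocol: 0 = UDP
        b"ATDL " + dest_ip.encode() + b"\r",  # destination address
        b"ATDE %X\r" % dest_port,             # destination port, in hex
        b"ATCN\r",                            # exit command mode
    ]

cmds = transparent_setup_cmds("11.22.33.44", 0x5678)
print(cmds[3])  # b'ATDE 5678\r'
```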

In API mode, you would send a complete API frame that contains all of that information, including the size of the payload, as follows:

7E 00 11 20 01 0B 16 21 2C 56 78 00 00 00 00 48 65 6C 6C 6F AE

You can use XCTU's frame interpreter to parse that frame, but rest assured that API frame says to send "Hello" to 11.22.33.44 port 0x5678.
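To see how that frame is put together, here is a sketch in Python that rebuilds it from its fields (frame type 0x20 is the Transmit Request: IPv4 frame; the checksum is 0xFF minus the low byte of the sum of the frame-data bytes):

```python
def build_tx_ipv4_frame(dest_ip, dest_port, payload, frame_id=0x01):
    # Frame data: type 0x20 (TX Request: IPv4), frame ID, destination IPv4,
    # destination port, source port (0 = let the module pick), protocol
    # (0 = UDP), transmit options, then the payload.
    data = bytes([0x20, frame_id])
    data += bytes(int(octet) for octet in dest_ip.split("."))
    data += dest_port.to_bytes(2, "big")
    data += (0).to_bytes(2, "big")        # source port
    data += bytes([0x00])                 # protocol: UDP
    data += bytes([0x00])                 # transmit options
    data += payload
    checksum = 0xFF - (sum(data) & 0xFF)
    return b"\x7e" + len(data).to_bytes(2, "big") + data + bytes([checksum])

frame = build_tx_ipv4_frame("11.22.33.44", 0x5678, b"Hello")
print(frame.hex(" ").upper())
# 7E 00 11 20 01 0B 16 21 2C 56 78 00 00 00 00 48 65 6C 6C 6F AE
```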

Hopefully from this information you can see why I pointed out that transparent mode is designed as a serial line replacement tool.

If you find that using API mode as described here still results in too-large UDP datagrams arriving at your server, then something on the network between the XBee and your server must be recombining the datagrams.
Finally got around to making my code handle either API mode 0 or 2.  As it turns out for my application, API mode 2 "loses" significantly more data than API mode 0.  Is this to be expected?

(By "loses", I mean that the XBee is unable to stream as much data because the CTS line is high more often.)

This would kind of make sense to me because the xbee would need to process each packet individually for destination....but I just wanted to confirm.
Yes, the CTS line going high more often in API mode 2 versus 0 does make sense, mainly because of the inherent added overhead of each API frame (delimiter, length, frame type, addressing fields), plus the byte escaping that AP=2 performs on control characters. But data is only "lost" if you don't respect the CTS signal. And the added "overhead" of API mode is well worth it: you gain visibility of transmit errors and of the source of any received data, the ability to control the destination of each transmit, and additional features only available via API mode.

Transparent mode (API mode 0) is useful if your application ONLY needs a "serial line replacement", i.e. making it appear that the UART of one XBee is connected to the UART of another (or, in the case of XBee Cellular devices, that the UART is 'connected' to the selected TCP/UDP server). But when using XBee Cellular with UDP, there is no 'connection'. As such, the "serial line replacement" design makes it much harder to reason about what data is being sent when, to control the sizes of the packets, and so on. That is another reason API mode (AP = 1 or 2) is superior.
I can definitely see why API mode 2 is better in theory; however, in practice it's almost unusable for the relatively "high speed" streaming I'm doing.

I'm using a small MCU with very little memory available for buffering. With this limited memory and transparent mode, I can sample at 128 Hz and UDP-stream 64 bytes per sample, generally without missing any samples (not counting the MTU problem I originally mentioned) once the connection is established. (I do often miss the initial samples, so I've resorted to sending fake data to establish the connection when I set up the initial UDP socket.)

If I switch to API mode 2, about 90% less data makes it to the endpoint, as CTS stays high so often that the internal buffer on the MCU gets overwritten before the data can be sent.
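For what it's worth, a back-of-envelope check of the UART budget (a sketch; the 16-byte per-frame overhead is my assumption based on the TX IPv4 frame layout, before any AP=2 escaping) shows how close API mode runs to the 115200-baud limit at 128 Hz:

```python
# UART budget at 115200 baud, assuming 8-N-1 framing (10 bits per byte).
# Assumed API overhead per frame: delimiter(1) + length(2)
# + frame type/ID/address/ports/protocol/options(12) + checksum(1) = 16,
# and AP=2 escaping adds a further variable amount on top of that.
BAUD = 115200
UART_BYTES_PER_SEC = BAUD // 10   # 11520 B/s
SAMPLE_RATE = 128                 # samples per second
PAYLOAD = 64                      # bytes per sample
API_OVERHEAD = 16                 # bytes per frame (before escaping)

transparent_rate = SAMPLE_RATE * PAYLOAD               # 8192 B/s
api_rate = SAMPLE_RATE * (PAYLOAD + API_OVERHEAD)      # 10240 B/s
print(transparent_rate, api_rate, UART_BYTES_PER_SEC)  # 8192 10240 11520
```

Transparent mode uses roughly 70% of the UART, while unescaped API frames already use about 90%; any escaped bytes push it past the limit, so heavy CTS throttling at that rate is what I'd expect.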

So while, yes, API mode 2 provides some nice features... getting my data to the endpoint is more important :)

Anyway, for now, I guess I'll just stick with transparent mode and hope the MTU bug gets fixed some day...
...