
S3B 900HP - RAM allocation issues for a 1:N coordinator node architecture

Good morning!

I am having a blast with these modules after re-familiarizing myself with these platforms. One concern I have before release, however, is the scalability of my architecture as it pertains to system memory.

I am using the following:
1 x S3B-900HP (coord) for managing a node table and other custom functions
N x S3B-900HP (routers) for synchronous Tx and asynchronous Rx of endpoint data

The ideal device count N would be about 50 devices, but the S08QE32 platform (2048 B RAM) imposes a pretty serious limitation on that. This seems to be mostly attributable to the structure of the node_table, an array of xbee_node_id_t objects, and specifically to the node_info member, especially if XBEE_DISC_MAX_NODEID_LEN is kept at its default of 20 chars (bytes).

It seems that if you keep these defaults, you'd be very hard pressed to maintain a client architecture of more than 80 or 90 nodes with even the leanest payload, while saving room for a little overhead. What is the best practice for avoiding this pitfall? Should I reduce my node_id character array size? I can keep it to about 8 characters if needed, which would help quite a bit; but even configured that way, a hard limit remains.
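
My back-of-envelope, assuming the entry layout from the library's discovery header (roughly a 64-bit address, two 16-bit addresses, a type byte, and the node_info string; please correct me if the struct differs):

    8 + 2 + 2 + 1 + 21 (20 chars + NUL)  =  34 B per entry
    50 entries x 34 B  =  1700 B, i.e. ~83% of the QE32's 2048 B,
    before any stack, serial buffers, or library state.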

Is this where FAR memory comes into the discussion? Where can I find a good explanation of it as it pertains to the 900HP?

Thanks,
asked Dec 5, 2020 in DigiMesh Proprietary Mesh Networking by D2K New to the Community (12 points)


1 Answer

D2K,

When you get to networks of this size, it is recommended that you use source routing instead. That allows you to maintain your own external routing table.
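
As a rough illustration (the names and layout here are hypothetical, not an API from Digi's library), an application-maintained route table might look something like:

#include <stdint.h>

/* Hypothetical sketch only; not a type from the Digi library. */
typedef struct app_route_t {
    uint8_t dest_addr_be[8];   /* 64-bit destination address, big-endian */
    uint8_t hop_count;         /* number of intermediate hops in use */
    uint8_t hops_be[4][8];     /* up to four intermediate 64-bit addresses */
} app_route_t;

static app_route_t route_table[16];   /* ~41 B per entry; size to fit */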
answered Dec 8, 2020 by mvut Veteran of the Digi Community (14,249 points)
From a memory management perspective, what do you think would be consuming the majority of the RAM? Is it simply the array size (N) of node_table, or would the routing table you mention also tax RAM significantly?

What if I am almost 100% certain I can keep my node count under 30? Is it advisable to reduce XBEE_DISC_MAX_NODEID_LEN if my application can afford it? My transmit and receive buffers are about 65 chars each; I could probably trim some of that.
I am sorry, but I am not sure I understand. What code are you working from? Can you provide a link to it?
A very similar example is in the XBee SDK. If you look at _nodetable.c within the digimesh_unicast_demo_monitor example project, it initializes node_table:

xbee_node_id_t node_table[NODE_TABLE_SIZE] = { { { { 0 } } } };

... where NODE_TABLE_SIZE is defined in _nodetable.h.

I am saying that if I increase that number to 30 or so, I cannot compile a Release build due to RAM allocation errors. I wasn't sure if there is a way to get around this.
I do not know. What I would do is build the app and run it on your PC, and see what size the table gets to with 1, 5, and 10 nodes. Then you should be able to predict the size needed for the specific network you create and what you put in that table.
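
Something like this minimal sketch would do it (assuming the library's xbee/discovery.h is on your include path; struct packing on a PC compiler will differ from the HCS08 target, so treat the result as an estimate):

// Host-side size check for the node table.
#include <stdio.h>
#include "xbee/discovery.h"

#define NODE_TABLE_SIZE 50              /* network size under test */

static xbee_node_id_t node_table[NODE_TABLE_SIZE];

int main(void)
{
    printf("sizeof(xbee_node_id_t) = %u bytes\n",
        (unsigned) sizeof(xbee_node_id_t));
    printf("node_table[%d] uses %u bytes (target has 2048 B of RAM)\n",
        NODE_TABLE_SIZE, (unsigned) sizeof node_table);
    return 0;
}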
Ok. I'm going to attempt to pare down the sizing anyway, so hopefully that will give reasonable results. But am I missing something here? Maybe most people don't use more than 15-20 nodes with these devices, but...

1.) If every node must be added to the node table, wouldn't there eventually be a limit of maybe 30-40 devices, not just from a routing/broadcasting perspective but from the 900HP's memory as well?

2.) Is it safe to make a simple modification to discovery.h for this purpose? I'm sure it's fine, but the IDE gives derived-file warnings, so I wasn't sure if there was something else I didn't know.
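
Ideally I would avoid touching the derived file entirely. If the define in xbee/discovery.h happens to be wrapped in an #ifndef guard (an assumption on my part; I still need to check my copy), I could override it from the project's preprocessor defines instead:

/* Set in the project's preprocessor defines, or in a header included
 * before any xbee headers. This only works if xbee/discovery.h declares
 * the macro as:
 *
 *     #ifndef XBEE_DISC_MAX_NODEID_LEN
 *         #define XBEE_DISC_MAX_NODEID_LEN 20
 *     #endif
 */
#define XBEE_DISC_MAX_NODEID_LEN 8   /* node_info[] drops to 9 bytes per
                                        entry, assuming it is declared
                                        with +1 for the NUL terminator */

#include "xbee/discovery.h"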
...