Processing multiple packets with nanopb

Hi,
I’ve been looking into the protocol buffers and nanopb implementation. I would like to use it to provision configuration which would be potentially split into multiple packets to nodes. It would be easy to just receive all packets first into some buffer and then process it. But I would prefer to process it as stream without need of extra buffer. But there is big problem caused by different philosophy of nanopb and Tower SDK.

Nanopb uses a callback function to fetch more data from the stream. But as far as I know, since the SDK is not multithreaded, this callback cannot just wait for the data, as it would block the whole scheduler loop, including incoming radio traffic.
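For illustration, this is roughly what a callback-based nanopb input stream would look like. It is only a sketch: radio_read_blocking() is a hypothetical helper that would have to wait until the next packet arrives, which is exactly what cannot be done inside the scheduler loop.

#include <pb_decode.h>

/* Hypothetical helper: would have to block until 'count' bytes have
 * arrived over the radio -- the problematic part in a single-threaded
 * scheduler loop. */
extern bool radio_read_blocking(pb_byte_t *buf, size_t count);

static bool radio_istream_callback(pb_istream_t *stream, pb_byte_t *buf, size_t count)
{
    (void) stream;

    return radio_read_blocking(buf, count);
}

pb_istream_t radio_istream_open(size_t message_length)
{
    pb_istream_t stream = {0};

    stream.callback = radio_istream_callback;
    stream.bytes_left = message_length;

    return stream;
}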

For this to work, I would probably need some kind of “cooperative multitasking”: switch context from the nanopb callback to the scheduler, and after receiving the data (or hitting a timeout) switch context back to nanopb.

Is something like this feasible with the SDK, or should I use some workaround?

Thanks.
Mixi

I have only used the nanopb encoder in a few embedded projects, but from what I quickly read, it seems like it doesn’t need any fancy scheduler or multitasking.
It is called a stream, but it is really just a buffer wrapper. You always have to pass the complete message to be decoded in a buffer. See this example, where the buffer is wrapped into a stream and pb_decode is immediately called to decode the data.
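Roughly like this (Config and config.pb.h are placeholders for whatever message the .proto generates):

#include <stdint.h>
#include <pb_decode.h>
#include "config.pb.h"   /* placeholder for the nanopb-generated header */

/* Decode one complete message that is already sitting in a buffer. */
bool decode_config(const uint8_t *data, size_t length, Config *out)
{
    pb_istream_t stream = pb_istream_from_buffer(data, length);

    return pb_decode(&stream, Config_fields, out);
}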

As I said, receiving all the messages into a buffer and decoding the whole buffer at once is simple. But that buffer needs to be large enough to hold the largest possible data set, which can be quite big. And since protocol buffers are conceptually streamable, I was looking for a way to process the packets as they arrive, as a stream, without a large intermediate buffer.

In this case I cannot help since I have no experience with streaming decoding.

OK, one more question then. Does the Tower SDK support dynamically allocated memory using malloc() or similar functions? I did not find it used anywhere in the SDK, but stdlib.h is included in bc_common.h, so the function itself is available.

If I do not use streaming, I could at least dynamically allocate a properly sized buffer and free it after decoding.

Streaming may be more complex than it seems. If you split the stream across multiple radio messages, you need a way to deal with lost and reordered packets. In other words, you would need a protocol akin to TCP to implement streaming reliably over the radio interface.

Well, this complexity is not really related to streaming. I need to split the data into multiple messages anyway, because it will often be bigger than the maximum message size, so this is something I have to take care of regardless.

I plan to have 2 bytes at the start of each message: a message index and a message count. On receiving the message with index 0, I will start filling the buffer (or streaming the data into nanopb), and every consecutive message with index +1 will be appended until msg_index == msg_count - 1, at which point I will send a confirmation message. If I receive a message with an unexpected index, or a timeout is reached, I will generate an error message. If I never get the message with index 0, all other messages will be ignored (they may just be delayed out-of-order messages, and generating an error would be redundant). A sketch of this reassembly logic is below.
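To make that concrete, the reassembly could look roughly like this. It is only a sketch: the buffer size and the names are placeholders, and sending the confirmation/error messages and the timeout handling are left out.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical size -- adjust to the largest expected data set. */
#define REASSEMBLY_BUFFER_SIZE 2048

typedef struct
{
    bool in_progress;
    uint8_t expected_index;   /* index of the next fragment we want */
    uint8_t fragment_count;   /* total count announced in every fragment */
    size_t length;            /* payload bytes accumulated so far */
    uint8_t buffer[REASSEMBLY_BUFFER_SIZE];
} reassembly_t;

/* Feed one radio message (2-byte header + payload); returns true when
 * the last fragment has arrived and 'buffer' holds the complete data. */
bool reassembly_feed(reassembly_t *r, const uint8_t *msg, size_t msg_length)
{
    if (msg_length < 2)
    {
        return false;
    }

    uint8_t index = msg[0];
    uint8_t count = msg[1];

    if (index == 0)
    {
        /* Fragment 0 starts (or restarts) a transfer. */
        r->in_progress = true;
        r->expected_index = 0;
        r->fragment_count = count;
        r->length = 0;
    }
    else if (!r->in_progress)
    {
        /* No transfer started: silently ignore, as described above. */
        return false;
    }
    else if (index != r->expected_index || count != r->fragment_count)
    {
        /* Unexpected fragment: abort (and send the error message here). */
        r->in_progress = false;
        return false;
    }

    size_t payload_length = msg_length - 2;

    if (r->length + payload_length > sizeof(r->buffer))
    {
        r->in_progress = false;
        return false;
    }

    memcpy(&r->buffer[r->length], &msg[2], payload_length);
    r->length += payload_length;
    r->expected_index++;

    if (r->expected_index == r->fragment_count)
    {
        r->in_progress = false;
        return true;   /* complete: send confirmation and decode */
    }

    return false;
}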

The service responsible for configuration provisioning will retry the whole transmission after some delay if it does not get a confirmation.

There is no support for malloc() in the SDK. On a small embedded target with 20 kB of RAM it does not make sense; fragmentation is your enemy.
You could start by looking for the heap in the linker script and increasing it, but it might take more work than that.

If you need just a single buffer for decoding, you can make it static, or put it on the stack if the logic allows it; just increase the stack size.

OK, I thought that was the case. I just don’t like the idea of having it static, as this buffer can be quite large and will likely be needed for only several seconds a few times a year. But I guess as long as my data and stack fit into 20 kB, this is a purely academic problem.

If the configuration data is less than half the size of the EEPROM (6 kB), you could use the EEPROM as temporary storage. Partition it into two halves. Decode the incoming stream into one half and read the live configuration from the other. Once you have decoded the entire stream correctly, swap the halves.
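A rough sketch of the layout, assuming the SDK’s bc_eeprom_read()/bc_eeprom_write() helpers work the way I remember (please check bc_eeprom.h before relying on this); one flag byte selects the active slot and the rest is split into two slots:

#include <bc_eeprom.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout: one flag byte selecting the active slot,
 * followed by two equally sized slots for the configuration data. */
#define EEPROM_FLAG_ADDRESS   0
#define EEPROM_SLOT_SIZE      2944
#define EEPROM_SLOT_A_ADDRESS 1
#define EEPROM_SLOT_B_ADDRESS (EEPROM_SLOT_A_ADDRESS + EEPROM_SLOT_SIZE)

static uint32_t inactive_slot_address(void)
{
    uint8_t active = 0;

    bc_eeprom_read(EEPROM_FLAG_ADDRESS, &active, sizeof(active));

    return (active == 0) ? EEPROM_SLOT_B_ADDRESS : EEPROM_SLOT_A_ADDRESS;
}

/* Store one received chunk of the incoming configuration in the
 * inactive slot. */
bool staging_write(size_t offset, const uint8_t *data, size_t length)
{
    if (offset + length > EEPROM_SLOT_SIZE)
    {
        return false;
    }

    return bc_eeprom_write(inactive_slot_address() + offset, data, length);
}

/* After the whole stream has been received and decoded successfully,
 * flip the flag so the staged slot becomes the live configuration. */
bool staging_commit(void)
{
    uint8_t active = 0;

    bc_eeprom_read(EEPROM_FLAG_ADDRESS, &active, sizeof(active));

    active = (active == 0) ? 1 : 0;

    return bc_eeprom_write(EEPROM_FLAG_ADDRESS, &active, sizeof(active));
}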

I hadn’t thought of using the EEPROM as temporary storage. While EEPROM has some drawbacks (a limited number of write cycles and lower performance), for this use case that is not a problem.

Thanks for the interesting idea. I will keep it in mind as an alternative if I get tight on RAM.