> If you send 100 1kb messages using mqtt_as they will arrive in order and reliable with qos==1.
Hmm, good point, I'll have to go back to the spec... (Do you have a section number for this btw?) Is the ordering guarantee true in the face of retransmissions, i.e. disconnect/reconnect & resuming the session? It seems safe to assume that if message N on a TCP connection is lost, then all messages >N will be lost too. So as long as the broker is careful and retransmits unack'ed messages in the right order on the new connection the ordering guarantee should hold. It's mostly a matter of ensuring that the rexmit timeouts stay in-order.
But even with the in-order assumption the code increase on the python end seems quite significant. Right now my subscription callback (minus the topic matching part) for a file put is:
Code: Select all
async def do_put(fname, read_msg, msg_len):
    with open(fname, 'wb') as fd:
        buf = await read_msg(1024)
        while len(buf) > 0:
            fd.write(buf)
            buf = await read_msg(1024)
    return "OK"
Where `fname` is the filename portion of the topic, `read_msg` is the stream handle to read the payload from, and `msg_len` is the total payload length. If this came in as 1024-byte messages you'd have to explicitly manage the state between messages and have some explicit timeout on the whole thing. Maybe not horrible code expansion, but certainly more than the above.
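To make the comparison concrete, here's a rough sketch of the per-message state bookkeeping the chunked version would need. Everything here is hypothetical (the names `PutState`, `on_put_chunk`, `expire_stale`, and the `transfers` dict are illustrative, not part of mqtt_as or any existing code):

```python
import time

TIMEOUT = 30  # assumed: seconds without a chunk before a transfer is abandoned

class PutState:
    """State kept per in-flight file upload between MQTT messages."""
    def __init__(self, fname, msg_len):
        self.fd = open(fname, 'wb')
        self.remaining = msg_len
        self.last_rx = time.time()

transfers = {}  # fname -> PutState for uploads currently in flight

def on_put_chunk(fname, msg_len, chunk):
    """Handle one chunk; returns "OK" when the file is complete, else None."""
    st = transfers.get(fname)
    if st is None:
        st = transfers[fname] = PutState(fname, msg_len)
    st.fd.write(chunk)
    st.remaining -= len(chunk)
    st.last_rx = time.time()
    if st.remaining <= 0:
        st.fd.close()
        del transfers[fname]
        return "OK"
    return None  # more chunks expected

def expire_stale(now=None):
    """Drop transfers that haven't seen a chunk within TIMEOUT (called periodically)."""
    now = time.time() if now is None else now
    for fname in [f for f, s in transfers.items() if now - s.last_rx > TIMEOUT]:
        transfers[fname].fd.close()
        del transfers[fname]
```

Not huge, but compared to the six-line streaming handler it's a dict of open file descriptors, a completion check, and a separate sweep task for timeouts.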
The get file handler is even less code:
Code: Select all
async def do_get(fname, read_msg, msg_len):
    return open(fname, 'rb')
The return value (resp) goes into a `mqclient.publish(rtopic, resp, 0, 1)` in the subscription dispatcher.
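For context, the dispatch step described above might look roughly like this. This is an assumed sketch: the `dispatch` function, `handlers` table, and the `op/fname` / `reply/...` topic layout are invented for illustration; only the `mqclient.publish(rtopic, resp, 0, 1)` call (retain=0, qos=1) comes from the actual code:

```python
async def dispatch(mqclient, handlers, topic, read_msg, msg_len):
    # Split the topic into an operation and a filename, e.g. "get/boot.py".
    op, _, fname = topic.partition('/')
    rtopic = 'reply/' + topic  # assumed reply-topic convention
    # Run the matching handler; its return value becomes the response payload.
    resp = await handlers[op](fname, read_msg, msg_len)
    # Publish the response with retain=0, qos=1, as in the real dispatcher.
    await mqclient.publish(rtopic, resp, 0, 1)
```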
I'm liking the simplicity of it all right now and am taking the attitude of "show me the problem" before throwing more complexity at it. At least then I'll have a concrete problem to solve. I can imagine that users on unconfigurable MQTT brokers (e.g. public services) will run into issues pushing an MP firmware update through in one message. Overall I like to avoid premature optimization...
Thanks for the discussion!