@peterhinch replied: viewtopic.php?f=16&t=3608&start=10#p23642

Do you have an intuition for the number of messages that an ESP8266 could receive each second using this library?
peterhinch wrote: "My personal view is that MQTT (especially running over WiFi) is really only suitable for applications where a response time >= 1s is acceptable. MQTT was developed by IBM for remote data acquisition over unreliable links. It was never intended as a realtime command and control protocol."

Certainly I accept that MQTT was originally intended for different applications, but at the same time low overhead and simplicity are fundamentals of its design, and I can't see how an alternative implementation intended for realtime use would be more performant. Hence my question about the anticipated overheads of MQTT with the resilient driver, in case there is something obvious that would introduce a slow-down.
In my case I am sending realtime RGB data to set the colours of the individual segments of 16-segment character displays (one WiFi-connected ESP8266 per character, 20 characters in total). The data could easily be marked QoS 0, since received data is outdated within a fraction of a second anyway. To my understanding, assuming we establish a stable wireless LAN serving the 20 MicroPython NodeMCUv2 endpoints, business-as-usual operation means no TCP socket is ever disconnected, so delivery of QoS 0 messages is effectively a given. A client might miss intermediate values when a retained topic is overwritten by the data source before it has handled the previous message, but it will always eventually be sent the latest values.
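To make the shape of the data concrete, here is a minimal sketch of the kind of per-segment addressing and payload packing I have in mind. The topic scheme (`display/<char>/seg/<segment>`) and the 3-byte RGB payload are my own hypothetical choices for illustration, not something fixed by the project:

```python
import struct

def segment_topic(char_index: int, segment_index: int) -> str:
    # Hypothetical topic scheme: one topic per segment of each character,
    # e.g. "display/07/seg/12" for segment 12 of character 7.
    return "display/{:02d}/seg/{:02d}".format(char_index, segment_index)

def pack_rgb(r: int, g: int, b: int) -> bytes:
    # Pack one colour into a 3-byte QoS 0 payload; a stale frame is simply
    # superseded by the next publish on the same topic.
    return struct.pack("BBB", r, g, b)

def unpack_rgb(payload: bytes):
    return struct.unpack("BBB", payload)
```

With 20 characters and 16 segments each, that is at most 320 such topics, each carrying a tiny fixed-size payload.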
By the way, in our configuration I consider the broker side performant enough to handle all the data: the renderer and the broker run on the same machine (a 4-core Raspberry Pi), and we have a segments-per-second rate limiter in place for the data generated by the renderer.
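For illustration, a limiter like ours could be sketched as a simple token bucket; this is my own minimal version (the actual renderer-side limiter is not shown in this post), with an injectable clock so it can be exercised deterministically:

```python
import time

class SegmentRateLimiter:
    """Token-bucket limiter capping segment updates per second.

    Sketch only -- not the renderer's actual implementation. `now` is
    injectable for testing and defaults to time.monotonic.
    """
    def __init__(self, segments_per_second: float, now=None):
        self._rate = segments_per_second
        self._now = now or time.monotonic
        self._tokens = segments_per_second  # allow an initial 1 s burst
        self._last = self._now()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at one
        # second's worth, then spend one token per permitted update.
        t = self._now()
        self._tokens = min(self._rate, self._tokens + (t - self._last) * self._rate)
        self._last = t
        if self._tokens >= 1.0:
            self._tokens -= 1.0
            return True
        return False
```

The renderer would call `allow()` before each segment publish and drop (not queue) updates that exceed the budget, since stale colour data has no value.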
Given the desire for a well-tested client/server handshake, a simple bytewise protocol, and liveness for multiple 'endpoints' (topics mapping to segments in my case) served over a single socket, I struggle to see how a roll-your-own client-server implementation is likely to be better. As far as I can tell it comes down to the overhead of marshalling and unmarshalling the topic bytes, which could perhaps be done more efficiently if you don't assume the bytes must be interpreted as UTF-8. That seems a very low impact to me, and is perhaps even avoidable if a client library treats topic bytes directly as bytes.
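To put a number on that overhead: per the MQTT 3.1.1 spec, a QoS 0 PUBLISH carries one fixed-header byte, a variable-length Remaining Length field, a 2-byte topic length prefix and the UTF-8 topic itself (no packet identifier at QoS 0). A quick calculation, using my hypothetical topic scheme as the example:

```python
def remaining_length_bytes(n: int) -> int:
    # MQTT encodes Remaining Length as a variable-length int,
    # 7 bits per byte: 1 byte up to 127, 2 bytes up to 16383, etc.
    count = 0
    while True:
        count += 1
        n //= 128
        if n == 0:
            return count

def qos0_publish_overhead(topic: str, payload_len: int) -> int:
    # Bytes on the wire beyond the payload itself, for a QoS 0 PUBLISH:
    # 1 fixed-header byte + Remaining Length varint
    # + 2-byte topic length prefix + topic bytes.
    topic_len = len(topic.encode("utf-8"))
    remaining = 2 + topic_len + payload_len
    return 1 + remaining_length_bytes(remaining) + 2 + topic_len
```

For a topic like "display/07/seg/12" (17 bytes) with a 3-byte RGB payload, the per-message overhead works out to 21 bytes, i.e. a 24-byte packet in total, which supports the view that the protocol framing itself is not where a custom design would win much.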
As a by-product of adopting MQTT we also get bonus features such as redelivery on connect (via retained topics) and a means of tracking endpoint liveness (via Last Will topics), which we would otherwise have to implement ourselves. We also gain the simplicity of being able to implement, debug, monitor and log the protocol with mainstream clients and tools, plus all kinds of future-proofing, for example the ability to add Web-based subscribers to the display data over WebSockets.
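The retained-topic behaviour I am relying on can be illustrated with a toy in-memory model of what a real broker does (this is purely illustrative, not a broker implementation): the broker stores the last retained payload per topic and replays it to any subscriber that connects later, so a freshly rebooted segment controller immediately receives its current colour.

```python
class ToyRetainedBroker:
    # Minimal in-memory model of MQTT retained-message semantics,
    # for illustration only: the last retained payload per topic is
    # handed to any subscriber that arrives after it was published.
    def __init__(self):
        self._retained = {}
        self._subs = {}  # topic -> list of callbacks

    def publish(self, topic, payload, retain=False):
        if retain:
            self._retained[topic] = payload  # newer value overwrites older
        for cb in self._subs.get(topic, []):
            cb(topic, payload)

    def subscribe(self, topic, cb):
        self._subs.setdefault(topic, []).append(cb)
        if topic in self._retained:
            cb(topic, self._retained[topic])  # late joiner sees latest value
```

This is exactly the "eventually sent the latest values" property described above: intermediate values may be lost, but the most recent retained one always survives.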
Are there any specific, known overheads beyond topic marshalling/unmarshalling, assuming QoS 0? I am wondering whether there can be any advantage whatsoever to rolling my own server and protocol for this case of sending realtime RGB data.