How to reduce the memory consumption of the object definition
Re: How to reduce the memory consumption of the object definition
You can compress it using CCITT Group 3 or Group 4 encoding (this is what fax machines use), although I'm not sure there would be much advantage. Group 3 basically encodes runs of black and white pixels using variable-width bit codes; Group 4 is a two-dimensional variant. I've implemented some fairly fast decompressors for both of these in the past, but it involved using some fairly large lookup tables. You can also implement it using a bit-at-a-time technique.
It would definitely be beneficial if you had lots of fonts.
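To make the run-length idea concrete, here is a minimal sketch in Python. It is a simplification: real Group 3 coding assigns variable-width Huffman codes to run lengths, while this version spends a whole byte per run and splits long runs rather than using make-up codes.

```python
def rle_encode(bits):
    """Encode a row of 0/1 pixels as alternating run lengths, one byte
    per run, starting with white (0) as CCITT Group 3 does.  A row that
    starts with black therefore begins with a zero-length white run."""
    runs = []
    colour, run = 0, 0

    def flush(run):
        while run > 255:            # split long runs (G3 uses make-up codes)
            runs.extend([255, 0])
            run -= 255
        runs.append(run)

    for bit in bits:
        if bit == colour:
            run += 1
        else:
            flush(run)
            colour ^= 1
            run = 1
    flush(run)
    return bytes(runs)


def rle_decode(data):
    """Inverse of rle_encode: expand the run lengths back into pixels."""
    bits, colour = [], 0
    for run in data:
        bits.extend([colour] * run)
        colour ^= 1
    return bits
```

For mostly-white glyph rows this shrinks the data considerably; for dense or noisy rows it can grow it, which is why the real codecs use Huffman-coded run lengths.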
Re: How to reduce the memory consumption of the object definition
LZO would be an interesting choice: the decompressor is very small. It would be fun to make a plugin; I'll give it a go next week.
- pythoncoder
- Posts: 5956
- Joined: Fri Jul 18, 2014 8:01 am
- Location: UK
- Contact:
Re: How to reduce the memory consumption of the object definition
A fax machine is transmitting a stream of data down a serial interface: the classic application of compression. Consider file compression. The data, a text file for example, is stored on disk in compressed form. When you access the file with (say) unzip, the decompression algorithm acts on the compressed stream from the file and outputs a stream of decoded characters.
The problem with fonts is that you need fast random access to character data. As I understand it, classical decompression algorithms can't do this efficiently: to access character N you need to process all the data prior to N. This is because the compression process is nonlinear: the offset to character N depends on the compressibility of the preceding data. This is why classic compression operates on sequentially accessed streams.
So, while you could compress the font data on disk, to use it you'd need to decompress it to RAM; the opposite of what we require. Unless, of course, you guys know a suitable technique.
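One common workaround for the sequential-access problem (a sketch, not something taken from this thread's drivers) is to compress each glyph independently, so any character can be inflated without touching the others. This uses zlib purely for illustration, with hypothetical glyph data:

```python
import zlib

# Hypothetical glyph data: each character maps to its raw 1-bpp bitmap bytes.
raw_glyphs = {
    "A": bytes([0x18, 0x24, 0x42, 0x7E, 0x42, 0x42]),
    "B": bytes([0x7C, 0x42, 0x7C, 0x42, 0x42, 0x7C]),
}

# Compress every glyph on its own: random access then costs one small inflate,
# independent of all preceding data.
compressed = {ch: zlib.compress(bitmap) for ch, bitmap in raw_glyphs.items()}

def get_glyph(ch):
    """Decompress a single glyph on demand."""
    return zlib.decompress(compressed[ch])
```

The tradeoff is real: per-glyph compression carries per-glyph header overhead (about 11 bytes with zlib), so for tiny glyphs it can actually grow the data. It buys random access, not maximum compression ratio.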
Peter Hinch
Index to my micropython libraries.
Re: How to reduce the memory consumption of the object definition
The classic approach to this is to not actually decompress all of the characters to RAM.
Instead, you create a cache of N characters, and you keep the N most recently decompressed characters in RAM, and blit those. Obviously you pay a performance price for the reduced RAM usage. It depends on the cost of the decompression.
You could do the same thing with uncompressed data stored on the external media as well.
Since the font data is 1-bit-per-pixel, normal byte-wise compression algorithms typically don't work that well. CCITT Group 3/Group 4 are designed for working with 1-bit-per-pixel data.
The performance penalty for decompression might be too great to take, but it's all about tradeoffs.
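The cache of the N most recently used characters described above can be sketched with an `OrderedDict`; the decompression function is a placeholder supplied by the caller:

```python
from collections import OrderedDict

class GlyphCache:
    """Keep the N most recently used decompressed glyphs in RAM."""

    def __init__(self, decompress, size=16):
        self.decompress = decompress  # callable: char -> bitmap bytes
        self.size = size
        self.cache = OrderedDict()

    def get(self, ch):
        if ch in self.cache:
            self.cache.move_to_end(ch)          # mark as most recently used
        else:
            if len(self.cache) >= self.size:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[ch] = self.decompress(ch)
        return self.cache[ch]
```

Eviction trades RAM for repeated decompression cost, which is exactly the tradeoff described above: a tight loop rendering the same few characters pays almost nothing, while text with many distinct glyphs pays the decompressor repeatedly.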
Re: How to reduce the memory consumption of the object definition
Interesting, TFT.
dhylands wrote: ...You could do the same thing with uncompressed data stored on the external media as well...
Quite. This is exactly what I do in the e-paper driver if persistent bytecode is not used.
If we're going to access character data on disk and decompress it to RAM it's fair to ask what's achieved by using compression. With an SD card disk space is huge compared to font files. We would be accepting the computational complexity of compression, along with the slow speed of disk access, for no benefit in RAM usage relative to accessing uncompressed data.
[edit]You could read the entire compressed font into RAM. Initially you'd perform a single pass, constructing the Huffman table and a character offset lookup table. If these were in locals they'd use stack rather than heap. Depending on the entropy of font data you might save 50% on the heap. Note these observations are broad-brush and based on generic understanding of compression, not on the specific algorithms you mention.
The idea is unappealing to me, for these reasons:
- Persistent bytecode works so much better.
- I've already implemented it, and I'm lazy.
- Damien plans to implement a way of storing persistent bytecode without the need for a firmware rebuild.
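The single-pass offset-table idea in the edit above can be sketched as one scan over a font blob, recording where each character's payload starts so later lookups are O(1). The record format here is hypothetical (one byte of character code, a two-byte little-endian length, then the payload), purely to illustrate the pass:

```python
import struct

def build_index(blob):
    """One pass over a hypothetical font blob laid out as repeated records:
    [1-byte char code][2-byte LE payload length][payload...].
    Returns {char_code: (payload_offset, payload_length)}."""
    index = {}
    pos = 0
    while pos < len(blob):
        code = blob[pos]
        (length,) = struct.unpack_from("<H", blob, pos + 1)
        index[code] = (pos + 3, length)
        pos += 3 + length
    return index
```

After this pass, the payload for any character is `blob[off:off + length]` with no need to process the preceding data, which is the property the thread is after.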
Last edited by pythoncoder on Mon Apr 18, 2016 1:47 pm, edited 1 time in total.
Peter Hinch
Index to my micropython libraries.
Re: How to reduce the memory consumption of the object definition
In my opinion we need to support different approaches to font rendering, management, and storage. The requirements vary greatly between use cases, and the requirements of different use cases can even conflict with each other.
We can think of different combinations of these dependencies depending on use-case:
- Storage in ROM (maybe external static RAM)
- Small RAM footprint
- Small ROM footprint
- High resolution font
- Low resolution font (e.g. 5x7 pixel characters)
- Screen/framebuffer update times measured in seconds/minutes

Example use cases:
- ePaper display running on battery/in a low-power environment
  - Small RAM footprint
  - High resolution (dpi) font
  - Screen/framebuffer update times measured in seconds/minutes
  - Storage on external ROM
- Interactive display/UI with a hierarchical menu/dialogue
  - High resolution (dpi) font
  - Ultra-fast screen/framebuffer updates (sub-second)
  - Storage on external ROM
- Indoor environmental sensor node running on battery/in a low-power environment
  - Small RAM and ROM footprint
  - Small character subset (digits, very few upper-case-only chars in strings like "%RH" and "°C")
  - Screen/framebuffer update times measured in seconds/minutes
  - Low resolution font
Maybe we should spawn a new thread with a topic like: "Font Resource-Management"?
Re: How to reduce the memory consumption of the object definition
@kfricke I raised some points regarding @Damien's framebuffer in #1817. I have doubts about the practicality of a generalised framebuffer because display devices vary so much, with the fastest ones not even needing a frame buffer in the driver code (because it exists on the device). But devices which do require a buffer can expect data in row or column format, and with N bits per pixel.
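To illustrate the row-versus-column point: the same pixel lands in a different byte and bit depending on the buffer layout. The two functions below model horizontal (HLSB) and vertical (VLSB) monochrome mappings, loosely following the conventions of MicroPython's `framebuf` formats; they are a sketch, not driver code:

```python
def set_pixel_hlsb(buf, width, x, y):
    """Horizontal mapping, MSB first: each byte holds 8 pixels of one row."""
    stride = (width + 7) // 8            # bytes per row
    buf[y * stride + x // 8] |= 0x80 >> (x % 8)

def set_pixel_vlsb(buf, width, x, y):
    """Vertical mapping, LSB first: each byte holds 8 pixels of one column,
    the layout many SSD1306-style OLED controllers expect."""
    buf[(y // 8) * width + x] |= 1 << (y % 8)
```

A generic framebuffer layer has to abstract over exactly this kind of difference (plus bit depths greater than 1), which is where the practicality concern above comes from.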
As I said above, I think the solution to font storage is persistent bytecode, especially when @Damien implements the ability to load bytecode modules into Flash without the need for a rebuild.
Peter Hinch
Index to my micropython libraries.
Re: How to reduce the memory consumption of the object definition
I would generally not see that as a big problem. For a specific application of Pyboard and similar devices I would expect a limited set of fonts and font sizes. I looked up the RAM (or flash) usage of some sample fonts in the TFT project:
6x8 font, 96 characters, 770 bytes (6 pts)
8x12 font, 128 characters, 1794 bytes (7 pts)
8x14 font, 96 characters, 1538 bytes (10 pts)
23x23 font, 96 characters, 3283 bytes (14 pts)
33x32 font, 96 characters, 6226 bytes (20 pts)
33x48 font, 96 characters, 19k bytes (36 pts)
Any of these, except maybe the last one, would fit into the Pyboard's RAM or into flash as frozen bytecode. And even if it's not in flash, you could still save memory by holding a buffer for the largest font and loading fonts as binary patterns from SD on demand; larger fonts could be loaded character-wise from the filesystem. If, as pythoncoder mentioned, frozen bytecode could be loaded independently of the MicroPython image, it would be more comparable to files in the file system.
If you need many fonts of varying sizes, then the Pyboard and its companions are the wrong devices; you would be better off with a lightweight Unix environment and a proper graphical OS.
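As a rough cross-check of figures like those above, a fixed-width 1-bpp font's size can be estimated by padding each glyph row to whole bytes. Real font files add headers, and proportional fonts store per-glyph widths, so actual numbers differ (the larger entries in the list above are evidently packed more tightly than this worst case):

```python
def font_bytes(width, height, nchars):
    """Approximate size of a fixed-width 1-bpp font where each glyph row
    is padded to a whole number of bytes, as many drivers store it."""
    bytes_per_row = (width + 7) // 8
    return bytes_per_row * height * nchars

# e.g. a 6x8 font with 96 characters:
# 1 byte per row * 8 rows * 96 chars = 768 bytes, close to the 770 listed
```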
Re: How to reduce the memory consumption of the object definition
You'd think so, wouldn't you? The problem arises with heap fragmentation, which is (in my experience) a bugbear with the Pyboard when you import substantial amounts of code. Applications of my e-paper driver and of your TFT driver have been plagued with memory errors, sometimes on importing a module before any code has actually run. These can occur even when only 30% of the heap is in use (in the case of your driver). And, in the context of a fragmented heap, fonts do require significant amounts of contiguous RAM.
I intend to investigate this further - in particular whether the compiler itself gets into trouble when significant amounts of code are already loaded.
I've found a significant practical gain in storing fonts as random access files or persistent bytecode.
Peter Hinch
Index to my micropython libraries.
Re: How to reduce the memory consumption of the object definition
I have had the compiler run out of heap space several times with larger code; using precompiled code avoids that. Still, frozen bytecode would be the cure for static data like fonts. That reminds me to split my driver into smaller pieces, where some of them could be precompiled and/or frozen.
My major point here is that a small set of fonts always fits into flash, so compression is hardly a benefit.