RFC: optimizations for SPI screens

C programming, build, interpreter/VM.
Target audience: MicroPython Developers.
deshipu
Posts: 1388
Joined: Thu May 28, 2015 5:54 pm

RFC: optimizations for SPI screens

Post by deshipu » Thu Apr 27, 2017 10:55 pm

There are lots of SPI-based screen modules available out there, including the one made specifically for the PyBoard. However, controlling them with a microcontroller can be challenging, especially when there are a lot of pixels and a lot of possible colors: we don't have an infinite amount of memory for a frame buffer, and sending all this pixel data over SPI can be slow. There are however ways to mitigate this problem. I would like to discuss some of them, and perhaps research the possibility of adding a dedicated method to the SPI class for handling them.

The most obvious solution is to simply not have a frame buffer at all, and instead only update the pixels that need to change, computing them on the fly. This works for some simple use cases, such as displaying a static menu or an icon. But as the image gets more complex and perhaps needs to be animated, the amount of information we need to send to the display grows fast. Since there is a cost associated both with the communication overhead and with overdraw, this quickly degrades with any more complex application.
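
To illustrate, pushing individual pixels straight to the panel can be as simple as setting a 1x1 address window and writing one 16-bit colour. This is only a sketch: the pin numbers and the command/pixel helpers are placeholders, though 0x2A/0x2B/0x2C are the usual column/row/memory-write commands on ili9341-style controllers.

Code: Select all

import struct
from machine import Pin, SPI

spi = SPI(1, baudrate=40000000)     # ESP8266 hardware SPI (assumed wiring)
dc = Pin(4, Pin.OUT)                # data/command select (placeholder pin)
cs = Pin(5, Pin.OUT)                # chip select (placeholder pin)

def command(cmd, data=b''):
    cs(0)
    dc(0)
    spi.write(bytes([cmd]))
    if data:
        dc(1)
        spi.write(data)
    cs(1)

def pixel(x, y, color565):
    # set a 1x1 address window, then write a single 16-bit pixel
    command(0x2A, struct.pack(">HH", x, x))     # column address set
    command(0x2B, struct.pack(">HH", y, y))     # row address set
    command(0x2C, struct.pack(">H", color565))  # memory write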

The second approach is to keep a frame buffer that only covers a part of the screen, and repeat the drawing instructions while moving that buffer around, until we cover the whole screen. That's how u8glib works. In my opinion this approach has the downsides of both the full frame buffer and on-the-fly rendering, perhaps with the exception of avoiding overdraw. It's also not suitable for more advanced use cases, such as games or animations.
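
Something like this is what I mean by moving the buffer around -- one small band is reused for the whole screen, and the drawing code is simply re-run with a different offset for each band. The draw() callback and send_window() are placeholders for the application's drawing code and the display's window-write routine:

Code: Select all

import framebuf

W, H, BAND = 240, 320, 16                   # assumed 240x320 panel, 16-line band
band = bytearray(W * BAND * 2)              # 2 bytes per RGB565 pixel
fb = framebuf.FrameBuffer(band, W, BAND, framebuf.RGB565)

def refresh(draw, send_window):
    for y in range(0, H, BAND):
        fb.fill(0)
        draw(fb, y)                         # redraw everything, shifted up by y
        send_window(0, y, W, BAND, band)    # push just this band over SPI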

Let's see what else could be done to limit the amount of memory used.

We can reduce the amount of memory used by our buffers and images by limiting ourselves to a fixed palette of colors (or several palettes), using fewer bits per pixel. I am right now working on a pull request for the framebuf module that would allow blitting images with different numbers of colors/pixel formats onto each other, remapping the colors on the fly according to a provided palette. Once that is in place, it should become much easier to operate on indexed images. However, it still doesn't solve the problem of the large amount of memory taken by the screen's frame buffer. We have to send the pixel data to the screen in the format it expects, and no screens I know of support palettes -- they usually use either a 16-bit 565 format or a 24-bit RGB color. And since the SPI write functions take a buffer, there is no way for us to quickly send indexed image data over SPI, converting it to the desired pixel format on the fly. We can do that in chunks, but then we are facing a trade-off: use a large buffer, wasting memory, and send a lot of data in one go, or use a small buffer, send it in small chunks, going back to slow Python code in between. This could potentially be solved by having a dedicated SPI write function that takes a palette and converts the bytes on the fly, just like the framebuf blitting will soon do.
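
In pure Python the chunked conversion I have in mind would look roughly like this (the palette is a list of 565 values, one per index, and spi is an already configured machine.SPI object); the chunk size is exactly the trade-off described above:

Code: Select all

def write_indexed(spi, src, palette565, chunk=256):
    # src holds one palette index per pixel (one byte per index for
    # simplicity; packed 4-bit indices would need unpacking first)
    out = bytearray(chunk * 2)              # staging buffer, 2 bytes per pixel
    n = 0
    for index in src:
        c = palette565[index]
        out[2 * n] = c >> 8                 # big-endian 565, as the panels expect
        out[2 * n + 1] = c & 0xFF
        n += 1
        if n == chunk:
            spi.write(out)                  # one SPI transfer per chunk
            n = 0
    if n:
        spi.write(memoryview(out)[:2 * n])  # flush the partial last chunk

A dedicated C function taking the source buffer and the palette could do the same conversion without ever dropping back into the interpreter between chunks.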

If we are displaying mostly text, or mostly images consisting of repeating rectangular elements, we can benefit from a tile map. The idea is simple: we have our "tiles" -- an array of small images, stored as static data in flash memory, representing the different possible elements we can display (letters of text, walls and floors in a platformer game, playing pieces in a chess game, etc.), and in memory we only keep a low-resolution 2D array that says which tile should be displayed at each place on the screen. This way we don't need a lot of memory for the frame buffer -- we can just read directly from the tile definitions when pushing data to the screen. There are currently two problems with implementing this in MicroPython. First, if we want to use the framebuf module to store the images, we have to copy them from flash into RAM, because currently framebuf can't work with a read-only buffer. Second, to send the pixel information to the display, we have to create another buffer and put the information there -- it's not possible to calculate the values on the fly, as the SPI methods need an object supporting the buffer protocol. Of course, we could implement a tile engine (and even add a sprite engine on top) in C, but to be compatible with different ports, it would still need to go through the buffer protocol of the SPI write functions.
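
A rough sketch of the tile map idea, with 8x8 RGB565 tiles kept as a bytes object (ideally in flash) and only a single scanline of pixels buffered at a time; send_row() is again a placeholder for the display's window-write routine:

Code: Select all

TILE = 8
MAP_W, MAP_H = 30, 40                       # 240x320 screen as 8x8 tiles

def draw_map(tiles, tilemap, send_row):
    row = bytearray(MAP_W * TILE * 2)       # one scanline of RGB565 pixels
    for ty in range(MAP_H):
        for line in range(TILE):            # 8 scanlines per row of tiles
            for tx in range(MAP_W):
                t = tilemap[ty * MAP_W + tx]
                src = (t * TILE + line) * TILE * 2
                dst = tx * TILE * 2
                row[dst:dst + TILE * 2] = tiles[src:src + TILE * 2]
            send_row(ty * TILE + line, row)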

Those are the thoughts I have from experimenting with animations on the ESP8266 a little bit. I would love to hear about other ideas, and about opinions on this.

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK

Re: RFC: optimizations for SPI screens

Post by pythoncoder » Fri Apr 28, 2017 6:55 am

deshipu wrote:... We can do that in chunks, but then we are facing a trade-off: use a large buffer, wasting memory, and send a lot of data in one go, or use a small buffer, send it in small chunks, going back to slow Python code in between.
Have you quantified this to establish whether it's a practical problem?

There are some interesting ideas there. The tile map sounds efficient but would seem to rule out mixing bitmap and vector graphics, which is rather limiting. For applications using a limited range of colours, a mapping SPI function would enable a much smaller framebuf to be used: a very appealing idea.

My main interest is in GUIs: my solutions use a mix of bitmaps and vectors. I guess a bitmap-only solution is possible, but I'm not sure I'd fancy writing it ;) One point worth considering is that in a GUI there are situations where RAM can be saved by a facility to read back the contents of a rectangular array of pixels from the device. This would have implications for the mapping SPI function: its inverse would be useful.
Peter Hinch
Index to my micropython libraries.

deshipu
Posts: 1388
Joined: Thu May 28, 2015 5:54 pm

Re: RFC: optimizations for SPI screens

Post by deshipu » Sat Apr 29, 2017 8:39 am

Yes, the whole thing comes from my playing with the ili9341 and st7735 displays using ESP8266's hardware SPI. I noticed you can get pretty nice speeds with it, enough for animations, provided that you don't go back to Python code too often.

Unfortunately, reading from the display is often not practical. Many modules don't even have a MISO pin, and those that do require a much slower SPI clock for reading (my ili9341 works fine at 40MHz writing, but reading requires less than 5MHz). Not sure why.
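
When I do need to read, I reconfigure the bus around the read and then switch back -- something along these lines, reusing the dc/cs pins from before (0x2E is the usual memory-read command on the ili9341; the dummy byte and exact timing vary between panels, so treat it as an outline only):

Code: Select all

def read_pixels(spi, dc, cs, nbytes):
    spi.init(baudrate=4000000)              # reads need a much slower clock
    cs(0)
    dc(0)
    spi.write(b'\x2e')                      # RAMRD: start memory read
    dc(1)
    data = spi.read(nbytes + 1)[1:]         # first byte is a dummy (panel-dependent)
    cs(1)
    spi.init(baudrate=40000000)             # back to full speed for writing
    return data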

As far as I can tell, modern GUIs and 2D computer games use relatively little vector graphics. Mostly bitmaps are used for the different parts of the UI, sometimes scaled, stretched or repeated when they need to cover dynamically changing areas. For example, if you look at your GTK window theme, it's mostly bitmaps for corners, edges, etc.

Computer games are similar. There are whole generations of game consoles that only have a hardware tile+sprite engine.

pythoncoder
Posts: 5956
Joined: Fri Jul 18, 2014 8:01 am
Location: UK

Vector vs bitmapped graphics

Post by pythoncoder » Wed May 03, 2017 6:27 am

Modern GUIs run on kit with rather more horsepower than an STM chip ;) An issue with GUIs is scalability: it simplifies the design of screens if objects such as controls can be drawn at any scale. With vector graphics this comes for free in terms of runtime CPU time, an example being the GUI I wrote for the LCD160CR skin. This uses vector graphics for everything except text rendering (which is not runtime scalable). The GUI inevitably has a "technical" look to it which might be seen as old-fashioned, so I can see the merits of using bitmaps. But there is a downside in my view.

Developers of commercial applications have toolkits enabling controls to be drawn on screens; they ensure a result with a consistent appearance. Python has PyQt, Tkinter and Pmw, which offer considerable help. In the absence of such support, bitmapped graphics are a complete hassle for people such as myself with zero artistic talent. You waste endless time trying to source ready-made graphics with the right appearance and at the right scale for your display, or making doomed attempts to adapt them to your needs.

I suppose runtime scalability might go some way to bridging the gap between vector and bitmapped graphics, but it seems to be little used; a screen layout really needs to be fully thought through at design time. So in the absence of toolkit support I find vector graphics far easier to use.

Another factor is the issue of resource files. With vector graphics these are of course non-existent. Bitmaps have to be stored somewhere, as files in a filesystem (slow) or as frozen Python source -- suitable only for users able to build the firmware.
Peter Hinch
Index to my micropython libraries.

deshipu
Posts: 1388
Joined: Thu May 28, 2015 5:54 pm

Re: RFC: optimizations for SPI screens

Post by deshipu » Wed May 03, 2017 7:33 am

Then again, even with vector graphics you need to draw them into some buffer before sending it to the display (unless you use something like the ssd1351, which has commands for drawing lines), so you would still benefit from a fast paletted SPI transfer method.
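
That is, even a purely vector element ends up as pixels in a buffer somewhere -- for example by rasterising just the bounding box of a widget into a small FrameBuffer and pushing that out (send_window() once more being a placeholder for the window-write routine), which is exactly where a palette-aware SPI write would save memory:

Code: Select all

import framebuf

def draw_needle(x, y, w, h, send_window):
    buf = bytearray(w * h * 2)              # RGB565 buffer for the bounding box only
    fb = framebuf.FrameBuffer(buf, w, h, framebuf.RGB565)
    fb.fill(0)
    fb.line(0, h - 1, w - 1, 0, 0xFFFF)     # the "vector" part
    send_window(x, y, w, h, buf)            # still a bitmap by the time it hits SPI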
