232 changes: 232 additions & 0 deletions lib/ssd1327/examples/blink_animation.py
@@ -0,0 +1,232 @@
"""Blink animation example using framebuf blit and pixel scaling on SSD1327 OLED.

Displays a scaled 16x16 eye bitmap centered on the 128x128 round OLED screen
and animates a smooth blink sequence by cycling through five eye states.

Comment on lines +1 to +5

Copilot AI Apr 20, 2026

PR description/issue checklist mentions updating lib/ssd1327/README.md examples table, but this PR only adds the example file. Please add blink_animation.py to the README Examples table (or update the PR description if that task is intentionally out of scope).
Demonstrates:
- framebuf.FrameBuffer creation from raw bitmap data (MONO_HLSB)
- Pixel-by-pixel scaling using fill_rect
- framebuf.blit() to copy a scaled bitmap onto the display framebuffer
- Frame-based animation with variable timing per frame
"""

from time import sleep_ms

import framebuf
import micropython
import ssd1327
from machine import SPI, Pin

# === Display setup ===
spi = SPI(1)
dc = Pin("DATA_COMMAND_DISPLAY")
res = Pin("RST_DISPLAY")
cs = Pin("CS_DISPLAY")
display = ssd1327.WS_OLED_128X128_SPI(spi, dc, res, cs)

# === Bitmap dimensions and scale ===
EYE_W = 16
EYE_H = 16
SCALE = 6 # Each pixel becomes a 6x6 block → 96x96px on screen

# === Eye bitmaps (MONO_HLSB, 16x16, 2 bytes per row) ===

EYE_OPEN = bytearray(
    [
        0b00000000,
        0b00000000,
        0b00111111,
        0b10000000,
        0b01000000,
Comment on lines +32 to +40

Copilot AI Apr 20, 2026

The comment says the eye bitmaps are “16x16, 2 bytes per row”, which implies 32 bytes total for MONO_HLSB, but each bitmap literal below contains 33 bytes (based on the number of entries). This extra trailing byte is unused and can be confusing; consider trimming to 32 bytes or updating the dimensions/comment to match the actual data layout.
        0b01000000,
        0b10011001,
        0b10010000,
        0b10100101,
        0b01010000,
        0b10000001,
        0b10000000,
        0b01000000,
        0b01000000,
        0b00111111,
        0b10000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
    ]
)

EYE_SQUINT = bytearray(
    [
        0b00000000,
        0b00000000,
        0b00111111,
        0b10000000,
        0b01000000,
        0b01000000,
        0b10011001,
        0b10010000,
        0b10011001,
        0b10010000,
        0b10000001,
        0b10000000,
        0b01000000,
        0b01000000,
        0b00111111,
        0b10000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
    ]
)

EYE_HALF = bytearray(
    [
        0b00000000,
        0b00000000,
        0b00111111,
        0b10000000,
        0b01000000,
        0b01000000,
        0b10000001,
        0b10010000,
        0b10111101,
        0b01010000,
        0b10000001,
        0b10000000,
        0b01000000,
        0b01000000,
        0b00111111,
        0b10000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
    ]
)

EYE_CLOSED = bytearray(
    [
        0b00000000,
        0b00000000,
        0b00111111,
        0b10000000,
        0b01000000,
        0b01000000,
        0b10000000,
        0b00010000,
        0b10111111,
        0b11010000,
        0b10000000,
        0b00000000,
        0b01000000,
        0b01000000,
        0b00111111,
        0b10000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
        0b00000000,
    ]
)
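The “16x16, 2 bytes per row” layout questioned in the review above is easy to sanity-check off-device: MONO_HLSB packs 8 horizontal pixels per byte, MSB first, so each 16x16 bitmap should be exactly (16 // 8) * 16 = 32 bytes. A minimal, pure-Python check (the all-zero `bitmap` here is a placeholder, not one of the eye frames):

```python
# Expected size of a 16x16 MONO_HLSB bitmap: 8 horizontal pixels per byte.
EYE_W, EYE_H = 16, 16
BYTES_PER_ROW = EYE_W // 8            # 2 bytes per 16-pixel row
EXPECTED_LEN = BYTES_PER_ROW * EYE_H  # 32 bytes total

# A correctly sized bitmap passes; a 33-byte literal would fail this check.
bitmap = bytearray(EXPECTED_LEN)
assert len(bitmap) == 32
```

Dropping `assert len(EYE_OPEN) == EXPECTED_LEN` (and the same for the other frames) into the example would catch a stray trailing byte immediately.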

# === Animation: open → squint → half → closed → half → squint → open ===
BLINK_FRAMES = [
    EYE_OPEN,
    EYE_SQUINT,
    EYE_HALF,
    EYE_CLOSED,
    EYE_CLOSED,
    EYE_HALF,
    EYE_SQUINT,
    EYE_OPEN,
]

FRAME_DELAYS = [1200, 60, 50, 40, 40, 50, 60, 400] # ms per frame
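Because each frame carries its own delay, the length of one full blink cycle is simply the sum of FRAME_DELAYS; with the values above that is 1900 ms, dominated by the 1200 ms eyes-open hold:

```python
FRAME_DELAYS = [1200, 60, 50, 40, 40, 50, 60, 400]  # ms per frame

# Total duration of one blink cycle: roughly one blink every two seconds.
CYCLE_MS = sum(FRAME_DELAYS)
assert CYCLE_MS == 1900
```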


@micropython.native
def draw_eye(bitmap):
"""Draw a scaled eye bitmap centered on the 128x128 display."""
buf = framebuf.FrameBuffer(bitmap, EYE_W, EYE_H, framebuf.MONO_HLSB)

scaled_w = EYE_W * SCALE
scaled_h = EYE_H * SCALE
scaled_bitmap = bytearray((scaled_w * scaled_h) // 8)
scaled_buf = framebuf.FrameBuffer(
scaled_bitmap, scaled_w, scaled_h, framebuf.MONO_HLSB
)
Comment on lines +202 to +207

Copilot AI Apr 20, 2026

draw_eye() allocates a new scaled_bitmap bytearray and FrameBuffer on every frame. For an animation loop this creates avoidable GC pressure and can cause stutters on constrained boards. Consider pre-scaling/caching the scaled framebuffers once (e.g., at module init) and only blitting in the hot loop, or reusing a single preallocated buffer with fill(0) each frame.
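To illustrate the precompute-once pattern the reviewer suggests, here is a pure-Python sketch that models bitmaps as 2D lists instead of using MicroPython's framebuf (so it runs anywhere); `decode_hlsb` and `prescale` are hypothetical helper names, not part of the ssd1327 driver:

```python
EYE_W, EYE_H, SCALE = 16, 16, 6


def decode_hlsb(bitmap, w, h):
    """Unpack MONO_HLSB bytes (8 horizontal pixels per byte, MSB first)
    into a 2D list of 0/1 values."""
    stride = w // 8
    return [
        [(bitmap[y * stride + x // 8] >> (7 - x % 8)) & 1 for x in range(w)]
        for y in range(h)
    ]


def prescale(pixels, scale):
    """Nearest-neighbour upscale, done once at startup instead of per frame."""
    return [
        [px for px in row for _ in range(scale)]
        for row in pixels
        for _ in range(scale)
    ]


# Build every scaled frame once; the animation loop then only copies buffers.
EYE_OPEN = bytearray(32)  # placeholder bitmap for this sketch
SCALED_FRAMES = [
    prescale(decode_hlsb(bm, EYE_W, EYE_H), SCALE) for bm in (EYE_OPEN,)
]
```

On-device, the same idea would precompute one scaled FrameBuffer per eye state at module init, leaving only `fill`/`blit`/`show` in the hot loop.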

    # Scale up pixel by pixel
    for y in range(EYE_H):
        for x in range(EYE_W):
            if buf.pixel(x, y):
                scaled_buf.fill_rect(x * SCALE, y * SCALE, SCALE, SCALE, 1)
Comment on lines +204 to +213

Copilot AI Apr 20, 2026

display.framebuf is GS4_HMSB (4-bit grayscale, values 0–15), but scaled_buf is created as MONO_HLSB and the scaling writes color 1. When blitted, the ‘on’ pixels will map to value 1, which is almost black on this display (making the eye very hard to see). Consider either scaling into a GS4_HMSB buffer and using color 15 for ‘on’ pixels, or providing a palette to blit() that maps 0→0 and 1→15.

Suggested change
-    scaled_bitmap = bytearray((scaled_w * scaled_h) // 8)
-    scaled_buf = framebuf.FrameBuffer(
-        scaled_bitmap, scaled_w, scaled_h, framebuf.MONO_HLSB
-    )
-    # Scale up pixel by pixel
-    for y in range(EYE_H):
-        for x in range(EYE_W):
-            if buf.pixel(x, y):
-                scaled_buf.fill_rect(x * SCALE, y * SCALE, SCALE, SCALE, 1)
+    scaled_bitmap = bytearray((scaled_w * scaled_h) // 2)
+    scaled_buf = framebuf.FrameBuffer(
+        scaled_bitmap, scaled_w, scaled_h, framebuf.GS4_HMSB
+    )
+    # Scale up pixel by pixel
+    for y in range(EYE_H):
+        for x in range(EYE_W):
+            if buf.pixel(x, y):
+                scaled_buf.fill_rect(x * SCALE, y * SCALE, SCALE, SCALE, 15)
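The palette alternative mentioned in that comment can be modelled in plain Python: MicroPython's `framebuf.blit(fbuf, x, y, key, palette)` looks up each source color in a palette framebuffer, so a two-entry palette mapping 0→0 and 1→15 turns mono pixels into full-brightness grayscale. This `mono_to_gs4` helper is a hypothetical, off-device model of that lookup:

```python
GS4_ON = 15  # full brightness on a 4-bit (0-15) grayscale panel


def mono_to_gs4(pixels, palette=(0, GS4_ON)):
    """Map 1-bit pixel values through a tiny palette, mirroring what
    framebuf.blit() does when given a palette framebuffer."""
    return [[palette[p] for p in row] for row in pixels]
```

On-device, the equivalent would be a 2x1 GS4_HMSB FrameBuffer holding the values 0 and 15, passed as the palette argument to blit().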

    x_offset = (128 - scaled_w) // 2
    y_offset = (128 - scaled_h) // 2
Comment on lines +215 to +216

Copilot AI Apr 20, 2026

This hard-codes 128 for centering. Since the display object already exposes display.width / display.height, using those would keep the example correct if the driver is instantiated with a different resolution (and avoids duplicating the docstring assumption).
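That centering fix reduces to a one-liner driven by the display's own dimensions rather than a hard-coded 128. A small sketch (`centered_offset` is a hypothetical helper, assuming the width/height attributes the comment describes):

```python
def centered_offset(display_w, display_h, sprite_w, sprite_h):
    """Top-left corner that centers a sprite_w x sprite_h sprite
    on a display_w x display_h screen."""
    return (display_w - sprite_w) // 2, (display_h - sprite_h) // 2


# For the 96x96 scaled eye on the 128x128 panel, the offset is (16, 16).
assert centered_offset(128, 128, 96, 96) == (16, 16)
```

In the example itself this would be `centered_offset(display.width, display.height, scaled_w, scaled_h)`.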
    display.fill(0)
    display.framebuf.blit(scaled_buf, x_offset, y_offset)
    display.show()


# === Animation loop ===
try:
    while True:
        for frame, delay in zip(BLINK_FRAMES, FRAME_DELAYS):
            draw_eye(frame)
            sleep_ms(delay)
except KeyboardInterrupt:
    pass
finally:
    display.fill(0)
    display.show()