Last November I attended the GIRAF independent animation festival in Calgary. I have gone a few years in a row now, and it is always well curated and a lot of fun. A short by Paris-based artist Mattis Dovier was one of my favorite films of the festival. Check it out below:
I appreciated the lo-fi, ’90s video game aesthetic that the artist created in the short film. It reminded me of the MS-DOS video games I grew up on, like Commander Keen and Math Blaster. Nostalgia is a powerful thing.
Dovier uses a graphics technique called “dither” extensively in his animation. The short is entirely black and white, so dither is used to create shading and depth. Wikipedia describes dither as:
An intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images.
Dither is a versatile concept with many cross-domain applications; it is especially useful when processing audio, image, and video data. Some other uses for dither that I found interesting include:
- The mastering of audio in order to override harmonic tones produced by some digital filters
- Reduction of color banding and other visual artifacts during image processing
- Seismic data processing
- The addition of random delay (referred to as temporal buffering) to financial order flow in order to reduce the effectiveness of high frequency trading
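To see what “randomizing quantization error” actually buys you, here is a small sketch (my own, not from any of the applications above) that quantizes a quiet sine wave with and without triangular (TPDF) dither noise. The signal’s amplitude is below half a quantization step, so plain rounding erases it entirely, while the dithered version still carries information about it:

```python
import numpy as np

rng = np.random.default_rng(0)


def quantize(signal, step):
    """Round each sample to the nearest multiple of step."""
    return np.round(signal / step) * step


def quantize_with_dither(signal, step):
    """Add triangular (TPDF) noise of up to one step before quantizing,
    which randomizes the quantization error instead of correlating it
    with the signal."""
    noise = rng.triangular(-step, 0.0, step, size=signal.shape)
    return quantize(signal + noise, step)


t = np.linspace(0, 1, 1000)
signal = 0.01 * np.sin(2 * np.pi * 5 * t)  # amplitude below half a step

plain = quantize(signal, 0.05)             # rounds every sample to zero
dithered = quantize_with_dither(signal, 0.05)
```

Without dither the output is silence; with dither, individual samples are noisy but their average tracks the original waveform, which is the same trick the image examples below play on the eye.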
Dither in image processing
A common use of dither in image processing is to reduce visual artifacts and preserve information when moving to a more restricted color space. Today’s screens and graphics cards support the display of more than 24-bit color (16+ million different colors). Older screens, however, did not have this capability, and even today some media formats are more restrictive than you may realize (GIF files only support 256 unique colors).
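You can see that GIF-style restriction directly with Pillow. This little sketch (my own, not from the post) builds a gradient containing tens of thousands of distinct colors, then uses `Image.quantize` to crush it down to a 256-color palette like a GIF would:

```python
from PIL import Image

# Build a 256x256 RGB gradient: each (x, y) gets its own color,
# so the image contains 65536 distinct colors.
img = Image.new('RGB', (256, 256))
img.putdata([(x, y, (x + y) // 2) for y in range(256) for x in range(256)])
print(len(img.getcolors(maxcolors=1 << 24)))  # 65536

# Reduce to a GIF-like 256-color palette.
gif_like = img.quantize(colors=256)
print(len(gif_like.getcolors()))              # at most 256
```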
We can demonstrate visual artifacts associated with a reduction in color space by converting an image to monochrome (black and white). The Python function below changes each pixel in the source image to whichever color it is closest to, black or white.
```python
#!/usr/bin/env python3
from PIL import Image


def monochrome(source):
    """Convert an image from color to black and white.

    Ref: https://stackoverflow.com/a/18778280
    """
    img = Image.open(source)
    greyscale = img.convert('L')
    monochrome = greyscale.point(lambda x: 0 if x < 128 else 255, '1')
    return monochrome
```
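To see the thresholding step in isolation, here is a small sketch (mine, not part of the original function) that applies the same `point()` operation to an in-memory greyscale gradient instead of a file on disk:

```python
from PIL import Image

# An 8-pixel greyscale ramp from black to near-white.
grad = Image.new('L', (8, 1))
grad.putdata([0, 36, 72, 108, 144, 180, 216, 252])

# The same threshold-at-128 conversion monochrome() performs.
bw = grad.point(lambda x: 0 if x < 128 else 255, '1')
print(list(bw.getdata()))  # [0, 0, 0, 0, 255, 255, 255, 255]
```

Everything below mid-grey snaps to black and everything above snaps to white, which is exactly where the detail loss in the next image comes from.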
As you can see in the image above, converting an image to a more restrictive color space can lose a great deal of detail. This is an extreme example: 1-bit monochrome is the most restrictive color space possible!
Applying a dithering step to this process is a way of preserving detail by tricking the brain while still moving to a more restrictive palette of colors.
There are over ten different implementations of the dithering algorithm, all of which look slightly different and produce distinct visual patterns. The display medium for the image (animation, print, screen) and the size of the output are considerations when picking an algorithm.
Below is a Python implementation of Floyd-Steinberg dithering. The Floyd-Steinberg algorithm makes use of the concept of error diffusion. The residual error from the conversion of a pixel is passed to its neighbours. If the algorithm has rounded a lot of pixels in one direction, it becomes more likely that the next pixel will be rounded in the other direction. This error diffusion is responsible for the pointillism effect which preserves image detail and fools our brain.
```python
#!/usr/bin/env python3
from PIL import Image
import numpy as np


def fl_dither(image, palette):
    """Convert the colors in image to those in the supplied palette
    using the Floyd-Steinberg dithering algorithm."""
    img = Image.open(image)
    for y in range(img.height):
        for x in range(img.width):
            oldpixel = img.getpixel((x, y))
            newpixel = closest_color(oldpixel, palette)
            img.putpixel((x, y), newpixel)
            error = np.subtract(oldpixel, newpixel)

            if x < img.width - 1:
                diffuse(img, x+1, y, error, 0.4375)
            if x > 0 and y < img.height - 1:
                diffuse(img, x-1, y+1, error, 0.1875)
            if y < img.height - 1:
                diffuse(img, x, y+1, error, 0.3125)
            if x < img.width - 1 and y < img.height - 1:
                diffuse(img, x+1, y+1, error, 0.0625)
    return img


def diffuse(img, x, y, error, coeff):
    """Diffuse the conversion error at (x, y) to neighbouring pixels."""
    newpixel = np.add(img.getpixel((x, y)), np.multiply(error, coeff))
    img.putpixel((x, y), tuple(np.int_(newpixel)))


def closest_color(pixel, palette):
    """Return the closest color in palette to the provided pixel."""
    array = np.asarray(palette) - pixel
    index = np.argmin(np.linalg.norm(array, axis=1))
    return palette[index]
```
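The back-and-forth rounding described above is easiest to see in one dimension. This toy sketch (my own, simpler than Floyd-Steinberg’s split weights) pushes the entire error onto the next pixel, so a row of flat mid-grey alternates between black and white and the average brightness is preserved:

```python
def diffuse_1d(values, levels=(0, 255)):
    """One-dimensional error diffusion: quantize each value to the
    nearest level and carry the full residual error to the next value."""
    out = []
    error = 0.0
    for v in values:
        v = v + error
        nearest = min(levels, key=lambda lvl: abs(lvl - v))
        error = v - nearest
        out.append(nearest)
    return out


# A flat mid-grey row alternates between white and black: each rounding
# up makes the next value round down, keeping the average near 128.
print(diffuse_1d([128] * 8))  # [255, 0, 255, 0, 255, 0, 255, 0]
```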
The black and white photo of Gracey created using Floyd-Steinberg dithering retains significantly more visual information than the image produced by the monochrome Python function from earlier. Shading is visible in the dithered photo, represented by differing densities of black and white pixels. The new image also possesses that ’90s video game feel I mentioned at the beginning of this post.