(1)    Example of input geometry, each part assigned a unique colour (clipping is not relevant to the effect).

(2)    First the COP is prepared and exported to SOP level, renaming the plane to Cd so the RGB values are written as point colour. The image plane can then be moved around in 3D space; the effect is produced by moving the plane so it intersects the object. For each pixel a path is randomly bounced around the space, averaging the colour at each step, which creates the soft blurring as the path traces around the geometry.
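A minimal sketch of that per-pixel random-walk averaging, written in Python/NumPy rather than the actual COP/SOP network; the function name `random_walk_blur`, the step count and step size are illustrative assumptions, not values from the original setup.

```python
import numpy as np

def random_walk_blur(image, steps=32, step_size=2, seed=0):
    """For each pixel, bounce a short random path around the image and
    average the colours sampled along it (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    acc = image.astype(np.float64).copy()       # include the starting sample
    py, px = ys.astype(np.float64), xs.astype(np.float64)
    for _ in range(steps):
        # random bounce: small offset in a random direction per pixel
        py += rng.uniform(-step_size, step_size, size=(h, w))
        px += rng.uniform(-step_size, step_size, size=(h, w))
        py = np.clip(py, 0, h - 1)
        px = np.clip(px, 0, w - 1)
        acc += image[py.astype(int), px.astype(int)]
    return acc / (steps + 1)                    # running average -> soft blur
```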


(3)    For the object tagging, the images are first converted into points. The points are then projected into two spaces: their original stationary position, and colour space, with RGB representing XYZ coordinates. As each piece of the original object has already been separated by colour, we can perform a lookup to identify the densest region of each colour; the average XYZ value informs the position of a *relatively accurate* label.
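A minimal sketch of that lookup, assuming the points live in NumPy arrays with one integer colour id per point; the helper name `label_positions` is hypothetical, and the densest-region search is simplified to a plain per-colour centroid for illustration.

```python
import numpy as np

def label_positions(positions, colour_ids):
    """For each colour id, average the XYZ positions of its points to
    place an approximate label near that piece of the object."""
    labels = {}
    for cid in np.unique(colour_ids):
        pts = positions[colour_ids == cid]
        labels[cid] = pts.mean(axis=0)   # average XYZ -> label position
    return labels
```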

(4)    Example of points moving between XYZ and RGB space.
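The transition in the image can be sketched as a simple blend between each point's world position and its colour read as a position; the blend parameter `t` and function name are illustrative assumptions.

```python
import numpy as np

def blend_spaces(P, Cd, t):
    """Interpolate each point between its XYZ position (t=0) and its
    RGB colour interpreted as a position (t=1)."""
    return (1.0 - t) * P + t * Cd
```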


(5)    Example of using Cd values to drive a custom velocity field and colour for a grains simulation.
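A minimal sketch of how Cd might be remapped into a per-point velocity for the grains simulation, assuming the common convention of centring 0..1 colour values around zero and scaling; the scale value is an assumption, not taken from the original setup.

```python
import numpy as np

def velocity_from_cd(Cd, scale=1.0):
    """Remap 0..1 colour values to a signed velocity vector per point,
    which can then drive a custom velocity field for the grains."""
    return (Cd - 0.5) * 2.0 * scale   # Cd.r,g,b -> v.x,y,z in [-scale, scale]
```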