What does "fsh" mean?

A short history of the rendering algorithm

Short Answer

It doesn't mean anything.

Long, possibly rambling answer

Actually, the short answer is "floating-point, shaded", but what does that mean? Well, here goes....

The program that generates all these pictures has a long genealogy of lesser programs, dating back almost 10 years. These started out as 2-D attractor-drawing programs which drew Henon attractors, so these programs tended to have "henon" in their names. Three or four years ago, I made a version of this program to produce high-resolution renderings of various 2-D attractors, to be printed on my laser printer.

At about the same time, I visited a friend's house, and one of the other people living there was an artist who was strongly influenced by the art of Roger Dean (the Yes album covers...). He had some sketches on the wall that he had done of abstract objects that I thought looked amazingly similar to some of the attractors I had been generating, except that his drawings were of 3-D objects.

This caused me to think about how to do 3-D attractors. I had recently bought Pickover's "Computers, Pattern, Chaos and Beauty", which had an equation for a 3-D attractor, so I modified the above-mentioned program to handle this 3-D attractor (the name stayed "henon", now called "henon3d", although the attractor was no longer a Henon attractor).
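For the curious, a minimal Python sketch of iterating a Pickover-style 3-D attractor map. The formula below is a commonly cited Pickover 3-D attractor; whether it matches the exact equation in the book (or the small variation actually used here) is an assumption, and the parameter values are just illustrative:

```python
import math

# Example parameter values (illustrative, not the ones used for the pictures).
A, B, C, D = 2.24, 0.43, -0.65, -2.43

def pickover_step(x, y, z):
    """One iteration of a Pickover-style 3-D map; the orbit traces the attractor."""
    return (math.sin(A * y) - z * math.cos(B * x),
            z * math.sin(C * x) - math.cos(D * y),
            math.sin(x))

# Iterate to generate a cloud of points on the attractor.
x, y, z = 0.0, 0.0, 0.0
points = []
for _ in range(10000):
    x, y, z = pickover_step(x, y, z)
    points.append((x, y, z))
```

Since each coordinate is a sum of sines and cosines (scaled by z, which is itself a sine), the orbit stays bounded no matter how long it runs, which is what makes the point cloud renderable at all.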


My first tries at rendering the attractor were to make stereo images with just points, making pictures that looked quite similar to those I had seen before.

example here?


The stereo images worked rather well, but I figured I could do better. Thus I added a Z-buffer to the algorithm. With a Z-buffer, I could do visibility calculations, making more realistic renderings possible. The first few passes had no lighting calculations: each point could be thought of as a very small sphere, represented by 4 pixels:
	white  gray
	gray   black

This gave the illusion of a sphere shaded with a light source to the upper left. I also placed the gray and black pixels at a slightly greater depth than the white pixel, so that when a bunch of "spheres" overlapped to produce a smooth surface, the surface would look smooth; otherwise the most recently calculated points would seem to sit on top of the previous points instead of blending into the surface.
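The splat-plus-depth-offset idea above can be sketched in Python as follows. All the names, the image size, and the depth offset EPS are illustrative assumptions, not the actual program:

```python
W, H = 64, 64
EPS = 0.01  # darker pixels sit slightly deeper, so overlapping splats blend smoothly

zbuf  = [[float('inf')] * W for _ in range(H)]  # depth buffer, smaller = closer
image = [[0] * W for _ in range(H)]             # grayscale output, 0..255

# The 2x2 pre-shaded "sphere":   white  gray
#                                gray   black
# Each entry is (dx, dy, intensity, extra depth).
SPLAT = [(0, 0, 255, 0.0),
         (1, 0, 128, EPS),
         (0, 1, 128, EPS),
         (1, 1,   0, EPS)]

def plot_point(px, py, z):
    """Draw one attractor point as a tiny pre-shaded sphere, with a Z-buffer test."""
    for dx, dy, val, dz in SPLAT:
        x, y = px + dx, py + dy
        if 0 <= x < W and 0 <= y < H and z + dz < zbuf[y][x]:
            zbuf[y][x] = z + dz   # closer point wins
            image[y][x] = val
```

Because the gray and black pixels carry the extra EPS of depth, a later point landing next to an earlier one overwrites the earlier point's dark edge pixels with its own white pixel, which is exactly what makes a dense cluster of splats read as one smooth surface.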


I called the outputs of this program "henon3dh1", "henon3dh2", etc. (the "h" meant "shaded").

Gradient Shading

I soon realized that I could get more realistic (i.e. grayscale) shading by using the depth information in the Z-buffer. In a post-processing step, I could apply a directional derivative to the Z-buffer and get a shaded version of the image (this is a simple form of what is often called "gradient shading"):

Convolution Kernel:

	1  0
	0 -1     
(scaled by some appropriate factor)

I first tried this by generating an image of the Z-buffer, with intensity representing height, and using the "Custom" filter in Photoshop to perform the convolution. This worked, but the 8 bits used to represent depth were not enough for a satisfactory illusion of depth: the 256 discrete depth levels were visible in the picture, as if it had been sliced up and stacked back together. So I instead wrote some code to do the same calculation on the original floating-point Z-buffer data. These pictures were surprisingly good.


I called the pictures calculated using this method "floatshade1", "floatshade2", etc. for "floating-point, shaded" (this later was shortened to "fs*"), referring to the fact that the full floating point Z-buffer data was used to calculate the picture rather than 8-bit data.

As a side note, I later realized that the above shading did not have to be a post-processing step. The image could be calculated at the same time the Z-buffer was being filled: since each Z-buffer value affects only two pixels (see the convolution kernel above), each time a Z-buffer value changes, the two pixels it affects can be recalculated (simply a subtraction and a scaling for each pixel). This became my standard preview method (used when selecting parameters and viewpoint).
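A minimal sketch of that incremental update, under the same illustrative names and scale/bias assumptions as before. The entry at (x, y) appears in the kernel's "+1" position for the pixel at (x, y) and in the "-1" position for the pixel at (x-1, y-1), so those are the only two pixels to touch:

```python
def update_pixel(image, zbuf, x, y, scale=1.0, bias=128):
    """Recompute one shaded pixel from the two Z-buffer entries it reads."""
    d = zbuf[y][x] - zbuf[y + 1][x + 1]
    image[y][x] = max(0, min(255, int(bias + scale * d)))

def write_depth(image, zbuf, x, y, z, scale=1.0, bias=128):
    """Z-buffer write plus the two affected pixel recalculations."""
    h, w = len(zbuf), len(zbuf[0])
    if z < zbuf[y][x]:
        zbuf[y][x] = z
        if x + 1 < w and y + 1 < h:
            update_pixel(image, zbuf, x, y, scale, bias)          # reads (x,y) as its '+' term
        if x > 0 and y > 0:
            update_pixel(image, zbuf, x - 1, y - 1, scale, bias)  # reads (x,y) as its '-' term
```

This is why it works as a live preview: the cost per plotted point is constant (one depth compare plus at most two subtract-and-scale operations) regardless of image size.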


I soon wearied of this style of shading, since my simple implementation could only approximate light sources in the plane of the Z-buffer (i.e. I couldn't have a light source behind the object or near the eye; I could only light the object from the side). So I decided I should implement a more complex lighting model. I extended the program to derive a surface-normal vector for each point and use it in a standard lighting model (diffuse reflection only, for the moment). I also figured that if I was going to the trouble of implementing a real lighting model, I might as well add the ability to do shadows (see another page for a more technical description of the rendering algorithm).
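A sketch of the diffuse (Lambertian) half of such a lighting model, here deriving the normals from Z-buffer finite differences rather than from the attractor points themselves; that choice, the names, and the default light direction (near the eye, which the gradient-shading method couldn't do) are all illustrative assumptions:

```python
import math

def diffuse_shade(zbuf, light=(0.0, 0.0, 1.0)):
    """Shade a floating-point Z-buffer with diffuse reflection only.

    The surface normal at each pixel is taken from depth gradients as
    (-dz/dx, -dz/dy, 1), then normalized; intensity is max(0, N.L)."""
    h, w = len(zbuf), len(zbuf[0])
    lx, ly, lz = light
    ll = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / ll, ly / ll, lz / ll          # normalize light direction
    out = [[0] * (w - 1) for _ in range(h - 1)]
    for y in range(h - 1):
        for x in range(w - 1):
            dzdx = zbuf[y][x + 1] - zbuf[y][x]
            dzdy = zbuf[y + 1][x] - zbuf[y][x]
            nx, ny, nz = -dzdx, -dzdy, 1.0
            nl = math.sqrt(nx * nx + ny * ny + nz * nz)
            lam = max(0.0, (nx * lx + ny * ly + nz * lz) / nl)  # clamped N.L
            out[y][x] = int(255 * lam)
    return out
```

Unlike the fixed diagonal kernel, the light vector here can point anywhere, which is exactly the freedom the gradient-shading step lacked.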

I called the pictures calculated using this method "fsh1", "fsh2", etc. for "Floating-point, SHadows".

The program was now essentially in its current state: it is the program that calculated all the pictures on the previous page. Recent additions have been minor compared to what came before (new attractors, an upgraded lighting model, pathetic attempts at a user interface, etc.).

I soon decided to try other attractors, especially after I found Sprott's book. Thus I had to name the pictures of attractors which used the other equations "chaos", "julia", etc. But I kept the name "fsh" for the attractors which used the original formula, a very small variation on the formula in Pickover's book. So I call those attractors "fsh".

The end.

Back Up
Tim Stilson, 3/8/95