Tutorial Archive
March 25, 2013
This is what's left of the tutorials from the old blog, provided for reference.
Demos are unavailable, links are broken, and code may be outdated.
PNG files are handy for Stage3D games since they offer great compression and alpha channel support for transparent textures. If you make your PNG files in a program like Photoshop and try to render them, you might notice ugly black 'halos' or 'outlines' around any transparent parts, and even the solid colours may not look quite right.

The reason you're seeing blackness behind transparent pixels is that the PNG data ends up stored with pre-multiplied alpha (each colour channel multiplied by the alpha channel), and that really messes up the colours if not handled properly.

Luckily, we can fix this by 'un-multiplying' in the fragment shader quite easily.

//First, here's the most basic texture sampling shader you can get
fragmentShader = "tex oc, v1, fs0 <2d,repeat,linear>"; //sample texture and output RGB

//Here's what it looks like un-multiplied
fragmentShader =
"tex ft0, v1, fs0 <2d,repeat,linear> \n" + //sample texture
"div ft0.rgb, ft0.rgb, ft0.a \n" + //un-premultiply: divide colour channels by alpha
"mov oc, ft0"; //output fixed colour
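If you're assembling these strings with Adobe's AGALMiniAssembler (as used in the later tutorials on this page), the fixed shader compiles like any other fragment program. A minimal sketch, assuming 'context' is an existing Context3D and 'vertexAssembler' already holds an assembled vertex program:

```actionscript
import com.adobe.utils.AGALMiniAssembler;
import flash.display3D.Context3DProgramType;
import flash.display3D.Program3D;

var fragAssembler:AGALMiniAssembler = new AGALMiniAssembler();
fragAssembler.assemble(Context3DProgramType.FRAGMENT,
    "tex ft0, v1, fs0 <2d,repeat,linear> \n" + //sample texture
    "div ft0.rgb, ft0.rgb, ft0.a \n" +         //un-premultiply png
    "mov oc, ft0"                              //output fixed colour
);

var program:Program3D = context.createProgram();
program.upload(vertexAssembler.agalcode, fragAssembler.agalcode);
context.setProgram(program);
```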

This is a tutorial on how to set up Flash Professional CS5 to compile for Flash Player 11.
(also works for CS5.5. Not sure about CS4)


1. Getting the necessary files
  • a) Download and install the Flash Player 11 Beta 2 32-bit Content Debugger for whichever browser you want to work with (there is no 64-bit debug version).

  • b) From the same page, download the Flash Player 11 Beta 2 Global SWC and then rename it to "playerglobal.swc".

  • c) Download and extract Vizzy, a free tool which will let you capture trace() calls from within the browser so you can do basic debugging.

  • d) Download this XML file (right click and Save As...)


2. Setting up Flash Pro
  • a) Open your Flash Pro installation directory.
    example: C:\Program Files (x86)\Adobe\Adobe Flash CS5.5

  • b) Navigate to ...\Common\Configuration\ActionScript 3.0 and create a new folder named "FP11beta2".

  • c) Place the playerglobal.swc file from step 1b into the "FP11beta2" folder you just made.

  • d) Place the XML file you downloaded in step 1d into ...\Common\Configuration\Players

  • e) If you used any name other than "FP11beta2" for step 2c, open the XML file and adjust the line that references that folder name so it matches yours.

3. Compiling and Debugging
  • a) Flash Player 11 Beta 2 should now appear as an option in the Publish Settings of your FLA.

  • b) Select that option, make sure Flash is set to export an HTML file as well, and build the project.

  • c) If it throws errors during the build process: double-check you've followed all instructions correctly so far.
    If it builds but crashes immediately from a "VerifyError": you're fine, move on to the next step.
    If it builds correctly and runs: you're either fine or magic, move on to the next step.

  • d) So now you've successfully built a SWF for Flash Player 11, but Flash Pro won't run it once you start mucking with Context3D, and you don't have a standalone player either. Instead, open the HTML file you just published in step 3b.

  • e) If it runs: you're fine, move on to the next step. Otherwise? *shrug*

  • f) We need to modify the HTML file so Flash can actually use your GPU for hardware acceleration.
    First, turn off the HTML option in Publish Settings from now on so it never gets overwritten.
    Now open the HTML file in any editor and set all 'wmode' parameters to the value 'direct'.

  • g) Before you go test it, boot up Vizzy. No special configuration or installation; just start it up.

  • h) When you run your SWF, Vizzy should automatically pick up any trace() calls and errors, and Context3D.driverInfo should be able to list your video card and the renderer being used (DirectX/OpenGL).
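Once everything is wired up, a quick trace of Context3D.driverInfo (which Vizzy will pick up) tells you whether you actually got hardware acceleration. A small sketch, assuming 'context3D' is your initialized Context3D:

```actionscript
// assumes 'context3D' is an initialized Context3D
trace(context3D.driverInfo); // lists your video card and the renderer (DirectX/OpenGL)

// if you ended up in software rendering, wmode probably isn't 'direct'
if (context3D.driverInfo.toLowerCase().indexOf("software") != -1)
{
    trace("WARNING: running in software mode - check your wmode settings!");
}
```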


Congrats! You're done and ready to start making awesome stuff!


4. Additional Troubleshooting
  • If Vizzy isn't picking up any traces, make sure you installed the Content Debugger version of the flash player in step 1a and not the release versions.

  • If driverInfo says you're in software mode, check the HTML and make sure wmode is ALWAYS set to 'direct'.

  • If that doesn't help, make sure you aren't running any browser extensions that change the wmode when you load the page. HoverZoom on Chrome caused problems for me, and it's likely not the only one.

  • If that doesn't help, try a different browser.

  • If that doesn't help, contact Adobe with your OS/Video Card/Browser versions so they can work on better compatibility.

  • If that doesn't help, throw a hissy-fit and run down the street naked.

Breakdance McFunkyPants was kind enough to share his code for auto-generating mip-maps using Stage3D.

Mip-mapping means using low-resolution versions of textures to:
  • prevent rendering errors when viewing surfaces at severe angles
  • increase performance on distant objects

Let's say you have a 256x256 texture but are viewing an object that's so far away it only takes up 16x16 pixels of your screen. That's pretty wasteful, so mip-mapping tells the renderer that it should automatically choose the most appropriate texture size. You can make all the sizes by hand, or do it automatically with this code:


/*Note: All code is by McFunkyPants (http://www.mcfunkypants.com)

Keep an eye out for his upcoming book on Stage3D game programming, too!*/

public function uploadTextureWithMipmaps(dest:Texture, src:BitmapData):void
{
    var ws:int = src.width;
    var hs:int = src.height;
    var level:int = 0;
    var tmp:BitmapData;
    var transform:Matrix = new Matrix();

    tmp = new BitmapData(src.width, src.height, true, 0x00000000);

    while ( ws >= 1 && hs >= 1 )
    {
        tmp.draw(src, transform, null, null, null, true);
        dest.uploadFromBitmapData(tmp, level);
        transform.scale(0.5, 0.5);
        level++;
        ws >>= 1;
        hs >>= 1;
        if (hs && ws)
        {
            tmp.dispose();
            tmp = new BitmapData(ws, hs, true, 0x00000000);
        }
    }
    tmp.dispose();
}

private function init_texture(bmp:Bitmap):Texture
{
    var tex:Texture;
    tex = context3D.createTexture(bmp.width, bmp.height, Context3DTextureFormat.BGRA, false);
    uploadTextureWithMipmaps(tex, bmp.bitmapData);
    return tex;
}

This is a quick tutorial on how to set up Texture objects for rendering models in Stage3D. I'll cover how to:

1) create a Texture from some BitmapData
2) create a Texture from an image file in the library (Flash Pro)
3) tell the GPU which Texture to render with
4) sample the texture in your fragment shader
5) optimize your texture usage


/* #1. create a Texture from some BitmapData */
import flash.display.BitmapData;
import flash.display3D.Context3D;
import flash.display3D.textures.Texture;

var bdata:BitmapData; //assume we already have image data in here
var context:Context3D; //assume we have a rendering context

var tex:Texture = context.createTexture(bdata.width, bdata.height, "bgra", false); //ask the rendering context for a Texture object
tex.uploadFromBitmapData(bdata); //upload our texture to the GPU



/* #2. Create a Texture from an image file in the library (Flash Pro) */
import flash.display3D.Context3D;
import flash.display3D.textures.Texture;
import flash.utils.getDefinitionByName;

var context:Context3D; //assume we have a rendering context
var X:uint, Y:uint; //assume we know the dimensions of the image we'll be using

var c:Class = getDefinitionByName('myImage.png') as Class; //get the image file's class name from the library
var tex:Texture = context.createTexture(X, Y, "bgra", false); //ask the rendering context for a Texture object
tex.uploadFromBitmapData(new c()); //create the BitmapData from the library class and upload it

/* Note: you can store 'new c()' in a BitmapData variable
and then use .width and .height if you don't know X and Y */
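That note can be sketched out like so (still using the hypothetical 'myImage.png' library class from above):

```actionscript
var bdata:BitmapData = new c(); //instantiate the library image as BitmapData
var tex:Texture = context.createTexture(bdata.width, bdata.height, "bgra", false);
tex.uploadFromBitmapData(bdata);
```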



/* #3 Tell the GPU which Texture to render with */
context.setTextureAt(0, tex); //this Texture is now 'fs0' in the fragment shader. Simple!



/* #4 Sample the texture in your fragment shader */
"tex oc, v1, fs0 <2d,repeat,linear>" //sample texture (fs0) using UV coordinates (v1)

//Note: 'repeat' allows you to tile textures on a surface while 'clamp' locks the texture to the 0-1 UV range



#5. Optimization Tips:

  • Texture swapping is expensive; group models that share the same texture so you only need to swap once for the whole group.

  • If you have an animated character, put all their frames in the same texture sheet ('sprite sheet') and adjust the UV coordinates to point to the right section rather than using a bunch of separate image files.

  • If possible, combine textures for environments as well, or just try to use the same textures on as much as you can get away with (you can even combine textures in a BitmapData at load time!).

  • Always make sure your texture sizes are powers of two (e.g. 256x256). GPUs are designed to take advantage of these numbers for extra performance.

  • Keep your texture size as low as possible! Painting textures at high resolution is a good idea, but always scale them down as far as you can stand before exporting them from Photoshop (or whatever you use). You'd be surprised how small you can get things before losing detail!

  • Don't waste rendering power; use different fragment shaders for textures that are fully opaque and textures that have transparency. You can even use BitmapData.transparent (boolean) to check for alpha transparency data dynamically!

  • Use 'repeat' in fragment shader texture sampling to save on performance (render one model with a tiling texture vs. multiple models with the same texture) and load time/file size (one small texture repeated vs. a large texture)
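The sprite-sheet tip above boils down to offsetting your UVs per frame. A quick sketch, assuming a sheet laid out as a grid of equally-sized frames (the function and its parameters are hypothetical names for illustration):

```actionscript
// for a sheet of 'cols' x 'rows' equal frames, frame index 'f' counts
// left-to-right, top-to-bottom
function frameUV(f:int, cols:int, rows:int):Object
{
    var w:Number = 1 / cols; //width of one frame in UV space
    var h:Number = 1 / rows; //height of one frame in UV space
    var u:Number = (f % cols) * w;
    var v:Number = int(f / cols) * h;
    // the frame's corners run from (u, v) to (u + w, v + h);
    // either bake these into the vertex buffer or pass the offset as a shader constant
    return { u: u, v: v, w: w, h: h };
}
```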

I've been meaning to update the old Ambient Lighting tutorial for a while now, as I found that the effect it accomplished was not exactly what I intended. It could still be useful in its own right, but let's look at how to do ambient lighting the way you would expect it to work.

And while we're at it, let's throw in a three-point lighting setup, because I'm lazy and that's what's in the copy-pasted shader I'm grabbing from a project of mine. Besides, having multiple lights is really useful!

Much of this tutorial is identical to the last one with some additional constants (multiple lights!) and a new fragment shader. Don't be daunted by the length, much of it is just copy/paste and tweaking values.


/* First we'll set up lighting constants for the fragment shader */
context.setProgramConstantsFromVector("fragment", 0, Vector.<Number>([0,0,0,0])); //fc0, for clamping negative values to zero
context.setProgramConstantsFromVector("fragment", 1, Vector.<Number>([0.25,0.25,0.25,0])); //fc1, ambient lighting (1/4 of full intensity)

//key light
context.setProgramConstantsFromVector("fragment", 2, Vector.<Number>([0.37, 0.56, 0.74, 1])); //fc2, INVERSE light direction
context.setProgramConstantsFromVector("fragment", 3, Vector.<Number>([0.9,0.9,0.9,1])); //fc3, light color & intensity

//fill light
context.setProgramConstantsFromVector("fragment", 4, Vector.<Number>([-0.82, 0.41, 0.41, 1])); //fc4, INVERSE light direction
context.setProgramConstantsFromVector("fragment", 5, Vector.<Number>([0.6,0.5,0.4,1])); //fc5, light color & intensity

//backlight
context.setProgramConstantsFromVector("fragment", 6, Vector.<Number>([0, -0.1, -1, 1])); //fc6, INVERSE light direction
context.setProgramConstantsFromVector("fragment", 7, Vector.<Number>([0.2,0.25,0.3,1])); //fc7, light color & intensity

/* Our vertexbuffers are set up like so */
context.setVertexBufferAt(0, vertexbuffer, 0, "float3"); //va0 (x,y,z)
context.setVertexBufferAt(1, vertexbuffer, 3, "float3"); //va1 (normalx,normaly,normalz)
context.setVertexBufferAt(2, vertexbuffer, 6, "float2"); //va2 (u,v)

/* Assume these matrices have been set up with values already. */
var proj:PerspectiveMatrix3D; //the perspective
var view:Matrix3D; //the camera
var mod:Matrix3D; //the model

/* Set up our vertex transformation constant */
view.append(proj);
var modview:Matrix3D = mod.clone();
modview.append(view);
context.setProgramConstantsFromMatrix("vertex", 0, modview, true); //vc0

/* Set up our normal transformation constant */
var invmat:Matrix3D = mod.clone();
invmat.transpose();
invmat.invert();
context.setProgramConstantsFromMatrix("vertex", 4, invmat, true); //vc4

/* make sure you set a texture. I'll write a quick tutorial about creating Textures later */
context.setTextureAt(0, modelTexture);

/* vertex shader is the same as before and quite simple */
vertexShader =
"m44 vt0, va0, vc0 \n"+ //transform vertex x,y,z
"mov op, vt0 \n"+ //output vertex x,y,z
"m44 v1, va1, vc4 \n"+ //transform vertex normal, send to fragment shader
"mov v2, va2"; //move vertex u,v to fragment shader

/* Now the real work: going through each light (including ambient) and adding all of their effects together.
Note the reuse of temporary registers (especially ft1) in this shader. Why are we doing this?
Temporary registers are limited in number so if you don't reuse them you will run out! */
fragmentShader =
"nrm ft0.xyz, v1 \n"+ //normalize the fragment normal (v1)
"mov ft0.w, fc0.w \n"+ //set the w component to 0
"tex ft2, v2, fs0 <2d,clamp,linear> \n"+ //sample texture (fs0) using uv coordinates (v2)

"dp3 ft1, fc2, ft0 \n"+ //dot the transformed normal (ft0) with key light direction (fc2)
"max ft1, ft1, fc0 \n"+ //clamp any negative values to 0
"mul ft1, ft2, ft1 \n"+ //multiply original fragment color (ft2) by key light amount (ft1)
"mul ft3, ft1, fc3 \n"+ //multiply new fragment color (ft1) by key light color (fc3)

"dp3 ft1, fc4, ft0 \n"+ //dot the transformed normal (ft0) with fill light direction (fc4)
"max ft1, ft1, fc0 \n"+ //clamp any negative values to 0
"mul ft1, ft2, ft1 \n"+ //multiply original fragment color (ft2) by fill light amount (ft1)
"mul ft4, ft1, fc5 \n"+ //multiply new fragment color (ft1) by fill light color (fc5)

"dp3 ft1, fc6, ft0 \n"+ //dot the transformed normal (ft0) with back light direction (fc6)
"max ft1, ft1, fc0 \n"+ //clamp any negative values to 0
"mul ft1, ft2, ft1 \n"+ //multiply original fragment color (ft2) by back light amount (ft1)
"mul ft5, ft1, fc7 \n"+ //multiply new fragment color (ft1) by back light color (fc7)

"add ft1, ft3, ft4 \n"+ //add together first two light results (ft3, ft4)
"add ft1, ft5, ft1 \n"+ //add on third light result (ft5)

"mul ft2, ft2, fc1 \n"+ //multiply original fragment color (ft2) by ambient light intensity and color (fc1)
"add oc, ft2, ft1"; //add ambient light result (ft2) to combined three-light result (ft1) and output

Update July 14th: Fixed missing part of the fragment shader and added a note about binding a texture

It's been a while since my last Molehill tutorial. Working on five games at once doesn't leave much time for writing shaders and blogging. Still, I've snuck in some time to play with it and thought I would share my new lighting setup with everyone.

As always, I'm assuming you've got a handle on the basics first.

/* First we'll set up lighting constants for the fragment shader */
context.setProgramConstantsFromVector("fragment", 0, Vector.<Number>([0,0,0,0])); //fc0, for clamping negative values to zero
context.setProgramConstantsFromVector("fragment", 1, Vector.<Number>([0.25,0.25,0.25,0])); //fc1, ambient lighting (1/4 of full intensity)
context.setProgramConstantsFromVector("fragment", 2, Vector.<Number>([0,0,1,1])); //fc2, INVERSE light direction
context.setProgramConstantsFromVector("fragment", 3, Vector.<Number>([1,0,0,1])); //fc3, light color & intensity (pure red)

/* Our vertexbuffers are set up like so */
context.setVertexBufferAt(0, vertexbuffer, 0, "float3"); //va0 (x,y,z)
context.setVertexBufferAt(1, vertexbuffer, 3, "float3"); //va1 (normalx,normaly,normalz)
context.setVertexBufferAt(2, vertexbuffer, 6, "float2"); //va2 (u,v)

/* Assume these matrices have been set up with values already. */
var proj:PerspectiveMatrix3D; //the perspective
var view:Matrix3D; //the camera
var mod:Matrix3D; //the model

/* Set up our vertex transformation constant */
view.append(proj);
var modview:Matrix3D = mod.clone();
modview.append(view);
context.setProgramConstantsFromMatrix("vertex", 0, modview, true); //vc0

/* Set up our normal transformation constant. This is superior to how I did it in the previous lighting tutorial */
var invmat:Matrix3D = mod.clone();
invmat.transpose();
invmat.invert();
context.setProgramConstantsFromMatrix("vertex", 4, invmat, true); //vc4

/* make sure you have a texture set in memory. I'll add more detail about creating Textures later */
context.setTextureAt(0, modelTexture);

/* And now the fun stuff */
vertexShader =
"m44 vt0, va0, vc0 \n"+ //transform vertex x,y,z
"mov op, vt0 \n"+ //output vertex x,y,z
"m44 v1, va1, vc4 \n"+ //transform vertex normal, send to fragment shader
"mov v2, va2"; //move vertex u,v to fragment shader

fragmentShader =
"nrm ft0.xyz, v1 \n"+ //normalize the fragment normal
"mov ft0.w, fc2.w \n"+ //set the w component to 1
"tex ft2, v2, fs0 <2d,clamp,linear> \n"+ //sample texture

"dp3 ft1, fc2, ft0 \n"+ //dot the transformed normal with light direction
"max ft1, ft1, fc0 \n"+ //clamp any negative values to 0
"mul ft3, ft2, ft1 \n"+ //multiply fragment color by light amount
"mul ft4, ft3, fc3 \n"+ //multiply fragment color by light color

"add oc, ft4, fc1"; //add ambient light and output the color


It's really easy to add more lights, too: just add more fragment constants for direction & color, then repeat those middle four steps of the fragment shader for each light and add them all together at the end (note: not the best solution ever, but it works well enough for now).
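Those "middle four steps" can even be generated in a loop. A sketch (the helper name and register layout are just illustrative, mirroring the fc2/fc3 direction-and-color pairing above):

```actionscript
// builds the four-instruction block for one directional light;
// 'dirReg'/'colReg' are the fc indices holding direction and color,
// 'outReg' is the ft register that receives this light's result
function lightBlock(dirReg:int, colReg:int, outReg:int):String
{
    return "dp3 ft1, fc" + dirReg + ", ft0 \n" +             //dot normal with light direction
           "max ft1, ft1, fc0 \n" +                          //clamp negatives to 0
           "mul ft1, ft2, ft1 \n" +                          //scale texture color by light amount
           "mul ft" + outReg + ", ft1, fc" + colReg + " \n"; //tint by light color
}

// two lights using fc2/fc3 and fc4/fc5, with results in ft3 and ft4
var shaderBody:String = lightBlock(2, 3, 3) + lightBlock(4, 5, 4);
```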

Faster Vertex Lighting
March 7, 2011
This tutorial is deprecated. I've left it up for reference, but I highly recommend checking out the more up-to-date tutorials.

A previous tutorial outlined how to do vertex lighting in your vertex and fragment shaders. It had one major flaw: the normals for each vertex had to be transformed in AS3 every frame if the object they were part of was rotating.

The GPU is much faster at doing that sort of calculation than AS3, so let's offload the math to the GPU! Luckily, we don't need to add very much.

vertexShader =
"m44 vt0, va0, vc0 \n"+ //transform vertex x,y,z
"mov op, vt0 \n"+ //output vertex x,y,z
"mov v0, vt0.z \n"+ //move vertex z to fragment shader
"mov v1, va1 \n"+ //move vertex r,g,b to fragment shader
"m44 v2, va2, vc4"; //transform vertex normal, send to fragment shader

fragmentShader =
"dp3 ft0, fc1, v2 \n"+ //dot the transformed normal with light direction
"mul ft1, fc2, ft0 \n"+ //invert light amount to get correct direction
"mul ft0, ft1, v1 \n"+ //multiply vertex r,g,b by light amount
"mul ft2, fc0, v0 \n"+ //multiply fog r,g,b by fog strength (vertex depth)
"add oc, ft2, ft0"; //add fog r,g,b to lit vertex r,g,b

/* we're using a vertexBuffer that is (x,y,z, r,g,b, nx,ny,nz)
nx,ny,nz is the vertex normal that you should already have calculated */
context.setVertexBufferAt(0, vertexBuffer, 0, "float3"); //va0
context.setVertexBufferAt(1, vertexBuffer, 3, "float3"); //va1
context.setVertexBufferAt(2, vertexBuffer, 6, "float3"); //va2

/* now we need to tell the GPU how the object is rotated
so let's put together a rotation matrix */
var rotmat:Matrix3D = new Matrix3D();
var r:Vector3D = mesh.rotation; //assuming your mesh stores its rotation as a Vector3D
if(r.x != 0) { rotmat.appendRotation(r.x, new Vector3D(1,0,0)); }
if(r.y != 0) { rotmat.appendRotation(r.y, new Vector3D(0,1,0)); }
if(r.z != 0) { rotmat.appendRotation(r.z, new Vector3D(0,0,1)); }

/*now we set up our vertex buffers and constants */
context.setVertexBufferAt(0, mesh.vertexBuffer, 0, "float3");
context.setVertexBufferAt(1, mesh.vertexBuffer, 3, "float3");
context.setVertexBufferAt(2, mesh.vertexBuffer, 6, "float3");
context.setProgramConstantsFromMatrix("vertex", 0, mesh.mat, true);
context.setProgramConstantsFromMatrix("vertex", 4, rotmat, true);
//note the 4 above. Matrices take up 4 registers each

/*here are the fragment constants for fog and the directional light */
context.setProgramConstantsFromVector("fragment", 0,
Vector.<Number>([color.x, color.y, color.z, 1]));
context.setProgramConstantsFromVector("fragment", 1,
Vector.<Number>([0,0,-1,1]));
context.setProgramConstantsFromVector("fragment", 2,
Vector.<Number>([-1,-1,-1,1]));

Not so bad, right? By using this technique over the previous tutorial you can push out hundreds of thousands of extra polygons in your scene.

AGAL Basics
May 4, 2011
I've written several tutorials on how to achieve some basic effects in your Flash Player 11 shaders using the Adobe Graphics Assembly Language (AGAL). I thought it would be a good idea to put up a quick reference guide for what the various niggly little registers and things actually are.

Consider it AGAL 101:

In vertex shaders:
va[x] = values from a vertex buffer
set with context.setVertexBufferAt(x,buffer, ...);

vc[x] = constants
set with context.setProgramConstantsFromMatrix/Vector("vertex",x, ...);

vt[x] = temporary register, useful for multi-step calculations
set in the AGAL code (ex: "add vt0, va0, vc0")

v[x] = varying register
used to pass values to the fragment shader (ex: "mov v0, va0")

op = vertex output (ex: "mov op, va0")

In fragment shaders:
v[x] = varying register, values passed from the vertex shader (explained above)

fc[x] = constants,
set with context.setProgramConstantsFromMatrix/Vector("fragment",x, ...);

ft[x] = temporary register, useful for multi-step calculations
set in the AGAL code (ex: "add ft0, fc0, v0")

fs[x] = textures,
set with context3D.setTextureAt(x, ...);

oc = colour output (ex: "mov oc, ft0")


If you get a handle on those registers and their corresponding Actionscript 3 functions, you'll be able to start writing your own shaders in no time.
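Putting those registers together, here's about the smallest complete shader pair that touches each type (a sketch, assuming a vertex buffer with position in va0 and colour in va1, a transform matrix uploaded to vc0, and a colour offset uploaded to fc0):

```actionscript
vertexShader =
"m44 vt0, va0, vc0 \n"+ //va0 (buffer) * vc0 (constant matrix) into vt0 (temporary)
"mov op, vt0 \n"+       //op: the transformed vertex position output
"mov v0, va1";          //v0: varying register carrying colour to the fragment shader

fragmentShader =
"add ft0, v0, fc0 \n"+  //v0 (varying) + fc0 (fragment constant) into ft0 (temporary)
"mov oc, ft0";          //oc: the final colour output
```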

Vertex Lighting
May 2, 2011
This tutorial is deprecated. I've left it up for reference, but I highly recommend checking out the more up-to-date tutorials.

Continuing on with my series of Molehill experiments/tutorials, we come to vertex lighting (+the distance fog from before).

At this point I've started building up my code as an actual engine, so it would be confusing to copy and paste whole sections of code with engine function calls and everything. What I will do is just try to focus on the most important snippets and trust you'll know where to plug them in.


vertexShader =
"m44 vt0, va0, vc0 \n"+ //transform vertex x,y,z (va0) through view matrix (vc0)
"mov op, vt0 \n"+ //output transformed vertex x,y,z (vt0)
"mov v0, vt0.z \n"+ //move vertex z to fragment shader (v0)
"mov v1, va1 \n"+ //move vertex r,g,b to fragment shader (v1)
"mov v2, va2"; //move vertex normal to fragment shader (v2)

fragmentShader =
"dp3 ft0, fc1, v2 \n"+ //dot the vertex normal (v2) with light direction (fc1)
"mul ft1, fc2, ft0 \n"+ //invert light amount (ft0) to get correct direction
"mul ft0, ft1, v1 \n"+ //multiply vertex r,g,b (v1) by light amount (ft1)
"mul ft2, fc0, v0 \n"+ //multiply fog r,g,b (fc0) by vertex depth (v0)
"add oc, ft2, ft0"; //add fog r,g,b (ft2) to lit vertex r,g,b (ft0)

/* we're using a vertexBuffer that is (x,y,z, r,g,b, nx,ny,nz)
nx,ny,nz is the vertex normal that you should already have calculated */
context.setVertexBufferAt(0, vertexBuffer, 0, "float3"); //va0
context.setVertexBufferAt(1, vertexBuffer, 3, "float3"); //va1
context.setVertexBufferAt(2, vertexBuffer, 6, "float3"); //va2

/* the basic vertex transforming view matrix you always have */
context.setProgramConstantsFromMatrix("vertex", 0, mat, true); //vc0

/* set up fog color */
context.setProgramConstantsFromVector("fragment", 0,
Vector.<Number>([color.x, color.y, color.z, 1])); //fc0

/* set up a directional light vector
In this case it's directly down the -Z axis into the screen */
context.setProgramConstantsFromVector("fragment", 1,
Vector.<Number>([0,0,-1,1])); //fc1

/* set up an inverting vector or else our light direction will be backwards */
context.setProgramConstantsFromVector("fragment", 2,
Vector.<Number>([-1,-1,-1,1])); //fc2

For this example, I:
-calculate the vertex normals when creating the cube
-then calculate the transformed vertex normals each frame based on the cube's rotation in AS3
-and pass those transformed normals into the vertex buffer.

My next step is to figure out how to get that transforming done on the GPU instead as it would likely be faster.

I'm starting to get the hang of Adobe Graphics Assembly Language (AGAL) a bit now. I've got a rough sense of what all the registers are and how to use them. As a result I have a massively improved distance fog shader which works per-vertex and doesn't care what your camera orientation is (as it uses the final vertex depth after being transformed through your view matrix).



/* Some constants for us */
private const FARPLANE:Number = 10000;
private const INVFARPLANE:Number = 1/FARPLANE;

/* Set up the fragment constant for a cyan fog color (fc0) */
context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT,
0, Vector.<Number>([0, INVFARPLANE, INVFARPLANE, 1]));

/* Set up vertex shader which will pass colour and depth to the fragment shader */
var vertexShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();
vertexShaderAssembler.assemble(Context3DProgramType.VERTEX,
"m44 vt0, va0, vc0 \n"+ //transform vertex x,y,z
"mov op, vt0 \n"+ //output vertex x,y,z
"mov v0, vt0.z \n"+ //move vertex z (depth) to fragment shader
"mov v1, va1" //move vertex r,g,b to fragment shader
);

/* Set up a fragment shader */
var fragmentShaderAssembler:AGALMiniAssembler= new AGALMiniAssembler();
fragmentShaderAssembler.assemble(Context3DProgramType.FRAGMENT,
"mul ft0, fc0, v0 \n"+ //multiply fog r,g,b by vertex z (depth)
"add ft1, ft0, v1 \n"+ //add final fog r,g,b to vertex r,g,b
"mov oc, ft1" //output color
);

That wasn't so bad!

There are still improvements to be done but this should get you started if you want distance fog in your games.

Per-Polygon Distance Fog
February 28, 2011
This tutorial is deprecated. I've left it up for reference, but I highly recommend checking out the more up-to-date tutorials.

There isn't much information out there right now on the Molehill API and how to program for it, so unless you work on a 3D engine framework like Away3D you are really flying blind right now. I thought I would throw my meager experiments up here in the hopes that someone can run with them and do much cooler stuff (and hopefully share code of their own!)

Disclaimer: I'm making this shit up as I go. I am far from being an experienced 3D programmer and don't really know what I am doing.
I'm not going to go over how to set up Flash CS5 or Flex to compile Flash Player 11 builds, because there are a number of tutorials on that already. I'm going to assume you've been able to get something like this great starter source code by Ryan Speets running.

/* Create a square object. Structure is (x,y,z, r,g,b) for each vertex */
vertexBuffer = context.createVertexBuffer(4, 6);
vertexBuffer.uploadFromVector(Vector.<Number>([
0, 10, 0, 0.5,0.5,0.5,
10,10, 0, 0.5,0.5,0.5,
10, 0, 0, 0.5,0.5,0.5,
0, 0, 0, 0.5,0.5,0.5]), 0, 4);
indexBuffer = context.createIndexBuffer(6);
indexBuffer.uploadFromVector(Vector.<uint>([0, 1, 2, 0, 2, 3]), 0, 6);

/* Set up our vertex buffer */
context.setVertexBufferAt(0, vertexBuffer, 0, "float3"); //x,y,z into va0
context.setVertexBufferAt(1, vertexBuffer, 3, "float3"); //r,g,b into va1

/* Basic vertex shader */
var vertexShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();
vertexShaderAssembler.assemble(Context3DProgramType.VERTEX,
"m44 op, va0, vc0 \n"+ //process x,y,z (va0) through view matrix constant (vc0)
"mov v0, va1" //pass r,g,b values (va1) to fragment shader (as v0)
);

/* Per-polygon Fog Shader */
var fragmentShaderAssembler:AGALMiniAssembler= new AGALMiniAssembler();
fragmentShaderAssembler.assemble(Context3DProgramType.FRAGMENT,
"mul ft0, fc1, v0 \n"+ //multiply color (v0) by fog fragment constant (fc1)
"mov oc, ft0" //output colour
);

/* Setting up camera/model matrices is for another tutorial.
I'm assuming you have a model matrix 'mat' ready to go */
context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, mat, true);

/* Now for each polygon you are rendering,
figure out a fog intensity value based on its depth.
To keep things simple I'm just using the Z coordinate of each object.
This assumes you are looking straight down the Z axis.
If you had a rotating camera, you would need to do some more fancy math */
private const FARPLANE:Number = 10000;
var depth:Number = 1+(obj.position.z / FARPLANE);

/* Now we set up the fog fragment constant 1 (fc1) vector structure is (r,g,b,a)
In this case I'm going to use a fog that ADDS RED as objects get farther away. */
context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT,
1, Vector.<Number>([depth, 1, 1, 1]));
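To sanity-check the depth math, here's a tiny sketch of the values that formula produces (the helper name is just for illustration):

```actionscript
const FARPLANE:Number = 10000;

function fogDepth(z:Number):Number
{
    return 1 + (z / FARPLANE); //same formula as above
}

trace(fogDepth(0));      // 1.0  - object right at the camera plane
trace(fogDepth(-5000));  // 0.5  - halfway to the far plane
trace(fogDepth(-10000)); // 0.0  - at the far plane
```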