Rendering a Gravitational Wave
This post is about rendering a gravitational wave (which may or may not be an actual thing) using the marching cubes algorithm. If you're wondering what a gravitational wave is, I point you to the Wikipedia article: it's a curvature in space and time that propagates as a wave traveling outward from its source. Whoever discovers the existence of such waves will win a Nobel Prize in Physics. In this post I'll describe the dataset and how I went about rendering such a wave. I'll also talk about optimizing RenderMan, in this case PRMan 15.1, when drawing 3 million transparent triangles.
The data I was using was already conveniently defined on a 200x200x200 grid, and the visualization software already had a plugin for reading and storing it, which saved me some time. I also wrote some Python scripts on the side to save more time in rendering and to figure out what the data I was dealing with actually looked like. That exploration is what led me to use marching cubes to visualize the volume.
I started by rendering RiPoints, using the HSL color space and increasing the hue value with the density (normalized by the maximum value), which resulted in this render.
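In sketch form, the mapping was something like this, using only Python's standard colorsys module. The exact hue range is illustrative (a red-to-blue sweep), not necessarily the precise one I used:

```python
import colorsys

def density_to_rgb(density, max_density):
    """Map a density value to RGB by varying hue, holding
    lightness and saturation fixed (HSL color space)."""
    t = min(density / max_density, 1.0)   # normalize by the max value
    hue = (2.0 / 3.0) * t                 # 0 = red (low) ... 2/3 = blue (high)
    return colorsys.hls_to_rgb(hue, 0.5, 1.0)

def rib_point(x, y, z, density, max_density):
    """Emit one colored point in RIB syntax (illustrative helper)."""
    r, g, b = density_to_rgb(density, max_density)
    return f'Color {r:.3f} {g:.3f} {b:.3f}\nPoints "P" [{x} {y} {z}]'
```

In practice you'd loop over every voxel above some small cutoff and write one such point per voxel into the RIB stream.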
This looked cool, but it was hard to see detail and depth in the image. The next thing I tried was rendering the points as solid spheres so the geometry would be visible. Rendering spheres took way too long, so I ended up rendering only about 20 layers in depth, which is roughly as much as my machine's memory allowed. Mapping the color to the z value, normalized over the 0 to 20 range, resulted in the following render.
Rotated to the side, it looked like the image below.
Now if we map the color based on density instead of z value, we get the following result.
This was interesting because it showed basically what the data would look like. The next thing I did was make a video with one frame per slice of the data, with density going from blue to white. This lets you see what happens to the data as you fly through it in the z direction.
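A sketch of how those slice frames can be produced, assuming the volume is a nested [z][y][x] list of densities and writing each slice as a binary PPM image (the file format here is my choice for the sketch, not what I actually used):

```python
def density_to_blue_white(density, max_density):
    """Linearly interpolate from blue (low density) to white (high)."""
    t = min(max(density / max_density, 0.0), 1.0)
    return (t, t, 1.0)

def write_slice_ppm(volume, z, max_density, path):
    """Write one z slice of the volume as a binary PPM frame."""
    ny, nx = len(volume[z]), len(volume[z][0])
    with open(path, "wb") as f:
        f.write(f"P6 {nx} {ny} 255\n".encode())
        for row in volume[z]:
            for d in row:
                r, g, b = density_to_blue_white(d, max_density)
                f.write(bytes((int(r * 255), int(g * 255), int(b * 255))))
```

Looping z from 0 to 199 and writing one numbered frame per slice gives a sequence any video encoder can stitch together.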
The original renders in Java OpenGL had these weird circular artifacts. The professors in the astrophysics department blamed it on aliasing, but it turns out, once the data was visualized, that the artifacts were actually in the data itself and not a byproduct of resolution issues in the render. The data naturally forms these rings where the volume is cut off by the bounds. Imagine an onion with rainbow-colored layers growing outward: when it hits the boundary it gets cut off, and you end up with all these rings along the edge.
I realized that RenderMan is ridiculously slow if you give it one RIB file with more than a few million objects, because it reads everything into memory at once. Even though it uses buckets while rendering to divide geometry into micropolygons, it still reads in the whole file. This is fine if you have the memory, but if you don't, it starts swapping to the hard drive and CPU utilization drops from 100% to something ridiculously low, around 10%. That isn't desirable, because the computer spends most of its time writing memory out to disk and reading it back in. I didn't have the patience to see how long the render would take, since the swapping crippled everything else on my machine.
The solution I came across was to use RIB archive files and load them with the syntax Procedural "DelayedReadArchive" [ "ribarchive.rib" ] [ -10 10 -10 10 -10 10 ], where -10 10 -10 etc. is the bounding box. This is apparently how Maya and other software built on RenderMan do it. It's also convenient when a model is used over and over again: instead of blasting all the triangles into the main RIB file, you create one archive and just call Procedural "DelayedReadArchive" ["my_object.rib"] [bounds], and the geometry is only loaded when the renderer reaches its bounding box.
Since I had a 200x200x200 volume, I broke it up into 40x40x40 cubes, which resulted in 125 sub RIB files. The main RIB file for the first frame is called 0000000000.rib, where 0000000000 is the frame number, and the sub files are named 0000000000.000.rib, where the extra digits number the RIB archives for that frame.
So my rib file ended up looking like the code below.
Color .16 .85 .19
Procedural "DelayedReadArchive" ["0000000000.000.rib" ] [-100 -60 60 100 -100 -60]
Procedural "DelayedReadArchive" ["0000000000.001.rib" ] [-60 -20 60 100 -100 -60]
Procedural "DelayedReadArchive" ["0000000000.002.rib" ] [-20 20 60 100 -100 -60]
Procedural "DelayedReadArchive" ["0000000000.003.rib" ] [20 60 60 100 -100 -60]
Procedural "DelayedReadArchive" ["0000000000.004.rib" ] [60 100 60 100 -100 -60]
And each sub-rib file looks like the code below.
Polygon "P" [-63.24194876568626 91 -99 -63 91.15589980794408 -99 -63 91 -99.15650275881634]
Polygon "P" [-62.09496035785075 91 -99 -63 91 -99.15650275881634 -63 91.15589980794408 -99]
Polygon "P" [-63.47324235625897 90 -99 -63.24194876568626 91 -99 -63 91 -99.15650275881634]
Polygon "P" [-63 90 -99.18679860249857 -63.47324235625897 90 -99 -63 91 -99.15650275881634]
Polygon "P" [-62.09496035785075 91 -99 -62.72519461881413 90 -99 -63 90 -99.18679860249857]
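Generating the master file's archive calls is mechanical once you pick a chunk size. Here's a sketch; the loop order and the mapping from grid indices to world coordinates (index 0 at -100) are assumptions to make the example concrete:

```python
GRID = 200     # voxels per axis
CHUNK = 40     # voxels per chunk side -> (200/40)^3 = 125 archives
OFFSET = -100  # grid index 0 assumed to map to world coordinate -100

def archive_calls(frame):
    """Yield one DelayedReadArchive call per CHUNK^3 block of the volume,
    with bounds given as [xmin xmax ymin ymax zmin zmax]."""
    n = GRID // CHUNK
    idx = 0
    for zi in range(n):
        for yi in range(n):
            for xi in range(n):
                x0 = OFFSET + xi * CHUNK
                y0 = OFFSET + yi * CHUNK
                z0 = OFFSET + zi * CHUNK
                name = f"{frame:010d}.{idx:03d}.rib"
                bounds = (f"{x0} {x0 + CHUNK} {y0} {y0 + CHUNK} "
                          f"{z0} {z0 + CHUNK}")
                yield f'Procedural "DelayedReadArchive" ["{name}"] [{bounds}]'
                idx += 1
```

The corresponding marching cubes output for each chunk just gets written into its own numbered archive file.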
Going back to my situation, this brought the render down to less than a minute at 1000×1000 resolution and about four minutes at 4000×4000 with 4 samples per pixel and a shading rate of 0.25.
Having solved the memory problem, I implemented a marching cubes algorithm on the volume. My implementation looks at 8 voxels at once; these are the eight corners of the marching cube. I define a density threshold, let's say 0.01, and examine the density value at each corner. In the simplest case, if one corner is below 0.01 and the rest are above, we make a single triangle like in the image below. (Image borrowed from http://paulbourke.net/geometry/polygonise/)
In the image above, corner #3 is below our threshold. Using the value at corner #3 and the values at the three corners connected to it by edges, we linearly interpolate along each of those three edges to find where the density is exactly 0.01, our target value. That's where we put the vertices of the triangle above. With a threshold of 0 we get the image below, which is basically the surface rendered in the earlier images.
I also tried adding contour lines based on the u and v coordinates, but it wasn't so great: the lines tend to overlap and you end up having no idea what's going on. For those curious, here is what it looks like.
That was my experience with marching cubes and volume rendering. I'm not going to post the full code here because it's very application specific and the shaders can be found in any textbook. There are a few more things I want to try, like rendering half the volume so you can see the inside.