Friday, June 3, 2016

Mandelbox Fractals and Flights

Deus Ex Machina – Example Mandelbox fractal image I created.


I’ve previously posted some images of Mandelbox fractals, so this time I’ll write more about them and provide a video I made of various flights through the Mandelbox.

The Mandelbox is a folding fractal, generated by doing box folds and sphere folds. It was discovered by Tom Lowe (Tglad or T’glad on various forums). The folds are actually rather simple but, surprisingly, produce very interesting results. The basic iterative algorithm is:

if (point.x > fold_limit) point.x = fold_value - point.x
else if (point.x < -fold_limit) point.x = -fold_value - point.x

(repeat those two lines for the y and z components)

length = point.x*point.x + point.y*point.y + point.z*point.z

if (length < min_radius*min_radius) multiply point by fixed_radius*fixed_radius / (min_radius*min_radius)
else if (length < fixed_radius*fixed_radius) multiply point by fixed_radius*fixed_radius / length

multiply point by mandelbox_scale and add position (or constant) to get the new value of point

Typically, fold_limit is 1, fold_value is 2, min_radius is 0.5, and fixed_radius is 1; mandelbox_scale can be thought of as selecting the type of Mandelbox desired. A nice value is -1.5 (but it can be positive as well).

There’s a little more to it than that, but just as with Mandelbrot sets and Julia sets, the Mandelbox starts with a very simple iterative function. For those who are curious, the fold_limit parts are the box fold, and the radius parts are the sphere fold.
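
To make that concrete, here is a minimal C++ sketch of one iteration using the typical parameter values above (a paraphrase of the algorithm, not the actual code from my raytracer):

const float fold_limit = 1.0f, fold_value = 2.0f;
const float min_radius = 0.5f, fixed_radius = 1.0f;
const float mandelbox_scale = -1.5f;

// Box fold: applied to the x, y, and z components in turn
static void boxFold(float &v)
{
    if (v > fold_limit) v = fold_value - v;
    else if (v < -fold_limit) v = -fold_value - v;
}

// One Mandelbox iteration: p is updated in place, c is the starting position
void mandelboxIterate(float p[3], const float c[3])
{
    boxFold(p[0]); boxFold(p[1]); boxFold(p[2]);
    float len = p[0]*p[0] + p[1]*p[1] + p[2]*p[2];    // squared length
    float m = 1.0f;                                   // sphere-fold multiplier
    if (len < min_radius*min_radius)
        m = (fixed_radius*fixed_radius) / (min_radius*min_radius);
    else if (len < fixed_radius*fixed_radius)
        m = (fixed_radius*fixed_radius) / len;
    for (int i = 0; i < 3; i++)
        p[i] = p[i]*m*mandelbox_scale + c[i];         // scale and add position
}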

One of the remaining pieces deals with what’s called ray marching. Since these types of fractals don’t have a simple parametric equation that can be solved without iterating, one must progress along the ray and ask “are we there yet?”. To help speed up this progression, an estimate of a safe distance to jump is calculated (using a distance estimator). Once the jump is made, the “are we there yet?” question is asked again. This goes on until either we get close enough or it’s clear we will never get there. The “close enough” part involves deciding ahead of time how precise we want the image to be. Since fractals have infinite detail (ignoring the limitations of the computer, of course), there’s no choice but to say “we’re close enough” at some point. This basically means we’re rendering an isosurface of the fractal. To see what I mean, compare my “Deus Ex Machina” image with my “Kludge Mechanism” image; Kludge Mechanism uses a less precise setting and therefore has fewer features.
Kludge Mechanism – Example Mandelbox fractal image I created
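
To sketch the “are we there yet?” loop in code, here is a bare-bones C++ version together with the commonly used Mandelbox distance estimator (which tracks a running derivative of all the scaling). The iteration counts, thresholds, and constants are illustrative values, not the exact settings used for these images:

#include <cmath>

const int   MAX_FRACTAL_ITER = 15;
const float FOLD_LIMIT = 1.0f, FOLD_VALUE = 2.0f;
const float MIN_R2 = 0.25f, FIXED_R2 = 1.0f;   // min_radius^2, fixed_radius^2
const float SCALE = -1.5f;

// Distance estimator: a safe underestimate of the distance to the fractal
float mandelboxDE(const float pos[3])
{
    float p[3] = { pos[0], pos[1], pos[2] };
    float dr = 1.0f;                           // running derivative
    for (int i = 0; i < MAX_FRACTAL_ITER; i++) {
        for (int j = 0; j < 3; j++) {          // box fold
            if (p[j] > FOLD_LIMIT) p[j] = FOLD_VALUE - p[j];
            else if (p[j] < -FOLD_LIMIT) p[j] = -FOLD_VALUE - p[j];
        }
        float len = p[0]*p[0] + p[1]*p[1] + p[2]*p[2];
        float m = 1.0f;                        // sphere fold
        if (len < MIN_R2) m = FIXED_R2 / MIN_R2;
        else if (len < FIXED_R2) m = FIXED_R2 / len;
        for (int j = 0; j < 3; j++)
            p[j] = p[j]*m*SCALE + pos[j];
        dr = dr*fabsf(m*SCALE) + 1.0f;
    }
    return sqrtf(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]) / fabsf(dr);
}

// Ray marching: advance t along origin + t*dir until "we're close enough"
float rayMarch(const float origin[3], const float dir[3])  // dir normalized
{
    float t = 0.0f;
    for (int step = 0; step < 300; step++) {
        float pos[3] = { origin[0] + t*dir[0],
                         origin[1] + t*dir[1],
                         origin[2] + t*dir[2] };
        float d = mandelboxDE(pos);            // safe distance to jump
        if (d < 0.0005f) return t;             // close enough: isosurface hit
        t += d;
        if (t > 100.0f) break;                 // clearly never getting there
    }
    return -1.0f;                              // miss
}

Lowering the 0.0005 threshold reveals finer detail at the cost of render time; that is exactly the precision difference between “Deus Ex Machina” and “Kludge Mechanism”.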

The ray marching technique (and distance estimator method) can be used to render a Mandelbulb, 4D Julia set, Menger Sponge, Kaleidoscopic IFS (KIFS), etc., as well as non-fractal objects like ordinary boxes, spheres, cones, etc. But many of the non-fractal objects are better and faster to compute with parametric equations.
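
For instance, a sphere has both a one-line distance estimator and an exact parametric ray intersection; the latter needs no marching at all, which is why it wins for simple shapes. A quick sketch of both routes:

#include <cmath>

// Distance-estimator route: exact distance from point p to the sphere surface
float sphereDE(const float p[3], const float c[3], float r)
{
    float dx = p[0]-c[0], dy = p[1]-c[1], dz = p[2]-c[2];
    return sqrtf(dx*dx + dy*dy + dz*dz) - r;
}

// Parametric route: solve |o + t*dir - c|^2 = r^2 for t directly (dir normalized)
float sphereIntersect(const float o[3], const float dir[3],
                      const float c[3], float r)
{
    float lx = o[0]-c[0], ly = o[1]-c[1], lz = o[2]-c[2];
    float b = lx*dir[0] + ly*dir[1] + lz*dir[2];
    float disc = b*b - (lx*lx + ly*ly + lz*lz - r*r);
    if (disc < 0.0f) return -1.0f;             // ray misses the sphere
    return -b - sqrtf(disc);                   // nearest hit distance
}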

Now for the fun part (hopefully). Here’s a video I made using my raytracer. It shows various Mandelbox flights and even a section varying the mandelbox_scale from -4 to -1.5.

One of the flights points out an interesting by-product of an attempt to speed up the ray marching. One can specify a bounding radius; anything outside that radius doesn’t need to be run through the ray-marching process. In my “Futuristic City Under Construction” flight, I accidentally set the bounding radius too small, which cut out some of the Mandelbox. But in this case it was, in my opinion, an improvement, because it removed some clutter.
Futuristic City Under Construction (Small Bounding Radius)

Futuristic City Under Construction (Correct Bounding Radius)


I’ve also created another video showing just Deus Ex Machina but with higher resolution, more detail, and more frames. Even though it’s 1080p, I recommend using the 720p setting.


And another video showing just Futuristic City Under Construction but with much better camera movements, a longer flight, more detail, more frames, and a 16:9 aspect ratio.

To better view these videos on my YouTube channel (with full control) go to: http://www.youtube.com/MrMcSoftware/videos


A lot more can be said on the subject of Mandelbox fractals, and there are a lot more images I’ve created, but this will do for now.

Thursday, June 2, 2016

My Interpretation of Pink Floyd’s Song, “Set the Controls for the Heart of the Sun”

Pink Floyd’s Ummagumma album.
Photo by Ian Burt / CC BY 2.0
 
I used to take the meaning of the title to be quite literal, as if you were literally setting your spacecraft’s controls for the heart of the sun. Then I read that the phrase “Set the Controls for the Heart of the Sun” came from Hunter S. Thompson. Not knowing really anything about Hunter except “Gonzo”, I didn’t really know what to do with that knowledge. But now I can’t find any reference to that connection to Hunter, so I’m not sure of its validity. Looking further, it was claimed it came from William S. Burroughs. But the truth appears to be that it came from Michael Moorcock. Hey, knock it off  :-)

Then, having not known the exact lyrics (some lines are hard to hear), I again took it somewhat literally as the progression of the sun from morning to evening.

Looking again, I found out that a few of the lines are borrowed from Chinese poetry. It’s not unheard of to borrow from literature. Led Zeppelin borrowed from J.R.R. Tolkien (“The Lord of the Rings” and “The Hobbit”) for “The Battle of Evermore”, “Ramble On”, and others. I also found the exact lyrics (with some disputed words). I’ve come to believe the song is really about a relationship from start to end.

So here goes (lyrics by Roger Waters, Chinese poetry quoted from Bathrobe (http://cjvlang.com), but originally from “Poems of the Late T’ang” (translated by A. C. Graham)):

“Little by little the night turns around”
Chinese poetry version: “watch little by little the night turn around”

Night would be a period of not being in a relationship – no love. An initial interest in someone gives a hint of something good to come. The drums in the beginning of the song could be thought of as one’s heartbeat, skipping a beat when seeing the person of interest.

“Counting the leaves which tremble at dawn”
Chinese poetry version: “countless the twigs which tremble in the dawn”

Of course there is uncertainty. Will they like me? Will they reject me? Will it be a mistake? Will it be great? Will they be the one? Will I like them? Can I do this? At some point, the uncertainty must be overcome by desire.

“Lotuses lean on each other in yearning” (some say “union” not “yearning”)
Chinese poetry version: “So many green lotus-stalks lean on each other yearning!”

The desire takes over. Each yearns for the other.

“Under the eaves the swallow is resting”
Chinese poetry version: “two swallows in the rafters hear the long sigh”

You got me :-) Maybe it means that for now, your concerns are gone. The relationship is going well. You feel comfort in the protection of the relationship.

“Set the controls for the heart of the sun”

Since this is about the path of a relationship from start to end, dawn to dusk as it were, it’s a journey on the path of the sun (obviously, the apparent path of the sun). So, in a sense, you are setting the controls for the heart of the sun. Inevitably, one can’t ignore the fact that a literal trip to the sun would be a suicide mission. Perhaps Waters is making a statement about what kind of mission the pursuit of love is.

“Over the mountain, watching the watcher”

A relationship naturally has to overcome things. It might have to overcome what seems like a mountain. Each one is focusing on the other (each is both watching and being watched by the other). Could even be thought of as looking into each other’s eyes.

“Breaking the darkness, waking the grapevine”

Each is awakening from a period of loneliness and no love. And feelings of love are waking up.

“One inch of love is one inch of shadow”
Chinese poetry version: “one inch of love is an inch of ashes”

This one is slightly borrowed from someone on the internet. Love is thought to be a selfless act, but it really isn’t. You love because it makes you feel good. You wouldn’t do it if it didn’t. So, inherently, it is selfish as well. For some reason, I’m reminded of one of the seven deadly sins – pride. Pride is the worst of all because no selfless act can overcome this sin – it would feed the sin.

Another interpretation is that love brings sorrow. Every bit of love brings an equal bit of sorrow. To love someone is to inflict pain on them and yourself. [ my interpretation of Bathrobe, http://cjvlang.com ] I can’t deny that ashes does seem to be more negative than selfishness. Perhaps Waters chose shadow to foreshadow the pain of loss at the end of the relationship. All good things must come to an end.

Addendum: Some sources indicate this line is:
“Knowledge of love is knowledge of shadow”

“Love is the shadow that ripens the wine”

With the selfishness of love combined with the selflessness of love, once in equal amounts, the relationship is made right. Ideally, each one loves the other equally.
Or, using the alternative interpretation of the previous line, this would now mean that love ripens the wine of sorrow.

Now the song goes into a long instrumental part, with highs and lows, just like a relationship. Chaotic passion. Losing oneself in the uncharted territory. But also the calm of enjoying the ride. The beautiful rhythm of love. Again, if using the alternative interpretation, the ride would be neither calm nor beautiful.

“Witness the man who raves at the wall”
Chinese poetry version: “Witness the man who raved at the wall”

Oh no, the relationship is just about over. Either raving at a literal wall or a figurative wall of something that can’t be overcome – something stopping the relationship. The music tends to indicate a numbness. An “Uncomfortably” numb feeling (reference to another Pink Floyd song which would be written later). Or a sadness.

“Making the shape of his question to heaven”
Chinese poetry version: “as he wrote his questions to Heaven”

Inevitably, questions come. Don’t they love me anymore? What went wrong? Will they come back to me? Can I get over what they did? Can they get over what I did? Why, God, did you do this to me? Why did I fall in love when pain and sorrow are all that come of it?

“Whether the sun will fall in the evening” (some say “Knowing” not “Whether”)

Any chance the relationship can be saved? Or at least a friendship? Or knowing it will end?

“Will he remember the lesson of giving”

Now that the pain of the ended relationship has set in, will they try to find love again, or will they not want to go through the pain again? Will they remember the joys of their love? Will they give love again in order to get love? Alternatively, it could be a caveat – will they remember not to love again, for love brings sorrow? Of course, just as the sun must rise again, so must the love/pain cycle continue. So, there’s no escaping it.

I’ve read other interpretations on the internet that are interesting and could be right. Of course, only the members of Pink Floyd (and specifically, Roger Waters) truly know what the song means. At any rate, it’s still a nice and unusual song any way you look at it.

Wednesday, June 1, 2016

The Electronics Behind My Raytraced NAND Gate IC Mask Layout Using A Gate Array

A raytraced colored glass integrated circuit mask layout of a NAND logic circuit made from a gate array.

Since my raytraced NAND circuit image (Integrated Circuit Mask Layout, etc.) is the most popular (in terms of views and downloads) image in my deviantArt gallery, I decided to write a blog post explaining the electronics behind this circuit.  Admittedly, chances are anyone interested in this image already knows the electronics part, but in case they don’t…  Now for the disclaimer:  I modeled (for my raytracer) the IC mask layout about 11 years ago, and I had a class in this subject about 26 years ago, so it’s not exactly fresh in my mind.



My Original Raytraced NAND IC Mask Layout (Modeled 11 Years Ago)

Annotated NAND Schematic (Encased In Glass)

Annotated NAND IC Mask Layout


The two images with annotations show how the schematic matches the IC mask layout.  For this discussion, GND is ground, + is Vdd, F is the NAND gate output, A and B are the NAND gate inputs, and the colored rectangles surround the individual transistors (which are color coded to match).

I’ll start by describing the schematic of the CMOS NAND logic circuit.  The two top transistors (side by side) are PMOS (p-channel metal-oxide-semiconductor), which are normally on (meaning when the voltage on the transistor input (gate) is zero, current flows between the drain and the source).  So, as long as either A or B or both are off (0), the output F receives Vdd (presumably, 5 volts).  The two bottom transistors are NMOS (n-channel metal-oxide-semiconductor), which are normally off (meaning when the voltage on the transistor input is zero, current doesn’t flow between the drain and the source).  In this configuration, both A and B have to be on (5 volts) in order for F to be tied to ground (thus a 0), and in this case, the top two transistors would be off (thus not providing Vdd).  Since both NMOS and PMOS transistors are used, this circuit is considered CMOS (complementary metal-oxide-semiconductor).  By the way, the direction of the arrows (of the transistors) in the schematic shows which type of transistor it is, and the legs with the arrows are the sources (as opposed to the drains).
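
If it helps, the pull-up/pull-down behavior just described can be checked with a toy C++ model (a logic-level sanity check, not a circuit simulator). Note that the two networks are always in opposite states, which is what makes the circuit complementary:

#include <cstdio>

int main()
{
    printf(" A B | pull-up (PMOS) | pull-down (NMOS) | F\n");
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            int pullup   = !a || !b;  // PMOS in parallel: either input low ties F to Vdd
            int pulldown = a && b;    // NMOS in series: both inputs high tie F to GND
            int f = pullup;           // exactly one network conducts at a time
            printf(" %d %d |       %d        |        %d         | %d\n",
                   a, b, pullup, pulldown, f);
        }
    }
    return 0;
}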

Now, I’ll describe the Integrated Circuit (IC) mask layout.  Integrated circuits are created using a sequence of masks. The basic mask sequence is transistor area mask, polysilicon gate definition mask, contact area mask, and then metal definition mask. Thus a mask layout represents the masks that would be used (the actual masks would have to be checked and modified to adhere to various spacing requirements). The blue areas are metal.  The gray areas are cuts which create possible contacts (for the metal areas).  The red areas are polysilicon (gates).  The green areas are heavily doped n+ or p+ regions (diffusion) (sources and drains).  Technically, the NMOS transistors (the green area on the left), have an “understood” or “assumed” p-type well diffusion.  I didn’t model that (p-type well diffusion) distinction.  Anywhere the red overlaps the green (polysilicon overlaps diffusion), a transistor is formed.  The red lines conduct electricity from end to end.  The left block creates the two bottom transistors (in the schematic), and the right block creates the two top transistors.  Therefore, the left metal strip is ground, and the right metal strip is Vdd (+5v).  The two red lines with metal attached are the NAND gate inputs (A and B).  The metal on the far right (attached to the green area through a cut) is the output of the NAND gate.  The squiggly metal in the middle connects the two sets of transistors together and to the output of the NAND gate.  With this information, you should be able to see how the mask layout matches the schematic.

Blank gate arrays are formed on the silicon wafer (with the polysilicon, diffusion, cuts, and basic metal strips (not contacting anything) all formed) in the first stages of the process.  Then all a manufacturer has to do is make the metal connections needed to form the desired circuit.  Kind of like writing to a blank CD-R (not as easy, though).  Each chip (gate array) may have several thousand transistors (many more blocks than what I modeled). The modern incarnation of the gate array is the FPGA (Field-Programmable Gate Array). These are fully manufactured blank, so to speak, and are configured (or programmed) by the customer – no manufacturing required. They can even be re-programmed. These really are like writing to a blank CD-RW, and just as easy (as long as you know the coding language).
My Raytraced NMOS Inverter IC Mask Layout

My Raytraced NMOS Inverter IC Mask Layout (Modified)


By the way, I modeled another IC mask layout – an NMOS inverter.  I won’t bother describing that one, except to say the yellow area is a depletion implant, and the four metal lines at the bottom are inverter input, ground, inverter output, and Vdd, respectively.  Also, in case it wasn’t obvious, in the NAND scene, the white back wall has many colored shadows/projections of the glass NAND mask layout because there’s more than one light – the lights are in different locations.  The effect is easier to see in my “Project GlassWorks…” video on my YouTube channel.
It’s too bad (for me) that my university built an integrated circuit fabrication lab after I graduated (B.S. in Computer Engineering) – maybe I could have made some of these circuits.  Also too bad they created a supercomputer after I graduated. But the professor responsible for creating it was on my thesis committee when I pursued my Master’s (in Computer Science), so I did get to see it during its construction.

On another subject, I did design a simple 16 instruction microprocessor for a homework assignment.  Maybe it could be made using a gate array.  Of course, each one of those boxes contains many logic gates, so it’s really more than it appears.
16 Instruction Microprocessor I Designed

I created a YouTube video showing all of this and some more stuff:

Also, I created a video showing the CMOS circuits for NAND, AND, NOR, and OR as well as the SPICE analysis of these circuits using various SPICE tools for Windows/Mac and UNIX/Linux.  This video shows you how to use these tools to simulate the circuits.
And, I created a video showing the testing, simulation, and improvement of my above-mentioned microprocessor. Logisim is used to do this. This video also shows various digital logic basics, such as multiplexers, decoders, flip-flops, etc.


Tuesday, May 31, 2016

HDR Photography and Raytracing (aka What is HDR?)

What Is HDR?

That is the question I asked myself while reading various raytracing blogs and forums a while ago (they love using acronyms).  I can tell you it’s not a long-forgotten brother of a former president, and not a fancy cable for your hi-def TV.  HDR stands for High Dynamic Range.  In my previous blog post, I combined two things one wouldn’t think would go together – substitute teaching and raytracing.  This time I’m combining HDR photography and raytracing.

HDR Versus LDR / SDR

Most photos or raytracing output would be considered Low Dynamic Range (LDR) (also known as Standard Dynamic Range (SDR)).  This is because most image formats are based on 24-bit (3-byte) RGB color triples.  Each color component (red, green, and blue) is 1 byte, which means it can store an integer value between 0 and 255 (0, 1, 2, … 255).  Some image formats use 32 bits (4 bytes) – the fourth byte being either unused padding or an alpha channel used for transparency; either way, it doesn’t affect the stored color.  HDR image formats typically use floating point RGB color triples.  These values can range from smaller than you can imagine to larger than you can imagine (I won’t mention specific ranges because that would depend on the format used).  Overly dark images in an HDR format would contain many pixels with a small RGB triple, for example (0.000561, 0.000106, 0.0002).  This value would be (0, 0, 0) in an integer RGB format, which would be black with information lost.  In the case of overly bright images in an HDR format, there might be a lot of large RGB triples, for example (26010.0125, 257.1, 280.6).  This value would be (255, 255, 255) in an integer RGB format (since without any special processing, values are “clamped” to the range 0 to 255), which would be white with a definite information loss – the HDR version is red.  You might ask “Why not just scale the values to fit between 0 and 255?”.  In some cases that would work (but information / precision would be lost).  However, what if the image contains both super light and super dark areas?  HDR images can store “a greater dynamic range between the lightest and darkest areas” [Wikipedia].
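
Here is a small C++ sketch of that clamping, along with the “just scale it” alternative, applied to the bright red triple from the example above:

#include <algorithm>
#include <cstdio>

// Quantize a float component to a byte, clamping to the 0..255 range
unsigned char clamp255(float v)
{
    return (unsigned char)std::min(255.0f, std::max(0.0f, v * 255.0f + 0.5f));
}

int main()
{
    float hdr[3] = { 26010.0125f, 257.1f, 280.6f };  // the HDR example above
    float maxc = std::max(hdr[0], std::max(hdr[1], hdr[2]));
    printf("clamped: (%d, %d, %d)\n",        // -> (255, 255, 255), pure white
           clamp255(hdr[0]), clamp255(hdr[1]), clamp255(hdr[2]));
    printf("scaled:  (%d, %d, %d)\n",        // -> roughly (255, 3, 3), red again
           clamp255(hdr[0]/maxc), clamp255(hdr[1]/maxc), clamp255(hdr[2]/maxc));
    return 0;
}

In a real pipeline you would use a proper tone-mapping operator rather than a straight scale, but the information-loss point is the same.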

HDR Used in Raytracing

Saving raytracer output in an HDR format is a no-brainer so to speak, since internally the RGB triples are already floating point, with, for all intents and purposes, no limitations of range.  The only thing that changes is the actual writing of the file.  I chose to support both Radiance’s “HDR” format and the “PFM” format (Portable Float Map) in my raytracer (in addition to various LDR formats).  Examples of HDR versus LDR appear below.  The two images show a scene that purposely has a super bright light.  The LDR version is quite useless, but the processed HDR version looks fine.
 
Raytraced scene with super bright lights saved with normal LDR format.


Raytraced scene with super bright lights tone mapped from an HDR image.
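
Writing PFM, in particular, is nearly trivial: a three-line text header followed by raw floats, with scanlines stored bottom-to-top and a negative scale value declaring little-endian data. A minimal C++ writer might look like this (error handling omitted, and a little-endian machine is assumed):

#include <cstdio>
#include <cstddef>

// Write a width x height image of float RGB triples (row 0 = top) as PFM.
bool writePFM(const char *name, const float *rgb, int width, int height)
{
    FILE *f = fopen(name, "wb");
    if (!f) return false;
    fprintf(f, "PF\n%d %d\n-1.0\n", width, height);  // "PF" = color, -1 = little-endian
    for (int y = height - 1; y >= 0; y--)            // scanlines bottom-to-top
        fwrite(rgb + (size_t)y * width * 3, sizeof(float), (size_t)width * 3, f);
    fclose(f);
    return true;
}
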
Loading HDR images in a raytracer is not necessarily a no-brainer.  Loading would mostly be used for what’s called “texture mapping” – a method of applying a texture to an object.  The texture routines could also be used for what’s called “environment mapping” – a method of easily applying an environment (that an object is in) without having to actually create the environment.  I chose to support the “PFM” format for loading.  Examples of environment mapping appear below.  In one of the pictures there are three spheres – two glass (showing refraction) and one silver (showing reflection).  The other picture shows two colored reflective spheres.  Of course, it doesn’t have to be spheres; it can be any object.  I used spheres here to show that the environment really does completely surround the objects.  The HDR environment maps I used are freely available and can be downloaded at http://www.hdrlabs.com/sibl/archive.html
A raytraced environment mapping example with both glass and silver balls.

A raytraced environment mapping example with colored reflective balls.
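
For reference, the environment maps in that archive are lat-long (equirectangular) panoramas, so an environment lookup is just a direction-to-texture-coordinate conversion. Here is a nearest-neighbor C++ sketch; axis conventions differ between tools, so treat the atan2 arguments as one choice among several:

#include <cmath>
#include <cstddef>

// Sample a width x height lat-long float RGB image in direction (dx, dy, dz);
// the direction must be normalized and y is treated as "up".
void sampleEnvMap(const float *rgb, int width, int height,
                  float dx, float dy, float dz, float out[3])
{
    const float PI = 3.14159265f;
    float u = 0.5f + atan2f(dx, -dz) / (2.0f * PI);  // longitude -> [0,1]
    if (dy > 1.0f) dy = 1.0f;                        // guard acos against
    if (dy < -1.0f) dy = -1.0f;                      // float round-off
    float v = acosf(dy) / PI;                        // latitude -> [0,1]
    int x = ((int)(u * width)) % width;              // wrap around horizontally
    int y = (int)(v * (height - 1));
    const float *p = rgb + 3 * ((size_t)y * width + x);
    out[0] = p[0]; out[1] = p[1]; out[2] = p[2];
}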

HDR Used in Photography

This application of HDR will probably be more interesting to most people, but also more controversial.  Mostly, people either love or hate HDR photography.  Some people love it, then hate it.  It tends to depend on whether you’re into realism or artistic manipulation.  I guess my preference would be to have both versions available.

Some digital cameras can use an HDR format; most can’t.  Also, some digital cameras have features like “auto exposure bracketing (AEB)”.  Neither of these is necessary to do HDR photography (but they are helpful).  Exposure bracketing is a technique of taking many photos of the same scene with different exposures.  The resulting images can be combined to form an HDR image.  Usually, one photo is taken with the desired (or close to the desired) exposure, at least one photo is taken darker (under-exposed), and at least one photo is taken lighter (over-exposed).  Care should be taken to ensure the scene doesn’t change while the set of photos is taken, since the photos need to be combined.  You can change the exposure by changing the shutter speed, ISO speed, or aperture, depending on what controls your camera has. The easiest way is to use EV compensation if your camera has that setting. Typically, -2 EV, 0 EV, and +2 EV would be used, but -1, 0, and +1, and even -2, -1, 0, +1, +2 (5 photos instead of 3), are good values as well.  Cheaper cameras might only have two exposure settings.  With these cameras, you could either try to do it with only two images, or you could change the light level of the scene (different lights, etc.).  Auto exposure bracketing will automatically do all the exposure bracketing for you.
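
The arithmetic behind those EV values is simple: each EV step doubles or halves the light reaching the sensor, so, holding aperture and ISO fixed, the bracketed shutter time is the metered time multiplied by 2^EV. A quick C++ illustration, assuming a hypothetical 1/60 s metered exposure:

#include <cmath>
#include <cstdio>

int main()
{
    float base = 1.0f / 60.0f;           // metered ("correct") shutter time
    int evs[] = { -2, 0, 2 };            // a typical 3-shot bracket
    for (int ev : evs) {
        float t = base * powf(2.0f, (float)ev);
        printf("%+d EV -> %.5f s (about 1/%.0f s)\n", ev, t, 1.0f / t);
    }
    return 0;                            // prints 1/240, 1/60, and 1/15 s
}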

Once you have the bracketed images, you need to combine them using software.  One program which will do this nicely is HDRsoft’s Photomatix Pro.  A free trial version is available.  Luminance HDR and Picturenaut are free programs which can also be used (with less satisfactory results, in my opinion).  Once the images have been combined into an HDR image, tone mapping should generally be applied, especially if an LDR image is desired in the end.  The myriad tone-mapping operations are too vast to cover in this blog post, so my advice is to just experiment with different settings.  There usually is an “Undo” function, but if not, you could always start over.
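
Under the hood, the merging step can be sketched fairly simply if you assume a linear camera response (real tools also recover the camera’s response curve, in the style of Debevec and Malik, which I’m skipping here). Each pixel’s radiance is estimated as its value divided by exposure time, averaged with a “hat” weight that distrusts clipped shadows and highlights. A single-channel C++ sketch:

#include <cstddef>

// Merge n bracketed 8-bit exposures into one HDR value per pixel.
// images[j] is the j-th exposure, times[j] its shutter time in seconds.
void mergeHDR(const unsigned char *const *images, const float *times,
              int n, size_t pixels, float *hdr)
{
    for (size_t i = 0; i < pixels; i++) {
        float sum = 0.0f, wsum = 0.0f;
        for (int j = 0; j < n; j++) {
            float z = (float)images[j][i];
            // hat weight: trust mid-tones, distrust values near 0 or 255
            float w = (z <= 127.0f) ? (z + 1.0f) : (256.0f - z);
            sum  += w * (z / 255.0f) / times[j];   // radiance estimate
            wsum += w;
        }
        hdr[i] = (wsum > 0.0f) ? (sum / wsum) : 0.0f;
    }
}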

To illustrate what can be done with HDR photography, I’ve done some test images using photos of Maryland’s Cove Point Lighthouse (which are copyrighted by Ferrell McCollough and provided by HDRSoft (permission was granted to use these photos)).  The three photos from their set are of normal exposure, over-exposed, and under-exposed.  The results tend to be somewhat surreal.  Furthermore, to really test what can be done, I also tried using just two source images: under-exposed and over-exposed.  Everyone can agree that neither of the two source images is very desirable as is (due to the exposure settings), but when combined, the result is much better.  Even HDR photography haters would have to agree.

To see some remarkable examples of HDR photography, do an internet search for HDR photography.  There are quite a few pages with titles like “50 best HDR photos” or “50 Incredible Examples of HDR Photography”.

Lighthouse underexposed source photo (copyright Ferrell McCollough and provided by HDRSoft)

Lighthouse normal source photo (copyright Ferrell McCollough and provided by HDRSoft)

Lighthouse overexposed source photo (copyright Ferrell McCollough and provided by HDRSoft)

Lighthouse processed (fused) using Ferrell McCollough’s normal, over, and under photos.

Lighthouse processed (tonemapped) using Ferrell McCollough’s normal, over, and under photos.

Lighthouse processed (tonemapped/greyscale) using Ferrell McCollough’s normal, over, and under photos.

Lighthouse processed (fused) using Ferrell McCollough’s over and under photos.

Final Thoughts

One interesting application of HDR images is web-based viewers which allow you to interactively change the exposure and apply tone mapping.  One such webpage is at:
http://hdrlabs.com/gallery/realhdr/
Two more webpages are at:
http://pages.bangor.ac.uk/~eesa0c/local_area/local_area.html and http://www.panomagic.eu/hdrtest/
Using a program called pfsouthdrhtml (part of the pfstools package), you can create webpages like these (but without the tone mapping selections of the first webpage).  Picturenaut can also be used.  Also, a nice tutorial which goes into more detail on creating an HDR photo (warning, though: his version of Photomatix is different from mine and perhaps yours, so some interpretation is necessary) is at: http://marcmantha.com/HDR/Home_Of_Worldwide_HDR.html

Well, happy experimenting!

Addendum:  The “bangor” link and the “marcmantha” link seem to be dead links.  However, the new location of the bangor site is: http://www.cl.cam.ac.uk/~rkm38/local_area/local_area.html


I created a video showing all of the HDR images in HDRLabs’ sIBL archive mentioned earlier (in the raytracing/environment mapping section). It’s a 360-degree VR video, so your browser and/or hardware will need to be capable of viewing it correctly.