Shader Muse explores the fun of Mixing, Chaining, and Digitally Compositing Photo Images, Graphics Shaders (Apple Metal Shaders), and Image Filters (Core Image CIFilter). Beyond the fun, we explore the advantages of surfacing the properties of shaders and filters through a Digital Compositing Pipeline, enabling an end user such as a creative graphics designer to make changes to the node graph without programming.
For example, we explore the fun of mixing this Image
with this Shader (Fractal Flow Noise)
to get this Photo Fire Effect video (mp4).
Shader Gallery (coming soon)
Jump straight into compiling the project - with Xcode, Swift, Metal, and iOS.
To create the Fire Photo Effect, we first develop a "fire-like" Shader and use a mask to create the effect of edges of the image catching fire. A Difference of Gaussians on the grayscale of the original image is first computed and then used to produce a sketch-effect mask.
We need to chain several filters to create the mask, plus a blend-with-mask filter to put everything together. Our project uses Swift, Core Image CIFilter, and an Apple Metal Shader to achieve all this.
Image Filters are common and easily understood: many modern software applications and apps provide image-filtering capabilities. But then, what is a Shader? From Wikipedia, we know that "a shader is a computer program that calculates the appropriate levels of light, darkness, and color during the rendering of a ... process known as shading."
In other words, a Shader is a set of computer instructions (a computer program) that affects the light, darkness, color, etc. of an image, and it runs fast. Beyond that, it can also perform tasks such as animation or procedural texture generation.
- FractalFlowNoiseFilter.swift
- fractalFlowNoise.metal
The Metal Shader we have developed for use in the Fire Effect is a Fractal Flow Noise Shader. This Shader is a fractal sum of a given base image. What this means is that it takes a basic image file, makes many copies of it, some zoomed, some rotated, and blends them together. The blending is not uniform, however. The images that are zoomed smaller (having a higher frequency) will be blended with a lighter weight.
We also animate each copy of the basic image so that they move independently of each other. Furthermore, the animation of the pixels in each copy depends on the location of the pixel: some pixels will move, rotate, or scale faster than others. This complex combination gives rise to animation effects that can simulate natural phenomena such as flame and gas.
The main work is performed in the 'turbulance_p' function. If you look at it closely, you will notice a loop that blends 23 zoomed copies of the image (i = 1 through 23):
```metal
for (float i = 1.0; i < 24.0; i++)
```
For each iteration, the image is zoomed out by a factor of 1.4, while the weight is reduced by a factor of 1.5. The zooming is achieved using the pixel position, represented by the variable p described in the previous section:
```metal
amplitude *= 1.5;  // weight
pos *= 1.4;        // scale
```
So, the final sum is something like the series below:

```
val = amplitude * sample(pos)
    + amplitude * (1/1.5)   * sample(pos * 1.4)
    + amplitude * (1/1.5)^2 * sample(pos * 1.4^2)
    + ...
```
Note: val (a float3) is not a single value. It is a 3-component vector holding the sampled value of each of the Red, Green, and Blue components of the final color that is returned.
Note: To zoom out the image, we simply multiply the sampling coordinates by 1.4 on each iteration. So, if we originally sample at uv coordinates 0.0 to 1.0, the next iteration samples at 0.0 to 1.4.
To make the result even more interesting, we animate each copy of the basic image according to the time elapsed. The animation is a simple movement / shift of the image copy. The higher the frequency (the smaller the image is zoomed), the more it is shifted.
The amount of shift is tracked by the variable 'newpos'. For each iteration, the shift increases like a geometric series:
```metal
newpos += t * 0.23;
```
In series form, the new position takes the form:

```
newpos = pos + (t * 0.23) + (t * 0.23)^2 + (t * 0.23)^3 + ...
```
We do not sample the zoomed copy of the basic image using just the 'newpos' variable. Here, we introduce an additional control parameter known as 'strength'. This variable is simply a weight that controls the amount of shift / movement. In the current implementation, this weight is a constant factor of 0.5, but there is no reason it could not vary with each iteration.
```metal
adv = (newpos - pos) * strength;  // with strength = 0.5, this takes the average between
pos = pos + adv;                  // the desired new position and the current position;
                                  // the resulting pos is used to sample the image
```
Because 'newpos' is computed along a linear direction, the animation generally follows one main direction (with varying speeds for each copy). To make different parts of the image move differently, we create distribution functions that vary according to the position 'p' of the current pixel. These distribution functions (named 'pdf1' and 'pdf2', for the x and y axes) form a layer that allows each pixel of each image copy to move slightly differently from its neighbors.
Example: the pdf1 and pdf2 functions:

```metal
float val = abs(atan(p.y * 0.9543)) * (cos(p.x * 0.9532) + sinh(p.y * 0.9816));  // pdf1
float val = length(p) * (cos(p.y / p.x * 0.9532) + sinh(p.y * 0.9816));          // pdf2
```
The original design of these probability distribution functions is based loosely on polar coordinates, using the length and arctangent functions; the actual implementation then modifies them largely by trial and error. The main intent is for these functions to be periodic, hence the presence of sinusoidal and hyperbolic functions. To generate an interesting animation, these functions and their parameters were selected somewhat arbitrarily, and the reader may want to experiment by plugging in different functions and parameter values to 'engineer' other variations.
With the "fire" (Fractal Flow Noise Shader), we now need to make it occur naturally in the original image. In our implementation, we use a Difference of Gaussians for creating the mask. We compute a sketch of the original image and use it as a mask to blend it with the "fire" Shader to make the edges of some parts of the image look like it is catching fire. We can use a node graph of CIFilter to illustrate how we get the mask.
```
        Original Image
              |
              V
       CIPhotoEffectNoir
              |
              V
        CIColorInvert
           /      \
          V        V
 CIGaussianBlur  CIGaussianBlur
           \      /
            V    V
       CIColorDodgeBlend
```
This gives us a mask for our Fire Effect as shown below:
We use different Radius values in the two Gaussian Blur filters to vary the output of our Color Dodge Blend. Below are some sample values:
- Gaussian Blur 1 Radius - 14.88 (or 5.18)
- Gaussian Blur 2 Radius - 4.63
```
Original Image
      |
      |    CIFractalFlowNoise    CIColorDodgeBlend
      |            |                     |
      V            V                     V
             CIBlendWithMask
```
The CIBlendWithMask uses the Original Image as the input image, CIFractalFlowNoise as the background image, and the output of CIColorDodgeBlend (DoG sketch) as the mask.
```
Original Image
   |      |
   |      V
   |  CIPhotoEffectNoir
   |      |
   |      V
   |  CIColorInvert
   |     /      \
   |    V        V
   |  CIGaussianBlur  CIGaussianBlur
   |     \      /
   |      V    V
   |  CIColorDodgeBlend   CIFractalFlowNoise
   |      |                      |
   V      V                      V
CIBlendWithMask (input - Original Image, background - CIFractalFlowNoise, mask - CIColorDodgeBlend)
```
It is possible to develop a single Metal Shader that generates the entire Fire Shader Photo Effect above. Instead, we have chosen to apply Digital Compositing concepts and created a node graph of CIFilters with a fairly simple Metal Shader to achieve the same thing.
The advantage of doing this is that many of the programming steps are now surfaced and controllable by the user. For example, a graphics designer using an app built around a Digital Compositing node graph can easily swap in a better "fire" or "mask", or add additional steps to control the exposure or color-correct their video.
Digital Compositing (node-based) is the process of combining multiple seemingly simple nodes to render a desired result. The paradigm of a node-based tool involves linking basic media objects onto a procedural map or node graph, intuitively laying out each step in a sequential progression of inputs and outputs. Any parameter from an earlier step can be modified to change the outcome, with the results instantly visible, and each node, being procedural, can easily be reused, saving time and effort.
We can also express the node graph above succinctly in English as a list of steps.
- 0 - Original Image (none)
- 1 - Photo Effect Noir (0)
- 2 - Color Invert (1)
- 3 - Gaussian Blur (2)
- 4 - Gaussian Blur (2)
- 5 - Color Dodge Blend (3, 4)
- 6 - Fractal Flow Noise (none)
- 7 - Blend With Mask (input - 0, background - 6, mask - 5)
The (0) in step 1 means that the Photo Effect Noir filter takes the output of step 0, the Original Image, as its input. Steps 3, 4, and 5 compute the Difference of Gaussians of the grayscale image to produce a sketch effect, which is then used as a mask to create the effect of edges catching fire.
Displaying a node graph as a list of steps, a Digital Compositing Pipeline (invented by Nodef), is very useful when screen space is limited, for example, when we need to view or change a node graph on a mobile device.
Shader Muse forked the Nodef project to enable Digital Compositing of Shaders and Filters on iOS (iPhone, iPad, and Mac).
Check out the compiled app of Shader Muse: Pipeline.
- Download and run the Shader Muse Pipeline app.
- In the Presets of the app, tap on the FIRE preset to apply the Fire Effect.
- Tap on the "f" button to view the Digital Compositing Pipeline of our Fire Effect.
- To further enhance this effect with smoke, add an "FBM Noise" Shader.
- Next, add a "Mix" Composite Filter to mix our Fire Effect with the "FBM Noise".
- Tap on the "Mix" node and change the Background to "7".
You can also adjust the intensity of the Fire Effect by tapping on the Gaussian Blur nodes and changing the Radius.
- FBM Noise - based on 2D Noise by Morgan McGuire @morgan3d
- Fractal Flow Noise Shader
- Particles
and over 150 CIFilters for Digital Compositing. Please star our project; we will add more Shaders with your encouragement.
- Source Code - in the code folder
- Compile
- iOS
- iPadOS
- Mac
- Swift
- Metal Shading Language
After the plague, "when the deadly pestilence arrived in the distinguished city of Florence, more beautiful than any other in Italy" (Decameron), while traveling, in the beautiful far-east town of Malacca, a fire occurred, and both the town and the skyline in front of our window were ablaze with a red-orange color.
Memento Mori, Memento Vivere. ("Remember death, remember to live.")