Edge node V1.2.4 (July 2015)

Warning: Heavy Technical Reading Ahead.

The Edge Node v1.2.4 blend file can be downloaded from the BNPR download page.

Edge Node v1.2.4

Before Freestyle, there was Edge Node. It is image-based edge detection, not a shader, because the detection is done in the compositor. The basic idea is to find edges from these passes:

  • Normal Pass
  • Z-Depth

To use them you need to activate these passes via the Properties Window > Render Layers tab > Passes panel. Another precondition to get this node setup working is to turn on Full Sampling (Render tab > Anti-Aliasing panel). For the original edge node we recommended 8x FSAA; since the new setup has better AA, 5x FSAA is more than enough.

Prerequisites to get Edge Node working:

  • Turn on Z-depth & Normal pass.
  • Turn on Full Sample Anti Aliasing (5x).

Edge Node v1.2.4

Why Edge Node was made and the motivation to improve it

Edge Node is image-based edge detection, which means rendering speed depends on how many pixels are displayed or rendered. A typical HD-size render adds about 4 s to render time on an HDD, and it can be faster on an SSD.

A complex scene may take a lot more time to render if done in Freestyle, while Edge Node only adds about 4 s of render time. The result will look like the final render as well. Freestyle does not have intersection edge detection yet; Edge Node is the answer to that.

Besides Freestyle, we also have the backface-culling method for edge detection. Backface culling only adds lines for the Freestyle-equivalent edge types Silhouette and Contour; internal lines must be added as extra geometry or as texture. It also doubles the amount of geometry, which is not ideal for a fast workflow.

Points to take away:

  • Freestyle is heavy for complex scenes; render time is potentially huge.
  • Freestyle does not have intersection edge detection.
  • The backface-culling method only detects the Freestyle-equivalent edge types Silhouette and Contour.
  • The backface-culling method is resource heavy, as it doubles the polygon count.
  • Edge Node is image-based edge detection and only adds about 4 s of render time for an HD-size render.
  • Edge Node detects the Freestyle-equivalent edge types Silhouette, Contour, Crease, Border and Intersection.
  • Edge Node renders edges of almost hand-drawn quality. More organic lines.

Weaknesses of previous implementations

From the original Edge Nodes, which you can read about here, edge detection can be implemented in many ways. The problem with this variety of algorithms is that with different implementations, your Z-depth edges will behave differently from the edges obtained through the Normal pass. The common problems are:

  • Different line thickness (different algorithms that neglect Pixel Math).
  • Missing edges (inefficient edge detection that neglects Pixel Math).
  • Aliasing (algorithms that introduce aliasing).
  • Noise (over-sensitive detection algorithms).

Solving these problems in your Pixel Math algorithm takes a lot of experimentation. I had the opportunity to do that testing a few years back for the original Edge Node. Early this year (2015) I recognized where the problems were. The solution is very straightforward: use the same algorithm on both the Normal pass and Z-depth. After that, they are combined with the same algorithm again.

How to use Edge Node v1.2.4?

1. Turn ON Full Sampling, 5x FSAA.
2. Turn ON Z-depth and Normal Pass.
3. Turn ON Material & Object Index Pass (optional).
4. Append Edge Node v1.2.4.
5. Do a test render, F12.
6. Link Normal pass and Z-depth to Edge node group.
7. Adjust Distance and check it via the 2nd output of Edge Node.
8. Enter Edge Node and remove noise using the 3 open color ramps [please read the details below].
9. Do masking for color selection (Optional).
10. Color the detected edges [Please see detailed instruction below].
11. Combine the solid render with the colored edges.

Let me walk you through the steps for the node setup and explain my thought processes.

Getting Edges from Z-depth

Why Z-depth? Why not the Normal pass only?

When 2 surfaces are parallel, the Normal pass only sees 1 continuous surface. But between the 2 surfaces there is some depth, and that is why we need edges from Z-depth.

Z Depth edge detection

How to get good depth:

Z-depth usually has very big values. If previewed, it will always look white, and that is not useful. To make Z-depth more useful, we have to be able to adjust it.

We do a logarithm calculation, with Z-depth as the top input and the squared distance as the bottom input. When viewing the result on a Color Ramp node, the color at position 0.5 is our input distance.

Depth Normalized

Note: the Math node's "Logarithm" mode calculates the logarithm of the upper value to the base of the lower value.
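As a rough sketch of what that Math node computes (plain Python, not Blender code; `normalize_depth` is an illustrative name):

```python
import math

def normalize_depth(z, distance):
    """Math node in Logarithm mode: log of z to the base (distance squared).
    'distance' stands for the camera-to-subject distance in Blender Units."""
    return math.log(z) / math.log(distance * distance)

# A depth equal to the chosen distance lands exactly at position 0.5 on the ramp:
print(normalize_depth(10.0, 10.0))   # 0.5
# Nearer and farther depths spread out around it:
print(normalize_depth(1.0, 10.0))    # 0.0
print(normalize_depth(100.0, 10.0))  # 1.0
```

This is why the color at ramp position 0.5 corresponds to the input distance: log base d² of d is always 0.5.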

How to make good depth for Laplace filter:

Now we have Z-depth conditioned and normalized. A simple greyscale color ramp will not be useful, because the Laplace filter is bad at detecting edges in greyscale. That is where my “Magic Machine” color ramp node comes in. With that going into the Laplace filter, Z-depth edge detection will be very nice.

Distance Check:

Distance Check

Distance Check is used to see how the “Magic Machine” color ramp looks. If it is almost 1 hue, it means you set the distance too big or too small. Distance is in Blender Units.

The basic workflow to get a good Distance value is to look at how far the camera origin is from the center of interest. With the floor grid on, you can estimate it very fast. There is no need for accuracy; you just need faces whose distances are properly represented with different hues for use in the Laplace filter.

Getting lines from Normal Pass

Normal Pass Edge Detection

Custom Direction:

The default normal directions are not good enough, especially on the blue axis: because blue is very dark in value, edges detected from it will be very dark. With custom directions we can find extra edges from different directions. The Dot output of the Normal node is greyscale. Remember that I mentioned greyscale is not good for edge detection with the Laplace filter; it is a compromise here. We have 3 Laplace filters, and the 3 of them together are very powerful at pulling invisible edges out.
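The idea of the Dot output can be sketched in plain Python (illustrative only; `dot_shade` is a made-up name, and the real Normal node works on per-pixel normals):

```python
import math

def dot_shade(normal, direction):
    """Greyscale value from dotting a surface normal with a chosen direction,
    clamped to [0, 1] as a pixel value."""
    d = sum(n * v for n, v in zip(normal, direction))
    return max(0.0, min(1.0, d))

# A surface facing the direction is white; a perpendicular one is black:
print(dot_shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
print(dot_shade((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # 0.0
# A tilted custom direction lights up surfaces the default axis leaves dark:
tilted = (1 / math.sqrt(2), 0.0, 1 / math.sqrt(2))
print(round(dot_shade((1.0, 0.0, 0.0), tilted), 3))  # 0.707
```

Each custom direction produces a different greyscale image, so each of the 3 Laplace filters sees a different set of value jumps to detect.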

Flat & round surfaces:

The Laplace filter has a unique characteristic. With the normal input brightened, the Laplace filter detects round surfaces better and almost noise free, because the blue channel is dark. At the default normal input, the Laplace filter is good at detecting flat surfaces, but can be very noisy on round surfaces. If edge detection is noisy on round surfaces, you can turn off or mute that Laplace filter's output from being combined. More on this in the next section.
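For readers unfamiliar with the filter itself, here is a minimal sketch of a 3x3 Laplacian on a greyscale image (plain Python; Blender's Filter node uses its own kernel internally, so treat this as the textbook version):

```python
def laplace(img):
    """3x3 Laplacian edge detect on a 2D greyscale list-of-lists.
    Flat regions cancel to 0; a value jump between neighbours shows as an edge."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = abs(
                4 * img[y][x]
                - img[y - 1][x] - img[y + 1][x]
                - img[y][x - 1] - img[y][x + 1]
            )
    return out

# Two flat regions of different value: only the boundary columns light up.
img = [[0.2] * 3 + [0.8] * 3 for _ in range(4)]
edges = laplace(img)
print(edges[1])  # non-zero only at the boundary between the two regions
```

This is also why greyscale input is weak: when two surfaces happen to land on similar grey values, the neighbour differences cancel and no edge appears, whereas a hue-varied input keeps the values distinct.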

Combine lines, Boosting lines and removing noise

combine algorithm

Why not to combine with addition:

It is easy to go the addition route to make the detected edges brighter, but it has more disadvantages.

  • The amount of clipping is unknown, because different numbers of inputs come from the edges detected from the Normal pass and from Z-depth.
  • It is hard to remove noise when clipping happens.
  • It is hard to boost edge brightness when clipping happens.

When combining with the Maximum operation, we compare which pixel is brighter. Only the brighter pixel is selected, so clipping never happens.

maximum operator
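The difference can be sketched in a few lines of plain Python (the 0.7 values stand in for anti-aliased edge pixels):

```python
def combine_add(a, b):
    """Addition clips: anything over 1.0 is lost, flattening AA gradients."""
    return min(1.0, a + b)

def combine_max(a, b):
    """Maximum keeps the brighter pixel; no clipping, AA survives."""
    return max(a, b)

# Two anti-aliased edge pixels at 0.7 each:
print(combine_add(0.7, 0.7))  # 1.0 -- gradient information destroyed
print(combine_max(0.7, 0.7))  # 0.7 -- still room to boost or denoise later
```

With Maximum, the output range is exactly the range of the inputs, so the later boost and noise-removal stages have predictable data to work with.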

Boosting lines while maintaining AA:

To boost line brightness, we just multiply it by 2. This way we still keep AA information.

Removing noise without removing AA, and keeping bright areas:

When removing noise we don't do subtraction. Subtraction means we also remove brightness from the already-white edges, making them darker, which isn't what we want. Instead, we remap the value with a greyscale color ramp, then slide the black color to the right to remove noise. The color ramp also acts as clamping, which is why we don't clamp after multiplying the edges by 2.

The last node at this stage is another Maximum operation.
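The ramp trick boils down to a linear remap with clamping at both ends. A sketch in plain Python (illustrative; `ramp` is a made-up helper mirroring a greyscale Color Ramp):

```python
def ramp(v, black, white):
    """Greyscale color ramp: values below 'black' clamp to 0 (noise removed),
    values above 'white' clamp to 1 (edges boosted), with a linear AA
    gradient in between."""
    if white == black:
        return 0.0 if v < black else 1.0
    t = (v - black) / (white - black)
    return max(0.0, min(1.0, t))

# Black slid right to 0.1 kills faint noise; white left at 1.0 keeps the ramp:
print(ramp(0.05, 0.1, 1.0))  # 0.0  (noise gone)
print(ramp(0.55, 0.1, 1.0))  # 0.5  (AA gradient preserved, just remapped)
print(ramp(1.2, 0.1, 1.0))   # 1.0  (clamped -- no extra clamp node needed)
```

The same function explains the built-in clamping: a value that was multiplied past 1.0 earlier simply comes out as 1.0 here, with the mid-greys of the AA gradient untouched.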

Clipping & Clamping

Edge conditioning

Final Edge Conditioning

How to remove noise and boost white with one node:

We remap the output from the edge-combine stage. This time, both the white and the black color are slid toward the middle. Small noise is removed by sliding the black color a little to the right; we boost white by sliding the white color to the left. Do not move the 2 colors too close together, as that will remove AA.

Thickness adjustment (styles):

With the Erode/Dilate node, we can style the line to look hard-edged or soft. With the Distance value we can also add or remove thickness. This node is only useful when the input is greyscale.
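Dilate and erode are themselves simple pixel math: a neighbourhood maximum (or minimum). A 1D sketch in plain Python (Blender's node works in 2D, so this is just the concept):

```python
def dilate(row, distance):
    """1D greyscale dilate: each pixel takes the max of its neighbourhood,
    thickening white lines. Swap max for min to erode (thin) instead."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - distance), min(len(row), i + distance + 1)
        out.append(max(row[lo:hi]))
    return out

line = [0.0, 0.0, 1.0, 0.0, 0.0]
print(dilate(line, 1))  # [0.0, 1.0, 1.0, 1.0, 0.0] -- the line got thicker
```

This is also why the node only makes sense on greyscale input: the max/min comparison has a clear meaning per value, not per color.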

Final boost and clamping:

If you have done thickness adjustment, this final multiply will boost the white areas. It also acts as clamping before we do edge coloring.

Simple coloring of edges

Very simple way (Mix algorithm)

Simple Color

Professional way (Alpha Over)

Alpha over edges
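The Alpha Over approach reduces to the standard straight-alpha compositing formula. A per-pixel sketch in plain Python (simplified; Blender's node also handles premultiplied alpha):

```python
def alpha_over(fg, fg_alpha, bg):
    """Alpha Over per channel: foreground laid over background by its alpha.
    With the edge mask fed in as alpha, lines cover the render cleanly
    instead of tinting everything the way a plain Mix would."""
    return tuple(f * fg_alpha + b * (1.0 - fg_alpha) for f, b in zip(fg, bg))

red_line = (1.0, 0.0, 0.0)
render_px = (0.2, 0.4, 0.6)
print(alpha_over(red_line, 1.0, render_px))  # (1.0, 0.0, 0.0) -- full edge pixel
print(alpha_over(red_line, 0.5, render_px))  # half-covered AA pixel, blended
```

Because the AA gradient of the edge mask becomes the alpha, the anti-aliasing of the lines carries straight through to the colored result.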

Multi color lines

Object and Material masking:

Object and Material masking can be found at:

Object Masking:
Properties Window > Object Tab > Relationship Panel > Pass Index
Material Masking with BI node:
Properties Window > Material Tab > Render Pipeline Options Panel > Pass Index
Material Masking without node:
Properties Window > Material Tab > Options Panel > Pass Index

To use them you need to activate Material Index Pass and Object Index Pass. They can be activated at:

Properties Window > Render Layer Tab > Passes Panel

When activated, IndexOB and IndexMA will appear on your Render Layers node. You need the ID Mask node to get the mask; then you can do some pixel math for masking.

Pixel Math masking, Dilate and Erode:

Masking pixel math is easy: you either add then clamp, or subtract then clamp. You can also use the Erode/Dilate node to make your white area bigger or smaller.

The common workflow is as follows: you only want a color on certain edges. The way to do it is to subtract off the part you don't want, then clamp. Alternatively, mask with white the part you want the color on. Clamping is important in masking, as it is easy to get values greater than 1 or less than zero.
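Both operations fit in a few lines of plain Python (illustrative helper names):

```python
def clamp(v):
    """Keep a mask value in [0, 1] so the data stays pure greyscale."""
    return max(0.0, min(1.0, v))

def subtract_mask(edges, mask):
    """Remove unwanted edges: subtract the mask, then clamp back to [0, 1]."""
    return clamp(edges - mask)

def add_mask(a, b):
    """Combine two masks: add, then clamp so overlaps don't exceed 1."""
    return clamp(a + b)

edge_px, unwanted = 0.9, 1.0
print(subtract_mask(edge_px, unwanted))  # 0.0 -- that edge won't get the color
print(subtract_mask(edge_px, 0.0))       # 0.9 -- untouched elsewhere
print(add_mask(0.7, 0.6))                # 1.0 -- clamped, not 1.3
```

Without the clamp, the out-of-range values would leak into the coloring stage and tint pixels you never masked.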

Lay colors the pro way (Alpha Over):

This is an example of masking and coloring node setup for Edge Node Stress Testing day 6 (Render Result Below).

Masking In Action

Pixel Math as the core knowledge for Edge Node

For this version of Edge Node, pixel math is used heavily. A few considerations were made to optimize the setup:

1. Do not use the addition operation to combine detected edges.
2. Do not use the subtraction operation to remove noise.
3. Use the Maximum operation to combine detected edges.
4. Boost color and remove noise with the Color Ramp node to keep good AA.
5. Clamp before the edges are used for coloring.
6. Masking needs a lot of clamping to keep the data pure greyscale.
7. Use the Combine RGBA node to give the detected edges an alpha channel.
8. Alpha Over is the better way to combine the image with colored edges.

Edge Node purely plays with 1 color channel, and we want the result to be white or black. When this is extended to 3 color channels (RGB), the application of pixel math shows even more wonderful color behaviors.

Some examples of RGB pixel math application:

1. Remove render noise and replace it with the colors beside it.
2. Combine a few stops of an HDR image.
3. Boost midtones after an Overlay blend mode is applied.
4. Pixel-based depth of field for effects like tilt-shift.
5. Selective glow, either by hue or by value.

The applications of proper pixel math are endless. Thus I encourage you to get the Pixel Math e-book from the BNPR store. It is the way to not only feel your composite node setup but actually understand, in mathematical terms, why things behave as they do.

Stress testing Edge Node Version 1.2.0 to 1.2.4


Edge node v1.2.0, Stress testing day 1, Gas Mask,
Model link: http://www.blendswap.com/blends/view/79686

Close edges detection is very weak here.


Edge node v1.2.0, Stress testing day 2, Violin,
Model link: http://www.blendswap.com/blends/view/79595

Same as previous, close edges are hard to see.


Edge node v1.2.1, Stress testing day 3, Old Vintage Car,
Model link: http://www.blendswap.com/blends/view/77134

Solved close edge detection here.


Edge node v1.2.2, Stress testing day 4, Micro 6,
Model link: http://www.blendswap.com/blends/view/76453

Doing internal masking to remove edges. Not a good solution.


Edge Node v1.2.3, Stress testing day 5, Chibi tank,
Model link: http://www.blendswap.com/blends/view/75174

Fine tune to make more edges appear.


Edge node 1.2.4, Stress testing day 6, Sinon SAO,
Model link: http://www.blendswap.com/blends/view/75805

Multi color edges, move masking outside edge node, move color assign outside as well.


Edge Node 1.2.4, Stress testing day 7, Nikon D7100,
Model link: http://www.blendswap.com/blends/view/77959

With color assign and masking outside, the node setup is fully optimized.

The future of image-based edge detection

>>> Realtime Edge Rendering

Edge node might turn into that next update. 😉

The Edge Node v1.2.4 blend file can be downloaded from the BNPR download page.

  • Gil Ruso

    This is genius!!! Thanks a lot for sharing… Is there a way to define the level of detail on the edge detection so it can detect more edges specially like for organic shapes? Something similar to what creasing threshold does in freestyle?

    • You can use the color ramp in “Edge Conditioning” to add/remove details. Slide white color to the left to get more lines, slide black color to the right to remove details. Hope this helps. 🙂

      • Gil Ruso

        Thank you very much for the reply. It worked like a charm!!!!!! This is by far the best 3d edge detection i have seen. And it is so fast!!! It doesn’t matter how complex the scene it’s always as fast.
        One more question… which is the best way to change the line thickness? I have seen I can change the distance value in the condition edges/thickness node. The problem is it only accepts integers so the increments in width are very drastic from 1 to 2 to 3 for example. Is there a better way to change the line thickness? thanks again and many thanks for sharing this. 🙂

        • You are welcome.

          For better line thickness, you can change “Fall Off” type. Different fall off will make the edges of the lines softer or harder. You can also add another color ramp after thickness node (which is dilate/erode) to make it behave like “edge boost+noise removal” node. That way you can have better control of line thickness.

          • Gil Ruso

            Thank you for the tips. I’ll try that!

  • motorsep

    Is this something that can be translated into Cg/HLSL/GLSL shader to be used in real-time rendering (games) ?

  • Zauber Paracelsus

    Will this work if you are using cycles for rendering? Because I noticed that the instructions tell you to enable the full sample option, which AFAIK is not supported by Cycles.

  • Slartibartfasz

    Hey! First of all: great job! I really like the setup; line detection seems a lot better than Freestyle, which does not work very well for me in more complex scenes. I have a question about AA though: the lines I am getting are very (!) aliased, and no playing around with the various filters changed that. This does not change whether I'm using Cycles or Blender Internal – I'd like to use Cycles.

    I’m afraid there is something basic that I’m missing – any ideas?


    • In Blender Internal, there is a toggle button for full-sample anti-aliasing (FSAA). That renders the same image x times, as specified in the setting. When rendering, BI makes x .exr files that are combined into the final image. This often taxes the CPU more and uses more RAM, but the final result has no aliasing.

      In Cycles, there is no FSAA. Thus you only have 1 sample when doing compositing. Hence the aliasing problem.

  • NPRer

    Any chances of an SVG exporter? Or is it a limitation?

    • The result is raster, but you can easily convert it to vector shapes in Inkscape/AI. You only have to set the lines as black and everything else as white.

      • NPRer

        That’s clever.

        Also, it being based on a shader does not mean that all of the mesh edges could be baked into a texture, right?

        • I can’t call this a shader. It is a post process of raster images.

          As written on related article, Edge Node will give you Freestyle equivalent line types:
          Silhouette, Contour, Border and Intersection (not available with Freestyle)
          These line types look more natural compared to those obtained from Freestyle's mesh calculation.

          Also, you cannot bake the lines as a texture. The line positions change depending on your point of view.
