BNPR exclusive: Edge nodes [Part 2]
Part 2: The development
In creating any system (and in engineering generally), the first and foremost task is to define the goal. For edge nodes, that goal is to get edges/contours from render passes, control their width, and change their color. Once the goal is defined, experiments can take place with known parameters and concepts.
Eliminating variables, pew pew!!!
Light BWK wrote, “The idea is to make as many constants as possible, eliminate variables.”
Once the first prototype set is usable, it is time to remove variables. In node terms, variables are nodes like Color Ramp and RGB Curves. They are variable because you can’t set the same exact value twice (there is no numerical input). At the time of writing, the compositor’s Color Ramp node has no position field, while the Color Ramp in the internal render’s material/texture tab does. An RGB Curves node with a curved line is also a variable: a slight 0.001 shift of a control point will move the whole curve. The only way to make RGB Curves a constant is to use the X/Y value inputs with a straight line. It is possible to group variable nodes and reuse the group as a constant, but that isn’t very practical when no numerical input was involved in creating it.
“Give me that node where you set the curve to add contrast to the blue but dim the red and…” That is never going to work. Always build something predictable, for consistency’s sake.
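As a loose illustration of what “constant” means here: a plain Math node with a typed-in value is fully reproducible, unlike a hand-dragged curve point. A minimal sketch through Blender’s Python API (the 1.45 gain is just a placeholder, not one of our actual values):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# A Math node is a "constant": its value is typed in numerically, so anyone
# can rebuild the exact same node. A hand-dragged RGB Curves point cannot
# be communicated or reproduced this precisely.
gain = tree.nodes.new("CompositorNodeMath")
gain.operation = 'MULTIPLY'
gain.inputs[1].default_value = 1.45  # placeholder value
```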
Getting the edges
There are a few places you can get edge data from:
1. Alpha pass
2. Z pass
3. Normal pass
Each has its pros and cons. From the alpha pass we can get the equivalent of FreeStyle’s “external contour.” From the Z pass we can get the equivalent of FreeStyle’s “silhouette.” From the normal pass we can get more accurate edge data than FreeStyle ever had.
As for the cons: where normals are parallel, the edges disappear. The alpha pass is useless as a mask where the object is transparent or it masks the wrong part. The Z pass is useless without proper conditioning, since its values are not between 0 and 1.
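If you want to follow along, the relevant passes have to be enabled on the render layer first. A minimal bpy sketch, assuming current property names (older 2.7x builds expose the same flags under scene.render.layers):

```python
import bpy

# Current builds: the pass toggles live on the view layer
# (in 2.7x the same flags sit on bpy.context.scene.render.layers.active)
view_layer = bpy.context.view_layer
view_layer.use_pass_z = True               # Z pass -> "silhouette"-style edges
view_layer.use_pass_normal = True          # normal pass -> most accurate edge data
view_layer.use_pass_object_index = True    # index passes, used later for masking
view_layer.use_pass_material_index = True
# The Alpha output is always available on the Render Layers node.
```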
To get edges from the normal pass we use the Laplace filter, which works best on values ranging from 0 to 1. For the Z pass, even after conditioning, the values are too small for Laplace to detect, so that job falls to the Sobel and Prewitt filters. The alpha or object/material index pass then masks off the extra white areas those two filters leave behind.
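A rough sketch of that wiring in bpy. The socket names (“Normal”, “Depth”, “Alpha”) follow current Blender and may differ in older builds (the Z output used to be called “Z”), and the Normalize node here merely stands in for the Z conditioning step:

```python
import bpy

tree = bpy.context.scene.node_tree
rl = tree.nodes.new("CompositorNodeRLayers")

# Laplace on the normal pass (its values already sit in a friendly 0..1-ish range)
laplace = tree.nodes.new("CompositorNodeFilter")
laplace.filter_type = 'LAPLACE'
tree.links.new(rl.outputs["Normal"], laplace.inputs[1])

# Condition the Z pass (Normalize stands in for the conditioning step),
# then let Sobel (or Prewitt) pick up the silhouette
normalize = tree.nodes.new("CompositorNodeNormalize")
sobel = tree.nodes.new("CompositorNodeFilter")
sobel.filter_type = 'SOBEL'
tree.links.new(rl.outputs["Depth"], normalize.inputs[0])  # "Z" in older builds
tree.links.new(normalize.outputs[0], sobel.inputs[1])

# Use the alpha (or an index pass) to mask off the stray white areas
mask = tree.nodes.new("CompositorNodeMixRGB")
mask.blend_type = 'MULTIPLY'
tree.links.new(sobel.outputs["Image"], mask.inputs[1])
tree.links.new(rl.outputs["Alpha"], mask.inputs[2])
```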
Not everything is straightforward. The edges you get must sit within a certain value range before we can control them, roughly between 1.3 and 1.6 at most. We also need to filter out noise before the edges are calculated; a high pass filter is used for that.
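There is no dedicated high pass node in the compositor, so the usual stand-in is a blur-and-subtract construction; whether our node group does exactly this is revealed in Part 3, so treat the sketch below as an assumption. The final Multiply then pushes the edge values toward that 1.3–1.6 range:

```python
import bpy

tree = bpy.context.scene.node_tree
edges = tree.nodes["Filter"]  # hypothetical: the Laplace/Sobel result from above

# Poor man's high pass: subtract a blurred copy to suppress low-frequency noise
blur = tree.nodes.new("CompositorNodeBlur")
blur.filter_type = 'GAUSS'
blur.size_x = blur.size_y = 3
sub = tree.nodes.new("CompositorNodeMath")
sub.operation = 'SUBTRACT'
tree.links.new(edges.outputs["Image"], blur.inputs["Image"])
tree.links.new(edges.outputs["Image"], sub.inputs[0])
tree.links.new(blur.outputs["Image"], sub.inputs[1])

# Scale the result so the edge values land in the controllable ~1.3-1.6 range
gain = tree.nodes.new("CompositorNodeMath")
gain.operation = 'MULTIPLY'
gain.inputs[1].default_value = 1.5  # placeholder gain
tree.links.new(sub.outputs[0], gain.inputs[0])
```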
No “One node group to rule them all”
Ideally there would be one node group that detects all the edges you need. But that ideal node is fictional. Below is an excerpt from our development log:
Daniel wrote, “I think it makes lots of sense to provide multiple line sets with all the techniques we came up with because it’s not really like one is better than the other one but more like some are good for hard surface models (set 1 for example(when including Z)) and some others for organic models (e.g. set 2).”
We came up with 6 line conditioning sets. We actually made more than that, but only those 6 cover our test scenes. How each of them works is a topic for PART 3, when we reveal everything. After that, the 7th, 8th and Nth line conditioning sets will pop up from everywhere.
Me need nice edges! Please.
There are 2 types of edges that come out of the line conditioning sets: solid lines and fake anti-aliased lines. We would love to see an SMAA (Enhanced Subpixel Morphological Antialiasing) node in Blender, so we could avoid the heavy, slow, HDD/SSD-filling Full Sample Anti-Aliasing (FSAA). Developers, we need this one urgently.
Next is the ability to make the lines thicker or thinner; for that we simply use the Dilate/Erode node. With the Feather option, fake AA can also be done here, though the radius only changes in increments of 2 pixels per unit. Lastly, we provide a color node to make color selection faster.
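A hedged sketch of that last stage in bpy (the node lookup and the line color are placeholders; socket order follows current Blender):

```python
import bpy

tree = bpy.context.scene.node_tree
lines = tree.nodes["Math"]  # hypothetical: the conditioned line output from above

# Dilate/Erode thickens or thins the line; FEATHER mode also gives a soft,
# fake-AA edge, but the radius only moves in steps of about 2 pixels per unit
thick = tree.nodes.new("CompositorNodeDilateErode")
thick.mode = 'FEATHER'
thick.distance = 2
tree.links.new(lines.outputs[0], thick.inputs[0])

# Quick color control: mix a flat RGB over the shot using the line as the factor
color = tree.nodes.new("CompositorNodeMixRGB")
color.blend_type = 'MIX'
color.inputs[2].default_value = (0.05, 0.05, 0.05, 1.0)  # placeholder line color
tree.links.new(thick.outputs[0], color.inputs[0])
```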
MISC: We made variables again? Crash? 3? That’s awesome!!!
As Lee wrote in the previous post [see part 1], he came up with a setup that does not use the normal pass; it uses another render layer and a few colorful lamps instead. The problem with that setup is that the lighting introduces too many variables. The point of using the normal pass is that its colors are constant, which makes it very good for conditioning. That is not to say Lee’s idea is unusable, but on every render he needs to tweak it ever so slightly to keep it working in optimum condition.
Fast Gaussian blur: we love it, but it crashed Blender too many times while we were developing the edge nodes. You can’t use too many of them in one composite. Maybe it is a bug, but it happens at random, and every time it occurs, the last node changed is a Fast Gaussian blur.