
Offsetting Animations in Shaders

Last updated 2023-05-14, 4:16:12 A.M. UTC+00:00

They like to call it "Time Displacement".

The Beginning of My Shader Adventures

It all started when I saw this post from MMaker.

Remade this in Blender for practice, I'm sure there's more efficient ways but I'll get there eventually 🤠

— MMaker (@stysmmaker) July 27, 2020

MMaker brought up the topic again after our discussion on the precision of AE effects' inputs. I knew shaders were something I needed to look into.

There's really only a couple of small things that'd prevent me from trying to do a whole vid in [Blender Shaders].
— MMaker, July 2020

And one of these small things was the "Time Displacement" effect we are very used to in After Effects.

Time Displacement in AE


From VSauce's "DISTORTIONS"

For the uninitiated, the Time Displacement effect is a compound effect: the luminance of an arbitrary map is used as the weight of the effect. As with any other weight map in AE, 0.5 represents no change, 0 full negative, and 1 full positive. Then there's the Max Displacement Time slider, which sets the full magnitude of the displacement in seconds (a negative value here reverses the effect). The last slider, Time Resolution (in frames/s), is not relevant to our discussion of shaders.

Using this effect often results in noticeable seams between two time-displaced frames. This is simply because there are not enough in-between frames to fill in the pixels, so in effect, we can think of the weight map as being posterized. This is less of a problem for procedural animations in shaders, since they are not based on frames.
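In shader terms, the weight-map convention is a single remap. A minimal sketch in OSL (all the names here are mine, and this is just the convention restated, not AE's internals):

shader time_offset_weight(
    float Luma = 0.5,            // 0.5 = no change, 0 = full negative, 1 = full positive
    float MaxDisplacement = 1.0, // seconds; a negative value reverses the effect
    float CurrentTime = 0.0,     // seconds
    output float DisplacedTime = 0.0
)
{
    // Remap [0, 1] luminance to a [-1, 1] weight, then scale by the magnitude.
    float weight = (Luma - 0.5) * 2.0;
    DisplacedTime = CurrentTime + weight * MaxDisplacement;
}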

Slit-scan and Absolute Remapping

Baku Hashimoto extends the concept of time displacement as a digital effect to the traditional slit-scan photography technique. In his very well-written article, he goes into detail on how this was achieved with BorisFX (he still refers to them as GenArts, which is how you can tell someone's a veteran) Sapphire's S_TimeDisplace. It differs from the vanilla Time Displacement in that it controls the absolute time value of the footage, so [0, 1] maps from the beginning of the footage to the end. Naturally, this is also possible in shaders. The implementation of this technique is left as an exercise for the reader. (It's just a lerp.)
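Or, to spoil the exercise, a sketch in OSL (FootageDuration is my own placeholder input):

shader absolute_remap(
    float Luma = 0.0,             // 0 = first frame of the footage, 1 = last
    float FootageDuration = 10.0, // seconds
    output float AbsoluteTime = 0.0
)
{
    // S_TimeDisplace-style absolute mapping: the promised lerp.
    AbsoluteTime = mix(0.0, FootageDuration, Luma);
}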

Time Displacement as a Hack for Halftone

Time Displacement has been used as a way to achieve the halftone effect natively in After Effects; it's far from ideal, but it nonetheless works.

AFTER EFFECTS SUPER FAST HALFTONE TUTORIAL #AFTEREFFECTS #MOTIONGRAPHICS #TUTORIAL

— MMaker (@stysmmaker) September 12, 2019

If all you want is halftone in shaders, though, you don't need to deal with any of this funkiness.

halftone in blender! pic.twitter.com/haYtapEUor

— lachrymaL (@lachrymaL_F) August 31, 2020
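For reference, here's roughly how a dot-pattern halftone can be written directly in a shader. This is a minimal sketch in OSL of the general approach, not necessarily how lachrymaL built theirs:

shader halftone(
    float Luma = 0.5,   // luminance of the input at this pixel
    float Cells = 40.0, // halftone cells per unit of UV space
    output color Cout = 0
)
{
    // Distance from the center of the current halftone cell.
    float cu = fmod(u * Cells, 1.0) - 0.5;
    float cv = fmod(v * Cells, 1.0) - 0.5;
    float dist = sqrt(cu * cu + cv * cv);

    // Dot area should track luminance, so the radius goes with sqrt(Luma).
    float radius = 0.5 * sqrt(Luma);
    Cout = dist < radius ? color(1) : color(0);
}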

Abstract

Pain!

My first attempt at doing this in shaders involved implementing a cubic solver. I stubbornly insisted on implementing the actual cubic formula, and to circumvent casus irreducibilis, I used the trigonometric method. It ended up working well enough for the specific configurations I was using, and the performance wasn't so bad either, but I had trouble switching roots when the principal root jumped to a different output, and I wasn't really thinking of using the discriminant. This would be a lot easier with actual shader languages. Don't ever try to do something like this with nodes again!
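For the curious, the trigonometric method in question looks roughly like this for a depressed cubic t^3 + pt + q = 0 in the casus irreducibilis case (4p^3 + 27q^2 < 0, which requires p < 0, so all three roots are real). A sketch in OSL, not my actual node setup:

// Viete's trigonometric method: the k-th real root (k = 0, 1, 2)
// of t^3 + p*t + q = 0 when all three roots are real.
float trig_root(float p, float q, int k)
{
    float m = 2.0 * sqrt(-p / 3.0);
    float theta = acos(clamp(3.0 * q / (p * m), -1.0, 1.0)) / 3.0;
    return m * cos(theta - 2.0 * M_PI * k / 3.0);
}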

So here's the effect we're trying to achieve:

Learn how to have nice randomized tile animations to use as a matte for transitions.
I don't trust twitter, so to compensate for the video being compressed to oblivion, I shall talk the video through. pic.twitter.com/R2doB90r6P

— CH FR (@chfr_otomads) January 13, 2021

In theory, this isn't very difficult. We set up an animation of a circle scaling up, use UV manipulation techniques to get a grid of them, and then offset each block's animation completion with a noise/random value; sliding the global completion then drives everything to completion. And indeed, this idea is not very hard to implement. But it isn't the exact effect described in CH FR's video above: we have ignored the fact that, in AE, we have total control over what we are trying to offset, including the timing function/easing of the animation.

The problem then escalates in complexity, but I don't think the solution is all that difficult to understand, or even very cumbersome to obtain. One could simply pass the result obtained in the previous paragraph (the global completion with the offset applied), normalized and clipped, into a timing function, and that would solve our problem. It is crucial that the offsetting happens before entering the shaping function.
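Here's a sketch of the whole pipeline in OSL. The grid/offset parameters and the wipe at the end are my own choices, and ease() is left as the identity for now; the Method section below swaps in a real curve:

// Stand-in timing function; replace with a real easing curve
// (the Method section below uses a quintic).
float ease(float t)
{
    return t;
}

shader offset_grid(
    float Completion = 0.0, // global completion of the whole animation, 0 to 1
    float Tiles = 2.0,      // 2x2 grid, as in the demos below
    float MaxOffset = 0.5,  // how far apart in time the tiles can be
    output color Cout = 0
)
{
    // A stable per-tile random value from the tile's integer coordinates.
    point cell = point(floor(u * Tiles), floor(v * Tiles), 0);
    float offset = cellnoise(cell) * MaxOffset;

    // Offset FIRST, then normalize and clip back into [0, 1]...
    float local = clamp(Completion * (1.0 + MaxOffset) - offset, 0.0, 1.0);

    // ...and only THEN pass the result through the timing function.
    float eased = ease(local);

    // Drive the animation with it; here, a diagonal wipe within each tile.
    float cu = fmod(u * Tiles, 1.0);
    float cv = fmod(v * Tiles, 1.0);
    Cout = (cu + cv) / 2.0 < eased ? color(1) : color(0);
}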

Home Style Animation Player

During the entire process of setting up this webpage, implementing this interactive player in WebGL probably took the most time. It's really cool because it allows me to use shaders in a canvas tag, so I can apply the principles in this article directly. But I've since heard that WebGL is dead, and that its successor, WebGPU, has yet to be released.
KP informed me that there is a higher-level library called three.js, which would probably have been easier to work with.

Method

Consider the following animation (1).

(1)


On the left is a 2x2 grid of the same diagonal wipe animation, and on the right is the animation curve that describes the completion of the animation.

And now let's offset the animation in each square. I'll use specific values here, but the idea is the same with noise and random numbers. I'll also color-code them so they're easier to see on the graph. (2)

(2)


If this is all you're trying to do, we're done here! But let's continue and implement the timing function so that the animation is eased. We'll be using a quintic curve here.
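One standard quintic is the "smootherstep" polynomial. I'm assuming a curve of this shape here; the demo's exact coefficients may differ:

// Quintic ease-in-out: 6t^5 - 15t^4 + 10t^3.
// Flat tangents at t = 0 and t = 1 give the ease-in/ease-out feel.
float quintic(float t)
{
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0);
}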

Naively applying the timing function yields the following result. (3)

(3)


Note on the Newton–Raphson method

This implementation of the cubic bezier is based on this one from Golan Levin (I don't know why he defined the same function twice). The cubic bezier is a parametric 2D curve: two functions x(t) and y(t), both cubics, describe the X and Y coordinates of every point on the curve, with the parameter t ranging over [0, 1]. We need to solve the cubic x(t) for our given X-value, which in our application is the current time of the playhead, to obtain the parameter t, which we then plug into y(t) to find the corresponding Y-coordinate. This implementation uses the numerical Newton-Raphson method to approximate the roots of x(t).

To use this method we also need the derivative of the function we're trying to solve; fortunately, a cubic's derivative is a quadratic, and that's pretty easy to compute.

It converges pretty fast for our case, so we're only doing 5 iterations. (We just use x for our initial guess t0.)

In the end we plug t5 into y(t) to get the final value.
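Stitched together, the whole solve looks something like this. This is an OSL paraphrase of the idea rather than Golan Levin's code verbatim, using the usual cubic-bezier() convention of endpoints fixed at (0,0) and (1,1):

// Solve y for a given x on a cubic bezier with control points
// (0,0), (x1,y1), (x2,y2), (1,1).
float cubic_bezier(float x, float x1, float y1, float x2, float y2)
{
    // Polynomial coefficients for x(t) and y(t).
    float cx = 3.0 * x1;
    float bx = 3.0 * (x2 - x1) - cx;
    float ax = 1.0 - cx - bx;
    float cy = 3.0 * y1;
    float by = 3.0 * (y2 - y1) - cy;
    float ay = 1.0 - cy - by;

    // Newton-Raphson with initial guess t0 = x and five iterations.
    float t = x;
    for (int i = 0; i < 5; i += 1) {
        float xt = ((ax * t + bx) * t + cx) * t - x;     // x(t) - x
        float dxdt = (3.0 * ax * t + 2.0 * bx) * t + cx; // x'(t), a quadratic
        if (fabs(dxdt) > 1e-6)
            t -= xt / dxdt;
    }

    // Plug the approximated parameter into y(t) for the final value.
    return ((ay * t + by) * t + cy) * t;
}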

Again, we get this chaotic result because the offset happens after the shaping function: the values coming out of the function are offset out of the range accepted for our animation completion. If we switch the order of operations, as stated in the abstract, we obtain the following. (4)

(4)


And this is pretty much what we want. We could even switch out the timing function for something else, such as the cubic bezier curve that everybody's used to. (5) Try dragging the Bezier handles!

(5)


A couple more trivial changes and we get our desired effect. (6)

(6)


Reach of Different 3D DCCs

Placing each frame of a video on a separate plane in 3D space is actually something that can be done quite easily in Cinema 4D through the MoGraph Multi Shader. We like to think most DCCs (Maya, 3ds Max, Blender, Cinema 4D, LightWave, modo; Houdini and ZBrush are the exceptions!) do not differ much in their scope and what they can do, but this goes to show that there are still very specific tasks that one DCC cannot do, or at least makes very painful, where another makes them really easy. I myself have been enjoying the C4D-to-Blender-to-C4D-to-Blender-to-C4D-to-Blender workflow quite a lot. (See ぽつねん copycat below.)

Frame-based Time Displacement in Shaders

So would there be a way to time-displace a video in shaders? Our pal Roughy ran into something similar while working on The Power of Terry. The goal was essentially to have a flipbook of all the frames of a video, each frame on its own plane in 3D space. This turned out to be particularly challenging in Blender. We looked at drivers and scripting; drivers couldn't cut it, and no one knew how to script. Ultimately, Roughy did all of it by hand, manually assigning materials to every plane. A couple of days later I dug into OSL and managed to do it with a filename-concatenation method, basically replacing the Image Texture input node. One unfortunate downside is that OSL is not implemented for GPU rendering in Cycles. V-Ray and Octane do seem to support OSL on the GPU, but that appears to be a nonstandard implementation.


WTF Mr. Roosendaal?
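For reference, the gist of the filename-concatenation trick, sketched in OSL; the path, naming pattern, and parameters here are placeholders of mine:

// Pick a frame of an image sequence by building the file path at
// shade time, standing in for the Image Texture node.
shader frame_from_sequence(
    string BasePath = "/tmp/frames/frame_", // placeholder path
    int Frame = 1,
    output color Cout = 0
)
{
    // e.g. /tmp/frames/frame_0001.png
    string filename = format("%s%04d.png", BasePath, Frame);
    Cout = texture(filename, u, 1.0 - v); // flip v to taste
}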

Special Thanks!

Frame-based Time Displacement is similarly doable through OSL in Blender.


CGMatter video on frame-based time displacement implemented through OSL