
Non-Euclidean Cube

I’ve just finished Portal 2’s co-op mode – a bit late, I know – and I had a blast! Throughout my playthrough, I was impressed by Valve’s clever use of the different game mechanics. I hadn’t played puzzle games in a while, and since I liked it so much, I searched for other games similar to Portal. In my pursuit of my next late-night hobby, I stumbled upon the game ibb & obb but, most importantly, I came across the following video about non-Euclidean games.

The cube part of the video really sparked my interest. I love the magical feeling of the cube, and I tried to find out how the trick was performed. This article by Alan Zucconi explains that to create this illusion, you need to use special stencil buffers. However, that’s not the technique we are going to use in this tutorial. Since Spark AR doesn’t give access to similar buffers, we will use UV manipulations and render passes to achieve the same effect.

Three.js implementation of the original approach


This tutorial will show you:

  1. The trick
  2. How to do it in Spark
  3. Ways to use it

You can follow along by downloading the files for free on my GitHub 🤖!

The trick

The trick: all the 3D objects are always inside the cube, but each face can only “see” one of them. I created a 2D example with two planes: the blue one can only see the Spark logo, and the red one, the torus.

Two things must be done to get this result. For a given plane, we first add what we would see if we could look through it: the background (BG). Then, we add the shape of the object we want to see through the face. The process looks like this:

Figure 1. The magic

It is the same operation with a cube! Instead of having 2 planes, you have 6, one for each face.

The steps

1. Creating the assets

The first step is pretty straightforward: we have to create… a cube. The default one in Spark doesn’t work, because our cube needs a different material on every face. While we’re at it, we’ll also create the cube’s frame. If you don’t know how to do this, you can use Blender, a free 3D creation software, and follow the instructions below. You can also download the files.

2. Getting the background

As explained before, the “background” of every face is needed. To do this, we need to have a basic understanding of how the different coordinate systems in our scene work and how our GPU knows what to render on screen.


When an object is modeled in a creation software, it has its own coordinate system, relative to its local origin (1). To know its position relative to the other objects in the scene, a transformation matrix is applied to each object, bringing every coordinate system into world space (2). Other matrices (3 – 4) are then applied to determine which part of the scene our virtual camera can see. For example, an object hidden behind another should not be rendered and displayed on the screen. The same is true for an object outside the camera’s field of view.
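The chain of transforms described above can be sketched with a few matrices. This is a minimal numpy illustration, not Spark AR code; the matrix values (field of view, object position, a fixed camera at the origin) are made up for the example.

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """OpenGL-style perspective projection matrix (camera -> clip space)."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

# A vertex in the object's local coordinate system (1), homogeneous coords.
local_pos = np.array([0.5, 0.5, 0.0, 1.0])

# (2) model matrix: places the object in world space (here, pushed 3 units
# in front of the camera).
model = np.eye(4)
model[:3, 3] = [0.0, 0.0, -3.0]

# (3) view matrix: world space -> camera space (identity camera at origin).
view = np.eye(4)

# (4) projection matrix: camera space -> clip space.
proj = perspective(np.radians(60), 16 / 9, 0.1, 100.0)

clip_pos = proj @ view @ model @ local_pos
print(clip_pos)  # the w component now encodes the perspective depth
```

Note how the w component of `clip_pos` ends up equal to the distance from the camera; that is exactly what the perspective divide in the next step uses.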

We end up with the scene coordinates in clip space, ready to be rendered on screen – the GPU handles the viewport transform (5) itself. In our case, we also need to do this step manually, because we need the coordinates of the plane in screen space. By using the plane’s position in screen space as UVs and sampling the camera texture with them, we get the background.

Luckily, there is a simple way to do this. The core of the viewport transform is the perspective divide: dividing the x and y coordinates by w, the perspective component. However, since the result is in the [-1, 1] range and UV space is in the [0, 1] range, we also need to remap those coordinates.

    \[screenSpaceUV = \frac{clipCoord.xy}{clipCoord.w}\cdot 0.5+0.5\]

In Spark, we also need to multiply the y component by -1 because of the way the UVs are handled.
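Putting the formula and the y-flip together, the whole remap fits in one small function. Again a numpy sketch for illustration, not a Spark patch; the `flip_y` default mirrors Spark’s UV handling as described above.

```python
import numpy as np

def clip_to_screen_uv(clip_pos, flip_y=True):
    """Perspective divide, then remap from NDC [-1, 1] to UV [0, 1].

    flip_y mirrors the vertical axis, as Spark requires for its UVs.
    """
    ndc = clip_pos[:2] / clip_pos[3]   # divide x and y by w
    if flip_y:
        ndc[1] = -ndc[1]               # Spark's UV y axis is inverted
    return ndc * 0.5 + 0.5             # remap [-1, 1] -> [0, 1]

# A point at the exact center of the view maps to UV (0.5, 0.5):
print(clip_to_screen_uv(np.array([0.0, 0.0, 2.0, 5.0])))  # [0.5 0.5]
```

In the patch editor, this is just a divide, a multiply by (0.5, -0.5), and an add of 0.5.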

As an example, I sampled the red channel of a texture with the screen-space UVs of a spinning plane, to give you a grasp of how it works.

Couldn't we put a transparent material instead?
No, because we would see all the objects in the cube!

3. Setting up the scene

It is now time to add to the scene every object we want to see inside our invisible cube. I used basic meshes from the AR Library with plain color materials, but you can use anything you want! Scale and rotate them as needed, but they all have to fit inside the cube. Place each of them in a separate null group with an ambient light, as this will let us use a render pass per object. Try to name everything appropriately, because it can get messy quickly.

Then, add the impossible cube and its frame to the same null group, so they move together. Add an ambient light to the frame object only.

At this point, you have a choice to make. As of right now (v107), Spark only supports a maximum of 4 directional lights in a scene. Those are the ones that make proper object shading possible. A quick fix is to put 4 objects in the cube instead of 6 and reuse some of them for multiple faces. Another option is to use PBR materials with an environment light on each face, without any directional light at all. This is the technique I opted for, but the details are a little less visible.

4. Making the impossible

We’re now ready to perform every step from Figure 1. First, we sample the camera texture (the BG receiver patch) with the screen-space UVs to get the background of the face, as seen in step 3. We then use a scene render pass with the null group of one object (in this case, the torus). This gives us a texture containing nothing but the torus. Finally, sampling it with the screen-space UVs again and mixing both with a binary mask of the torus’s shape gives us what we want! The max3 patch is simply two max patches cascaded.
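The compositing logic of this step can be written out with numpy arrays standing in for textures. This is only a sketch of the patch graph’s math: the grey “camera feed” and the red “torus” values are invented for the example, and the per-channel max mirrors the cascaded max patches.

```python
import numpy as np

# Tiny stand-in "textures" (H x W x 3 arrays).
H, W = 4, 4
background = np.full((H, W, 3), 0.8)        # plain grey camera feed
object_pass = np.zeros((H, W, 3))           # render pass: torus only,
object_pass[1:3, 1:3] = [1.0, 0.2, 0.2]     # black everywhere else

# Binary mask of the object's shape: like the max3 patch, take
# max(max(r, g), b) per pixel, then binarize.
mask = np.maximum(np.maximum(object_pass[..., 0], object_pass[..., 1]),
                  object_pass[..., 2])
mask = (mask > 0).astype(float)[..., None]  # add a channel axis

# Mix: show the object where the mask is 1, the background elsewhere.
face_texture = mask * object_pass + (1 - mask) * background
print(face_texture[2, 2], face_texture[0, 0])  # object pixel vs. background
```

The mix line is exactly what Spark’s mix patch computes, with the mask as the alpha input.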

Since we need to do this for each face of our cube, I recommend putting these patches in groups. There is still one last problem we need to fix.

Can you spot the problem?

Some edges of the frame are missing! This is totally normal: since we’re using the camera texture as the background, there is no frame in it.

5. The final adjustments

The only remaining part is to add the cube’s frame to the background used by each face. As in step 4, mixing the camera texture and the cube’s frame with a binary mask gets the job done.

6. The result

After completing all the steps, you should get something like this:

As you can see, I used a plane tracker to create a bit more interaction between the cube and the user! I really like the effect it creates but, frankly, I’m not a big fan of the tracker. I don’t know if it’s because of my phone (an iPhone SE), but I’ve always found it difficult to use and unreliable. Anyway, we managed to create the impossible cube in Spark, and I am pretty happy with the result 🙂

What’s next?

While it might be hard to use this exact non-Euclidean cube in your project, we learned useful techniques for making a wide variety of interesting effects. Why not spin an impossible cube around a user’s face and display different face meshes inside? You could use screen-space UVs to create portals, or even other types of optical illusions. You now have everything you need to do it.

That’s it. You’ve reached the end of my first article, and I hope you enjoyed it. Any feedback would be much appreciated. Consider following me on Instagram to stay updated about the next ones.
