Developers

This page provides an in-depth explanation of the plug-in for developers.

On this page we dig into the inner behaviour of the plug-in: how it is built up from different classes, and how it handles events related to nodes, meshes, materials, etc. This information is mainly useful for developers who want to extend the plug-in with more features.

Structure

Let us start with the structure of the plug-in. As mentioned earlier, the classes of the plug-in are placed in the org.apertusvr.render package. Open this package in Android Studio and see what you will find:

org.apertusvr.render

As you can see, the plug-in contains plenty of classes. We can divide them into three groups (more or less):

  1. The apeFilamentRenderPlugin class, which implements the actual plug-in.

  2. Loader classes, which are responsible for loading resources (e.g. meshes, textures); these are closely related to the 3rd group. The classes in this group are:

    1. apeFilaIblLoader

    2. apeFilaMeshLoader

    3. apeFilaMtlLoader

    4. apeFilaTextureLoader

  3. Entity classes, which are needed because Filament basically provides a low-level API to its features. This means we have to implement these classes ourselves. Classes in this group are:

    1. apeFilaIbl

    2. apeFilaLight

    3. apeFilaMesh and its descendants: apeFilaPlaneMesh, apeFilaConeMesh, ...

    4. apeFilaMeshClone

    5. apeFilaTransform

There are also two additional classes. One of them is the apeCameraController class, which, as its name suggests, is responsible for controlling the camera according to user input. The second is the class called apePhong2Pbr. This class encompasses features for converting values related to Phong shading (ambient, diffuse, specular, etc.) to PBR values. This is done in a rather arbitrary way (as there is no exact solution for the given problem), so we will not discuss its details.

The following class diagram may help you get a better insight into the plug-in's structure:

class hierarchy in the plug-in

In the following sections we will go through how this hierarchy connects the Filament API to ApertusVR. Before diving in, we highly recommend reading the README.md file in Filament's GitHub repository, which gives an introduction to the API. Even though we discuss some of the API features along the way, it is also worth checking the Android sample files.

Nodes

This section shows how the plug-in handles node events and manages them using the Filament API.

Nodes vs transforms

While the ApertusVR API defines nodes as objects which have a position, an orientation, and a scale, Filament uses transform entities, which represent coordinate systems with 4x4 model matrices.

Filament's Android API represents entities with a raw int variable annotated with @Entity. There are several entity types (e.g. renderables, transforms, ...), but all of them are accessible through this int variable, as it serves as an ID. This means that for each of them we first need to create an entity ID in the entity manager: @Entity int entity = EntityManager.get().create();

This poses several challenges. First, note that ApertusVR does not send us combined transform events carrying position, orientation and scale together, but sends them separately. Processing these events involves transforming the position vector (apeVector3), the orientation quaternion (apeQuaternion) and the scale vector into a 4x4 matrix, so handling these events separately is not very efficient.
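Combining the three separately-arriving values into one model matrix is a standard T·R·S composition. The sketch below is illustrative only (plain float arrays stand in for apeVector3/apeQuaternion, and the class and method names are made up for this example); it produces the column-major float[16] layout that Filament's TransformManager expects:

```java
// Illustrative sketch: compose a column-major 4x4 model matrix (T * R * S)
// from a position, a unit quaternion, and a scale vector.
final class TransformCompose {
    /** pos = {x, y, z}; q = {x, y, z, w}, a unit quaternion; scale = {sx, sy, sz} */
    static float[] toModelMatrix(float[] pos, float[] q, float[] scale) {
        float x = q[0], y = q[1], z = q[2], w = q[3];
        // 3x3 rotation matrix entries derived from the unit quaternion
        float r00 = 1f - 2f * (y * y + z * z), r01 = 2f * (x * y - z * w), r02 = 2f * (x * z + y * w);
        float r10 = 2f * (x * y + z * w), r11 = 1f - 2f * (x * x + z * z), r12 = 2f * (y * z - x * w);
        float r20 = 2f * (x * z - y * w), r21 = 2f * (y * z + x * w), r22 = 1f - 2f * (x * x + y * y);
        // column-major 4x4 = T * R * S (the scale multiplies the rotation columns)
        return new float[] {
                r00 * scale[0], r10 * scale[0], r20 * scale[0], 0f,   // column 0
                r01 * scale[1], r11 * scale[1], r21 * scale[1], 0f,   // column 1
                r02 * scale[2], r12 * scale[2], r22 * scale[2], 0f,   // column 2
                pos[0],         pos[1],         pos[2],         1f }; // column 3
    }
}
```

The resulting array can be handed directly to TransformManager.setTransform(...), which takes a float[16] in this layout.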

Another problem is that several node features (like the visibility options) are not supported in Filament the way they are in ApertusVR.

We will discuss how the plug-in solves these issues and implements the full functionality of nodes using Filament transforms.

The apeFilaTransform class

The apeFilaTransform class encompasses (most of) the features of transform entities in Filament and connects them to ApertusVR. The class looks like the following code snippet:

public class apeFilaTransform {
    public apeFilaTransform(TransformManager tcm) {/*...*/}
    public apeFilaTransform(int entity, int transform) {/*...*/}

    public void setTransform(apeMatrix4 transformMx, TransformManager tcm) {/*...*/}
    public void setParent(apeFilaTransform other, TransformManager tcm) {/*...*/}
    public void detach(TransformManager tcm) {/*...*/}
    public void destroy(Engine engine) {/*...*/}

    @Override
    public boolean equals(Object obj) {/*...*/}
    @Override
    public int hashCode() {/*...*/}

    @Entity int entity;
    @EntityInstance int transform;
}

What you should see here is that it resembles ApertusVR nodes in the sense that it has public functions to:

  • Set the position, orientation and scale, but all at once

  • Set a parent transform, which acts just like in the case of nodes: the transform's coordinates are defined relative to its parent's coordinates.

  • Detach from parent transform

  • Destroy the Filament entity, which is important on Android

The implementation of these functions is trivial, so we will not cover them here.

There are also two other functions, but we will not bother with them here. The two member variables, however, are more important. As we need to identify our transform entity, there is an @Entity int variable and an @EntityInstance int variable. The first makes the object identifiable as an entity by the EntityManager; the second is needed by Filament's TransformManager, which manages the transform matrices and the hierarchy of transforms. The basic constructor creates them as follows:

public apeFilaTransform(TransformManager tcm) {
    entity = EntityManager.get().create();
    transform = tcm.create(entity);
}
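Inside setTransform, the incoming apeMatrix4 is handed over to tcm.setTransform(...), which takes a flat float[16] in column-major order. Assuming the ApertusVR matrix stores its elements row-major (an assumption, not verified from the source), the conversion is a simple transpose, sketched here with a hypothetical helper:

```java
// Hypothetical helper: convert a row-major 4x4 float array into the
// column-major float[16] layout that Filament's TransformManager expects.
// Whether apeMatrix4 actually stores its elements row-major is an assumption.
final class MatrixLayout {
    static float[] toColumnMajor(float[] rowMajor) {
        float[] out = new float[16];
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                // element (row, col) of the input lands at (col, row) of the output
                out[col * 4 + row] = rowMajor[row * 4 + col];
            }
        }
        return out;
    }
}
```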

Hierarchy of transforms

Just like nodes, transforms can be organized into a hierarchy. However, the difference is that not every transform has to be a separate entity. Geometries have their own transform, so they can be rendered on the scene without having a parent transform; this is unlike nodes, where every node stands on its own, and geometries can only be rendered if they are attached to a node.

In the plug-in, the "direct" transform of the geometry is used just to adjust the unit-scale of the object:

/* set the unit scale as the geometry's transform */
TransformManager tcm = mEngine.getTransformManager();
float s = fileGeometry.getUnitScale();
tcm.setTransform(tcm.getInstance(filaMesh.renderable),
        new float[] {
                s,  0f, 0f, 0f,
                0f, s,  0f, 0f,
                0f, 0f, s,  0f,
                0f, 0f, 0f, 1f
        });

What you should see here is that after requesting the unit scale from fileGeometry (an apeGeometryFile), the plug-in sets the "direct transform" of the correlated filaMesh (an apeFilaMesh). The tcm.getInstance(...) call retrieves the EntityInstance (the transform) of the filaMesh's renderable entity, and then a uniform-scale matrix is simply set for it.

Visibility

In ApertusVR, users have the possibility to hide an individual node, or to hide the node together with all of its children. In code it looks like this:

class INode
{
protected:
    virtual ~INode() {};

public:
    /* ... */
    virtual bool isVisible() = 0;
    virtual bool getChildrenVisibility() = 0;
    virtual void setVisible(bool visible) = 0;
    virtual void setChildrenVisibility(bool visible) = 0;
    /* ... */
};

The following image shows how these two features work:

visibility options

In the plug-in this is implemented by changing the node's transform matrix in the hierarchy. Let us assume that node is an apeNode. Then:

  1. If node.isVisible() == false but node.getChildrenVisibility() == true, then the node's transform matrix is the identity, which means it has no effect on its children.

  2. If node.isVisible() == true but node.getChildrenVisibility() == false, then the node's transform matrix is the zero matrix, which squeezes all of its children to a point, so they simply disappear from the scene.

  3. If node.isVisible() == false and node.getChildrenVisibility() == false, then see case 2.

  4. If node.isVisible() == true and node.getChildrenVisibility() == true, then the transform matrix is calculated from the node's position, scale and orientation.
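The four cases above boil down to a small selection function. A minimal sketch (the class and method names are hypothetical, and model stands for the matrix composed from the node's position, orientation and scale):

```java
// Sketch of the visibility rules: choose the node's local transform matrix
// (column-major float[16]) from its two visibility flags.
final class VisibilityMatrix {
    static float[] localMatrix(boolean visible, boolean childrenVisible, float[] model) {
        if (!childrenVisible) {
            return new float[16];            // zero matrix: cases 2 and 3, everything collapses
        }
        if (!visible) {
            float[] id = new float[16];      // identity: case 1, children are unaffected
            id[0] = id[5] = id[10] = id[15] = 1f;
            return id;
        }
        return model;                        // case 4: the normal transform
    }
}
```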

Meshes

This section discusses how the meshes are rendered on the screen using the Filament API.

Renderable entities

We already dealt with entities in the previous section, where we said that each entity in Filament is identified by an integer value. Such an entity can also become a renderable entity if we build a renderable for it with Filament's RenderableManager.Builder class. This is what the apeFilaMeshLoader class does.

So, when ApertusVR sends us an event that someone created a new geometry (e.g. a FileGeometry), we only create an empty object (an apeFilaMesh); later, when all the details about the geometry are available, we build the renderable to use it on our scene.

The apeFilaMesh class

As mentioned, this class implements a wrapper functionality similar to apeFilaTransform. To get a grasp of what the class looks like, the following code block shows a stripped version of the original class:

apeFilaMesh.java
package org.apertusvr.render;

import /*...*/

class apeFilaMesh {
    apeFilaMesh() {/*...*/}
    apeFilaMesh(int renderable, IndexBuffer indexBuffer,
                VertexBuffer vertexBuffer, Box box) {/*...*/}

    public void setParentTransform(apeFilaTransform transform, TransformManager tcm) {
        tcm.setParent(tcm.getInstance(renderable), transform.transform);
    }

    @Entity int renderable;
    IndexBuffer indexBuffer;
    VertexBuffer vertexBuffer;
    List<Part> parts;
    List<String> definedMaterials;
    Box aabb;
    apeFilaTransform parentTransform;
}

So basically there are two constructors. The first does nothing but put null into every member variable; as mentioned, this is exactly the functionality we need when we do not yet know the details of the mesh. There is also another constructor, which works as you would expect at first glance.

The member variables need some explanation:

  • We already talked about the renderable ID; this is the first variable.

  • There is a vertex and an index buffer in the class. For normal functionality we would not need to store them, because when we build a renderable, Filament stores them for itself. However, keeping a copy makes it easier to clone the mesh (see apeFilaMeshClone).

  • Meshes can be built up from several submeshes. We store some data about each individual mesh part, again just to make it easier to clone the mesh.

  • We also store a list of the material instance names defined in the mesh. We will discuss materials later.

  • We also have a variable called parentTransform. This is only needed if ApertusVR sends the PARENT_NODE event before it sends the details of the mesh, in which case we do not yet have a built renderable to set the transform on.
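The parentTransform bookkeeping can be sketched as a deferred-attach pattern. The names below (PendingMesh, onParentNode, onMeshBuilt, and the String stand-in for apeFilaTransform) are illustrative only, not the plug-in's actual API:

```java
// Sketch of the deferred-parenting idea: if the PARENT_NODE event arrives
// before the renderable is built, remember the parent and apply it once
// the mesh details come in.
final class PendingMesh {
    boolean built = false;
    String pendingParent = null;   // stands in for a saved apeFilaTransform
    String appliedParent = null;   // stands in for a tcm.setParent(...) call

    void onParentNode(String parent) {
        if (built) appliedParent = parent;   // renderable exists: attach immediately
        else pendingParent = parent;         // remember the parent for later
    }

    void onMeshBuilt() {
        built = true;
        if (pendingParent != null) {         // apply the deferred parent, if any
            appliedParent = pendingParent;
            pendingParent = null;
        }
    }
}
```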

Mesh loading

When ApertusVR sends the details of the geometry (e.g. vertices, indices, materials, ...), the plug-in's task is to create the proper renderable entity in Filament. This is where the apeFilaMeshLoader class comes in. It is actually a modified version of the MeshLoader.kt that you can find in the Filament samples.

Filament has its own mesh file format, called filamesh. We use this in our plug-in as our main geometry file format, which means that to fully understand how the mesh loader works you need a solid background in the filamesh file format. You can find the details at this link: https://github.com/google/filament/blob/main/tools/filamesh/README.md.

Filament also supports the glTF file format. Currently the plug-in does not implement this feature, but it is an open issue.

Again, let us first take a glance at the class, which looks like the following code block:

final class apeFilaMeshLoader {
    /* const values */

    static void destroyMesh(Engine engine, apeFilaMesh mesh) {/*...*/}

    static void loadMesh(InputStream input, String name,
                         Map<String, MaterialInstance> materials,
                         Engine engine, apeFilaMesh target,
                         String defaultMatName, boolean shadow) {
        Header header;
        try {
            /* read header, vertex buffer, index buffer data */
            /* create vertex and index buffer in the GPU */
            @Entity int renderableEntity = createRenderable(
                    name, engine, header.aabb, indexBuffer,
                    vertexBuffer, parts, definedMaterials,
                    materials, defaultMatName, shadow);
            /* put the values into the target */
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /* This overload makes it simpler for the user to load meshes with a single material */
    static void loadMesh(InputStream input, String name,
                         MaterialInstance material,
                         Engine engine, apeFilaMesh target,
                         String matName, boolean shadow) {
        loadMesh(input, name, Collections.singletonMap(matName, material),
                 engine, target, matName, shadow);
    }

    /* this function creates a clone of an existing source mesh */
    static void cloneMesh(String sourceMeshName, apeFilaMesh sourceMesh, apeFilaMeshClone target,
                          Map<String, MaterialInstance> materials,
                          Engine engine, String defaultMaterialName, boolean shadow) {
        /*...*/
    }

    static void destroyClone(Engine engine, apeFilaMeshClone meshClone) {
        /*...*/
    }

    /*
     * functions for reading from binary files
     * ...
     */

    /*
     * functions which read the header, parts, vertex and index buffers
     * ...
     */

    private static int createRenderable(
            String name,
            Engine engine,
            Box aabb,
            IndexBuffer indexBuffer,
            VertexBuffer vertexBuffer,
            List<Part> parts,
            List<String> definedMaterials,
            Map<String, MaterialInstance> materials,
            String defaultMatName,
            boolean shadowEnabled) {
        int size = /* ... */;
        RenderableManager.Builder builder = new RenderableManager.Builder(size);
        /* ... */
        for (int i = 0; i < size; i++) {
            /* set each subgeometry in the builder */
            String matS = definedMaterials.get((int) parts.get(i).materialID);
            MaterialInstance material = materials.get(name + matS);
            if (material != null) {
                builder.material(i, material);
            } else {
                builder.material(i, Objects.requireNonNull(materials.get(defaultMatName)));
            }
            builder.castShadows(shadowEnabled);
            builder.receiveShadows(shadowEnabled);
        }
        @Entity int result = EntityManager.get().create();
        builder.build(engine, result);
        return result;
    }

    /*
     * Header and Part classes
     * ...
     */
}
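One detail worth calling out in createRenderable is the per-part material lookup: a part's material instance is looked up under the key name + materialName, and missing entries fall back to the default material. The fallback logic in isolation looks roughly like the sketch below, where String stands in for Filament's MaterialInstance so the example stays self-contained (class and method names are made up for illustration):

```java
import java.util.Map;
import java.util.Objects;

// Sketch of the material fallback in createRenderable: try the
// mesh-qualified material key first, otherwise require the default
// material to be present in the map.
final class MaterialPick {
    static String pick(Map<String, String> materials, String meshName,
                       String partMaterial, String defaultMatName) {
        String m = materials.get(meshName + partMaterial);
        return (m != null) ? m : Objects.requireNonNull(materials.get(defaultMatName));
    }
}
```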

OBJ files

Primitives

Lights

Materials

ApertusVR and Filament materials

Basic materials

Camera