Developers
This page provides an in-depth explanation of the plug-in for developers.
On this page we will dig into the inner behaviour of the plug-in. We will discuss how the plug-in is built up from different classes and how it handles the events related to nodes, meshes, materials, etc. This information is mainly useful for developers who want to extend the plug-in with more features.
Let us start with the structure of the plug-in. As mentioned before, the classes of the plug-in are placed in the org.apertusvr.render package. Open it in Android Studio and see what you will find:
As you can see, there are plenty of classes in the plug-in. We can divide them into three groups (more or less...):
The apeFilamentRenderPlugin class, which implements the actual plug-in.
Loader classes, which are responsible for loading resources (e.g. meshes, textures); these work closely with the third group. The classes in this group are:
apeFilaIblLoader
apeFilaMeshLoader
apeFilaMtlLoader
apeFilaTextureLoader
Entity classes, which are needed because Filament basically provides a low-level API to its features. This means we have to implement these classes to suit our own needs. The classes in this group are:
apeFilaIbl
apeFilaLight
apeFilaMesh and its descendants: apeFilaPlaneMesh, apeFilaConeMesh, ...
apeFilaMeshClone
apeFilaTransform
There are also two additional classes. One of them is the apeCameraController class, which, as its name suggests, is responsible for controlling the camera according to user input. The second is the class called apePhong2Pbr. This class encompasses features for converting values related to Phong shading (ambient, diffuse, specular, etc.) to PBR values. This is done in a rather arbitrary way (as there is no exact solution for the given problem), so we will not discuss its details.
The following class diagram may help you get a better insight into the plug-in structure:
In the following sections we will go through how this hierarchy allows the plug-in to connect the Filament API to ApertusVR. Before diving in, we highly recommend reading the README.md file in Filament's GitHub repository, which gives you an introduction to the API. Even though we will discuss some of the API features along the way, it is also worth checking the Android sample files.
This section shows how the plug-in handles node events and how it manages them using the Filament API.
While the ApertusVR API defines nodes as objects which have position, orientation, and scale, Filament uses transform entities, which represent coordinate systems with 4x4 model matrices.
Filament's Android API represents entities with a raw int variable annotated with @Entity. There are several entity types (e.g. renderables, transforms, ...), but all of them are accessible through this int variable, as it serves as an ID. This means that for each of them we first need to create an entity ID in the entity manager:
@Entity int entity = EntityManager.get().create();
This poses several challenges. First, note that ApertusVR does not send us a single combined transform event with position, orientation and scale, but sends them separately. Processing these events involves converting the position vector (apeVector3), the orientation quaternion (apeQuaternion) and the scale vector into a 4x4 matrix. This means that handling these events one by one is not very efficient.
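To make the conversion concrete, a minimal sketch of composing such a matrix is shown below. The helper name and the column-major float-array layout are our own illustration, not the plug-in's actual code:

```java
// Illustrative helper (not the plug-in's actual code): composes
// translation * rotation * scale into a column-major 4x4 matrix,
// which is the layout Filament's TransformManager expects.
static float[] composeTransform(float px, float py, float pz,           // position
                                float qx, float qy, float qz, float qw, // orientation quaternion (normalized)
                                float sx, float sy, float sz) {          // scale
    float xx = qx * qx, yy = qy * qy, zz = qz * qz;
    float xy = qx * qy, xz = qx * qz, yz = qy * qz;
    float wx = qw * qx, wy = qw * qy, wz = qw * qz;

    float[] m = new float[16];
    // column 0 (first rotation column, scaled by sx)
    m[0] = (1f - 2f * (yy + zz)) * sx;
    m[1] = (2f * (xy + wz)) * sx;
    m[2] = (2f * (xz - wy)) * sx;
    // column 1 (scaled by sy)
    m[4] = (2f * (xy - wz)) * sy;
    m[5] = (1f - 2f * (xx + zz)) * sy;
    m[6] = (2f * (yz + wx)) * sy;
    // column 2 (scaled by sz)
    m[8]  = (2f * (xz + wy)) * sz;
    m[9]  = (2f * (yz - wx)) * sz;
    m[10] = (1f - 2f * (xx + yy)) * sz;
    // column 3 (translation)
    m[12] = px;
    m[13] = py;
    m[14] = pz;
    m[15] = 1f;
    return m;
}
```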
Another problem is that several node features (such as the visibility options) are not supported in Filament the same way they are supported in ApertusVR.
We will discuss how the plug-in solves these issues and implements the full functionality of nodes using Filament transforms.
The apeFilaTransform class wraps (most of) the features of Filament's transform entities in order to connect them to ApertusVR. The class looks like the following code snippet:
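As a rough, hedged outline (reconstructed here from the description below, so the actual names and signatures in the source may differ):

```java
import com.google.android.filament.Engine;
import com.google.android.filament.Entity;
import com.google.android.filament.EntityInstance;
import com.google.android.filament.EntityManager;
import com.google.android.filament.TransformManager;

// Hedged outline of apeFilaTransform; the real class may name things differently.
public class apeFilaTransform {
    @Entity int transformEntity;            // ID handed out by the EntityManager
    @EntityInstance int transformInstance;  // instance ID used by the TransformManager

    // (the basic constructor is sketched further below)

    // Sets position, orientation and scale at once, as a single composed 4x4 local matrix.
    public void setTransform(float[] localTransform, TransformManager tcm) {
        tcm.setTransform(transformInstance, localTransform);
    }

    // Makes this transform's coordinates relative to the given parent's coordinates.
    public void setParent(apeFilaTransform parent, TransformManager tcm) {
        tcm.setParent(transformInstance, parent.transformInstance);
    }

    // Detaches the transform from its parent (instance 0 stands for "no parent").
    public void detachFromParent(TransformManager tcm) {
        tcm.setParent(transformInstance, 0);
    }

    // Destroys the underlying Filament entity, which is important on Android to avoid leaks.
    public void destroy(Engine engine) {
        engine.destroyEntity(transformEntity);
        EntityManager.get().destroy(transformEntity);
    }
}
```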
What you have to see here is that it resembles ApertusVR nodes in the sense that it has public functions to:
Set the position, orientation and scale, but all at once
Set a parent transform, which acts just like in the case of nodes, so the transform's coordinates are defined relative to its parent's coordinates
Detach from the parent transform
Destroy the Filament entity, which is important on Android
The implementation of these functions is trivial, so we will not cover them here.
There are also two other functions, but we will not bother ourselves with them here. However, the two member variables are more important. As we need to identify our transform entity, there is an @Entity int variable and an @EntityInstance int variable. The first makes it possible for the EntityManager to identify it as an entity; the second is needed by Filament's TransformManager, which manages the transform matrices and the hierarchy of transforms. The basic constructor creates them as follows:
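Based on the description, the constructor boils down to something like this hedged sketch:

```java
// Hedged sketch: create an entity ID in the EntityManager, then register a
// transform component for it so the TransformManager can manage its matrix
// and its place in the transform hierarchy.
apeFilaTransform(TransformManager tcm) {
    transformEntity = EntityManager.get().create();
    transformInstance = tcm.create(transformEntity);
}
```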
Just like nodes, transforms can be ordered into a hierarchy. The difference, however, is that not every transform has to be a separate entity. This means that geometries have their own transform, and thus they can be rendered in the scene without having a parent transform, unlike nodes, where every node stands on its own and geometries can only be rendered if they are attached to a node.
In the plug-in, the "direct" transform of the geometry is used just to adjust the unit-scale of the object:
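As a hedged sketch of the idea, using the names from the explanation below (getUnitScale() and the renderable field are assumed names):

```java
// Hedged sketch of the unit-scale adjustment. fileGeometry (apeGeometryFile),
// filaMesh (apeFilaMesh) and tcm (TransformManager) follow the explanation below.
void applyUnitScale(apeGeometryFile fileGeometry, apeFilaMesh filaMesh, TransformManager tcm) {
    float s = fileGeometry.getUnitScale();
    // Ask for the transform instance attached to the mesh's renderable entity...
    @EntityInstance int instance = tcm.getInstance(filaMesh.renderable);
    // ...and set a uniform scale matrix (column-major 4x4) as its local transform.
    tcm.setTransform(instance, new float[] {
            s,  0f, 0f, 0f,
            0f, s,  0f, 0f,
            0f, 0f, s,  0f,
            0f, 0f, 0f, 1f });
}
```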
What you should see here is that after requesting the unit scale from fileGeometry (apeGeometryFile), the plug-in sets the "direct transform" of the corresponding filaMesh (apeFilaMesh). The function tcm.getInstance(...) asks for the EntityInstance (the transform) of the filaMesh's renderable entity, and then simply sets a unit-scale matrix on it.
In ApertusVR, users have the possibility to hide an individual node, or to hide a node together with all of its children. In code it looks like:
The following image shows how these two features work:
In the plug-in this was implemented by changing the transform matrix of the transform in the hierarchy. Let us assume that node is an apeNode. Then:
1. If node.isVisible() == false, but node.getChildrenVisibility() == true, then the node's transform matrix is the identity, which means it has no effect on its children.
2. If node.isVisible() == true, but node.getChildrenVisibility() == false, then the node's transform matrix is zero, which means it squeezes all of its children, so they simply disappear from the scene.
3. If node.isVisible() == false and node.getChildrenVisibility() == false, then see case 2.
4. If node.isVisible() == true and node.getChildrenVisibility() == true, then the transform matrix is calculated from the node's position, scale and orientation.
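A hedged sketch of this case analysis (the apeNode getter names follow the list above; composeFromNode stands for a hypothetical helper that builds the matrix from position, orientation and scale):

```java
// Hedged sketch: choose the node's local 4x4 matrix based on its two visibility flags.
static float[] visibilityAwareTransform(apeNode node) {
    if (!node.getChildrenVisibility()) {
        // Cases 2 and 3: a zero matrix squeezes the whole subtree, so it disappears.
        return new float[16]; // all zeros
    }
    if (!node.isVisible()) {
        // Case 1: identity, so the node has no effect on its children.
        return new float[] {
                1f, 0f, 0f, 0f,
                0f, 1f, 0f, 0f,
                0f, 0f, 1f, 0f,
                0f, 0f, 0f, 1f };
    }
    // Case 4: the regular case, composed from position, orientation and scale
    // (composeFromNode is a hypothetical helper, e.g. built on the sketch shown earlier).
    return composeFromNode(node);
}
```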
This section discusses how meshes are rendered on the screen using the Filament API.
We already dealt with entities in the previous section, where we said that each entity in Filament is identified by an integer value. Such an entity can also become a renderable entity if we build a renderable for it with Filament's RenderableManager.Builder class. This is what the apeFilaMeshLoader class does.
So when ApertusVR sends us an event that someone created a new geometry (e.g. FileGeometry), we only create an empty object (apeFilaMesh); later, when all the details about the geometry are available, we build the renderable so it can be used in our scene.
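For reference, building a renderable for an entity with Filament's Java API looks roughly like the hedged example below; the buffers, index count and material instance are placeholders the loader fills in from the loaded geometry, and the bounding box is a dummy unit box:

```java
// Hedged example: turn a fresh entity into a renderable with a single primitive.
static int buildRenderable(Engine engine, Scene scene,
                           VertexBuffer vertexBuffer, IndexBuffer indexBuffer,
                           int indexCount, MaterialInstance materialInstance) {
    @Entity int renderable = EntityManager.get().create();
    new RenderableManager.Builder(1)                        // one primitive / submesh
            .boundingBox(new Box(0f, 0f, 0f, 1f, 1f, 1f))   // center and half-extents (placeholder)
            .geometry(0, RenderableManager.PrimitiveType.TRIANGLES,
                      vertexBuffer, indexBuffer, 0, indexCount)
            .material(0, materialInstance)
            .build(engine, renderable);
    scene.addEntity(renderable);                            // make it appear in the scene
    return renderable;
}
```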
As we mentioned, this class implements a wrapper functionality similar to apeFilaTransform. To get a grasp of what the class looks like, the following code block shows a stripped-down version of the original class:
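As a hedged reconstruction based on the member descriptions below (the actual names and types may differ; Part stands for the per-submesh record):

```java
// Hedged outline of apeFilaMesh; the real class may name or type things differently.
public class apeFilaMesh {
    @Entity int renderable;            // the renderable entity ID
    VertexBuffer vertexBuffer;         // kept only to make cloning easier
    IndexBuffer indexBuffer;           // kept only to make cloning easier
    List<Part> parts;                  // per-submesh data (offsets, counts, material name)
    List<String> materialNames;        // material instance names defined in the mesh
    apeFilaTransform parentTransform;  // remembered if PARENT_NODE arrives before the mesh is built

    // "Empty" constructor: used when the details of the mesh are not known yet.
    apeFilaMesh() {
        renderable = 0;
        vertexBuffer = null;
        indexBuffer = null;
        parts = null;
        materialNames = null;
        parentTransform = null;
    }

    // Constructor used when everything is already available.
    apeFilaMesh(@Entity int renderable, VertexBuffer vertexBuffer, IndexBuffer indexBuffer,
                List<Part> parts, List<String> materialNames) {
        this.renderable = renderable;
        this.vertexBuffer = vertexBuffer;
        this.indexBuffer = indexBuffer;
        this.parts = parts;
        this.materialNames = materialNames;
    }
}
```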
So basically there are two constructors. The first one actually does nothing but put null into every member variable. As mentioned, this is exactly the functionality we need when we do not yet know the details of the mesh. There is also another constructor, which works as you would expect at first glance.
The member variables need some explanation:
We already talked about the renderable ID; this is the first variable.
There is a vertex and an index buffer in the class. For normal functionality we would not need to store them, because when we build a renderable, Filament stores them for itself. However, keeping a copy makes it easier to clone the mesh (see apeFilaMeshClone).
Meshes can be built up from several submeshes. We store some data about each individual mesh part, again just to make cloning the mesh easier.
We also store a list of the material instance names defined in the mesh. We will discuss materials later.
We also have a variable called parentTransform. This is only needed if ApertusVR sends the PARENT_NODE event before it sends the details about the mesh, in which case we do not yet have a built renderable to set the transform on.
When ApertusVR sends the details about the geometry (e.g. vertices, indices, materials, ...), the plug-in's task is to create the proper renderable entity in Filament. This is where the apeFilaMeshLoader class comes in. It is actually a modified version of MeshLoader.kt, which you can find among the Filament samples.
Filament has its own mesh file format, called filamesh. This means we have to use it in our plug-in as our main geometry file format, which implies that to fully understand how the mesh loader works you need a solid background in the filamesh file format. You can find the details at this link: https://github.com/google/filament/blob/main/tools/filamesh/README.md.
Filament also supports the glTF file format. Currently the plug-in does not implement support for it, but it is an open issue.
Again, let us first take a glance at the class, which looks like the following code block:
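As a hedged outline (the actual loader is adapted from Filament's sample MeshLoader, so the parameters and internals may differ):

```java
// Hedged outline of apeFilaMeshLoader; parameters and internals may differ from the source.
public final class apeFilaMeshLoader {

    // Reads a filamesh buffer, builds the vertex/index buffers and the renderable,
    // and fills the given apeFilaMesh with the result.
    public static void loadMesh(Buffer filameshBuffer,
                                Map<String, MaterialInstance> materials,
                                Engine engine, Scene scene, apeFilaMesh dest) {
        // parse the filamesh header, create the buffers, build the renderable ...
    }

    // Releases the buffers and destroys the renderable entity of a previously loaded mesh.
    public static void destroyMesh(Engine engine, apeFilaMesh mesh) {
        // engine.destroyVertexBuffer(...), engine.destroyIndexBuffer(...), engine.destroyEntity(...)
    }
}
```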