JGLOOm - Loading Every File Format for Every OpenGL Lib*

Half of the scenegraph APIs I quoted aren’t mainly for games. If the main “advantage” of your library is its light weight, you’ll be aiming at a tiny niche: those who need more than just a set of bindings for the OpenGL & OpenGL ES APIs but less than a mid-level framework like LibGDX. This niche is already occupied by jReality and jzy3D. You talk about computer simulation and data visualization, but I don’t see the numerous formats used in scientific visualization in your lists, and unfortunately, as time goes by, fewer of them are supported in major 3D editors like Blender (especially since version 2.50). PLY and STL models can be converted into OBJ, so that shouldn’t be a problem. I see neither U3D nor Collada in your list.

What formats are used?

By computer simulation, I meant primarily on the web. Hence the WebGL focus.

GWT and JS both support reading binary files too. You can create an XHR to a file, set its response type to ArrayBuffer and receive an ArrayBuffer, which is like a container for binary data. You can create a DataView on top of it which behaves like a DirectByteBuffer. For example, there is a GwtDirectBuffer class that I wrote for the GWT backend of SilenceEngine.


public void readBinaryFile(FilePath file, UniCallback<DirectBuffer> onComplete)
{
    // Create a XMLHttpRequest to load the file into a direct buffer
    XMLHttpRequest request = XMLHttpRequest.create();
    request.open("GET", file.getAbsolutePath());

    // Set to read as ArrayBuffer and attach a handler
    request.setResponseType(XMLHttpRequest.ResponseType.ArrayBuffer);
    request.setOnReadyStateChange(xhr -> {
        if (request.getReadyState() == XMLHttpRequest.DONE)
        {
            // Invoke the onComplete handler with a DirectBuffer wrapping the response
            onComplete.invoke(new GwtDirectBuffer(request.getResponseArrayBuffer()));
        }
    });

    // Send the request
    request.send();
}

And now I have this method to read a file as binary data. Note that it is callback based because of JavaScript’s single-threaded nature. Once done, the callback is invoked, and then you can simply read the bytes from the buffer as you wish.
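As a quick usage sketch (the call site below is hypothetical; resolveModelPath() and parseModel() are just placeholders, and it assumes UniCallback is a functional interface so a lambda works):

// Hypothetical call site for the callback-based reader above. resolveModelPath()
// and parseModel() are placeholders, not SilenceEngine API.
FilePath modelFile = resolveModelPath();

readBinaryFile(modelFile, buffer ->
{
    // Invoked once the XHR completes; the DirectBuffer wraps the ArrayBuffer,
    // so the bytes can be consumed here much like a DirectByteBuffer on desktop.
    parseModel(buffer);
});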

:o ???

JSON description -> binary blobs + assets -> NIO -> profit…
|
+-> shaders -> profit…
|
+-> scenegraph -> pure java / component architecture preferably -> profit…

Hey man, I can guide the path, but shine you must… :stuck_out_tongue:

Currently walking the walk and plenty busy creating fundamental tools; that one should be over 3k daily downloads by the end of Sept… rounding out the arsenal, so to speak… more to come…

NIO Buffers (and all the other natives), to my understanding, don’t work with GWT, right?

That’s the part that scared me, but SHC showed you can still use binary formats in GWT!

Typhon looks very interesting. I knew you had your Java app, but you seem to have exploded into crazy works. Nice job!

Thanks for the tips on importing glTF, hopefully that’ll be the first format we support.

PLY, OFF, STL, Mathematica Graphics 3D, VTK, …

Actually, if you separated the geometry storage used during parsing (pure Java, no NIO at this stage) from the creation of meshes and nodes containing the data for OpenGL (typically the NIO buffers), and if you chose a math library widely used by several engines (for example Vecmath, used as-is in Java3D, with one variant used in Xith3D and another in JogAmp’s Ardor3D Continuation), those engines could use your stuff, maybe with a Maven plugin to adapt the imports, and you could use most of their loaders. However, it’s up to you and I understand that you prefer using jAssimp.
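For example, a minimal sketch of that separation with hypothetical class names (nothing here comes from Java3D, Xith3D or Ardor3D): the parser only fills plain Java arrays, and NIO appears solely in the GL-facing step.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;

// Parse-time storage: pure Java, usable by any engine regardless of its math / NIO choices.
final class ParsedGeometry
{
    final float[] positions; // x, y, z triplets
    final float[] normals;
    final int[]   indices;

    ParsedGeometry(float[] positions, float[] normals, int[] indices)
    {
        this.positions = positions;
        this.normals   = normals;
        this.indices   = indices;
    }
}

// GL-facing step: only here do NIO buffers appear.
final class GeometryBuffers
{
    static FloatBuffer toFloatBuffer(float[] data)
    {
        FloatBuffer buffer = ByteBuffer.allocateDirect(data.length * Float.BYTES)
                                       .order(ByteOrder.nativeOrder())
                                       .asFloatBuffer();
        buffer.put(data);
        buffer.flip();
        return buffer;
    }

    static IntBuffer toIntBuffer(int[] data)
    {
        IntBuffer buffer = ByteBuffer.allocateDirect(data.length * Integer.BYTES)
                                     .order(ByteOrder.nativeOrder())
                                     .asIntBuffer();
        buffer.put(data);
        buffer.flip();
        return buffer;
    }
}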

My plan was to have an AiSceneRenderer that simply renders a scene given an AiScene. Then we’d have different variations: LWJGLAiRenderer, WebGLGWTAiRenderer, and JOGLAiRenderer. Since there’s no universal model class, we can easily support lots of formats and loading libraries.
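Roughly, a sketch of what that split could look like (hypothetical method names, not existing JGloom code; AiScene is assumed to be the jassimp scene type):

// Hypothetical shape of the renderer split; nothing here is existing JGloom API.
import jassimp.AiScene; // assuming the jassimp port's scene type

public interface AiSceneRenderer
{
    /** Uploads whatever GL resources the scene needs (buffers, textures, shaders). */
    void prepare(AiScene scene);

    /** Draws the previously prepared scene. */
    void render(AiScene scene);

    /** Releases the GL resources created in prepare(). */
    void delete(AiScene scene);
}

// Each backend then supplies its own implementation against its own bindings:
// class LWJGLAiRenderer implements AiSceneRenderer { ... }
// class JOGLAiRenderer implements AiSceneRenderer { ... }
// class WebGLGWTAiRenderer implements AiSceneRenderer { ... }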

I’m concerned now about Vecmath libraries, as a lot of those libraries use a proprietary vecmath solution…

My advice is to create an interface with the OpenGL functions that the renderer would need, and implement the renderer in the core. The interface with the OpenGL functions would be passed to the renderer from the backends.
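A minimal sketch of that idea, with hypothetical names and only a handful of calls abstracted:

// Hypothetical: the core renderer only sees this narrow interface; each backend
// (LWJGL, JOGL, GWT / WebGL) implements it on top of its own bindings.
interface GLFunctions
{
    int  glGenBuffer(); // simplified here to create a single buffer
    void glBindBuffer(int target, int buffer);
    void glBufferData(int target, java.nio.ByteBuffer data, int usage);
    void glDrawElements(int mode, int count, int type, long indicesOffset);
}

final class CoreRenderer
{
    private final GLFunctions gl;

    // The backend constructs the core renderer and hands over its GLFunctions implementation.
    CoreRenderer(GLFunctions gl)
    {
        this.gl = gl;
    }

    // ... core code only ever calls gl.glBindBuffer(...), gl.glDrawElements(...), etc.
}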

Proprietary? It’s under GPL v2 with the Classpath exception (i.e. non-viral):
https://github.com/gouessej/vecmath/blob/master/LICENSE.txt

What did you mean exactly?

Yep, ArrayBuffer and whatever view matches for the copy.

TyphonRT has 13 years of crazy works behind it, though it’s just not open source for all to see; so as far as crazy works go, what’s visible is the tip of the iceberg… :wink: I just want to get something commercially launched that justifies any ROI before open sourcing TyphonRT. TyphonJS I am open sourcing, and it has been refreshing to get things out for folks to use, but I haven’t publicized anything about it yet.

TyphonJS is also a test of building the infrastructure for the many-repo approach. Even a year ago that approach was only financially possible on Github for open source repos (free). I believe ~6 months ago Github finally changed the pricing structure to unlimited private repos per org / user for a flat fee per user. Even then that doesn’t fit the model I’m developing, as I treat Github organizations as a “component category”; if I want private repos across ~30 organizations for one user, that is ~$210 a month; better than it used to be, though. There is Gitlab (free private repos), but I’m going to evaluate that later.

TyphonJS is spread over ~25 organizations currently, each with repos specific to the category that the organization represents. Some tooling is available (it’s how the listing on typhonjs.io is created), but I’ve got a few unpublished tools that, given a regex and a Github token, can clone all TyphonJS repos across all organizations in bulk, automatically create WebStorm projects, and run npm / jspm install in one batch (no need to manually install everything, which would be crazy). Eventually a GUI configuration tool for apps will allow the appropriate selections for an end project, creating the project with referenced NPM / JSPM modules and all resources, etc. Lots more, but I’ll stop here as this is a JGloom thread… :stuck_out_tongue_winking_eye:

Also keep in mind the glTF binary extension

So the canary in the coal mine in all the discussion thus far is that what you are trying to pull off is complex. If you go down the traditional OO route you’re going to be screwed, perhaps even more so than with a generic OO entity system, and especially screwed if you ever want to support Vulkan efficiently.

The direction I recommend is a purely event-driven path for model loading and rendering. There are no direct connections between subsystems, just events that each subsystem responds to, posting further output events that are handled by the next system in the chain. This works for loading and rendering or whatever else JGloom does. Unfortunately there is no publicly available efficient / component-oriented event bus for Java out there (yep, TyphonRT is based on this).

Let’s assume a JGloom instance manages all models / scenes loaded and uses an event bus internally.
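To make the strawman concrete, here is a tiny illustrative bus (none of these types exist in JGloom or TyphonRT; a real implementation would be far more efficient):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative internal event bus: subsystems register for event IDs and never
// reference each other directly; they only post and receive events.
final class EventBus
{
    interface Handler { void handle(String eventId, Map<String, Object> data); }

    private final Map<String, List<Handler>> handlers = new ConcurrentHashMap<>();

    void subscribe(String eventId, Handler handler)
    {
        handlers.computeIfAbsent(eventId, id -> new CopyOnWriteArrayList<>()).add(handler);
    }

    void post(String eventId, Map<String, Object> data)
    {
        for (Handler handler : handlers.getOrDefault(eventId, Collections.emptyList()))
            handler.handle(eventId, data);
    }
}

jgloom.load(…) then only has to assign the model ID, post CREATE_MODEL and LOAD_GLTF on this bus, and hand back the promise.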

Promise modelID = jgloom.load(a loaded gltf file);

Let’s assume “load” might introspect the type of file or data being loaded. A model ID is assigned, and a promise is created and returned after two events are posted on the internal JGloom event bus. The first, CREATE_MODEL, forwards the model ID and promise and is received by a management system. The second, LOAD_GLTF, carries the raw file data after introspection and the associated model ID.

The glTF parser receives the LOAD_GLTF message, starts unpacking, and finds out there are 5 things to load. First it fires a PARSING_GLTF event on the event bus with the model ID and how many assets are being loaded; this is received by a collector system which creates the complete model instance / component manager (hopefully). The parser then fires off 5 events (let’s say there is 1 texture, 2 shaders, 2 binary blobs), so the following are punted to the event bus with the model ID and raw data from the glTF file: LOAD_GLTF_TEXTURE (x1), LOAD_GLTF_SHADER (x2), LOAD_GLTF_BINARY (x2).

One or more separate loading systems receive these events and create the proper resources for the GL or Vulkan or whatever environment being used. I.e. when you create a JGloom runtime for GWT, you load GWT loader / renderer systems which, for instance, store binary data in ArrayBuffers, whereas JOGL or LWJGL would use NIO buffers, etc. Each of those loader systems creates the proper assets / format and posts with the model ID: LOADED_GLTF_TEXTURE (x1), LOADED_GLTF_SHADER (x2), LOADED_GLTF_BINARY (x2).

The previously mentioned collector system receives these events and adds the loaded data to the model. Once all assets are received (given the asset count), it emits a LOADED_MODEL event with the model ID and model, which is picked up by the management system tracking the initially returned promise; it replaces the promise placeholder with the actual managed model and then completes / fulfills the promise. I guess you could also get fancy and support reactive events (RxJava, etc.) or expose an external event bus as well as promises.
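Sketching just the collector side of that flow against the illustrative bus above (hypothetical types, using the event IDs from the description):

import java.util.HashMap;
import java.util.Map;

// Strawman collector: counts down the assets announced by PARSING_GLTF and emits
// LOADED_MODEL once every LOADED_GLTF_* event for that model ID has arrived.
final class ModelCollector
{
    private final Map<Integer, Integer> remainingAssets = new HashMap<>();
    private final Map<Integer, Object>  partialModels   = new HashMap<>();

    ModelCollector(EventBus bus)
    {
        bus.subscribe("PARSING_GLTF", (id, data) -> {
            int modelId = (Integer) data.get("modelId");
            remainingAssets.put(modelId, (Integer) data.get("assetCount"));
            partialModels.put(modelId, createEmptyModel());
        });

        EventBus.Handler onAssetLoaded = (id, data) -> {
            int modelId = (Integer) data.get("modelId");
            attach(partialModels.get(modelId), data.get("asset"));

            int left = remainingAssets.merge(modelId, -1, Integer::sum);
            if (left == 0)
            {
                Map<String, Object> done = new HashMap<>();
                done.put("modelId", modelId);
                done.put("model", partialModels.remove(modelId));
                remainingAssets.remove(modelId);
                bus.post("LOADED_MODEL", done);
            }
        };

        bus.subscribe("LOADED_GLTF_TEXTURE", onAssetLoaded);
        bus.subscribe("LOADED_GLTF_SHADER", onAssetLoaded);
        bus.subscribe("LOADED_GLTF_BINARY", onAssetLoaded);
    }

    private Object createEmptyModel()               { return new HashMap<String, Object>(); }
    private void attach(Object model, Object asset) { /* add the asset / component to the model */ }
}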

The user can then retrieve the actual model from JGloom… Let’s say jgloom.get(modelID).

Another great reason for this kind of architecture is that it’s really easy to provide a JGloom implementation that entirely excludes any renderer and defers to the app to render, or even a model creator system that loads assets into an existing engine structure if applicable, such as JMonkeyEngine, etc. The latter of course only works if the engine defines a fine-grained API to import assets / scenegraph nodes, etc.

This is still all a strawman architecture idea, but perhaps it gives a different perspective. The renderer would be a bit more complex, considering batching, animation, culling, etc. At a certain point the user needs to be in control of these aspects. The nice thing though is that a fully decoupled system could load a user-provided system without the complexity of a complicated OO dependency hierarchy.

While not super deep on details, check out Dan Baker’s / Oxide Games talk on Vulkan and the Ashes of the Singularity engine. Take note when he mentions that the renderer can be swapped out and the rest of the engine / architecture doesn’t care.

That’s far above the task of a mesh loading library :stuck_out_tongue:

Ya, that’s fair. We can probably just generate AiScenes into buffers compatible with each library, and users can point those at vertex arrays however they’d like. That way it’s not completely useless :persecutioncomplex:

Plus we don’t have to deal with the whole OpenGL version debacle that I complain about too much.

Float (/double) for vertices/normals/uv and int (/short) for indices?

Essentially, although it can probably change per model format (some formats may require higher precision, I dunno).

Aha! Gotcha. Already a step ahead of you!

  1. Objects are in their own little interfaces ([icode]GLBuffer -> (int)[/icode]).
  • Each library has its own constructors for these objects, so we’re getting pointers from the right libraries (LWJGLBuffers.createBuffer(), LWJGLFramebuffers.createFramebuffer(), …)
  2. Objects are manipulated by function interfaces:
  • GLF[Object] specifies basic uses of each object (bind, destroy, …), stuff that all libraries support
  • GLF[Function] specifies specialized functions (GLFTexImage3D, GLFBufferSubData), stuff that only some libraries support (these extend from the GLF[Object])
  3. Containers contain objects for that library and implement the function interfaces that the library supports, making for very easy use of the object in core loaders
  4. Loaders in the core can manipulate objects from any library by using the function interface for that object (see the sketch after this list).
  • Since the GLF[Function] extends GLF[Object], you can still do the basic functions of each object, as well as specialized loaders
  • For example: [icode]loadTexture(int target, GLFTextureImage2D texture, … texture.texImage2D(…)[/icode]
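A rough sketch of that layering (hypothetical signatures that mirror the list above, not an actual JGloom API; the container step from item 3 is omitted):

import java.nio.ByteBuffer;

// 1. The object itself is just a tiny handle interface.
interface GLTexture
{
    int getTexture(); // the GL name / pointer obtained from whichever library created it
}

// 2. GLF[Object]: the baseline operations every library supports.
interface GLFTexture
{
    void bind(GLTexture texture, int target);
    void delete(GLTexture texture);
}

//    GLF[Function]: specialized calls only some libraries support, extending the base.
interface GLFTexImage2D extends GLFTexture
{
    void texImage2D(int target, int level, int internalFormat, int width, int height,
                    int border, int format, int type, ByteBuffer pixels);
}

// 4. A core loader manipulates the object purely through the function interface,
//    so it works with objects from any library.
final class TextureLoaders
{
    static void loadTexture(int target, GLTexture texture, GLFTexImage2D gl,
                            int width, int height, ByteBuffer pixels)
    {
        gl.bind(texture, target);
        // 0x1908 = GL_RGBA, 0x1401 = GL_UNSIGNED_BYTE; constants inlined for the sketch
        gl.texImage2D(target, 0, 0x1908, width, height, 0, 0x1908, 0x1401, pixels);
    }
}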

So to continue the strawman architecture, and why you must consider an asynchronous architecture (event driven / event bus), let’s consider the full glTF spec. In my previous outline of what the event control flow could look like, I took liberties in assuming that the glTF file being parsed indeed contained all of the assets as data URIs, as per the spec. Most model formats contain all of the assets to load. glTF is different, though, as any asset in the glTF file may just be an external URI reference, relative to the glTF file being loaded, that needs to be downloaded separately from the glTF file itself. This is a direct example of why an asynchronous architecture is needed.


Well shoot, let’s say the glTF file is loaded from a remote URI and the file now includes external URI entries for one or more of the assets to load. The call to JGloom might now look like:

Promise modelID = jgloom.load("https://raw.githubusercontent.com/KhronosGroup/glTF/master/sampleModels/CesiumMilkTruck/glTF/CesiumMilkTruck.gltf");

https://github.com/KhronosGroup/glTF/tree/master/sampleModels/CesiumMilkTruck/glTF

This means jgloom.load detects the URI and determines it’s some remote file to load. Let’s say the JGloom load API has an internal implementation that takes the actual loaded file. Like before, from the public API a model ID is assigned and a promise is created and returned. Then LOAD_EXTERNAL_URI is posted on the internal JGloom event bus with the URI of the file to load, the associated model ID, and a separate promise which accepts the loaded file and invokes the internal load implementation that takes the actual file. The system that receives LOAD_EXTERNAL_URI makes the HTTP request and, upon receipt of the file, fulfills the loader promise. Now we are more or less back to the control flow initially described: the internal load implementation determines it’s a glTF file and fires two events, CREATE_MODEL and LOAD_GLTF. As an aside, what if the URI is a local “file://” URI? No problem: the external URI loader system accesses the file system and proceeds just as it would for a remote request.

The glTF parser receives the LOAD_GLTF message, starts unpacking, and finds out there are 5 things to load. First it fires a PARSING_GLTF event on the event bus with the model ID and how many assets are being loaded; this is received by the collector system which creates the complete model instance / component manager (hopefully). However, now all the assets are external URIs not locally available. So the parser creates 5 promises which receive the asset files and, like before, subsequently punt individually to the event bus with the model ID and raw data from the external file: LOAD_GLTF_TEXTURE (x1), LOAD_GLTF_SHADER (x2), LOAD_GLTF_BINARY (x2). To obtain those files, the parser fires off 5 LOAD_EXTERNAL_URI events to the internal JGloom event bus. The system handling LOAD_EXTERNAL_URI resolves these promises and the control flow resumes as originally described, collecting and completing the model loading process.

The event driven / event bus architecture perfectly handles these cases.

What if there is an error, though? Let’s say an external resource 404s. The system handling external requests resolves the file request promise with an error and message. The error handler of the request promise then posts a LOAD_MODEL_ERROR event to the internal event bus with the model ID and a descriptive message. The collector system receives this message, cleans up resources and removes tracking for the given model ID. The management system receives the message, resolves the modelID promise passed back to the user with an error, and removes tracking for that model ID.
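A strawman of that external-URI system against the same illustrative bus (hypothetical; plain java.net is used for the HTTP fetch, CompletableFuture stands in for whichever promise library ends up being chosen, and the catch block is the 404 case described above):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Strawman handler for LOAD_EXTERNAL_URI: fetches the resource off-thread and either
// completes the per-request future (resuming the normal load flow) or reports
// LOAD_MODEL_ERROR for the model ID so the collector / management systems clean up.
final class ExternalUriLoader
{
    ExternalUriLoader(EventBus bus)
    {
        bus.subscribe("LOAD_EXTERNAL_URI", (id, data) -> {
            String uri = (String) data.get("uri");
            int modelId = (Integer) data.get("modelId");
            @SuppressWarnings("unchecked")
            CompletableFuture<byte[]> request = (CompletableFuture<byte[]>) data.get("request");

            CompletableFuture.runAsync(() -> {
                try
                {
                    request.complete(fetch(uri));
                }
                catch (IOException e)
                {
                    // e.g. a 404: fail this request and post the error for cleanup
                    request.completeExceptionally(e);
                    Map<String, Object> error = new HashMap<>();
                    error.put("modelId", modelId);
                    error.put("message", "Failed to load " + uri + ": " + e.getMessage());
                    bus.post("LOAD_MODEL_ERROR", error);
                }
            });
        });
    }

    private static byte[] fetch(String uri) throws IOException
    {
        HttpURLConnection connection = (HttpURLConnection) new URL(uri).openConnection();
        if (connection.getResponseCode() != 200)
            throw new IOException("HTTP " + connection.getResponseCode() + " for " + uri);

        try (InputStream in = connection.getInputStream();
             ByteArrayOutputStream out = new ByteArrayOutputStream())
        {
            byte[] chunk = new byte[8192];
            int read;
            while ((read = in.read(chunk)) != -1)
                out.write(chunk, 0, read);
            return out.toByteArray();
        }
    }
}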

Consider the benefit of an asynchronous architecture up front. A traditional OO architecture, especially with discrete listeners, will fold and be hard to maintain / debug.

I looked around for Java promise libraries that work on Android and it seems like JDeferred might be a good choice.

Interesting, I should go about implementing an asynchronous architecture… I’m not very good at designing async machines; using an object before it exists doesn’t excite me.
If anyone has some good reads on async, please PM me. I’m pretty dumbfounded when it comes to anything other than basic OO.

I’d love to look into it though; since we haven’t started on the loading yet, it should be quick to get working!
Thanks for the help Catharsis!

Java has definitely been late to the game for a good promise-based solution. Java 8 introduced CompletableFuture. The only problem is that it was only added to Android at API level 24, which precludes it really being usable per se. JDeferred seems like the best available external option presently. A promise is an object.
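For reference, a minimal JDeferred-style sketch (the package / class names are from JDeferred 1.x as I recall them, so verify before relying on this):

import org.jdeferred.Deferred;
import org.jdeferred.Promise;
import org.jdeferred.impl.DeferredObject;

// Sketch of how jgloom.load(...) could hand back a promise with JDeferred.
final class LoadExample
{
    static Promise<Object, Throwable, Void> load(String uri)
    {
        Deferred<Object, Throwable, Void> deferred = new DeferredObject<>();

        // In the real flow, CREATE_MODEL / LOAD_GLTF would be posted on the internal bus
        // here, and the management system would later call deferred.resolve(model) or
        // deferred.reject(error). Resolved immediately just so this example completes.
        deferred.resolve("placeholder model for " + uri);

        return deferred.promise();
    }

    public static void main(String[] args)
    {
        load("CesiumMilkTruck.gltf")
            .done(model -> System.out.println("Loaded: " + model))
            .fail(error -> System.err.println("Failed: " + error.getMessage()));
    }
}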

There are several OO event bus implementations around. GWT has one, Guava has one, and there are many others. The last one might be a good candidate for an existing solution. Events are fully formed objects… Err, of course I think all of them are flawed in respect to a component architecture.

When I mention a component architecture, essentially I mean a generic application of what has received attention in the Java sphere as an “Entity System”: Artemis, Ashley, etc. (all flawed per se). At the heart of it, the important design concern is implicit composition. The TyphonRT event bus implementation uses extensible enums as event categories. The all-caps “event IDs” that I mention in the previous post, like LOAD_GLTF or CREATE_MODEL, are extensible enums. With TyphonRT an event is still an object, but a component manager: you can attach any type of data to it dynamically, and other systems register with the event bus under the extensible enum categories or catch-alls. In this respect there are no specific event types like the traditional OO approach; an event is defined by the category / extensible enums it’s posted under, and there is just one type of event which has data implicitly attached.
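In plain Java, the implicit-composition idea might look roughly like this; it is only an illustration of extensible-enum categories plus a single event type with data attached, not TyphonRT code:

import java.util.HashMap;
import java.util.Map;

// "Extensible enum": an interface that concrete enums implement, so new modules can add
// categories (e.g. for new glTF extensions) without touching existing ones.
interface EventCategory { String name(); }

enum ModelEvents implements EventCategory { CREATE_MODEL, LOAD_GLTF, LOADED_MODEL }
enum GltfAssetEvents implements EventCategory { LOAD_GLTF_TEXTURE, LOAD_GLTF_SHADER, LOAD_GLTF_BINARY }

// One event type for everything: it is defined by the category it is posted under and
// carries data attached by type rather than by concrete event subclass.
final class Event
{
    private final Map<Class<?>, Object> components = new HashMap<>();

    <T> Event set(Class<T> type, T value) { components.put(type, value); return this; }
    <T> T get(Class<T> type)              { return type.cast(components.get(type)); }
}

// A bus keyed on EventCategory rather than strings would then let a system subscribe to
// ModelEvents.LOAD_GLTF and receive, say:
// new Event().set(Integer.class, modelId).set(byte[].class, rawGltfBytes)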

You can still get away with traditional OO, but there will be a proliferation of concrete event types. If you get the design right up front, it can all still work. But consider new extensions to glTF: then you have to mutate the events passed through the system or introduce new ones.

I’m playing fast and loose in the conversation in general.

That’s awesome! I never considered event buses; I’ve always used proprietary listeners… I’ll see if I can work Guava into the library.

Definitely consider not using traditional / old school listeners if sanity is to be maintained. :wink:

Opinionated: absolutely avoid Guava at all costs, and dubiously view and double check anything that has touched Google engineering hands. Despite its reputation, laziness abounds within Google engineering. JGloom is already going to be bloated beyond belief as far as API surface is concerned, let alone the internal implementation, going down the traditional OO route. If you add external libraries as dependencies, pick the smallest, most efficient, purpose-built ones for the task. I guess another thing that should be in mind now, and not later, is creating the smallest API surface and modularization of JGloom.

JDeferred and the greenrobot EventBus seem like reasonable candidates from a cursory search. I haven’t used either of these libraries, so evaluate them!