Yep, ArrayBuffer and whatever view matches for the copy.
TyphonRT has 13 years of crazy work behind it, though it's not open source for all to see; so as far as crazy work goes, what's visible is the tip of the iceberg…
I just want to get something commercially launched that justifies the ROI before open sourcing TyphonRT. TyphonJS I am open sourcing, and it has been refreshing to get things out for folks to use, though I haven't publicized anything about it yet. TyphonJS is also a test of building the infrastructure for the many-repo approach. Even a year ago that approach was only financially possible on GitHub for open source repos (free). I believe ~6 months ago GitHub finally changed the pricing structure to unlimited private repos per org / user for a flat fee per user. Even then that doesn't fit the model I'm developing, since I treat GitHub organizations as a "component category"; if I want private repos across ~30 organizations for one user that's ~$210 a month. Better than it used to be though. There is GitLab (free private repos), but I'm going to evaluate that later.

TyphonJS is currently spread over ~25 organizations, each with repos specific to the category that the organization represents. Some tooling is available (it's how the listing on typhonjs.io is created), and I've got a few unpublished tools: given a regex and a GitHub token, for instance, all TyphonJS repos across all organizations can be cloned in bulk, WebStorm projects automatically created, and npm / jspm install run in one batch (no need to manually install everything, which would be crazy)… Eventually a GUI configuration tool for apps will allow appropriate selections to be made so an end project is created with the referenced NPM / JSPM modules and all resources, etc… Lots more, but I'll stop here since this is a JGloom thread…
Also keep in mind the glTF binary extension
So the canary in the coal mine in all the discussion thus far is that what you're trying to pull off is complex. If you go down the traditional OO route you're going to be screwed, possibly even more so than with a generic OO entity system, and especially screwed if you ever want to support Vulkan efficiently.
The direction I recommend is a purely event driven path for model loading and rendering. There are no direct connections between subsystems; just events that each subsystem responds to, posting further output events handled by the next system in the chain. This works for loading and rendering or whatever else JGloom does. Unfortunately there is no publicly available efficient / component oriented event bus for Java out there (yep, TyphonRT is based on this).
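To make the idea concrete, here is a minimal sketch of the kind of topic-based event bus described above. The class and method names are hypothetical stand-ins, not TyphonRT's API:

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal topic-based event bus sketch (hypothetical names, not TyphonRT).
// Subsystems register handlers per topic and never reference each other
// directly; chaining happens because one handler posts the next event.
class EventBus
{
    private final Map<String, List<Consumer<Object>>> handlers = new HashMap<>();

    // Subscribe a handler to a topic.
    void on(String topic, Consumer<Object> handler)
    {
        handlers.computeIfAbsent(topic, k -> new ArrayList<>()).add(handler);
    }

    // Deliver a payload to every handler registered for the topic.
    void post(String topic, Object payload)
    {
        for (Consumer<Object> h : handlers.getOrDefault(topic, Collections.emptyList()))
        {
            h.accept(payload);
        }
    }
}
```

The chaining is the key property: a parser system handling LOAD_GLTF would itself post LOAD_GLTF_TEXTURE and friends, and neither side holds a reference to the other.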
Let’s assume a JGloom instance manages all models / scenes loaded. JGloom has an event bus internally used.
Promise modelID = jgloom.load(gltfFile); // pseudocode; pass a loaded glTF file
Let’s assume “load” might introspect the type of file or data being loaded. A model ID is assigned, and a promise is created and returned after posting two events on the internal JGloom event bus. The first, CREATE_MODEL, forwards the model ID and promise, which is received by a management system. The second, LOAD_GLTF, carries the raw file data after introspection and the associated model ID.
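A sketch of what that load() entry point might look like, assuming a CompletableFuture as the promise; the ModelEvent payload, event names, and internal bus here are all hypothetical:

```java
import java.util.*;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Hypothetical event payload: a model ID plus arbitrary data.
class ModelEvent
{
    final long modelID;
    final Object data;
    ModelEvent(long modelID, Object data) { this.modelID = modelID; this.data = data; }
}

// Sketch of the JGloom load() entry point: assign an ID, create the promise,
// post CREATE_MODEL and LOAD_GLTF, and let subsystems on the bus do the work.
class JGloom
{
    private final Map<String, List<Consumer<ModelEvent>>> bus = new HashMap<>();
    private final AtomicLong nextID = new AtomicLong();

    void on(String topic, Consumer<ModelEvent> handler)
    {
        bus.computeIfAbsent(topic, k -> new ArrayList<>()).add(handler);
    }

    private void post(String topic, ModelEvent event)
    {
        for (Consumer<ModelEvent> h : bus.getOrDefault(topic, Collections.emptyList()))
        {
            h.accept(event);
        }
    }

    CompletableFuture<Object> load(byte[] gltfData)
    {
        long modelID = nextID.incrementAndGet();
        CompletableFuture<Object> promise = new CompletableFuture<>();
        post("CREATE_MODEL", new ModelEvent(modelID, promise)); // management system tracks the promise
        post("LOAD_GLTF", new ModelEvent(modelID, gltfData));   // parser receives the raw data
        return promise;
    }
}
```

Note that load() itself never parses anything; it only assigns the ID and posts the two events, so swapping the parser or management system requires no change here.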
The glTF parser receives the LOAD_GLTF message, starts unpacking, and finds out there are 5 things to load. First it fires a PARSING_GLTF event on the bus with the model ID and how many assets are being loaded; this is received by a collector system which creates the complete model instance / component manager (hopefully). The parser then fires off 5 events (let’s say there is 1 texture, 2 shaders, and 2 binary blobs), so the following are posted to the event bus with the model ID and raw data from the glTF file: LOAD_GLTF_TEXTURE (x1), LOAD_GLTF_SHADER (x2), LOAD_GLTF_BINARY (x2).

One or more separate loading systems receive these events and create the proper resources for the GL or Vulkan or whatever environment being used. I.e. when you create a JGloom runtime for GWT you load GWT loader / renderer systems which, for instance, store binary data in ArrayBuffers; for JOGL or LWJGL, NIO buffers, etc. Each of those loader systems creates the proper assets / format and posts, with the model ID: LOADED_GLTF_TEXTURE (x1), LOADED_GLTF_SHADER (x2), LOADED_GLTF_BINARY (x2).

The previously mentioned collector system receives these events and adds the loaded data to the model. Once all assets are received (given the asset count) it emits a LOADED_MODEL event with the model ID and model, which is picked up by the management system tracking the initially returned promise; that system replaces the promise placeholder with the actual managed model and completes / fulfills the promise. I guess you could also get fancy and support reactive events (RxJava, etc.) or expose an external event bus as well as promises.
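The collector system's counting logic above can be sketched as follows; the class and method names are hypothetical, and the bus plumbing is elided so only the count-down-to-completion behavior is shown:

```java
import java.util.*;

// Sketch of the collector system: PARSING_GLTF establishes the expected
// asset count, and once every LOADED_* event for that model ID arrives the
// completed asset list is handed back (which would trigger LOADED_MODEL).
// All names are hypothetical.
class ModelCollector
{
    private final Map<Long, Integer> remaining = new HashMap<>();
    private final Map<Long, List<Object>> assets = new HashMap<>();

    // Handles PARSING_GLTF: record how many assets to expect.
    void onParsingGltf(long modelID, int assetCount)
    {
        remaining.put(modelID, assetCount);
        assets.put(modelID, new ArrayList<>());
    }

    // Handles LOADED_GLTF_* events: returns the complete asset list when the
    // final asset arrives, otherwise null while assets are still outstanding.
    List<Object> onAssetLoaded(long modelID, Object asset)
    {
        assets.get(modelID).add(asset);
        int left = remaining.merge(modelID, -1, Integer::sum);
        if (left == 0)
        {
            remaining.remove(modelID);
            return assets.remove(modelID); // would be posted as LOADED_MODEL
        }
        return null;
    }
}
```

Because completion is driven purely by the asset count, the collector doesn't care which loader systems produced the assets or in what order they arrive.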
The user can then retrieve the actual model from JGloom… Let’s say jgloom.get(modelID).
Another great reason for this kind of architecture is that it’s really easy to provide a JGloom implementation that entirely excludes any renderer and defers to the app to render, or even a model creator system that loads assets into an existing engine structure where applicable, such as jMonkeyEngine, etc. The latter of course requires the engine to define a fine grained API for importing assets / scenegraph nodes, etc.
This is still all a strawman architecture idea, but perhaps it gives a different perspective. The renderer would be a bit more complex… considering batching, animation, culling, etc. At a certain point the user needs to be in control of these aspects. The nice thing though is that a fully decoupled system could load a user provided system without the complexity of a complicated OO dependency hierarchy.
While not super deep on details, check out Dan Baker’s / Oxide Games’ talk on Vulkan and the Ashes of the Singularity engine. Take note when he mentions that the renderer can be swapped out and the rest of the engine / architecture doesn’t care.