It’s pretty effective in the DCC world. The best example I know of is Softimage ICE, which is actually what inspired me to create this tool. It’s very popular, and I’ve seen non-programmers do crazy things with it, real programming stuff. Also see Houdini’s procedural workflow, the shader trees in 3ds Max/Maya, Kismet in UE, etc. Maya is getting a tool similar to ICE in a future version too, afaik.
Also keep in mind that the tool itself is intended to be used by developers only. Whether or not non-programmers have access to it depends on the product and how the tool has been integrated. One obvious concern is safety; you don’t want non-programmers to have access to file-access nodes, etc. So a developer would have to filter and customize the core tool and expose something more usable by non-programmers (or programmers with restricted access). Hopefully the tool itself will make this process easy.
wrt MIT’s Scratch and similar tools: none of them come close to general-purpose programming, and DOPE is meant to enable that.
- Even though the underlying design is completely different, I shamelessly copied the UI to avoid having to come up with “programmer graphics”. I was also learning JavaFX at the time and wanted to push it hard and see what it can do. Anyway, the current GUI is a placeholder; it will change while I’m finishing the core functionality.
The plan is to expose everything in an API, so you’ll be able to use it without the GUI. I also have a third party willing to create an HTML5 client (think client-side design -> server-side code generation + execution).
The main performance benefit of using a code-generating tool like this is code reuse across different execution contexts:
- You design an algorithm once and it can run on both a JVM and a GPU. Not only do you not have to write it twice, you don’t even have to bother learning OpenCL/CUDA/whatever. It will just work (assuming, of course, you stay within certain limits; think Aparapi’s constraints).
- There’s a benefit from the user’s perspective as well: a user with a decent GPU and working drivers will get the performance benefit transparently. A user without one will still have a working app; the runtime will simply fall back to multi-threaded JVM execution.
Also, runtime code generation means the tool can make optimization decisions based on the real user environment. Again, the developer doesn’t have to intervene or even think about it; advanced stuff like tuning for a GPU’s warp/wavefront size happens automatically.
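To make the fallback idea concrete, here’s a minimal sketch in plain Java (no Aparapi dependency; the `Backend` interface and the always-failing `GpuBackend` stub are hypothetical names I made up for illustration) of how a runtime might try a GPU execution context first and transparently fall back to multi-threaded JVM execution:

```java
import java.util.function.IntUnaryOperator;
import java.util.stream.IntStream;

public class FallbackRuntime {
    // Hypothetical backend abstraction: the generated kernel is a pure
    // element-wise function, so the same algorithm can target either
    // execution context without being written twice.
    interface Backend {
        int[] map(int[] input, IntUnaryOperator kernel);
    }

    // Stub standing in for an OpenCL/CUDA path (e.g. via Aparapi).
    // Here it always reports "unavailable" to demonstrate the fallback.
    static class GpuBackend implements Backend {
        public int[] map(int[] input, IntUnaryOperator kernel) {
            throw new UnsupportedOperationException("no usable GPU/driver");
        }
    }

    // Multi-threaded JVM fallback using parallel streams.
    static class JvmBackend implements Backend {
        public int[] map(int[] input, IntUnaryOperator kernel) {
            return IntStream.range(0, input.length)
                            .parallel()
                            .map(i -> kernel.applyAsInt(input[i]))
                            .toArray();
        }
    }

    // Try the GPU first; if it's missing or the driver is broken,
    // fall back transparently -- the user still has a working app.
    static int[] run(int[] input, IntUnaryOperator kernel) {
        try {
            return new GpuBackend().map(input, kernel);
        } catch (UnsupportedOperationException e) {
            return new JvmBackend().map(input, kernel);
        }
    }

    public static void main(String[] args) {
        int[] out = run(new int[]{1, 2, 3, 4}, x -> x * x);
        System.out.println(java.util.Arrays.toString(out));
    }
}
```

The point is that the kernel is written once and never mentions threads or OpenCL; the backend selection lives entirely in the runtime.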
But the answer to your question is yes. The main reason is that a graphical programming environment opens up a lot of possibilities for visual debugging:
Since the tool will have full information about the input data layout, it will be able to display intermediate values in a data flow for multiple input values at once. Even without real data, you’ll be able to create additional nodes that generate dummy data and feed those in to see how your algorithm behaves. Softimage ICE does this very nicely, with multiple options for how the data is visualized in the 3D scene.
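A rough sketch of that debugging idea (all names here are hypothetical, not the tool’s actual API): each dataflow node records the intermediate value it produces for every input fed through it, so a GUI could show the whole trace next to the node when you pump generated dummy data through the graph.

```java
import java.util.*;
import java.util.function.IntUnaryOperator;

public class DebugFlow {
    // Hypothetical dataflow node: wraps an operation and records the
    // intermediate value it produces for every input fed through it.
    static class Node {
        final String name;
        final IntUnaryOperator op;
        final List<Integer> trace = new ArrayList<>();
        Node(String name, IntUnaryOperator op) { this.name = name; this.op = op; }
        int eval(int x) { int v = op.applyAsInt(x); trace.add(v); return v; }
    }

    // Feed dummy data (a generated input range) through a two-node chain
    // and return each node's recorded intermediate values.
    static Map<String, List<Integer>> runDemo(int[] dummyInputs) {
        Node scale  = new Node("scale",  x -> x * 10);
        Node offset = new Node("offset", x -> x + 1);
        for (int x : dummyInputs) offset.eval(scale.eval(x));
        Map<String, List<Integer>> traces = new LinkedHashMap<>();
        traces.put(scale.name, scale.trace);
        traces.put(offset.name, offset.trace);
        return traces;
    }

    public static void main(String[] args) {
        System.out.println(runDemo(new int[]{0, 1, 2, 3}));
    }
}
```

A real tool would visualize these traces graphically, per node, for all inputs at once, rather than printing them.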
Breakpoints and step-by-step debugging can be so much more powerful. See how amazing it looks in Kismet.
Using graphics and code generation for programming makes it much easier to implement stuff like Bret Victor’s brilliant ideas.