What I did today

Played around with outlines in my engine today :stuck_out_tongue:

I suggest you guys try out the browser Vivaldi, it’s pretty neat. I’m enjoying it so far.

Today I started implementing an ext2 driver for my OS.

Not Java or (directly) gaming-related, but I’m feeling good about it, so I’ll post:

Some may remember the tiny little stack-based language I wrote a while back: https://github.com/BurntPizza/ghost

I’m finally making good progress on a more serious language I’ve been kicking around in my head for a while: https://github.com/BurntPizza/aleph

It’s a lisp, with a (hopefully, as it’s not implemented yet) static type system.
Today I implemented my first macro, the let form:


// "let a = 4, b = 6, and c = 8 in a + b + c"
(let (a 4
      b 6
      c 8)
  (+ a b c))

// will return 18

It’s currently implemented by expanding into an anonymous function which is immediately called
(anonymous functions are currently the only way to bind names in a new scope, just as in the lambda calculus):


((fn (a b c)
  (+ a b c))
 4 6 8)

// will return 18

This in turn compiles to the following program:


---- interpreter::test::let_form stdout ----
SymbolTable:
| id | ident |   kind   |
|----|-------|----------|
| 12 |   def |  special |
| 13 |    fn |  special |
| 14 |    do |  special |
| 15 |     + |  special |
| 16 |     a |      var |
| 17 |     b |      var |
| 18 |     c |      var |
| 21 |  fn$0 |       fn |

Constant table:
[0] 4
[1] 6
[2] 8

Bytecode:
0:     ldc    4
3:     ldc    6
6:     ldc    8
9:     call   fn$0
14:    exit   
  fn$0:
15:    save   %2  // I don't have an optimization pass for
17:    save   %1  // pointless stack-shuffling like this yet 
19:    save   %0  // or inlining for that matter
21:    load   %0
23:    load   %1
25:    load   %2
27:    add    3
29:    ret    

I can prove it works: the interpreter::test::let_form test above passed. :wink:
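In case anyone wants to see the shape of the expansion without digging through the source: here’s a hypothetical Java model of the rewrite. Aleph itself isn’t written in Java, and the names and list-based s-expression representation here are made up purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the let -> fn expansion, modeling s-expressions
// as nested Lists (symbols as Strings). Not the actual aleph implementation.
public class LetExpansion {

    // (let (a 4 b 6 c 8) body...)  ==>  ((fn (a b c) body...) 4 6 8)
    static List<Object> expandLet(List<Object> letForm) {
        List<?> bindings = (List<?>) letForm.get(1);
        List<Object> params = new ArrayList<>();
        List<Object> args = new ArrayList<>();
        for (int i = 0; i < bindings.size(); i += 2) {
            params.add(bindings.get(i));     // binding name
            args.add(bindings.get(i + 1));   // bound value
        }
        List<Object> fn = new ArrayList<>();
        fn.add("fn");
        fn.add(params);
        fn.addAll(letForm.subList(2, letForm.size())); // the body forms
        List<Object> call = new ArrayList<>();
        call.add(fn);       // the anonymous function...
        call.addAll(args);  // ...immediately applied to the values
        return call;
    }

    public static void main(String[] args) {
        List<Object> let = List.of(
            "let", List.of("a", 4, "b", 6, "c", 8),
            List.of("+", "a", "b", "c"));
        System.out.println(expandLet(new ArrayList<>(let)));
        // [[fn, [a, b, c], [+, a, b, c]], 4, 6, 8]
    }
}
```

The real version obviously has to worry about hygiene and nested expansion, but the core move is exactly this list surgery.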

Now just to figure out how to do namespaces well…

This is not Java, but I’m thinking of posting this as an update to my workshop instructor… Any recommendations on how to improve the next update?


If you want to play with designing a language, the Racket people have some freely available books. Racket’s solution to namespaces is: https://docs.racket-lang.org/reference/Namespaces.html

Indeed. I’m also aware of Racket’s (possible; I forget whether they’re actually using it yet) macro-expansion solution, which I’ll be taking a thorough look at: https://www.cs.utah.edu/plt/publications/popl16-f.pdf

Aleph is definitely inspired by Clojure, but one thing I want to make sure I don’t wind up with is anything resembling Clojure’s terrible namespaces: mutable (!) collections of Vars at runtime. Given the language’s advocacy of REPL-driven development and immutability, the fact that it is very easy to accidentally mutate your namespace into an erroneous state is ridiculous. The namespaces almost feel like they came from a poor fork of the language, even though they’re a core component.

Started a new project called GwtOpenType, a binding to the awesome opentype.js project. With this, I’m pretty sure I can enable TTF support in the GWT backend of SilenceEngine.

My target is to release a snapshot in two days.

Ok, that didn’t work.

Gotta love QEMU’s half-broken BIOS memory map… My physical memory manager is working now; on to the virtual memory manager and writing malloc and free! Yay, finally real memory management with userspace -> kernel interaction.

edit:
As a proof of concept I tried to overwrite the actual assembly code section (where the kernel starts in memory, at . = 0x00100000), and it actually worked, haha… Wherever I put this code, as soon as it executes nothing else happens, because I literally overwrote the running code in memory. In reality, the processor is trying to execute ‘0x80’ over and over and over again. I love coding.


	// kernstart and kernend are symbols defined in the linker script,
	// marking the start and end of the kernel image in memory.
	uint32_t* memtest = &kernstart;

	// Deliberately clobber the kernel's own image, one word at a time.
	for (int i = 0; i < (&kernend - &kernstart); i++)
	{
		memtest[i] = 0x80;
	}

edit: Some more cool stuff: if I instead overwrite with “31C0”, which is

 xor eax, eax 

it doesn’t stop incrementing ‘i’, and it overwrites into video memory in the short time it has, which results in some goofy stuff. At least I think that’s what’s happening. AFAIK EDX is used for loops by typical compilers, but EAX obviously has something to do with it, since video memory is being overwritten. Memory protection is important!

I re-created an old lighting engine of mine, this time optimizing the recursion with a stack. It was so much faster that I accidentally had it running every frame without any noticeable performance decline. Of course, the real (human) performance decline was when I originally used a queue and wondered why there were so many objects waiting…
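For anyone curious, the gist of the stack version looks something like this. This is a simplified, hypothetical sketch (grid of light levels, falloff of 1 per cell), not my actual engine code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of recursion-free light propagation on a grid:
// the implicit call stack is replaced with an explicit LIFO stack,
// so deep spreads can't blow the call stack.
public class LightSpread {

    static int[][] spread(int w, int h, int sx, int sy, int power) {
        int[][] light = new int[w][h];
        Deque<int[]> stack = new ArrayDeque<>();
        light[sx][sy] = power;
        stack.push(new int[] { sx, sy });
        int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        while (!stack.isEmpty()) {
            int[] c = stack.pop();
            int next = light[c[0]][c[1]] - 1; // light falls off by 1 per cell
            if (next <= 0) continue;
            for (int[] d : dirs) {
                int nx = c[0] + d[0], ny = c[1] + d[1];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (light[nx][ny] < next) {   // only brighten, never dim
                    light[nx][ny] = next;
                    stack.push(new int[] { nx, ny });
                }
            }
        }
        return light;
    }

    public static void main(String[] args) {
        // 5x5 grid, light source of power 3 in the center:
        // produces the expected diamond-shaped falloff.
        int[][] light = spread(5, 5, 2, 2, 3);
        for (int[] row : light) {
            StringBuilder sb = new StringBuilder();
            for (int v : row) sb.append(v);
            System.out.println(sb);
        }
    }
}
```

Swapping the Deque’s push/pop for an offer/poll queue gives the FIFO version, which is where I ended up with piles of pending cells.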

I’ve tried my hand at isometric projection (and a little bit of pseudo-shading) in Java2D:

I’m as yet unsure how I’d store textures to render on top.
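For context, the projection itself boils down to the usual diamond mapping. A simplified sketch (assuming the common 2:1 tile ratio; not necessarily exactly what I have):

```java
// Minimal sketch of the usual isometric grid -> screen mapping,
// assuming the common 2:1 tile ratio. Hypothetical, simplified.
public class IsoProjection {

    static final int TILE_W = 64;
    static final int TILE_H = 32; // half the width gives the classic 2:1 look

    // Grid coordinates (gx, gy) to screen X: +x moves right, +y moves left.
    static int screenX(int gx, int gy) {
        return (gx - gy) * TILE_W / 2;
    }

    // Grid coordinates plus height z to screen Y: both axes move down,
    // height lifts the tile up the screen.
    static int screenY(int gx, int gy, int z) {
        return (gx + gy) * TILE_H / 2 - z;
    }

    public static void main(String[] args) {
        System.out.println(screenX(1, 0) + "," + screenY(1, 0, 0)); // 32,16
        System.out.println(screenX(0, 1) + "," + screenY(0, 1, 0)); // -32,16
    }
}
```

Changing TILE_H relative to TILE_W is exactly the “change the height” knob discussed below: it shifts the look between isometric and flatter dimetric projections.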

Cubes make the depth hard to “see” (the top of one cube aligns with the bottom of the adjacent one). Try changing the height.

@Roquen: I do agree, I also think that it’s got to do with the fact that I’m really only drawing a grid until I make some textures to draw on top. I’m gonna do a bit of experimenting.

@Coldstream

[quote="Roquen,post:4192,topic:49634"]
I reckon you should try halving the height. I’ve already seen that in games (can’t think of any right now unfortunately) and it usually works pretty damn good. :point:

Sorting the isometric sprites the “correct” way is hell; if you go for perfection you’ll end up with an O(n^2) algorithm in the worst case, or an O(n log n) algorithm that eats up your memory in the best case.

I open-sourced one of my unreleased games.

How about height = (0.25~0.3)*width?

That picture shows two cubes (equal size in each dimension): isometric on the left, dimetric on the right. Different scales for “up” are possible with dimetric. Choose one you like to model a unit cube and see how it plays out for other objects.

It could be kinda cool to have a generalized axonometric playground. (Not me!)

I really don’t follow, especially the n^2. I did an isometric engine way back and had no such problems. Also, n log n is fast enough even on old hardware for pretty big lists. If it’s mostly sorted, many sorts tend toward O(n).

Anything that has a consistent ordering should always be sortable in n log n worst case (broken quicksorts notwithstanding).
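To make that concrete: for equal-footprint, axis-aligned tiles a consistent ordering does exist, because painter’s order reduces to a sort key of gx + gy (ties broken by height). A hypothetical sketch:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: for equal-footprint, axis-aligned iso tiles a total
// order exists, so painter's sorting is a plain O(n log n) key sort.
public class IsoDepthSort {

    record Sprite(String name, int gx, int gy, int z) {}

    static void depthSort(List<Sprite> sprites) {
        // Farther tiles (smaller gx + gy) draw first; ties broken by height.
        sprites.sort(Comparator.comparingInt((Sprite s) -> s.gx() + s.gy())
                               .thenComparingInt(Sprite::z));
    }

    public static void main(String[] args) {
        List<Sprite> sprites = new ArrayList<>(List.of(
            new Sprite("crate", 2, 3, 0),
            new Sprite("tree",  0, 1, 0),
            new Sprite("wall",  1, 1, 0)));
        depthSort(sprites);
        sprites.forEach(s -> System.out.println(s.name()));
        // prints: tree, wall, crate (back to front)
    }
}
```

The n^2 trouble only really shows up with arbitrary-size boxes, where “in front of” is only defined for overlapping pairs and the relation can even cycle, so no single sort key exists.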