Savegame design

It’s not such a problem if you plan ahead for it and design accordingly.

Cas :slight_smile:

I once had a project spanning many years where backwards compatibility had to work for all users from day one, so they wouldn’t lose their stuff. We had a separate loading + initialization function for each save version, going back a dozen revisions. I don’t know if there are any cleaner ways of doing it.
So if the user had a v1.11 save on the server and booted up the new v1.12 client, the server would upgrade the savefile.
I believe we also kept the last couple of versions backed up in case of goof-ups.
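For concreteness, here’s a minimal sketch of that kind of version-dispatched loader. The `SaveData` class, the version numbers, and the reader methods are all hypothetical, just to show the shape of it:

```java
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch of a version-dispatched loader: the first int in the
// stream is the save-format version, and every legacy version keeps its own
// reader that upgrades the data to the current in-memory model on load.
final class SaveLoader {
    static final int CURRENT_VERSION = 12;

    SaveData load(DataInputStream in) throws IOException {
        int version = in.readInt();
        switch (version) {
            case 12: return readV12(in);
            case 11: return upgradeV11(readV11(in)); // e.g. a v1.11 save
            // ... one case per supported legacy version, back a dozen revisions
            default:
                throw new IOException("Unsupported save version: " + version);
        }
    }

    private SaveData readV12(DataInputStream in) throws IOException { /* ... */ return new SaveData(); }
    private SaveData readV11(DataInputStream in) throws IOException { /* ... */ return new SaveData(); }
    private SaveData upgradeV11(SaveData old) { /* fill in fields added since v1.11 */ return old; }
}

class SaveData { /* current in-memory model */ }
```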

While that approach is the most flexible, it is also the most tedious. There are other ways of providing forward and backward compatibility.

Rather than assuming the saved data will be in a specific order, you can annotate each piece of data with the field it corresponds to. Your saved data will be larger, but limited compatibility becomes easy to support. E.g., if a field has been removed, you can just ignore the old data for that field. If a field has been added but doesn’t exist in old data, you can just leave the field at its default. If a field’s type has changed, you can attempt to massage the old value into the new type (int -> float, long -> String, etc). Renaming fields isn’t easy to support with this approach, so don’t do that.
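A minimal sketch of that field-tagged approach in Java (the class and method names are made up, and it only handles ints for brevity; real code would tag each value’s type as well):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of field-tagged saving: each value is written with the
// name of the field it belongs to, so a reader can drop data for removed
// fields and fall back to defaults for fields an old save never had.
final class TaggedSave {
    static void write(DataOutputStream out, Map<String, Integer> fields) throws IOException {
        out.writeInt(fields.size());
        for (Map.Entry<String, Integer> e : fields.entrySet()) {
            out.writeUTF(e.getKey());   // the field tag
            out.writeInt(e.getValue()); // the value
        }
    }

    static Map<String, Integer> read(DataInputStream in) throws IOException {
        Map<String, Integer> fields = new HashMap<>();
        int count = in.readInt();
        for (int i = 0; i < count; i++) {
            fields.put(in.readUTF(), in.readInt()); // unknown tags just sit here unused
        }
        return fields;
    }

    // An added field that old data doesn't contain comes back as the default.
    static int get(Map<String, Integer> fields, String name, int defaultValue) {
        return fields.getOrDefault(name, defaultValue);
    }
}
```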

If you have a schema for your saved data, adding/removing fields could work the same way, but you could also describe differences between versions (e.g., renaming fields or type conversions) and have whatever parses your data handle them. I find writing and maintaining a schema to be annoying, though.
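A sketch of how those described differences might be applied as migration steps. All names and the version-to-step table here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of schema-driven migration: each step rewrites a map of
// tagged fields from one schema version to the next, so a rename or a type
// conversion is described once instead of being re-handled in every reader.
final class Migrations {
    interface Step { void apply(Map<String, Object> fields); }

    // maps version N to the step that upgrades data from N to N + 1
    static final Map<Integer, Step> STEPS = new HashMap<>();
    static {
        STEPS.put(3, fields -> {            // v3 -> v4: "hp" was renamed to "health"
            Object hp = fields.remove("hp");
            if (hp != null) fields.put("health", hp);
        });
        STEPS.put(4, fields -> {            // v4 -> v5: "score" went from int to long
            Object s = fields.get("score");
            if (s instanceof Integer) fields.put("score", ((Integer) s).longValue());
        });
    }

    static void upgrade(Map<String, Object> fields, int fromVersion, int toVersion) {
        for (int v = fromVersion; v < toVersion; v++) {
            Step step = STEPS.get(v);
            if (step != null) step.apply(fields);
        }
    }
}
```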

Just implement java.io.Externalizable and then you can use ObjectOutputStream. Then just use a Map&lt;String, Object&gt;, and any values that don’t exist get set to a default. This works fine.
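Something like this, presumably (a hypothetical sketch; note that Externalizable requires a public no-arg constructor):

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the Map-based approach: the save is just a
// Map<String, Object> pushed through ObjectOutputStream, and any key an
// older save never wrote comes back as a supplied default.
final class MapSave implements Externalizable {
    private Map<String, Object> values = new HashMap<>();

    public MapSave() {} // Externalizable requires a public no-arg constructor

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeObject(values);
    }

    @Override
    @SuppressWarnings("unchecked")
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        values = (Map<String, Object>) in.readObject();
    }

    Object get(String key, Object defaultValue) {
        return values.getOrDefault(key, defaultValue);
    }
}
```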

Sigh… no!

Ordinary serialization already does all of the difficult work involved in this problem. The problem exists no matter what binary or ASCII format you use to actually encode the data; that is, if you change the data model, you have to figure out what to do with all the persisted data in the wild.
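To make that concrete: with an ordinary Serializable class and a pinned serialVersionUID, the common add/remove cases are already handled for you. A small example:

```java
import java.io.Serializable;

// Built-in serialization already tolerates added and removed fields as long
// as serialVersionUID is pinned: fields missing from an old stream come back
// as their JVM defaults (0, null, false), and a newer class simply skips
// stream data for fields it no longer declares.
class PlayerSave implements Serializable {
    private static final long serialVersionUID = 1L; // pin this; never let it be recomputed

    int level;
    String name;
    // int gold;        // removed field: its data in old streams is ignored
    float staminaRegen; // added field: reads back as 0.0f from old streams
}
```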

If you’re looking for a magic bullet that miraculously solves the problem and, for example, write your own serializer, all you’ve done is reinvent a perfectly good working wheel, probably incompletely, and you’re left with exactly the same problem you had when you started.

Cas :slight_smile:

Aaaand we’re back to reply #14.

Exactly. Running around in circles reinventing something that already does what it says on the tin, trying to find a shortcut to solving a problem that cannot be solved with shortcuts.

Cas :slight_smile:

The built-in serialization is crappy. If it happens to be sufficient for you, that is great, but there are many scenarios where it is a poor choice. E.g., if you wanted serialization to be fast, or if you wanted the output to be small, or if you wanted to use standard constructor invocation, etc.

You’ll notice “java-built-in” does badly in the benchmarks, while “java-manual” does well. The latter uses java.io.Externalizable and hand-written serialization code. This is fast and efficient, but extremely tedious. It isn’t any special win for the built-in serialization, as any interface with “readBytes” and “writeBytes” methods could do the same. It also doesn’t handle backward/forward compatibility.
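Roughly what that hand-written style looks like (a hypothetical example, not the actual benchmark code):

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

// Hypothetical example of that hand-written style: every field is written and
// read explicitly, in a fixed order. Fast and compact, but each field change
// means editing both methods, and there is no versioning unless you add it.
final class ManualSave implements Externalizable {
    int level;
    String name = "";
    float stamina;

    public ManualSave() {} // required by Externalizable

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(level);
        out.writeUTF(name);
        out.writeFloat(stamina);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        level = in.readInt();   // read order must match write order exactly
        name = in.readUTF();
        stamina = in.readFloat();
    }
}
```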

LOLLOLLLLOLLLL.

All right I’m going to stop discussing this topic now. :stuck_out_tongue:

Getting programmers to agree on how to do something is a bit like herding cats :slight_smile: There’s a good maxim you can apply here: he who actually does it is right.

Cas :slight_smile: