LibGDX Saving Chunks To Files?

I have now tried for about 6 hours to save chunks to a file and then reload them back into the game when needed.

Can you give me any tips that might help? For example, a good saving library like simple-json / XML, and how to approach this problem.

Basically, what I am trying to do is: when all chunks have been generated, save them to one file, then remove the chunks completely so they don't exist in memory.

When the player starts to get close to the location of those chunks, they load back in using the information in the file!

ChunkManager --> generates chunks and saves them to a file
Chunk --> generates blocks with properties

And what I want the chunk file to do:
ChunkFile --> save the chunks, their blocks' information, and each chunk's location in the Chunk[][] array;
save the blocks in each chunk, their location within the chunk, and what type of block each one is

You might have an idea now, but just in case :slight_smile:

PINK = player camera
BLUE = loaded chunks that the player camera sees
YELLOW’ISH = chunks ready to be loaded by reading their info from the file
RED = not even in the game; just in a file and ready to be added to the game

http://s22.postimg.org/yd37lh6a9/explanation.png

So what is your question? Are you having difficulty doing the actual file IO or serialization, etc.?

I don't know where to start, besides writing chunks to a file =/

You should write down what steps are required and think about each step along the way. Doing things like this is part of problem solving as a programmer. Think about writing the chunks to a file, then reading them back. Once you can do that, you can write/read on the fly depending on your situation.

There are 2 different scenarios:

infinite or very big world:

use more than one file. With one huge file, memory usage grows and read/write speed drops heavily if you have to load the whole thing at once. You could store 16x16 chunks per file and give these files coordinates (see the sketch after this list).

medium or small world (a world that doesn't consume tons of memory and can be saved quite quickly):

Here you can save it in one file.
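
For the multi-file case above, here is a minimal sketch (plain Java; the 16x16 grouping and the naming scheme are just illustrative assumptions) of mapping a chunk coordinate to the region file that holds it:

    public class RegionFiles {
        public static final int REGION_SIZE = 16; // chunks per region file, per axis

        /** File name of the region file that contains the given chunk. */
        public static String regionFileName(int chunkX, int chunkY) {
            int regionX = Math.floorDiv(chunkX, REGION_SIZE); // floorDiv handles negative coordinates
            int regionY = Math.floorDiv(chunkY, REGION_SIZE);
            return "region_" + regionX + "_" + regionY + ".dat";
        }

        /** Index of the chunk inside its region file (0 .. REGION_SIZE*REGION_SIZE - 1). */
        public static int chunkIndexInRegion(int chunkX, int chunkY) {
            int localX = Math.floorMod(chunkX, REGION_SIZE);
            int localY = Math.floorMod(chunkY, REGION_SIZE);
            return localY * REGION_SIZE + localX;
        }
    }

This way you only ever read or write the small region file the player is near, instead of the whole world.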

If you're just looking for a way to actually arrange the data, try jnbt. NBT is the file format Minecraft uses, and it's what I'm using for my game right now. It's a tree-like format built out of different tags, like a Float tag, an Integer tag and a Byte Array tag, plus Compound tags to actually build a tree. You can google for jnbt, and use Minecraft's file format as a reference (it can be found on the Minecraft wiki). The actual chunk data is saved as a byte array in this form (assuming it's a 2D world): blocks[x * CHUNK_SIZE + y] = blockAt(x, y).
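
For example, a minimal sketch of writing one chunk with jnbt (assuming the org.jnbt classes CompoundTag, IntTag, ByteArrayTag and NBTOutputStream; the chunk fields and file name are made up for illustration):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    import org.jnbt.ByteArrayTag;
    import org.jnbt.CompoundTag;
    import org.jnbt.IntTag;
    import org.jnbt.NBTOutputStream;
    import org.jnbt.Tag;

    public class NbtChunkWriter {

        /** Writes one chunk as a compound tag; blocks[x * CHUNK_SIZE + y] holds the block id at (x, y). */
        public static void writeChunk(String fileName, int chunkX, int chunkY, byte[] blocks) throws IOException {
            Map<String, Tag> values = new HashMap<String, Tag>();
            values.put("chunkX", new IntTag("chunkX", chunkX));
            values.put("chunkY", new IntTag("chunkY", chunkY));
            values.put("blocks", new ByteArrayTag("blocks", blocks));
            CompoundTag root = new CompoundTag("chunk", values);

            NBTOutputStream out = new NBTOutputStream(new FileOutputStream(fileName));
            try {
                out.writeTag(root);
            } finally {
                out.close(); // jnbt's NBTOutputStream gzips the data as it writes
            }
        }
    }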

I hope this helps you out.

Also, if you are using an algorithm for generating the world that can be reproduced exactly from something like a seed, then you shouldn't save all of your world data. Save the seed and save the changes made to the world. Everything you can derive from the seed isn't worth saving. Unless, of course, your algorithm is so expensive that it's more valuable to save everything to disk. Then it is a matter of memory vs. runtime speed, but I doubt this will be the case.
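
A minimal sketch of that idea (the class and method names here are made up for illustration): keep the seed plus only the blocks the player changed, and re-apply those changes after regenerating a chunk from the seed.

    import java.util.HashMap;
    import java.util.Map;

    public class WorldDelta {
        private final long seed;
        // key = "chunkX,chunkY,x,y", value = block id the player placed there
        private final Map<String, Byte> changedBlocks = new HashMap<String, Byte>();

        public WorldDelta(long seed) {
            this.seed = seed;
        }

        public void recordChange(int chunkX, int chunkY, int x, int y, byte blockId) {
            changedBlocks.put(chunkX + "," + chunkY + "," + x + "," + y, blockId);
        }

        /** Regenerate from the seed, then overwrite only the recorded changes. */
        public byte blockAt(int chunkX, int chunkY, int x, int y) {
            Byte changed = changedBlocks.get(chunkX + "," + chunkY + "," + x + "," + y);
            if (changed != null) {
                return changed;
            }
            return generateBlock(seed, chunkX, chunkY, x, y);
        }

        private byte generateBlock(long seed, int chunkX, int chunkY, int x, int y) {
            return 0; // placeholder for whatever deterministic noise/terrain function the game uses
        }
    }

Only the seed and the changedBlocks map would need to be written to disk.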

Can you link any good tutorials on how to use it? It sounds promising :slight_smile: Also, I used to do Minecraft plugins, don't know if that will help at all. Hopefully I get it to work =D

http://wiki.vg/NBT
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#safe=off&q=nbt%20file%20format
http://jnbt.sourceforge.net/

Found within seconds after a Google search for ‘NBT file format’.

It's funny you posted this question, because I implemented this in my game yesterday in about an hour. I used a different method called RLE (Run-Length Encoding) to compress the amount of data being saved to the file. What I did was give each chunk of the world its own save file, with the name of the file being “chunk_”. It goes through each row of the chunk's tiles and applies the RLE compression method.

The way RLE compression works is that it takes a repeated pattern and compresses it for you. There are many ways to implement and interpret the method, but I'll explain my implementation. Say you have 10 blocks with ID 5 in a row. Instead of formatting those 10 blocks as “5 5 5 5 5 5 5 5 5 5”, where each number is the ID of a block, it instead formats them as “5-10”, where the first number is the ID and the second number is the length of the run. So a row that is “5 5 5 3 3 4 1 1 1 1 1 1” would instead be saved as “5-3 3-2 4-1 1-6”. This works very well for large chunks.

So my implementation does that for each row, and each line in the saved file corresponds to one row of the chunk. Here is my code:


    public static void saveChunk(String worldFileName, Chunk chunk) {

        //.... I load the file and some other stuff up here. That's also why there is a try/catch

        try {
            String totalToken = Chunk.CHUNK_SIZE + " " + Chunk.CHUNK_SIZE + "\n"; //First line is the width and the height of the chunk

            for (int y = 0; y < Chunk.CHUNK_SIZE; y++) { //Go through each row
                BlockType lastType = null;
                int sameCount = 0;
                for (int x = 0; x < Chunk.CHUNK_SIZE; x++) { //Go through each column
                    BlockType type = chunk.getBlock(x, y); //Get the block type

                    if (type == null) //In case it is null, which it is in newly generated chunks
                        type = BlockType.AIR;

                    if (lastType == null) //First iteration: only the first run will be null
                        lastType = type;

                    if (type == lastType) { //We're still counting the same blocktype
                        sameCount++;
                    } else if (x != 0) { //We found a new blocktype. Save in format <id>-<count>
                        totalToken += lastType.getId() + "-" + sameCount + " "; //Add it to the row
                        sameCount = 1; //Restart the count
                        lastType = type; //Set the last type to our new type
                    }
                }
                if (sameCount != 0) { //If there is a run still being counted at the end of the row, before going to the next...
                    totalToken += lastType.getId() + "-" + sameCount + "\n"; //Add it to the row
                } else {
                    totalToken += "\n"; //Add the new line and continue to the next row
                }
            }
            handle.writeString(totalToken, false); //Write to file and don't append
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

When loading, it is just the opposite:


    public static Chunk loadChunk(FileHandle handle, int x, int y) {
        if (handle.exists()) {
            Chunk chunk = new Chunk(x, y); //Create a new chunk

            String[] rows = handle.readString().split("\n"); //Get all rows

            String[] dimensions = rows[0].split(" "); //Get the dimensions from the first line
            int width = Integer.parseInt(dimensions[0]); //get the width
            int height = Integer.parseInt(dimensions[1]); //get the height

            for (int i = 0; i < height; i++) { //Go through each row
                String[] rowData = rows[i + 1].split(" "); //Get each column information. + 1 so that we skip the dimension row
                int xOffset = 0; //The offset from the last data count. Keeps track of what column (x) to set the block to
                for (int j = 0; j < rowData.length; j++) { //Go through each column entry
                    String[] data = rowData[j].split("-"); //Split the id and count
                    int id = Integer.parseInt(data[0]); //Get the ID
                    int count = Integer.parseInt(data[1]); //Get the count
                    for (int k = 0; k < count; k++) {
                        chunk.setBlock(BlockType.values()[id], xOffset + k, i); //Set the block
                    }
                    xOffset += count; //Add the column count
                }
            }
            chunk.setGenerated(true);
            return chunk;
        }
        return null; //Chunk file doesn't exist
    }

Hopefully this helps out!

If you want compression you could also have the save file be a zip archive and each chunk (or group of chunks) as a ZipEntry. I wouldn’t bother with RLE.
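
A minimal sketch of that with the standard java.util.zip classes (how a chunk is turned into bytes and the entry names are assumptions):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.Deflater;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class ZipWorldSaver {

        /** Writes each chunk's serialized bytes as one entry in a single zip archive. */
        public static void saveWorld(String zipPath, byte[][] chunkData) throws IOException {
            ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(zipPath));
            zip.setLevel(Deflater.BEST_SPEED); // deflate's compression level is adjustable
            try {
                for (int i = 0; i < chunkData.length; i++) {
                    zip.putNextEntry(new ZipEntry("chunk_" + i)); // one entry per chunk
                    zip.write(chunkData[i]);                      // however you serialize a chunk
                    zip.closeEntry();
                }
            } finally {
                zip.close();
            }
        }
    }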

True. It really just depends on how expensive you want the saving method to be and then how compressed you want the data to be.

Which is why deflate (zip; and most compressors) has a variable compression level. :point:

You should be able to figure out yourself how this library works now. You create a tree structure
inside your code, because I think Chunk and Block are hierarchical in some sense. You give
each of these classes a save() method, which also calls the save() method of the level below it.
This way each class can add its own information, and all you have to do is use an NBTOutputStream
and write the result.
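
A minimal sketch of that pattern, assuming the org.jnbt tag classes (the Block/Chunk fields here are placeholders, not the poster's actual classes):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.jnbt.CompoundTag;
    import org.jnbt.IntTag;
    import org.jnbt.ListTag;
    import org.jnbt.Tag;

    class Block {
        int type, x, y;

        Tag save() {
            Map<String, Tag> values = new HashMap<String, Tag>();
            values.put("type", new IntTag("type", type));
            values.put("x", new IntTag("x", x));
            values.put("y", new IntTag("y", y));
            return new CompoundTag("block", values);
        }
    }

    class Chunk {
        int chunkX, chunkY;
        List<Block> blocks = new ArrayList<Block>();

        Tag save() {
            List<Tag> blockTags = new ArrayList<Tag>();
            for (Block b : blocks) {
                blockTags.add(b.save()); // each level delegates to the level below
            }
            Map<String, Tag> values = new HashMap<String, Tag>();
            values.put("chunkX", new IntTag("chunkX", chunkX));
            values.put("chunkY", new IntTag("chunkY", chunkY));
            values.put("blocks", new ListTag("blocks", CompoundTag.class, blockTags));
            return new CompoundTag("chunk", values); // hand the finished tree to an NBTOutputStream
        }
    }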

The compression is already done in NBT, so don’t worry about that. It’s better if you use this compression,
since I guess you need a fast compression rather than a good one.

This is my file format:

COMPOUND super
    COMPOUND chunk
        INT chunkX
        INT chunkZ
        LIST chunkData
            COMPOUND regionData
                BYTE_ARRAY subChunkData
                INT part

This may not be a good example, but maybe you can figure out why it is arranged like this.

Greetings

Will be reading all of this and seeing if I can get it to work =D