getting length of UDP packet

The Java docs state that if a message is longer than the packet's buffer it will be truncated. I want to check for possible truncation. The UDP header has a 16-bit length field, but I don't see any way to access the UDP header in Java. There is a getLength() method in the DatagramPacket class, but the docs are not clear on whether it returns the original packet length or the truncated packet length, i.e. it's not clear whether getLength() is simply reporting the length of the byte array that Java gets from the OS, or actually extracting the length field from the packet header.

Worst-case scenario, I can prepend 2 bytes to each of my packets containing the packet's length, which I can check against when the packet arrives at its destination. I'd like to avoid this if possible though, since the length is already in the UDP header, and sending it again adds a fair bit of overhead for small packets.
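
Something like this is what I have in mind; 'payload' is the outgoing message and 'packet' the received DatagramPacket, and the big-endian prefix layout is just for illustration:

// sender side: prepend the payload length as a 2-byte big-endian prefix
byte[] framed = new byte[payload.length + 2];
framed[0] = (byte)((payload.length >> 8) & 0xFF);
framed[1] = (byte)(payload.length & 0xFF);
System.arraycopy(payload, 0, framed, 2, payload.length);

// receiver side: compare the declared length against what actually arrived
byte[] buf = packet.getData();
int declared = ((buf[0] & 0xFF) << 8) | (buf[1] & 0xFF);
boolean truncated = declared != packet.getLength() - 2;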

I tried sending increasingly larger packets from my dev server to my local machine and calling getLength() on the DatagramPacket on my local machine to see if the values diverge. The server sends increasingly larger packets, starting at 100 bytes, until they exceed an unsigned 16-bit length, at which point it throws an exception. Oddly, my client stops receiving packets once they hit 3,000 bytes in length. They leave the server, but they never make it to my client application. I'm not sure whether Windows is discarding them, Java is, or they get lost en route. So I wasn't able to compare getLength() against what the server actually sends, because packets over 3,000 bytes simply stop showing up.
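
The sending loop is essentially this (the step size, address, and port here are illustrative, not my exact values):

DatagramSocket sock = new DatagramSocket();
InetAddress dest = InetAddress.getByName("192.168.0.10"); // my local machine
// grow the payload until the OS refuses to send it
for (int size = 100; ; size += 100) {
   byte[] buf = new byte[size];
   sock.send(new DatagramPacket(buf, buf.length, dest, 4444)); // throws past 64k
}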

~don

There is a limit of around 1400-1500 bytes for the whole packet (including the IP header), defined by the MTU. Sending bigger packets is possible, but they'll get fragmented into more than one packet, and if just one fragment is lost, the whole big packet is lost. It's also bad for performance, so don't use big packets. If you use them regularly, consider either using a normal TCP stream (there are some tunings to lower latency if required) or rethinking your network protocol.

So let's say you want to be safe, right? The packet can be 64k, minus some header bytes and a whole lot of bytes that will never get filled for practical reasons, as explained by jezek2.

Just do:


// 'socket' is an already-bound DatagramSocket
byte[] massive = new byte[64*1024];
DatagramPacket dp = new DatagramPacket(massive, massive.length);

while(true)
{
   // receive() shrinks dp's length to the last datagram's size,
   // so reset it before every call
   dp.setLength(massive.length);
   socket.receive(dp);
   byte[] tiny = new byte[dp.getLength()];
   System.arraycopy(massive, 0, tiny, 0, tiny.length);
   // process 'tiny' as usual
}
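
The dp.setLength(massive.length) reset matters: receive() overwrites the packet's length with the size of the datagram it just read, so without the reset every later receive would be silently truncated to that size. And since the 64k buffer is as big as any UDP datagram can be, nothing is truncated on the way in, which means getLength() really is the number of bytes that arrived.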

Almost.

MTU (1400 is the usual recommendation for the general internet) causes packets to be fragmented at the IP layer. They get automatically reassembled in that layer as well. By the time you receive them, you cannot tell that they were fragmented (unless your OS is broken; some used to be, and for a while there were some downright evil ways to remotely crash Windows and Mac PCs that made use of this).

The maximum byte size for UDP packets is much, much bigger than that: the 16-bit length field allows 65,535 bytes total, which leaves 65,507 bytes of payload after the 8-byte UDP header and 20-byte IPv4 header.

So, yes, there should be a performance hit when you go over the MTU (although if you go over it by a large enough amount, you probably won't notice the drop unless you're already seeing quite high packet loss). But it shouldn't stop packets arriving (unless you're dumping a heck of a lot of traffic and a router is deciding to shut you down).

Also, to the OP: get Wireshark, run it on both client and server, and find out what is ACTUALLY happening.

Without that, you have NO WAY of knowing whether the server really is sending what you think it's sending. You only know that something pretended to send what your high-level app asked to be sent.