SocketChannel vs DatagramSocket (UDP)

Hi,
a friend of mine has written a basic server and client using DatagramSocket, and then the same server and client again using a SocketChannel-style NIO channel, both over UDP.
The client connects to the server and sends a packet containing the result of System.nanoTime(); the server echoes the packet back, and the client then computes the delta between the time stored in the packet and the current time (something like a ping).
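For reference, a minimal sketch of that kind of test with a plain DatagramSocket might look like the following (the port number 9000, the single-threaded echo server, and the 8-byte packet layout are my own assumptions, not the actual code):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class UdpPing {
    public static void main(String[] args) throws Exception {
        // Echo server: receives one packet and sends it straight back to the sender.
        Thread server = new Thread(() -> {
            try (DatagramSocket sock = new DatagramSocket(9000)) {
                byte[] buf = new byte[8];
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                sock.receive(p);
                sock.send(p); // p now carries the sender's address/port, so this echoes back
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        server.start();
        Thread.sleep(100); // crude wait so the server socket is bound before the client sends

        // Client: send nanoTime(), receive the echo, compute the round-trip delta.
        try (DatagramSocket sock = new DatagramSocket()) {
            byte[] out = ByteBuffer.allocate(8).putLong(System.nanoTime()).array();
            sock.send(new DatagramPacket(out, out.length,
                    InetAddress.getByName("127.0.0.1"), 9000));

            byte[] in = new byte[8];
            DatagramPacket reply = new DatagramPacket(in, in.length);
            sock.receive(reply);
            long sent = ByteBuffer.wrap(in).getLong();
            System.out.println("round trip: " + (System.nanoTime() - sent) + " ns");
        }
    }
}
```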

The results are very strange. Everything runs on the same machine (a Linux system).
A real ping says:

ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_req=1 ttl=64 time=0.018 ms

The DatagramSocket version reports about 0.5 ms (500,000 ns).
The SocketChannel version reports about 2 ms (2,000,000 ns).

How is this possible? Shouldn't the SocketChannel version be faster? Also, 2 ms is really too much lag!
UDP doesn't have Nagle's algorithm, so what could cause such a big delay?

Thanks for the responses. I'll add the code tomorrow if I meet my friend, but writing it from scratch shouldn't take more than 30 minutes.
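In the meantime, just to make the comparison concrete: for UDP the NIO counterpart is usually a DatagramChannel (SocketChannel itself is TCP-only), and a blocking client loop might look roughly like the sketch below, again assuming the hypothetical port-9000 echo server from the earlier sketch; the real code may of course differ:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

public class NioUdpPingClient {
    public static void main(String[] args) throws Exception {
        try (DatagramChannel ch = DatagramChannel.open()) {
            ch.connect(new InetSocketAddress("127.0.0.1", 9000)); // same echo server as above

            ByteBuffer buf = ByteBuffer.allocate(8);
            buf.putLong(System.nanoTime());
            buf.flip();
            ch.write(buf); // connected channel: write/read instead of send/receive

            buf.clear();
            ch.read(buf); // blocks until the echo arrives (channels are blocking by default)
            buf.flip();
            System.out.println("round trip: " + (System.nanoTime() - buf.getLong()) + " ns");
        }
    }
}
```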

If a 2 ms ping is too much lag, then you should be working in high-frequency trading, because you're going to have to deal with far more than that in anything like real-world gaming. As for the discrepancy, are your client and server also communicating over loopback?

I know there is a lot of lag out there; that's exactly why I'm trying to cut it down wherever I can.

Yes, client and server are on the same machine, using loopback (127.0.0.1).

Random happenstance? Did you send multiple packets and average them? Testing against localhost is pretty meaningless when it comes to performance, since the algorithms' benefits and costs really only show up when traveling through a real network (where anywhere between 20 ms and 150 ms can be expected to be playable).
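To make the averaging suggestion concrete, here is a rough sketch that loops the round trip many times and discards the first iterations as JVM warm-up, so JIT compilation and first-packet setup don't dominate the result (the port 9000 echo server and the iteration counts are arbitrary assumptions):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class UdpPingAverage {
    public static void main(String[] args) throws Exception {
        int warmup = 1_000, measured = 10_000;
        long total = 0;
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.connect(InetAddress.getByName("127.0.0.1"), 9000); // assumed echo server
            byte[] buf = new byte[8];
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            for (int i = 0; i < warmup + measured; i++) {
                ByteBuffer.wrap(buf).putLong(System.nanoTime()); // timestamp goes in the packet
                sock.send(p);
                sock.receive(p);
                long rtt = System.nanoTime() - ByteBuffer.wrap(p.getData()).getLong();
                if (i >= warmup) total += rtt; // skip warm-up so JIT doesn't skew the average
            }
        }
        System.out.println("average round trip: " + (total / measured) + " ns");
    }
}
```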