Now everything works fine,
but I still have three other little questions:
1. What happens if I send packets from different threads and two or more threads send at the same time on the same port?
2. What happens if I receive a packet and send one at the same time on the same port?
I already read about it, but I am not sure if it causes “real” errors.
3. Why isn't it possible to add a received packet to an ArrayList without using
You can't send and receive at the same time, and that's not a problem.
I don't know why you would want multiple threads to be sending packets. You probably have a design flaw somewhere. It's possible, but not good practice - at least not since NIO.
You should be able to if it returns a suitable object.
Basically what happens is you add the packets you've created to a queue for the operating system to send.
It's not an issue, though. If you have other programs, or you need to differentiate between two connections (in most cases you don't need to), you just “make” a new connection by simply sending/receiving on another port.
When you send and receive data, you send that data in “packets”, one at a time. This means that your data can't be received halfway - either it all comes or it doesn't come at all (UDP).
As a side note: TCP works pretty much the same way, except TCP automatically checks that the data you send/receive 1) arrives and 2) arrives in the same order it was sent (this can lead TCP to backtrack for missing packets and cause lag).
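To make the “a datagram arrives whole or not at all” point concrete, here's a minimal self-contained sketch using `java.net.DatagramSocket`. It sends one datagram to itself over loopback (where delivery is reliable in practice, though UDP never guarantees it); the class and payload names are just for illustration:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpWholePacketDemo {
    public static void main(String[] args) throws Exception {
        // Receiver bound to an OS-assigned port on loopback
        DatagramSocket receiver = new DatagramSocket(0, InetAddress.getLoopbackAddress());
        DatagramSocket sender = new DatagramSocket();

        byte[] payload = "hello over UDP".getBytes(StandardCharsets.UTF_8);
        DatagramPacket out = new DatagramPacket(
                payload, payload.length,
                InetAddress.getLoopbackAddress(), receiver.getLocalPort());
        sender.send(out); // hands the datagram to the OS send queue

        byte[] buf = new byte[1024]; // must be big enough, or the datagram is truncated
        DatagramPacket in = new DatagramPacket(buf, buf.length);
        receiver.receive(in); // blocks until a whole datagram arrives (or forever, if it's lost)

        String msg = new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8);
        System.out.println(msg);

        sender.close();
        receiver.close();
    }
}
```

Note the receive buffer size: if the buffer is smaller than the datagram, the excess bytes are silently dropped, which is the truncation pitfall mentioned later in this thread.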
If you're making an MMO server (or a server-client application in general) you can, for example, differentiate between clients by their IP address. I.e., receive the packet, check who sent it, deal with the packet.
Your clients should only need to talk to the server.
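A sketch of that idea: since every `DatagramPacket` carries its sender's address and port, one server socket can tell clients apart without extra ports. Everything here (class name, the “ping” payload, counting packets per client) is made up for illustration:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class UdpClientDemux {
    public static void main(String[] args) throws Exception {
        DatagramSocket server = new DatagramSocket(0, InetAddress.getLoopbackAddress());
        Map<SocketAddress, Integer> packetsPerClient = new HashMap<>();

        // Two "clients": each DatagramSocket gets its own ephemeral port
        DatagramSocket clientA = new DatagramSocket();
        DatagramSocket clientB = new DatagramSocket();
        for (DatagramSocket client : new DatagramSocket[] { clientA, clientB }) {
            byte[] data = "ping".getBytes(StandardCharsets.UTF_8);
            client.send(new DatagramPacket(data, data.length,
                    InetAddress.getLoopbackAddress(), server.getLocalPort()));
        }

        byte[] buf = new byte[64];
        for (int i = 0; i < 2; i++) {
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            server.receive(p);
            // getSocketAddress() identifies the sender: IP address + port
            packetsPerClient.merge(p.getSocketAddress(), 1, Integer::sum);
        }
        System.out.println(packetsPerClient.size()); // two distinct senders seen

        clientA.close();
        clientB.close();
        server.close();
    }
}
```

In a real server the map would hold per-client state (position, session, etc.) keyed by the sender's `SocketAddress`.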
And about two threads receiving at the same time: hmm. Either the packets are shared inconsistently between them (if they listen on the same port) or the OS gives both of them a copy. Not sure. I want to try this out. Brb.
[EDIT]: Oh right, it's not possible, since the port will be in use (the operating system won't allow it). So you can receive at the same time, no problem, from different ports. In most cases you wouldn't need to. Why do you think you would need two ports for your program? We could perhaps point out a simpler and more effective way where you'd only need to use one port.
It's possible to send a datagram with no checksum, though, so I would certainly believe that corruption is possible. The checksum in UDP is not very good either, so smaller errors won't be noticed. Also, remember datagram truncation on some systems. That's a neat way to lose packet data.
UDP uses a 16-bit checksum, so in this case you don't need to worry about it. If a UDP packet fails the checksum, it will be dropped. I.e., it will look as if it never arrived at your program. So, if you're using UDP, you don't have to worry about it.
Actually lemme paste in here using google-fu:
[quote]UDP uses a 16-bit checksum. It is not impossible for it to have corruption, but it's pretty unlikely. In any case it is not more susceptible to corruption than TCP.
[/quote]
[quote]First of all, the “IP checksum” referenced above is only an IP header checksum. It does not protect the payload. See RFC 791
Secondly, UDP allows transport with NO checksum, which means that the 16-bit checksum is set to 0 (ie, none). See RFC 768. (An all zero transmitted checksum value means that the transmitter generated no checksum)
Thirdly, as others have mentioned, UDP has a 16-bit checkSUM, which is not the best way to detect a multi-bit error, but is not bad. It is certainly possible for an undetected error to sneak in, but very unlikely.
[/quote]
[quote]UDP uses a 16-bit optional checksum. Packets which fail the checksum test are dropped.
Assuming a perfect checksum, then 1 out of 65536 corrupt packets will not be noticed. Lower layers may have checksums (or even stronger methods, like 802.11’s forward error correction) as well. Assuming the lower layers pass a corrupt packet to IP every n packets (on average), and all the checksums are perfectly uncorrelated, then every 65536*n packets your application will see corruption.
Example: Assume the underlying layer also uses a 16-bit checksum, so one out of every 2^16 * 2^16 = 2^32 corrupt packets will pass through corrupted. If 1/100 packets are corrupted, then the app will see 1 corruption per 2^32*100 packets on average.
If we call that 1/(65536*n) number p, then you can calculate the chance of seeing no corruption at all as (1-p)^i where i is the number of packets sent. In the example, to get up to a 0.5% chance of seeing corruption, you need to send nearly 2.2 billion packets.
(Note: In the real world, the chance of corruption depends on both packet count and size. Also, none of these checksums are cryptographically secure, it is trivial for an attacker to corrupt a packet. The above is only for random corruptions.)
[/quote]
TLDR; don’t worry about it.
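The “2.2 billion packets” figure from that last quote can be checked with a few lines of arithmetic. This sketch just reproduces the quote's worked example (1% of packets corrupted, two independent 16-bit checksums), not a claim about any real network:

```java
import java.util.Locale;

public class UdpCorruptionOdds {
    public static void main(String[] args) {
        // Quote's assumptions: 1 in 100 packets is corrupted on the wire, and
        // two independent 16-bit checksums each miss 1 in 2^16 corruptions,
        // so 1 undetected corruption per 2^32 * 100 packets on average.
        double p = 1.0 / (Math.pow(2, 32) * 100);

        // Smallest i with 1 - (1-p)^i >= 0.005, i.e. a 0.5% chance of
        // seeing at least one undetected corruption among i packets.
        double i = Math.log(1 - 0.005) / Math.log(1 - p);

        System.out.printf(Locale.ROOT, "%.1f billion packets%n", i / 1e9);
    }
}
```

Which agrees with the quoted “nearly 2.2 billion packets”.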
In another thread (2) I add packets to receivedData.
If thread (1) calls “while(receivedData.size() > 0)” shortly after thread (2) added the packet, the element is null. It seems that thread (2) has just increased the size but hasn't added the object yet.
Should I lock the object until it is completely added?
Should I put a “Thread.sleep(0,5)” after “while(receivedData.size() > 0)”?
Or is it better to catch the errors and not care about some more packet loss?
Feel free to criticise if my approach is quite bad.
Thanks for the reply!
Why are you receiving data from the same connection on two different threads at the same time? This will only lead to the packets being split inconsistently between the threads.
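On the size()-is-ahead-of-the-element symptom: an unsynchronized ArrayList gives no visibility guarantees between threads, so polling size() and sleeping can't fix it. A thread-safe queue sidesteps the whole problem. A minimal sketch (receivedData here is a stand-in for the list in the question, with strings instead of real packets):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PacketQueueDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the shared receivedData ArrayList: a BlockingQueue
        // makes the hand-off between the two threads safe and visible.
        BlockingQueue<String> receivedData = new LinkedBlockingQueue<>();

        // Thread (2): the receiver, adding packets to the queue
        Thread receiver = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                receivedData.add("packet-" + i); // fully visible once added
            }
        });
        receiver.start();

        // Thread (1): the consumer
        for (int i = 0; i < 3; i++) {
            // take() blocks until an element exists - no size() polling,
            // no Thread.sleep(), no half-added elements.
            String packet = receivedData.take();
            System.out.println(packet);
        }
        receiver.join();
    }
}
```

With this, neither locking the list yourself nor deliberately dropping packets is needed; the queue does the synchronization.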