I am listening on a socket (S1) on one side and have created multiple instances of another socket (S2), to simulate concurrent users, through which I write data. The problem is that S2 writes too fast to S1 and flushes the data even before S1 can read it. Both are on high-capacity machines. Since S1 cannot get the request, it cannot proceed and throws an error. S1 is not being swamped, as it has enough resources and pooled threads to handle such conditions. This is totally baffling me, as I have tried all the tricks mentioned.
I would really appreciate your input on this.
>I am listening on a socket (S1) on one side and have created multiple instances of another socket (S2), to simulate concurrent users
Since you haven't mentioned the socket types, I'm assuming the most commonly used: TCP sockets. They are stateful, and byte ordering and delivery are guaranteed.
>The problem now is that S2 is writing too fast to S1 and flushing the data even before S1 can read
I don't know what that means. If delivery is guaranteed, the data that you write *will* reach the recipient. The OSI layers are designed for that. Flushing the data doesn't mean the data is wiped; it merely forces the output stream to write its buffer contents onto the stream, no matter how small the data may be. And if that data is written, the recipient *will* get it.
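To make the point concrete, here is a minimal sketch (the class and method names are mine, not from the thread) that opens a loopback TCP connection, flushes a line from a writer socket, and shows the reader still receives it. Flushing only pushes buffered bytes onto the wire:

```java
import java.io.*;
import java.net.*;

public class FlushDemo {
    // Write one line from an "S2" socket and read it back on "S1" over a
    // loopback connection; returns what S1 actually received.
    static String roundTrip(String msg) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            final String[] received = new String[1];
            Thread reader = new Thread(() -> {
                try (Socket s1 = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s1.getInputStream()))) {
                    received[0] = in.readLine(); // blocks until data arrives
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            reader.start();

            try (Socket s2 = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(s2.getOutputStream())) {
                out.println(msg);
                out.flush(); // pushes the buffer onto the wire, nothing more
            }
            reader.join();
            return received[0];
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("S1 read: " + roundTrip("hello"));
    }
}
```

No matter how quickly S2 writes and flushes, the bytes sit in the kernel's receive buffer until S1's blocking read picks them up.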
>They are on high capacity machines
>Since S1 cannot get the request, it cannot proceed and throws an error.
What "request" can S1 not get? Or do you mean the data written by S2? Again, a TCP connection guarantees delivery...
>This is totally baffling me as I tried all the tricks mentioned
There are no tricks. For socket communication, you have to ensure that the client and server agree on some sort of protocol, and are implemented such that they actually follow it.
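A common way to do that is a length-prefixed framing protocol: every message is a 4-byte length followed by that many payload bytes, so the reader always knows exactly how much to read before acting. This is a sketch of that idea (the `Framing` class and method names are hypothetical, not from the thread):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class Framing {
    // Each message on the wire: 4-byte big-endian length, then payload.
    static void writeMessage(DataOutputStream out, String msg) throws IOException {
        byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    static String readMessage(DataInputStream in) throws IOException {
        int len = in.readInt();          // blocks until the length arrives
        byte[] payload = new byte[len];
        in.readFully(payload);           // blocks until the whole body arrives
        return new String(payload, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // In-memory round trip; over a socket you would wrap
        // socket.getOutputStream() / socket.getInputStream() instead.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeMessage(new DataOutputStream(buf), "ping");
        String got = readMessage(new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray())));
        System.out.println("received: " + got);
    }
}
```

With framing like this, "writing too fast" cannot split or lose a request: `readFully` simply blocks until the whole message has arrived.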
I have a protocol in place. The problem is that not all the sent requests are read: some are and some are not, because the socket closes before the read operation is done, so I get a socket closed exception.
I am trying to figure out why the socket closes before the data can be read.
Well, AFAIK, if you don't close the socket explicitly, it won't be closed. Maybe you could look at your code again to see if you are closing the sockets on receiving some special characters (in your protocol) or on some event..?
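One subtle way to close a socket "implicitly" in Java: closing any stream wrapped around `socket.getOutputStream()` (or `getInputStream()`) closes the socket itself, per the `Socket` documentation. So a try-with-resources or a tidy-minded `close()` on a writer can kill the connection for you. A minimal sketch demonstrating this (class and method names are mine):

```java
import java.io.*;
import java.net.*;

public class AccidentalClose {
    // Returns whether the socket is closed after only the wrapper
    // stream was closed.
    static boolean wrapperCloseClosesSocket() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket("localhost", server.getLocalPort());
            Socket accepted = server.accept();

            PrintWriter out = new PrintWriter(client.getOutputStream());
            out.close();                        // closes the wrapped stream...
            boolean closed = client.isClosed(); // ...and the socket with it

            accepted.close();
            return closed;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("socket closed after writer.close(): "
                + wrapperCloseClosesSocket());
    }
}
```

If your per-request handler closes its writer when it finishes, the next read on that socket will fail exactly the way you describe.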
You need to sniff your traffic. Try ethereal.com (on Windows),
or tcpdump your server NIC, then load the capture into Ethereal for interpretation.
You might have an IP- or TCP-layer problem, like the server or client sending RST or FIN when not expected. Or bad firewall software, or whatever else is in the way.
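A sketch of the tcpdump route (the interface name, port, and file name are placeholders, not from the thread; capturing usually needs root):

```shell
# Capture full packets on the server NIC and save them to a file
# that Ethereal can open. Substitute your own interface and port.
tcpdump -i eth0 -s 0 -w capture.pcap tcp port 9000
```

In the capture, look at which side sends the FIN or RST, and when, relative to the last request your client wrote.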
Like the other guy says, if you don't close it, something else does.
And if you use those crappy XML writers over a socket stream, get ready to see it closed, because there are known bugs where XML writers close streams they haven't opened (oh, I so hate XML...)
I don't know about the state of the JDK concerning half-closed sockets; that might be another track to follow.
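For the record, the JDK does support half-close via `Socket.shutdownOutput()`: the caller sends a FIN so the peer sees end-of-stream, but can still read the peer's reply. A minimal sketch (names are mine, not from the thread):

```java
import java.io.*;
import java.net.*;

public class HalfClose {
    // Client sends one line, half-closes its output, and can still
    // read the server's echo afterwards.
    static String echoWithHalfClose(String msg) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept()) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(s.getInputStream()));
                    String line = in.readLine();
                    PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                    out.println("echo: " + line); // auto-flushed by println
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            serverThread.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                out.println(msg);
                client.shutdownOutput(); // sends FIN; read side stays open
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                String reply = in.readLine();
                serverThread.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoWithHalfClose("hi"));
    }
}
```

If one side treats the FIN from `shutdownOutput()` (or a wrapper-stream close) as a full close and tears the connection down, that would produce exactly the "closed before the read" symptom.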