Why will my C socket file transfer (server/client) program run correctly only once?
This is my first time posting on Stack Overflow; I apologize if I have not followed protocol correctly.
I have a simple C socket program with a client and server component. The program sends a file from the client on one VM to the server on another VM. The program works and the file sends successfully the first time.
However, when I try to run the program a second time, the file does not seem to be transferred. Through investigation, I have found that, after rebooting the VMs, the program works again. Why is this happening?
Here is the server code:
/* Server code */
/* TODO : Modify to meet your need */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define PORT_NUMBER 5000
#define SERVER_ADDRESS "10.20.20.55"
#define FILENAME "/home/saul/M2.py"

int main(int argc, char **argv)
{
    struct sockaddr_in server_addr;
    struct sockaddr_in peer_addr;
    int server_socket;
    int sent_bytes = 0;
    struct stat file_stat;
    int closed = 0;
    int fclosed = 0;

    /* Create server socket */
    server_socket = socket(AF_INET, SOCK_STREAM, 0);
    if (server_socket == -1) {
        fprintf(stderr, "Error creating socket --> %s", strerror(errno));
        exit(EXIT_FAILURE);
    }
    printf("Socket Created Successfully: %d\n", server_socket);

    /* Zeroing server_addr */
    memset(&server_addr, 0, sizeof(server_addr));
In the first (successful) case, the data is not read by the first recv(), but by the call inside the while() condition.
In the second (unsuccessful) case, the header and the data are all read in the first recv() call, so the recv() in the while() condition returns 0 and the loop body is never executed.
It is not entirely clear to me how the protocol is defined. If your header is always 512 bytes (which seems to be the case from your output), it might help to read only 512 bytes in the first call to recv():
len = recv(peer_socket, buffer, 512, 0);
But you still have to make sure that all 512 bytes were actually read (and loop until you get the rest if not); otherwise the stream will get out of sync.
The bottom line is:
Never expect data from a stream socket to be chunked in a certain way when receiving. Always specify how many bytes you want to read, and check whether that number of bytes was actually read (calling read() again if not).
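That rule can be sketched as a small helper that keeps calling recv() until exactly the requested number of bytes has arrived. The name recv_all is illustrative, not from the original code:

#include <sys/socket.h>
#include <sys/types.h>

/* Keep calling recv() until exactly `len` bytes have arrived.
 * Returns 0 on success, -1 on error or premature end of stream. */
static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)               /* error, or peer closed early */
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

With this, the server first reads exactly the 512-byte header, then exactly the announced file size, and it no longer matters how the kernel chunks the incoming stream.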
It looks like a race condition. More precisely, it works the first time only by chance: initialization delays cause the two client writes to be read separately on the server. On subsequent runs, both writes are consumed by the server's first read.
The key is here: 705 = 512 (header size) + 193 (file size).
You read up to BUFSIZ bytes, but the client sends only 512 bytes for the header part. If the client is fast enough, it has already queued everything, so in the first read the file data is concatenated after the 512 header bytes. Nothing is then left for the following reads: you immediately reach end of file and exit the receiving loop.
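This coalescing is easy to reproduce locally. A minimal sketch using a Unix-domain socketpair (an assumption for the demo; the original program uses TCP, but stream semantics are the same): two separate writes of 512 and 193 bytes are queued before the reader calls recv(), so a single large recv() picks up all 705 bytes at once.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return 1;

    char header[512];  memset(header, 'H', sizeof header);
    char payload[193]; memset(payload, 'D', sizeof payload);

    /* Two separate writes, mimicking "header" then "file data". */
    send(sv[0], header, sizeof header, 0);
    send(sv[0], payload, sizeof payload, 0);

    /* One large read: both writes are already queued in the kernel
     * buffer, so a single recv() returns all 705 bytes. */
    char buf[8192];
    ssize_t n = recv(sv[1], buf, sizeof buf, 0);
    printf("received %zd bytes in one recv()\n", n);

    close(sv[0]);
    close(sv[1]);
    return 0;
}

On the first run of the original program, the client was slow enough that the server's first recv() saw only the 512-byte header; on later runs both chunks were already queued, exactly as in this sketch.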