Sunday 18 January 2009

TCP send in Linux 2.6

tcp_sendmsg() copies the data from userspace, building socket buffers and calling skb_entail() on each packet. skb_entail() calls tcp_add_write_queue_tail() to add the buffer to the socket's sk_write_queue and sets sk_send_head to the packet if it is not set yet.
Note the difference between sk_write_queue and sk_send_head: send_head denotes the first packet which has not yet been handed to the lower layer (IP) for transmission, while all packets remain on the write_queue until they are ACKed (check tcp_ack() on the receive side).
In case the packet just added is the only one waiting for transmission (i.e. skb == sk_send_head), tcp_push_one() is called in order to advance sk_send_head and to call tcp_transmit_skb() on the packet; otherwise it is simply left enqueued.
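The bookkeeping around sk_write_queue and sk_send_head can be illustrated with a small user-space sketch. This is a toy model, not kernel code: the names mirror the kernel fields, but all the types and functions here are made up for illustration.

    #include <stdio.h>

    struct skb {
        struct skb *next;
        int seq;                        /* toy stand-in for sequence data */
    };

    struct sock_model {
        struct skb *write_queue_head;   /* sk_write_queue */
        struct skb *write_queue_tail;
        struct skb *send_head;          /* sk_send_head: first unsent skb */
        int retrans_timer_armed;        /* models the TCP_TIME_RETRANS timer */
    };

    /* skb_entail(): append to the write queue; if nothing else is
     * waiting to be sent, this skb becomes the new send_head. */
    static void entail(struct sock_model *sk, struct skb *skb)
    {
        skb->next = NULL;
        if (sk->write_queue_tail)
            sk->write_queue_tail->next = skb;
        else
            sk->write_queue_head = skb;
        sk->write_queue_tail = skb;
        if (!sk->send_head)
            sk->send_head = skb;
    }

    /* stands in for tcp_transmit_skb(): hand the skb to the IP layer */
    static void transmit(struct skb *skb)
    {
        printf("transmit seq=%d\n", skb->seq);
    }

    /* tcp_push_one(): transmit only the skb at send_head, then advance.
     * Note the skb stays on the write queue until it is ACKed. */
    static void push_one(struct sock_model *sk)
    {
        transmit(sk->send_head);
        sk->send_head = sk->send_head->next;
    }

    int main(void)
    {
        struct sock_model sk = { 0 };
        struct skb a = { .seq = 1 };

        entail(&sk, &a);
        if (sk.send_head == &a)   /* the only packet waiting: push it now */
            push_one(&sk);
        return 0;
    }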
Before returning the number of bytes copied from userspace, tcp_sendmsg() calls tcp_push(), which calls __tcp_push_pending_frames(), which calls tcp_write_xmit(), the general function for iterating over the packets from sk_send_head onward and calling tcp_transmit_skb() on each of them.
Both tcp_push_one() and tcp_write_xmit() call tcp_transmit_skb() for the actual transmission of a packet through the socket's icsk_af_ops->queue_xmit() function.
Both tcp_push_one() and tcp_write_xmit() also call tcp_event_new_data_sent(), which advances sk_send_head and sets up the socket's TCP_TIME_RETRANS timer if it is not running yet.
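Continuing the toy model above, the overall shape of the tcp_write_xmit() loop looks roughly like this (the real function also performs window and congestion checks, which are elided here):

    /* Toy tcp_write_xmit(): transmit everything from send_head onward.
     * After each transmission, do what tcp_event_new_data_sent() does:
     * advance sk_send_head and arm the retransmit timer if needed. */
    static void write_xmit(struct sock_model *sk)
    {
        while (sk->send_head) {
            transmit(sk->send_head);           /* tcp_transmit_skb() */
            sk->send_head = sk->send_head->next;
            if (!sk->retrans_timer_armed)
                sk->retrans_timer_armed = 1;   /* TCP_TIME_RETRANS */
        }
    }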

TCP receive in Linux 2.6

There are two ways for the kernel to receive packets from the NIC: through interrupts or by polling the interface.
When receiving through interrupts, the netif_rx() function is called with the sk_buff; it enqueues the packet on the per-CPU softnet_data's input_pkt_queue and schedules the NET_RX_SOFTIRQ softirq. (For network cards that would generate too many interrupts, the driver can register a poll function and switch interrupts off, using the NAPI interface.)
The NET_RX_SOFTIRQ handler, net_rx_action(), iterates over the per-processor softnet_data's poll queue - on which the backlog pseudo-device is present as well - and calls each poll function. The backlog device's poll function, process_backlog(), is actually the one which processes softnet_data's input_pkt_queue and pushes the sk_buff packets to the upper layers by calling netif_receive_skb() on each packet.
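The interrupt/softirq split can again be modelled with a self-contained toy sketch: the interrupt side only enqueues the packet and raises a flag, and the softirq side drains the queue later. All names here are made-up stand-ins for the kernel structures mentioned above.

    #include <stdio.h>

    #define QLEN 8

    struct pkt { int id; };

    static struct pkt queue[QLEN];      /* softnet_data.input_pkt_queue */
    static int head, tail;              /* toy ring indices (overflow ignored) */
    static int rx_softirq_pending;      /* models NET_RX_SOFTIRQ being raised */

    /* netif_rx() stand-in: runs in "interrupt context", only enqueues */
    static void netif_rx_model(struct pkt p)
    {
        queue[tail++ % QLEN] = p;
        rx_softirq_pending = 1;         /* schedule NET_RX_SOFTIRQ */
    }

    /* netif_receive_skb() stand-in: hand the packet to the upper layers */
    static void receive_skb_model(struct pkt p)
    {
        printf("deliver packet %d to upper layers\n", p.id);
    }

    /* process_backlog() stand-in: runs later, in "softirq context" */
    static void process_backlog_model(void)
    {
        while (head != tail)
            receive_skb_model(queue[head++ % QLEN]);
        rx_softirq_pending = 0;
    }

    int main(void)
    {
        netif_rx_model((struct pkt){ .id = 1 });
        netif_rx_model((struct pkt){ .id = 2 });
        if (rx_softirq_pending)         /* net_rx_action() polls the device */
            process_backlog_model();
        return 0;
    }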
netif_receive_skb() iterates over the registered packet handlers matching the protocol type and calls deliver_skb(), which in turn calls the handler's "func" function; IP's packet handler function is ip_rcv().
ip_rcv() eventually calls ip_rcv_finish(), which calls ip_route_input() on the skb in order to fill in the dst field. The dst field is an "rtable" struct which embeds a "dst_entry" struct. The dst_entry's input field is a function pointer which is set to ip_local_deliver() in case of a packet that has to be delivered locally.
There is a global inet_protos array (a hash) of "net_protocol" structs which represents the registered transport layer protocols; TCP's net_protocol instance is called tcp_protocol. ip_local_deliver() dereferences the protocol array and calls its handler function; TCP's handler is tcp_v4_rcv().
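The dispatch through inet_protos amounts to a table lookup by IP protocol number. A toy version (a plain array of function pointers, where the kernel uses a small hash of net_protocol structs) could look like this:

    #include <stdio.h>

    #define TOY_IPPROTO_TCP 6           /* IP protocol number of TCP */

    typedef void (*proto_handler)(const char *payload);

    /* toy inet_protos: indexed directly by protocol number */
    static proto_handler inet_protos_model[256];

    /* tcp_v4_rcv() stand-in */
    static void tcp_v4_rcv_model(const char *payload)
    {
        printf("tcp_v4_rcv: %s\n", payload);
    }

    /* ip_local_deliver() stand-in: look up the transport protocol
     * handler and pass the segment up */
    static void ip_local_deliver_model(int protocol, const char *payload)
    {
        proto_handler handler = inet_protos_model[protocol];

        if (handler)
            handler(payload);
    }

    int main(void)
    {
        /* registration, as done once for tcp_protocol at boot */
        inet_protos_model[TOY_IPPROTO_TCP] = tcp_v4_rcv_model;
        ip_local_deliver_model(TOY_IPPROTO_TCP, "segment");
        return 0;
    }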
A socket has three receive queues: the sk_backlog, the ucopy.prequeue and the main sk_receive_queue.
The three queues have the following purposes: the prequeue is responsible for deferring in-order data processing to process context in case a process is waiting on the socket (the so-called fast path); the receive queue is the standard way of receiving packets if no reader process is waiting when the packet arrives; and the backlog is used to temporarily store received packets while a reader is processing the receive queue.
tcp_v4_rcv() calls tcp_rcv_established() if the connection is established, and puts a socket buffer on the prequeue if the socket is not locked but there is a user process waiting to read (ucopy.task is set) and the socket buffer contains in-order data according to the expected sequence number.
The main purpose of the prequeue mechanism is to allow processing of socket buffers in process context and therefore decrease the amount of time spent in bottom-half (softirq) processing. The waiting process is woken up after the buffer is put on the queue. If there is no user process waiting, tcp_v4_do_rcv() is called directly in softirq context and the skb is placed on the main receive queue. If the socket is locked, the packet goes to the backlog queue. tcp_recvmsg() is the function called from process context in order to copy segments to user space. It processes the receive queue and/or installs itself as a waiter for data in case the queue is empty or the requested amount of bytes is not available yet. To do so it calls sk_wait_data(), which sleeps until tcp_v4_rcv() receives packets from the IP layer.
The backlog queue is also processed in tcp_recvmsg(), right before the socket lock is released after a read operation.
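The queue selection described above boils down to a small decision. Here is a toy model of it (hypothetical types and field names; the kernel performs these checks inside tcp_v4_rcv() and drains the backlog when the socket lock is released):

    /* Toy model of the queue choice on the receive side.  The field
     * names only mimic the kernel's; the checks happen in tcp_v4_rcv(). */
    enum rx_queue { RECEIVE_QUEUE, PREQUEUE, BACKLOG };

    struct tcp_sock_model {
        int locked;             /* socket lock held by a process */
        int reader_waiting;     /* models ucopy.task being set */
    };

    static enum rx_queue classify(const struct tcp_sock_model *sk, int in_order)
    {
        if (sk->locked)
            return BACKLOG;         /* drained when the lock is released */
        if (sk->reader_waiting && in_order)
            return PREQUEUE;        /* processed later, in process context */
        return RECEIVE_QUEUE;       /* tcp_v4_do_rcv() right now, in softirq */
    }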
Note that tcp_v4_do_rcv() can be called either in softirq or in process context. In process context it is responsible for the data transfer to userspace, while in softirq context it places the buffer on the receive queue. tcp_v4_do_rcv() is called in process context through the sk_backlog_rcv field of the socket when the prequeue is iterated in tcp_prequeue_process().
Note that tcp_rcv_established() calls tcp_ack() as well, which cleans the socket's write_queue (i.e. the retransmit queue) according to the ACK received in the packet (check the send side above).
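Reusing the sock_model types from the send-side sketch, the cleanup triggered by tcp_ack() can be modelled as unlinking every fully acknowledged skb from the head of the write queue; the toy seq field stands in for real sequence-number arithmetic:

    /* Toy version of the write-queue cleanup on ACK reception: drop
     * every skb covered by the cumulative ACK, since it needs no
     * retransmission anymore.  Cleaning stops at send_head, because
     * unsent data cannot have been ACKed. */
    static void clean_write_queue(struct sock_model *sk, int ack_seq)
    {
        while (sk->write_queue_head &&
               sk->write_queue_head != sk->send_head &&
               sk->write_queue_head->seq <= ack_seq) {
            struct skb *acked = sk->write_queue_head;
            sk->write_queue_head = acked->next;
            if (!sk->write_queue_head)
                sk->write_queue_tail = NULL;
            (void)acked;            /* the kernel would free the skb here */
        }
    }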