System architecture

While current protocols place state at both endpoints, Trickles encapsulates the server-side state into continuations and ships those continuations to the client. The client returns this state to the server with each request, allowing the server to reconstruct its state and generate the requested data.

Both the transport layer and the server application contain state that is encapsulated within continuations. Transport continuations are used by the congestion control algorithm: the transport control block (TCB) variables enable the server to reconstruct the congestion control state, and selective acknowledgments (SACKs) provide the link status information to consider when computing a state update.

Application continuations are constructed and used by the server application to assist in generating application data to return to the client. They contain arbitrary user-defined data, which can be used to resume or optimize a computation. Examples of the former include the current state of an authentication protocol and the negotiated connection keys of an SSL connection (protected by a secret known only to the server); an example of the latter is shortcut information for finding a particular object, such as a file system inode.

Transport continuation fields:

  - TCB variables
  - Server-side RTT estimation
  - mac: Message Authentication Code
  - Trickles state machine state
  - { start, end, rangeNonce }: SACK chunk (one or more)
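As a rough illustration of the idea, a transport continuation can be modeled as the bundled fields above plus a MAC computed with a server-only key, so that a stateless server can check that state returned by the client is one it previously issued. This is a minimal sketch: the field names, JSON serialization, and HMAC-SHA256 choice are illustrative assumptions, not the Trickles wire format.

```python
import hmac
import hashlib
import json

SERVER_KEY = b"server-secret"  # known only to the server (assumption)

def make_transport_continuation(tcb, state, sacks):
    """Bundle TCB variables, state-machine state, and SACK chunks,
    then MAC the whole bundle so the server can trust it later."""
    body = {"tcb": tcb, "state": state, "sacks": sacks}
    blob = json.dumps(body, sort_keys=True).encode()
    mac = hmac.new(SERVER_KEY, blob, hashlib.sha256).hexdigest()
    return {"body": body, "mac": mac}

def verify_continuation(cont):
    """Recompute the MAC over the body; reject forged or tampered state."""
    blob = json.dumps(cont["body"], sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cont["mac"])
```

Because the client cannot produce a valid MAC without the server key, it can store and replay continuations but cannot alter them, which is what lets the server discard its own copy of the state.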

Using continuations

Using continuations and generating continuation updates

Each Trickles request contains the continuations necessary to compute the response data. Likewise, each Trickles response packet contains the updated transport continuation resulting from the congestion control actions, and the updated user continuation resulting from generating the requested data.

To bootstrap the client's set of continuations, the initial continuation is sent to the client on the SYN/ACK packet:

Bootstrapping the client's continuation set

In the example below, the server converts an HTTP request into a user continuation containing the inode of the requested file. The inode continuation is attached to requests for subsequent parts of the file, where the server application uses the inode to accelerate data access.
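This pattern can be sketched as follows; the in-memory tables and function signature are hypothetical stand-ins for the file system and the server request loop, not the actual Trickles API.

```python
# Hypothetical stand-ins for the file system (assumptions for this sketch).
INODE_TABLE = {"/index.html": 42}          # path -> inode number
FILE_DATA = {42: b"<html>hello</html>"}    # inode -> file contents

def handle_request(path, offset, user_continuation=None):
    """First request: resolve the path to an inode and return the inode
    in a user continuation. Subsequent requests carry the continuation,
    so the server skips the path lookup entirely."""
    if user_continuation is None:
        inode = INODE_TABLE[path]           # full path resolution, once
    else:
        inode = user_continuation["inode"]  # shortcut from continuation
    data = FILE_DATA[inode][offset:offset + 8]
    return data, {"inode": inode}
```

A client would call `handle_request("/index.html", 0)` once, then pass the returned continuation with each later offset; in a real deployment the continuation would also be MAC-protected, since the server must not trust a client-supplied inode blindly.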

The trickle abstraction

The trickle abstraction provides a model for reasoning about the control flow and data flow in a stateless server. A trickle is a sequence of corresponding requests and responses. Since a server does not remember state between packets, information flows only forward along a trickle. The packets of a connection can be partitioned into a set of request/response pairs. Note that, at any point, there are as many active trickles as packets in flight. Thus, to control cwnd, it suffices to control the number of trickles.

When a server receives a request, it can maintain or adjust the current number of trickles. To maintain the current number, the server continues the trickle by generating exactly one response.

To increase the number of trickles, the server splits the trickle by generating multiple response trickles:

Splitting a trickle

To decrease the number of trickles, the server terminates the trickle by generating no responses:

Terminating a trickle

Therefore, to implement a window-based congestion control algorithm, it suffices to control the number of trickles by determining when to split and terminate.
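The three cases (continue, split, terminate) can be collapsed into a single decision function, sketched below; the signature is an illustrative assumption. Each incoming request consumes one trickle, so generating `1 + delta` responses moves the trickle count by `delta`.

```python
def respond(current_trickles, target_trickles):
    """Number of responses to generate for one incoming request.
    1  -> continue the trickle (window unchanged)
    >1 -> split the trickle   (window grows)
    0  -> terminate it        (window shrinks)."""
    delta = target_trickles - current_trickles
    return max(0, 1 + delta)
```

For example, growing from 4 to 6 trickles means answering one request with 3 responses (one continuation plus two splits), while shrinking from 4 to 3 means answering it with none.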

Challenges in stateless congestion control

The stateless continuation passing processing model fundamentally constrains when data is available to the server. As statelessness precludes the server from directly propagating information between requests, the client is responsible for fusing information from different server responses.

Thus, state computed in response to a request becomes available to the server only after one RTT, when the client sends it back. In other words, while processing request k, the results of the immediately preceding cwnd - 1 requests are not yet available.
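A toy pipeline simulation (an illustration, not the Trickles implementation) makes this concrete: with cwnd packets kept in flight, the result of request i returns just before request i + cwnd is sent, so the server handling request k sees nothing newer than request k - cwnd.

```python
def results_visible_at(k, cwnd):
    """Simulate a full pipe and return the set of request numbers whose
    results have round-tripped back by the time request k is handled."""
    in_flight = []
    for i in range(k + 1):
        if len(in_flight) == cwnd:   # oldest response has now returned
            in_flight.pop(0)
        in_flight.append(i)          # request i enters the pipe
    # everything no longer in flight has returned via the client
    return set(range(k + 1)) - set(in_flight)
```

With cwnd = 4, the server handling request 10 sees only requests 0 through 6: the preceding cwnd - 1 = 3 requests (7, 8, 9) are still in flight.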

As normally defined, TCP Reno violates these constraints. On each acknowledgment, TCP performs the following steps:

  1. Compute newCwnd
  2. Send newCwnd - cwnd + 1 packets
  3. cwnd := newCwnd

In this algorithm, cwnd must be available when processing the very next request. This violates the data flow constraint whenever more than one packet is in flight.

Stateless congestion control through TCP simulation

Trickles uses input and state inference, combined with TCP simulation, to resolve this problem. The server assumes that each request arrives in order; this assumption fixes a unique sequence of requests leading up to any request number k.

Trickles uses these assumptions to mimic the behavior of TCP. Conceptually, the inferred request sequence drives a simulated TCP state machine from the beginning of the connection to the current point; the number of response packets the simulated TCP would send determines how many times to split.

More concretely, the simulation yields the cwnd in effect after processing each request. Let TCPCwnd(k) be the function mapping request number k to this cwnd.

During slow start and congestion avoidance, Trickles congestion control reduces to keeping TCPCwnd(k) packets in flight. For a monotonically nondecreasing TCPCwnd(k), such as that of TCP Reno, this reduces to generating

  TCPCwnd(k) - TCPCwnd(k-1) + 1

responses to request k, which can be implemented by performing TCPCwnd(k) - TCPCwnd(k-1) splits.
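Under the monotonicity assumption, this bookkeeping can be sketched directly; `tcp_cwnd` below stands for any monotonically nondecreasing window function (a placeholder, not the paper's closed form), and the check verifies that answering each request this way keeps exactly TCPCwnd(k) packets in flight.

```python
def responses_for(k, tcp_cwnd):
    """Responses to generate for request k so that tcp_cwnd(k) packets
    remain in flight; tcp_cwnd must be monotonically nondecreasing."""
    return tcp_cwnd(k) - tcp_cwnd(k - 1) + 1

def check_in_flight(n, tcp_cwnd):
    """Each request consumes one in-flight packet and contributes
    responses_for(k) new ones; the total must track tcp_cwnd(k)."""
    in_flight = tcp_cwnd(0)          # initial window sent at startup
    for k in range(1, n):
        in_flight -= 1               # request k consumes one response
        in_flight += responses_for(k, tcp_cwnd)
        assert in_flight == tcp_cwnd(k)
    return True
```

When the window is flat, `responses_for` returns 1 (continue); when it grows by d, it returns d + 1 (d splits); a decreasing window would make it 0 (terminate).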

Of course, direct simulation is highly inefficient and impractical, since the time complexity is quadratic in the length of the connection. Instead, we derived a closed-form expression that captures TCP behavior under loss-free conditions, and can be evaluated in O(1) time.

Closed-form expression for TCPCwnd(k) under TCP Reno

Trickles decomposes a full TCP trace into loss-free epochs and applies a separate instance of the closed-form expression to each epoch. The free parameters TCPBase (the initial sequence number), initial cwnd, and ssthresh enable Trickles to adapt the expression to the initial conditions of each loss-free epoch.
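The paper's exact closed form is not reproduced here, but its feasibility can be illustrated with a simple loss-free Reno model (per-packet ACKs assumed): cwnd grows linearly with the ACK count during slow start and like a square root during congestion avoidance, since cwnd² gains roughly 2 per ACK when cwnd grows by 1/cwnd. The O(1) formula below tracks an ACK-by-ACK simulation closely; both are sketches of the technique, not the published expression.

```python
import math

def simulated_cwnd(k, c0, ssthresh):
    """Drive a tiny Reno model ACK by ACK: +1 per ACK in slow start,
    +1/cwnd per ACK in congestion avoidance (no losses)."""
    cwnd = float(c0)
    for _ in range(k):
        cwnd += 1.0 if cwnd < ssthresh else 1.0 / cwnd
    return cwnd

def closed_form_cwnd(k, c0, ssthresh):
    """Approximate O(1) closed form for the same model: linear growth
    until ssthresh, then sqrt growth (cwnd^2 gains ~2 per ACK)."""
    k0 = ssthresh - c0               # ACKs spent in slow start
    if k <= k0:
        return float(c0 + k)
    return math.sqrt(ssthresh ** 2 + 2 * (k - k0))
```

An epoch's initial cwnd and ssthresh play exactly the role of the free parameters above: plugging in each epoch's starting conditions re-anchors the formula without replaying the packet history.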