Tuesday, August 28, 2012

how rsync works

A nice, short description of how the rsync algorithm works is available at http://psteitz.blogspot.in/2012/01/rsync-how-it-works.html

If you are interested in the details, look at the technical paper on rsync (written by the rsync authors).
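
In essence, rsync avoids resending whole files by splitting the old file into fixed-size blocks and matching them against the new file using a cheap rolling checksum, backed by a stronger per-block hash. Below is a minimal sketch of such a rolling checksum in Java, modeled on the weak checksum described in the rsync paper; the class and method names are my own, and real rsync additionally confirms each weak match with a strong checksum.

// Sketch of rsync-style weak rolling checksum (names are illustrative).
public class RollingChecksum {
    private static final int MOD = 1 << 16;
    private int a, b;      // the two 16-bit halves of the checksum
    private int blockLen;  // window size

    // Compute the checksum of data[offset .. offset+len-1] from scratch.
    public void init(byte[] data, int offset, int len) {
        blockLen = len;
        a = 0;
        b = 0;
        for (int i = 0; i < len; i++) {
            int x = data[offset + i] & 0xFF;
            a = (a + x) % MOD;
            b = (b + (len - i) * x) % MOD;
        }
    }

    // Slide the window one byte: drop outByte, add inByte.
    // This O(1) update is what makes the checksum "rolling".
    public void roll(byte outByte, byte inByte) {
        int out = outByte & 0xFF, in = inByte & 0xFF;
        a = (a - out + in + MOD) % MOD;                      // keep non-negative
        b = (b - blockLen * out + a + MOD * blockLen) % MOD; // uses the updated a
    }

    public int value() { return (b << 16) | a; }
}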

Sunday, August 26, 2012

Latency vs Throughput

These two terms are often confused.

Latency is the time it takes to serve a single request. Throughput, on the other hand, measures the total number of requests served in a given unit of time.
For example, in the context of web servers, the time it takes to serve an HTTP request is the latency.
The number of HTTP client requests served per unit of time (second/hour/day, etc.) measures the throughput. Throughput can also be measured in bytes of data served per unit of time.
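
As a toy illustration (not a real benchmark), the sketch below times a simulated request handler and reports both metrics; handleRequest() and its 10 ms delay are stand-ins for real request processing.

import java.util.concurrent.TimeUnit;

public class LatencyVsThroughput {
    static void handleRequest() throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(10);   // pretend each request takes ~10 ms
    }

    public static void main(String[] args) throws InterruptedException {
        int requests = 100;
        long start = System.nanoTime();
        long totalLatencyNanos = 0;
        for (int i = 0; i < requests; i++) {
            long t0 = System.nanoTime();
            handleRequest();
            totalLatencyNanos += System.nanoTime() - t0;   // per-request latency
        }
        double elapsedSec = (System.nanoTime() - start) / 1e9;
        System.out.printf("avg latency : %.1f ms%n", totalLatencyNanos / 1e6 / requests);
        System.out.printf("throughput  : %.1f requests/sec%n", requests / elapsedSec);
    }
}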

The balance between latency and throughput depends on the application's requirements. Generally, applications strive for high throughput without letting latency suffer. For example, an e-commerce application should be able to serve a large number of customers (throughput) with minimum latency. On the other hand, an application doing batch processing of data, like log file analysis, is more interested in high throughput of data access.

Multiple factors influence latency, from how well the application is written to external factors like shared data access. Latency is lowest when there is no contention for shared resources while processing the request. The ideal case is a single thread processing a single request, but this is practically not feasible, as applications need to serve multiple requests concurrently and thus require higher throughput.

Throughput can be improved by increasing the number of threads running in the application, without adding much latency. If the CPU is already mostly utilized, any further increase in the number of threads will degrade throughput. Applications need to be tuned for optimum results.
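
The sketch below illustrates this trade-off under an assumed CPU-bound workload: the same batch of simulated requests is pushed through thread pools of increasing size, and throughput should climb until the pool size passes the number of available CPUs. The pool sizes and request counts are illustrative.

import java.util.concurrent.*;

public class ThroughputTuning {
    static void handleRequest() {
        // Simulated CPU-bound work; a real handler would mix CPU and I/O.
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) sum += i;
    }

    // Run the same workload on a pool of the given size, return requests/sec.
    static double measure(int threads, int requests) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) pool.submit(ThroughputTuning::handleRequest);
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return requests / ((System.nanoTime() - start) / 1e9);
    }

    public static void main(String[] args) throws InterruptedException {
        for (int threads : new int[] {1, 2, 4, 8, 16}) {
            System.out.printf("%2d threads -> %.1f requests/sec%n",
                              threads, measure(threads, 200));
        }
    }
}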

If the application is scalable, adding more machines to the system should ideally increase throughput proportionally without changing latency. Scalability refers to the capability of a system to increase its throughput under increased load when resources are added. Scalability constraints can be the data access layer or some shared resource which the application must access to serve a request.