Improving HTTP-Server Performance by Adapted Multithreading

J. Keller and O. Monien (Germany)


Keywords: Multithreading, Parallel Processing, Web Technologies, Performance Evaluation


It is well known that HTTP servers are easier to program using multithreading, i.e., each connection is handled by a separate thread. It is also known, e.g. from massively parallel programming, that multithreading can be used to hide the long latency of a remote memory access. However, the two techniques do not readily complement each other, because context switches do not necessarily occur when latencies would suggest them. We present an HTTP server that combines these two features: as soon as an event with a longer latency is encountered, e.g. the server cannot send all data within one buffer, the context is switched. The implementation replaces the threads in the HTTP library Indy with our own implementation that allows explicit context switches, as in fibers. We benchmark our HTTP server with the Apache benchmark against the original thread-based implementation, with different file sizes and different load levels. We find that if the server's network connection is fully utilized, our implementation needs about one third of the CPU time to handle the same throughput. If the network connection is not the bottleneck, our implementation achieves a 26% higher throughput, given 100% CPU utilization in both servers. Fiber-based multithreading is thus a technique similar to using assembler: it is not feasible on a large scale, but its use in a library provides enormous performance benefits transparently to the user.
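The core idea of the abstract, switching context explicitly at long-latency events rather than at arbitrary preemption points, can be sketched with cooperative fibers. The following Python sketch (all names and the buffer size are illustrative assumptions, not the paper's Indy-based implementation) models each connection handler as a generator that yields whenever its send buffer fills, handing control to a round-robin scheduler:

```python
from collections import deque

BUFFER_SIZE = 4  # hypothetical send-buffer capacity


def handle_connection(name, data, sent_log):
    # Fiber-style handler: send the response in buffer-sized chunks
    # and yield (an explicit context switch) whenever the buffer is
    # full, instead of blocking an entire OS thread on the send.
    for i in range(0, len(data), BUFFER_SIZE):
        sent_log.append((name, data[i:i + BUFFER_SIZE]))
        yield  # long-latency event encountered: switch context


def run_scheduler(fibers):
    # Round-robin scheduler: resume each fiber in turn until all
    # connections have sent their data.
    queue = deque(fibers)
    while queue:
        fiber = queue.popleft()
        try:
            next(fiber)          # resume the fiber
            queue.append(fiber)  # it yielded: reschedule it
        except StopIteration:
            pass                 # fiber finished its connection


sent = []
run_scheduler([
    handle_connection("A", "aaaaaaaa", sent),
    handle_connection("B", "bbbb", sent),
])
# Connection B's chunk is interleaved between A's two buffer flushes,
# because A explicitly yields after filling its send buffer.
```

Unlike preemptive threads, the switch here happens exactly at the latency point (the full buffer), which is what lets the server avoid paying for context switches that latencies do not warrant.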
