Whenever we add new code to a webserver, we place the server "under siege." First we stress the new URL(s), then we pound the server with a regression run in which the new URLs have been added to the configuration file. We want to see if the new code will stand on its own, and we want to see whether it will break anything else.

The following statistics were gleaned when I laid siege to a single URL on an HTTP server:

Transactions:            1000 hits
Elapsed time:          617.99 secs
Data transferred:     4848000 bytes
Response time:          59.41 secs
Transaction rate:        1.62 trans/sec
Throughput:           7844.79 bytes/sec
Concurrency:            96.14
Status code 200:         1000

In the above example, we simulated 100 users hitting the same URL 10 times each, a total of 1000 transactions. The elapsed time is measured from the first transaction to the last; in this case it took 617.99 seconds to hit the HTTP server 1000 times. During that run, siege received a total of 4848000 bytes, including headers. The response time is the sum of all transaction durations divided by the number of transactions, i.e. the average time it took to complete one transaction. The transaction rate is the number of transactions divided by the elapsed time. Throughput is the number of bytes received divided by the elapsed time. Concurrency is the sum of all transaction times divided by the elapsed time, which approximates the average number of simultaneous connections. The final statistic is Status code 200: the number of pages that were delivered without server errors.
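The relationships above can be checked against the sample run itself. The following sketch (not part of siege) recomputes the derived statistics from the raw numbers; the sum of transaction durations is recovered from the reported, rounded response time, so the last figure differs from the report by a rounding hair.

```python
# Recompute siege's derived statistics from the raw numbers in the sample run.
# All inputs come from the output shown above.

transactions = 1000                    # total hits
elapsed = 617.99                       # secs, first transaction to last
data = 4_848_000                       # bytes received, including headers
total_txn_time = 59.41 * transactions  # sum of transaction durations,
                                       # recovered from the rounded
                                       # response time in the report

response_time = total_txn_time / transactions  # average secs per transaction
transaction_rate = transactions / elapsed      # transactions per second
throughput = data / elapsed                    # bytes per second
concurrency = total_txn_time / elapsed         # avg simultaneous connections

print(f"Response time:    {response_time:.2f} secs")       # 59.41
print(f"Transaction rate: {transaction_rate:.2f} trans/sec")  # 1.62
print(f"Throughput:       {throughput:.2f} bytes/sec")     # 7844.79
print(f"Concurrency:      {concurrency:.2f}")              # ~96.13 vs 96.14
```

The concurrency figure comes out at roughly 96.13 rather than the reported 96.14 only because the response time was rounded to two places before we multiplied it back out.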

To create this example, I ran siege on my Sun workstation and pounded a GNU/Linux Intel box, essentially a workstation. The performance leaves a lot to be desired. One indication that the server is struggling is the high concurrency: the longer the transactions take, the higher the concurrency climbs. This server is taking a while to complete each transaction, and it continues to open new sockets to handle all the additional requests. In truth, the Linux box is suffering from a lack of RAM; it has about 200MB, hardly enough to handle one hundred concurrent users. :-)

Now that we've stressed the URL(s) individually, we can add them to our main configuration file and stress them along with the rest of the site. The default URLs file is /etc/siege/urls.txt.
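The urls.txt file is plain text with one URL per line; lines beginning with a hash mark are treated as comments. A sketch of what such a file might look like (the host and paths here are placeholders, not taken from the run above):

```
# urls.txt -- one URL per line; siege cycles through these
http://www.example.com/index.shtml
http://www.example.com/images/logo.gif
http://www.example.com/cgi-bin/guestbook.cgi
```

With the new URLs appended to this file, a full regression run exercises them alongside the rest of the site.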

Siege allows web systems administrators to see how their servers perform under duress. I recommend running server performance monitoring tools while the server is under siege to gauge your hardware/software configuration. The results can be surprising...

Siege was originally based on a perl script by Lincoln Stein, and if you cannot compile siege on your architecture, it is recommended that you run that excellent perl script instead. I intentionally modeled my statistics output after his in order to maintain a similar frame of reference.


Copyright © 2000, 2001, 2004 Jeffrey Fulmer, et al. <[email protected]>

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.


The most recent released version of siege is available by anonymous FTP in the directory pub/siege.
