You have the best hardware you can get, but the service is still crawling. What's wrong? Make sure you have a fast Internet connection: not necessarily as fast as your ISP claims it to be, but as fast as it should be. The ISP might have a very good connection to the Internet but put many clients on the same line. If these are heavy clients, your traffic will have to share the same line and your throughput will suffer. Consider a dedicated connection, and make sure it really is dedicated. Don't just take the ISP's word for it: check it yourself!
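One rough way to check the real throughput is to time the download of a large file from a well-connected host and do the arithmetic yourself. Here is a minimal sketch using LWP::UserAgent and Time::HiRes; the URL is a placeholder you would replace with a suitably large file hosted near a good backbone:

    #!/usr/bin/perl -w
    use strict;
    use LWP::UserAgent;
    use Time::HiRes qw(gettimeofday tv_interval);

    # placeholder URL: substitute a large file on a well-connected server
    my $url = 'http://example.com/large-file.bin';

    my $ua       = LWP::UserAgent->new;
    my $start    = [gettimeofday];
    my $response = $ua->get($url);
    my $elapsed  = tv_interval($start);

    die "failed to fetch $url: ", $response->status_line, "\n"
        unless $response->is_success;

    # the elapsed time includes connection setup, so treat the
    # result as a rough figure and repeat it at different times of day
    my $bytes = length $response->content;
    printf "fetched %d bytes in %.2f sec (%.1f KB/sec)\n",
        $bytes, $elapsed, $bytes / $elapsed / 1024;

If the numbers are consistently far below what you are paying for, it's time to have a conversation with your ISP.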
Another issue is connection latency. Latency is the time it takes a packet to reach its final destination, usually measured in milliseconds. This matters most when you do interactive work (via ssh or a similar protocol) on a remote machine: with high latency (400+ ms), working becomes very difficult. It is less of an issue for web services, since it delays only the first packet; the rest of the packets arrive without any extra delay.
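You can measure the round-trip latency to a given host yourself. The following is a minimal sketch using Net::Ping in TCP mode (which, unlike ICMP, requires no root privileges); the hostname is a placeholder:

    #!/usr/bin/perl -w
    use strict;
    use Net::Ping;

    # placeholder host: substitute the server you care about
    my $host = 'www.example.com';

    # a TCP "ping" just times a connection attempt to the given port
    my $p = Net::Ping->new('tcp');
    $p->port_number(80);
    $p->hires;    # report round-trip times with sub-second resolution

    for my $try (1 .. 5) {
        my ($ok, $rtt, $ip) = $p->ping($host, 2);
        if ($ok) {
            printf "reply from %s: %.1f ms\n", $ip, $rtt * 1000;
        }
        else {
            print "no reply (timeout or connection refused)\n";
        }
    }
    $p->close;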
The idea of having a connection to "the Internet" is a little misleading. Many web hosting and colocation companies have large amounts of bandwidth but still have poor connectivity. The public exchanges, such as MAE-East and MAE-West, frequently become overloaded, yet many ISPs depend on these exchanges.
Private peering is the solution used by the larger backbone operators. Instead of exchanging traffic at the public exchanges, each operator sets up private interconnections with the others. Private peering lets providers exchange traffic much more quickly.
Also, if your web site is of global interest, check that the ISP has good global connectivity. If the web site is going to be visited mostly by people in a certain country or region, your server should probably be located there.
Bad connectivity can directly influence your machine's performance. Here is a story one of the developers told on the mod_perl mailing list:
What does 10% packet loss on one upstream provider have to do with machine memory? A lot, as it turns out. For a nightmare week, the box was located downstream of a provider who was struggling with some serious bandwidth problems of his own. People were connecting to the site via this link, and packet loss was such that retransmits and TCP stalls were keeping heavy httpd processes around for much longer than normal. Instead of blasting out the data at high (or even modem) speeds, they would be stuck at 1K/sec or stalled out. People would press Stop and Refresh, and httpds would take 300 seconds to time out on writes to no one. It was a nightmare. Those problems didn't go away until I moved the box to a place closer to some decent backbones. Note that with a proxy, this ties up only a lightweight frontend httpd, assuming the page is small enough to fit in the proxy's buffers. If you run a busy Internet site, you always have some slow clients. This is a difficult thing to simulate in benchmark testing, though.
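Standard benchmarking tools read responses as fast as they can, so slow clients like these are hard to reproduce in the lab. One way to approximate the effect is a client that deliberately dribbles the response in, a few bytes at a time. Here is a minimal sketch using IO::Socket::INET; the host, port, and path are placeholders you would adjust:

    #!/usr/bin/perl -w
    use strict;
    use IO::Socket::INET;

    # placeholders: substitute your own server and a reasonably large page
    my ($host, $port, $path) = ('localhost', 80, '/big-page.html');

    my $sock = IO::Socket::INET->new(
        PeerAddr => $host,
        PeerPort => $port,
        Proto    => 'tcp',
    ) or die "connect failed: $!\n";

    print $sock "GET $path HTTP/1.0\r\nHost: $host\r\n\r\n";

    # read the response a tiny chunk at a time, pausing between reads,
    # so the server-side process stays tied up writing to us
    my $total = 0;
    while (my $read = sysread($sock, my $buf, 256)) {
        $total += $read;
        sleep 1;    # pretend to be a 256-bytes-per-second client
    }
    print "received $total bytes\n";
    close $sock;

Run a few dozen of these in parallel against a server that has no frontend proxy, and you should see heavy httpd processes pile up much as described in the story above.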