Anyone who has any relationship with Rails development has, at this point, heard of Nginx. The point of Nginx is to replace Apache, the definitive global webserver, which Rails devs feel is simply too slow for their lightning-fast development framework. It's not the first time the Rails community has snubbed Apache, nor will it be the last. Those Rails devs are simply fickle folks.
So, fine, let the Rails devs frolic with their uberfast webserver... what about the rest of us mere mortals? Is Nginx a good route for you? Let me say here and now, the answer to that question is almost always a strong, resilient, and durable no. The reasons for the rejection are many, so let's start with the funny ones first and proceed to the more technical ones.
First, it behaves in inexplicable ways in different browsers. Check out this screenshot of Penny Arcade loaded in Firefox (on the top) and Konqueror (on the bottom) at the same time.
This happened with multiple reloads (cache disabled)... it always worked in Firefox, and always "failed" in Konqueror. Oh, and that "Bad Gateway" message is something you should get used to if you are thinking about deploying Nginx, because it's an all too common sight (more about that later on).
Second, the primary documentation is in Russian. Yes, Русский. From what I can gather, the primary developers are Russian, which is great... yay, global open source development! But a webserver is a complicated beast, hence the great forests that are clear-cut each year to produce the necessary library of books on Apache and Microsoft IIS. Let me be clear that when I say primary, I do mean to imply there is secondary documentation. It is secondary documentation in the same way that a warning label will list sixteen life-threatening things you could do, written in English, followed by a single warning in Spanish that translates to "Danger."
Third, Nginx does not support .htaccess files. Anyone who spends much time building custom websites knows the power of these magic little files that alter the way Apache treats a particular folder. Securing a folder with basic authentication takes a few simple lines and a password file. Nginx takes a different approach, where "different" means "stop bugging us to add .htaccess support." Instead, every directive, for every folder, regardless of its scope, must go into a master configuration file. You can split the conf file into many smaller files, but they are all loaded when the server starts and given global effect. The common approach here is to split each hosted domain into its own conf file... but that only helps keep things organized, because at the end of the day, every conf file has global implications.
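To make the contrast concrete, here is a rough sketch of the two styles... the Apache half is what you would drop into an .htaccess file sitting in the protected folder, the Nginx half is what has to live in the central conf. The paths and the "Members Only" realm are placeholders, nothing canonical.

    # Apache: an .htaccess file inside the folder you want to protect
    AuthType Basic
    AuthName "Members Only"
    AuthUserFile /var/www/htpasswd
    Require valid-user

    # Nginx: the equivalent goes in the master conf, inside the matching server block
    location /members/ {
        auth_basic           "Members Only";
        auth_basic_user_file /var/www/htpasswd;
    }

And unlike the .htaccess version, which Apache picks up on the next request, any change to that Nginx block means reloading the server before it takes effect.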
Third and a half, Nginx requires you to have Apache support tools lying around to do stuff. This really isn't worth a whole new point, because everyone already has Apache lying around... but let's say you wanted to create a password file for basic authentication. There is no Nginx utility to generate those handy hash values; you have to use htpasswd, available from your Apache distribution.
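In practice that dance looks something like this... the path is just an example:

    # htpasswd comes from your Apache install (httpd-tools or apache2-utils), not from Nginx
    htpasswd -c /var/www/htpasswd someuser
    # ...then point auth_basic_user_file at /var/www/htpasswd in the Nginx conf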
Fourth, Nginx doesn't actually do anything beyond serving static HTML and binary assets... which is to say, it doesn't run PHP or Perl or any of the other P's that you might find in the LAMP stack. What it does is take requests and proxy them to other servers that do know how to execute that code. This is great in the Rails world, which long ago decided that Rails would be its own little server that you submit requests to and get responses back from. Even under Apache, the standard approach is to run Rails as a cluster of Mongrel servers that Apache talks to via a proxy connection. In the world of PHP and Perl, this approach is somewhat counter-intuitive. Apache's mod_php loads a PHP interpreter into Apache, allowing Apache to do all the heavy lifting for you... ditto with mod_perl. Even Ruby has a mod_ruby (although it's still premature). With Nginx, everything is its own standalone server.
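For the curious, the proxy arrangement looks roughly like this... a minimal sketch assuming a pair of Mongrel instances on ports 8000 and 8001 (the ports, domain, and cluster name are made up for illustration):

    # Nginx doesn't run the app; it just forwards requests to processes that do
    upstream mongrel_cluster {
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
    }

    server {
        listen      80;
        server_name example.com;

        location / {
            proxy_pass http://mongrel_cluster;
        }
    }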
So, what if your PHP project needs to know something about the webserver (like the root folder, or a basic auth username)? Well, you need to know that ahead of time and set up the proxy (which you defined in that global conf file I mentioned in #3) to pass those variables to your application server; otherwise they won't be around for you to use. Better yet, what if the proxy server is down? Nginx will greet you with a handy "Bad Gateway" message and no further information. Good luck debugging the underlying server, since it really only knows how to talk in HTTP requests... perhaps you can code your own debugger with LWP.
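Passing those variables means yet more boilerplate in that same global conf, roughly along these lines... the X-Remote-User header is just my own illustration of forwarding the basic auth username, not any kind of standard:

    location / {
        proxy_pass       http://mongrel_cluster;
        # anything the app needs from the webserver has to be forwarded by hand
        proxy_set_header Host             $host;
        proxy_set_header X-Real-IP        $remote_addr;
        proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header X-Remote-User    $remote_user;
    }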
Finally, I am left with the question: why? The ostensible reason is that it's faster and can therefore handle more requests. Even if we accept that as true (*grumble, grumble*), it only achieves that speed by passing the buck to other servers. When you find a non-responsive site, it's not because the static assets like images and HTML text are being served slowly... it's because the dynamic content generated by PHP/Perl/Python/Ruby/whatever, and the underlying database from which the data is drawn, cannot keep up. Nginx suffers that same failing... while requiring just as many resources, because you now have to run a separate server for each of the languages you want to code in.
If you are developing Rails, then by all means, enjoy this flavor of the month until some new, exciting technology comes along and all the little Ruby lemmings go marching off in a new direction. For everyone else writing applications that are meant to stand the test of time, stay with Apache; it hasn't let us down yet.