Looking For Optimal Solution: Benchmark Results

After my previous post about high-performance RoR deployment methods, I received lots of messages/emails/IM conversations pointing out errors in the previous benchmarks, along with many suggestions to extend the set of tested software and to modify the testing methodology. That is why I decided to perform deep and wide performance testing of all the deployment schemes I could find. In this article you will find a description of the testing methodology and, of course, the benchmark results. If you want to know the details of a specific software setup, take a look at the articles in the “Ruby On Rails” category to find all articles from the “Looking for Optimal Solution” series (I’ll post all of them in the next few days).

Truthfully, when I started all this testing, my main aim was to find the most optimal solution for our project, but later, thanks to Dmytro Shteflyuk, I decided to perform the testing with optimized software settings to find the maximum theoretical speed I can expect from RoR on my servers. So, you can interpret the results from two different points of view: as a simple performance comparison of different solutions, or as the highest performance marks that specific solutions can give you (of course, you can get more by creating more complicated schemes like “nginx/lighttpd for static content + rails for dynamic”, but this benchmark was done only for dynamic content without any caching).

First of all, I want to describe the hardware/software platform where all performance benchmarks were done. Our development server has the following configuration:

  • CPU: 4 x XEON CPUs
  • Memory: 4 GB of RAM
  • OS: Debian GNU/Linux Testing with a recent 2.6 kernel.
  • Ruby: ruby 1.8.4 (2005-12-24) [i486-linux] from Debian Testing repository.
  • Rails: Rails 1.1.6 installed by gem.

All tests were performed on a simple RoR application with a simple single-action controller:

class TestController < ApplicationController
  def hw
    @hello = "Hello, world!"
    @time = Time.now()
  end
end

and simple view:

<h1>Test#hw</h1>
<p>Hello: <%= @hello %></p>
<p>Time: <%= @time %></p>

The Rails framework was started with default settings in production mode. Tests were performed with the ab (Apache Benchmark) utility with the following parameters:

$ ab -c 100 -n 10000 http://SERVERIP:PORT/test/hw

where PORT is the specific port number chosen for each test.

I decided not to use any DB-related code because I wanted to get top performance marks for the Ruby On Rails engine itself, not for some mysql/postgres/oracle/etc software. Simple code, simple tests, simple and understandable results.

Notice: Before each test, all files from tmp/sessions were deleted, because with lots of test requests to the server I got very poor results for some software due to file-system layer lag. So, if you decide to repeat my tests on your own hardware/software, clean the sessions dir after every test run.
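
A minimal sketch of such a cleanup step (assuming the default Rails 1.1 file-based session store, which keeps its files as tmp/sessions/ruby_sess.*; adjust the glob if your store names them differently):

# clean_sessions.rb - delete stale session files before a benchmark run
require "fileutils"

stale = Dir.glob(File.join("tmp", "sessions", "ruby_sess.*"))
FileUtils.rm_f(stale)
puts "Removed #{stale.size} stale session file(s)"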

While the tests were running, I monitored the server with top/iostat/vmstat to understand the test results better… and that is why all fastcgi/proxy/lsapi tests were done with 4 backend processes: the server has 4 CPUs, and 4 processes performed better than 2/5/8/10.
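
If you want to repeat this kind of sweep on your own setup, here is a minimal sketch of the idea. It assumes ab is in your PATH and that its summary output contains a “Requests per second:” line; restart_backends is a hypothetical helper that you would replace with whatever restarts your mongrel/fastcgi cluster with the given number of processes.

# sweep_backends.rb - run ab against the frontend for several backend counts
URL = "http://127.0.0.1:8080/test/hw"  # hypothetical frontend address, adjust to your setup

# Hypothetical placeholder: restart the cluster with n backend processes here.
def restart_backends(n)
end

def qps(url)
  output = `ab -c 100 -n 10000 #{url} 2>/dev/null`
  output[/Requests per second:\s+([\d.]+)/, 1].to_f
end

[2, 4, 5, 8, 10].each do |n|
  restart_backends(n)
  puts "#{n} backends: #{qps(URL)} req/sec"
end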

So, let me briefly describe the tested configurations before I show you the benchmark results (a sketch of how the mongrel backends can be started follows this list; links to detailed descriptions of the individual tests will be added later).

  1. WEBrick/1.3.1 - I tested it only to get a baseline performance value to compare with all other test results.
  2. mongrel (single process) - This test was performed to see what performance gain we can get if we use a single mongrel server without any balancing software.
  3. lighttpd (4 mongrels) - Test of lighttpd load-balancing between several tcp backend servers.
  4. lighttpd (4 fastcgi processes) - Test of lighttpd load-balancing between several fastcgi backend servers.
  5. nginx (4 mongrels) - Test of nginx load-balancing between several tcp backend servers.
  6. nginx (4 fastcgi processes) - Test of nginx load-balancing between several fastcgi backend servers.
  7. pen (4 mongrels) - Test of pen load-balancing between several tcp backend servers.
  8. pound (4 mongrels) - Test of pound load-balancing between several tcp backend servers.
  9. haproxy (4 mongrels) - Test of haproxy load-balancing between several tcp backend servers.
  10. apache 2.0 (4 fastcgi processes) - Test of apache 2.0 load-balancing between several fastcgi backend servers.
  11. LiteSpeed (4 lsapi instances) - I tested this rather exotic software because someone asked about it in the comments on the previous benchmark. The LiteSpeed web server has an LSAPI module for Ruby that, AFAIU, works like FastCGI but with some improvements. Unfortunately, the performance-optimized version of this server costs money, so not everyone can use it for their projects (there is a free version, but if you want the server optimized for performance, you have to pay a lot).

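For reference, here is a minimal sketch of how the “4 mongrels” backends from the configurations above could be started. It assumes the mongrel gem is installed, ports 8000-8003 are free, and the script is run from the application root; the exact ports and the frontend balancing configuration are up to you.

# start_mongrels.rb - spawn four daemonized mongrel backends in production mode
PORTS = (8000..8003)

PORTS.each do |port|
  system("mongrel_rails", "start", "-e", "production", "-p", port.to_s, "-d")
end
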
Now that you know about all of my tests, I can show you the results. First of all, if you want to see the results in table form, here is a screenshot from my Excel spreadsheet:



If you prefer to analyze the results graphically, take a look at the following diagram, which shows the QPS (queries per second) results for all tests:


Speaking about these results, I can say that all of them were predictable: TCP is really slower than the unix sockets that were used in the fastcgi tests, so all tests with tcp-based backend communication show lower QPS than the unix-socket-based ones. But you should understand that tcp-based frontend-backend communication gives you a great scalability mechanism, and you can move backend instances between servers without any problems. So, if you need the absolute best performance on a single server, I would recommend using nginx with fastcgi backend processes. But if you need better scalability, you can use nginx with mongrel servers as backend processes.
