ApacheBench request count and Node.js script counter don't match
No doubt I'm doing something stupid, but I've been having problems running a simple Node.js app using the Nerve micro-framework. When testing with ApacheBench, the code within my single controller seems to be invoked more times than the app is actually being called.
I've created a test script like so:
'use strict';

(function () {
    var path = require('path');
    var sys = require('sys');
    var nerve = require('/var/www/libraries/nerve/nerve');

    var nerveCounter = 0;

    // r_server is the Redis client from the fuller version of this script
    r_server.on("error", function (err) {
        console.log("error " + err);
    });

    var app = [
        ["/", function (req, res) {
            // count every time the controller is invoked
            console.log("nc = " + ++nerveCounter);
        }]
    ];

    nerve.create(app).listen(80);
}());

I start the server and then, from another box, run the load test:
/usr/sbin/ab -n 5000 -c 50 http://<snip>.com/
...
Complete requests:      5000
...
Percentage of the requests served within a certain time (ms)
...
 100%    268 (longest request)

but the node script keeps printing, all the way up to:
nc = 5003
rc = 5003

In other words, the server is reported as being called 5000 times, but the controller code is being invoked 5003 times.
Any ideas what I'm doing wrong?
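One way to narrow this down is to log each request the server actually receives, so any extra hits show up with their method, path and client. Below is a minimal sketch using Node's built-in http module as a stand-in for Nerve (only the counting and logging matter here; the port mirrors the script above and the log format is arbitrary):

var http = require('http');

var seen = 0;

http.createServer(function (req, res) {
    seen++;
    // log every request that actually reaches the server
    console.log(seen + ' ' + req.method + ' ' + req.url +
        ' from ' + (req.headers['user-agent'] || 'unknown'));
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('ok\n');
}).listen(80);

Comparing the last logged number against ab's "Complete requests" shows whether the extra hits come from ab itself or from some other client reaching the box during the run.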
Update
I have changed the tone and content of this question to reflect the help that Colum, Alfred and GregInYEG gave me in realising that the problem did not lie with Redis or Nerve, but with ApacheBench.
The program:
const port = 3000;
const host = 'localhost';

const express = require('express');
const app = module.exports = express.createServer();

const redis = require('redis');
const client = redis.createClient();

// increment the shared counter on every request
app.get('/incr', function(req, res) {
    client.incr('counter', function(err, reply) {
        res.send('incremented counter to:' + reply.toString() + '\n');
    });
});

// delete the counter key
app.get('/reset', function(req, res) {
    client.del('counter', function(err, reply) {
        res.send('resetted counter\n');
    });
});

// read the current counter value
app.get('/count', function(req, res) {
    client.get('counter', function(err, reply) {
        res.send('counter: ' + reply.toString() + '\n');
    });
});

if (!module.parent) {
    app.listen(port, host);
    console.log("Express server listening on port %d", app.address().port);
}

Conclusion
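For a cross-check that does not rely on ab's own bookkeeping, the same endpoint can be driven by a client whose request count is under your control. A rough sketch, assuming the Express app above is listening on 127.0.0.1:3000 and a Node version that has http.get (newer than the v0.2.6 shown below); the sequential loop keeps the counting trivial, so this is not a load test:

var http = require('http');

var total = 5000;
var done = 0;

function fire() {
    http.get({ host: '127.0.0.1', port: 3000, path: '/incr' }, function (res) {
        res.on('data', function () {});   // drain the body so the socket is released
        res.on('end', function () {
            done++;
            if (done < total) {
                fire();
            } else {
                console.log('client completed ' + done + ' requests');
            }
        });
    });
}

fire();

Because Redis INCR is atomic, the value returned by /count afterwards should match this client-side count exactly; the counter can also be read without going through HTTP at all with "redis-cli get counter".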
It works without flaws on my computer. (Note that the "Failed requests" ab reports below are length mismatches only: the response body grows as the counter increments, so ab flags every body whose length differs from the first one.)
$ cat /etc/issue
Ubuntu 10.10 \n \l

$ uname -a
Linux alfred-laptop 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 01:41:57 UTC 2010 i686 GNU/Linux

$ node -v
v0.2.6

$ npm install express hiredis redis
npm info build Success: redis@0.5.2
npm info build Success: express@1.0.3
npm info build Success: hiredis@0.1.6

$ ./redis-server --version
Redis server version 2.1.11 (00000000:0)

$ git clone -q git@gist.github.com:02a3f7e79220ea69c9e1.git gist-02a3f7e7; cd gist-02a3f7e7; node index.js

$ # From another tab
$ clear; curl http://localhost:3000/reset; ab -n 5000 -c 50 -q http://127.0.0.1:3000/incr > /dev/null; curl http://localhost:3000/count;

resetted counter
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:
Server Hostname:        127.0.0.1
Server Port:            3000

Document Path:          /incr
Document Length:        25 bytes

Concurrency Level:      50
Time taken for tests:   1.172 seconds
Complete requests:      5000
Failed requests:        4991
   (Connect: 0, Receive: 0, Length: 4991, Exceptions: 0)
Write errors:           0
Total transferred:      743893 bytes
HTML transferred:       138893 bytes
Requests per second:    4264.61 [#/sec] (mean)
Time per request:       11.724 [ms] (mean)
Time per request:       0.234 [ms] (mean, across all concurrent requests)
Transfer rate:          619.61 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0       7
Processing:     4   11   3.3     11      30
Waiting:        4   11   3.3     11      30
Total:          5   12   3.2     11      30

Percentage of the requests served within a certain time (ms)
  50%     11
  66%     13
  75%     14
  80%     14
  90%     15
  95%     17
  98%     19
  99%     24
 100%     30 (longest request)

counter: 5000