Another ehttpd Performance Test

1. The hello-world http server – ehttpd

%%% ehttpd.erl
-module(ehttpd).
-compile(export_all).

start() ->
    start(8888).
start(Port) ->
    N = erlang:system_info(schedulers),
    listen(Port, N),
    io:format("ehttpd ready with ~b schedulers on port ~b~n", [N, Port]),

    register(?MODULE, self()),
    receive Any -> io:format("~p~n", [Any]) end. %% to stop: ehttpd!stop.

listen(Port, N) ->
    Opts = [{active, false},
            binary,
            {backlog, 256},
            {packet, http_bin},
            %% TCP_DEFER_ACCEPT (level IPPROTO_TCP = 6, option 9 on Linux):
            %% the accept completes only once data has arrived on the connection
            {raw,6,9,<<1:32/native>>},
            %%{delay_send,true},
            %%{nodelay,true},
            {reuseaddr, true}],

    {ok, S} = gen_tcp:listen(Port, Opts),
    Spawn = fun(I) ->
        register(list_to_atom("acceptor_" ++ integer_to_list(I)),
            spawn_opt(?MODULE, accept, [S, I], [link, {scheduler, I}]))
    end,

    lists:foreach(Spawn, lists:seq(1, N)).

accept(S, I) ->
    case gen_tcp:accept(S) of
    {ok, Socket} ->
        spawn_opt(?MODULE, loop, [Socket], [{scheduler, I}]);
    Error ->
        erlang:error(Error)
    end,
    accept(S, I).

loop(S) ->
    case gen_tcp:recv(S, 0) of
    {ok, http_eoh} ->
        Response = <<"HTTP/1.1 200 OK\r\nContent-Length: 14\r\n\r\n"
                     "hello, world!\n">>,
        gen_tcp:send(S, Response),
        gen_tcp:close(S),
        ok;
    {ok, _Data} ->
        loop(S);
    Error ->
        Error
    end.
%%% end of ehttpd.erl
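
As a quick local sanity check (not part of the benchmark), a response can be
fetched with a tiny raw gen_tcp client. The sketch below is only an
illustration (the module name check and function get/0 are made up here), and
it simply assumes ehttpd is already listening on port 8888:

%%% check.erl
-module(check).
-export([get/0]).

get() ->
    {ok, S} = gen_tcp:connect("localhost", 8888,
                              [binary, {packet, raw}, {active, false}]),
    ok = gen_tcp:send(S, <<"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n">>),
    %% ehttpd sends the whole response in one go and then closes the socket,
    %% so a single recv is normally enough for this tiny reply
    {ok, Reply} = gen_tcp:recv(S, 0),
    gen_tcp:close(S),
    Reply.
%%% end of check.erl

check:get() should return a binary containing the status line, the
Content-Length header and the 14-byte "hello, world!\n" body.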

2. Start ehttpd

[root@localhost azunyanmoe]# ulimit -n 99999
[azunyanmoe@localhost ~]$ cat /proc/cpuinfo | grep GHz
model name : Pentium(R) Dual-Core  CPU      E5500  @ 2.80GHz
model name : Pentium(R) Dual-Core  CPU      E5500  @ 2.80GHz
[azunyanmoe@localhost ~]$ free -m
             total       used       free     shared    buffers     cached
Mem:          1980       1276        703          0         34        635
-/+ buffers/cache:        606       1373
Swap:         4998         70       4928
[azunyanmoe@localhost ~]$ erlc ehttpd.erl
[azunyanmoe@localhost ~]$ taskset -c 1 erl +K true +h 99999 +P 99999 -smp enable +S 2:1 -s ehttpd
Erlang R14B01 (erts-5.8.2) [source] [smp:2:1] [rq:2] [async-threads:0] [hipe] [kernel-poll:true]

ehttpd ready with 2 schedulers on port 8888
Eshell V5.8.2  (abort with ^G)
1> 
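
Since start/1 registers itself as ehttpd and then blocks in a receive, the
server can later be stopped from this shell by sending the registered process
any message, as the comment in the source suggests:

1> ehttpd ! stop.

The blocked process prints the message and exits, which also closes the
listen socket it owns.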

3. Run test with ab from another host

azunyanmoe@localhost:~$ ab -c 60 -n 100000 http://192.168.1.100:8888/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.1.100 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests

Server Software:        
Server Hostname:        192.168.1.100
Server Port:            8888

Document Path:          /
Document Length:        14 bytes

Concurrency Level:      60
Time taken for tests:   12.928 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      5300636 bytes
HTML transferred:       1400168 bytes
Requests per second:    7735.35 [#/sec] (mean)
Time per request:       7.757 [ms] (mean)
Time per request:       0.129 [ms] (mean, across all concurrent requests)
Transfer rate:          400.41 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    3  25.1      2    3000
Processing:     1    5   4.0      4     215
Waiting:        1    4   3.9      4     214
Total:          3    7  25.4      7    3004

Percentage of the requests served within a certain time (ms)
  50%      7
  66%      7
  75%      8
  80%      8
  90%      8
  95%      9
  98%     10
  99%     10
 100%   3004 (longest request)

4. Conclusion

  1) 7,735 req/s is not bad, but still far from C10K.
  2) I tested nginx on the same task (in the same environment); its
     performance was very close to ehttpd's (about 7,500 ~ 8,000 req/s).
  3) Compiling ehttpd.erl with the HiPE option did not seem to help.
  4) Maybe the bottleneck was elsewhere, perhaps on the client side? One
     cheap thing to try would be HTTP keep-alive (see the sketch below).
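
The sketch below is a hypothetical keep-alive variant of loop/1, not something
that was benchmarked here: with ab -k it would avoid paying for a full TCP
connection per request, by keeping the socket open and reading further
requests instead of closing after each response:

%%% keep-alive variant of loop/1 (untested sketch)
loop_keepalive(S) ->
    case gen_tcp:recv(S, 0) of
    {ok, http_eoh} ->
        %% end of one request's headers: reply, then wait for the next
        %% request on the same connection instead of closing it
        Response = <<"HTTP/1.1 200 OK\r\nContent-Length: 14\r\n\r\n"
                     "hello, world!\n">>,
        gen_tcp:send(S, Response),
        loop_keepalive(S);
    {ok, _Data} ->
        loop_keepalive(S);
    {error, closed} ->
        ok;
    Error ->
        Error
    end.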

5. References

  1) http://blog.yufeng.info/archives/105
