Java REST API Benchmark: Tomcat vs Jetty vs Grizzly vs Undertow, Round 2


This is a follow-up to the initial REST/JAX-RS benchmark comparing Tomcat, Jetty, Grizzly and Undertow.

In the previous round, where the default server configuration was used, the race was led by Grizzly, followed by Jetty, Undertow, and finally Tomcat.

In this round, I have set the maximum worker thread pool size to 250 for all 4 containers.

To make this happen, I had to make some code changes to Jetty and Grizzly, as setting the pool size was not possible in the original benchmark.

These changes allow each container to be started with the thread pool size passed as a command-line parameter.
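For illustration, a minimal sketch of what the Jetty launcher could look like is shown below; the real code is in the GitHub repository linked in the resources section, and the class, package and port used here are placeholders:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.glassfish.jersey.servlet.ServletContainer;

public class JettyLauncher {
    public static void main(String[] args) throws Exception {
        // Read the maximum worker thread count from the command line, e.g. "java JettyLauncher 250"
        int maxThreads = args.length > 0 ? Integer.parseInt(args[0]) : 200;

        // Jetty sizes its worker pool via a QueuedThreadPool passed to the Server constructor
        QueuedThreadPool threadPool = new QueuedThreadPool(maxThreads);
        Server server = new Server(threadPool);

        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        server.addConnector(connector);

        // Register the JAX-RS (Jersey) servlet; the resource package name is illustrative
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
        context.setContextPath("/");
        ServletHolder jersey = context.addServlet(ServletContainer.class, "/*");
        jersey.setInitParameter("jersey.config.server.provider.packages", "com.example.rest");
        server.setHandler(context);

        server.start();
        server.join();
    }
}

For Grizzly, the equivalent knob is the listener transport's ThreadPoolConfig (setCorePoolSize/setMaxPoolSize); again, the exact wiring used in the benchmark is in the repository.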

For more detail about running the tests yourself, please have a look at the github link in the resources section.

Note that this time, the tests have been run with 128 concurrent users only, as the previous round showed that the number of concurrent users did not have a big impact on the results.

System Information

Note that we have more free RAM here than in the previous round, as I shut down all running applications.

I also restarted the machine before every single test run.

Results

[Figure: Throughput for 10 million requests, 128 concurrent users, 250 server worker threads]

As shown in the graph above, as far as throughput is concerned, Grizzly is once again far ahead and leading the race, followed by Jetty.

Undertow came third, very close behind Jetty, and Tomcat came last.

[Figure: Response time for 10 million requests, 128 concurrent users, 250 server worker threads]

The response-time graph above shows Grizzly ahead of the pack, followed by Jetty, Undertow, and finally Tomcat.

Conclusion

I expected Undertow to be the fastest of all, but somehow this did not happen.

The result of this round 2 is very similar to what we have seen in round 1: Grizzly is the fastest container when it comes to serving JAX-RS requests.

Resources

Source code and detailed benchmark results are available at https://github.com/arcadius/java-rest-api-web-container-benchmark

3 Comments

Stuart Douglas

February 1, 2016 at 12:04 am

Something to bear in mind is that Grizzly is sending smaller responses than the other servers:

HTTP/1.1 200 OK
Content-Type: application/json
Date: Sun, 31 Jan 2016 23:36:32 GMT
Content-Length: 27

vs:

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 27
Content-Type: application/json; charset=UTF-8
Date: Sun, 31 Jan 2016 23:27:47 GMT

for Undertow and:

HTTP/1.1 200 OK
Date: Sun, 31 Jan 2016 23:28:55 GMT
Content-Type: application/json; charset=UTF-8
Content-Length: 27
Server: Jetty(9.2.14.v20151106)

for Jetty.

Calculating this as a percentage for Undertow:

22/142 = 0.154 = 15% more data being sent on the wire (which is one of the reasons why micro benchmarks like this are not terribly useful).

You also have no warm-up phase; one request is not a warm-up. At a minimum, you need to be running the JVM under full load for at least a minute, preferably much longer.

You should also be running the load driver and the server on different machines; testing on the same machine is problematic, especially when the machine in question has only 2 cores (also, 250 threads is way too many, especially for this type of micro benchmark on this hardware).

    Arcadius Ahouansou

    February 14, 2016 at 10:24 pm

    Hello Stuart Douglas.

    Thank you very much for taking the time to comment on this article.

    Header Size:
    You are right about the data transferred from Grizzly being smaller.
    However, the Content-Length is 27 in all cases; the header size differs because Grizzly, in the case of this benchmark, does not send the server signature.

    In round 3 of this benchmark, I will address your concern and make sure the headers sent by each server are identical.
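    For illustration, one possible way to do this is a small JAX-RS response filter that strips the container signature; the class name below is illustrative, and for Jetty the signature may instead need to be disabled on the server side (e.g. HttpConfiguration#setSendServerVersion(false)):

    import java.io.IOException;
    import javax.ws.rs.container.ContainerRequestContext;
    import javax.ws.rs.container.ContainerResponseContext;
    import javax.ws.rs.container.ContainerResponseFilter;
    import javax.ws.rs.ext.Provider;

    // Drops the container-specific Server header so that every container
    // sends the same response bytes on the wire.
    @Provider
    public class StripServerHeaderFilter implements ContainerResponseFilter {
        @Override
        public void filter(ContainerRequestContext request, ContainerResponseContext response)
                throws IOException {
            response.getHeaders().remove("Server");
        }
    }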

    Warm-up phase:
    It is very true that 1 request may not be enough.
    I will address this in round 3 as well.
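    For illustration, the warm-up could be as simple as hammering the endpoint for a fixed period before the measured run starts; the URL and duration below are placeholders, not the benchmark's actual values:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WarmUp {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/api/hello"); // hypothetical endpoint
            long end = System.currentTimeMillis() + 60_000;       // warm up for at least one minute
            while (System.currentTimeMillis() < end) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (InputStream in = conn.getInputStream()) {
                    while (in.read() != -1) {
                        // drain the response so the connection can be reused
                    }
                }
            }
        }
    }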

    Running the load generator on a different machine:
    First note that the machine this benchmark has been run on has 4 cores.
    Please re-check the “System Information” paragraph in the post above.

    The load generator is being run on the same machine on purpose to avoid any network latency.
    You may also have noticed that there was no interaction with any back-end system. This is also on purpose, so that only the app servers' response time is tested.

    More powerful hardware:
    I am not sure whether more powerful hardware will change who wins the race.

    As numbers speak louder, I will address your concern in the next round.
    I am going to run round 3 on proper dedicated server hardware:
    a quad-core Xeon machine with 32GB of RAM
    ./sysinfo.sh
    CPU:
    model name : Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz
    model name : Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz
    model name : Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz
    model name : Intel(R) Xeon(R) CPU E3-1225 V2 @ 3.20GHz

    RAM:
    total used free shared buffers cached
    Mem: 31G 15G 16G 0B 330M 5.6G
    -/+ buffers/cache: 9.1G 22G
    Swap: 1.0G 0B 1.0G
    Java version:
    java version "1.8.0_72"
    Java(TM) SE Runtime Environment (build 1.8.0_72-b15)
    Java HotSpot(TM) 64-Bit Server VM (build 25.72-b15, mixed mode)

    OS:
    Linux ... x86_64 GNU/Linux

    I will make sure I free up as much RAM as possible before running round 3.

    Stuart,
    – What would you suggest as the ideal worker thread count on this Xeon server?
    Note that I will still be running the load generator on that same server machine.
    – Actually, Undertow was the main reason why I did this benchmark. As it seems quite new, I expected it to outperform existing servers like Tomcat, Jetty and Grizzly. Is there any other tuning I could do to make Undertow perform better?

    Thank you very much Stuart Douglas

Virtual Private Servers

August 28, 2017 at 4:22 am

I decided to make a similar benchmark for Ada web servers with the same REST API, so that it would be possible to compare the Ada and Java implementations. The benchmark gives the number of REST requests made per second at different levels of concurrency.
