Benchmark Nodejs


Benchmark.js is a robust benchmarking library that supports high-resolution timers and returns statistically significant results, as seen on jsPerf. Its only hard dependency is lodash; optionally, include platform.js to populate Benchmark.platform, load it through an AMD loader, or use the microtime module by Wade Simmons for higher-resolution timing under Node.js. Tested in Chrome 54-55, Firefox 49-50, IE 11, Edge 14, Safari 9-10, Node.js 6-7, and PhantomJS 2.1.1.
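For orientation, here is a minimal Benchmark.js suite along the lines of the project's own examples; the two string-search cases are only placeholders for whatever you actually want to measure.

```javascript
const Benchmark = require('benchmark');

const suite = new Benchmark.Suite();

suite
  .add('RegExp#test', () => /o/.test('Hello World!'))
  .add('String#indexOf', () => 'Hello World!'.indexOf('o') > -1)
  .on('cycle', (event) => {
    // Prints one line per case, e.g. "RegExp#test x 50,123,456 ops/sec ±0.66% (92 runs sampled)"
    console.log(String(event.target));
  })
  .on('complete', function () {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run({ async: true });
```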

Benchmark.js is part of the BestieJS “Best in Class” module collection. This means we promote solid browser/environment support, ES5+ precedents, unit testing, & plenty of documentation.

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine and is itself written in C/C++. To run this test with the Phoronix Test Suite, the basic command is: phoronix-test-suite benchmark build-nodejs.

  • Last Updated: 29 December 2021
  • Test Type: Processor
  • Average Run Time: 29 Minutes, 58 Seconds
  • Test Dependencies: C/C++ Compiler Toolchain + Python + OpenSSL

Uploading of benchmark result data to OpenBenchmarking.org is always optional (opt-in) via the Phoronix Test Suite; the figures below are based on users who chose to upload their results or enabled the opt-in anonymous statistics reporting while running benchmarks from an Internet-connected platform. Test profile page view reporting began March 2021. Data is current as of 10 May 2022.

Revision History

  • pts/build-nodejs-1.1.1 [View Source] - Wed, 29 Dec 2021 15:07:17 GMT - interim.sh change to accommodate an upstream difference around cleaning and building Node.js.
  • pts/build-nodejs-1.1.0 [View Source] - Wed, 29 Dec 2021 14:16:39 GMT - Update against Node.js 17.3 upstream to enable Python 3.10 compatibility.
  • pts/build-nodejs-1.0.0 [View Source] - Wed, 17 Mar 2021 12:54:33 GMT - Add Node.js timed compilation benchmark.

OpenBenchmarking.org metrics for this test profile configuration are based on 583 public results since 29 December 2021, with the latest data as of 6 May 2022. Below is an overview of the generalized performance for components where there is sufficient statistically significant data based upon user-uploaded results.

It is important to keep in mind, particularly in the Linux/open-source space, that there can be vastly different OS configurations, so this overview is intended to offer only general guidance on performance expectations. The accompanying chart plots percentile rank against average time in seconds.

Tested CPU Architectures

Based on OpenBenchmarking.org data, the selected test / test configuration (Timed Node.js Compilation 17.3 - Time To Compile) has an average run-time of 49 minutes.

By default this test profile is set to run at least 3 times, but the run count may increase if the standard deviation exceeds pre-defined defaults or other calculations deem additional runs necessary for greater statistical accuracy of the result. Based on the automated analysis of the collected public benchmark data, this test configuration does generally scale well with increasing CPU core counts.
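To illustrate the re-run idea, here is a simplified sketch (not the Phoronix Test Suite's actual algorithm; the 3.5% threshold and the cap of 10 runs are made-up stand-ins for its pre-defined defaults): keep adding runs while the relative standard deviation of the timings stays above the threshold.

```javascript
// Decide whether another benchmark run is needed, based on relative standard deviation.
// Simplified illustration only; threshold and cap are arbitrary example values.
function needsMoreRuns(timesSeconds, maxRelStdDev = 0.035, minRuns = 3, maxRuns = 10) {
  if (timesSeconds.length < minRuns) return true;
  if (timesSeconds.length >= maxRuns) return false;

  const mean = timesSeconds.reduce((a, b) => a + b, 0) / timesSeconds.length;
  const variance =
    timesSeconds.reduce((sum, t) => sum + (t - mean) ** 2, 0) / timesSeconds.length;
  const relStdDev = Math.sqrt(variance) / mean;

  return relStdDev > maxRelStdDev;
}

// Three fairly consistent compile times (in seconds) -> no extra run needed.
console.log(needsMoreRuns([1795, 1802, 1788])); // false
```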

The core-count scaling data is based on publicly available results for this test and test settings, separated by vendor, with each result divided by the reference CPU clock speed, grouped by matching physical CPU core count, and normalized against the smallest core count tested from each vendor, for each CPU having a sufficient number of test samples and statistically significant data.

This benchmark has been successfully tested on the architectures listed below. These are the CPU architectures for which successful OpenBenchmarking.org result uploads have occurred, which helps determine whether a given test is compatible with various alternative CPU architectures.

  • ARMv8 Cortex-A72
  • Ampere ARMv8 Neoverse-N1 256-Core
  • Ampere eMAG ARMv8 32-Core
  • Apple M1

Most Popular Test Results

The OpenBenchmarking.org page for this test profile also lists its most popular public result files. Each entry records the number of systems and benchmark results alongside the CPU, motherboard, chipset, operating system, kernel, and desktop used, ranging from single machines (for example, an Intel Core i5-4258U Apple MacBook Pro on macOS 11.6.5 with 48 results, and an AMD Ryzen Threadripper 3990X system on Pop 21.10 with 233 results) up to multi-system comparisons (for example, a two-system Intel Core i9-11980HK comparison with 1,560 results). Featured disk, kernel, and graphics comparisons are also highlighted.

The following question comes from a Q&A discussion on Node.js performance: I've made a benchmark to compare which is faster, Node.js or Apache + PHP.


When I tested a 'Hello world' application, Node was faster, but when I tried to use the http.get function it was a completely different story. Why does Node.js become so slow? Is it something in http.get? The test environment used ApacheBench (ab), with times measured in seconds.

Hello world application:

  • ab -n 10000 -c 10 hostname (10,000 requests, 10 concurrent)
  • ab -n 10000 -c 100 hostname (10,000 requests, 100 concurrent)
  • ab -n 100000 -c 300 hostname (100,000 requests, 300 concurrent)

Pull feeds application:

  • ab -n 100 -c 10 hostname (100 requests, 10 concurrent)
  • ab -n 1000 -c 10 hostname (1,000 requests, 10 concurrent)
  • ab -n 10000 -c 100 hostname (10,000 requests, 100 concurrent)
  • ab -n 10000 -c 50 hostname (10,000 requests, 50 concurrent)

I've changed my code, but it didn't change much: it got a little bit faster, yet compared with Apache + PHP it is still very slow.
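For context, here is a minimal sketch of the two kinds of servers being compared, using only Node's built-in http module; the upstream feed host and paths are hypothetical, and this is an illustration rather than the original poster's code. The 'hello world' handler responds immediately, while the 'pull feeds' handler waits on an outbound http.get before answering, so its throughput is bounded by the upstream round-trip and the outgoing socket pool.

```javascript
const http = require('http');

// "Hello world" server: responds immediately, so ab mostly measures Node's request handling.
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello world');
}).listen(8001);

// "Pull feeds" server: every incoming request triggers an outbound http.get
// (the feed host below is hypothetical), so each response waits on a full
// upstream round-trip before it can be sent.
http.createServer((req, res) => {
  http.get({ host: 'example.com', path: '/feed' }, (upstream) => {
    let body = '';
    upstream.on('data', (chunk) => { body += chunk; });
    upstream.on('end', () => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end(body);
    });
  }).on('error', (err) => {
    res.writeHead(502);
    res.end(err.message);
  });
}).listen(8002);
```

In the Node.js releases of that era, http.globalAgent.maxSockets defaulted to only 5 concurrent outgoing sockets per host, which is a commonly cited culprit for exactly this pattern under high concurrency; modern releases default to an unlimited pool.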

  • Aggregated logs: Application logs are emitted either implicitly by some libraries or explicitly by a developer to give insight into an application. Most log aggregation tools allow you to easily search and visualize your logged data. In our case, we could log out the performance of each of our APIs and plot them on a graph (a minimal sketch follows this list).
  • Infrastructure insights: Your application will run on a host of some sort, so you'll likely want to see that data as well. If you're running in the cloud, most providers give you this data (albeit in a crude form) out of the box. The data you'll get from these tools covers things like CPU and memory usage of your host, connection data, and so on.
  • Application monitoring: This type of tool usually sits within your application code and can draw insights about how functions are performing or being called, what errors are thrown, and so on.
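As referenced above, here is a minimal sketch of per-request timing with a plain Node.js http server; in a real setup the log line would go to your log aggregator rather than stdout, and the route handling is deliberately trivial.

```javascript
const http = require('http');

const server = http.createServer((req, res) => {
  const start = process.hrtime.bigint();

  // Log the elapsed time once the response has been fully flushed.
  res.on('finish', () => {
    const durationMs = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(JSON.stringify({ method: req.method, route: req.url, durationMs }));
  });

  res.end('ok');
});

server.listen(3000);
```

Each completed request then yields one structured log line that an aggregated-logging tool can search, chart, and alert on.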

Some APM tools, like Retrace, have all or most of these three features rolled into one, whereas others can be more specialized. Depending on your requirements, you might want one tool that does everything or a whole range of tools for different purposes.


On top of tools, we can also include other Node.js-specific tools and profilers, like flame graphs, that look at our function execution or extract data about our event loop execution. As you get more well-versed in Node.js performance testing, your requirements for data will only grow. You’ll want to keep shopping around, experimenting, and updating your tooling to really understand your application.
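One concrete way to extract event-loop data is Node's built-in perf_hooks event-loop delay histogram; the 20 ms sampling resolution and 5-second reporting interval below are arbitrary choices for illustration.

```javascript
const { monitorEventLoopDelay } = require('perf_hooks');

// Sample event-loop delay continuously and print a summary every 5 seconds.
const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
  console.log({
    meanMs: histogram.mean / 1e6,          // histogram values are in nanoseconds
    p99Ms: histogram.percentile(99) / 1e6,
    maxMs: histogram.max / 1e6,
  });
  histogram.reset();
}, 5000);
```

A steadily climbing p99 delay while your load test runs is a strong hint that synchronous work is blocking the event loop.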

Now we’ve set up our tooling, got realistic profiles for our performance, and understood our application performance, we’re nearly ready to run our tests. But before we do that, there’s one more step: creating test infrastructure.


You can run performance tests from your own machine if you wish, but there are problems with doing this. So far, we’ve tried really hard—with our test profiles, for instance—to ensure that our performance tests replicate. Another factor in replicating our tests is to ensure that we always run them on the same infrastructure (read: machine).

One of the easiest ways to achieve a consistent test infrastructure is to leverage cloud hosting. Choose a host/machine you want to launch your tests from and ensure that each time you run your tests it’s always from the same machine—and preferably from the same location, too—to avoid skewing your data based on request latency.

It’s a good idea to script this infrastructure, so you can create and tear it down as and when needed. They call this idea “infrastructure as code.” Most cloud providers support it natively, or you can use a tool like Terraform to help you out.

Phew! We’ve covered a lot of ground so far, and we’re at the final step: running our tests.


The last step is to actually run our tests. If we start our command line configuration (as we did in step 1), we’ll see requests to our Node.js application. With our monitoring solution, we can check to see how our event loop is performing, whether certain requests are taking longer than others, whether connections are timing out, etc.

The icing on the cake for your performance tests is to consider putting them into your build and test pipeline. One way to do this is to run your performance tests overnight so that you can review them every morning. Artillery provides a nice, simple way of creating these reports, which can help you spot any Node.js performance regressions.
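One lightweight way to wire this into a pipeline is a small gate script that fails the build when a latency percentile regresses. The report path and JSON field names below are assumptions about the JSON summary your load-testing tool writes (Artillery can emit one via its --output flag), so adjust them to your tool's actual format.

```javascript
const fs = require('fs');

// Hypothetical report location and structure; adapt to your load-testing tool's JSON output.
const report = JSON.parse(fs.readFileSync('report.json', 'utf8'));
const p95 = report.aggregate.latency.p95; // assumed field: 95th-percentile latency in ms

const BUDGET_MS = 250; // example performance budget

if (p95 > BUDGET_MS) {
  console.error(`p95 latency ${p95} ms exceeds the ${BUDGET_MS} ms budget`);
  process.exit(1); // a non-zero exit fails the CI job
}
console.log(`p95 latency ${p95} ms is within budget`);
```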


That’s a wrap.

Today, we covered the event loop's relevance for the performance of your JavaScript application, how to choose your performance testing tooling, how to set up consistent performance test profiles with Artillery, what monitoring you'll want to set up to diagnose Node.js performance issues, and, finally, how and when to run your performance tests to get the most value for you and your team.

Experiment with monitoring tools, like Retrace APM for Node.js, make small changes so you can test the impact of changes, and review your test reports frequently so you can spot regressions. Now you have all you need to leverage Node.js performance capabilities and write a super performant application that your users love!

  • Node.js Error Handling Best Practices: Ship With Confidence - January 21, 2022
  • Flamegraph: How to Visualize Stack Traces and Performance - July 3, 2019
  • Docker Performance Improvement: Tips and Tricks - April 4, 2019
  • Node.js Performance Testing and Tuning - January 11, 2019
  • Winston Logger Ultimate Tutorial: Best Practices, Resources, and Tips - December 31, 2018