
Open MQTT Benchmarking Comparison: EMQX vs VerneMQ

May Jin
Apr 25, 2023 · MQTT

The blog post Open MQTT Benchmark Suite: The Ultimate Guide to MQTT Performance Testing introduced the Open MQTT Benchmark Suite developed by EMQ. We defined the MQTT benchmark scenarios, use cases, and observation metrics in the GitHub project. Based on community activity and GitHub popularity, the top 4 open-source MQTT brokers in 2023 – EMQX, Mosquitto, NanoMQ, and VerneMQ – were chosen for the benchmark tests.

This blog series presents the benchmark test results and aims to help you choose a suitable MQTT broker based on your needs and use cases.

This is the last post of the blog series; it presents the benchmarking results of EMQX and VerneMQ. A detailed comparison of the features and capabilities of the two brokers is available in another post.

MQTT Benchmark Scenarios and Use Cases

The MQTT Benchmark Suite defines two sets of benchmark use cases: the Basic Set, for small-scale performance verification, and the Enterprise Set, for enterprise-level verification.

Detailed descriptions of the testing scenarios are already available in the GitHub project; for convenience, we briefly list them here as well, with short client-side sketches (in Python) after each list.

All the tests are executed against a single broker node.

Use Cases

Basic Set

  • Point-to-Point: p2p-1K-1K-1K-1K
    • 1k publishers, 1k subscribers, 1k topics
    • Each publisher pubs 1 message per second
    • QoS 1, payload 16B
  • Fan-out: fanout-1-1k-1-1K
    • 1 publisher, 1 topic, 1000 subscribers
    • 1 publisher pubs 1 message per second
    • QoS 1, payload 16B
  • Fan-in: sharedsub-1K-5-1K-1K
    • 1k publishers, 1k pub topics
    • 5 subscribers consume all messages via a shared subscription
    • Publish rate: 1k/s (each publisher pubs a message per second)
    • Shared subscription’s topic: $share/perf/test/#
    • Publish topics: test/$clientid
    • QoS 1, payload 16B
  • Concurrent connections: conn-tcp-10k-100
    • 10k connections
    • Connection rate (cps): 100/s
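
To make the point-to-point pattern above concrete, here is a minimal publisher/subscriber sketch using the Eclipse Paho Python client (paho-mqtt 1.x API assumed). The broker address and topic names are placeholders, and the actual tests were driven by XMeter, not by scripts like this.

```python
# Point-to-point sketch: one publisher sends 1 msg/s with QoS 1 and a
# 16-byte payload to a topic that exactly one subscriber listens on.
# Illustrative only -- the benchmark itself was driven by XMeter.
import time
import paho.mqtt.client as mqtt

BROKER = "broker.local"  # placeholder, not the benchmark's address

sub = mqtt.Client(client_id="sub-0001", clean_session=True)
sub.on_message = lambda c, u, m: print(f"got {len(m.payload)}B on {m.topic}")
sub.connect(BROKER, 1883, keepalive=300)   # common config: keep alive 300s
sub.subscribe("bench/p2p/0001", qos=1)
sub.loop_start()

pub = mqtt.Client(client_id="pub-0001", clean_session=True)
pub.connect(BROKER, 1883, keepalive=300)
pub.loop_start()

payload = b"\x00" * 16                     # 16-byte payload, per the spec
while True:
    pub.publish("bench/p2p/0001", payload, qos=1)
    time.sleep(1)                          # 1 message per second
```

In the real test, 1,000 such publisher/subscriber pairs run concurrently, each on its own topic.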

Enterprise Set

  • Point-to-Point: p2p-50K-50K-50K-50K
    • 50k publishers, 50k subscribers, 50k topics
    • Each publisher pubs 1 message per second
    • QoS 1, payload 16B
  • Fan-out: fanout-5-1000-5-250K
    • 5 publishers, 5 topics, 1000 subscribers (each subscribed to all topics)
    • Total publish rate: 250 msgs/s, so the subscribe rate = 250 × 1000 = 250k msgs/s
    • QoS 1, payload 16B
  • Fan-in: sharedsub-50K-500-50K-50K
    • 50k publishers, 50k pub topics
    • Publish rate: 50k/s (each publisher pubs a message per second)
    • Use a shared subscription to consume the data (500 subscribers share the subscription, so that slow consumption by individual subscribers does not affect broker performance)
    • Shared subscription’s topic: $share/perf/test/#
    • Publish topics: test/$clientid
    • QoS 1, payload 16B
  • Concurrent connections: conn-tcp-1M-5K
    • 1M connections
    • Connection rate (cps): 5000/s
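
The fan-in cases rely on MQTT shared subscriptions: subscribers that join the same group (here, perf) each receive a share of the messages published to the matching topics, rather than every message. A minimal consumer-group sketch, under the same assumptions as the previous snippet:

```python
# Shared-subscription fan-in sketch: 5 consumers join the group "perf",
# and the broker load-balances messages on test/# across them, so each
# message is delivered to exactly one group member.
import paho.mqtt.client as mqtt

BROKER = "broker.local"  # placeholder

def on_message(client, userdata, msg):
    print(f"consumer {userdata} got a message on {msg.topic}")

consumers = []
for i in range(5):  # 5 shared subscribers, as in sharedsub-1K-5-1K-1K
    c = mqtt.Client(client_id=f"shared-{i}", clean_session=True, userdata=i)
    c.on_message = on_message
    c.connect(BROKER, 1883, keepalive=300)
    c.subscribe("$share/perf/test/#", qos=1)
    c.loop_start()
    consumers.append(c)
```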

Common MQTT Config

| Config                 | Value      |
| ---------------------- | ---------- |
| Keep alive             | 300s       |
| Clean session          | true       |
| Authentication enabled | no         |
| TLS enabled            | no         |
| Test duration          | 30 minutes |
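
In client terms, these settings map to the connect options used in the sketches above. The paced loop below also illustrates what a fixed connection rate (cps) means in the connection scenarios; it is a deliberate simplification of how a load generator ramps up connections (a real generator multiplexes sockets instead of spawning one client loop per connection):

```python
# Sketch: open N MQTT connections at a fixed rate (cps) with the common
# config above (keep alive 300s, clean session, plain TCP, no auth/TLS).
# Simplified illustration of a load-generator ramp; not XMeter code.
import time
import paho.mqtt.client as mqtt

BROKER = "broker.local"   # placeholder
N, CPS = 10_000, 100      # conn-tcp-10k-100: 10k connections at 100/s

clients = []
for i in range(N):
    c = mqtt.Client(client_id=f"conn-{i:06d}", clean_session=True)
    c.connect(BROKER, 1883, keepalive=300)
    c.loop_start()        # keeps the connection alive (PINGREQ, etc.)
    clients.append(c)
    time.sleep(1 / CPS)   # pace the ramp: CPS new connections per second
```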

Testbed

The test environment is configured on AWS, and all virtual machines are within a VPC (virtual private cloud) subnet.

Broker Machine Details

  • Public cloud: AWS
  • Instance type: c5.4xlarge (16 vCPU, 32 GB memory)
  • OS: Ubuntu 22.04.1 amd64

Test Tool

XMeter is used in this benchmark test to simulate various business scenarios. XMeter is built on top of JMeter, but with enhanced scalability and more capabilities, and it provides comprehensive, real-time test reports during the test. Additionally, its built-in monitoring tools are used to track the resource usage of the EMQX/VerneMQ server, enabling a comparison with the information reported by the operating system.

XMeter is available both as a private (on-premise) deployment and as a public cloud SaaS. In this test, a private XMeter is deployed in the same VPC as the MQTT broker server.

Software Versions

| Software | Version  |
| -------- | -------- |
| EMQX     | 4.4.16   |
| VerneMQ  | 1.12.6.2 |
| XMeter   | 3.2.4    |

Benchmarking Results

Basic Set

Point-to-Point: 1K:1K

|         | Average pub-to-sub latency (ms) | Max CPU (user+system) | Avg CPU (user+system) | Max memory used | Avg memory used |
| ------- | ---- | --- | --- | ---- | ---- |
| EMQX    | 0.27 | 4%  | 2%  | 510M | 495M |
| VerneMQ | 0.33 | 10% | 6%  | 1.3G | –    |

Fan-out 1k QoS 1

|         | Average pub-to-sub latency (ms) | Max CPU (user+system) | Avg CPU (user+system) | Max memory used | Avg memory used |
| ------- | ----- | --- | --- | ---- | ---- |
| EMQX    | 3     | 2%  | 1%  | 475M | 460M |
| VerneMQ | 21.55 | 4%  | 2%  | 1.2G | 1.1G |

Fan-in 1k - shared subscription QoS 1

|         | Average pub-to-sub latency (ms) | Max CPU (user+system) | Avg CPU (user+system) | Max memory used | Avg memory used |
| ------- | ---- | --- | --- | ---- | ---- |
| EMQX    | 0.19 | 3%  | 2%  | 468M | 460M |
| VerneMQ | 0.34 | 6%  | 5%  | 1.3G | 1.2G |

10K connections cps 100

|         | Average latency (ms) | Max CPU (user+system) | Avg CPU (user+system) | Max memory used | Memory stable at |
| ------- | ---- | --- | --- | ---- | ---- |
| EMQX    | 0.74 | 2%  | 1%  | 540M | 510M |
| VerneMQ | 0.89 | 3%  | 0%  | 1.1G | 1.0G |

Enterprise Set

Point-to-Point: p2p-50K-50K-50K-50K

Metrics

|         | Actual msg rate | Average pub-to-sub latency (ms) | Max CPU (user+system) | Avg CPU (user+system) | Max memory used | Avg memory used |
| ------- | ------- | -------- | --- | --- | ----- | ----- |
| EMQX    | 50k:50k | 1.58     | 88% | 80% | 5.71G | 5.02G |
| VerneMQ | 50k:50k | 2,136.62 | 91% | 90% | 6.30G | 6.02G |

EMQX keeps a stable pub and sub rate of 50,000 msgs/s throughout the 30-minute test. VerneMQ is able to handle the target 50k incoming and outgoing message throughput, but its latency is much higher.

pub-to-sub latency percentiles

| Latency (ms) | EMQX | VerneMQ |
| ------------ | ---- | ------- |
| p50          | 1    | 467     |
| p75          | 1    | 2,937   |
| p90          | 2    | 6,551   |
| p95          | 4    | 9,517   |
| p99          | 18   | 16,500  |
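
For reference, pub-to-sub latency is typically measured by stamping each message with its send time and taking the difference on the subscriber side, then reading percentiles off the sorted samples. The sketch below only illustrates the idea (it assumes the 16-byte payload carries a nanosecond timestamp in its first 8 bytes, and that publisher and subscriber clocks are synchronized); it is not how XMeter computes these numbers internally.

```python
# Generic pub-to-sub latency measurement and nearest-rank percentiles.
# Illustration of the metric only -- not XMeter's implementation.
import math
import struct
import time

samples_ms = []

def on_message(client, userdata, msg):
    # Publisher builds its 16B payload as:
    #   struct.pack(">q", time.time_ns()) + b"\x00" * 8
    (sent_ns,) = struct.unpack_from(">q", msg.payload)
    samples_ms.append((time.time_ns() - sent_ns) / 1e6)

def percentile(p):
    """Nearest-rank percentile (p in (0, 100]) of the collected samples."""
    s = sorted(samples_ms)
    return s[max(0, math.ceil(p / 100 * len(s)) - 1)]

# After a run: percentile(50), percentile(95), percentile(99), ...
```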


Fan-out: fanout-5-1000-5-250K

Metrics

|         | Actual msg rate | Average pub-to-sub latency (ms) | Max CPU (user+system) | Avg CPU (user+system) | Max memory used | Avg memory used |
| ------- | ---- | --------- | --- | --- | ----- | ----- |
| EMQX    | 250k | 1.99      | 73% | 71% | 530M  | 483M  |
| VerneMQ | 82k  | 11,802.11 | 93% | 92% | 3.01G | 2.94G |

In this scenario, VerneMQ cannot reach the target message rate; its throughput fluctuates around 82,000 msgs/s. EMQX, by contrast, keeps a stable rate of over 250,000 msgs/s throughout the test.

pub-to-sub latency percentiles

| Latency (ms) | EMQX | VerneMQ |
| ------------ | ---- | ------- |
| p50          | 2    | 11,966  |
| p75          | 2    | 12,551  |
| p90          | 3    | 13,060  |
| p95          | 3    | 13,357  |
| p99          | 4    | 13,884  |


Fan-in: sharedsub-50K-500-50K-50K

Metrics

|         | Actual msg rate | Average pub-to-sub latency (ms) | Max CPU (user+system) | Avg CPU (user+system) | Max memory used | Avg memory used |
| ------- | -------------------- | ---------- | --- | --- | ------ | ----- |
| EMQX    | pub: 50k, sub: 50k   | 1.47       | 94% | 93% | 8.19G  | 6.67G |
| VerneMQ | pub: 7.6k, sub: 3.5k | 116,888.61 | 83% | 74% | 12.16G | 8.38G |

pub-to-sub latency percentiles

| Latency (ms) | EMQX | VerneMQ |
| ------------ | ---- | ------- |
| p50          | 1    | 128,251 |
| p75          | 1    | 132,047 |
| p90          | 2    | 135,239 |
| p95          | 2    | 137,106 |
| p99          | 19   | 140,528 |


Concurrent connections: conn-tcp-1M-5K

Metrics

|         | Average latency (ms) | Max CPU (user+system) | Avg CPU (user+system) | Max memory used | Memory stable at |
| ------- | ---- | --- | --- | ------ | ---------- |
| EMQX    | 2.4  | 35% | 22% | 10.77G | 8.68G      |
| VerneMQ | 2.47 | 44% | 25% | 22.4G  | not stable |

During the 30-minute test, VerneMQ's memory usage kept increasing: it rose from 18GB, once the 1 million connections had been established, to 22.4GB at the end of the test.

Latency percentiles

| Latency (ms) | EMQX | VerneMQ |
| ------------ | ---- | ------- |
| p50          | 2    | 2       |
| p75          | 2    | 2       |
| p90          | 2    | 2       |
| p95          | 2    | 3       |
| p99          | 3    | 3       |


Conclusion

EMQX and VerneMQ show similar performance in the basic test cases. In the enterprise-level tests, EMQX outperformed VerneMQ across all scenarios. As stated in another post, EMQX is one of the best choices for deploying MQTT brokers in production in 2023.
