
HiveMQ 3.3.2 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.3.2. This is a maintenance release for the 3.3 series and brings the following improvements:

  • The Web UI now shows the maximum offline queue size and strategy for clients
  • Improved unsubscribe performance
  • The Web UI now shows a warning if different HiveMQ versions are in a cluster
  • At startup, HiveMQ shows the enabled cipher suites and protocols for TLS listeners in the log
  • The Web UI Dashboard now shows MQTT Publishes instead of all MQTT messages in the graphs
  • Updated integrated native SSL/TLS library to latest version
  • Improved message ordering while the cluster topology changes
  • Fixed a cosmetic NullPointerException with background cleanup jobs
  • Fixed an issue where Web UI Popups could not be closed on IE/Edge and Safari
  • Fixed an issue which could lead to an IllegalArgumentException with a QoS 0 message in a rare edge-case
  • Improved persistence migrations for updating single HiveMQ deployments
  • Fixed a reference counting issue
  • Fixed an issue with rolling upgrades if the AsyncMetricService is used while the update is in progress

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.3.x user.

Have a great day,
The HiveMQ Team


HiveMQ 3.2.9 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.2.9. This is a maintenance release for the 3.2 series and brings the following improvements:

  • Improved Logging for configured TLS Cipher Suites
  • Improved Retained Message Metrics
  • Improved support for Java 9
  • Fixed an issue where the metric half-full-queue.count could show an incorrect value
  • Fixed an issue that could cause cluster nodes to wait for operational nodes on startup indefinitely
  • Improved payload reference counting for single node deployments
  • Fixed an issue with rolling upgrades in an edge case where a node with a newer version is joining during network-split
  • Improved Shutdown behaviour for OnPublishReceivedCallbacks and plugin system services

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.2.x user.

Have a great day,
The HiveMQ Team

HiveMQ 3.3.3 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.3.3. This is a maintenance release for the 3.3 series and brings the following improvements:

  • Adds global option to rate-limit plugin service calls
  • Improved Logging for configured TLS Cipher Suites
  • Improved Retained Message Metrics
  • Improved support for Java 9
  • Fixed an issue where the metric half-full-queue.count could show an incorrect value
  • Fixed an issue that could cause cluster nodes to wait for operational nodes on startup indefinitely
  • Improved payload reference counting for single node deployments
  • Fixed an issue with rolling upgrades in an edge case where a node with a newer version is joining during network-split
  • Improved Shutdown behaviour for OnPublishReceivedCallbacks and plugin system services
  • Fixed an issue where assignments in the ClientGroupingService got cleaned up prematurely
  • Improved example configuration file for in-memory persistence

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.3.x user.

Have a great day,
The HiveMQ Team

HiveMQ 3.3.4 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.3.4. This is a maintenance release for the 3.3 series and brings the following improvements:

  • Increased performance when a node joins an existing cluster with a lot of stored queued messages
  • Fixed subscription metric showing incorrect values after cluster topology change
  • Improved cleanup of expired information in the cluster to reduce memory usage with lots of short lived clients
  • Fixed a reference counting issue when there is a conflict for outgoing messages flows after network split
  • Fixed an issue with rolling upgrades in an edge case where a new node is joining during network-split
  • Improved compatibility of operating system metrics and Java 9

You can download the new HiveMQ version here.

We strongly recommend upgrading if you are a HiveMQ 3.3.x user.

Have a great day,
The HiveMQ Team

What’s new in HiveMQ 3.4


We are pleased to announce the release of HiveMQ 3.4. This is the most resilient and advanced version of HiveMQ ever. The main focus of this release is on the needs of the most ambitious MQTT deployments in the world: maximum performance and resilience for millions of concurrent MQTT clients. Of course, deployments of all sizes can profit from the improvements in the latest and greatest HiveMQ.

This version is a drop-in replacement for HiveMQ 3.3 and of course supports rolling upgrades with zero-downtime.

HiveMQ 3.4 brings many features that your users, administrators and plugin developers are going to love. These are the highlights:

 

New HiveMQ 3.4 features at a glance

Cluster

HiveMQ 3.4 brings various improvements in terms of scalability, availability, resilience and observability for the cluster mechanism. Many of the new features remain under the hood, but several additions stand out:

Cluster Overload Protection

The new version has a first-of-its-kind Cluster Overload Protection. The whole cluster is able to spot MQTT clients that cause overload on nodes or the cluster as a whole and protects itself from the overload. This mechanism also protects the deployment from cascading failures due to slow or failing underlying hardware (as sometimes seen on cloud providers). This feature is enabled by default and you can learn more about the mechanism in our documentation.

Dynamic Replicates

HiveMQ’s sophisticated cluster mechanism is able to scale in a linear fashion due to extremely efficient and true data distribution mechanics based on a configured replication factor. The most important aspect of every cluster is availability, which is achieved by having eventual consistency functions in place for edge cases. The 3.4 version adds dynamic replicates to the cluster so even the most challenging edge cases involving network splits don’t lead to the sacrifice of consistency for the most important MQTT operations.

Node Stress Level Metrics

All MQTT cluster nodes are now aware of their own stress level and the stress levels of other cluster members. While all stress mitigation is handled internally by HiveMQ, experienced operators may want to monitor the individual nodes' stress levels (e.g. with Grafana) in order to start investigating what caused the increase in load.

WebUI

Operators worldwide love the HiveMQ WebUI introduced with HiveMQ 3.3. We gathered all the fantastic feedback from our users and polished the WebUI, so it’s even more useful for day-to-day broker operations and remote debugging of MQTT clients. The most important changes and additions are:

Trace Recording Download

The unique Trace Recordings functionality is without doubt a lifesaver when the behavior of individual MQTT clients needs further investigation, as all interactions with the broker can be traced, at runtime and at scale! Huge production deployments may accumulate multiple gigabytes of trace recordings. HiveMQ now offers a convenient way to collect all trace recordings from all nodes, zip them, and download the archive via a simple button on the WebUI. Remote debugging was never easier!

Additional Client Detail Information in WebUI

The mission of the HiveMQ WebUI is to provide easy insights to the whole production MQTT cluster for operators and administrators. Individual MQTT client investigations are a piece of cake, as all available information about clients can be viewed in detail. We further added the ability to view the restrictions a concrete client has:

  • Maximum Inflight Queue Size
  • Client Offline Queue Messages Size
  • Client Offline Message Drop Strategy

Session Invalidation

MQTT persistent sessions are one of the outstanding features of the MQTT protocol specification. Sessions which do not expire but are never reused unnecessarily consume disk space and memory. Administrators can now invalidate individual client sessions directly in the HiveMQ WebUI, so sessions that are no longer needed can be deleted safely. After a session is invalidated, HiveMQ 3.4 takes care of releasing the resources on all cluster nodes.

Web UI Polishing

Most texts on the WebUI were revisited and are now clearer and crisper. The help texts also received a major overhaul and should now be more, well, helpful. In addition, many small improvements were added, which are invisible most of the time but are there to help when you need them most. For example, the WebUI now displays a warning if cluster nodes with old versions are in the cluster (which may happen if a rolling upgrade was not finished properly).

Plugin System

One of the most popular features of HiveMQ is the extensive Plugin System, which virtually enables the integration of HiveMQ to any system and allows hooking into all aspects of the MQTT lifecycle. We listened to the feedback and are pleased to announce many improvements, big and small, for the Plugin System:

Client Session Time-to-live for individual clients

HiveMQ 3.3 offered a global configuration for setting the Time-To-Live for MQTT sessions. With the advent of HiveMQ 3.4, users can now programmatically set Time-To-Live values for individual MQTT clients and can discard an MQTT session immediately.

Individual Inflight Queues

While the Inflight Queue configuration is typically sufficient in the HiveMQ default configuration, there are some use cases that require the adjustment of this configuration. It’s now possible to change the Inflight Queue size for individual clients via the Plugin System.
 
 

Plugin Service Overload Protection

The HiveMQ Plugin System is a power-user tool: it makes unbelievably useful modifications possible, but it can also put major stress on the system as a whole if the programmer is not careful. In order to protect the HiveMQ instances from accidental overload, a Plugin Service Overload Protection can be configured. This rate-limits Plugin Service usage and gives feedback to the application programmer in case the rate limit is exceeded. This feature is disabled by default, but we strongly recommend updating your plugins to profit from it.

Session Attribute Store putIfNewer

This is one of the small bits you almost never need, but when you do, you're ecstatic to have it. The Session Attribute Store now offers methods to put values only if the values you want to put are newer or fresher than the values already written. This is extremely useful if multiple cluster nodes want to write to the Session Attribute Store simultaneously, as it guarantees that outdated values can no longer overwrite newer values.
 
 
 
 

Disconnection Timestamp for OnDisconnectCallback

As the OnDisconnectCallback is executed asynchronously, the client might already be gone when the callback is executed. It's now easy to obtain the exact timestamp at which an MQTT client disconnected, even if the callback is executed later on. This feature might be very interesting for many plugin developers in conjunction with the Session Attribute Store putIfNewer functionality.

Operations

We ❤️ Operators and we strive to provide all the tools needed for operating and administering an MQTT broker cluster at scale in any environment. A key strategy for successful operations of any system is monitoring. We added some interesting new metrics you might find useful.

System Metrics

In addition to JVM Metrics, HiveMQ now also gathers Operating System Metrics for Linux systems. HiveMQ is able to see for itself how the operating system views the process, including native memory, the real CPU usage, and open file usage. These metrics are particularly useful if you don't have a monitoring agent for Linux systems set up. All metrics can be found here.

Client Disconnection Metrics

The reality of many MQTT scenarios is that not all clients are able to disconnect gracefully by sending MQTT DISCONNECT messages. HiveMQ now also exposes metrics about clients that disconnected by closing the TCP connection instead of sending a DISCONNECT packet first. This is especially useful for monitoring if you regularly deal with clients that don't have a stable connection to the MQTT brokers.

 

JMX enabled by default

JMX, the Java Management Extensions, is now enabled by default. Many HiveMQ operators use Application Performance Monitoring tools, which are able to hook into the metrics via JMX or use plain JMX for on-the-fly debugging. While we recommend using the official off-the-shelf plugins for monitoring, it's now easier than ever to just use JMX if other solutions are not available to you.
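For a quick on-the-fly look you can use JConsole, which ships with the JDK. This is only a sketch; a remote connection additionally requires an explicitly configured JMX port, which depends on your individual deployment:

# Start JConsole on the machine running HiveMQ and pick the HiveMQ process
# from the list of local JVMs (remote connections need the host:port of your JMX endpoint)
jconsole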

Other notable improvements

The 3.4 release of HiveMQ is full of hidden gems and improvements. While it would be too much to highlight all small improvements, these notable changes stand out and contribute to the best HiveMQ release ever.

Topic Level Distribution Configuration

Our recommendation for all huge deployments with millions of devices is: Start with separate topic prefixes by bringing the dynamic topic parts directly to the beginning. The reality is that many customers have topics that are constructed like the following: “devices/{deviceId}/status”. So what happens is that all topics in this example start with a common prefix, “devices”, which is the first topic level. Unfortunately the first topic level doesn’t include a dynamic topic part. In order to guarantee the best scalability of the cluster and the best performance of the topic tree, customers can now configure how many topic levels are used for distribution. In the example outlined here, a topic level distribution of 2 would be perfect and guarantees the best scalability.

Mass disconnect performance improvements

Mass disconnections of MQTT clients can happen. This might be the case when e.g. a load balancer in front of the MQTT broker cluster drops the connections or if a mobile carrier experiences connectivity problems. Prior to HiveMQ 3.4, mass disconnect events caused stress on the cluster. Mass disconnect events are now massively optimized and even tens of millions of connection losses at the same time won’t bring the cluster into stress situations.

 
 
 
 
 
 

Replication Performance Improvements

Due to the distributed nature of a HiveMQ cluster, data needs to be replicated across the cluster in certain events, e.g. when cluster topology changes occur. There are various internal improvements in HiveMQ version 3.4 which increase the replication performance significantly. Our engineers put special love into the replication of Queued Messages, which is now faster than ever, even for multiple millions of Queued Messages that need to be transferred across the cluster.

Updated Native SSL Libraries

The Native SSL Integration of HiveMQ was updated to the newest BoringSSL version. This results in better performance and increased security. In case you're using SSL and are not yet using the native SSL integration, we strongly recommend giving it a try; performance improvements of more than 40% can be observed for most deployments.

 
 

Improvements for Java 9

While Java 9 was already supported for older HiveMQ versions, HiveMQ 3.4 has full-blown Java 9 support. The minimum Java version remains Java 7, although we strongly recommend using Java 8 or newer for the best performance of HiveMQ.

HiveMQ 3.2.10 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.2.10.

HiveMQ 3.2.10 is the last release of the 3.2.x series. We recommend upgrading to the HiveMQ 3.3 or 3.4 series.

It is a maintenance release which brings the following improvements:

  • Improved cleanup of expired information in the cluster to reduce memory usage with lots of short lived clients
  • Fixed an issue where the session.count metric was not decremented correctly
  • Fixed subscription metric showing incorrect values after cluster topology change
  • Fixed an issue with rolling upgrades in an edge case where a new node is joining during network-split
  • Fixed an issue where SubscriptionStore.getSubscriptions() returned inconsistent data
  • Fixed an issue where message flow was not consistent after node restart
  • Fixed errorlogs during the HiveMQ shutdown process
  • Improved Java 10 compatibility for Windows deployments

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.2.x user.

Have a great day,
The HiveMQ Team

Monitoring HiveMQ with Prometheus and Grafana


System monitoring is an essential part of any production-software deployment. Monitoring your MQTT brokers is crucial, especially in clustered environments. Classic challenges to effective monitoring include a lack of cohesive tools and the presence of a wrong mindset. It’s important not to fall victim to the false sense of security these factors create.

You need to monitor your system.

In this blog post, we’ll walk you through a detailed, step-by-step guide to set up the Prometheus application with HiveMQ. The goal: allow you to efficiently monitor massive amounts of your available HiveMQ metrics with Prometheus. Prometheus is one of the most popular solutions for monitoring distributed systems on the market today. In our opinion, it is the perfect companion for HiveMQ when it comes to monitoring.

To support the integration of cohesive monitoring tools, we include the JVM Metrics Plugin and the JMX Plugin in the core distribution of HiveMQ. The JVM Plugin adds crucial JVM metrics to the HiveMQ metrics that are already available and the JMX Plugin enables JMX monitoring for any JMX monitoring tool. For example, JConsole.

Real-time monitoring with tools like JConsole is certainly better than nothing, but some disadvantages exist: HiveMQ is often deployed with Docker, and therefore direct access to the HiveMQ process might not be possible. In addition, time-series monitoring solutions like Prometheus function as great debugging tools when you need to find the root cause of problems in your production environments.

The AWS Cloudwatch Plugin, Graphite Plugin, InfluxDB Plugin and Prometheus Plugin are free-of-charge and ready-to-use plugins that HiveMQ provides to enable time-series monitoring.

Prometheus and Grafana

We are often asked to recommend monitoring tools. So far, we have had good experiences with Prometheus. However, the tool that you choose to use is ultimately your decision and needs to reflect your personal preferences.

Prometheus is flexible. You can use Prometheus as a time-series database that gathers and stores metrics, which your existing or preferred metric-visualization program can then use as a data source. Or, you can use Prometheus as an all-in-one solution for both gathering metrics and generating your metric visualizations.
This blog post shows you how to use Prometheus to gather and visualize your HiveMQ metrics. We will also show you how to create a monitoring dashboard using Prometheus as a data source in Grafana.

Example Dashboard

Installation and configuration

In this installation, we want our HiveMQ clusters to report their metrics to Prometheus. Then, we can set up a Grafana dashboard for real-time monitoring of our HiveMQ metrics.

To fulfill our plan, we’ll need three pieces of software in addition to our HiveMQ cluster:

  • The HiveMQ Prometheus Monitoring Plugin
  • Prometheus
  • Grafana

Installing the Prometheus HiveMQ Plugin

HiveMQ offers a wide range of off-the-shelf and ready-to-use plugin extensions. One of these plugins is the HiveMQ Prometheus Monitoring Plugin. The installation of this plugin, like all HiveMQ plugins, is very simple:

  • Download the distribution
  • Unpack the zip file
  • Move the prometheus-monitoring-plugin.jar file to the plugins folder
  • Move the prometheusConfiguration.properties file to the /conf folder of your HiveMQ installation (a shell sketch follows below)
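The same steps as a minimal shell sketch, assuming a default HiveMQ installation under /opt/hivemq; the archive name is illustrative:

# Unpack the plugin distribution (archive name is illustrative)
unzip prometheus-monitoring-plugin-distribution.zip

# Move the plugin jar into the HiveMQ plugins folder
mv prometheus-monitoring-plugin.jar /opt/hivemq/plugins/

# Move the plugin configuration into the HiveMQ conf folder
mv prometheusConfiguration.properties /opt/hivemq/conf/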

Note: Always adjust the prometheusConfiguration.properties file to suit your individual needs and make sure that the IP address of the network interface can be reached by your Prometheus server.

# Prometheus Monitoring Plugin Configuration
#
# -------------------------------------------------------------------------


# The ip where the servlet will be hosted
ip=<your-ip>

# The port where the servlet will work on
port=9399

# The path for the servlet which gets called by prometheus
# (IMPORTANT: /servlet will be inserted between <ip>:<port> and <metric_path>
# For example 127.0.0.1:9399/servlet/metrics
metric_path=/metrics
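Once HiveMQ has been restarted with the plugin in place, you can verify that the endpoint serves metrics. This is a minimal check, assuming the ip and port values from the configuration above:

# Query the metrics endpoint exposed by the HiveMQ Prometheus Monitoring Plugin
# (note the /servlet prefix in front of the configured metric_path)
curl http://<your-ip>:9399/servlet/metrics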

Installing Prometheus

The next step is to install the Prometheus application on a machine of your choice. In our experience, you should not run Prometheus on the same machine on which you are running HiveMQ.

To install Prometheus, follow the Prometheus Guide.

A working prometheus.yml file, based on the HiveMQ Prometheus Plugin configuration in this post, looks like this:

global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'hivemq'
    scrape_interval: 5s
    # prepending '/servlet' to the metrics_path we configured in the HiveMQ Prometheus Plugin
    metrics_path: '/servlet/metrics'
    static_configs:
      #using port 9399 because we configured it in the HiveMQ Prometheus Plugin
      - targets: ['<node1-ip>:9399', '<node2-ip>:9399']

Note: This example is tailored for a 2-node cluster. If you have more nodes, you need to add them to the targets. You also need to prepend “/servlet” to the metric_path that you configured in your prometheusConfiguration.properties file.
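With the configuration file in place, Prometheus can be started against it. A minimal sketch, assuming you run the binary from the unpacked Prometheus distribution directory:

# Start Prometheus with the configuration file shown above
./prometheus --config.file=prometheus.yml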

Using Prometheus for displaying metrics

Prometheus is more than just a data source for monitoring dashboards like Grafana. Additionally, Prometheus comes with built-in functionality to display metrics on-the-fly.
This ability is particularly helpful when you want an in-depth look into specific metrics that you don’t monitor constantly.
To take a look, navigate to http://<prometheus-host-ip>:9090/. When Prometheus and the HiveMQ Prometheus plugin are configured correctly, you can access your HiveMQ metrics in the Expression field.

Displaying HiveMQ metrics in Prometheus

Installing Grafana

The next step on our way to building a monitoring dashboard is installing and starting Grafana.
Grafana works out of the box and is reached via localhost:3000.
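If you prefer to run Grafana in a container instead of installing it natively, the official Docker image is a quick way to get started. This is just a sketch, not a requirement of the setup:

# Run Grafana in Docker and expose the default port 3000
docker run -d -p 3000:3000 --name grafana grafana/grafana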

Once Grafana is up and running, we can configure Prometheus to be the data source for Grafana.

Step 1: Add Data Source


Step 2: Configuring Prometheus

Now, we can focus on the dashboard. In response to the high number of questions we receive about dashboards, the HiveMQ team has put together a great little dashboard template that displays the key metrics for most MQTT deployments. Use the template as a convenient starting point for building a dashboard that is perfectly tailored to your individual use case.

Download the template right here.
The JSON file inside the zip can be imported to Grafana.

Step 3: Import Dashboard


That’s it. We now have a working dashboard that displays our metrics and provides the type of monitoring that has proven vital in many MQTT deployments.

This is just one possibility for monitoring your MQTT use case. Your individual requirements can vary. We suggest reading the getting started guide from Grafana to decide what works best for you and your deployment.

Summary and resource list

Monitoring is an important part of operations for any application and HiveMQ is no exception. As you can see from this blog post, it is not difficult to create a monitoring setup for HiveMQ with Prometheus and Grafana. We hope that our dashboard template gets you off to a good start and strongly recommend that you fine tune your dashboards to meet the individual needs of each deployment.

Here are some useful resources:

HiveMQ 3.4.1 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.4.1. This is a maintenance release for the 3.4 series and brings the following improvements:

  • Improved memory usage for cluster replication in large deployments
  • Improved performance for operating system metrics
  • Improved logging for the cluster merging process
  • Improved configuration for operating system metrics
  • Improved memory usage for retained messages which are created via the plugin system
  • Improved start scripts for Windows 10
  • Added better handling for temporary folder permissions on Windows
  • Improved behavior when sources for operating system metrics are not available
  • Improved plugin service for retained messages in single node deployments
  • Fixed an issue when a node with a newer version is leaving the cluster during a rolling upgrade
  • Fixed an edge case with rolling upgrades during a network-split
  • Added validation for empty topics for subscriptions which are created via plugin services

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.4.x user.

Have a great day,
The HiveMQ Team


Using Syslog with HiveMQ



Logging is a key ingredient in diagnosing, monitoring and troubleshooting applications. Logging lets you see what your application is actually doing and helps you debug unwanted behavior.

MQTT brokers like HiveMQ are critical infrastructure components that should be monitored and connected to your company’s central logging system. When it comes to choosing a centralized logging system for your MQTT broker, there are more options available today than ever before. For example, the ELK Stack (Elasticsearch, Logstash, Kibana) and Splunk. Solutions such as Syslog continue to be very popular and command a sizable market share. Naturally, HiveMQ integrates seamlessly with a Syslog based logging infrastructure.

Customized logging

To give our customers the best possible logging experience, HiveMQ has implemented the powerful logback logging framework. Logback offers a variety of features that let you tailor your HiveMQ logging to meet your individual needs.

Consolidating your log files

HiveMQ is designed to scale to millions of concurrently connected devices and to handle a throughput of hundreds of thousands of messages per second. To reach these numbers in your MQTT deployment, you need to scale out horizontally by taking advantage of the HiveMQ cluster feature.
Many HiveMQ customers run demanding deployments with dozens of HiveMQ broker nodes.
When you work with such a high number of nodes, consolidating the log files from all your brokers is a must. If you don’t consolidate your log files centrally, every debugging scenario creates the necessity to manually sift through dozens of log files. This situation becomes worse if your MQTT deployment uses a load balancer (which is usually the case in ambitious MQTT deployments). If a load balancer is in use, it is not possible to identify which HiveMQ node a single client was connected to when the incident that needs to be debugged occurred.

Syslog advantages

Syslog has long been considered the de facto standard for message logging. Syslog decouples the software that generates messages from the system that stores the messages. Use Syslog to consolidate the log files from multiple HiveMQ broker nodes into a single log file that is easier to manage and analyze. A configurable prefix in the log statements ensures that each statement can be associated with the HiveMQ node on which it was created.

Enabling Syslog in your HiveMQ deployment

Syslog can be enabled easily by adding a Syslog appender to the logback.xml file.
Here’s an example of an appender:

<configuration>
    ...

    <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">

        <!-- IP address of your syslog server -->
        <syslogHost>$IP-Address</syslogHost>

        <facility>user</facility>
        <!-- replace X with the actual node -->
        <suffixPattern>[nodeX] %-30(%d %level)- %msg%n%ex</suffixPattern>
    </appender>

    <root level="DEBUG">
        <appender-ref ref="SYSLOG" />
    </root>

    ...
</configuration>

Simply replace $IP-Address with the actual address of the Syslog server that you are using.

That’s it.

When you add the appender to the logback.xml configuration file, you enable the use of a Syslog server and can start benefitting from consolidated HiveMQ log files.
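Before restarting HiveMQ, you may want to check that the Syslog server is reachable from the HiveMQ host. A sketch, assuming the util-linux logger utility is available and the default Syslog UDP port 514 is used:

# Send a test message to the remote Syslog server over UDP
logger --server <syslog-server-ip> --port 514 --udp "HiveMQ syslog connectivity test"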

HiveMQ 4 EAP is here!



We recently released an Early Access Preview Version of HiveMQ 4. HiveMQ 4 is one of the first MQTT Brokers to fully implement the MQTT 5 specification and is the first enterprise-grade MQTT broker available for MQTT 5.

HiveMQ 4 EAP is free of charge and available both as a standalone application and as a Docker image.

You can learn about all new features by clicking the button below.

Learn about HiveMQ 4 EAP now

HiveMQ 3.4.2 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.4.2. This is a maintenance release for the 3.4 series and brings the following improvements:

  • Improved Logging for invalid PUBLISH messages with wildcard characters (# or +) in the topic.
  • Improved performance for UNSUBSCRIBE messages that include a large amount of topic filters.
  • Improved Keep Alive handling if MQTT messages are sent by clients before the CONNACK is received.
  • Fixed an issue when the client identifier is changed by a plugin in the OnConnectCallback.
  • Improved handling for persistent clients when the Inflight-queue limit is exceeded.
  • Removed some cosmetic error logs on HiveMQ shutdown or when a client disconnects.
  • Improved handling for max-connections throttling configuration.
  • Fixed an issue where com.hivemq.logger.* metrics are not showing any values.
  • Fixed an issue for shared subscriptions that start with a leading slash when a cluster node is restarted.
  • Improved performance for cluster joins with a topic-level-distribution larger than 1.
  • Fixed an issue with OCSP stapling when Elliptic Curve certificates are used.
  • Improved Plugin System Authentication to include authentication for the Web UI.
  • Improved the plugin system method publishToClient in the PublishService. Messages are now queued for offline clients with clean-session=false in cluster and single mode.
  • Updated integrated native SSL/TLS library to latest version.
  • Improved cluster stability in rare network-split edge cases.
  • Improved plugin system Async- and BlockingSubscriptionStore to allow batched Add and Remove operations.

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.4.x user.

Have a great day,
The HiveMQ Team

New on Docker Hub: HiveMQ Images


The popularity of Docker for deploying all kinds of applications and services in a containerized environment has been increasing exponentially. That’s not surprising since orchestration platforms such as Kubernetes, Docker Swarm, and Mesos keep improving functionality for running containers. The ability to integrate Docker with different cloud infrastructures (think load balancers, block storage) and enterprise grade platforms for bare metal installations makes building complex services from basic container images a breeze.

Here at HiveMQ we’re happy to announce the introduction of a continuously updated HiveMQ Docker repository on Docker Hub that can help you streamline your development and deployment efforts.

Overview

Our Docker repository will provide a single location for the container images of select past, current, and future versions of HiveMQ. You can use these images to run HiveMQ instances on the Docker daemon instance of your choice.
We also provide a prebuilt DNS discovery image that already includes the DNS discovery plugin, which is tailor-made for containerized, orchestrated deployments that leverage technologies like Kubernetes.

The images adhere to all Dockerfile best practices as defined by Docker: Best practices for writing Dockerfiles and the documentation in Docker library: official images.

Base Image

Running a basic installation of the most recent HiveMQ version is now as easy as installing Docker and entering a single command:

docker run -p 1883:1883 -p 8080:8080 hivemq/hivemq3

The first time you run the command, it pulls the most recent version of HiveMQ from Docker Hub and runs it in a container. The file system and processes of the HiveMQ container are isolated from the host.
You are now able to connect MQTT clients on port 1883 on your local machine and access the Web UI on http://localhost:8080.
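For a quick smoke test you can publish a message with any MQTT client, for example the Mosquitto command-line client, assuming it is installed on your machine:

# Publish a test message to the HiveMQ container on localhost
mosquitto_pub -h localhost -p 1883 -t test/topic -m "hello hivemq"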

The Base Image is meant to provide a quick possibility to start a HiveMQ single node for testing purposes.
You can also use this base image to build your own custom image and include any files that you need, such as:

  • Customized configuration files
  • Custom plugins and the corresponding configurations
  • License files
  • Custom entry point scripts (for configuring at start up)

DNS Discovery Image

If you want to run a containerized HiveMQ cluster, we strongly recommend using this image. It already contains the HiveMQ DNS cluster discovery plugin and is designed to be used with orchestration software that provides a Round-robin A-record DNS service.

Run a local cluster with Docker Swarm

Note that we do not recommend using Docker Swarm in production.

Start a single node Swarm cluster by running:

docker swarm init

Create an overlay network for the cluster nodes to communicate on:

docker network create -d overlay --attachable myNetwork

Create the HiveMQ service on the network, using the latest HiveMQ DNS discovery Docker image:


docker service create \
  --replicas 3 --network myNetwork \
  --env HIVEMQ_DNS_DISCOVERY_ADDRESS=tasks.hivemq \
  --publish target=1883,published=1883 \
  --publish target=8080,published=8080 \
  -p 8000:8000/udp \
  --name hivemq \
    hivemq/hivemq3:dns-latest

Example HiveMQ Service for Docker Swarm

This provides a 3-node cluster with the MQTT (1883) and Web UI (8080) ports forwarded to the host network, meaning you can connect MQTT clients on port 1883. The connection will be forwarded to any of the cluster nodes.
The HiveMQ Web UI can be used in a single-node cluster. A sticky session for the HTTP requests in clusters with multiple nodes cannot be upheld with this configuration, as the internal load balancer forwards requests in an alternating fashion. To enable the use of sticky sessions, the Docker Swarm Enterprise version is required.

Managing the cluster

To scale the cluster up to 5 nodes, run

docker service scale hivemq=5

To remove the cluster, run

docker service rm hivemq

To read the logs for all HiveMQ nodes in real time, use

docker service logs hivemq -f

To get the log for a single node, get the list of service containers using

docker service ps hivemq

And print the log using

docker service logs <id>

where <id> is the container ID listed in the service ps command.

Adding a HiveMQ license

To use a license with this image, you must first encode the license as a string.

cat path/to/your/license.lic | base64

Then set the resulting string as the value of the HIVEMQ_LICENSE environment variable of the container.
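Putting both steps together, one way to start a single test node with a license is to pass the encoded string directly on the command line. This is a sketch using the base image; depending on your platform you may need base64 -w0 to avoid line wrapping in the encoded value:

# Start HiveMQ with the base64-encoded license passed as an environment variable
docker run -p 1883:1883 -p 8080:8080 \
  -e HIVEMQ_LICENSE="$(cat path/to/your/license.lic | base64)" \
  hivemq/hivemq3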

HiveMQ and Kubernetes

We recommend running HiveMQ with Kubernetes when running a containerized HiveMQ deployment in production.
Please refer to this detailed blog post on running a HiveMQ cluster in Kubernetes for more information.

Building a custom image

As mentioned in the overview, you can build your own image from the provided base image and utilize any of the provided HiveMQ versions. Here is an example of a Dockerfile that does all of the following:


ARG TAG=latest
# (1)
FROM hivemq/hivemq3:${TAG} 
# (2)
ENV MY_CUSTOM_PLUGIN_ENV myvalue 
# (3)
ENV HIVEMQ_CLUSTER_PORT 8000

# (4)
COPY --chown=hivemq:hivemq your-license.lic /opt/hivemq/license/your-license.lic 
COPY --chown=hivemq:hivemq myconfig.xml /opt/hivemq/conf/config.xml
COPY --chown=hivemq:hivemq myplugin.jar /opt/hivemq/plugins/myplugin.jar
COPY --chown=hivemq:hivemq myentrypoint.sh /opt/myentrypoint.sh
# (5)
RUN chmod +x /opt/myentrypoint.sh 
# (6)
ENTRYPOINT ["/opt/myentrypoint.sh"]

Sample Dockerfile for plugin usage

  1. Uses the hivemq/hivemq3:latest image as a base, with a build argument that (optionally) specifies which base tag to use.
  2. Defines an environment variable for the plugin.
  3. Defines an environment variable that is substituted in the HiveMQ configuration file on start up. For details, see Using environment variables for configuration.
  4. Copies required files such as a valid HiveMQ license file, a customized configuration, a custom plugin file and custom entry point to the corresponding folders and applies proper ownership inside the container.
  5. Sets the custom entry point as executable.
  6. Defines the entry point for the image. This definition is optional, but it allows you to run additional commands or programs (for configuration purposes) before you start the actual HiveMQ instance.

Here is one way that you can build the Dockerfile:

docker build --build-arg TAG=3.4.2 -t hivemq-myplugin .

The result is an image built from the HiveMQ base image version 3.4.2, using the current path as the build context. The finished image is tagged locally as hivemq-myplugin:latest.
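The custom image can then be started just like the base image:

# Run the custom image with the MQTT and Web UI ports published
docker run -p 1883:1883 -p 8080:8080 hivemq-myplugin:latest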

Tagging and build scheme

The following tags are designed to simplify the use of the HiveMQ base image in scripts and Dockerfiles:

  • latest: This tag will always point to the latest version of the HiveMQ base image.
  • dns-latest: This tag will always point to the latest version of the HiveMQ DNS discovery image.
  • {version}: Base image providing the given version of the broker (e.g. 3.4.2)
  • dns-{version}: DNS discovery image providing the given version of the broker (e.g. 3.4.2)

The builds are based on the Dockerfiles defined in hivemq-docker-images.

For more information on tags and other metadata, refer to the Docker Hub page.

Running a HiveMQ cluster with Docker and Kubernetes


The use of containerization and orchestration mechanisms like Docker and Kubernetes keeps growing across all kinds of IT deployments. The spread of the DevOps principle and the increasing market share and importance of cloud computing providers like Amazon Web Services or Google Cloud Platform are two of the main contributing factors behind this trend. This blog post shows that, with the HiveMQ DNS discovery Docker image, it is simple and convenient to deploy a HiveMQ cluster with Docker and Kubernetes.

Docker and Kubernetes

Docker is a widespread software project that provides software containers which aim to make application deployments easier and more portable. Kubernetes is an open-source container-orchestration system for automating deployment, scaling and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker.

DNS service discovery

DNS service discovery (DNS-SD) is a way of using standard DNS programming interfaces, servers and packet formats to browse the network for services. The HiveMQ DNS discovery plugin utilizes this technology to provide a discovery method for HiveMQ cluster nodes that is both very dynamic (arbitrary numbers of HiveMQ broker nodes can be added and removed at runtime) and low maintenance. As long as a DNS discovery service providing round-robin A records is running and configured correctly, the configuration effort on the broker side is minimal and details about the logical network configuration are inconsequential.

Round-robin in this case means that the DNS service responds to a request for a service with a list of all available hosts (or pods in Kubernetes) for this service.
A records are DNS records that point to an IP address directly.


$ kubectl exec hivemq-discovery-0 -- nslookup hivemq-discovery
Server: 10.0.0.12 
Address: 10.0.0.12#51 
 
Name: hivemq-discovery.default.svc.cluster.local 
Address: 172.17.0.1 
Name: hivemq-discovery.default.svc.cluster.local 
Address: 172.17.0.2 
Name: hivemq-discovery.default.svc.cluster.local 
Address: 172.17.0.3

Example DNS record

The HiveMQ DNS Discovery Plugin utilizes these A records for cluster discovery.
DNS-SD is a perfect candidate for use with orchestrated containerized environments or most cloud providers.

Installation and configuration

To achieve a dynamic HiveMQ cluster with Kubernetes, which uses the Kubernetes DNS-SD for cluster discovery, the following is required:

  • HiveMQ Docker image
  • HiveMQ DNS discovery plugin
  • Kubernetes cluster environment

HiveMQ Docker image with DNS cluster discovery

A Kubernetes cluster requires containerized services to build upon. In this example, we will be using the HiveMQ DNS Docker image from Docker Hub, which includes the HiveMQ DNS Discovery Plugin.
If you want to create your own Docker images, please take a look at our HiveMQ Docker Images Repository.

Creating a Kubernetes headless cluster service

To enable the utilization of DNS-SD in a Kubernetes deployment, a headless service providing round-robin A records is necessary.
A headless Kubernetes service allows each individual pod to be reached directly. It is accomplished by setting clusterIP: None in the service spec.
Creating this headless service also ensures that the DNS service of Kubernetes will return round-robin A records.

An appropriate YAML configuration file for a HiveMQ Kubernetes cluster looks like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: hivemq-replica
spec:
  replicas: 3
  selector:
    app: hivemq-cluster1
  template:
    metadata:
      name: hivemq-cluster1
      labels:
        app: hivemq-cluster1
    spec:
      containers:
      - name: hivemq-pods
        image: hivemq/hivemq3:dns-latest 
        ports:
        - containerPort: 8080
          protocol: TCP
          name: web-ui
        - containerPort: 1883
          protocol: TCP
          name: mqtt
        env:
        - name: HIVEMQ_DNS_DISCOVERY_ADDRESS
          value: "hivemq-discovery.default.svc.cluster.local." 
        - name: HIVEMQ_DNS_DISCOVERY_TIMEOUT
          value: "20"
        - name: HIVEMQ_DNS_DISCOVERY_INTERVAL
          value: "21"
        readinessProbe:
          tcpSocket:
            port: 1883
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
        livenessProbe:
          tcpSocket:
            port: 1883
          initialDelaySeconds: 30
          periodSeconds: 60
          failureThreshold: 60
---
kind: Service
apiVersion: v1
metadata:
  name: hivemq-discovery
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  selector:
    app: hivemq-cluster1
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
  clusterIP: None

hivemq-k8s.yml


Line 17: We are using the official HiveMQ 3 DNS discovery image from Docker Hub.
Line 27: Make sure to set the value for HIVEMQ_DNS_DISCOVERY_ADDRESS according to your Kubernetes namespace and configured domain.
Line 58: We set clusterIP: None, making the hivemq-discovery service headless.
Download this file.

Go to the Kubernetes Web UI, click “+CREATE”, choose “Upload YAML or JSON file”, select the hivemq-k8s.yml file, and press “Upload”.

As soon as Kubernetes has finished building the environment, we have a working 3-node HiveMQ cluster.
To verify that the cluster discovery is working properly, go to the hivemq-replica replication controller and check the log file for one of the pods.

A correct configuration will result in a log line reading

... INFO  - Cluster size = 3, members ...

This is a sure sign that the cluster discovery is working properly.
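The same check can also be done from the command line with kubectl instead of the Kubernetes Web UI. A sketch, where the pod name comes from the output of the first command:

# List the pods created by the replication controller
kubectl get pods -l app=hivemq-cluster1

# Inspect the HiveMQ log of one pod for the reported cluster size
kubectl logs <pod-name> | grep "Cluster size"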

Creating necessary load balancers

The next and last step to our functional HiveMQ cluster is creating 2 separate load balancers for the MQTT (1883) and Web UI (8080) ports, to enable external access to the HiveMQ cluster nodes.

For the MQTT load balancer, use the following YAML configuration file and create the service the same way we created the HiveMQ replication controller.


kind: Service
apiVersion: v1
metadata:
  name: hivemq-mqtt
  annotations:
    service.spec.externalTrafficPolicy: Local
spec:
  selector:
    app: hivemq-cluster1
  ports:
    - protocol: TCP
      port: 1883
      targetPort: 1883
  type: LoadBalancer

mqtt.yml

Go to Services -> hivemq-mqtt to find the external endpoint of the load balancer that can be used to establish MQTT connections to our HiveMQ broker cluster.

The load balancer for the HiveMQ Web UI requires the use of sticky sessions. Otherwise, the login session information gets lost when browsing the Web UI.
Make sure your individual Kubernetes environment allows the use of sticky sessions.
The following YAML configuration file can be used to leverage the connecting client’s IP for sticky sessions.


kind: Service
apiVersion: v1
metadata:
  name: hivemq-web-ui
spec:
  selector:
    app: hivemq-cluster1
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  sessionAffinity: ClientIP
  type: LoadBalancer

web-ui.yml

You can access the HiveMQ Web UI via this load balancer’s external endpoint with your web browser of choice.

That’s it! We now have a fully functional, accessible HiveMQ cluster inside our Kubernetes environment, which we can dynamically scale with the help of our replication controller.
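For example, scaling the cluster is a single kubectl command against the replication controller defined above:

# Scale the HiveMQ cluster from 3 to 5 nodes
kubectl scale rc hivemq-replica --replicas=5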

Adding a HiveMQ license to your Docker image

If you are a HiveMQ customer and want to use your license with Kubernetes and the HiveMQ DNS Discovery Docker image, you can base64-encode the license and set the resulting string as the HIVEMQ_LICENSE environment variable:
cat <path/to/license.lic> | base64

Encoding HiveMQ license
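One way to make the encoded license available to the pods is to add it to the pod template of the replication controller, for example with kubectl set env. This is only a sketch; existing pods have to be recreated before they pick up the new variable:

# Add the base64-encoded license string to the pod template of the replication controller
kubectl set env rc/hivemq-replica HIVEMQ_LICENSE="<base64-encoded-license-string>"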

Summary and resource list

Using Kubernetes as an example, this blog post showed that the new HiveMQ Docker image, including the HiveMQ DNS discovery plugin, can be leveraged to create a dynamic, auto-scaling HiveMQ cluster. Likewise, the HiveMQ DNS cluster discovery plugin can be used with any other environment that supports DNS-SD, such as AWS Route 53 or Google Cloud DNS. If you’re looking to utilize HiveMQ DNS cluster discovery in your specific environment and are facing difficulties, please do not hesitate to leave us a comment or contact us at contact@hivemq.com

Here are some useful resources:

HiveMQ Recognized with 2 Industry Awards


This past month has been an exciting time for the HiveMQ team. We are thrilled to announce we have won two prestigious awards that recognize the success and growth of our company.

In October, dc-square was accepted into the German Accelerator Tech, an acceleration program that supports German companies to enter the US market. dc-square was one of only 14 companies accepted into the program for Q1/2019. Acceptance into this program will allow us to expand our HiveMQ community and customer base in the US market. In 2019, we will open an office in San Francisco to lead this expansion.

This past week, dc-square was named in Deloitte’s Technology Fast 50 Award 2018, an award that recognizes fast growing German companies. dc-square’s revenue has grown over 1200% in the last 4 years placing us as the sixth fastest growing startup in Germany. Given the vibrant startup community in Germany, we are very honoured to be recognized as one of the leaders.

This past year has been a very exciting time for all of us on the HiveMQ team. We now have over 100 customers using HiveMQ in production. Our customers are using HiveMQ as the core infrastructure platform for building some very cool new digital products. They see HiveMQ and MQTT as the way to create fast, reliable and cost-efficient IoT applications. We are really proud that HiveMQ is being so broadly accepted.

Thank you to all our customers and community that have made 2018 a great year for HiveMQ. We have lots of amazing things planned for 2019 to make HiveMQ and MQTT even more successful.
