
HiveMQ 3.0 is now available!


We are very pleased to announce HiveMQ 3.0!

The third major version of the HiveMQ enterprise MQTT broker brings a lot of improvements regarding performance, stability, configuration and the plugin system. We have advanced all parts of HiveMQ to scale towards 1 million connections and beyond.

See all new features and improvements

Together with the release of HiveMQ 3.0, we have created benchmarks that show the stellar performance of HiveMQ, with high throughput and low latency. The benchmarks demonstrate the performance of HiveMQ in three different but common MQTT scenarios: roundtrip latency, a telemetry use case and a fan-out scenario.

Get your free copy of the benchmarks

Also, as you might have noticed, we have relaunched the HiveMQ website, which is now much cleaner, easier to read and has space for more awesome content about MQTT and HiveMQ.

We hope you like the improvements, the benchmarks and the new website.


The HiveMQ Team


HiveMQ 3.0.1 Released


The HiveMQ Team is pleased to announce the availability of HiveMQ 3.0.1. This is the first maintenance release of the 3.0 series and brings improvements that all customers benefit from:

  • Improved shutdown for deployments with tens of thousands of MQTT connections
  • Improved behaviour of the OnAuthorization callback for short-lived clients (like mosquitto_pub)
  • Improved internal topic tree for better performance
  • HiveMQ no longer publishes LWT messages for client takeovers
  • Reduced chattiness of HiveMQ for cosmetic exception cases
  • Improved performance of $SYS topics
  • Added classloader workarounds for incorrectly packaged plugins
  • Increased default size of the PluginExecutorService
  • Fixed Debian / Ubuntu start scripts

You can download the new HiveMQ version here.

Have a great day,
The HiveMQ Team

JVM Metrics Plugin released


The new holistic monitoring approach of HiveMQ 3 allows hooking all kinds of custom metrics into HiveMQ's internal registry. One of the most popular customer use cases was to include information about the JVM via custom plugins, so that the central HiveMQ monitoring also covers this important system information.

We are now pleased to announce the release of the official HiveMQ JVM Metrics Plugin. This plugin exposes all available information about the Java Virtual Machine to HiveMQ's internal metric registry, so these metrics are included in the centralized HiveMQ monitoring of your choice.

This officially supported plugin includes information about:

  1. Memory usage
  2. Threads
  3. Class Loaders
  4. Garbage Collection
  5. Open Files
  6. Buffer Pool

If you want to learn more about the plugin, check it out in our plugin registry: HiveMQ JVM Metrics Plugin.
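For illustration, a custom plugin could register similar JVM metrics itself by using the Dropwizard metrics-jvm gauge sets. This is only a minimal sketch: the gauge set classes are real Dropwizard classes, while injecting HiveMQ's MetricRegistry into a plugin class this way is an assumption based on HiveMQ 3's Guice-based plugin system, so check the Plugin Developer Guide for the exact service to use.

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.jvm.ClassLoadingGaugeSet;
import com.codahale.metrics.jvm.GarbageCollectorMetricSet;
import com.codahale.metrics.jvm.MemoryUsageGaugeSet;
import com.codahale.metrics.jvm.ThreadStatesGaugeSet;

import javax.inject.Inject;

public class JvmMetricsRegistration {

    private final MetricRegistry metricRegistry;

    @Inject
    public JvmMetricsRegistration(final MetricRegistry metricRegistry) {
        // Assumption: HiveMQ's metric registry can be injected into plugin classes
        this.metricRegistry = metricRegistry;
    }

    public void registerJvmMetrics() {
        // Each gauge set contributes a group of JVM metrics to the registry
        metricRegistry.registerAll(new MemoryUsageGaugeSet());
        metricRegistry.registerAll(new ThreadStatesGaugeSet());
        metricRegistry.registerAll(new GarbageCollectorMetricSet());
        metricRegistry.registerAll(new ClassLoadingGaugeSet());
    }
}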

HiveMQ 3.0.2 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.0.2. This is a planned maintenance release and brings the following improvements:

  • Optimized memory usage for clean session clients
  • Fixed an issue with queued messages when stress-testing HiveMQ
  • Fixed a regression in the config file so that TLS versions can be overwritten again
  • Added additional in-memory stores
  • HiveMQ now writes a heap dump file if it crashes due to insufficient RAM
  • Improved diagnostics mode
  • Performance improvements

You can download the new HiveMQ version here.

Have a great day,
The HiveMQ Team

HiveMQ 3.0.3 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.0.3. This is a planned maintenance release and brings the following improvements:

  • Removed cosmetic warnings when starting HiveMQ in cluster mode on some machines
  • A ScheduledCallback can now implement more than one interface
  • Fixed an exception in the cluster which could occur when a load balancer is in use after failover
  • Fixed rare edge cases which could lead to message reordering for persistent sessions
  • Fixed a file system (ext4) related bug which could lead to premature deletion of files in the persistence folder
  • Stability improvements
  • Performance improvements

You can download the new HiveMQ version here.

Have a great day,
The HiveMQ Team

Using Spring in HiveMQ Plugins


One of the outstanding features of HiveMQ is the ability to customize the broker with custom Java plugin code, which allows you to create virtually any integration you can imagine. Java has one of the best ecosystems in terms of available high-quality third-party libraries, so it is seldom necessary to build complex integration functionality from scratch on your own.

The HiveMQ plugin system uses Guice for dependency injection, which allows you to write clean and reusable code. Many popular Java libraries rely on Spring in an intrusive way, though, so sometimes it is necessary to integrate Spring with Guice for your plugins.

There are a few things you have to take care of to use Spring dependency injection in HiveMQ plugins. As HiveMQ uses Guice for dependency injection, it is necessary to understand when you want Guice to inject something and when you want Spring to handle the injection.

Setting up the Spring ApplicationContext

Since HiveMQ plugins have their own isolated ClassLoader, you have to set up Spring's ApplicationContext with a ClassPathXmlApplicationContext and pass the plugin's ClassLoader to it. Otherwise Spring won't be able to load your XML file.

Example:

ClassLoader classLoader = this.getClass().getClassLoader();
ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext();
ctx.setClassLoader(classLoader);
ctx.setConfigLocation("spring-context.xml");
ctx.refresh();

You also need to set up the ApplicationContext within the configurePlugin() method of your HiveMQPluginModule. This is necessary to make your Spring-managed instances available to Guice, which means you can use Guice to inject objects created by Spring.

Example:

protected void configurePlugin() {
    ClassPathXmlApplicationContext ctx = getApplicationContext();
    SpringIntegration.bindAll(binder(), ctx.getBeanFactory());
    bind(ApplicationContext.class).toInstance(ctx);
}

For this to work you also need an additional dependency in your pom.xml:

...
    <dependency>
        <groupId>com.google.inject.extensions</groupId>
        <artifactId>guice-spring</artifactId>
        <version>4.0</version>
    </dependency>
...

If you want to use Guice to inject instances managed by Spring, you need to use the @Named annotation, because every instance managed by Spring is registered with Guice as a named instance.

Example:

@javax.inject.Inject
@javax.inject.Named("helloWorld")
private HelloWorld helloWorld;

Injecting HiveMQ Plugin Services into Spring instances

If you want to use any of HiveMQ's services (for example the PublishService), you need to let Guice inject them into your instances.

For this to work with an instance managed by Spring, you cannot use @javax.inject.Inject anywhere in the class (because Guice and Spring both process this annotation). Instead you should use @Autowired for instances you want to be injected by Spring and @com.google.inject.Inject for the instances you want to be handled by Guice.

Example:

@org.springframework.beans.factory.annotation.Autowired
private HelloWorld helloWorld;

@com.google.inject.Inject
private PublishService publishService;

The next step is to tell Guice to take care of this instance created by Spring. You can do this easily by adding a line to the configurePlugin() method of your HiveMQPluginModule.

Example:

// you can also get the bean/singleton by class 
// or any other way which is provided by the BeanFactory
Object o = ctx.getBeanFactory().getBean("myBean");
requestInjection(o);

Register Callbacks managed by Spring

To register your callback implementations with HiveMQ's CallbackRegistry, you need to inject them into your PluginEntryPoint using Guice (even if they are managed by Spring), because the PluginEntryPoint must always be instantiated by Guice.

Remember: Spring-managed instances need to be injected with @Named for Guice to recognize them.

Example:

public class MyPlugin extends PluginEntryPoint {

    //won't work here
    //@Autowired

    @javax.inject.Inject
    @javax.inject.Named("myCallback")
    private MyCallback myCallback;

    @javax.annotation.PostConstruct
    public void postConstruct() {
        getCallbackRegistry().addCallback(myCallback);
    }

}

Example Callback managed by Spring:

@Component
public class MyCallback implements OnBrokerStart {
    
    @Autowired
    private HelloWorld helloWorld;
    
    @Override
    public void onBrokerStart() throws BrokerUnableToStartException {
        helloWorld.doStuff();
    }

}

Example Plugin

For a full code example, see GitHub: https://github.com/hivemq/hivemq-spring-example-plugin

HiveMQ 3.1.0 released


We’re pleased to announce the general availability of HiveMQ 3.1.0.

HiveMQ 3.1.0 is a feature release that brings lots of improvements and features.

The most noticeable features and improvements are:

  • Distributed Cluster
  • Environment Variables in the configuration file
  • A systemd startup script
  • Next-Gen Shared Subscriptions
  • Custom Cluster Discovery mechanisms
  • Asynchronous plugin services

In order to upgrade from 3.0, check out the Upgrade Documentation.

See all new features in detail

HiveMQ 3.1.1 Released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.1.1. This is a planned maintenance release and brings the following improvements:

  • Cluster ID is now only visible in the log if the cluster is enabled.
  • Fixed a performance regression for Quality of Service 1 and 2 messages for some MQTT clients.
  • Small Performance and Memory Improvements

You can download the new HiveMQ version here.

Have a great day,
The HiveMQ Team


HiveMQ 3.1.2 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.1.2. This is a bugfix release and fixes the following issue:

  • Fixes a rare issue where QoS 1 and 2 messages could get ignored for some clients in HiveMQ cluster environments and thus could get “lost”.

You can download the new HiveMQ version here.

We strongly recommend upgrading if you are a HiveMQ 3.1.x user with a clustered HiveMQ setup. The HiveMQ 3.0.x versions were not affected by this issue, and HiveMQ single-node installations are not affected either.

Have a great day,
The HiveMQ Team

Deploying elastic HiveMQ clusters with Docker


The previous blog post took a look at HiveMQ in containerized deployments. This post explores how HiveMQ can be used in conjunction with Docker to create a maintainable and scalable deployment of a HiveMQ cluster with a minimal configuration effort.

Cluster Configuration

To build a HiveMQ cluster, multiple containers running HiveMQ will be started on a Docker host. Thanks to HiveMQ’s dynamic multicast discovery these containers can all be started from the same Docker image without even changing the HiveMQ configuration. This allows you to create and start HiveMQ cluster nodes in a matter of seconds.

When building a HiveMQ cluster on Docker, the recommended transport for HiveMQ's cluster is UDP and the recommended discovery is multicast. This discovery utilizes an IP multicast address to find other HiveMQ cluster nodes in the same network. Since it is a dynamic discovery, additional nodes can be added at a later point without any modification to the existing configuration.

You can take a look at the HiveMQ Documentation for other possible methods of cluster node discovery, or even create your own with HiveMQ's extensible plugin system.

Cluster Setup

For two (or more) containers to form a cluster, the configuration files need some small modifications. The cluster needs to be enabled and the bind ports for UDP transport and TCP failure detection need to be set in HiveMQ's XML configuration file. These ports also need to be exposed in the Dockerfile.

mkdir ~/hivemq-cluster
mkdir ~/hivemq-cluster/node1-data
mkdir ~/hivemq-cluster/node2-data
cd hivemq-cluster
vim Dockerfile
vim config.xml

Dockerfile (replace YOUR-HIVEMQ-DOWNLOAD-LINK with your personal download link, which you get from the HiveMQ Download page):

#
# HiveMQ Cluster Dockerfile
#

# Pull base image. The official docker openjdk-8 image is used here.
FROM java:8-jdk

# Install unzip and HiveMQ.
RUN \
    apt-get install -y unzip &&\
    wget --content-disposition "YOUR-HIVEMQ-DOWNLOAD-LINK" &&\
    unzip hivemq-*.zip -d /opt/ &&\
    mv /opt/hivemq-* /opt/hivemq
  
# Define working directory.
WORKDIR /opt/hivemq

# Define the HIVEMQ_HOME environment variable
ENV HIVEMQ_HOME /opt/hivemq

COPY config.xml /opt/hivemq/conf/config.xml

# Expose MQTT port
EXPOSE 1883

# Expose cluster ports for UDP transport and TCP failure detection
EXPOSE 7800
EXPOSE 7900

# Define default command.
CMD ["/opt/hivemq/bin/run.sh"]

config.xml

<?xml version="1.0"?>
<hivemq>

    <cluster>
        <enabled>true</enabled>
        <transport>
            <udp>
                <bind-port>7800</bind-port>
            </udp>
        </transport>
        <failure-detection>
            <tcp-health-check>
                <bind-port>7900</bind-port>
            </tcp-health-check>
        </failure-detection>
    </cluster>

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>

</hivemq>
Then build and run the image with:

docker build -t hivemq-cluster:3.1.2 .
docker run -d -p 11883:1883 --name=hivemq-node-1 -v ~/hivemq-cluster/node1-data:/opt/hivemq/data hivemq-cluster:3.1.2
docker run -d -p 11884:1883 --name=hivemq-node-2 -v ~/hivemq-cluster/node2-data:/opt/hivemq/data hivemq-cluster:3.1.2

Note the different ports and the different folders in the run commands for the containers.

After starting the containers you can connect to your nodes with the following connection details:

node 1

  • Host: IP/hostname of your Docker host (e.g. localhost)
  • Port: 11883

node 2:

  • Host: IP/hostname of your Docker host (e.g. localhost)
  • Port: 11884

You can also check the log of the containers to see that they formed a cluster with the following commands:

docker exec -it hivemq-node-1 tail -f /opt/hivemq/log/hivemq.log
docker exec -it hivemq-node-2 tail -f /opt/hivemq/log/hivemq.log

Adding more nodes

This cluster now consists of two nodes. If you want more nodes in your HiveMQ cluster, you can add additional ones analogous to the already running nodes. Just remember to configure different ports and volume folders for each node.
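For example, a third node could be started like this, with the port mapping and data folder chosen analogously to the first two nodes:

mkdir ~/hivemq-cluster/node3-data
docker run -d -p 11885:1883 --name=hivemq-node-3 -v ~/hivemq-cluster/node3-data:/opt/hivemq/data hivemq-cluster:3.1.2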

Thanks to the dynamic multicast discovery, the nodes will discover each other without any additional configuration, which allows you to scale your cluster as your demand grows.

Networking

In this example the default networking configuration from Docker is used. You could also configure more complex networking between the containers. Even spanning networks between multiple Docker hosts is an option and will be needed if you want to form a HiveMQ cluster over multiple Docker hosts.

You can read more about Docker container networking in the Docker Networking Documentation, or use third-party tools like Pipework or Weave to set up even more complex networking scenarios.

Advanced

Here we started each container manually. If you want to configure multiple containers at once, for example a whole HiveMQ cluster, you can take a look at Docker Compose.

There are also management solutions built on top of Docker that allow you to operate and maintain Docker containers at a larger scale, for example Docker Swarm, Kubernetes or Mesosphere DCOS.

Conclusion

Docker and HiveMQ are a perfect fit when it comes to deploying and maintaining a HiveMQ cluster. This blog post explored how easy it is to set up a full-fledged containerized HiveMQ cluster deployment for development, testing, integration and production purposes.

HiveMQ 3.1.3 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.1.3. This is a scheduled maintenance release and brings the following improvements:

  • Fixed an issue where the start script didn’t handle whitespaces correctly
  • Improved license file handling in case of incorrect license files
  • Improved metric reporting for some metrics
  • Fixed a problem with full offline client queues on systems with imprecise timestamp resolutions
  • Fixed a problem where the plugin system would report too many disconnected clients in some cases
  • Fixed a problem that prevented the HelloWorld plugin from starting correctly
  • The systemd start script now starts with a much higher maximum file limit
  • Removed cosmetic warnings from logging
  • Cluster performance improvements
  • Stability improvements

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.1.x user.

Have a great day,
The HiveMQ Team

MQTT Client Load Balancing with Shared Subscriptions


MQTT is based on the publish / subscribe principle, which means messages can be distributed to any number of subscribers. (If this is news to you, check out the MQTT Essentials.) Sometimes plain pub/sub is not enough, though, and some kind of client load balancing is desired to cover some of the more advanced use cases that involve multiple MQTT subscribers. This blog post introduces the concept of Shared Subscriptions, a mechanism that distributes the messages of a single subscription across the members of a subscription group.

Introducing Shared Subscriptions

Shared Subscriptions are a non-standard MQTT feature of HiveMQ that allows MQTT clients to share the same subscription on the broker. When using standard MQTT subscriptions, each subscribing client receives its own copy of every message. If Shared Subscriptions are used, all clients that share the same subscription within a subscription group receive messages in an alternating fashion. This mechanism is sometimes referred to as client load balancing, since the message load of a single topic is distributed amongst all subscribers.

MQTT clients can subscribe to a shared subscription with standard MQTT mechanisms, which means all common MQTT clients like Eclipse Paho can be used without modifying anything on the client side. Shared Subscriptions use a special topic syntax for subscribing, though.

The topic structure for shared subscriptions is the following:

$share:GROUPID:TOPIC

The shared subscription consists of 3 parts:

  • A static shared subscription identifier ($share)
  • A group identifier
  • The concrete topic subscriptions (may include wildcards)

A concrete example for such a subscriber would be $share:my-shared-subscriber-group:myhome/groundfloor/+/temperature.

Shared Subscriptions in-depth

When using Shared Subscriptions, each subscription group can conceptually be imagined as a virtual client that acts as a proxy for multiple real subscribers at once. HiveMQ selects one subscriber of the group and delivers the message to this client. By default a round-robin approach is used. The following picture demonstrates the principle:

[Figure: a published message is delivered to exactly one subscriber of the shared subscription group]

There is no limitation on the number of Shared Subscription Groups in a HiveMQ deployment. So for example the following scenario would be possible:

[Figure: two shared subscription groups with two subscribers each, both groups subscribed to the same topic]

In this example there are two different groups with 2 subscribing clients in each shared subscription group. Both groups have the same subscription but have different group identifiers. When a publisher sends a message with a matching topic, one (and only one) client of each group receives the message.

Shared Subscriptions Use Cases

There are many use cases for shared subscriptions, especially in high-scalability scenarios. Among the most popular use cases are:

  • Client Load Balancing for MQTT clients which can’t handle the load on subscribed topics on their own.
  • Worker (backend) applications which ingest MQTT streams and need to be scaled out horizontally.
  • HiveMQ intra-cluster node traffic should be relieved by optimizing subscriber node-locality for incoming publishes.
  • QoS 1 and 2 are used for their delivery semantics but Ordered Topic guarantees are not needed for the use cases.
  • There are hot topics with higher message rate than other topics in the system and these topics are the scalability bottleneck.

How to subscribe with Shared Subscriptions?

Subscribing clients with Shared Subscriptions is straightforward. The following Java code (which uses the Eclipse Paho Java client) shows two MQTT clients that subscribe to the same subscription group:

MqttClient client = new MqttClient(
    "tcp://broker.mqttdashboard.com:1883", //URI
    MqttClient.generateClientId(), //ClientId
    new MemoryPersistence()); //Persistence
client.connect();
client.subscribe("$share:my-group:#", 1);

MqttClient client2 = new MqttClient(
    "tcp://broker.mqttdashboard.com:1883", //URI
    MqttClient.generateClientId(), //ClientId
    new MemoryPersistence()); //Persistence
client2.connect();
client2.subscribe("$share:my-group:#", 1);

Now both MQTT clients share a root wildcard subscription (they belong to the virtual group my-group), and due to the root wildcard subscription each would receive half of all MQTT messages that are sent over the MQTT broker.

MQTT clients can join and leave the subscription group at any time. If a third client joined the group, each client would receive one third of all MQTT messages.
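Publishers do not need any changes to work with Shared Subscriptions. A minimal sketch with the same Paho client: a regular publish is enough, and the broker delivers each message to one member of the group.

MqttClient publisher = new MqttClient(
    "tcp://broker.mqttdashboard.com:1883", //URI
    MqttClient.generateClientId(), //ClientId
    new MemoryPersistence()); //Persistence
publisher.connect();
// Delivered with QoS 1 to exactly one member of "my-group" per message
publisher.publish("myhome/groundfloor/livingroom/temperature", "21.5".getBytes(), 1, false);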

You can read about all semantics of shared subscriptions in the official HiveMQ Documentation.

Scaling MQTT Subscribers with Shared Subscriptions

The Shared Subscription mechanism is a fantastic approach to integrate backend systems via plain MQTT (for example if it's not feasible to use HiveMQ's plugin system and dynamic scaling is needed). Shared subscribers can be added at any time as soon as they're needed. An arbitrary number of “worker” applications can be used to consume the messages of the shared subscription group.

Essentially you get work distribution in a push fashion, since the messages are pushed by the broker to the shared subscription clients.

Shared Subscriptions are perfect for scaling and load balancing MQTT clients. When using a HiveMQ cluster, there are additional advantages in terms of latency and scalability since message routing is optimized internally. Learn more about Shared Subscriptions and HiveMQ clusters in the official documentation.

Conclusion

We have seen that Shared Subscriptions are a great way to distribute messages across different MQTT subscribers with standard MQTT mechanisms, which means you can add MQTT client load balancing without any proprietary additions to your MQTT clients. This is especially useful for backend systems or “hot topics” that could quickly overwhelm a single MQTT client.

Do you already use or plan to use Shared Subscriptions? Let us know in the comments!

Oh, and by the way: Shared Subscriptions may come as an official MQTT feature in the next MQTT specification, so they are definitely one of the more interesting concepts you should know about when deploying MQTT.

RESTful HTTP APIs with HiveMQ


These days no enterprise software is complete without providing a way to integrate with other components in a software landscape. While MQTT is an awesome way to integrate backend systems, this is often achieved by using HTTP APIs, sometimes also called “webservices”.

Of course, with HiveMQ the integration with other systems can be done purely in Java using the open source plugin system. HiveMQ also provides a RestService which allows you to create a custom HTTP API or even a REST API which can be consumed by other applications.

This blog post gives an introduction to the latter and shows how you can add an HTTP API to HiveMQ with only a few lines of code.

RESTService

Plugin developers can utilize HiveMQ's RestService to create accessible HTTP APIs directly within HiveMQ. If you are getting started with HiveMQ plugins and plugin services, take a look at the Plugin Developer Guide to get a jumpstart with HiveMQ plugins.

The RestService enables you to serve HTTP content directly from HiveMQ, namely:

  • Multiple HTTP endpoints
  • JAX-RS based HTTP/REST-APIs
  • Servlets and ServletFilters

But HiveMQ not only gives plugin developers a simple way to add such APIs, it also makes all other plugin services available for use within these APIs. In the following example code these services are used to get the subscriptions of all currently known MQTT clients. There is a lot more you can do with those services; see the Plugin Developer Guide for a list of them.

HTTP Listeners

HiveMQ can serve different HTTP endpoints called listeners. These listeners can be added to HiveMQ either by configuring them in HiveMQ's config file or by declaring them programmatically.

You can serve different content on different listeners or on all available listeners. You can even serve some content on a specific listener and some other content on all listeners.

Example HiveMQ configuration file with an HTTP listener on port 8080:

<?xml version="1.0"?>
<hivemq>

    <listeners>
        <tcp-listener>
            <port>1883</port>
            <bind-address>0.0.0.0</bind-address>
        </tcp-listener>
    </listeners>

    <rest-service>
        <listeners>
            <http-listener>
                <name>default</name>
                <port>8080</port>
                <bind-address>0.0.0.0</bind-address>
            </http-listener>
        </listeners>
    </rest-service>

</hivemq>

Adding a listener on all interfaces with port 8888 programmatically:

restService.addListener(new HttpListener("listener-name", "0.0.0.0", 8888));

By default HiveMQ will serve JAX-RS Applications or JAX-RS Resources under the root path / and Servlets under the root path /servlet.

HTTP-API

HiveMQ utilizes the JAX-RS API to provide a simple and well-known way of implementing web services with Java. The easiest way to create such a service is to create a JAX-RS resource and make it known to the RESTService.

Adding a Resource to the RESTService:

restService.addJaxRsResources(ExampleResource.class);

Example API implementation:

@Path("/example")
@Produces("application/json")
public class ExampleResource {

    private final BlockingSubscriptionStore subscriptionStore;

    @Inject
    public ExampleResource(final BlockingSubscriptionStore subscriptionStore) {
        this.subscriptionStore = subscriptionStore;
    }

    @GET
    @Path("/subscriptions")
    public Response getSubscriptions() {
        final Multimap<String, Topic> subscriptions = subscriptionStore.getSubscriptions();
        return Response.ok(subscriptions.asMap()).build();
    }
}

These few lines of code create an HTTP API endpoint at http://broker-ip:8080/example/subscriptions/ which returns a JSON response containing a list of clients and their subscriptions on an HTTP GET request.
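Assuming the HTTP listener on port 8080 from the configuration above, the endpoint can be tested with any HTTP client, for example:

curl http://broker-ip:8080/example/subscriptions/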

Servlets

HiveMQ's RESTService can also serve Servlets directly from HiveMQ. An example Servlet which serves an HTML page with a list of all connected MQTT clients could look like this:

public class ExampleServlet extends HttpServlet {

    private final BlockingClientService clientService;

    @Inject
    public ExampleServlet(final BlockingClientService clientService) {
        this.clientService = clientService;
    }

    public void doGet(final HttpServletRequest request, final HttpServletResponse response) throws IOException {

        final Set<String> connectedClients = clientService.getConnectedClients();
        final PrintWriter writer = response.getWriter();

        writer.write("<ul>");
        for (String clientId : connectedClients) {
            writer.write("<li>" + clientId + "</li>");
        }
        writer.write("</ul>");
    }
}

It can be added to RESTService by calling:

restService.addServlet(ExampleServlet.class, "/clients");

It can then be viewed in the browser at the URL http://broker-ip:8080/servlet/clients.

Async HTTP-API

HiveMQ's plugin services also offer an async variant of every service, so it makes a lot of sense to serve an HTTP API in an async fashion as well. By using the async APIs you get better scalability, and the general resource consumption of your plugin will be lower than with the blocking APIs.

To show how the async APIs can be used together with JAX-RS, the following implements the same example in an asynchronous way.

JAX-RS example with async API:

@Path("/example")
@Produces("application/json")
public class ExampleResource {

    private final AsyncSubscriptionStore subscriptionStore;

    @Inject
    public ExampleResource(final AsyncSubscriptionStore subscriptionStore) {
        this.subscriptionStore = subscriptionStore;
    }

    @GET
    @Path("/subscriptions")
    public void getSubscriptions(@Suspended final AsyncResponse asyncResponse) {

        //since the information could be extremely large and will be collected from all cluster nodes
        //choose a very large timeout to handle worst-case delays.
        asyncResponse.setTimeout(5, TimeUnit.SECONDS);

        final ListenableFuture<Multimap<String, Topic>> future = subscriptionStore.getSubscriptions();

        Futures.addCallback(future, new FutureCallback<Multimap<String, Topic>>() {
            @Override
            public void onSuccess(final Multimap<String, Topic> result) {
                final Response response = Response.ok(result.asMap()).build();
                asyncResponse.resume(response);
            }

            @Override
            public void onFailure(final Throwable t) {
                asyncResponse.resume(t);
            }
        });
    }
}

Async Servlet

Of course HiveMQ supports the current Servlet 3.0 API which provides asynchronous execution of a request.

Servlet example with async API:

public class ExampleServlet extends HttpServlet {

    private final AsyncClientService clientService;

    @Inject
    public ExampleServlet(final AsyncClientService asyncClientService) {
        this.clientService = asyncClientService;
    }

    public void doGet(final HttpServletRequest request, final HttpServletResponse response) throws IOException {

        final AsyncContext asyncContext = request.startAsync(request, response);

        //since the information could be extremely large and will be collected from all cluster nodes
        //choose a very large timeout to handle worst-case delays.
        asyncContext.setTimeout(5000);

        final ListenableFuture<Set<String>> future = clientService.getConnectedClients();

        Futures.addCallback(future, new FutureCallback<Set<String>>() {

            @Override
            public void onSuccess(final Set<String> result) {
                try {
                    final PrintWriter writer = asyncContext.getResponse().getWriter();

                    writer.write("<ul>");
                    for (String clientId : result) {
                        writer.write("<li>" + clientId + "</li>");
                    }
                    writer.write("</ul>");

                    asyncContext.complete();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onFailure(final Throwable t) {
                final HttpServletResponse httpResponse = (HttpServletResponse) asyncContext.getResponse();
                try {
                    httpResponse.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                asyncContext.complete();
            }
        });
    }
}

Example Project

For more example code there is a rest-example-plugin, which contains the blocking as well as the asynchronous examples and also shows how to add subscriptions via the HTTP API.

The example project is available on Github.

Conclusion

HTTP APIs are a good and well-known way to integrate different software systems, especially if these systems do not support MQTT (yet). With the RestService, HiveMQ provides an easy option for everyone who needs their MQTT broker to communicate with backend systems that don't support MQTT, in just a few lines of code and while still providing scalability in a non-blocking fashion.

HiveMQ 3.1.4 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.1.4. This is an important maintenance release and brings the following improvements:

  • Fixed a persistence leak in the client session persistence that caused growing persistence stores
  • Fixed an issue that could cause wildcard subscriptions to be lost in the cluster
  • Fixed the subscription count metric
  • Fixed a cosmetic logging error that could be thrown when shutting down HiveMQ
  • Fixed a potential memory leak for QoS 1 and 2 messages
  • Fixed a race condition that could cause subscriptions to be lost on very fast client takeovers
  • Fixed a cosmetic exception in the cluster when a node became unavailable
  • Improved message routing in cases where a cluster node
  • Cluster stability improvements
  • Stability improvements
  • Minor performance improvements

You can download the new HiveMQ version here.

We strongly recommend upgrading if you are a HiveMQ 3.1.x user.

Have a great day,
The HiveMQ Team

HiveMQ 3.1.5 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.1.5. This is a maintenance release and brings the following improvements:

  • Fixed an issue when removing subscriptions in the cluster
  • Improved cluster performance for adding huge amounts of subscriptions at the same time
  • Improved Message Retries in Split Brain scenarios
  • Improved Shared Subscription Handling for topic edge cases
  • Improved logging for some rare errors

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.1.x user.

Have a great day,
The HiveMQ Team


Introducing the HiveMQ InfluxDB Monitoring Plugin


Monitoring a production MQTT broker system is very important to recognize problems in a timely manner, ideally before these problems cause headaches in production. HiveMQ is an enterprise MQTT broker that is loved by operations and DevOps teams worldwide due to its simplicity and ease of monitoring.

InfluxDB users are going to love our new native integration with the scalable time-series database. HiveMQ is now able to integrate directly with InfluxDB via the free off-the-shelf HiveMQ InfluxDB Plugin. You can publish the built-in HiveMQ metrics, as well as your own business metrics from your plugins, to InfluxDB with ease and create pretty dashboards with Grafana on top.

You can download the InfluxDB HiveMQ plugin via the plugin directory or build it yourself from source.

Download the plugin

Two free HiveMQ plugins you don’t want to miss: Client Status and Retained Messages Query Plugin


Our friends at ART+COM just released two of their HiveMQ plugins to the public. They are available on Github and may be useful to many MQTT deployments that need to utilize HTTP as an information channel.

HiveMQ Client Status Plugin

This plugin is available at Github.

This plugin exposes an HTTP GET endpoint to retrieve the currently connected and disconnected clients with their respective client identifiers and IP addresses. It is useful if you need to quickly check which clients are connected to the broker.

It creates a new HTTP GET endpoint /clients and returns an HTTP response like this:

[ {
  "clientId" : "another client",
  "ip" : "127.0.0.1",
  "connected" : true
}, {
  "clientId" : "my-inactive-client",
  "connected" : false
} ]
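Assuming an HTTP listener on port 8080, the endpoint can be queried with any HTTP client, for example:

curl http://broker-ip:8080/clients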

HiveMQ Retained Message Query Plugin

This plugin is available at Github.

This plugin allows you to query retained messages from HiveMQ via HTTP instead of MQTT. You can read about the motivation for writing this plugin by the folks at ART+COM in their blog.

In short, this plugin exposes an HTTP endpoint that expects a POST request with a query.

An example query could look like this:

{
  "topic": "path/of/the/topic",
  "depth": 0,
  "flatten": false
}

A response would look like this:

{
  "topic": "path/of/the/topic",
  "payload": "23",
  "children": [
	{
	  "topic": "path/of/child/topic",
	  "payload": "\"foo\""
	}
  ]
}
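Such a query can be sent with any HTTP client. The endpoint path /query below is only a placeholder for illustration; check the plugin's README for the actual path and port of your deployment:

curl -X POST -H "Content-Type: application/json" \
     -d '{"topic": "path/of/the/topic", "depth": 0, "flatten": false}' \
     http://broker-ip:8080/query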

If you're an MQTT.js user, there is a small library called mqtt-topping by the ART+COM people that adds syntactic sugar for this plugin.

We hope you like the plugins as much as we do. Let us know in the comments what you think!

HiveMQ 3.1.6 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.1.6. This is a maintenance release and brings the following improvements:

  • Fixed an issue that prevented HiveMQ from stopping under rare circumstances when custom plugins throw exceptions while bootstrapping
  • Fixed the use of plugin metric annotations for classes with non-public constructors
  • Fixed a build date warning when using HiveMQ as Windows Service
  • Improved stability for rare edge cases that violate the protocol specification
  • Fixed cosmetic errors that could occur when stopping HiveMQ under high load
  • Fixed an issue that could lead to high memory usage when using the cluster and nodes join with an old state
  • Improved the performance when clients subscribe to a large number of topics in the same SUBSCRIBE message
  • Silenced a cosmetic log message that occurred when a client disconnected while receiving retained messages

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.1.x user.

Have a great day,
The HiveMQ Team

What’s new in HiveMQ 3.2


We are pleased to announce the release of HiveMQ 3.2. This version of HiveMQ is the most scalable HiveMQ version ever and brings massive improvements in cluster performance as well as single-node performance. Although there are massive improvements under the hood, HiveMQ 3.2 is a drop-in replacement for HiveMQ 3.1. If you are already using HiveMQ in clustered mode, you can use our rolling upgrade mechanism, so your MQTT deployment has no downtime.

While HiveMQ 3.2 is primarily a performance and stability release with tons of improvements under the hood, there are of course many new features you’ll love:

Increased Cluster Performance

HiveMQ introduced its distributed and linearly scalable cluster in version 3.1. The new version improves latency between cluster nodes and reduces overall traffic significantly. The routing intelligence is more sophisticated than ever, and advanced batching and traffic reduction mechanisms make sure you get the best performance possible. Many hundreds of thousands of messages per second between an arbitrary number of broker nodes are possible out of the box with the default configuration. Dockerized deployments are of course supported, as are classic VM or bare-metal deployments.

The best thing is that rolling upgrades are of course supported, and you can even operate clusters with HiveMQ versions 3.1 and 3.2 deployed simultaneously. We take rolling upgrades seriously.

PROXY Protocol

HiveMQ now supports the PROXY protocol, which was pioneered by the HAProxy load balancer and is available in many load balancers nowadays. This protocol allows the load balancer to forward information about the original MQTT client, like its IP address or port, when the TCP connection is proxied by the load balancer.
Both version 1 and version 2 are supported by HiveMQ transparently.

X.509 client certificate information, such as the Common Name of the certificate, can also be forwarded by the load balancer with HiveMQ's PROXY protocol implementation. Besides the officially specified PROXY protocol information, HiveMQ supports custom PROXY protocol extensions: if your load balancer can forward custom information via TLVs, HiveMQ can read it. The best part is that you can utilize all PROXY protocol information, including custom information, in your custom plugins.

Plugin System Extensions

The new version brings new features for the free and open source plugin system that plugin developers are going to love.

Callbacks can now determine which listener an MQTT client connected to. If you need to know whether a connection was made over SSL or websockets, you are now finally able to do this without any hassle.

It's now also possible to disconnect MQTT clients programmatically. You can kick out an MQTT client forcefully on the server side, and you can also prevent sending out LWT messages when the client gets disconnected. Plugin developers can of course disconnect an MQTT client that is connected to another cluster node, so this feature is fully available in clusters, as most HiveMQ plugin features are.

Plugins that utilize HiveMQ's internal HTTP server via the RestService have a lot more options that are useful for sophisticated JAX-RS scenarios. You can now add custom ExceptionMappers, ContextResolvers, MessageBodyWriters and MessageBodyReaders.

We listened and introduced a new callback that gives you more control over how topic subscriptions are handled. While the popular OnSubscribeCallback allows you to inspect the whole MQTT SUBSCRIBE packet, there was no way to change the return code in the MQTT SUBACK message for each individual subscription (since this was typically done with authorization). Setting custom return codes for subscriptions is now possible with the new OnTopicSubscriptionCallback, which even allows overriding subscription return codes from the authorization logic. Developers now get fine-grained control over subscriptions.

Increased Performance

All HiveMQ users profit from the improvements under the hood in this release. We were able to make the memory footprint even lower. Users will notice an even better overall performance and lower latencies in many cases.

Innovative active anti-amplification mechanisms are incorporated in this release, which reduce memory consumption in huge fan-out cases and improve performance significantly for large messages.

Linux users now automatically utilize a new epoll integration which uses the edge-triggered interface instead of the level-triggered interface. This is beneficial in high-scalability scenarios with lots of connected MQTT clients. It is completely transparent and you don't need to change any settings to profit from this improvement; edge-triggered epoll is used automatically if it is available.

Monitoring Made Easy

HiveMQ now includes 350+ metrics that allow you to monitor virtually every aspect of the MQTT broker, including detailed cluster metrics that help you find issues in the production environment before they become problems.

The most popular monitoring plugins, the JVM Metrics plugin and the JMX plugin, are now part of the HiveMQ core distribution and installed by default, to make monitoring as easy as possible for you.

The logging configuration is now auto-reloadable by default, so if you need to change the log level at runtime you can do so easily by modifying the included logback.xml. No restart is required to change the log level!
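For example, switching to DEBUG logging only requires editing the level attribute in logback.xml; the change is picked up automatically while HiveMQ keeps running. A minimal sketch (the shipped logback.xml contains more appenders, and the appender name below is only illustrative):

<configuration scan="true" scanPeriod="60 seconds">
    ...
    <root level="DEBUG">
        <appender-ref ref="FILE"/>
    </root>
</configuration>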

Reloadable Keystores

An often requested feature was auto-reloadable Java keystores and truststores. We listened, and now you can modify your keystores and truststores at runtime. If your SSL certificate expires or you need to change it, no broker restart is required anymore. The broker picks up the changes as soon as the files change on the file system.

Additional Features

There are many small improvements and new features in this release:
  • The Diagnostic Mode now logs all relevant metrics, so it's even easier for you to get help from our awesome support team
  • Configuring an external IP address for cluster nodes is now possible. This is useful if cluster nodes are behind NAT or in cloud environments where your external bind address is not visible (e.g. AWS)
  • Additional shared subscription syntax: it's now possible to replace the colons ( : ) in the shared subscription syntax with slashes ( / ); see the example below this list
  • We included a diagnostics startup script that makes it even easier to start HiveMQ in diagnostic mode
  • Encrypted communication with TLS is now possible between cluster nodes
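As an example of the additional shared subscription syntax mentioned above, both of the following subscriptions address the same group and topic filter:

$share:my-group:myhome/groundfloor/+/temperature
$share/my-group/myhome/groundfloor/+/temperature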

 

In order to upgrade to HiveMQ 3.2 from HiveMQ 3.1, take a look at our Upgrade Guide.

Download HiveMQ 3.2 now

HiveMQ 3.1.7 released


The HiveMQ team is pleased to announce the availability of HiveMQ 3.1.7. This is a maintenance release and brings the following improvements:

  • Fixed an issue that could prevent CONNACK messages from being sent in the cluster under rare circumstances
  • Fixed a potential memory leak caused by misbehaving MQTT clients in the cluster
  • Fixed a problem with rolling upgrades that could occur under some circumstances
  • Fixed a problem caused by authorization plugins that could prevent HiveMQ from sending out messages
  • Fixed a problem that allowed clients to publish to $SYS topics under some rare circumstances
  • Fixed a problem with wildcard subscriptions that were not replicated in the cluster under some circumstances
  • Fixed an issue that would send out queued messages with a wrong QoS level under some rare circumstances in the cluster
  • Fixed an issue that could cause messages to be lost with shared subscriptions when clients disconnect while delivering a message
  • Fixed cluster message metric names for monitoring purposes

You can download the new HiveMQ version here.

We recommend upgrading if you are a HiveMQ 3.1.x user.

Have a great day,
The HiveMQ Team
