Websockets Configuration

Overview

Chat for Jira Service Management Server/Data Center uses websockets as the communication method between the chat widget and Jira, providing instant delivery of messages while minimizing bandwidth usage. In environments where websockets are unavailable, the connection to Jira gracefully downgrades (with a warning in the browser's console) to a simple periodic polling mechanism, which is guaranteed to work without any special setup. Polling also delivers messages nearly instantly, but it consumes more network bandwidth and may, in extreme cases, exhaust Jira's connection pool, so it is preferable to configure your environment for websockets.

Setting up websockets in a production environment requires some configuration, because the Jira backend is typically located behind a proxy server. The required proxy configurations for Nginx, Apache and AWS ELB are described below. If you are using a different proxy, please consult its documentation regarding websockets forwarding.

Proxy Configuration

Nginx

In addition to the usual Jira setup for an Nginx proxy, you will need to configure Nginx to enable websockets forwarding by adding the proxy_http_version, Upgrade and Connection directives shown below:

location / {
        proxy_pass http://jirahost;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
}

In the example above, http://jirahost is the Jira backend URL - replace it with the actual address of your Jira instance.

To narrow down websockets tunnelling, you can use /com-spartez-support-chat/ws/ as the "location" and only provide websockets support for that path - /com-spartez-support-chat/ws/ is the base address of all of the chat's websocket endpoints within your Jira. However, it is also fine to enable this support globally for all proxied addresses in Jira.
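
For example, a dedicated location block for the chat's websocket endpoints could look roughly like this (a sketch - http://jirahost again stands for your actual Jira backend address):

location /com-spartez-support-chat/ws/ {
        proxy_pass http://jirahost;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
}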

In addition to the above, it is crucial that the proxy keeps the websocket connection open even when it is idle. This is controlled by the proxy_read_timeout directive, whose default is 60 seconds. Chat keeps the websocket open by sending a "keepalive" message every 30 seconds, so do not set the timeout to less than 30 seconds. For example, you can set it to 10 minutes:

proxy_read_timeout 600;
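
The directive can be placed in the http, server or location context; for instance, inside the location block from the example above it could look like this (a sketch):

location / {
        # ... websocket forwarding directives from the example above ...

        # keep idle websocket connections (carrying only keepalives) open for up to 10 minutes
        proxy_read_timeout 600;
}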


For more information about proxying websockets by Nginx, go to its documentation: http://nginx.org/en/docs/http/websocket.html.

Apache

In addition to the usual Jira setup for an Apache proxy, you will need to configure Apache to enable websockets forwarding.

Firstly, you need to enable the mod_proxy_wstunnel module:

sudo a2enmod proxy_wstunnel
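
On Debian-based systems, a2enmod should also enable the required mod_proxy dependency automatically; the change takes effect after Apache is restarted, for example (assuming a systemd-based distribution):

sudo systemctl restart apache2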

Then, set up Apache for websockets forwarding:

ProxyRequests Off
ProxyPass "/com-spartez-support-chat/ws/" "ws://jirahost/com-spartez-support-chat/ws/"

In the example above, ws://jirahost is the Jira backend URL - replace it with the actual address of your Jira instance. The /com-spartez-support-chat/ws/ path is the base address of all of the chat's websocket endpoints within Jira - you should not change this part.

If you have multiple ProxyPass directives, the ProxyPass statement from the config above should be the first one in your configuration, so that it does not get overridden by a more general rule.
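
For example, with a typical reverse-proxy setup the ordering could look like this (a sketch - ws://jirahost and http://jirahost are again placeholders for your Jira backend):

ProxyRequests Off

# websocket endpoints first, so that the general rule below does not catch them
ProxyPass "/com-spartez-support-chat/ws/" "ws://jirahost/com-spartez-support-chat/ws/"

# all other Jira traffic
ProxyPass "/" "http://jirahost/"
ProxyPassReverse "/" "http://jirahost/"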

In addition to the above, it is crucial that the proxy keeps the websocket connection open even when it is idle. This is controlled by the ProxyWebsocketIdleTimeout directive. Chat keeps the websocket open by sending a "keepalive" message every 30 seconds, so do not set the timeout to less than 30 seconds. For example, you can set it to 10 minutes:

ProxyWebsocketIdleTimeout 600


For more information about proxying websockets by Apache, go to its documentation: https://httpd.apache.org/docs/trunk/mod/mod_proxy_wstunnel.html

Amazon AWS Elastic Load Balancer

If you have your Jira hosted on Amazon AWS and you are using their Elastic Load Balancer (ELB), follow these directions to set up websockets support (a scripted AWS CLI sketch of the same steps follows the list):

  1. Set up JIRA for ELB, as documented in these instructions.
  2. When selecting the load balancer to use, pick the "Network" load balancer (not the "Application" or "Classic" one). The "Network" load balancer is the one that natively supports websockets.
  3. Set up the security group for your Jira AWS instance so that the port on which Jira listens for connections from the load balancer is accessible.
  4. Make sure that the health checks of the targets used in the load balancer's target group pass (the instance is considered "healthy").
  5. Make sure that your Jira base URL points to the DNS name of your load balancer.
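
A rough AWS CLI equivalent of steps 2-5 is sketched below; the resource names, the subnet, VPC, security group and instance identifiers, the listener port and the assumption that Jira listens on port 8080 are all placeholders to adapt to your environment:

# 2. create a Network Load Balancer
aws elbv2 create-load-balancer --name jira-nlb --type network \
    --subnets subnet-0123456789abcdef0

# 3. allow the Jira port in the instance's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8080 --cidr 10.0.0.0/16

# 4. create a target group, register the Jira instance and check target health
aws elbv2 create-target-group --name jira-targets --protocol TCP --port 8080 \
    --vpc-id vpc-0123456789abcdef0 --target-type instance
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-0123456789abcdef0
aws elbv2 describe-target-health --target-group-arn <target-group-arn>

# forward incoming traffic to the target group
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> --protocol TCP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# 5. afterwards, point the Jira base URL at the DNS name reported for the load balancer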

Data Center

As per the Atlassian Data Center installation documentation, the suggested load balancer is Apache. The documentation provides a sample load balancer configuration, which however is not sufficient for enabling websockets connections to your cluster - with the sample configuration, websockets requests fail with error 404 and chat communication falls back to short polling.

To enable websockets connections, you need to apply the following changes to the sample configuration (the added entries are the LoadModule line, the balancer://jiracluster-ws block and the websocket ProxyPass directive):

LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

<VirtualHost *:80>
        ProxyRequests off
 
        ServerName MyCompanyServer
 
        <Proxy balancer://jiracluster>
                # JIRA node 1
                BalancerMember http://node1:8080 route=node1
                # JIRA node 2  Commented pending node installation
                # BalancerMember http://node2:8080 route=node2
 
                # Security - we aren't blocking anyone, but this is the place to make those changes
                Order Deny,Allow
                Deny from none
                Allow from all
 
                # Load Balancer Settings
                # We are not really balancing anything in this setup, but need to configure this
                ProxySet lbmethod=byrequests
                ProxySet stickysession=JSESSIONID
        </Proxy>
 
        <Proxy balancer://jiracluster-ws>
                # JIRA node 1
                BalancerMember ws://node1:8080 route=node1
                # JIRA node 2  Commented pending node installation
                # BalancerMember ws://node2:8080 route=node2
 
                # Security - we aren't blocking anyone, but this is the place to make those changes
                Order Deny,Allow
                Deny from none
                Allow from all
 
                # Load Balancer Settings
                # We are not really balancing anything in this setup, but need to configure this
                ProxySet lbmethod=byrequests
                ProxySet stickysession=JSESSIONID
        </Proxy>

        # Here's how to enable the load balancer's management UI if desired
        <Location /balancer-manager>
                SetHandler balancer-manager
 
                # You SHOULD CHANGE THIS to only allow trusted ips to use the manager
                Order deny,allow
                Allow from all
        </Location>
 
        # Don't reverse-proxy requests to the management UI
        ProxyPass /balancer-manager !

        # Proxy all websockets requests to the jiracluster-ws load balancer
        ProxyPass /com-spartez-support-chat/ws/ balancer://jiracluster-ws/com-spartez-support-chat/ws/
        
        # Reverse proxy all other requests to the JIRA cluster
        ProxyPass / balancer://jiracluster/
        ProxyPreserveHost on
</VirtualHost>

Crucial parts of the modified config are:

  • mod_proxy_wstunnel module must be loaded and enabled

  • an additional load balancer for websockets is added - in the sample above it is called balancer://jiracluster-ws. This balancer is almost identical to the original one, but the protocols are changed from http to ws. After adding or removing a cluster node, it has to be added to or removed from both the original and the "jiracluster-ws" load balancer configurations, as shown in the snippet after this list.

  • websocket requests (the ones going to /com-spartez-support-chat/ws/) are passed to the added load balancer
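
For example, bringing the second (currently commented out) node from the sample configuration online means adding its BalancerMember entry to both balancer blocks:

# in <Proxy balancer://jiracluster>
BalancerMember http://node2:8080 route=node2

# in <Proxy balancer://jiracluster-ws>
BalancerMember ws://node2:8080 route=node2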