Reporting for Confluence Performance Baseline
This document describes the baseline performance of Reporting for Confluence. It serves as a reference for the application's performance in its current state with the setup outlined below. Please note that this is not a guide for application performance tuning.
Approach
We designed the test to establish a performance baseline for Reporting for Confluence using a setup similar to the DCAPT (Data Center App Performance Toolkit) framework used by Atlassian.
This is a dynamic report that will change as we receive new data from users and as we introduce other variables into the baseline test. We appreciate you sharing any baseline-related experience you have now or gather in the future - please go ahead and email us. Your feedback, together with continuous baseline reviews by our dev teams, will keep this report current.
Test Environment Setup
We run our tests in an AWS environment deployed with the AWS Quick Start for Confluence Data Center. AWS allows us to scale up and modify components quickly as needed. The specifications are as follows:
Below are the specifications of the servers running Confluence. The cluster scales between 1, 2, and 4 nodes during the test.
| Item | Specifications | No. of nodes |
|------|----------------|--------------|
| Confluence instance | Confluence 7.4.9 on EC2 m5.2xlarge (8 vCPU, 32 GiB RAM) | 1 - 4 |
| Database | RDS db.m5.xlarge (4 vCPU, 16 GiB RAM) | 1 |
| Plugin installed | Reporting for Confluence v6.15.27 | |
Below are the specifications of the test runner, which simulates virtual users accessing the Confluence server. They can be adjusted depending on how many concurrent users we plan to run.
| Item | Specifications |
|------|----------------|
| Test runner environment | EC2 c5.2xlarge (8 vCPU, 16 GiB RAM) |
Technology
Our team uses Taurus to configure and run automated performance tests. Taurus is an open-source automation framework that wraps testing tools such as JMeter, Gatling, and Selenium, and consolidates their results into clear reports.
In this test, we use the Selenium executor to run functional tests with Selenium WebDriver in the Chrome browser. A JUnit test runner asserts whether each page is rendered correctly.
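For reference, a Taurus test of this shape is described in a YAML configuration. Below is a minimal sketch of how the Selenium executor and JUnit runner fit together; the scenario name and script path are hypothetical placeholders, not our actual test files:

```yaml
# Minimal Taurus configuration sketch (scenario name and script path are hypothetical).
execution:
- executor: selenium          # run the scenario through the Selenium executor
  runner: junit               # force the JUnit test runner for assertions
  scenario: reporting-render

scenarios:
  reporting-render:
    script: tests/ReportingRenderTest.java  # hypothetical JUnit + WebDriver test class driving Chrome
```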
Methodology
We ran three separate phases based on the number of nodes: 1, 2, and 4. In each phase, we executed the scenarios sequentially while varying the number of concurrent users. Two variables therefore affect the time to render: the number of nodes and the number of concurrent users.
Our main objective was to benchmark Reporting performance across these configurations: specifically, how long each Reporting scenario takes to render and how the number of nodes affects performance.
For the purposes of this test, we doubled the number of concurrent users on each test run, taking 5 concurrent users on a single node as the baseline.
| Parameters | Value |
|------------|-------|
| Ramp-up | 1m |
| Iterations | 1000 |
| Hold-for | 15m |
| Concurrent users | 5, 10, 20, 40, 80 |
Descriptions:
Concurrent users - target number of concurrent virtual users.
Ramp-up - time to ramp up to the target concurrency.
Hold-for - time to hold the target concurrency.
Iterations - limit on the number of scenario iterations.
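These parameters map directly onto Taurus execution-level options. A minimal sketch of the load profile for the single-node baseline run (5 concurrent users; the scenario name is a placeholder):

```yaml
# Load profile for the baseline run (scenario name is hypothetical).
execution:
- executor: selenium
  runner: junit
  concurrency: 5      # doubled on each subsequent run: 5, 10, 20, 40, 80
  ramp-up: 1m         # time to reach the target concurrency
  hold-for: 15m       # time to hold the target concurrency
  iterations: 1000    # limit on the number of scenario iterations
  scenario: reporting-render
```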
Test scenario and dataset
We use the enterprise-scale dataset provided by Atlassian for these tests. We deliberately excluded attachment files from the test data but kept the links to attachments, since attachment file size itself does not affect the Reporting query process.
| Data dimensions | Value for an enterprise-scale dataset |
|-----------------|----------------------------------------|
| Pages | ~900 000 |
| Blog posts | ~100 000 |
| Attachments | ~2 300 000 (links only) |
| Comments | ~6 000 000 |
| Spaces | ~5 000 |
| Users | ~5 000 |
We selected common scenarios and macro usages that have a significant impact on performance. Below is the list of Reporting for Confluence scenarios included in the test:
Building a forum in Confluence with Reporting
List all Comments including unresolved Inline Comments
List all Contents in a Space
Displaying a list of children for a Page with customized details
List all Pages in a Space
Listing all available Spaces
Check whether a Page exists
Test Results
Test results on a single-node setup
Average response time (seconds):

| No. | Scenario | 5 users | 10 users | 20 users |
|-----|----------|---------|----------|----------|
| 1 | Building a forum in Confluence with Reporting | 2.55 | 4.05 | 7.33 |
| 2 | List all Comments including unresolved Inline Comments | 17.35 | 35.22 | 83.48 |
| 3 | List all Contents in a Space | 13.82 | 26.11 | 63.90 |
| 4 | Displaying a list of children for a Page with customized details | 4.34 | 5.69 | 5.45 |
| 5 | List all Pages in a Space | 2.33 | 3.05 | 4.12 |
| 6 | Listing all available Spaces | 7.67 | 21.61 | 68.44 |
| 7 | Check whether a Page exists | 1.26 | 1.44 | 1.47 |
Test results on a 2-node setup
Average response time (seconds):

| No. | Scenario | 10 users | 20 users | 40 users |
|-----|----------|----------|----------|----------|
| 1 | Building a forum in Confluence with Reporting | 2.96 | 4.29 | 6.17 |
| 2 | List all Comments including unresolved Inline Comments | 17.11 | 26.55 | 48.24 |
| 3 | List all Contents in a Space | 13.66 | 19.86 | 37.39 |
| 4 | Displaying a list of children for a Page with customized details | 4.65 | 6.85 | 8.86 |
| 5 | List all Pages in a Space | 2.72 | 3.77 | 5.02 |
| 6 | Listing all available Spaces | 7.27 | 10.83 | 25.40 |
| 7 | Check whether a Page exists | 1.65 | 2.20 | 2.59 |
Test results on a 4-node setup
Average response time (seconds):

| No. | Scenario | 20 users | 40 users | 80 users |
|-----|----------|----------|----------|----------|
| 1 | Building a forum in Confluence with Reporting | 4.28 | 7.95 | 21.46 |
| 2 | List all Comments including unresolved Inline Comments | 19.39 | 33.51 | 74.83 |
| 3 | List all Contents in a Space | 17.18 | 34.44 | 86.34 |
| 4 | Displaying a list of children for a Page with customized details | 5.73 | 9.35 | 21.43 |
| 5 | List all Pages in a Space | 4.25 | 8.98 | 26.08 |
| 6 | Listing all available Spaces | 9.16 | 16.63 | 42.88 |
| 7 | Check whether a Page exists | 3.01 | 6.62 | 19.77 |
Comparing average response times across node counts (at 20 concurrent users)
Average response time (seconds):

| No. | Scenario | 1 node | 2 nodes | 4 nodes |
|-----|----------|--------|---------|---------|
| 1 | Building a forum in Confluence with Reporting | 7.33 | 4.29 | 4.28 |
| 2 | List all Comments including unresolved Inline Comments | 83.48 | 26.55 | 19.39 |
| 3 | List all Contents in a Space | 63.90 | 19.86 | 17.18 |
| 4 | Displaying a list of children for a Page with customized details | 5.45 | 6.85 | 5.73 |
| 5 | List all Pages in a Space | 4.12 | 3.77 | 4.25 |
| 6 | Listing all available Spaces | 68.44 | 10.83 | 9.16 |
| 7 | Check whether a Page exists | 1.47 | 2.20 | 3.01 |
Conclusion
The test results show that at 20 concurrent users, scaling from 1 node to 2 nodes substantially reduces the average page load time. However, increasing from 2 to 4 nodes does not yield a significant further reduction.
Please note that the time taken to load a page containing Reporting macros does not necessarily represent the time taken for the Reporting macros themselves to render, as other factors contribute to the loading time, such as the presence of macros from other plugins. The page only finishes loading once all macros, including those from other plugins, have rendered completely.