Worker Size – Cloudhub vs on-Premise – why compare?

Published on December 22, 2019


MuleSoft has a few hosting offerings. For the purposes of this blog, I would like to classify them as Cloudhub-hosted or non-Cloudhub-hosted.

Cloudhub provides a bursting option only for 0.1 and 0.2 vCore workers. The reason for this classification is to compare like-for-like performance between the MuleSoft-managed Cloudhub option and the non-Cloudhub-hosted option, specifically in a bursting context.

Why Compare?

There are many reasons organizations need to compare performance in this specific classification. Here are a few that are worth thinking about:

  • Compare Cloudhub hosting firepower against non-Cloudhub hosting firepower in bursting-specific scenarios
  • Have quantifiable metrics to make informed decisions during key architectural design activities
  • As the license cost for bursting vCores on Cloudhub is exactly the same as for non-Cloudhub, management can get a clear view of where the better value for money lies, and how large the difference is
  • The deployable worker size on Cloudhub is often compared to more densely packed on-premise deployment models
  • Will there be a performance impact from the network round trip to the cloud and back to the calling application, compared with a locally hosted service?

Comparison Setup

A simple Hello World!! API was deployed to both a Cloudhub trial account and an on-premise Docker container. The idea was to find the best performance that could be extracted from each of these two setups and use that as the basis for comparison.
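How a best-TPS figure can be extracted is sketched below. This is not the original test harness (dedicated load tools such as JMeter are typically used for this); it is just a minimal, self-contained illustration of the measurement idea, with a local stand-in server playing the role of the deployed Hello World!! API. The endpoint, request count, and handler are all illustrative assumptions.

```python
# Minimal sketch of measuring TPS (transactions per second) against an
# HTTP endpoint. A local server stands in for the deployed API.
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Stand-in for the Hello World!! API."""
    def do_GET(self):
        body = b"Hello World!!"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def measure_tps(url, requests=200):
    """Fire sequential requests and return transactions per second."""
    start = time.perf_counter()
    for _ in range(requests):
        with urllib.request.urlopen(url) as resp:
            resp.read()
    elapsed = time.perf_counter() - start
    return requests / elapsed

server = HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

tps = measure_tps(f"http://127.0.0.1:{server.server_port}/", requests=200)
print(f"{tps:.0f} TPS")
server.shutdown()
```

A real test would use concurrent clients and a warm-up period; sequential requests understate the achievable TPS but are enough to show the calculation.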

The on-premise Docker container had the following setup. It was run on the local machine to cut out network delays on the on-premise side.

  • CPU: 2.7 GHz Dual-Core Intel Core i5
  • Memory: 1867 MHz DDR3
  • 0.1 vCore worker size: 500 MB memory
  • 0.2 vCore worker size: 1 GB memory
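To keep the container comparable with the Cloudhub worker sizes, its CPU and memory can be capped to match. A minimal docker-compose sketch, assuming a hypothetical `mule-hello` image containing the Hello World!! app (the image name, port, and limits are illustrative, not the exact setup from the original test):

```yaml
# docker-compose.yml - caps mirroring the Cloudhub 0.1 vCore / 500 MB worker
services:
  mule-hello:
    image: mule-hello:latest   # hypothetical image with the Hello World!! app
    ports:
      - "8081:8081"
    cpus: 0.1                  # change to 0.2 for the larger worker size
    mem_limit: 500m            # change to 1g for the 0.2 vCore comparison
```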

The simple Hello World!! API used for our test looked like the one below.

[Image: Hello World!! API flow]
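A flow of this shape can be sketched in Mule 4 XML roughly as follows. This is an assumed reconstruction, not the exact flow from the screenshot; the listener path and port are illustrative, and schema locations are omitted for brevity:

```xml
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http">

    <http:listener-config name="HTTP_Listener_config">
        <http:listener-connection host="0.0.0.0" port="8081"/>
    </http:listener-config>

    <flow name="hello-world-flow">
        <http:listener config-ref="HTTP_Listener_config" path="/hello"/>
        <set-payload value="Hello World!!"/>
    </flow>
</mule>
```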

Performance Difference

Below is the best performance, in TPS (transactions per second), that could be extracted without making any code changes or tuning.

             On-Premise   Cloudhub
0.1 vCore    129 TPS      854 TPS
0.2 vCore    370 TPS      1106 TPS
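The speed-up ratios quoted below follow directly from the table:

```python
# Speed-up of Cloudhub over on-premise, computed from the TPS table above
results = {"0.1 vCore": (129, 854), "0.2 vCore": (370, 1106)}  # (on-prem, Cloudhub)

speedups = {size: cloud / prem for size, (prem, cloud) in results.items()}
for size, s in speedups.items():
    print(f"{size}: Cloudhub is {s:.1f}x faster")  # 6.6x and 3.0x
```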


  • 0.1 vCore on Cloudhub gave 6x+ better performance compared to on-premise
  • 0.2 vCore on Cloudhub gave close to 3x the performance of on-premise
  • With a better on-premise CPU and memory specification the outcome might be different, but this blog gives a relative perspective on the firepower gap
  • It is recommended that organisations think carefully about Cloudhub vs on-premise hosting, taking the performance aspect into consideration as well
  • Bursting works on a credit system, and performance varies based on the credit left over; even so, the Cloudhub performance was still better
  • As the license costs for a Cloudhub vCore and an on-premise vCore are the same, organisational license consumption is the same in both cases
  • The bursting feature is only available for 0.1 and 0.2 vCores, and most APIs are deployed at these two worker sizes

End Note

We hope you found this article interesting. Do drop us a comment below with your inputs, views, and opinions on Cloudhub vs on-premise worker performance.

Automated Code Analysis in Mulesoft Projects:

Interested in learning about or trying out our exciting new product for automated Mulesoft code analysis, and seeing how it reduces project costs by over 80%?

Please follow the link below to learn more:

Quantifying Benefits of a Code Analyzer in the Mule Project lifecycle

