
Worker Size – Cloudhub vs on-Premise – why compare?

Published on December 22, 2019

MuleSoft offers several hosting options. For the purposes of this blog, I would like to classify them as Cloudhub hosted or non-Cloudhub hosted.

Cloudhub provides a bursting option only for the 0.1 and 0.2 vCore worker sizes. The reason for this classification is to compare like-for-like performance between the MuleSoft-managed Cloudhub option and the non-Cloudhub hosted option, specifically in the bursting context.
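
Worker size is selected at deployment time. As a minimal sketch (not taken from this post), the Mule Maven plugin's Cloudhub deployment configuration is one common way to pin an application to a 0.1 vCore (MICRO) or 0.2 vCore (SMALL) worker; the application name, environment, credentials, and versions below are placeholders.

    <!-- pom.xml excerpt: deploy to a 0.1 vCore (MICRO) Cloudhub worker -->
    <plugin>
        <groupId>org.mule.tools.maven</groupId>
        <artifactId>mule-maven-plugin</artifactId>
        <version>3.3.5</version>
        <extensions>true</extensions>
        <configuration>
            <cloudHubDeployment>
                <uri>https://anypoint.mulesoft.com</uri>
                <muleVersion>4.2.2</muleVersion>
                <username>${anypoint.username}</username>
                <password>${anypoint.password}</password>
                <applicationName>hello-world-api</applicationName>
                <environment>Sandbox</environment>
                <workerType>MICRO</workerType> <!-- MICRO = 0.1 vCore; SMALL = 0.2 vCore -->
                <workers>1</workers>
            </cloudHubDeployment>
        </configuration>
    </plugin>

Switching workerType from MICRO to SMALL is all that changes when moving from a 0.1 vCore to a 0.2 vCore worker.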

Why Compare?

There are many reasons organizations need to compare performance within this specific classification. Here are a few that are worth thinking about:

  • Compare Cloudhub hosting firepower against non-Cloudhub hosting firepower in bursting-specific scenarios
  • Have quantifiable metrics for making informed decisions in key architectural design activities
  • As the license cost for bursting vCores on Cloudhub is exactly the same as for non-Cloudhub hosting, management can get a clear view of where the better value for money lies, and how large the difference is
  • The deployable worker sizes on Cloudhub are often compared to more densely packed on-premise deployment models
  • Understand whether there is a network performance impact from requests going out to the cloud and back to the calling application, compared to a locally hosted service

Comparison Setup

A simple Hello World!! API was deployed onto both a Cloudhub trial account and an on-premise Docker container. The idea was to find the best performance that could be extracted from each of these two setups and use that as the basis for comparison.

The on-premise Docker container had the following setup. It was run on a local machine to cut network delays out of the on-premise measurements.

  • CPU: 2.7 GHz Dual-Core Intel Core i5
  • Memory: 1867 MHz DDR3
  • 0.1 vCore has 500 MB memory
  • 0.2 vCore has 1 GB memory

The simple Hello World!! API used for our test looked like the one below.

[Screenshot: the Hello World!! API flow]
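
Since the screenshot is not reproduced here, the sketch below shows what a Hello World!! flow of this kind typically looks like in Mule 4 XML: an HTTP listener followed by a static payload. The listener port (8081) and path (/hello) are illustrative assumptions, not the exact values used in the test.

    <?xml version="1.0" encoding="UTF-8"?>
    <mule xmlns="http://www.mulesoft.org/schema/mule/core"
          xmlns:http="http://www.mulesoft.org/schema/mule/http"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
                              http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">

        <!-- Inbound HTTP endpoint shared by the flow -->
        <http:listener-config name="HTTP_Listener_config">
            <http:listener-connection host="0.0.0.0" port="8081"/>
        </http:listener-config>

        <!-- Single flow: receive a request and return a static response -->
        <flow name="hello-world-flow">
            <http:listener config-ref="HTTP_Listener_config" path="/hello"/>
            <set-payload value="Hello World!!"/>
        </flow>
    </mule>

A flow this small keeps the comparison focused on the runtime and hosting firepower rather than on application logic.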

Performance Difference

Below is the best performance, in terms of TPS (transactions per second), that could be extracted without making any tuning changes to the code.

Worker size    On-Premise    Cloudhub
0.1 vCore      129 TPS       854 TPS
0.2 vCore      370 TPS       1106 TPS

Conclusion

  • 0.1 vCore on Cloudhub gave more than 6X the performance of on-premise
  • 0.2 vCore on Cloudhub gave close to 3X the performance of on-premise
  • With a better on-premise CPU and memory specification the outcome might differ, but this blog gives a relative perspective on the firepower gap
  • It is recommended that organisations think carefully about Cloudhub vs on-premise hosting, taking the performance aspect into consideration as well
  • Bursting works on a credit system and performance varies based on the credits left over, but even with the credit system the Cloudhub performance was still better
  • As the license costs for a Cloudhub vCore and an on-premise vCore are the same, organisational license consumption is the same in both cases
  • The bursting feature is only available for 0.1 and 0.2 vCores, and most APIs are deployed with these two worker sizes

End Note

We hope you found this article interesting. Do drop us a comment below with your inputs, views, and opinions on Cloudhub vs on-premise and why to compare them.

Automated Code Analysis in Mulesoft Projects:

Interested in learning about or trying out our exciting new product for automated MuleSoft code analysis, and in how it can reduce project costs by over 80%?

Please follow the link below to know more:

Quantifying Benefits of a Code Analyzer in the Mule Project lifecycle

