
Cloudhub vs on-Premise – why compare?

Published on December 22, 2019


MuleSoft has a few hosting offerings. For the purposes of this blog, I would like to class them as Cloudhub hosted or non-Cloudhub hosted.

Cloudhub provides a bursting option only for 0.1 and 0.2 vCore workers. The reason for this classification is to compare like-for-like performance between the MuleSoft-managed Cloudhub option and the non-Cloudhub hosted option, specifically in the bursting context.

Why Compare?

There are many reasons organizations need to compare performance across these two classes of hosting. Here are a few that are worth thinking about:

  • Compare Cloudhub hosting firepower against non-Cloudhub hosting firepower in bursting-specific scenarios
  • Gather quantifiable metrics to support informed decisions during key design activities from an architectural perspective
  • As the license cost for bursting vCores on Cloudhub is exactly the same as for non-Cloudhub vCores, management can get a clear view of where the better value for money lies, and how large the difference is
  • Deployable worker sizes on Cloudhub are often compared with the more densely packed deployment models used on-premise
  • Is there a performance impact from network traffic going out to the cloud and back to the calling application, compared with a locally hosted service?

Comparison Setup

A simple Hello World!! API was deployed onto both a Cloudhub trial account and an on-premise Docker container. The idea was to find the best performance that could be extracted from each of these two setups and use that as the basis for comparison.

The on-premise Docker container had the following setup. It was run on a local machine to cut out network delays on the on-premise side; a sketch of how such a container might be sized is shown after the list.

  • CPU: 2.7 GHz Dual-Core Intel Core i5
  • Memory: 1867 MHz DDR3
  • 0.1 vCore has 500 MB memory
  • 0.2 vCore has 1GB memory
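
The post does not say how the Docker container was started or how its resources were constrained. As an assumption only, one way to approximate a Cloudhub worker size locally is to cap the container's CPU and memory, sketched here with the Docker SDK for Python (the image name mule-hello-world:latest is hypothetical):

import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Start the packaged API with limits that mimic a 0.1 vCore / 500 MB worker.
container = client.containers.run(
    "mule-hello-world:latest",   # hypothetical image containing the Hello World!! API
    detach=True,
    ports={"8081/tcp": 8081},    # expose the API's HTTP listener (port assumed)
    mem_limit="500m",            # 0.1 vCore worker has 500 MB memory
    nano_cpus=100_000_000,       # 0.1 of a CPU; 1 CPU = 1_000_000_000 nano CPUs
)
print(f"Started container {container.short_id}")

For the 0.2 vCore comparison the same call would use mem_limit="1g" and nano_cpus=200_000_000.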

The simple Hello World!! API for our test looked like the screenshot below.

[Image: Hello World!! API flow]
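
The post does not name the load-generation tool used to produce the TPS figures in the next section; a dedicated tool such as JMeter is typical. Purely as an illustrative sketch, transactions per second could be approximated with a small Python script like the one below (the URL, concurrency, and duration are assumed values, not the post's settings):

import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8081/helloworld"  # assumed path; point at the Cloudhub worker URL for the cloud run
CONCURRENCY = 50                          # number of parallel callers (assumed)
DURATION_SECONDS = 60                     # measurement window in seconds (assumed)


def caller(stop_at: float) -> int:
    """Call the API in a loop until the deadline and count successful responses."""
    successes = 0
    session = requests.Session()
    while time.time() < stop_at:
        if session.get(URL, timeout=10).status_code == 200:
            successes += 1
    return successes


def measure_tps() -> float:
    """Run CONCURRENCY callers in parallel and average successes over the window."""
    stop_at = time.time() + DURATION_SECONDS
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        totals = pool.map(caller, [stop_at] * CONCURRENCY)
    return sum(totals) / DURATION_SECONDS


if __name__ == "__main__":
    print(f"Approximate TPS: {measure_tps():.0f}")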

Performance Difference

Below is the best performance in terms of TPS (Transactions Per Second) that could be extracted without making any tuning changes to the code.

              On Premise    Cloudhub
  0.1 vCore   129 TPS       854 TPS
  0.2 vCore   370 TPS       1106 TPS
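
The multipliers quoted in the conclusion below follow directly from the table; a quick calculation (not additional test data):

# Cloudhub TPS divided by on-premise TPS for each worker size, using the figures above.
results = {"0.1 vCore": (129, 854), "0.2 vCore": (370, 1106)}

for worker_size, (on_prem_tps, cloudhub_tps) in results.items():
    print(f"{worker_size}: {cloudhub_tps / on_prem_tps:.1f}x faster on Cloudhub")

# Output:
# 0.1 vCore: 6.6x faster on Cloudhub
# 0.2 vCore: 3.0x faster on Cloudhub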

Conclusion

  • 0.1 vCore on Cloudhub gave more than 6x the performance of on-premise
  • 0.2 vCore on Cloudhub gave close to 3x the performance of on-premise
  • With a better on-premise CPU and memory specification the outcome might be different, but this blog gives a relative perspective on the firepower gap
  • It is recommended that organisations think carefully about Cloudhub vs on-premise hosting, taking the performance aspect into consideration as well
  • Bursting works on a credit system and performance varies based on the credit left over, but even with the credit system the performance was still better
  • As the license costs for a Cloudhub vCore and an on-premise vCore are the same, organisational license consumption is the same in both cases
  • The bursting feature is only available for 0.1 and 0.2 vCore workers, and most APIs are deployed with these two worker sizes

End Note

We hope you found this article interesting. Do drop us a comment below with your inputs, views, and opinions on Cloudhub vs on-premise hosting.

You can learn more about an exciting new code quality product that reduces your Mule project costs by 79% at the link below:

https://integralzone.com/iz-analyzer-mule-benefits/


