Choosing Worker Size in CloudHub

Published on February 10, 2019

Running applications in MuleSoft’s CloudHub is probably the easiest way for any organization using Mule to deploy and manage applications. It offers 99.99% uptime per annum, zero downtime during application deployment, easy management of properties, flow monitoring, application insights, persistent queues, scheduler tracking, persistent object stores, and more.

CloudHub is the MuleSoft component that provides hosted and managed servers capable of running any supported version of the Mule runtime. CloudHub provides a high level of application isolation by running only one Mule application per worker, which ensures that issues created by one application will not affect any other application in any environment.

MuleSoft hosts its entire cloud infrastructure on AWS (Amazon Web Services). Understanding the plans and server types AWS offers makes it much easier to decide how many vCores, and how many workers, to choose for any application deployment in CloudHub.

Where does AWS fit into this?

CloudHub mainly offers two families of worker sizes, 0.x and x.0. The 0.x sizes are 0.1 and 0.2 vCores; the x.0 sizes are 1, 2, 4, 8, 16, and (for selected accounts) 32 vCores. Although the two families are fundamentally different, the difference may not be obvious to someone unfamiliar with AWS machine types.

Compared with AWS machine types, the 0.1 and 0.2 vCore workers are equivalent to AWS T2 micro instances, which come with what AWS calls burstable performance. The x.0 vCore workers, by contrast, are equivalent to fixed-performance instance types (A1 medium, large, and so on), meaning the machine has a fixed performance plan.

What is Fixed Performance?

Fixed-performance servers in AWS are like any conventional server: a certain amount of memory and CPU power is guaranteed from the start, regardless of whether the CPU is actually being used. You can learn more about AWS instance types in the AWS documentation: https://aws.amazon.com/ec2/instance-types/.

What is Burstable Performance?

Burstable-performance machines are closely monitored for how much CPU they consume. When the machine is not using its CPU allowance, AWS adds CPU credits to its balance. Credits accrue every hour and can be accumulated up to 24 hours’ worth. These credits are essentially CPU processing credits, and spending them lets the machine burst above its baseline to obtain more processing power than it is normally given. A detailed explanation of AWS burstable performance can be found here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html
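As a rough illustration, the credit mechanics can be modeled in a few lines of Python. The figures below assume a t2.micro-style instance (roughly 6 credits earned per hour, capped at 144, one credit being one vCPU-minute at 100%); check the AWS documentation above for the exact rates of each instance type.

```python
# Rough model of AWS burstable CPU credits (t2.micro-style figures;
# actual rates vary by instance type -- see the AWS docs).
EARN_RATE_PER_HOUR = 6      # credits earned per hour below baseline
MAX_CREDITS = 144           # cap: 24 hours of accrual

def simulate(hours_idle, hours_bursting, burn_rate_per_hour=60):
    """Return credits left after an idle period followed by a burst.

    One credit = one vCPU at 100% for one minute, so bursting at
    full throttle burns 60 credits per hour.
    """
    credits = min(hours_idle * EARN_RATE_PER_HOUR, MAX_CREDITS)
    credits -= hours_bursting * burn_rate_per_hour
    return credits

# A worker idle for 22 hours, then bursting at 100% CPU for 1 hour:
print(simulate(hours_idle=22, hours_bursting=1))  # 72 credits to spare
```

The takeaway: a mostly-idle worker banks more than enough credits to fund a short daily burst, which is exactly the profile the next section exploits.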

How does all this come together?

Having seen which AWS instance types back the worker sizes offered by MuleSoft CloudHub, the next critical step is understanding how to put this knowledge to use.

For – An application with a scheduler

A typical scheduler application runs once or twice a day, depending on the requirement, and stays idle the rest of the time. When the scheduler runs and processes records in bulk, it needs a significant amount of CPU power to perform acceptably, and the credits accumulated while idle cover exactly this kind of burst. Provided the memory requirements are met, it is wise to deploy such an application to 0.1 or 0.2 vCores, on one or more workers, to run it reliably at the lowest possible cost.
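A quick back-of-the-envelope check makes this concrete. The sketch below tests whether a daily batch job's CPU demand fits within the credits a burstable worker earns; the default earn rate is an assumed t2.micro-like figure, not a CloudHub-published number.

```python
def fits_burstable(job_minutes_at_full_cpu, runs_per_day,
                   earn_rate_per_hour=6):
    """Check whether a daily scheduler's CPU demand stays within the
    credits a burstable worker earns while idle.

    One credit = one vCPU-minute at 100%, so a job needs roughly one
    credit per full-CPU minute it runs.
    """
    credits_needed = job_minutes_at_full_cpu * runs_per_day
    credits_earned = 24 * earn_rate_per_hour   # ignoring the cap
    return credits_needed <= credits_earned

# A 30-minute bulk load running twice a day needs 60 credits,
# versus ~144 earned per day:
print(fits_burstable(30, 2))  # True
```

If the check fails, the job is busy too much of the day to live off credits, and a fixed-performance (x.0 vCore) worker is the safer choice.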

For – An application which is an API

APIs built by an organisation also tend to have a peak usage window within the day; no API sees continuously high usage for 24 hours except in exceptional cases. At peak times the API needs high processing power for its computations, which it can obtain by spending the credits accumulated during off-peak hours.

Is there a catch to this?

While the information and strategies above help in making smarter vCore decisions, they apply only to the processing capacity and requirements of the application. They do not take into account the amount of memory the application will use. If the application’s memory requirements are not met, there is no option other than upgrading to a larger vCore size.
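Putting the two constraints together, worker selection can be sketched as: pick the smallest size whose heap meets the memory requirement, then check whether the CPU profile suits a burstable size. The heap figures below are approximate assumptions and subject to change; always verify against the current CloudHub documentation.

```python
# Approximate CloudHub worker heap sizes in MB (assumed figures,
# subject to change -- verify against the CloudHub documentation).
WORKER_HEAP_MB = {
    0.1: 500,
    0.2: 1000,
    1.0: 1500,
    2.0: 3500,
    4.0: 7500,
}

def smallest_worker(required_heap_mb):
    """Return the smallest vCore size whose heap fits the app."""
    for vcores, heap in sorted(WORKER_HEAP_MB.items()):
        if heap >= required_heap_mb:
            return vcores
    raise ValueError("no single worker size satisfies this heap")

# An app needing ~1.2 GB of heap cannot run on 0.1 or 0.2 vCores,
# no matter how little CPU it uses:
print(smallest_worker(1200))  # 1.0
```

Memory is therefore the hard floor: only once it is satisfied does the burstable-versus-fixed CPU reasoning come into play.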

End Note:

Hope you found this article interesting. Do drop us a comment below with your inputs, views, and opinions on choosing worker size in CloudHub.

Also, if you are interested in learning more about an exciting new code quality product that reduces your Mule project costs by 79%, follow the link below:
