MuleSoft

Choosing Worker Size in CloudHub

Written by:
Published on February 10, 2019


Running applications in MuleSoft’s CloudHub is probably the easiest way for any organization using Mule to manage and deploy applications. It guarantees 99.99% uptime per annum and zero downtime during application deployment, and it offers easy management of properties, flow monitoring, application insights, persistent queues, scheduler tracking, persistent object stores, and more.

CloudHub is a component of MuleSoft that provides hosted and managed servers capable of running any supported version of the Mule runtime. CloudHub provides a high level of application isolation by running only one Mule application per worker. This ensures that issues created by one application will not affect any other application in any environment.

MuleSoft hosts its entire cloud infrastructure on AWS (Amazon Web Services). It is important to understand the different plans and server types offered by AWS in order to make efficient decisions about how many vCores and how many workers to choose for any application deployment in CloudHub.

Where does AWS fit into this?

CloudHub mainly offers two families of worker sizes, 0.x and x.0. The 0.x family comes in 0.1 and 0.2 vCores, while the x.0 family comes in 1, 2, 4, 8, 16, and 32 vCores (32 is available only for selected accounts). Although these two families behave quite differently, the difference may not be obvious to someone unfamiliar with AWS machine types.

When compared to AWS machine types, the 0.1 and 0.2 vCore workers are the equivalent of AWS T2 micro instances, which come with something called burstable performance. The x.0 vCore workers, by contrast, are the equivalent of AWS fixed-performance instance types such as A1 medium, large, and so on.
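The split described above can be captured in a small helper. This is purely illustrative (the mapping is taken from this article, not from any official MuleSoft API), and the function name is hypothetical:

```python
# Hypothetical helper mapping CloudHub worker sizes (in vCores) to the
# performance class described in this article. Illustrative only.

BURSTABLE_SIZES = {0.1, 0.2}          # T2-micro-like, burstable performance
FIXED_SIZES = {1, 2, 4, 8, 16, 32}    # fixed-performance machine types

def performance_class(vcores: float) -> str:
    """Return 'burstable' or 'fixed' for a supported worker size."""
    if vcores in BURSTABLE_SIZES:
        return "burstable"
    if vcores in FIXED_SIZES:
        return "fixed"
    raise ValueError(f"Unsupported CloudHub worker size: {vcores}")

print(performance_class(0.1))  # burstable
print(performance_class(2))    # fixed
```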

What is Fixed Performance?

Fixed-performance servers in AWS are like any conventional server: a certain amount of memory and CPU power is guaranteed from the very beginning, regardless of whether the CPU is actually being used. You can visit the AWS documentation to learn more about AWS instance types: https://aws.amazon.com/ec2/instance-types/.

What is Burstable Performance?

Burstable-performance machines are closely monitored for how much CPU they consume. When the machine is not using its CPU allowance, AWS adds corresponding credits to that machine's balance. Credits are earned every hour and can accumulate up to a total of 24 hours' worth. These are essentially CPU processing credits, and spending them lets the machine burst above its baseline processing power. A detailed explanation of AWS burstable performance can be found here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html
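The earn-and-cap behaviour can be sketched with a toy model. The numbers below assume a t2.micro-like instance (roughly 6 credits earned per hour, with the balance capped at 24 hours of accrual, where one credit is about one vCPU-minute at 100%); check the AWS documentation linked above for the exact figures for your instance type:

```python
# Toy model of AWS burstable CPU credit accrual. Assumes t2.micro-like
# numbers: 6 credits earned per idle hour, balance capped at 24 hours
# of accrual (144 credits). Purely illustrative.

EARN_PER_HOUR = 6
MAX_BALANCE = 24 * EARN_PER_HOUR  # credits beyond 24 hours of accrual are lost

def simulate_idle(hours_idle: int, balance: float = 0.0) -> float:
    """Credit balance after sitting idle for the given number of hours."""
    for _ in range(hours_idle):
        balance = min(balance + EARN_PER_HOUR, MAX_BALANCE)
    return balance

print(simulate_idle(4))   # 24 credits after 4 idle hours
print(simulate_idle(48))  # capped at 144
```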

How does all this come together?

After understanding the different instance types offered by MuleSoft CloudHub, the next critical step is to understand how to put this to use.

For – An application with a scheduler

A typical scheduler application is one which runs once or twice a day, depending on the requirement, and usually stays idle at all other times. When the scheduler is running to process records in bulk, it requires a significant amount of CPU power to function with acceptable performance. If the memory requirements are met, it is wise to deploy such an application to 0.1 or 0.2 vCores, on one or multiple instances (workers), to ensure these applications run successfully at the lowest possible cost.
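A quick back-of-the-envelope check can tell you whether a daily batch burst fits within the credits earned while idle. The accrual rate below is a t2.micro-like assumption, and the model deliberately ignores credits earned during the burst itself:

```python
# Rough sketch: does a once-a-day batch job fit on a burstable worker?
# Assumes a t2.micro-like accrual rate (6 credits per idle hour, where
# 1 credit = 1 vCPU-minute at 100%). Illustrative numbers only.

EARN_PER_HOUR = 6

def daily_burst_covered(burst_minutes: float) -> bool:
    """True if idle-hour credits cover a daily full-CPU burst of this length."""
    idle_hours = 24 - burst_minutes / 60
    earned = idle_hours * EARN_PER_HOUR
    return earned >= burst_minutes

print(daily_burst_covered(60))   # 1-hour daily burst: True
print(daily_burst_covered(300))  # 5-hour daily burst: False
```

Under these assumptions a scheduler that bursts for about an hour a day comfortably fits a 0.1 vCore worker, while a job that grinds for many hours would exhaust its credits.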

For – An application which is an API

APIs built by an organisation also tend to have a peak usage window within the day; no API gets continuously high usage for 24 hours unless it is an exceptional case. At peak times such an API also needs high processing power for its computations, which it can obtain by using the credits accumulated during non-peak hours.

Is there a catch to this?

While all the information and strategies mentioned above can help you make smarter decisions when choosing vCores, they apply only to the processing capacity and requirements of the application. They do not take into consideration the amount of memory used by the application. If the memory requirements of the application are not met, there is no option other than upgrading to a higher vCore.
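In other words, memory sets a hard floor on worker size. A sizing helper along these lines makes the rule concrete; the memory-per-worker figures below are assumptions based on CloudHub documentation at the time of writing and should be verified against your own Anypoint Platform account:

```python
# Hypothetical sizing helper: pick the smallest worker whose memory meets
# the application's requirement. The memory figures are assumed values
# (verify against current CloudHub documentation).

WORKER_MEMORY_GB = {  # vCores -> worker memory (GB), assumed
    0.1: 0.5, 0.2: 1.0, 1: 1.5, 2: 3.5, 4: 7.5, 8: 15.0, 16: 32.0,
}

def smallest_worker_for(memory_gb: float) -> float:
    """Smallest worker size (vCores) offering at least memory_gb of memory."""
    for vcores in sorted(WORKER_MEMORY_GB):
        if WORKER_MEMORY_GB[vcores] >= memory_gb:
            return vcores
    raise ValueError(f"No single worker offers {memory_gb} GB of memory")

print(smallest_worker_for(0.4))  # 0.1
print(smallest_worker_for(2.0))  # 2
```

If this helper returns a larger size than your CPU analysis suggested, memory, not processing power, is the constraint driving the upgrade.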

End Note:

Hope you found this article interesting. Do drop us a comment below with your inputs, views, and opinions on choosing worker size in CloudHub.

Also, if you are interested in learning more about an exciting new code quality product that reduces your Mule project costs by 79%, follow the link below:
