
Moving from Hardware to the Cloud: Elastic HPC Enhances the Capacity to Innovate

Many CAE users today, running powerful but demanding application suites like ANSYS® and Numeca®, lack the compute power to meet their needs. Some rely solely on their workstations to power their simulations. Others work for companies that have invested in HPC hardware but must often wait their turn on heavily loaded clusters. Still others use their own HPC clusters intensively at times, yet leave them underutilized the rest of the time.

Each of the above cases represents an opportunity for CAE engineers to optimize their approach. Rather than investing in more workstations or HPC hardware, they can adopt an elastic, cloud-based approach to HPC. By accessing HPC resources in the cloud, workstation users gain true HPC compute resources for the first time, while those with dedicated HPC hardware or clusters obtain additional compute power for bursting.

The result is that you, whether you are a CAE engineer or another type of HPC user, can easily access the HPC resources you need, whenever you need them. These resources can be rapidly scaled up and down as your workloads demand, so you pay only for the HPC capacity you are using at any given time and avoid overinvesting in infrastructure that sits poorly utilized.

For instance, one of ADC’s elastic HPC customers runs workloads that consume 4,000 cores’ worth of compute. But those 4,000 cores are only needed for two hours out of every day. Rather than overinvest in hardware sized for that peak, they provision a buffer that they can burst into during those crucial two hours of every 24-hour cycle.

As elastic HPC users, they can also keep additional headroom for unexpected traffic and workload surges on top of the 4,000-core baseline. Whenever they experience a major, unexpected surge, the resources are at hand to meet it.
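As a rough back-of-the-envelope illustration of why this matters, the short Python sketch below compares the core-hours such a customer actually consumes at peak with the capacity they would have to own to cover that peak in-house. The per-core-hour rate is a placeholder for illustration only, not an ADC price.

# Peak-sized hardware vs. bursting: a rough core-hour comparison.
PEAK_CORES = 4_000          # cores needed at peak (from the example above)
PEAK_HOURS_PER_DAY = 2      # hours per day the peak workload actually runs
RATE_PER_CORE_HOUR = 0.05   # hypothetical cost per core-hour, placeholder only

# Owning hardware sized for the peak means carrying 24 hours of capacity per day.
owned_core_hours = PEAK_CORES * 24
# Bursting means paying only for the core-hours consumed during the peak window.
burst_core_hours = PEAK_CORES * PEAK_HOURS_PER_DAY

utilization = burst_core_hours / owned_core_hours
print(f"Utilization of a peak-sized cluster: {utilization:.1%}")   # about 8.3%
print(f"Daily burst spend (hypothetical): ${burst_core_hours * RATE_PER_CORE_HOUR:,.2f}")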

Elastic HPC allows you to be as conservative in your resource planning as you need to be. You can feel confident that your elastic HPC needs will be met every time.

When is a provider really providing elastic HPC?

When your usage demands bursting into 400, 4,000, or more additional cores of compute power, you need those resources immediately. And you need those additional resources to be provisioned, and later released, automatically once demand drops back down.

While many HPC service providers today claim to offer elastic HPC, Advania Data Centers is one of the few that can actually deliver it, via its cloud-based HPCFLOW service. Customers control their HPC resources through a versatile, user-friendly self-service portal. Elastic HPC allocations can also be automated via APIs in ADC’s stack, providing instantaneous provisioning and release. Customers can therefore add or remove as many nodes as they need, at any time.
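ADC’s HPCFLOW API is not documented in this article, so the following is only a generic sketch of what API-driven scaling can look like, assuming a REST-style service with token authentication; the base URL, paths, payload fields, and cluster name are all hypothetical placeholders.

# Generic sketch of scaling an elastic HPC cluster through a REST API.
# All endpoints, fields, and identifiers below are hypothetical placeholders;
# they do not describe ADC's actual HPCFLOW API.
import requests

BASE_URL = "https://hpc.example.com/api/v1"    # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

def scale_cluster(cluster_id: str, node_count: int) -> dict:
    """Ask the service to resize the given cluster to node_count nodes."""
    resp = requests.post(
        f"{BASE_URL}/clusters/{cluster_id}/scale",
        json={"nodes": node_count},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Burst up ahead of the daily peak, then release the extra nodes afterwards.
scale_cluster("cae-prod", node_count=125)   # e.g. 125 x 32-core nodes = 4,000 cores
# ... run the peak workload ...
scale_cluster("cae-prod", node_count=10)    # drop back to the baseline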

ADC is also able to accommodate users with a range of existing HPC investments. This includes users who have their own HPC cluster and are using elastic HPC only as an add-on to their existing compute power. It also includes customers who have no in-house HPC resources at all. For the latter, elastic HPC is a cost-effective gateway into the HPC world, providing a zero CapEx approach where they can pay only for the HPC resources that they use.

Every customer is different; ADC custom-fits the solution

Another strength ADC offers users interested in elastic HPC’s power and flexibility is its in-house HPC expertise. Its seasoned HPC experts work closely with customers to understand their needs before designing, provisioning, and managing their elastic HPC clusters. The result is that each customer’s cluster is optimized for their particular performance and budget requirements, whether they need 400 or 40,000 cores.

If you’re curious about what performance enhancements and cost savings elastic HPC can provide, click here to sign up for a customized HPCFLOW benchmark today.
