VIRTUAL SERVER INFRASTRUCTURE

AWS Announces General Availability of Amazon EC2 Hpc6a Instances

Amazon Web Services | January 11, 2022

Amazon Web Services, Inc. (AWS) announced the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Hpc6a instances, a new instance type that is purpose-built for tightly coupled high performance computing (HPC) workloads. Hpc6a instances, powered by 3rd Gen AMD EPYC processors, expand AWS’s portfolio of HPC compute options and deliver up to 65% better price performance compared to similar compute-optimized Amazon EC2 instances that customers use for HPC workloads today. Hpc6a instances make it even more cost-efficient for customers to scale HPC clusters on AWS to run their most compute-intensive workloads like genomics, computational fluid dynamics, weather forecasting, molecular dynamics, computational chemistry, financial risk modeling, computer-aided engineering, and seismic imaging. Hpc6a instances are available on demand via a low-cost, pay-as-you-go usage model with no upfront commitments. To get started with Hpc6a instances, visit aws.amazon.com/ec2/instance-types/hpc6.

Organizations across numerous sectors rely on HPC to solve their most complex academic, scientific, and business problems. However, effectively using HPC is expensive because it requires the ability to process large amounts of data, which demands an abundance of compute power, fast memory and storage, and low-latency networking within HPC clusters. Some organizations build infrastructure on premises to run HPC workloads, but that involves expensive upfront capital investment, lengthy procurement cycles, ongoing management overhead to monitor hardware and keep software up to date, and limited flexibility when the infrastructure inevitably becomes obsolete and must be upgraded. Customers across many industries run their HPC workloads in the cloud to take advantage of the superior security, scalability, and elasticity it offers. Engineers, researchers, and scientists rely on AWS to run their largest and most complex HPC workloads and choose Amazon EC2 instances with enhanced networking (e.g., C5n, R5n, M5n, and C6gn) to scale tightly coupled HPC workloads that require high levels of inter-instance communication across thousands of interdependent tasks. While the performance of these instances is sufficient for most HPC use cases, as workloads scale further to solve increasingly difficult problems, customers are looking to maximize price performance as they run HPC workloads that can grow to tens of thousands of servers on AWS.

New Hpc6a instances are purpose-built to offer the best price performance for running HPC workloads at scale in the cloud. Hpc6a instances deliver up to 65% better price performance for HPC workloads to carry out complex calculations across a range of cluster sizes—up to tens of thousands of cores. Hpc6a instances are enabled with Elastic Fabric Adapter (EFA)—a network interface for Amazon EC2 instances—by default. With EFA networking, customers benefit from low latency, low jitter, and up to 100 Gbps of EFA networking bandwidth to increase operational efficiency and drive faster time-to-results for workloads that rely on inter-instance communications. Hpc6a instances are powered by 3rd Gen AMD EPYC processors that run at frequencies up to 3.6 GHz and provide 384 GB of memory. Using Hpc6a instances, customers can more cost-effectively tackle their biggest and most difficult academic, scientific, and business problems with HPC, and realize the benefits of AWS with superior price performance.
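For readers who want to experiment with the instance type and EFA networking described above, the following is a minimal sketch (not taken from the announcement) of launching a single Hpc6a instance with an EFA network interface using the AWS SDK for Python (boto3). The AMI, subnet, security group, and placement group identifiers are placeholders and would need to be replaced with real resources in your own account.

```python
# Minimal sketch: launch one Hpc6a instance with an EFA network interface via boto3.
# The AMI, subnet, security group, and placement group names are placeholders;
# a real tightly coupled HPC job would launch many such instances in one placement group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # US East (Ohio), per the announcement

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # placeholder EFA-capable AMI
    InstanceType="hpc6a.48xlarge",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",                 # request an Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0", # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],     # placeholder security group
    }],
    Placement={"GroupName": "my-hpc-placement-group"},  # assumed to already exist
)
print(response["Instances"][0]["InstanceId"])
```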

“By consistently innovating and creating new purpose-built Amazon EC2 instances for virtually every type of workload, AWS customers have realized huge price performance benefits for some of today’s most business-critical applications. While high performance computing has helped solve some of the most difficult problems in science, engineering, and business, effectively running HPC workloads can be cost-prohibitive for many organizations. Purpose-built for HPC workloads, Hpc6a instances now help customers realize up to 65% better price performance for their HPC clusters at virtually any scale, so they can focus on solving the biggest problems that matter to them most without the cost barriers that exist today.”

David Brown, Vice President of Amazon EC2 at AWS

“We are excited to continue our momentum with AWS and provide their customers with this new, powerful instance for high performance computing workloads,” said Dan McNamara, Senior Vice President and General Manager, Server Business at AMD. “AMD EPYC processors are helping customers of all sizes solve some of their biggest and most complex problems. From small universities to enterprises to large research facilities, Hpc6a instances powered by 3rd Gen AMD EPYC processors open up the world of powerful HPC performance with cloud scalability to more customers around the world.”

Customers can use Hpc6a instances with AWS ParallelCluster (an open-source cluster management tool) to provision Hpc6a instances alongside other instance types, giving customers the flexibility to run different workload types optimized for different instances within the same HPC cluster. Hpc6a instances benefit from the AWS Nitro System, a collection of building blocks that offload many of the traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and increased security while also reducing virtualization overhead. Hpc6a instances are available for purchase as On-Demand Instances or Reserved Instances, or with Savings Plans. Hpc6a instances are available in US East (Ohio) and AWS GovCloud (US-West), with availability in additional AWS Regions coming soon.
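As an illustration of the ParallelCluster workflow mentioned above, here is a minimal sketch that generates a cluster configuration containing a Slurm queue of Hpc6a instances with EFA enabled. It is written in Python (using PyYAML) and is not taken from AWS documentation; the field names follow the AWS ParallelCluster version 3 configuration schema as commonly documented, and the subnet ID, key name, head-node instance type, and queue sizes are assumptions made for the example.

```python
# Minimal sketch: build an AWS ParallelCluster v3-style configuration with a
# Slurm queue of Hpc6a instances and EFA enabled, then write it out as YAML.
# Subnet, key name, and sizes are placeholders; check the ParallelCluster docs
# for the authoritative schema before using this.
import yaml  # PyYAML

cluster_config = {
    "Region": "us-east-2",
    "Image": {"Os": "alinux2"},
    "HeadNode": {
        "InstanceType": "c5n.large",                      # assumed small head node
        "Networking": {"SubnetId": "subnet-0123456789abcdef0"},
        "Ssh": {"KeyName": "my-key"},
    },
    "Scheduling": {
        "Scheduler": "slurm",
        "SlurmQueues": [{
            "Name": "hpc",
            "ComputeResources": [{
                "Name": "hpc6a",
                "InstanceType": "hpc6a.48xlarge",
                "MinCount": 0,
                "MaxCount": 64,                           # scale to taste
                "Efa": {"Enabled": True},                 # EFA for tightly coupled MPI jobs
            }],
            "Networking": {
                "SubnetId": "subnet-0123456789abcdef0",
                "PlacementGroup": {"Enabled": True},      # keep instances close together
            },
        }],
    },
}

with open("hpc6a-cluster.yaml", "w") as f:
    yaml.safe_dump(cluster_config, f, sort_keys=False)
# Then, for example: pcluster create-cluster --cluster-name hpc6a-demo \
#   --cluster-configuration hpc6a-cluster.yaml
```

Keeping the queue definition, placement group, and EFA setting in one reviewable file makes it straightforward to mix Hpc6a queues with queues of other instance types in the same cluster, as the paragraph above describes.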

Maxar partners with innovative businesses and more than 50 governments to monitor global change, deliver broadband communications, and advance space operations with capabilities in Space Infrastructure and Earth Intelligence. “Amazon EC2 Hpc6a instances are yet another exciting announcement from AWS that enables Maxar to continue to meet and exceed our customer requirements for big compute workflows—whether to accelerate the research and operations of Numerical Weather Prediction workloads or to create the world’s best, most up-to-date, and accurate digital twin models with our Maxar Precision3D product suite,” said Dan Nord, SVP and Chief Product Officer at Maxar Technologies. “Hpc6a’s AMD EPYC processors combined with the EFA networking capability provide us a 60% performance improvement over alternatives, while also being more cost efficient. This enables Maxar to strategically choose among the suite of AWS HPC cluster configurations that we’ve developed to best suit our clients’ needs while maximizing flexibility and resiliency.”

DTN’s global weather station network delivers hyper-local, accurate, and real-time weather intelligence to empower organizations with actionable insights. “Our collaboration with AWS allows us to better serve our customers with high-resolution weather prediction systems that feed analytics engines,” said Lars Ewe, Chief Technology Officer at DTN. “We’re very excited to see the price performance of Hpc6a instances, and we expect this to be our go-to Amazon EC2 instance choice for HPC workloads going forward.”

TotalCAE has over 20 years of experience with HPC for computer-aided engineering (CAE). TotalCAE helps eliminate IT headaches by professionally managing customers’ HPC engineering environment and engineering applications so they can focus on engineering, and not IT. “TotalCAE Platform makes it easy for CAE departments to adopt the agility and flexibility of AWS in just a few clicks for hundreds of engineering applications like Ansys Fluent, Siemens Simcenter STAR-CCM+, and Dassault Systèmes Abaqus,” said Rod Mach, President at TotalCAE. “As an AWS HPC Competency Partner, we help customers run their CAE workloads in the cloud. With Hpc6a instances, we have seen up to a 30% performance boost for computational fluid dynamics workloads at a lower cost, enabling TotalCAE to offer customers industry-leading price performance and scalability in the cloud.”

About Amazon Web Services
For over 15 years, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud offering. AWS has been continually expanding its services to support virtually any cloud workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 84 Availability Zones (AZs) within 26 geographic regions, with announced plans for 24 more Availability Zones and eight more AWS Regions in Australia, Canada, India, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs.

About Amazon
Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Amazon strives to be Earth’s Most Customer-Centric Company, Earth’s Best Employer, and Earth’s Safest Place to Work. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Career Choice, Fire tablets, Fire TV, Amazon Echo, Alexa, Just Walk Out technology, Amazon Studios, and The Climate Pledge are some of the things pioneered by Amazon.

Spotlight

Driven by ever-increasing pressure on a multitude of issues – including cost control, manageability, security, regulatory compliance, and business continuity – some IT managers are considering desktop virtualization models as an alternative to traditional distributed software deployment.

Related News

VIRTUAL SERVER INFRASTRUCTURE

IBM and Airspan Networks Plan to Work to Accelerate 5G-enabled Open RAN Adoption in Europe

IBM Global Business Services, Airspan | September 21, 2021

IBM and Airspan Networks Inc., which provides groundbreaking, disruptive software and hardware for 5G network solutions, announced plans to collaborate on the launch of a 5G-enabled Open RAN testbed across the IBM Watson IoT Center in Munich, Germany, and IBM’s Global Industry Solution Center (GISC) in Nice, France, to showcase long-distance control over 5G-enabled edge computing. The goal of developing this testbed is to help clients across Europe innovate and develop multi-vendor solutions designed to address different customer use case requirements, based on open, interoperable standards, while optimizing performance.

IBM Global Business Services and Airspan plan to work together to accelerate the adoption of Open RAN technology and its ecosystem, incorporating IBM’s leading global hybrid cloud and AI orchestration services. IBM Global Business Services, a leading systems integrator in the telco industry, is focused on processes, methodologies, and edge experience to deliver value and transformational projects with emerging technologies.

The Open RAN testbed is intended to advance the development of Open RAN software and hardware solutions, and end-to-end interoperability testing with private 5G stand-alone core networks. The two companies plan to provide partners and customers with the opportunity to collaborate, integrate, and test features for next-generation campus networks.

As part of the intended collaboration, Airspan Networks is providing its Open RAN AirVelocity 2700 indoor radio unit and virtualized Open RAN Centralized Unit (vCU) and Distributed Unit (vDU) OpenRANGE software to help customers test and validate 5G private network solutions using Open RAN. IBM is expected to provide its Global Business Services technology integration services, as well as IBM Cloud Pak for Network Automation and IBM Cloud Pak for Watson AIOps, to allow customers to more efficiently manage and orchestrate edge cloud implementations and applications. In addition, the IBM Global Business Services team is planning to implement a visual inspection application for customers to further extend Industry 4.0 5G edge computing use cases on Open RAN.

“Open approaches and standards-based technologies are vital to help unleash the full potential of 5G and edge computing. That’s why, in collaboration with Airspan, we hope to work to advance emerging use cases that harness Open RAN and bring new value to telecom clients. The planned expansion of the Open RAN testbed will allow us to demonstrate these capabilities as we accelerate 5G and edge computing innovation,” said Marisa Viveros, Vice President of Strategy and Offerings, Telecom, Media and Entertainment Industry at IBM.

“Through critical collaboration with leaders like IBM and testing in these labs, which could help accelerate the development of Open RAN and 5G solutions and the open architecture ecosystem, we believe Airspan can continue to be at the forefront of innovation and industry disruption through end-to-end Open RAN solutions,” said Airspan Chief Sales and Marketing Officer Henrik Smith-Petersen.

This year, IBM announced the Open RAN Center of Excellence in Spain to accelerate the progress of Open RAN and standards-based technologies in Europe. In May 2021, Airspan announced the opening of a 5G Innovation Lab in the UK as a showcase and demonstration facility for partners, customers, and government institutions, to focus on the development of Open RAN software, 5G sub-6 GHz and mmWave indoor and outdoor equipment, and private network use cases.
IBM Global Business Services and Airspan are working toward definitive agreements detailing joint plans to accelerate the adoption of Open RAN technology and its ecosystem, incorporating IBM’s leading global hybrid cloud and AI orchestration services. Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.

About Airspan
Airspan Networks Holdings Inc. (NYSE American: MIMO) is a U.S.-based provider of groundbreaking, disruptive software and hardware for 5G networks, and a pioneer in end-to-end Open RAN solutions that provide interoperability with other vendors. As a result of innovative technology and significant R&D investments to build and expand 5G solutions, Airspan believes it is well-positioned with 5G indoor and outdoor, Open RAN, private networks for enterprise customers and industrial use applications, fixed wireless access (FWA), and CBRS solutions to help mobile network operators of all sizes deploy their networks of the future, today. With over one million cells shipped to 1,000 customers in more than 100 countries, Airspan has global scale. www.airspan.com.

About IBM Global Business Services
IBMers believe in progress—that the application of intelligence, reason and science can improve business, society and the human condition. To learn more about IBM Global Business Services, please visit https://www.ibm.com/services

Read More

VPN

Hillstone Networks Sets New Standard in Intelligent, Reliable and Automated Security Solutions With StoneOS 5.5R9

Hillstone Networks | January 19, 2022

Hillstone Networks, a leading provider of infrastructure protection solutions, introduced the latest iteration of its flagship StoneOS solution.

“Organizations today demand a comprehensive, intelligent, high-performing and automated security solution that works. StoneOS delivers. This is a major upgrade resulting in unparalleled capabilities to help protect organizations, their critical assets and their workforces from the myriad of security threats they face every day.”

Tim Liu, CTO & co-founder of Hillstone Networks

Six key updates to StoneOS 5.5R9 include:

Machine learning technology leveraged to enhance intelligent detection and prevention: StoneOS leverages the latest ML-based data sets to help bolster DGA detection. Extended support for a cloud sandbox allows for improved unknown-threat detection and enhanced intelligence sharing.

Extended VPN capability delivers refined secure access for the remote workforce: The new VPN features support extended user scenarios, a configuration wizard, and a performance upgrade to help meet the growing demand for feature-rich VPN solutions at a lower TCO.

Additional enhancements unleash the power of hardware acceleration for traffic decryption: The new StoneOS release optimizes the throughput performance of the SSL proxy and introduces a whitelist capability to help exempt certain entities in particular scenarios.

Automated, scalable and smarter policy management and operations: A new mini policy feature allows central orchestration systems to meet dynamically changing security requirements. Additionally, app-based policy rule recommendation and NAT policy redundancy checks help improve efficiency.

Advanced integration capabilities for third-party and SDN solutions: In addition to RESTful APIs and SNMP, the new StoneOS supports configuration and management over NETCONF (a minimal connection sketch follows below). Beyond that, StoneOS improves its ability to leverage external resources.

Comprehensive system robustness optimization from services to modules: Beyond extending high-availability solutions to IPv6, the new StoneOS release brings service-level robustness through a redesigned software architecture and module-level optimization for data center firewalls.

About Hillstone Networks
Hillstone Networks’ proven Infrastructure Protection solutions provide enterprises and service providers with the visibility and intelligence to comprehensively see, thoroughly understand, and rapidly act against multilayer, multistage cyberthreats. Favorably rated by leading analysts and trusted by global companies, Hillstone protects from the edge to cloud with improved total cost of ownership.
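To illustrate the NETCONF support referenced in the update list, here is a minimal, generic sketch that opens a NETCONF session and reads a device’s running configuration using the ncclient Python library. The management address, credentials, and port are placeholders, and the announcement does not detail which YANG models or datastores StoneOS exposes, so treat this as a generic NETCONF client example rather than StoneOS-specific code.

```python
# Minimal sketch: retrieve advertised capabilities and the running configuration
# from a NETCONF-capable management interface using ncclient. Host, credentials,
# and port are placeholders; device-specific data models are not assumed here.
from ncclient import manager

with manager.connect(
    host="192.0.2.10",          # placeholder management address
    port=830,                   # standard NETCONF-over-SSH port
    username="admin",
    password="placeholder",
    hostkey_verify=False,       # acceptable only in a lab setting
) as session:
    print("Server capabilities advertised:")
    for capability in session.server_capabilities:
        print(" ", capability)
    running = session.get_config(source="running")
    print(running.data_xml[:500])  # show the start of the running config
```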

Read More

SERVER VIRTUALIZATION

Liqid Helps Customers Create and Scale VMware Host Servers in Seconds, with Composable Infrastructure

Liqid | September 16, 2021

Liqid, provider of the world’s most comprehensive composable disaggregated infrastructure (CDI) platform, today announced integration and support for composable hosts in VMware virtualized environments. With this new capability, Liqid customers can now deploy and scale host servers via software in seconds and centralize both physical and virtual infrastructure management within VMware with Liqid’s new vCenter Plug-in. By bringing CDI to virtualized workloads, Liqid helps customers manage costs by improving resource utilization and reducing physical management of host servers, further extending the flexibility and agility of the cloud to datacenters and the edge.

These ecosystem integration-focused features expand the catalog of integrations begun with the recent launch of Liqid’s Dynamic SLURM Integration, which automates the creation of bare metal servers to meet a SLURM job’s precise requirements from a multiverse of possible options. With VMware vCenter integration, Liqid is continuing to extend the tangible benefits of CDI into a growing number of significant areas within IT, further allowing more customers to realize revolutionary datacenter efficiencies with their existing physical servers composed with GPU, FPGA, and NVMe storage resources.

“VMware has successfully redefined infrastructure efficiency and flexibility for decades now,” said Matt Halcomb, Principal Solutions Architect, World Wide Technology (WWT). “We believe Liqid's composable software will enable our customers to extract increased value from VMware virtualization by drastically accelerating virtual host deployment and scaling straight from vCenter. As modern workloads increase in complexity, Liqid’s software-defined hardware allows customers to deploy a fully adaptive bare-metal host environment that complements VMware’s capabilities.”

While enterprise organizations have been utilizing VMware’s virtualization solutions to address high-value applications such as AI and machine learning, the conventional servers used to host these applications are slow to deploy and scale, lack flexibility, and are inefficient, ultimately limiting the value of virtualization by restricting configuration possibilities and increasing datacenter costs. Further, these server configurations prevent critical accelerator resources such as GPU, FPGA, NVMe, and memory from being shared across the network, leading to poor utilization. The manual tasks of moving resources and deploying and scaling hosts increase operational costs.

For organizations utilizing VMware’s virtualization solutions to derive maximum value from their data center, Liqid introduces composable hosts for virtualized environments. With Liqid Matrix CDI, bare-metal host servers can be created via software and matched precisely to the resource requirements of any given virtual machine, accelerating time-to-value and increasing the agility and efficiency of new and existing VMware virtual server, desktop, and hyperconverged infrastructure environments. The Liqid vCenter Plug-in for VMware vCenter Server provides a web-based tool, integrated with the VMware vSphere Web Client user interface, that allows customers to compose bare-metal hosts, add and remove resources, and view key configuration information in their VMware vCenter (a brief vCenter inventory sketch appears after the feature list below).
The new features offer customers the following:

Be More Cloud-like: Realize cloud-like, dynamic resource orchestration for bare-metal resources in data centers on premises, at the edge, or within traditional cloud environments.

Accelerate Host Deployment: Reduce host server deployment times from days or weeks to minutes via software composability for accelerated ROI.

Scale Physical Resources on Demand: When host servers need more storage or accelerator resources, add them hands-free, in seconds, without regard for what will physically fit in a server.

Meet Impossible Workload Needs: Liqid composability removes the server chassis as the limiting factor when designing a host server. Since GPU and storage resources can reside outside the server, anything is possible, including 1U servers with 16 GPUs.

Be Change Ready: With physical hosts that are as nimble as the virtual environment, organizations can adapt to a new reality more quickly and realize the benefits sooner.

Increase Resource Utilization: Overprovisioning often leads to trapped, unused resources. With Liqid, deploy only the resources a host needs today. If GPU resources aren’t utilized at night, redeploy them during off hours to maximize results.

Reduce Manual Tasks: Instead of spending time manually making physical adds and changes, leverage vCenter to complete tasks hands-free.

Leverage Existing Investments: Most importantly, Liqid plugs into existing infrastructure, making VMware virtual environments more flexible, agile, and efficient.

About Liqid
Liqid provides the world’s most comprehensive software-defined composable disaggregated infrastructure (CDI) platform. Liqid Matrix™ software enables users to dynamically right-size their IT resources on the fly. Liqid empowers users to manage, scale, and configure physical, bare-metal server systems in seconds and then reallocate core data center devices on demand, via Liqid Matrix software, as workflows and business needs evolve.
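To give a concrete sense of how a composed host surfaces in VMware tooling, here is a minimal sketch that lists the ESXi hosts registered in a vCenter inventory, along with their CPU, memory, and PCI device counts, using the general-purpose pyVmomi SDK. This is not the Liqid vCenter Plug-in or Liqid Matrix API (neither is documented in the announcement); the vCenter address and credentials are placeholders, and the point is only that a composed bare-metal host appears in vCenter like any other host.

```python
# Minimal sketch: enumerate ESXi hosts in a vCenter inventory with pyVmomi and
# print basic hardware facts (cores, memory, PCI device count). vCenter address
# and credentials are placeholders; this does not use any Liqid-specific API.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.com",        # placeholder vCenter address
                  user="administrator@vsphere.local",
                  pwd="placeholder",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)   # all hosts, recursively
    for host in view.view:
        hw = host.hardware
        print(f"{host.name}: {hw.cpuInfo.numCpuCores} cores, "
              f"{hw.memorySize // (1024 ** 3)} GiB RAM, "
              f"{len(hw.pciDevice)} PCI devices")
finally:
    Disconnect(si)
```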

Read More