VMware Unleashes Its Kubernetes Strategy

CRN | March 10, 2020

VMware’s comprehensive Kubernetes strategy went into full throttle Tuesday with the wide release of its two main components: the new Tanzu container platform and direct integration of the container orchestrator into the vSphere virtualization platform.

With some of VMware’s most significant product updates and introductions in recent years, the company that pioneered virtualization and private cloud took a leap forward in advancing a multi-cloud posture that leverages Kubernetes to empower developers building cloud-native apps as well as IT operations teams supporting legacy ones.

To enable modern application architectures and hybrid cloud adoption, VMware introduced Tanzu, a portfolio of Kubernetes-based services, as well as VMware Cloud Foundation 4.0, which incorporates vSphere 7.0, a re-architecture of that flagship product to incorporate Kubernetes.

Spotlight

In a recent IDG survey, an average of 89% of respondents across multiple industries indicated they have adopted or plan to adopt a digital-first strategy. Market research firm IDC predicts that digital transformation spending will reach $1.7 trillion worldwide by the end of 2019. For the most part, those pursuing digital transformation initiatives are seeking to create new products, services, and capabilities that will generate new revenue and better customer experiences. Delivering an easy-to-use, low-friction digital experience to customers and employees depends on a complex combination of infrastructure, data, and applications. As the complexity increases, so does the risk that something will go wrong because one system or process does not interact properly with another.

Related News

Run:AI's First Fractional GPU Sharing System Creates Virtualized Logical GPUs

Run:AI | May 07, 2020

Run:AI, a company virtualizing AI infrastructure, today released the first fractional GPU sharing system for deep learning workloads on Kubernetes. Especially suited for lightweight AI tasks at scale, such as inference, the fractional GPU system transparently gives data science and AI engineering teams the ability to run multiple workloads simultaneously on a single GPU. That lets companies run more workloads, such as computer vision, voice recognition, and natural language processing, on the same hardware, lowering costs.

Today's de facto standard for deep learning workloads is to run them in containers orchestrated by Kubernetes. However, Kubernetes can only allocate whole physical GPUs to containers; it lacks the isolation and virtualization capabilities needed to share GPU resources without memory overflows or processing clashes.

Run:AI's fractional GPU system effectively creates virtualized logical GPUs, each with its own memory and computing space, which containers can use and access as if they were self-contained processors. This enables several deep learning workloads to run in containers side by side on the same GPU without interfering with one another. The solution is transparent, simple, and portable; it requires no changes to the containers themselves.

To create the fractional GPUs, Run:AI had to modify how Kubernetes handles them. "In Kubernetes, a GPU is handled as an integer. You either have one or you don't. We had to turn GPUs into floats, allowing for fractions of GPUs to be assigned to containers," said Dr. Ronen Dar, co-founder and CTO of Run:AI. Run:AI also solved the problem of memory isolation, so each virtual GPU can run securely without memory clashes.

A typical use case could see two to four jobs running on the same GPU, meaning companies could do four times the work with the same hardware. For some lightweight workloads, such as inference, more than eight containerized jobs can comfortably share the same physical chip.

The addition of fractional GPU sharing is a key component in Run:AI's mission to create a true virtualized AI infrastructure, combining with the company's existing technology that elastically stretches workloads over multiple GPUs and enables resource pooling and sharing. "Some tasks, such as inference tasks, often don't need a whole GPU, but all those unused processor cycles and RAM go to waste because containers don't know how to take only part of a resource. Run:AI's fractional GPU system lets companies unleash the full capacity of their hardware so they can scale up their deep learning more quickly and efficiently," said Run:AI co-founder and CEO Omri Geller.

About Run:AI

Run:AI has built the world's first virtualization layer for AI workloads. By abstracting workloads from the underlying infrastructure, Run:AI creates a shared pool of resources that can be dynamically provisioned, enabling full utilization of expensive GPU compute. IT teams retain control and gain real-time visibility, including seeing and provisioning run-time, queueing, and GPU utilization, from a single web-based UI. This virtual pool of resources enables IT leaders to view and allocate compute resources across multiple sites, whether on premises or in the cloud. The Run:AI platform is built on top of Kubernetes, enabling simple integration with existing IT and data science workflows.
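The scheduling idea behind fractional sharing can be sketched in a few lines. Kubernetes treats a GPU as an integer resource (a container gets zero or one whole device), whereas fractional requests let a scheduler pack several jobs onto one card. The sketch below is an illustrative first-fit packer under that assumption; it is not Run:AI's actual implementation, and the function name and numbers are hypothetical.

```python
# Illustrative sketch: packing fractional GPU requests onto physical GPUs.
# Kubernetes natively allocates whole GPUs; expressing requests as floats
# (fractions of a device) lets multiple jobs share one card. This is a
# simple first-fit-decreasing packer, not Run:AI's real scheduler.

def pack_jobs(gpu_fractions, num_gpus):
    """Assign fractional GPU requests (0 < f <= 1.0) to physical GPUs.

    Returns a list mapping each job index to a GPU index, or None if the
    jobs do not all fit.
    """
    free = [1.0] * num_gpus                       # remaining capacity per GPU
    placement = [None] * len(gpu_fractions)
    # Place the largest requests first to reduce fragmentation.
    for job in sorted(range(len(gpu_fractions)),
                      key=lambda j: -gpu_fractions[j]):
        for gpu in range(num_gpus):
            if free[gpu] >= gpu_fractions[job] - 1e-9:
                free[gpu] -= gpu_fractions[job]
                placement[job] = gpu
                break
        if placement[job] is None:
            return None                           # job fits on no GPU
    return placement

# Eight lightweight inference jobs, each needing a quarter of a GPU,
# fit on two physical GPUs instead of occupying eight whole devices.
print(pack_jobs([0.25] * 8, num_gpus=2))  # → [0, 0, 0, 0, 1, 1, 1, 1]
```

The example mirrors the article's inference scenario: eight quarter-GPU jobs share two cards, which is the kind of consolidation whole-GPU allocation cannot express.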


SERVER VIRTUALIZATION

VMware Enhances vSphere with Nvidia AI Software Support

VMware | March 25, 2021

With the arrival of vSphere 7 Update 2 in early March 2021, VMware took a significant step into the world of artificial intelligence and machine learning, and now support for Nvidia AI software has arrived on its flagship platform. This wasn't entirely surprising: Nvidia and VMware announced a collaboration during September's online VMworld event, intending to make Nvidia's AI Enterprise software accessible on VMware's platform for deploying and managing virtual machines. With this latest release, vSphere is "exclusively certified" to run Nvidia's AI Enterprise applications and frameworks, which have now been containerized and can run throughout an organization's infrastructure rather than in a silo. This, of course, means more support for Nvidia GPUs, which are needed to run the software.

"It opens up for both of us a good opportunity," Lee Caswell, vice president of marketing at VMware, told ITPro Today. "We're looking to go and help AI become mainstream in the enterprise. They're looking to open that up for all of our 300,000 vSphere customers, who can now have access to these new capabilities."

Access to this market is vital for Nvidia because it will improve revenues in its data center division, which generated $6.7 billion in the previous fiscal year. Obtaining a position in the emerging enterprise AI market is also essential for VMware, which has spent the last few years expanding its offerings beyond the virtualization technology it pioneered, mostly through acquisitions: in 2018, VMware added cloud-native technology to its portfolio with the purchase of the Kubernetes startup Heptio, and about a year later bought back Pivotal, a cloud-native platform company.

Nvidia and vSphere

Caswell explained that, traditionally, AI software has been run on bare metal to prevent the possible performance loss associated with moving compute-heavy workloads to VMs or containers. The issue with this approach is that bare-metal deployments are not portable. As a result, AI workloads are limited to silos, which is a problem for enterprises that want to use AI on the fly throughout their IT infrastructure. By using properties inherent in vSphere's hypervisor, VMware and Nvidia were able to containerize Nvidia's AI Enterprise software at almost the same benchmarked performance levels as running on bare metal. This makes Nvidia AI software easily available across an organization's infrastructure, resolving the portability issue without sacrificing substantial efficiency.

To function properly, AI software must be able to take advantage of GPUs, which take most of the load off a server's CPUs by doing much of the heavy lifting. VMware has added support for Nvidia's A100 Tensor Core GPUs, which are used in Nvidia-Certified Systems, Nvidia-tested and licensed server designs sold exclusively by eight equipment manufacturers, including ASUS, Dell EMC, HPE, and Supermicro. In addition to running AI workloads, GPUs can be used with other vSphere features, such as Multi-Instance GPU, which enables GPU cycles to be shared by many users, and Distributed Resource Scheduler, which automates workload placement to prevent performance bottlenecks. "Up to seven VMs can now share a single GPU," Caswell said. "That's a more cost-effective way to deploy at the enterprise."


VIRTUAL SERVER INFRASTRUCTURE

Denodo Positioned as a Leader for the Second Consecutive Year in the 2021 Gartner® Magic Quadrant™ for Data Integration Tools

Denodo | September 01, 2021

Denodo, the leader in data virtualization, announced that Gartner® has once again positioned the company as a “Leader” in its 2021 Magic Quadrant for Data Integration Tools. The report stated, “The data integration tool market is seeing renewed momentum, driven by requirements for hybrid and multi-cloud data integration, augmented data management, and data fabric designs.” It further stated, “Leaders have been advancing their metadata capabilities, by introducing some highly dynamic optimization and advanced design assistance functions. They have been extending their capabilities to allow for ML over this active metadata to assist developers with various degrees of support and automation in integration design and implementation. Leaders are adept at providing tools that can support both hybrid integration and multi-cloud integration options, bridging the data silos that exist across on-premises and multi-cloud ecosystems.”

Denodo pioneered data virtualization more than 20 years ago and has continuously enhanced its capabilities to enable customers to effectively manage distributed architectures such as the logical data warehouse, data fabric, and data mesh. The latest version, Denodo Platform 8.0, furthers the company’s longtime focus on data virtualization, enabling agile data integration and delivery with the latest innovations in AI/ML-powered smart query acceleration, automated and secure cloud data integration, and a unified user experience with an integrated active data catalog and data science notebook.

“I am thrilled to see Denodo recognized yet again in the Leaders’ Quadrant based on our Ability to Execute and Completeness of Vision,” said Ravi Shankar, senior vice president and chief marketing officer at Denodo. “According to us, it is truly gratifying to see that our execution and vision for data integration is so aligned with the Gartner criteria for Leaders. Denodo was recently mentioned by Gartner as having the second-highest revenue growth in 2020 among the top 10 vendors in the data integration tools market worldwide. Also, Denodo is one of only two vendors to receive Customers’ Choice in the 2021 Gartner Peer Insights ‘Voice of the Customer’: Data Integration Tools. That execution, combined with our vision for logical data fabric, complete with advanced data virtualization, AI/ML, and hybrid/multi-cloud capabilities, positions the Denodo Platform as the future of data integration and data management.”

Gartner Disclaimers

Gartner Peer Insights reviews constitute the subjective opinions of individual end users based on their own experiences and do not represent the views of Gartner or its affiliates. Gartner Peer Insights Customers’ Choice distinctions constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates. Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Denodo

Denodo is the leader in data virtualization, providing agile, high-performance data integration, data abstraction, and real-time data services across the broadest range of enterprise, cloud, big data, and unstructured data sources at half the cost of traditional approaches.
