Run:AI | May 07, 2020
Run:AI, a company virtualizing AI infrastructure, today released the first fractional GPU sharing system for deep learning workloads on Kubernetes. Especially suited to lightweight AI tasks at scale, such as inference, the fractional GPU system transparently lets data science and AI engineering teams run multiple workloads simultaneously on a single GPU. This enables companies to run more workloads, such as computer vision, voice recognition and natural language processing, on the same hardware, lowering costs.
Today's de facto standard for deep learning workloads is to run them in containers orchestrated by Kubernetes. However, Kubernetes is only able to allocate whole physical GPUs to containers, lacking the isolation and virtualization capabilities needed to allow GPU resources to be shared without memory overflows or processing clashes.
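For context, stock Kubernetes treats a GPU as an indivisible extended resource: a container requests whole devices, and fractional quantities are rejected. The sketch below (illustrative only, not from the article) shows a minimal pod-spec fragment and a helper mimicking the integer-only rule:

```python
# Sketch: in stock Kubernetes, extended resources such as nvidia.com/gpu
# are requested in whole units -- a fractional quantity is invalid.
pod_spec = {
    "containers": [{
        "name": "model-server",
        "resources": {"limits": {"nvidia.com/gpu": "1"}},  # whole GPUs only
    }]
}

def validate_gpu_request(quantity: str) -> bool:
    """Mimic the integer-only rule for GPU requests (illustrative)."""
    return quantity.isdigit()

print(validate_gpu_request("1"))    # True
print(validate_gpu_request("0.5"))  # False -- fractions are rejected
```

This is why sharing one physical GPU between containers requires a layer above what Kubernetes provides natively.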
Run:AI's fractional GPU system effectively creates virtualized logical GPUs, with their own memory and computing space that containers can use and access as if they were self-contained processors. This enables several deep learning workloads to run in containers side-by-side on the same GPU without interfering with each other. The solution is transparent, simple and portable; it requires no changes to the containers themselves.
To create the fractional GPUs, Run:AI had to modify how Kubernetes handled them. "In Kubernetes, a GPU is handled as an integer. You either have one or you don't. We had to turn GPUs into floats, allowing for fractions of GPUs to be assigned to containers. Run:AI also solved the problem of memory isolation, so each virtual GPU can run securely without memory clashes," said Dr. Ronen Dar, co-founder and CTO of Run:AI.
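The integer-to-float shift Dr. Dar describes can be pictured as a toy allocator (purely illustrative, not Run:AI's implementation): each physical GPU has capacity 1.0, and the scheduler packs fractional requests onto devices until their capacity is exhausted.

```python
from typing import Optional

class FractionalGpuPool:
    """Toy allocator: GPUs as floats instead of Kubernetes' integers."""

    def __init__(self, num_gpus: int):
        # Remaining fraction of each physical GPU (1.0 = fully free).
        self.free = [1.0] * num_gpus

    def allocate(self, fraction: float) -> Optional[int]:
        """First-fit: return the GPU index serving this fraction, or None."""
        for gpu, remaining in enumerate(self.free):
            if remaining >= fraction:
                self.free[gpu] = round(remaining - fraction, 6)
                return gpu
        return None  # no GPU has enough free capacity left

pool = FractionalGpuPool(num_gpus=1)
print(pool.allocate(0.5))   # 0 -- first job lands on GPU 0
print(pool.allocate(0.25))  # 0 -- second job shares GPU 0
print(pool.allocate(0.5))   # None -- only 0.25 of GPU 0 remains
```

In the real system the hard part is not this bookkeeping but enforcing memory isolation between the co-located containers, which is what the quote highlights.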
A typical use-case could see 2-4 jobs running on the same GPU, meaning companies could do four times the work with the same hardware. For some lightweight workloads, such as inference, more than 8 jobs running in containers can comfortably share the same physical chip.
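As rough arithmetic (hypothetical figures, not from the article): when GPU memory is the binding constraint, the number of inference jobs one chip can host is simply its memory divided by the per-job footprint.

```python
def jobs_per_gpu(gpu_memory_gb: float, job_memory_gb: float) -> int:
    """How many jobs fit on one GPU if memory is the binding constraint."""
    return int(gpu_memory_gb // job_memory_gb)

# E.g. a 16 GB GPU running 2 GB inference jobs (illustrative numbers):
print(jobs_per_gpu(16, 2))  # 8 jobs sharing one physical chip
print(jobs_per_gpu(16, 4))  # 4 jobs
```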
The addition of fractional GPU sharing is a key component in Run:AI's mission to create a true virtualized AI infrastructure, combining with Run:AI's existing technology that elastically stretches workloads over multiple GPUs and enables resource pooling and sharing.
"Some tasks, such as inference, often don't need a whole GPU, but all those unused processor cycles and RAM go to waste because containers don't know how to take only part of a resource. Run:AI's fractional GPU system lets companies unleash the full capacity of their hardware so they can scale up their deep learning more quickly and efficiently," said Run:AI co-founder and CEO Omri Geller.
Run:AI has built the world's first virtualization layer for AI workloads. By abstracting workloads from underlying infrastructure, Run:AI creates a shared pool of resources that can be dynamically provisioned, enabling full utilization of expensive GPU compute. IT teams retain control and gain real-time visibility, including seeing and provisioning run-time, queueing and GPU utilization, from a single web-based UI. This virtual pool of resources enables IT leaders to view and allocate compute resources across multiple sites, whether on premises or in the cloud. The Run:AI platform is built on top of Kubernetes, enabling simple integration with existing IT and data science workflows.
Kyndryl | November 22, 2021
Kyndryl and VMware announced an expansion of the companies' strategic partnership focused on app modernization and multicloud services. This collaboration will enable customers to enhance their digital innovation and business transformation with enterprise control.
The primary goal of the expanded partnership is to accelerate IT and business reinvention for customers through the combination of VMware solutions and Kyndryl's design, build and managed services. The companies also aim to help customers speed their digital transformations by rapidly building and deploying new, more secure applications designed and built for a world of distributed work.
As Kyndryl and VMware begin their respective journeys as stand-alone companies, they share a productive relationship built on more than 20 years of collaboration between VMware and IBM that has consistently provided customers with a powerful combination of strategic guidance and world-class technologies. The agreement adds a focus on providing differentiated solutions for multicloud infrastructure and management, digital workspace services, managed applications, resiliency and security, and network and edge computing.
"We're excited to embark on this journey with VMware and intend to leverage a rich and productive history of joint solution architectures, common designs, and deep relationships to provide customers the solutions, services and support they need to achieve their business transformation goals," said Stephen Leonard, global alliances and partnerships leader, Kyndryl. "Through this important partnership, Kyndryl and VMware will help companies design and deploy mission-critical workloads that can modernize their applications and operations to reap the benefits of cloud and multicloud computing."
Kyndryl also plans to work with VMware to expand its existing multicloud advisory, implementation, and management services to support the VMware Tanzu platform and deploy vSphere workloads to VMware multi-cloud infrastructure running in all public clouds.
"Multicloud is the digital business model for the next 20 years. With the average organization running hundreds of apps across many different clouds, customers need solutions and strategic partners that enable their organizations to be as agile and resilient as possible. This is the power of the VMware and Kyndryl partnership. Kyndryl is a strategic partner that brings world-class solutions, skills, expertise, and experience to the companies' mutual customers. Together, we will empower customers to achieve smarter paths to cloud, edge, and app modernization; provide autonomy for developers; and enable a more secure, frictionless distributed workforce," said Susan Nash, Senior Vice President, Strategic Corporate Alliances, VMware.
To further the global reach and impact of their collaboration, Kyndryl and VMware have established local, regional, and worldwide alignment of their respective capabilities, expertise and resources that will facilitate solutions planning, investment, and execution.
Kyndryl and VMware also are jointly developing innovations through a Joint Innovation Lab (JIL), which spearheads and drives delivery model innovations to better reach and serve customers. The JIL programs will further focus on developing solutions for app modernization, containers, observability and security with VMware Tanzu, as well as multicloud management solutions. Kyndryl and VMware will closely align and optimize their collaboration in support of VMware Cross-Cloud services to provide infrastructure and applications services and support to customers, independent of the underlying cloud provider environments.
Kyndryl is the world's largest IT infrastructure services provider. The company designs, builds, manages, and modernizes the complex, mission-critical information systems that the world depends on every day. Kyndryl's nearly 90,000 employees serve over 4,000 customers in more than 60 countries around the world, including 75 percent of the Fortune 100.
VMware is a leading provider of multicloud services for all apps, enabling digital innovation with enterprise control. As a trusted foundation to accelerate innovation, VMware software gives businesses the flexibility and choice they need to build the future. Headquartered in Palo Alto, California, VMware is committed to building a better future through the company's 2030 Agenda.
Algoblu | March 09, 2021
Algoblu today announced its Network Element Virtualization (NEV) platform, which virtualizes and orchestrates underlying network resources to help carriers offer more application-oriented, customized services to both business and residential customers. Thanks to the new FPGA-based technology, cost per bit decreases severalfold and operational efficiency increases by a similar factor.
Benefits include improved bandwidth efficiency and network security, multi-cloud access, and simplified network provisioning and troubleshooting. Target customers are in cloud gaming, 4K/8K streaming, video conferencing, industrial IoT and other industries that require guaranteed network SLAs.
“We are pleased to collaborate with Algoblu to develop a Network Element Virtualization chip built on leading-edge FPGA technology. The chip is key to Algoblu's NEV architecture with an FPGA-based SmartNIC, all developed in an elegant way,” says Dr. Endric Schubert, CTO at Missing Link Electronics.
“Algoblu’s Network Element Virtualization technology allows carriers to provide services across complex networks and, more than this, offer different classes of network services in a simple way, customized for our customers’ needs and their applications' requirements,” said Jordan Deng, founder and CEO of CIK Telecom, the third largest independent service provider in Canada.
“Most vendors focus on managing existing network resources. We virtualize and orchestrate underlying network resources, which is a different approach that benefits telecom companies directly,” said Lawrence Lee, founder and CEO of Algoblu. “We know how challenging it is for telcos to provide services across multi-vendor networks. With NEV, they can virtualize existing network infrastructures to be one, providing personalized multi-tier services with different encryption, bandwidth and latency, as well as guaranteed SLAs.”
Since its inception in 2012, Algoblu’s mission has been to build a flexible, programmable network that focuses on virtualizing network resources to achieve increased flexibility and scalability for telecom companies and enterprise customers. Algoblu's existing corporate users span financial, insurance, retail, telecom operators, gaming, smart transportation, e-commerce, education, sports, manufacturing and many other industries, including a number of the world's top 500 companies.