Spotlight

Join host Sean Donahue as he corners his good friend, Kevin Binder, to find out what the most common questions are at AWS re:Invent. Kevin tackles tough customer questions such as "How does Citrix Cloud compare to AWS?" and "What if I choose different cloud platforms for different requirements?"

Related News

ADVA Announces dacoso is Using Its Virtualized Encryption Technology to Offer Secure Managed Services

ADVA | May 19, 2020

ADVA today announced that dacoso is using its cloud-native security solution, ConnectGuard™ Cloud, to offer secure managed services. The virtualized encryption technology will give dacoso’s customers comprehensive data protection in cloud environments. With unprecedented scale and efficiency, dacoso will be able to roll out secure connectivity within minutes, safeguarding private, public, and edge/branch clouds at Layers 2, 3 and 4. Managed centrally in dacoso’s network by ADVA’s Ensemble Controller, the VPN services offer a major boost to the region’s businesses as they look to reduce costs and enhance flexibility by migrating more of their data and applications to the cloud. Now dacoso’s enterprise customers can harness the latest cloud tools while ensuring data integrity, regulatory compliance and true peace of mind for their end users.

"We’re excited to be offering our customers the comprehensive protection of ADVA’s ConnectGuard™ Cloud technology. For many of them, total cloud security is key to their business needs. By providing strong and flexible data protection in virtual environments, we’re enabling our customers to realize their full potential by accessing cloud applications with complete peace of mind," said Karsten Geise, head of business and product development, dacoso. "We’re convinced by the robust security and performance benefits of ConnectGuard™ Cloud. So much so that we’re already harnessing the solution to protect our own branch and cloud locations. We understand its power to protect mission-critical data while also reducing costs and enhancing network utilization."

Implemented entirely in software, ADVA’s ConnectGuard™ Cloud is a highly scalable, standards-compliant encryption solution that provides hybrid and multi-cloud environments with the ultimate defense against cyber threats. It enables dacoso to offer its customers high-performance, transport-layer-independent security with none of the performance issues of IPSec-based technologies. As a far more efficient and cost-effective alternative to hardware security appliances, ConnectGuard™ Cloud empowers dacoso’s customers to extend encryption to remote workers and branch offices by leveraging low-cost uCPE platforms. What’s more, with zero-touch provisioning, dacoso can roll out fully secured cloud connectivity almost instantly without any manual configuration.
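
To give a rough sense of the idea behind transport-layer-independent payload encryption with automated key management, here is a minimal, hypothetical Python sketch. It is not ADVA ConnectGuard™ Cloud code (those internals are not public here): the KeyManager helper, the 14-byte header split and the rotation interval are assumptions made for illustration, and it relies on the third-party `cryptography` package.

```python
# Hypothetical sketch only: illustrates transport-layer-independent payload
# encryption with automated key rotation, in the spirit of a virtualized
# encryption function. This is NOT ADVA ConnectGuard(TM) Cloud code; the
# KeyManager helper, the 14-byte header split and the rotation interval are
# assumptions, and the third-party `cryptography` package is required.
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


class KeyManager:
    """Toy stand-in for automated key management: rotates the key periodically."""

    def __init__(self, rotation_seconds: int = 300):
        self.rotation_seconds = rotation_seconds
        self._rotate()

    def _rotate(self) -> None:
        self.key = AESGCM.generate_key(bit_length=256)
        self.issued_at = time.monotonic()

    def current_key(self) -> bytes:
        if time.monotonic() - self.issued_at > self.rotation_seconds:
            self._rotate()
        return self.key


def encrypt_payload(packet: bytes, header_len: int, km: KeyManager) -> bytes:
    """Encrypt everything after the first `header_len` bytes of a packet.

    The addressing header (e.g. L2/L3 fields) stays in the clear so the packet
    can still be forwarded, while the payload gets confidentiality and
    integrity protection; the header is bound as associated data so tampering
    with it is detected on decryption.
    """
    header, payload = packet[:header_len], packet[header_len:]
    nonce = os.urandom(12)                      # unique nonce per packet
    aesgcm = AESGCM(km.current_key())
    ciphertext = aesgcm.encrypt(nonce, payload, header)
    return header + nonce + ciphertext


if __name__ == "__main__":
    km = KeyManager()
    frame = b"\x00" * 14 + b"sensitive application data"   # fake 14-byte header
    protected = encrypt_payload(frame, header_len=14, km=km)
    print(len(frame), "->", len(protected), "bytes after encryption")
```

In a real managed service, keys and per-site policy would of course be distributed from a central controller such as the one described above rather than generated locally.
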
"Our partnership with dacoso is great news for a huge number of enterprises across this region who are currently undergoing their digital transformation. With more and more of dacoso’s customers moving their workloads to cloud environments, our ConnectGuard™ Cloud technology provides the ideal way to ensure privacy and data integrity while keeping cost and complexity low," said Hartmut Müller-Leitloff, SVP, sales, EMEA, ADVA. "As a virtualized encryption solution with zero-touch provisioning and automated key management, our ConnectGuard™ Cloud is the most readily scalable option available. It improves operational simplicity and latency performance compared to appliance-based encryption methods. What’s more, it keeps latency at its lowest levels, a key requirement for many emerging cloud applications."

About ADVA
ADVA is a company founded on innovation and focused on helping our customers succeed. Our technology forms the building blocks of a shared digital future and empowers networks across the globe. We’re continually developing breakthrough hardware and software that leads the networking industry and creates new business opportunities. It’s these open connectivity solutions that enable our customers to deliver the cloud and mobile services that are vital to today’s society and for imagining new tomorrows. Together, we’re building a truly connected and sustainable future.

Read More

AMD EPYC Processors with Nutanix Hybrid Cloud Infrastructure Deliver Leading Virtualization Performance and Security Features

AMD | October 17, 2020

AMD today announced the continued expansion of the AMD EPYC™ processor ecosystem for virtualized environments and hyperconverged infrastructure (HCI), with Lenovo announcing the ThinkAgile HX, the latest solution based on AMD EPYC processors and Nutanix’s hybrid cloud infrastructure. The new offering expands the ecosystem of AMD EPYC-based cloud and virtualized solutions. As customers look for more value from their data center budgets, IT departments are moving to HCI to modernize and transform their enterprise data centers, creating high-performing, efficient data centers that are easier to manage as business needs change quickly. By choosing AMD EPYC processors and Nutanix hybrid and multicloud solutions, customers can accelerate workloads such as digital workspaces, including VDI, with strong performance, advanced security features, and broad ecosystem support from major ISVs and OEM partners.

Read More

Run:AI Creates First Fractional GPU That Effectively Creates Virtualized Logical GPUs

Run:AI | May 07, 2020

Run:AI, a company virtualizing AI infrastructure, today released the first fractional GPU sharing system for deep learning workloads on Kubernetes. Especially suited for lightweight AI tasks at scale, such as inference, the fractional GPU system transparently gives data science and AI engineering teams the ability to run multiple workloads simultaneously on a single GPU. This enables companies to run more workloads, such as computer vision, voice recognition and natural language processing, on the same hardware, lowering costs.

Today's de facto standard for deep learning workloads is to run them in containers orchestrated by Kubernetes. However, Kubernetes can only allocate whole physical GPUs to containers; it lacks the isolation and virtualization capabilities needed to share GPU resources without memory overflows or processing clashes.

Run:AI's fractional GPU system effectively creates virtualized logical GPUs, each with its own memory and computing space, that containers can use and access as if they were self-contained processors. This enables several deep learning workloads to run in containers side by side on the same GPU without interfering with each other. The solution is transparent, simple and portable, and it requires no changes to the containers themselves.

To create the fractional GPUs, Run:AI had to modify how Kubernetes handles them. "In Kubernetes, a GPU is handled as an integer. You either have one or you don't. We had to turn GPUs into floats, allowing for fractions of GPUs to be assigned to containers," said Dr. Ronen Dar, co-founder and CTO of Run:AI. Run:AI also solved the problem of memory isolation, so each virtual GPU can run securely without memory clashes.

A typical use case could see two to four jobs running on the same GPU, meaning companies could do four times the work with the same hardware. For some lightweight workloads, such as inference, more than eight jobs running in containers can comfortably share the same physical chip. The addition of fractional GPU sharing is a key component in Run:AI's mission to create a true virtualized AI infrastructure, combining with Run:AI's existing technology that elastically stretches workloads over multiple GPUs and enables resource pooling and sharing.

"Some tasks, such as inference tasks, often don't need a whole GPU, but all those unused processor cycles and RAM go to waste because containers don't know how to take only part of a resource. Run:AI's fractional GPU system lets companies unleash the full capacity of their hardware so they can scale up their deep learning more quickly and efficiently," said Run:AI co-founder and CEO Omri Geller.
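
To make the "integer to float" idea in Dr. Dar's quote concrete, here is a small, self-contained Python sketch of a toy scheduler that tracks each physical GPU's free capacity as a fraction and best-fit packs fractional requests onto it. It only illustrates the concept; it is not Run:AI's scheduler or its Kubernetes integration, and all class, GPU and job names are hypothetical.

```python
# Toy scheduler sketch: treats each physical GPU's capacity as a float so that
# containers can request fractions of a GPU (e.g. 0.25). It only illustrates
# the "integer -> float" idea quoted above; it is not Run:AI's scheduler or
# its Kubernetes integration, and every name here is made up for the example.
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class PhysicalGPU:
    name: str
    free_fraction: float = 1.0                  # 1.0 == a whole, unused GPU
    assignments: Dict[str, float] = field(default_factory=dict)


class FractionalGPUScheduler:
    def __init__(self, gpu_names: List[str]):
        self.gpus = [PhysicalGPU(n) for n in gpu_names]

    def allocate(self, workload: str, fraction: float) -> Optional[str]:
        """Place a workload that needs `fraction` of a GPU (0 < fraction <= 1).

        A best-fit search packs small inference jobs onto GPUs that are
        already partially used, keeping whole GPUs free for jobs that need
        them. Returns the chosen GPU's name, or None if the job must queue.
        """
        candidates = [g for g in self.gpus if g.free_fraction >= fraction]
        if not candidates:
            return None
        best = min(candidates, key=lambda g: g.free_fraction)
        best.free_fraction = round(best.free_fraction - fraction, 6)
        best.assignments[workload] = fraction
        return best.name

    def release(self, workload: str) -> None:
        """Return a finished workload's fraction to its physical GPU."""
        for gpu in self.gpus:
            if workload in gpu.assignments:
                gpu.free_fraction = round(
                    gpu.free_fraction + gpu.assignments.pop(workload), 6)
                return


if __name__ == "__main__":
    sched = FractionalGPUScheduler(["gpu-0", "gpu-1"])
    for job, frac in [("inference-a", 0.25), ("inference-b", 0.25),
                      ("inference-c", 0.5), ("training-d", 1.0)]:
        print(f"{job} ({frac} GPU) -> {sched.allocate(job, frac)}")
```

A bookkeeping sketch like this says nothing about memory isolation between co-located jobs, which is the other half of the problem the article says Run:AI solved.
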
About Run:AI
Run:AI has built the world's first virtualization layer for AI workloads. By abstracting workloads from the underlying infrastructure, Run:AI creates a shared pool of resources that can be dynamically provisioned, enabling full utilization of expensive GPU compute. IT teams retain control and gain real-time visibility, including seeing and provisioning run-time, queueing and GPU utilization, from a single web-based UI. This virtual pool of resources enables IT leaders to view and allocate compute resources across multiple sites, whether on premises or in the cloud. The Run:AI platform is built on top of Kubernetes, enabling simple integration with existing IT and data science workflows.

Read More