Application Container Market Driven by Growing Demand for Server Virtualization

digitaljournal | June 28, 2019

Application containers are a product of the growing demand for server virtualization. They provide a lightweight virtual environment in which a set of processes can be isolated, with resources such as CPU and memory allocated and limited per container. Application containers use the kernel of the host system, allowing them to launch almost instantly with minimal performance overhead. The shared kernel also improves performance and allows efficient use of the host's computing resources. This is in contrast to traditional hypervisor/virtual machine systems, in which the guest and host systems can run on different kernels.

While application containers provide weaker isolation than virtual machines, demand in the global application container market is growing because of their advantages in startup time and storage footprint. Traditional virtual machine images can take up several gigabytes, whereas application container images often consume only a few megabytes. The smaller footprint also makes images far quicker to move across the network and leaves more resources for the workloads that run on top, such as security tooling and customer relationship management software.
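Because a container is just a set of host processes fenced off by kernel features such as namespaces and cgroups, no guest kernel has to boot. As a rough illustration of that shared-kernel model (not any particular container runtime), the sketch below assumes Linux, Python 3.12+ and root privileges, and places a child process in its own hostname and mount namespaces; the rest of what real containers do (PID/user/network namespaces, cgroup limits, an isolated filesystem image) is omitted.

# Minimal sketch of kernel-level isolation (assumes Linux, Python 3.12+, run as root).
# Illustrates the shared-kernel idea only; real container runtimes also set up
# PID/user/network namespaces, cgroup limits, and a root filesystem image.
import os
import socket

def run_isolated():
    pid = os.fork()
    if pid == 0:
        # Child: detach the hostname (UTS) and mount namespaces from the host.
        os.unshare(os.CLONE_NEWUTS | os.CLONE_NEWNS)
        socket.sethostname("demo-container")  # visible only inside the namespace
        # Same kernel as the host, so there is nothing to boot before exec.
        os.execvp("sh", ["sh", "-c", "echo running on $(hostname), kernel $(uname -r)"])
    os.waitpid(pid, 0)

if __name__ == "__main__":
    run_isolated()

The host's hostname is untouched, while uname -r inside the child still reports the host kernel version, which is the point: isolation without a second operating system.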

Spotlight

The history of virtualization, explained by Aruba Cloud. Microsoft Hyper-V is sometimes mistaken for a Type 2 virtualization solution because, unless you are using the stand-alone version, it requires an operating system to be installed first. Without the Hyper-V role installed, the operating system uses a HAL (Hardware Abstraction Layer) to access the hardware. When Hyper-V is installed, the HAL is replaced by a hypervisor, and both the operating system and the virtual machines access the hardware through the hypervisor. Essentially, the hypervisor is a HAL with the additional features required for virtual machines.


Other News
VIRTUAL SERVER INFRASTRUCTURE

Expanding the Open RAN Ecosystem with a Wide Portfolio of High-Performance O-RAN Compliant Radios

Mavenir | February 24, 2022

Mavenir, the Network Software Provider building the future of networks with cloud-native software that runs on any cloud and transforms the way the world connects, announces a wide portfolio of O-RAN compliant Radio Units (RUs), expanding the Open RAN radio ecosystem to provide Communications Service Providers (CSPs) with a wider choice of radios as they progress in rolling out open and interoperable networks.

OpenBeam, the Future of Radio, provides CSPs with a comprehensive portfolio of O-RAN compliant radio products spanning micro, macro, millimeter wave (mmWave) and massive MIMO (mMIMO) to support Open RAN deployments in 2022 and beyond. The OpenBeam radio portfolio covers a wide range of spectrum, both licensed and unlicensed, and strictly follows the philosophy of open interfaces and the O-RAN 7.2 interface, to which Mavenir is strongly committed with its Open RAN CU/DU products. OpenBeam radios will be available to the Open RAN ecosystem, including vendors, operators, and system integrators.

According to Dell’Oro’s January 2022 report, total Open RAN revenues remain on track to approach $6B, or 15% of the overall RAN market, by 2026. Additionally, in the Remote Radio Unit (RRU) and Active Antenna Unit (AAU) segment, Open RAN units are growing at a CAGR above 50% versus a declining number of traditional legacy radios.

Alongside the strong existing ecosystem of partners that Mavenir MAVair Open vRAN interworks with (more than 15 O-RAN RRU partners), the new OpenBeam suite provides an innovative and comprehensive radio portfolio specifically designed for the growing needs of CSPs, with agile, cost-efficient, smart radios to meet critical demands on the network now and as the network changes and expands. The radio solutions can be used for a wide range of use cases, including basic coverage across all frequency bands for enterprise, urban and rural deployment opportunities. The robust set of options addresses the need for CSPs to be agile and cost-efficient, with low power consumption, low wind load, and built-in intelligence and automation. Designed for needs ranging from private enterprises to public networks, the portfolio supports both new and legacy radio access technologies. All radios have a modular design, using proven technology to support both beamforming and multi-band needs.

“We have engaged with customers globally to curate a comprehensive O-RAN portfolio that addresses the needs of both private enterprises as well as traditional communication providers. The OpenBeam portfolio covers a wide range of deployment scenarios, from micro RUs to 64TR massive MIMO radios. OpenBeam radios deliver industry-leading performance and energy efficiency packed in a small footprint,” said Rajesh Srinivasa, Senior Vice President of the Radio Business Unit at Mavenir.

Pardeep Kohli, Chief Executive Officer at Mavenir, said, “With the incredible growth of virtualization and Open RAN, we always believed that the ecosystem had to be accelerated, as this is fundamental for the success of the future of networks. Mavenir has been working with many partners in the ecosystem, and we have also injected more direct contributions when it comes to innovative design. Mavenir is a strong believer in new-generation software-based networks which are orchestrated by artificial intelligence (AI) and analytics software and adapt in a dynamic way to user behaviors and market demands. The intelligent, dynamic and adaptable software, together with strong underlying automation, is what drives innovation in the future of networks.”

About Mavenir
Mavenir is building the future of networks and pioneering advanced technology, focusing on the vision of a single, software-based automated network that runs on any cloud. As the industry's only end-to-end, cloud-native network software provider, Mavenir is focused on transforming the way the world connects, accelerating software network transformation for 250+ Communications Service Providers in over 120 countries, which serve more than 50% of the world’s subscribers.

Read More

VIRTUAL DESKTOP STRATEGIES

Tailscale SSH Now in Beta for Simple and Secure Remote Connections

Tailscale | June 27, 2022

Tailscale has released Tailscale SSH to beta, which makes authentication and authorization trustworthy and effortless by replacing SSH keys with the Tailscale identity of any machine. With Tailscale, each server and user device gets its own identity and node key for authenticating and encrypting the Tailscale network connection, and uses access control lists defined in code to authorize connections, making it a natural extension for Tailscale to now manage access for SSH connections in your network.

“SSH is an everyday tool for developers, but managing SSH keys for a server isn’t so simple or secure. SSH keys are difficult to protect and time-consuming to manage. Protecting your network connections with SSH keys requires that admins spend significant resources managing, provisioning, or deprovisioning user access. Tailscale SSH removes the pain from SSH key management with the same powerful simplicity Tailscale offers for virtual private networks,” said Tailscale Product Manager Maya Kaczorowski.

Kris Nóva, Senior Principal Engineer and published distributed systems expert, used Tailscale to create a private network between her homelab in New York and a datacenter in Iceland: “Tailscale is seriously the best user experience of my life. I ran a Kubernetes 1.24 cluster on Tailscale with eBPF CNI networking on top of a tailnet, which connects my private subnet at home, across the Arctic Ocean, to a private subnet in a volcano-powered datacenter in Iceland. It blew my mind how easy and powerful it was to use. I’m excited to use their new SSH feature.”

With Tailscale SSH, users can now securely code from their iPad running Tailscale, across operating systems, to a Linux workstation, without having to figure out how to get their SSH private key onto their iPad. Enterprise Tailscale customers will reduce the churn and resources spent on SSH key management or bastion jump boxes, and avoid the risk of exposing memory-unsafe servers to the open internet.

The beta release gives all users:
- Authentication and encryption: Authenticate, authorize, and encrypt SSH connections using Tailscale. No need to generate, distribute, and manage SSH keys.
- SSO and MFA: Use existing identity providers and multi-factor authentication to protect SSH connections the same way you authorize and protect application access.
- Built-in key rotation: Tailscale makes it simple to rotate keys with a single command and manages key distribution. Node keys can be rotated by re-authenticating the device, as frequently as every day.
- Re-verify SSH connections: Tailscale works with existing identity providers, re-verifies before SSH connections are established, and gives users the option to re-authenticate when establishing high-risk SSH connections.
- Revoke SSH access easily: When an employee offboards, Tailscale allows admins to revoke SSH access to a machine almost instantaneously with Tailscale ACLs.
- Manage permissions as code: Define connections to devices using a standard syntax and understand SSH access controls in a centralized configuration file (see the sketch following this list).
- Reduced latency with point-to-point connections: Connect directly from a device to a server, without having to hairpin through a bastion. Developers can connect wherever they work, without being slowed down by routing their traffic through the main office.
- Add a user or server painlessly: Maintain users and servers in a network without adding complexity. Use Tailscale ACLs to give the right people access and add new servers to the team's known hosts.
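As context for the "manage permissions as code" point above, the snippet below is a rough, hypothetical sketch of what an SSH rule in a tailnet policy file could look like, built as a Python dict and printed as JSON purely for illustration. The overall shape (groups, plus an ssh rule with action, src, dst, and users) follows Tailscale's documented policy-file format, but the specific group, tag, and user names are invented for this example and are not taken from the announcement.

# Hypothetical sketch of an SSH access rule in a tailnet policy file,
# expressed as a Python dict and serialized to JSON for illustration.
# The group, tag, and user names ("group:devs", "tag:prod") are invented.
import json

policy = {
    "groups": {
        "group:devs": ["alice@example.com", "bob@example.com"],
    },
    "ssh": [
        {
            "action": "accept",              # allow the connection
            "src": ["group:devs"],           # who may connect
            "dst": ["tag:prod"],             # which machines they may reach
            "users": ["autogroup:nonroot"],  # which OS users they may log in as
        },
    ],
}

print(json.dumps(policy, indent=2))

In a real tailnet the policy is edited in the admin console or kept in version control; the point here is simply that SSH access becomes reviewable configuration rather than keys scattered across servers.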
Tailscale makes network security accessible to teams of any scale and gives developers and DevOps teams the ability to connect to resources easily and securely in the cloud, on-premises, and everywhere in between. Tailscale uses the WireGuard® protocol, the open source, opinionated standard for secure connectivity. It is set up and configured in a matter of minutes on average, while other VPN solutions take weeks to fully implement and several hours a week to maintain. About Tailscale Tailscale builds software that makes it easy to interconnect and secure devices, no matter where they are. Every day, banks and multinational companies use Tailscale to protect their corporate networks. Homelabs and start-ups trust Tailscale to collaborate and share access to tooling. We're building a future for the Internet that's easy, small and safe, like it used to be. Founded in 2019 and fully distributed, we’re backed by Accel, CRV, Heavybit, Insight Partners, and Uncork Capital.

Read More

VMWARE

Envoy Gateway Makes Using Envoy Proxy Easier for Developers and Reverses Fragmentation

VMware | May 16, 2022

Members of the steering group for Envoy Gateway (EG), including Envoy creator Matt Klein and representatives from Ambassador Labs, Fidelity Investments, Tetrate, and VMware, Inc., today announced their joint commitment to the project, which launched today at KubeCon + CloudNativeCon Europe 2022 under the auspices of the Cloud Native Computing Foundation® (CNCF®).

Envoy Gateway is a new effort within the Envoy proxy open source project to simplify Envoy use in cloud-native application development. Envoy Gateway will reduce existing, redundant efforts around Envoy and make it much easier for application developers to use Envoy as a basic API gateway “out of the box” and as a Kubernetes Ingress controller. By exposing a simplified set of APIs and implementing the Kubernetes Gateway API, EG makes it easier to extend Envoy. Developers will now have a cost-free, unfettered way to provide external access to their work in progress. At the same time, Envoy Gateway will not replace API management features currently found in commercial products.

“Envoy has achieved a great deal of success since we first released it in 2016,” said Matt Klein, founder of the Envoy proxy project. “And community has been at the heart of Envoy from the beginning. With the community-driven Envoy Gateway project, we see the opportunity to make Envoy accessible to many more users through the addition of simplified APIs and new capabilities explicitly targeted at north-south / edge proxy use cases.”

Envoy is already widely used for traffic between separate services in a microservices application (that is, east-west traffic). With Envoy Gateway, Envoy will also be easy to use for north-south traffic: traffic between an application and the outside world, as with consumers of an application’s APIs.

Envoy Gateway: Extensible Open Source Infrastructure for the Cloud-Native Future

IT organizations worldwide want to establish and use a rich, robust, modern stack of open source software for cloud-native application development and delivery, under the management of organizations such as the Linux Foundation and CNCF. Commercial offerings and projects within each IT team can then add value on top of this core infrastructure. Envoy is fast becoming the go-to networking substrate within this modern, cloud-native stack. However, the need for API access, traffic routing, and other ingress capabilities has recently led to fragmentation in the Envoy ecosystem. Envoy Gateway will bring this needed functionality back into the main Envoy project and make it less confusing and time-consuming for developers to access Envoy.

Implementation Via Kubernetes Gateway API

Envoy Gateway will expose a version of the Kubernetes-native Gateway API, with Envoy-specific extensions. This is an expressive, extensible, role-oriented API well suited to use by developers (a minimal illustrative manifest follows this item). Gateway API is either implemented, or in progress, for Istio, the Contour project (which originated at VMware), Emissary-ingress (which originated at Ambassador Labs), and others. When users create Gateway API resources, they will be translated into native Envoy API calls, so Envoy and xDS, its native API, will not need to be changed to add this new support.

Advantages for Developers, Infrastructure Administrators and Business Decision-Makers

Application developers will experience the most positive impact from Envoy Gateway. They will be able to run Envoy Gateway and begin routing traffic to their applications. They will no longer need to build their own control plane, extend an existing control plane such as a Go or Java control plane, or bring in a vendor solution at the early stages of their projects. They can just configure routes for the application and share them.

Infrastructure administrators will be able to easily offer an Envoy-native experience to application teams, without needing to adopt a vendor solution just to get basic gateway functionality. They will be able to manage instances of Envoy Gateway without interfering with developer access to them. Envoy Gateway will allow them to deliver consistent application networking capabilities across heterogeneous environments.

Executives and decision-makers will have Envoy as a standard and, we expect, widely used solution for API access and Kubernetes ingress. They will also benefit from faster and easier development and delivery of more secure and robust software and services.

About Envoy
Originally created by Matt Klein and built at Lyft, Envoy is a high-performance C++ distributed proxy designed for single services and applications, as well as a communication bus and “universal data plane” designed for large microservice “service mesh” architectures. Built on the learnings of solutions such as NGINX, HAProxy, hardware load balancers, and cloud load balancers, Envoy runs alongside every application and abstracts the network by providing common features in a platform-agnostic manner. When all service traffic in an infrastructure flows via an Envoy mesh, it becomes easy to visualize problem areas via consistent observability, tune overall performance, and add substrate features in a single place.

About Ambassador Labs
Ambassador Labs, the cloud native developer experience leader, enables developers to code, test, ship, and run applications faster and easier than ever. Maker of top Cloud Native Computing Foundation (CNCF) open source projects, including Emissary-ingress and Telepresence, Ambassador Labs delivers a developer control plane for Kubernetes that integrates the development, deployment, and production infrastructure for developers and organizations worldwide, including Microsoft, PTC, NVIDIA, and Ticketmaster.

About Fidelity Investments
Fidelity’s mission is to inspire better futures and deliver better outcomes for the customers and businesses we serve. With assets under administration of $11.3 trillion, including discretionary assets of $4.2 trillion as of March 31, 2022, we focus on meeting the unique needs of a diverse set of customers. Privately held for over 75 years, Fidelity employs more than 57,000 associates who are focused on the long-term success of our customers.

About VMware
VMware is a leading provider of multi-cloud services for all apps, enabling digital innovation with enterprise control. As a trusted foundation to accelerate innovation, VMware software gives businesses the flexibility and choice they need to build the future. Headquartered in Palo Alto, California, VMware is committed to building a better future through the company’s 2030 Agenda.
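As background for the Gateway API resources discussed above, the following is a minimal, hypothetical HTTPRoute built as a Python dict and printed as JSON (kubectl accepts JSON manifests as well as YAML). The field names follow the upstream Kubernetes Gateway API, but the gateway name, hostname, backend service, and API version shown are illustrative assumptions, not anything specified in the announcement or tied to Envoy Gateway's implementation.

# Hypothetical minimal Kubernetes Gateway API HTTPRoute, built as a Python
# dict and emitted as JSON. The gateway name, hostname, and backend service
# are invented placeholders; the API version may differ by Gateway API release.
import json

http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "HTTPRoute",
    "metadata": {"name": "demo-route"},
    "spec": {
        "parentRefs": [{"name": "demo-gateway"}],  # the Gateway this route attaches to
        "hostnames": ["app.example.com"],
        "rules": [
            {
                "matches": [{"path": {"type": "PathPrefix", "value": "/"}}],
                "backendRefs": [{"name": "demo-service", "port": 80}],
            }
        ],
    },
}

print(json.dumps(http_route, indent=2))

A controller implementing the Gateway API, such as Envoy Gateway, would watch for resources like this and translate them into its native configuration; in Envoy's case that means xDS, as the announcement notes.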

Read More

VIRTUAL SERVER INFRASTRUCTURE

Liqid Welcomes VMware Cloud CTO to its Board of Directors

Liqid | June 03, 2022

LIQID Inc., one of the world’s leading software companies delivering data center composability, announced today that the company has welcomed VMware Cloud CTO Marc Fleischmann as a member of the company’s Board of Directors. With a technology career spanning IT infrastructure, cloud and data services, machine learning and analytics, and global IT business services, Fleischmann will collaborate with the Liqid Board and the company’s leadership team to identify new opportunities to expand Liqid Matrix™ composable disaggregated infrastructure (CDI) software into new world-class solutions and services for Liqid’s customers and partners.

“Marc’s expertise will be invaluable as Liqid continues to expand our footprint from edge to cloud and everywhere in between, and we are excited to welcome Marc to the Liqid board. As a technology leader at VMware, Marc intimately understands the challenges IT is facing and how new solutions like CDI are being incorporated into the data center in tandem with virtualization, artificial intelligence (AI), and other high-value applications. We look forward to working with him as CDI becomes central to evolving data center architectures,” said Sumit Puri, CEO & Cofounder, Liqid.

At VMware, Fleischmann is the CTO for business franchises within the organization such as VMware Managed Cloud (VMC, on AWS, Azure, and GCP), the VMware Cloud Provider Program (VCPP, a $10B ecosystem), Cloud Foundation (VCF) private clouds, HCI, vSphere, and vSAN. Before joining VMware as Cloud CTO, Fleischmann was founder and CEO of storage software company Datera and social gaming company Smeet. Fleischmann is also a founder of Europe's largest open-source ecosystem hub, the Open Source Business Alliance, which has more than 150 active members across the continent. He has also held leadership positions at Innotek, Microsoft, Pixelworks, Transmeta, and HPE.

“As AI is infused into every element of the enterprise, organizations need innovative new ways to approach infrastructure that are more dynamic and flexible, while also making responsible choices when weaving together solutions for sustainable data center ecosystems that can answer the proliferation of data,” Fleischmann said. “I look forward to working with the Liqid team to identify growth opportunities for their composable disaggregated infrastructure solutions, forge powerful industry alliances, and better understand how CDI thrives in an edge-to-cloud world.”

Liqid Matrix software enables IT users to configure and scale bare-metal servers in seconds from pools of disaggregated compute, accelerator, storage, and networking resources to address business needs in real time. Resources can be released when no longer needed, for use by other applications. This new approach to infrastructure management helps avoid costly overprovisioning and power and cooling challenges, unlocking new levels of efficiency and sustainability.

To learn more about how Liqid Matrix software solutions seamlessly integrate with VMware virtualization technologies, read this solutions brief. To schedule an appointment with an expert on solutions based on Liqid Matrix CDI software and set up a free infrastructure evaluation, go here. Follow Liqid on Twitter and LinkedIn to stay up to date with the latest Liqid news and industry insights.

About Liqid
Liqid’s composable infrastructure software platform, Liqid Matrix™, unlocks cloud-like speed and flexibility plus higher efficiency from on-prem infrastructure. Now IT professionals can configure, deploy, and scale physical, bare-metal servers in seconds, then reallocate valuable accelerator and storage resources via software as needs evolve. Dynamically provision previously impossible systems or scale existing investments, and then redeploy resources where needed in real time. Unlock cloud-like datacenter agility at any scale and experience new levels of resource and operational efficiency with Liqid.

Read More

