Hyper-V Networking Best Practices

In most cases, when you deploy virtual machines (VMs) on Hyper-V, you need to configure network access for the VMs according to your tasks. A correct Hyper-V network configuration ensures a high level of performance and security for your virtual machines, Hyper-V hosts, and the entire infrastructure, while improving the stability of the whole datacenter. A physical network adapter (network interface controller, or NIC) and a virtual switch are the two main components required for Hyper-V networking, and you need at least one of them to establish network connections for VMs. Today's blog post explores Hyper-V networking best practices, starting with the following.

Use the latest drivers for your physical network adapters on a Hyper-V host. With the latest network drivers you get the newest features the adapter provides, along with maximum network speed and stability; known bugs are usually fixed in the latest driver and firmware versions. Even if your Windows operating system recognizes your physical network adapters and automatically uses built-in drivers, Hyper-V networking best practices recommend installing the native drivers from the network adapter manufacturer.
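As a quick way to follow this practice, the sketch below is a minimal Python helper (an illustration, not part of the original article) that shells out to PowerShell's Get-NetAdapter on the Hyper-V host and prints each physical adapter's current driver provider and version, so they can be compared against the manufacturer's latest release. It assumes the script runs directly on the host and that PowerShell is available on the PATH.

```python
# Illustrative driver audit for a Hyper-V host: list physical NICs with their
# driver provider and version via PowerShell's Get-NetAdapter cmdlet.
import json
import subprocess

def list_nic_drivers():
    ps = (
        "Get-NetAdapter -Physical | "
        "Select-Object Name, InterfaceDescription, DriverProvider, DriverVersionString | "
        "ConvertTo-Json"
    )
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", ps],
        capture_output=True, text=True, check=True,
    ).stdout
    adapters = json.loads(out)
    # ConvertTo-Json emits a single object (not a list) when only one adapter exists.
    if isinstance(adapters, dict):
        adapters = [adapters]
    return adapters

if __name__ == "__main__":
    for nic in list_nic_drivers():
        print(f"{nic['Name']}: {nic['InterfaceDescription']} "
              f"(driver {nic['DriverVersionString']}, {nic['DriverProvider']})")
```

If the reported version lags behind what the vendor publishes, updating the driver and firmware before troubleshooting VM network issues is usually the faster path.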

Spotlight

Charter Global

Charter Global drives innovation in IT projects and business operations by defining strategy and providing consulting, digital solutions, custom development, and skilled resources. With an established customer base of Fortune 1000 industry leaders and over 100 successful project implementations, our experience and proven methodologies enable our professionals to deliver industry leading solutions in cloud technologies, open source, DevOps, mobility, CRM, ecommerce, SAP, and Oracle JD Edwards platforms. Let's talk about your project and build a solution together.

OTHER ARTICLES
Server Hypervisors

Addressing Multi-Cloud Complexity with VMware Tanzu

Article | September 9, 2022

Introduction

With cloud computing on the path to becoming the mother of all transformations, particularly in IT's ways of development and operations, we are once again confronted with the problem of conversion errors, this time a hundredfold higher than in previous moves to dispersed computing and the web. While the issue is evident, the remedies are not so obvious. Cloud complexity is the outcome of the rapid acceleration of cloud migration and net-new innovation without consideration of the complexity this introduces into operations. Almost all businesses are already working in a multi-cloud or hybrid-cloud environment. According to an IDC report, 93% of enterprises utilize multiple clouds. The decision could have stemmed from a desire to save money and avoid vendor lock-in, or to increase resilience, or businesses might have found themselves with several clouds as a result of the compounding activities of different teams. When it comes to strategic technology choices, relatively few businesses begin by asking, "How can we secure and control our technology?"

Must-Follow Methods for Multi-Cloud and Hybrid Cloud Success

Data Analysis at Any Size, from Any Source: To proactively recognize, warn, and guide investigations, teams should be able to utilize all data throughout the cloud and on-premises.

Insights in Real-Time: Considering the temporary nature of containerized operations and functions as a service, businesses cannot wait minutes to determine whether they are experiencing infrastructure difficulties. Only a scalable streaming architecture can ingest, analyze, and alert rapidly enough to discover and investigate problems before they have a major impact on consumers.

Analytics That Enables Teams to Act: Because multi-cloud and hybrid-cloud strategies do not belong to a single team, businesses must be able to evaluate data within and across teams in order to make decisions and take action swiftly.

How Can VMware Help in Solving Multi-Cloud and Hybrid-Cloud Complexity?

VMware made several announcements indicating a new strategy focused on modern applications. Their approach centers on two VMware products: vSphere with Kubernetes and Tanzu. Since then, much has been said about VMware's modern app approach, and several products have launched. Let's focus on VMware Tanzu.

VMware Tanzu

Tanzu is a product that enables organizations to upgrade both their apps and the infrastructure that supports them. In the same way that VMware wants vRealize to be known for cloud management and automation, Tanzu aims to be known for modern business applications. Tanzu uses Kubernetes to build and manage modern applications. In Tanzu, there is just one development environment and one deployment process. VMware Tanzu is compatible with both private and public cloud infrastructures.

Closing Lines

The important point is that the Tanzu portfolio offers a great deal of flexibility in terms of where applications run and how they are controlled. We observe an increase in demand for operating an application on any cloud, and VMware Tanzu helps streamline multi-cloud operation for the MLOps pipeline. Apart from multi-cloud operation, it is critical to monitor and alert on each component throughout the MLOps lifecycle, from Kubernetes pods and inference services to data and model performance.
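To make the "evaluate data within and across teams" point a little more concrete, here is a minimal sketch (my illustration; the article itself names no tooling) of the kind of cross-cluster visibility a multi-cloud setup needs. It uses the official Kubernetes Python client to walk every context in a kubeconfig file, which is how clusters from different clouds are commonly registered, and prints a node count and Kubernetes version per cluster.

```python
# Minimal multi-cluster inventory sketch. Assumption: each cloud's cluster is
# already registered as a context in the local kubeconfig.
from kubernetes import client, config

def inventory_clusters(kubeconfig_path=None):
    contexts, _active = config.list_kube_config_contexts(config_file=kubeconfig_path)
    for ctx in contexts:
        name = ctx["name"]
        # Build an API client bound to this specific context/cluster.
        api_client = config.new_client_from_config(
            config_file=kubeconfig_path, context=name
        )
        core = client.CoreV1Api(api_client=api_client)
        version = client.VersionApi(api_client=api_client).get_code()
        nodes = core.list_node().items
        print(f"{name}: {len(nodes)} nodes, Kubernetes {version.git_version}")

if __name__ == "__main__":
    inventory_clusters()
```

The same loop extends naturally to pulling metrics or events into a central store, which is the real requirement behind the streaming-analytics point above.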

Read More
Virtual Desktop Tools, Virtual Desktop Strategies

The Business Benefits of Embracing Virtualization on Virtual Machines

Article | June 8, 2023

Neglecting virtualization on VMs hampers firms' productivity: operations become complex and resource usage is suboptimal. Leveraging virtualization empowers them with enhanced efficiency and scalability.

Contents
1. Introduction
2. Types of Virtualization on VMs: 2.1 Server virtualization, 2.2 Storage virtualization, 2.3 Network virtualization (2.3.1 Software-defined networking, 2.3.2 Network function virtualization), 2.4 Data virtualization, 2.5 Application virtualization, 2.6 Desktop virtualization
3. Impact of Virtualized VMs on Business Enterprises: 3.1 Virtualization as a Game-Changer for Business Models, 3.2 Evaluating IT Infrastructure Reformation, 3.3 Virtualization Impact on Business Agility
4. How Can Businesses Scale ROI with Adoption of Virtualization in Virtual Machines?
5. Risks and Challenges of Virtual Machines in the Cloud: 5.1 Resource Distribution, 5.2 VM Sprawl, 5.3 Backward Compatibility, 5.4 Conditional Network Monitoring, 5.5 Interoperability
6. Overcoming Roadblocks: Best Practices for Successful Execution of VMs: 6.1 Unlocking the Power of Resource Distribution, 6.2 Effective Techniques for Avoiding VM Sprawl, 6.3 Backward Compatibility: A Comprehensive Solution, 6.4 Performance Metrics, 6.5 Solutions for Interoperability in a Connected World
7. Five Leading Providers for Virtualization of VMs: Parallels, Aryaka, Gigamon, Liquidware, Azul
8. Conclusion

1. Introduction

Virtualization on virtual machines (VMs) is a technology that enables multiple operating systems and applications to run on a single physical server or host. It has become essential to modern IT infrastructures, allowing businesses to optimize resource utilization, increase flexibility, and reduce costs. Embracing virtualization on VMs offers many business benefits, including improved disaster recovery, increased efficiency, enhanced security, and better scalability. In this digital age, where businesses rely heavily on technology to operate and compete, virtualization on VMs has become a crucial strategy for staying competitive and achieving business success. Organizations need to be agile and responsive to changing customer demands and market trends. Rather than focusing on consolidating resources, the emphasis now lies on streamlining operations, maximizing productivity, and optimizing convenience.

2. Types of Virtualization on VMs

2.1 Server virtualization

Server virtualization divides a physical server into several virtual servers. This allows organizations to consolidate multiple physical servers onto a single physical server, which leads to cost savings, improved efficiency, and easier management. Server virtualization is one of the most common types of virtualization used on VMs. Consistent stability and reliability are the most critical product attributes IT decision-makers look for when evaluating server virtualization solutions; other important factors include robust disaster recovery capabilities and advanced security features. The server virtualization market was valued at USD 5.7 billion in 2018 and is projected to reach USD 9.04 billion by 2026, growing at a CAGR of 5.9% from 2019 to 2026. (Source: Verified Market Research)

2.2 Storage virtualization

By combining multiple network storage devices into an integrated virtual storage device, storage virtualization facilitates a cohesive and efficient approach to data management within a data center. IT administrators can allocate and manage the virtual storage unit with the help of management software, which streamlines storage tasks such as backup, archiving, and recovery. There are three types of storage virtualization: file-level, block-level, and object-level. File-level consolidates multiple file systems into one virtualized system for easier management. Block-level abstracts physical storage into logical volumes allocated to VMs. Object-level creates a logical storage pool for more flexible and scalable storage services to VMs. The storage virtualization segment held an industry share of more than 10.5% in 2021 and is likely to observe considerable expansion through 2030. (Source: Global Market Insights)

2.3 Network virtualization

Any computer network has hardware elements such as switches, routers, load balancers, and firewalls. With network virtualization, virtual machines can communicate with each other across virtual networks, even if they are on different physical hosts. Network virtualization can also enable the creation of isolated virtual networks, which can be helpful for security purposes or for creating test environments. The following are two approaches to network virtualization:

2.3.1 Software-defined networking

Software-defined networking (SDN) controls traffic routing by separating routing management from the data-forwarding hardware in the physical environment. For example, the system can be programmed to prioritize video call traffic over application traffic to ensure consistent call quality in all online meetings.

2.3.2 Network function virtualization

Network function virtualization (NFV) combines the functions of network appliances, such as firewalls, load balancers, and traffic analyzers, that work together to improve network performance. The global network function virtualization market size was valued at USD 12.9 billion in 2019 and is projected to reach USD 36.3 billion by 2024, at a CAGR of 22.9% during the forecast period (2019-2024). (Source: MarketsandMarkets)

2.4 Data virtualization

Data virtualization is the process of abstracting, organizing, and presenting data in a unified view that applications and users can access without regard to the data's physical location or format. Using virtualization techniques, data virtualization platforms can create a logical data layer that provides a single access point to multiple data sources, whether on-premises or in the cloud. This logical data layer is presented to users as a single, virtual database, making it easier for applications and users to access and work with data from multiple sources and supporting cross-functional data analysis. The data virtualization market was valued at USD 2.37 billion in 2021 and is projected to reach USD 13.53 billion by 2030, growing at a CAGR of 20.2% from 2023 to 2030. (Source: Verified Market Research)

2.5 Application virtualization

In this approach, applications are separated from the underlying hardware and operating system and encapsulated in a virtual environment that can run on any compatible hardware and operating system. With application virtualization, the application is installed and configured on a virtual machine, which can then be replicated and distributed to multiple end users. For example, users can run a Microsoft Windows application on a Linux machine without changing the machine configuration. According to a report, the global application virtualization market size is predicted to grow from USD 2.2 billion in 2020 to USD 4.4 billion by 2025, at a CAGR of 14.7% during the period 2020-2025. (Source: MarketsandMarkets)

2.6 Desktop virtualization

In desktop virtualization, a single physical machine can host multiple virtual machines, each with its own operating system and desktop environment. Users can access these virtual desktops remotely through a network connection, allowing them to work from anywhere and on any device. Desktop virtualization is commonly used in enterprise settings to provide employees with a secure and flexible way to access their work environment. The desktop virtualization market is anticipated to register a CAGR of 10.6% over the forecast period (2018-28). (Source: Mordor Intelligence)

3. Impact of Virtualized VMs on Business Enterprises

Virtualization can increase the adaptability of business processes. Because the software is decoupled from the hardware, servers can support different operating systems (OS) and applications. Business processes can be run on virtual computers, with each virtual machine running its own OS, applications, and set of programs.

3.1 Virtualization as a Game-Changer for Business Models

The one server, one application model, which was inefficient because most servers were underutilized, can be abolished using virtualization. Instead, one server can become many virtual machines, each running a different operating system such as Windows or Linux. Virtualization has made it possible for companies to fit more virtual servers onto fewer physical devices, saving them space, power, and time spent on management. The adoption of virtualization services is significantly increased by industrial automation systems. Industrial automation suppliers offer new-generation devices to virtualize VMs and software-driven industrial automation operations. This will solve problems with important automation equipment such as Programmable Logic Controllers (PLCs) and Distributed Control Systems (DCSs), leading to more virtualized goods and services in industrial automation processes.

3.2 Evaluating IT Infrastructure Reformation

Evaluating IT infrastructure for virtualization means looking at existing systems and processes and identifying opportunities and shortcomings. Cloud computing, mobile workforces, and app compatibility drive this growth. Over the last decade, these areas have shifted from conventional to virtual infrastructure.

• Capacity on Demand: Capacity on demand refers to the ability to quickly and easily deploy virtual servers, either on premises or through a hosting provider. This is made possible through the use of virtualization technologies, which allow businesses to create multiple virtual instances of servers that can be scaled up or down as required, providing access to IT capacity on demand.

• Disaster Recovery (DR): DR is a critical consideration in evaluating IT infrastructure reformation for virtualization. Virtualization technology enables businesses to create virtual instances of servers that run multiple applications, which reduces the need for separate, dedicated DR solutions that can be expensive and time-consuming to implement. As a result, businesses can save costs by leveraging the virtual infrastructure for DR purposes.

• Consumerization of IT: The consumerization of IT refers to the increasing trend of employees using personal devices and applications in their work environments. This has created a need for businesses to ensure that their IT infrastructure can support a diverse range of devices and applications. Virtual machines enable businesses to create virtual desktop environments that can be accessed from any device with an internet connection, providing employees with a consistent and secure work environment regardless of their device.

3.3 Virtualization Impact on Business Agility

Virtualization has emerged as a valuable tool for enhancing business agility by allowing firms to respond quickly, efficiently, and cost-effectively to market changes. By enabling rapid installation and migration of applications and services across systems, migration to virtualized systems has allowed companies to achieve significant gains in operational flexibility, responsiveness, and scalability. According to a poll conducted by TechTarget, 66% of firms reported an increase in agility due to virtualization adoption. This trend is expected to continue, driven by growing demand for cost-effective and efficient IT solutions across various industries. In line with this, a comprehensive analysis projected that the market for virtualization software, including application, network, and hardware virtualization, was worth an estimated USD 45.51 billion in 2021 and is anticipated to grow to USD 223.35 billion by 2029, at a CAGR of 22.00% over the forecast period of 2022-2029. (Source: Data Bridge) This is primarily attributed to the growing need for businesses to improve their agility and competitiveness by leveraging advanced virtualization technologies and solutions for applications and servers.

4. How Can Businesses Scale ROI with Adoption of Virtualization in Virtual Machines?

Businesses looking to boost their ROI have gradually shifted to virtualizing VMs in recent years. According to a recent study, VM virtualization helps businesses reduce their hardware and maintenance costs by up to 50%, significantly impacting their bottom line. Server consolidation helps reduce hardware costs and improve resource utilization, as businesses allocate resources, operating systems, and applications dynamically based on workload demand (see the sketch after this section). Utilizing application virtualization in particular can help businesses optimize resource utilization by as much as 80%. Software-defined networking (SDN) allows new devices, some with previously unsupported operating systems, to be more easily incorporated into an enterprise's IT environment. The telecom industry can greatly benefit from the emergence of network functions virtualization (NFV), SDN, and network virtualization, as these technologies provide significant advantages. The NFV concept virtualizes and effectively joins service provider network elements on multi-tenant, industry-standard servers, switches, and storage. To leverage the benefits of NFV, telecom service providers have invested heavily in NFV services. By deploying NFV and application virtualization together, organizations can create a more flexible and scalable IT infrastructure that responds to changing business needs more effectively.
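To ground the "allocate resources dynamically based on workload demand" point, the following is a minimal sketch of what that can look like against a hypervisor management API. It uses the libvirt Python bindings purely as an illustration (the article does not name a specific hypervisor), and the guest name and threshold are hypothetical.

```python
# Illustrative only: raise a VM's live memory allocation when it approaches its
# configured ceiling. Assumes a libvirt-managed hypervisor (e.g. KVM/QEMU) and a
# guest named "app-vm-01"; both are hypothetical placeholders.
import libvirt

GUEST_NAME = "app-vm-01"   # hypothetical VM name
STEP_KIB = 1024 * 1024     # grow memory in 1 GiB steps

def scale_up_memory_if_needed(conn):
    dom = conn.lookupByName(GUEST_NAME)
    # dom.info() -> [state, max memory (KiB), current memory (KiB), vCPUs, CPU time]
    _state, max_mem_kib, cur_mem_kib, _vcpus, _cpu_time = dom.info()
    # If the live allocation is within 10% of the configured maximum,
    # raise it, bounded by the domain's maximum.
    if cur_mem_kib > 0.9 * max_mem_kib:
        new_mem_kib = min(max_mem_kib, cur_mem_kib + STEP_KIB)
        dom.setMemoryFlags(new_mem_kib, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        print(f"{GUEST_NAME}: memory raised to {new_mem_kib // 1024} MiB")
    else:
        print(f"{GUEST_NAME}: no change needed ({cur_mem_kib // 1024} MiB allocated)")

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    scale_up_memory_if_needed(conn)
finally:
    conn.close()
```

In production this logic typically lives in the virtualization management layer rather than a standalone script, but the principle of monitoring utilization and adjusting allocations programmatically is the same one section 6.1 below describes.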
5. Risks and Challenges of Virtual Machines in the Cloud

5.1 Resource Distribution

Resource availability is crucial when running applications in a virtual machine, as virtualization leads to increased resource consumption. Resource distribution in VMs is typically managed by a hypervisor or virtual machine manager responsible for allocating resources to the VMs based on their specific requirements. A study found that poor resource management can lead to overprovisioning, increasing cloud costs by up to 70%. (Source: Gartner)

5.2 VM Sprawl

According to one survey, 82% of companies have experienced VM sprawl, with the average organization running 115% more VMs than it needs. (Source: Veeam) VM sprawl occurs when an excessive proliferation of virtual machines is not effectively managed or utilized, leaving many VMs underutilized or inactive. This can lead to increased resource consumption, higher costs, and reduced performance.

5.3 Backward Compatibility

Backward compatibility can be particularly challenging in virtualized systems, where applications may run on operating systems other than the ones they were designed for. A recent study showed that 87% of enterprises have encountered software compatibility issues during their migration to the cloud for app virtualization. (Source: Flexera)

5.4 Conditional Network Monitoring

A study found that misconfigurations, hardware problems, and human error account for over 60% of network outages. (Source: SolarWinds) Network monitoring tools can help organizations monitor virtual network traffic and identify potential network issues affecting application performance in VMs. These tools also provide visibility into network traffic patterns, enabling IT teams to identify areas for optimization and improvement.

5.5 Interoperability

Interoperability issues are common in cloud-based virtualization when integrating the virtualized environment with other on-premises or cloud-based systems. According to a report, around 50% of virtualization projects encounter interoperability issues that require extensive troubleshooting and debugging. (Source: Gartner)

6. Overcoming Roadblocks: Best Practices for Successful Execution of VMs

6.1 Unlocking the Power of Resource Distribution

By breaking up large, monolithic applications into smaller, more manageable components, virtualization allows organizations to distribute resources effectively, enabling users with varying needs to utilize them with optimum efficiency. When resource distribution is prioritized, resources such as CPU, memory, and storage can be dynamically allocated to virtual machines as needed. Businesses must frequently monitor and evaluate resource utilization data to improve resource allocation and management.

6.2 Effective Techniques for Avoiding VM Sprawl

VM sprawl can be addressed through a variety of techniques, including VM lifecycle management, automated provisioning, and regular audits of virtual machine usage (a minimal audit sketch follows this section). Tools such as virtualization management software, cloud management platforms, and monitoring tools can help organizations gain better visibility and control over their virtual infrastructure. Monitoring application and workload requirements, as well as establishing policies and procedures for virtual machine provisioning and decommissioning, is crucial for businesses to avoid VM sprawl.
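As one concrete form such an audit can take, the sketch below uses the libvirt Python bindings (again an illustrative choice, not something the article prescribes) to list every defined VM on a host and flag the ones that are currently shut off, which are the usual first candidates when reclaiming sprawl.

```python
# Illustrative sprawl audit: list all defined guests on a libvirt-managed host
# and separate the running ones from those that are shut off (sprawl candidates).
import libvirt

def audit_guests(uri="qemu:///system"):
    conn = libvirt.open(uri)
    try:
        running, stopped = [], []
        for dom in conn.listAllDomains():
            (running if dom.isActive() else stopped).append(dom.name())
        print(f"Running VMs ({len(running)}): {', '.join(sorted(running)) or '-'}")
        print(f"Shut-off VMs ({len(stopped)}): {', '.join(sorted(stopped)) or '-'}")
        return stopped  # candidates for review or decommissioning
    finally:
        conn.close()

if __name__ == "__main__":
    audit_guests()
```

Feeding a list like this into the provisioning and decommissioning policies mentioned above is what turns a one-off cleanup into ongoing lifecycle management.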
6.3 Backward Compatibility: A Comprehensive Solution

One solution to backward compatibility challenges is to use virtualization technologies, such as containers or hypervisors, that allow older applications to run on newer hardware and software. Another is to use compatibility testing tools that can identify potential compatibility issues before they become problems. To ensure that virtual machines can run on different hypervisors or cloud platforms, businesses can implement standardized virtualization architectures that support a wide range of hardware and software configurations.

6.4 Performance Metrics

Businesses employing cloud-based virtualization need reliable network monitoring to guarantee the best possible performance of their virtual workloads and to promptly detect and resolve any problems that affect performance. Businesses can improve their customers' experience on VMs by implementing a network monitoring solution that helps them locate slow spots, boost speed, and avoid interruptions.

6.5 Solutions for Interoperability in a Connected World

Standardized communication protocols and APIs help cloud-based virtualization setups interoperate. Integrating middleware such as enterprise service buses (ESBs) can consolidate system and application management. In addition, businesses can use cloud-native tools and services, such as Kubernetes for container orchestration or cloud-native databases, for interoperability in virtual machines.

7. Five Leading Providers for Virtualization of VMs

Aryaka
Aryaka is a pioneer of a cloud-first architecture for the delivery of SD-WAN and, more recently, SASE. Using their proprietary, integrated technology and services, they ensure safe connectivity for businesses. They are named a Gartner 'Voice of the Customer' leader for simplifying the adoption of network and network security solutions for organizations shifting from legacy IT infrastructure to various modern deployments.

Gigamon
Gigamon provides a comprehensive network observability solution that enhances the capabilities of observability tools. The solution helps IT organizations ensure security and compliance governance, accelerate the root-cause analysis of performance issues, and reduce the operational overhead of managing complex hybrid and multi-cloud IT infrastructures. Gigamon's solution offers a deep observability pipeline that harnesses actionable network-level intelligence to amplify the power of observability tools.

Liquidware
Liquidware is a software company that offers desktop and application virtualization solutions. Their services include user environment management, application layering, desktop virtualization, monitoring and analytics, and migration services. Using these services, businesses can improve user productivity, reduce complexity in managing applications, lower hardware costs, troubleshoot issues quickly, and migrate to virtualized environments efficiently.

Azul
Azul offers businesses Java runtime solutions. Azul Platform Prime is a cloud-based Java runtime platform that provides enhanced performance, scalability, and security. Azul provides 24/7 technical support and upgrades for Java applications. Their services improve Java application performance, dependability, and security for enterprises. Azul also provides Java application development and deployment training and consultancy.

8. Conclusion

Virtualization of VMs boosts businesses' ROI significantly. The integration of virtualization with DevOps practices could allow for more streamlined application delivery and deployment, with greater automation and continuous integration, helping organizations succeed in the current competitive business landscape. We expect to see more advancements in new hypervisors and management tools in the coming years. Additionally, there will likely be an increased focus on security and data protection in virtualized environments, as well as greater integration with other emerging technologies such as containerization and edge computing. As technology advances and new trends emerge, virtualization is set to transform the business landscape by facilitating the effective and safe deployment and management of applications. The future of virtualization looks promising as it continues to adapt to and shape the changing needs of organizations, streamlining their operations, reducing their carbon footprint, and improving overall sustainability. As such, virtualization will continue to be a crucial technology for businesses seeking to thrive in the digital age.

Read More
Virtual Desktop Tools

VMware Tanzu Kubernetes Grid Integrated: A Year in Review

Article | August 12, 2022

The modern application world is advancing at an unprecedented rate. However, the new possibilities these transformations make available don't come without complexities. IT teams often find themselves under pressure to keep up with the speed of innovation. That's why VMware provides a production-ready container platform that aligns to upstream Kubernetes: VMware Tanzu Kubernetes Grid Integrated (formerly known as VMware Enterprise PKS). By working with VMware, customers can move at the speed their businesses demand without the headache of trying to run their operations alone. Our offerings help customers stay current with the open source community's innovations while having access to the support they need to move forward confidently. Many changes have been made to the Tanzu Kubernetes Grid Integrated edition over the past year that are designed to help customers keep up with Kubernetes advancements, move faster, and enhance security.

Kubernetes updates

The latest version, Tanzu Kubernetes Grid Integrated 1.13, moved to Kubernetes 1.22 and removed beta APIs in favor of the stable APIs that have since evolved from them. Over time, APIs evolve, and beta APIs typically evolve more often than stable ones; they should therefore be checked before updates occur. The APIs listed below are no longer served in v1.22, as they have been replaced by more stable API versions:

• Beta versions of the ValidatingWebhookConfiguration and MutatingWebhookConfiguration APIs (the admissionregistration.k8s.io/v1beta1 API versions)
• The beta CustomResourceDefinition API (apiextensions.k8s.io/v1beta1)
• The beta APIService API (apiregistration.k8s.io/v1beta1)
• The beta TokenReview API (authentication.k8s.io/v1beta1)
• Beta API versions of SubjectAccessReview, LocalSubjectAccessReview, and SelfSubjectAccessReview (API versions from authorization.k8s.io/v1beta1)
• The beta CertificateSigningRequest API (certificates.k8s.io/v1beta1)
• The beta Lease API (coordination.k8s.io/v1beta1)
• All beta Ingress APIs (the extensions/v1beta1 and networking.k8s.io/v1beta1 API versions)
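Before upgrading to a release built on Kubernetes 1.22, it is worth confirming that workloads and tooling already go through the stable groups. The sketch below is a minimal illustration (not from the original article) that uses the official Kubernetes Python client to list Ingress and CustomResourceDefinition objects through their v1 APIs; anything still pinned to the removed v1beta1 versions in manifests or controllers is what remains to be updated.

```python
# Minimal check that cluster objects are reachable through the stable (v1) API
# groups that replace the beta APIs removed in Kubernetes 1.22.
# Assumes a kubeconfig with access to the target cluster.
from kubernetes import client, config

def list_via_stable_apis():
    config.load_kube_config()

    # networking.k8s.io/v1 replaces the removed beta Ingress APIs.
    net = client.NetworkingV1Api()
    for ing in net.list_ingress_for_all_namespaces().items:
        print(f"Ingress {ing.metadata.namespace}/{ing.metadata.name} "
              f"served via networking.k8s.io/v1")

    # apiextensions.k8s.io/v1 replaces the beta CustomResourceDefinition API.
    ext = client.ApiextensionsV1Api()
    for crd in ext.list_custom_resource_definition().items:
        print(f"CRD {crd.metadata.name} served via apiextensions.k8s.io/v1")

if __name__ == "__main__":
    list_via_stable_apis()
```

Manifests or controllers that still request the v1beta1 versions of these objects need to be updated before the cluster upgrade, since the old endpoints simply stop being served.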
Containerd support

Tanzu Kubernetes Grid Integrated helps customers eliminate lengthy deployment and management processes with on-demand provisioning, scaling, patching, and updating of Kubernetes clusters. To stay in alignment with the Kubernetes community, containerd is now used as the default container runtime, although Docker can still be selected using the command-line interface (CLI) if needed.

Networking

Several updates have been made with regard to networking as well, including support for Antrea and NSX-T enhancements.

Antrea support

With Tanzu Kubernetes Grid Integrated version 1.10 and later, customers can leverage Antrea on install or upgrade to use Kubernetes network policies (a minimal policy example appears at the end of this article). This enables enterprises to get the best of both worlds: access to the latest innovation from Antrea and world-class support from VMware.

NSX-T enhancements

NSX-T was integrated with Tanzu Kubernetes Grid Integrated to simplify container networking and increase security. This integration has been enhanced so that customers can now choose the policy API as an option on a fresh installation of Tanzu Kubernetes Grid Integrated, giving them access to new features available only through the NSX-T policy API. This feature is currently in beta. In addition, more NSX-T and NSX Container Plug-in (NCP) configuration is possible through the network profiles. This operator command provides the benefit of being able to set configurations through the CLI, and the settings persist across lifecycle events.

Storage enhancements

We've made storage operations in our customers' container-native environments easier, too. Customers were seeking a simpler and more secure way to manage the Container Storage Interface (CSI), so we introduced automatic installation of the vSphere CSI driver as a BOSH process beginning with Tanzu Kubernetes Grid Integrated 1.11. Also, as VCP will be deprecated, customers are advised to use the CSI driver. VCP-to-CSI migration is part of Tanzu Kubernetes Grid Integrated 1.12 and is designed to help customers move forward faster.

Enhanced security

Implementing new technologies gives users new capabilities, but it can also lead to new security vulnerabilities if not done correctly. VMware's goal is to help customers move forward with ease and with the confidence that enhancements don't compromise core security needs.

CIS benchmarks

This year, Tanzu Kubernetes Grid Integrated continued to see improvements that help meet today's high security standards. Meeting the Center for Internet Security (CIS) benchmark standards is vital for Tanzu Kubernetes Grid Integrated. In recent releases, a few Kubernetes-related settings have been adjusted to ensure compliance with CIS requirements:

• Kube-apiserver with --kubelet-certificate-authority settings (v1.12)
• Kube-apiserver with an --authorization-mode argument that includes Node (v1.12)
• Kube-apiserver with a proper --audit-log-maxage argument (v1.13)
• Kube-apiserver with a proper --audit-log-maxbackup argument (v1.13)
• Kube-apiserver with a proper --audit-log-maxsize argument (v1.13)

Certificate rotations

Tanzu Kubernetes Grid Integrated secures all communication between its control plane components and the Kubernetes clusters it manages using TLS validated by certificates. Certificate rotation has been simplified in recent releases: customers can now list and update certificates on a cluster-by-cluster basis through the "tkgi rotate-certificates" command. The multistep, manual process was replaced with a single CLI command to rotate NSX-T certificates (available since Tanzu Kubernetes Grid Integrated 1.10) and cluster-by-cluster certificates (available since Tanzu Kubernetes Grid Integrated 1.12).

Hardening of images

Tanzu Kubernetes Grid Integrated keeps OS images, container base images, and software library versions updated to remediate the CVEs reported by customers and in the industry. It also continues to use the latest Ubuntu Xenial stemcell versions for node virtual machines. With recent releases and patch versions, the versions of dockerd, containerd, runc, telegraf, and nfs-utils have been bumped to the latest stable and secure versions as well. By using Harbor as a private registry management service, customers can also leverage its built-in vulnerability scanning to discover CVEs in application images.

VMware is dedicated to supporting customers with production readiness by enhancing the user experience. Tanzu Kubernetes Grid Integrated Edition has stayed up to date with the Kubernetes community and provides customers with the support and resources they need to innovate rapidly.
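As referenced in the Antrea section above, the sketch below shows the kind of standard Kubernetes NetworkPolicy an Antrea-enabled cluster can enforce. It is an illustration rather than anything from the article: the namespace, labels, and policy name are hypothetical, and it uses the official Kubernetes Python client to keep the example in one language.

```python
# Illustrative NetworkPolicy: only pods labeled app=frontend may reach pods
# labeled app=backend in the "demo" namespace. Names and labels are hypothetical.
from kubernetes import client, config

def create_backend_ingress_policy(namespace="demo"):
    config.load_kube_config()
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend-to-backend"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[
                        client.V1NetworkPolicyPeer(
                            pod_selector=client.V1LabelSelector(
                                match_labels={"app": "frontend"}
                            )
                        )
                    ]
                )
            ],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace=namespace, body=policy
    )
    print(f"NetworkPolicy applied in namespace {namespace}")

if __name__ == "__main__":
    create_backend_ingress_policy()
```

The policy itself is plain Kubernetes; Antrea is simply the CNI that enforces it on clusters where it is enabled.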

Read More
Virtual Desktop Tools

Evaluating the Impact of Application Virtualization

Article | August 12, 2022

The emergence of the notion of virtualization in today's digital world has turned the tables. It has assisted the sector in increasing production and making every activity easy and effective. One of the most remarkable innovations is the virtualization of applications, which allows users to access and utilize applications even if they are not installed on the system on which they are working. As a result, the cost of obtaining software and installing it on specific devices is reduced.

Application virtualization is a technique that separates an application from the operating system on which it runs. It provides access to a program without requiring it to be installed on the target device. The program functions and interacts with the user as if it were native to the device. The program window can be resized, moved, or minimized, and the user can utilize normal keyboard and mouse movements. There might be minor differences from time to time, but the user gets a seamless experience. Let's have a look at the ways in which application virtualization helps businesses.

The Impact of Application Virtualization

• Remote-Safe Approach: Application virtualization enables remote access to essential programs from any end device in a safe and secure manner. With remote work culture developing as an increasingly successful global work paradigm, the majority of businesses have adapted to remote work-from-home practice. This state-of-the-art technology is the best option for remote working environments because it combines security and convenience of access.

• Expenditure Limitations: If you have a large end-user base that is always growing, acquiring and operating separate expensive devices for each individual user would definitely exhaust your budget. In such situations, virtualization will undoubtedly come in handy because it has the potential to offer all necessary applications to any target device.

• Rolling Out Cloud Applications: Application virtualization can aid in the development and execution of a sophisticated and controlled strategy to manage and assure a seamless cloud transition of an application that is presently used as an on-premise version in portions of the same enterprise. In such cases, it is vital to guarantee that the application continues to work properly while being rolled out to cloud locations. You can assure maximum continuity and little impact on your end customers by adopting a cutting-edge virtualization platform. These platforms will help to ensure that both the on-premise and cloud versions of the application are delivered smoothly to diverse groups sitting inside the same workspace.

• Implementation of In-House Applications: Another prominent case in which virtualization might be beneficial is the deployment and execution of in-house applications. Developers often update such programs on a regular basis. Application virtualization enables extensive remote updates, installation, and distribution of critical software. As a result, this technology is crucial for enterprises that build and employ in-house applications.

Closing Lines

There is no doubt about the efficiency and advantages of application virtualization. You do not need to be concerned with installing the programs on your system. Moreover, you do not need to maintain the minimum requirements for running such programs, since they will operate on the hosted server, giving you the impression that the application is operating on your system. There will be no performance concerns when the program runs. There will not be any overload on your system, and you will not encounter any compatibility issues as a result of your system's underlying operating system.

Read More

Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP with enhanced video processing performance to strengthen its presence in data center applications. The newly launched series IP caters to the advanced requirements of next-generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming. The VC9800 series of VPU IP boasts high performance, high throughput, and server-level multi-stream encoding and decoding capabilities. It can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series IP improves video quality significantly with a low memory footprint and low encoding latency. Capable of 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, thus achieving an efficient transcoding solution. The VC9800 series of VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon's Graphics Processor Unit (GPU) IP, the subsystem solution is able to deliver enhanced gaming experiences. In addition, the hardware virtualization, super resolution image enhancement, and AI-enabled encoding functions of this series IP also offer effective solutions for virtual cloud desktops.

"VeriSilicon's advanced video transcoding technology continues leading in Data Center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest Data Centers," said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. "For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. We've also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display."

About VeriSilicon
VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.

Read More

Server Virtualization

Panasonic Automotive Introduces Neuron High-Performance Compute (HPC) to Advance to a Software-Defined Mobility Future

PR Newswire | January 09, 2024

Panasonic Automotive Systems Company of America, a tier-one automotive supplier and a division of Panasonic Corporation of North America, announced its High-Performance Compute (HPC) system. Named Neuron, this innovation addresses the rapidly evolving mobility needs anticipated for software-defined vehicle advancements. As vehicles become more software reliant, vehicle systems must support the extended software lifecycle by enabling software upgrades and prolonging the supporting hardware capability. Cars rely on hardware and software compute platforms to process, share, sense, and derive insights to handle functions for assisted driving. Panasonic Automotive's Neuron HPC allows for not only software updates and upgrades but also hardware upgrades across platform lifecycles. The Neuron HPC can aggregate multiple computing zones to reduce the cost, weight, and integration complexity of the vehicle by removing redundant components. Panasonic Automotive's design supports effortless up-integration with high-performance and heavy data input processing capability. Importantly, the design is upgradeable, scalable, and future-proof across today's evolving in-vehicle platforms.

Neuron HPC Architecture & Design
Panasonic Automotive's High-Performance Compute architecture could reduce the number of distributed electronic control units (ECUs) by up to 80%, allowing for faster, lighter, cross-domain computing for real-time, cross-functional communications. The Neuron HPC design is suited for any mobility platform, including internal combustion engine, hybrid, fuel cell, or electric vehicles.

"In collaboration with OEMs, Panasonic Automotive has designed and met some of the largest central compute platform challenges in the industry in order to make the driving experience evolve with technology," said Andrew Poliak, CTO, Panasonic Automotive Systems Company of America. "Neuron maximizes performance, safety and innovation over the entire ownership of the consumer's vehicle and enables OEMs with a future-proof SDV platform for ensuing generations of mobility needs."

Key Systems, UX Features & Technical Benefits
With a streamlined design, the Neuron HPC incorporates up-integration capability by consolidating multiple ECUs into one centralized nucleus to handle all levels of ADAS, chassis, body, and in-cabin infotainment features.

About Panasonic Automotive Systems Company of America
Panasonic Automotive Systems Company of America is a division company of Panasonic Corporation of North America and a leading global supplier of automotive infotainment and connectivity system solutions. Panasonic Automotive Systems Company of America acts as the North American affiliate of Panasonic Automotive Systems Co., Ltd., which coordinates global automotive operations. Panasonic Automotive Systems Company of America is headquartered in Peachtree City, Georgia, with sales, marketing, and engineering operations in Farmington Hills, Mich.

About Panasonic Corporation of North America
Newark, NJ-based Panasonic Corporation of North America is committed to creating a better life and a better world by enabling its customers through innovations in Sustainable Energy, Immersive Entertainment, Integrated Supply Chains and Mobility Solutions. The company is the principal North American subsidiary of Osaka, Japan-based Panasonic Corporation. One of Interbrand's Top 100 Best Global Brands of 2023, Panasonic is a leading technology partner and integrator to businesses, government agencies and consumers across the region.

Read More

Server Virtualization

AELF Partners with ChainsAtlas to Pioneer Interoperability in Blockchain

PR Newswire | January 09, 2024

aelf is advancing cross-chain interoperability through a strategic partnership with ChainsAtlas. By utilising ChainsAtlas' innovative virtualisation technology, aelf will enable decentralised applications (dApps) from diverse blockchains to seamlessly migrate and integrate into the aelf blockchain, regardless of the dApps' smart contract specifications. This collaboration marks a significant step towards a globally interconnected and efficient blockchain ecosystem, breaking down the silos between blockchains.

Khaniff Lau, Business Development Director at aelf, shares, "The strategic partnership with ChainsAtlas is a significant step towards realising our vision of a seamlessly interconnected blockchain world. With this integration, aelf is set to become a hub for cross-chain activities, enhancing our ability to support a wide array of dApps, digital assets, and Web2 apps. This collaboration is not just about technology integration; it's about shaping the future of how services and products on blockchains interact and operate in synergy."

Jan Hanken, Co-founder of ChainsAtlas, says, "ChainsAtlas was always built to achieve two major goals: to make blockchain development accessible to a broad spectrum of developers and entrepreneurs and, along that path, to pave the way for a truly omnichain future."

"By joining forces with aelf, we are bringing that visionary future much closer to reality. As we anticipate the influx of creativity from innovators taking their first steps into the world of Web3 on aelf, driven by ChainsAtlas technology, we are excited to see these groundbreaking ideas come to life," adds Hanken.

The foundation for true cross-chain interoperability is being built as aelf integrates ChainsAtlas' Virtualization Unit (VU), enabling the aelf blockchain to accommodate both EVM and non-EVM digital assets. This cross-chain functionality is accomplished through ChainsAtlas' virtualisation technology, allowing aelf to interpret and execute smart contracts written in other languages supported by ChainsAtlas, while also establishing state transfer mechanisms that facilitate seamless data and asset flow between aelf and other blockchains. Through this partnership, the aelf blockchain's capabilities will be enhanced as it becomes able to support a more comprehensive range of dApps and games, and developers from diverse coding backgrounds will now be empowered to build on the aelf blockchain. This partnership will also foster increased engagement within the Web3 community, as users can gain access to a more diverse range of digital assets on aelf. Looking ahead, the partnership between aelf and ChainsAtlas will play a pivotal role in advancing the evolution of aelf's sidechains by enabling simultaneous execution of program components across multiple VUs on different blockchains.

About aelf
aelf is a high-performance Layer 1 blockchain featuring multi-sidechain technology for unlimited scalability. The aelf blockchain is designed to power the development of Web3 and support its continuous advancement into the future. Founded in 2017 with its global hub based in Singapore, aelf is one of the pioneers of the mainchain-sidechain architecture concept. Incorporating key foundational components, including AEDPoS, aelf's variation of a Delegated Proof-of-Stake (DPoS) consensus protocol; parallel processing; peer-to-peer (P2P) network communication; cross-chain bridges; and a dynamic sidechain indexing mechanism, aelf delivers a highly efficient, safe, and modular ecosystem with high throughput, scalability, and interoperability. aelf facilitates the building, integrating, and deploying of smart contracts and decentralised apps (dApps) on its blockchain with its native C# software development kit (SDK) and SDKs in other languages, including Java, JS, Python, and Go. aelf's ecosystem also houses a range of dApps to support a flourishing blockchain network. aelf is committed to fostering innovation within its ecosystem and remains dedicated to driving the development of Web3 and the adoption of blockchain technology.

About ChainsAtlas
ChainsAtlas introduces a new approach to Web3 infrastructure, blending multiple blockchain technologies and smart contract features to create a unified, efficient processing network. Its core innovation lies in virtualization-enabled smart contracts, allowing consistent software operation across different blockchains. This approach enhances decentralized applications' complexity and reliability, promoting easier integration of existing software into the blockchain ecosystem. The team behind ChainsAtlas, driven by the transformative potential of blockchain, aims to foster global opportunities and equality. Their commitment to building on existing blockchain infrastructure marks a significant step towards a new phase in Web3, where advanced and reliable decentralized applications become the norm, setting new standards for the future of decentralized networks.

Read More
