Fifth Generation Computing: Virtualization And How To Manage It

Imagine we are in a time machine and decide to go back more than 50 years, to 1964. While there would be many differences, some things aren't as different as we might initially think. Between 1964 and 1971, computer users were already interacting with computers through keyboards and monitors. They also interfaced with an operating system that allowed the machine to run many different applications at once, with a central program monitoring memory. Sound familiar? Now, let's fast-forward to the present day.

Spotlight

Thales

The people we all rely on to make the world go round – they rely on Thales. Our customers come to us with big ambitions: to make life better, to keep us safer. Combining a unique diversity of expertise, talents and cultures, our architects design and deliver extraordinary high technology solutions. Solutions that make tomorrow possible, today.

OTHER ARTICLES
Virtual Desktop Strategies

The Business Benefits of Embracing Virtualization on Virtual Machines

Article | July 26, 2022

Neglecting virtualization on VMs hampers firms' productivity: operations become complex and resource usage stays suboptimal. Leveraging virtualization empowers businesses with enhanced efficiency and scalability.

Contents
1. Introduction
2. Types of Virtualization on VMs
2.1 Server Virtualization
2.2 Storage Virtualization
2.3 Network Virtualization
2.3.1 Software-Defined Networking
2.3.2 Network Function Virtualization
2.4 Data Virtualization
2.5 Application Virtualization
2.6 Desktop Virtualization
3. Impact of Virtualized VMs on Business Enterprises
3.1 Virtualization as a Game-Changer for Business Models
3.2 Evaluating IT Infrastructure Reformation
3.3 Virtualization Impact on Business Agility
4. How Can Businesses Scale ROI with Adoption of Virtualization in Virtual Machines?
5. Risks and Challenges of Virtual Machines in the Cloud
5.1 Resource Distribution
5.2 VM Sprawl
5.3 Backward Compatibility
5.4 Conditional Network Monitoring
5.5 Interoperability
6. Overcoming Roadblocks: Best Practices for Successful Execution of VMs
6.1 Unlocking the Power of Resource Distribution
6.2 Effective Techniques for Avoiding VM Sprawl
6.3 Backward Compatibility: A Comprehensive Solution
6.4 Performance Metrics
6.5 Solutions for Interoperability in a Connected World
7. Five Leading Providers for Virtualization of VMs (Parallels, Aryaka, Gigamon, Liquidware, Azul)
8. Conclusion

1. Introduction

Virtualization on virtual machines (VMs) is a technology that enables multiple operating systems and applications to run on a single physical server or host. It has become essential to modern IT infrastructure, allowing businesses to optimize resource utilization, increase flexibility, and reduce costs. Embracing virtualization on VMs offers many business benefits, including improved disaster recovery, increased efficiency, enhanced security, and better scalability.
In this digital age, where businesses rely heavily on technology to operate and compete, virtualization on VMs has become a crucial strategy for staying competitive and achieving business success. Organizations need to be agile and responsive to changing customer demands and market trends. Rather than focusing on consolidating resources, the emphasis now lies on streamlining operations, maximizing productivity, and optimizing convenience.

2. Types of Virtualization on VMs

2.1 Server Virtualization

Server virtualization divides a physical server into several virtual servers. This allows organizations to consolidate multiple physical servers onto a single machine, which leads to cost savings, improved efficiency, and easier management. Server virtualization is one of the most common types of virtualization used on VMs. Consistent stability and reliability is the most critical product attribute IT decision-makers look for when evaluating server virtualization solutions; robust disaster recovery capabilities and advanced security features are also important. The server virtualization market was valued at USD 5.7 billion in 2018 and is projected to reach USD 9.04 billion by 2026, growing at a CAGR of 5.9% from 2019 to 2026. (Source: Verified Market Research)

2.2 Storage Virtualization

By combining multiple network storage devices into one integrated virtual storage device, storage virtualization enables a cohesive and efficient approach to data management within a data center. IT administrators allocate and manage the virtual storage unit through management software, which streamlines tasks such as backup, archiving, and recovery. There are three types of storage virtualization: file-level, block-level, and object-level. File-level virtualization consolidates multiple file systems into one virtualized system for easier management. Block-level virtualization abstracts physical storage into logical volumes allocated to VMs. Object-level virtualization creates a logical storage pool that provides more flexible and scalable storage services to VMs. The storage virtualization segment held an industry share of more than 10.5% in 2021 and is likely to see considerable expansion through 2030. (Source: Global Market Insights)

2.3 Network Virtualization

Any computer network includes hardware elements such as switches, routers, load balancers, and firewalls. With network virtualization, virtual machines can communicate with each other across virtual networks, even if they are on different physical hosts. Network virtualization can also create isolated virtual networks, which is helpful for security purposes or for building test environments. There are two main approaches:

2.3.1 Software-Defined Networking

Software-defined networking (SDN) separates traffic control from the physical data-routing hardware, so routing policy is managed in software. For example, the network can be programmed to prioritize video-call traffic over application traffic to ensure consistent call quality in online meetings.

2.3.2 Network Function Virtualization

Network function virtualization (NFV) replaces network appliances such as firewalls, load balancers, and traffic analyzers with software functions that work together to improve network performance. The global network function virtualization market was valued at USD 12.9 billion in 2019 and is projected to reach USD 36.3 billion by 2024, at a CAGR of 22.9% during the forecast period (2019-2024). (Source: MarketsandMarkets)

2.4 Data Virtualization

Data virtualization abstracts, organizes, and presents data in a unified view that applications and users can access regardless of the data's physical location or format. Data virtualization platforms create a logical data layer that provides a single access point to multiple data sources, whether on-premises or in the cloud. This logical data layer is presented to users as a single virtual database, making it easier for applications and users to access and work with data from multiple sources and supporting cross-functional data analysis. The data virtualization market was valued at USD 2.37 billion in 2021 and is projected to reach USD 13.53 billion by 2030, growing at a CAGR of 20.2% from 2023 to 2030. (Source: Verified Market Research)

2.5 Application Virtualization

In this approach, applications are separated from the underlying hardware and operating system and encapsulated in a virtual environment that can run on any compatible hardware and operating system. The application is installed and configured on a virtual machine, which can then be replicated and distributed to multiple end users. For example, users can run a Microsoft Windows application on a Linux machine without changing the machine's configuration. The global application virtualization market is predicted to grow from USD 2.2 billion in 2020 to USD 4.4 billion by 2025, at a CAGR of 14.7% during 2020-2025. (Source: MarketsandMarkets)

2.6 Desktop Virtualization

In desktop virtualization, a single physical machine hosts multiple virtual machines, each with its own operating system and desktop environment. Users access these virtual desktops remotely over a network connection, allowing them to work from anywhere and on any device. Desktop virtualization is commonly used in enterprise settings to give employees a secure and flexible way to access their work environment. The desktop virtualization market is anticipated to register a CAGR of 10.6% over the forecast period (2018-2028). (Source: Mordor Intelligence)

3. Impact of Virtualized VMs on Business Enterprises

Virtualization can increase the adaptability of business processes.
Since software is decoupled from hardware, servers can support different operating systems (OS) and applications. Business processes can run on virtual computers, with each virtual machine running its own OS, applications, and set of programs.

3.1 Virtualization as a Game-Changer for Business Models

Virtualization does away with the one server, one application model, which was inefficient because most servers were underutilized. Instead, virtualization software turns one server into many virtual machines, each running its own operating system, such as Windows or Linux. Virtualization has made it possible for companies to fit more virtual servers onto fewer physical devices, saving space, power, and management time. Industrial automation systems have significantly increased the adoption of virtualization services. Industrial automation suppliers offer new-generation devices to virtualize VMs and software-driven industrial automation operations. This will address problems with critical automation equipment such as programmable logic controllers (PLCs) and distributed control systems (DCS), leading to more virtualized goods and services in industrial automation processes.

3.2 Evaluating IT Infrastructure Reformation

Evaluating IT infrastructure for virtualization means examining existing systems and processes and identifying opportunities and shortcomings. Cloud computing, mobile workforces, and application compatibility drive this growth; over the last decade, these areas have shifted from conventional to virtual infrastructure.

• Capacity on Demand: The ability to quickly and easily deploy virtual servers, either on-premises or through a hosting provider. Virtualization technologies let businesses create multiple virtual instances of servers that can be scaled up or down as required, providing access to IT capacity on demand.

• Disaster Recovery (DR): DR is a critical consideration in evaluating IT infrastructure reformation for virtualization. Virtualization lets businesses create virtual instances of servers running multiple applications, reducing the need for dedicated DR solutions that can be expensive and time-consuming to implement. As a result, businesses can save costs by leveraging the virtual infrastructure for DR purposes.

• Consumerization of IT: The consumerization of IT refers to the growing trend of employees using personal devices and applications at work. Businesses therefore need IT infrastructure that supports a diverse range of devices and applications. Virtual machines enable virtual desktop environments that can be accessed from any device with an internet connection, giving employees a consistent and secure work environment regardless of device.

3.3 Virtualization Impact on Business Agility

Virtualization has emerged as a valuable tool for enhancing business agility, allowing firms to respond quickly, efficiently, and cost-effectively to market changes. By enabling rapid installation and migration of applications and services across systems, the move to virtualized systems has allowed companies to achieve significant gains in operational flexibility, responsiveness, and scalability. According to a TechTarget poll, 66% of firms reported an increase in agility due to virtualization adoption. This trend is expected to continue, driven by growing demand for cost-effective and efficient IT solutions across various industries.
In line with this, the market for virtualization software, including application, network, and hardware virtualization, was estimated at USD 45.51 billion in 2021 and is anticipated to reach USD 223.35 billion by 2029, at a CAGR of 22.00% over the forecast period 2022-2029. (Source: Data Bridge) This growth is primarily attributed to businesses' need to improve agility and competitiveness by leveraging advanced virtualization technologies and solutions for applications and servers.

4. How Can Businesses Scale ROI with Adoption of Virtualization in Virtual Machines?

Businesses looking to boost ROI have gradually shifted to virtualizing VMs in recent years. According to a recent study, VM virtualization helps businesses reduce hardware and maintenance costs by up to 50%, significantly improving the bottom line. Server consolidation reduces hardware costs and improves resource utilization, as businesses allocate resources, operating systems, and applications dynamically based on workload demand. Application virtualization in particular can help businesses improve resource utilization by as much as 80%. Software-defined networking (SDN) allows new devices, some with previously unsupported operating systems, to be incorporated more easily into an enterprise's IT environment. The telecom industry stands to benefit greatly from the emergence of network functions virtualization (NFV), SDN, and network virtualization. NFV virtualizes service provider network elements and consolidates them onto multi-tenant, industry-standard servers, switches, and storage; to capture these benefits, telecom service providers have invested heavily in NFV services. By deploying NFV and application virtualization together, organizations can create a more flexible and scalable IT infrastructure that responds to changing business needs more effectively.

5. Risks and Challenges of Virtual Machines in the Cloud

5.1 Resource Distribution

Resource availability is crucial when running applications in a virtual machine, since virtualization increases resource consumption. Resource distribution in VMs is typically managed by a hypervisor, or virtual machine manager, responsible for allocating resources to the VMs based on their specific requirements. A study found that poor resource management can lead to overprovisioning, increasing cloud costs by up to 70%. (Source: Gartner)

5.2 VM Sprawl

Per one survey, 82% of companies have experienced VM sprawl, with the average organization running 115% more VMs than it needs. (Source: Veeam) VM sprawl occurs when an excessive proliferation of virtual machines is not effectively managed or utilized, leaving many VMs underutilized or inactive. This leads to increased resource consumption, higher costs, and reduced performance.

5.3 Backward Compatibility

Backward compatibility can be particularly challenging in virtualized systems, where applications may run on operating systems other than those they were designed for. A recent study showed that 87% of enterprises have encountered software compatibility issues during their migration to the cloud for app virtualization. (Source: Flexera)

5.4 Conditional Network Monitoring

A study found that misconfigurations, hardware problems, and human error account for over 60% of network outages. (Source: SolarWinds) Network monitoring tools help organizations monitor virtual network traffic and identify potential network issues affecting application performance in VMs. These tools also provide visibility into traffic patterns, enabling IT teams to identify areas for optimization and improvement.
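To make the resource-distribution and VM-sprawl risks above concrete, the kind of audit that section 6.2 recommends can be sketched in a few lines. This is an illustrative sketch only; the `VMRecord` fields, the 5% CPU threshold, and the 30-day idle window are assumptions for the example, not figures from the article:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VMRecord:
    name: str
    avg_cpu_pct: float   # average CPU utilization over the review window
    last_active: date    # last day the VM served any workload

def find_sprawl_candidates(vms, today, cpu_threshold=5.0, idle_days=30):
    """Flag VMs that look idle or underutilized and may be sprawl."""
    candidates = []
    for vm in vms:
        idle = (today - vm.last_active).days >= idle_days
        underused = vm.avg_cpu_pct < cpu_threshold
        if idle or underused:
            candidates.append(vm.name)
    return candidates

# Hypothetical inventory pulled from a monitoring export
inventory = [
    VMRecord("web-01", avg_cpu_pct=42.0, last_active=date(2023, 5, 1)),
    VMRecord("test-legacy", avg_cpu_pct=1.2, last_active=date(2023, 1, 10)),
    VMRecord("batch-07", avg_cpu_pct=18.0, last_active=date(2023, 2, 1)),
]
print(find_sprawl_candidates(inventory, today=date(2023, 5, 2)))
```

A real deployment would pull these metrics from the hypervisor's or cloud platform's monitoring API rather than a hand-built list, but the decision rule is the same.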
5.5 Interoperability

Interoperability issues are common in cloud-based virtualization, particularly when integrating the virtualized environment with other on-premises or cloud-based systems. According to one report, around 50% of virtualization projects encounter interoperability issues that require extensive troubleshooting and debugging. (Source: Gartner)

6. Overcoming Roadblocks: Best Practices for Successful Execution of VMs

6.1 Unlocking the Power of Resource Distribution

By breaking large, monolithic applications into smaller, more manageable components, virtualization lets organizations distribute resources effectively, so users with varying needs can utilize them with optimum efficiency. When resource distribution is prioritized, resources such as CPU, memory, and storage can be allocated dynamically to virtual machines as needed. Businesses should monitor and evaluate resource utilization data frequently to improve resource allocation and management.

6.2 Effective Techniques for Avoiding VM Sprawl

VM sprawl can be addressed through techniques including VM lifecycle management, automated provisioning, and regular audits of virtual machine usage. Tools such as virtualization management software, cloud management platforms, and monitoring tools help organizations gain better visibility into, and control over, their virtual infrastructure. Monitoring application and workload requirements, and establishing policies and procedures for virtual machine provisioning and decommissioning, are crucial for avoiding VM sprawl.

6.3 Backward Compatibility: A Comprehensive Solution

One solution to backward compatibility challenges is to use virtualization technologies, such as containers or hypervisors, that allow older applications to run on newer hardware and software. Another is to use compatibility testing tools that identify potential compatibility issues before they become problems. To ensure that virtual machines can run on different hypervisors or cloud platforms, businesses can implement standardized virtualization architectures that support a wide range of hardware and software configurations.

6.4 Performance Metrics

Businesses employing cloud-based virtualization need reliable network monitoring to guarantee the best possible performance of their virtual workloads and to promptly detect and resolve any problems affecting it. A network monitoring solution that helps locate slow spots, boost speed, and avoid interruptions improves the customer experience on VMs.

6.5 Solutions for Interoperability in a Connected World

Standardized communication protocols and APIs help cloud-based virtualization setups interoperate. Integrating middleware such as enterprise service buses (ESBs) can consolidate system and application management. Businesses can also use cloud-native tools and services, such as Kubernetes for container orchestration or cloud-native databases, for interoperability in virtual machines.

7. Five Leading Providers for Virtualization of VMs

Aryaka

Aryaka is a pioneer of a cloud-first architecture for the delivery of SD-WAN and, more recently, SASE. Using proprietary, integrated technology and services, Aryaka ensures safe connectivity for businesses. The company was named a Gartner 'Voice of the Customer' leader for simplifying the adoption of network and network security solutions for organizations shifting from legacy IT infrastructure to modern deployments.

Gigamon

Gigamon provides a comprehensive network observability solution that enhances the capabilities of observability tools. The solution helps IT organizations ensure security and compliance governance, accelerate root-cause analysis of performance issues, and reduce the operational overhead of managing complex hybrid and multi-cloud IT infrastructures.
Gigamon's solution offers a deep observability pipeline that harnesses actionable network-level intelligence to amplify the power of observability tools.

Liquidware

Liquidware is a software company offering desktop and application virtualization solutions. Its services include user environment management, application layering, desktop virtualization, monitoring and analytics, and migration services. With these services, businesses can improve user productivity, reduce the complexity of managing applications, lower hardware costs, troubleshoot issues quickly, and migrate to virtualized environments efficiently.

Azul

Azul offers businesses Java runtime solutions. Azul Platform Prime is a cloud-based Java runtime platform that provides enhanced performance, scalability, and security, backed by 24/7 technical support and upgrades for Java applications. Azul's services improve the performance, dependability, and security of enterprise Java applications, and the company also provides training and consultancy for Java application development and deployment.

8. Conclusion

Virtualization of VMs significantly boosts business ROI. Integrating virtualization with DevOps practices can streamline application delivery and deployment, with greater automation and continuous integration, helping firms succeed in today's competitive business landscape. We expect further advances in hypervisors and management tools in the coming years, along with an increased focus on security and data protection in virtualized environments and greater integration with other emerging technologies such as containerization and edge computing. As technology advances and new trends emerge, virtualization is set to transform the business landscape by enabling the effective and safe deployment and management of applications. The future of virtualization looks promising as it continues to adapt to the changing needs of organizations, streamlining their operations, reducing their carbon footprint, and improving overall sustainability. As such, virtualization will remain a crucial technology for businesses seeking to thrive in the digital age.
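The server-consolidation savings described in section 4 reduce to a packing question: how many physical hosts are needed to carry a given set of VM workloads? A minimal first-fit-decreasing sketch makes the idea concrete; the vCPU demand figures and the 8-vCPU host capacity are invented for illustration:

```python
def hosts_needed(vm_cpu_demands, host_capacity):
    """First-fit-decreasing estimate of hosts needed to pack VM CPU demands."""
    hosts = []  # remaining free capacity on each host already opened
    for demand in sorted(vm_cpu_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:        # fits on an existing host
                hosts[i] = free - demand
                break
        else:                         # no host had room: open a new one
            hosts.append(host_capacity - demand)
    return len(hosts)

# Ten lightly loaded VMs that would otherwise occupy ten physical servers
demands = [4, 3, 3, 2, 2, 2, 1, 1, 1, 1]
print(hosts_needed(demands, host_capacity=8))  # → 3
```

First-fit-decreasing is a heuristic, not an optimum, but it is a common back-of-the-envelope way to estimate how far a fleet can be consolidated before committing to a migration plan.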

Read More
Virtual Desktop Tools

Why Are Businesses Tilting Towards VDI for Remote Employees?

Article | August 12, 2022

Although remote working became popular during the COVID era, did you know that the technology that gives the best user experience (UX) for remote work was developed more than three decades ago? Citrix, founded in 1989, was one of the first software businesses to provide the ability to execute any program on any device over any connection. In 2006, VMware coined the term "virtual desktop infrastructure" (VDI) to designate its virtualization products. Many organizations created remote work arrangements in response to the COVID-19 pandemic, and the phenomenon will continue even in 2022. Organizations have used a variety of methods to facilitate remote work over the years, and VDI has been one of the most effective, allowing businesses to centralize their IT resources and give users remote access to a consolidated pool of computing capacity.

Why Should Businesses Use VDI for Their Remote Employees?

Companies can find it difficult to scale their operations and grow while operating remotely. VDI can assist these efforts by eliminating some of the downsides of remote work.

Device Agnostic

As long as employees have sufficient internet connectivity, virtual desktops can accompany them anywhere in the world. They can access the virtual desktop from a tablet, phone, laptop, thin client, or Mac.

Reduced Support Costs

Since VDI setups can often be handled by a smaller IT workforce than traditional PC settings, support expenses go down.

Enhanced Security

Data security improves because data never leaves the data center. There is no need to worry about sensitive data sitting on every hard disk in every computer; nothing is stored on the end machine when using the VDI workspace. VDI also safeguards intellectual property when dealing with contractors, partners, or a worldwide workforce.

Comply with Regulations

With virtual desktops, organizational data never leaves the data center. Remote employees with regulatory duties to preserve client or patient data can work normally, because there is no risk of data leaking from a lost or stolen laptop or a retired PC.

Enhanced User Experience

With a solid user experience (UX), employees can work from anywhere. They can connect to all of their business applications and tools from anywhere they choose to work, exactly as if sitting at their office desk, and even answer the phone if they want to.

Closing Lines

One of COVID-19's lessons has been to be prepared for almost anything. IT leaders were probably not planning their investments with a pandemic in mind. Regardless of how the pandemic plays out in the future, the rise of remote work is here to stay. If VDI at scale is to become a permanent feature of business IT strategies, now is the moment to assess where, when, and how your organization can implement the appropriate solutions. Moreover, businesses that use VDI may find that the added flexibility extends their computing refresh cycles.

Read More
Virtual Desktop Strategies, Server Hypervisors

ProtonVPN iOS app now supports the OpenVPN protocol

Article | April 27, 2023

Your ProtonVPN iOS app is now better equipped to fight censorship and offers more flexible connection options with the launch of OpenVPN for iOS. OpenVPN is one of the best VPN protocols because of its flexibility, its security, and its resistance to blocks. You can now switch between the faster IKEv2 protocol and the more stable, censorship-resistant OpenVPN protocol.

Read More
VMware

VMware Tanzu Kubernetes Grid Integrated: A Year in Review

Article | December 14, 2021

The modern application world is advancing at an unprecedented rate. However, the new possibilities these transformations make available don't come without complexities, and IT teams often find themselves under pressure to keep up with the speed of innovation. That's why VMware provides a production-ready container platform that aligns to upstream Kubernetes: VMware Tanzu Kubernetes Grid Integrated (formerly known as VMware Enterprise PKS). By working with VMware, customers can move at the speed their businesses demand without the headache of trying to run their operations alone. Our offerings help customers stay current with the open source community's innovations while having access to the support they need to move forward confidently. Many changes have been made to Tanzu Kubernetes Grid Integrated over the past year that are designed to help customers keep up with Kubernetes advancements, move faster, and enhance security.

Kubernetes updates

The latest version, Tanzu Kubernetes Grid Integrated 1.13, moved to Kubernetes version 1.22 and removed beta APIs in favor of the stable APIs that have since evolved from the betas. Over time, some APIs evolve; beta APIs typically change more often than stable APIs and should therefore be checked before updates occur.
The APIs listed below are no longer served in v1.22, as they have been replaced by more stable API versions:

• Beta versions of the ValidatingWebhookConfiguration and MutatingWebhookConfiguration APIs (the admissionregistration.k8s.io/v1beta1 API versions)
• The beta CustomResourceDefinition API (apiextensions.k8s.io/v1beta1)
• The beta APIService API (apiregistration.k8s.io/v1beta1)
• The beta TokenReview API (authentication.k8s.io/v1beta1)
• Beta API versions of SubjectAccessReview, LocalSubjectAccessReview, and SelfSubjectAccessReview (authorization.k8s.io/v1beta1)
• The beta CertificateSigningRequest API (certificates.k8s.io/v1beta1)
• The beta Lease API (coordination.k8s.io/v1beta1)
• All beta Ingress APIs (the extensions/v1beta1 and networking.k8s.io/v1beta1 API versions)

Containerd support

Tanzu Kubernetes Grid Integrated helps customers eliminate lengthy deployment and management processes with on-demand provisioning, scaling, patching, and updating of Kubernetes clusters. To stay in alignment with the Kubernetes community, Containerd will be used as the default container runtime, although Docker can still be selected using the command-line interface (CLI) if needed.

Networking

Several networking updates have been made as well, including support for Antrea and NSX-T enhancements.

Antrea support

With Tanzu Kubernetes Grid Integrated version 1.10 and later, customers can leverage Antrea on install or upgrade to use Kubernetes network policies. This gives enterprises the best of both worlds: access to the latest innovation from Antrea and world-class support from VMware.

NSX-T enhancements

NSX-T was integrated with Tanzu Kubernetes Grid Integrated to simplify container networking and increase security. This has been enhanced so customers can now choose the policy API as an option on a fresh installation of Tanzu Kubernetes Grid Integrated, which means users will have access to new features available only through the NSX-T policy API.
This feature is currently in beta. In addition, more NSX-T and NSX Container Plug-in (NCP) configuration is possible through network profiles, which provide the benefit of setting configurations through the CLI that persist across lifecycle events.

Storage enhancements

We've made storage operations in our customers' container-native environments easier, too. Customers were seeking a simpler and more secure way to manage the Container Storage Interface (CSI), so we introduced automatic installation of the vSphere CSI driver as a BOSH process beginning with Tanzu Kubernetes Grid Integrated 1.11. Also, as VCP will be deprecated, customers are advised to use the CSI driver; VCP-to-CSI migration is part of Tanzu Kubernetes Grid Integrated 1.12 and is designed to help customers move forward faster.

Enhanced security

Implementing new technologies gives users new capabilities, but it can also introduce new security vulnerabilities if not done correctly. VMware's goal is to help customers move forward with ease and with the confidence of knowing that enhancements don't compromise core security needs.

CIS benchmarks

This year, Tanzu Kubernetes Grid Integrated continued to see improvements that help meet today's high security standards. Meeting the Center for Internet Security (CIS) benchmark standards is vital for Tanzu Kubernetes Grid Integrated. In recent releases, a few Kubernetes-related settings have been adjusted to ensure compliance with CIS requirements:

• kube-apiserver with --kubelet-certificate-authority settings (v1.12)
• kube-apiserver with an --authorization-mode argument that includes Node (v1.12)
• kube-apiserver with a proper --audit-log-maxage argument (v1.13)
• kube-apiserver with a proper --audit-log-maxbackup argument (v1.13)
• kube-apiserver with a proper --audit-log-maxsize argument (v1.13)

Certificate rotations

Tanzu Kubernetes Grid Integrated secures all communication between its control plane components and the Kubernetes clusters it manages using TLS, validated by certificates. Certificate rotation has been simplified in recent releases: customers can now list and update certificates on a cluster-by-cluster basis through the "tkgi rotate-certificates" command. The multistep, manual process was replaced with a single CLI command to rotate NSX-T certificates (available since Tanzu Kubernetes Grid Integrated 1.10) and cluster-by-cluster certificates (available since Tanzu Kubernetes Grid Integrated 1.12).

Hardening of images

Tanzu Kubernetes Grid Integrated keeps OS images, container base images, and software library versions updated to remediate CVEs reported by customers and in the industry. It also continues to use the latest Ubuntu Xenial stemcell versions for node virtual machines. In recent releases and patch versions, dockerd, containerd, runc, telegraf, and nfs-utils have been bumped to the latest stable and secure versions as well. By using Harbor as a private registry management service, customers can also leverage the built-in vulnerability scanning features to discover CVEs in application images. VMware is dedicated to supporting customers with production readiness by enhancing the user experience.
Tanzu Kubernetes Grid Integrated Edition has stayed up to date with the Kubernetes community and provides customers with the support and resources they need to innovate rapidly.

Read More


Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP with enhanced video processing performance to strengthen its presence in data center applications. The newly launched IP series caters to the advanced requirements of next-generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming. The VC9800 series of VPU IP boasts high performance, high throughput, and server-level multi-stream encoding and decoding capabilities. It can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series IP significantly improves video quality with a low memory footprint and low encoding latency. Capable of 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, achieving an efficient transcoding solution. The VC9800 series of VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon’s Graphics Processor Unit (GPU) IP, the subsystem solution is able to deliver enhanced gaming experiences. In addition, the hardware virtualization, super-resolution image enhancement, and AI-enabled encoding functions of this IP series also offer effective solutions for virtual cloud desktops. “VeriSilicon’s advanced video transcoding technology continues leading in Data Center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest Data Centers,” said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. “For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. We’ve also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display.”

About VeriSilicon

VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.

Read More

Backup and Disaster Recovery

Minimize the Cost and Downtime of Disaster With Scale Computing's Business Continuity/Disaster Recovery Planning Service

PR Newswire | October 25, 2023

Scale Computing, a market leader in edge computing, virtualization, and hyperconverged solutions, today announced its Business Continuity/Disaster Recovery (BCDR) Planning Service, designed to help organizations establish a comprehensive, regulated plan for responding to unforeseen downtime. The service provides Scale Computing customers and partners with the tools, guidance, and resources to create a playbook for data backup and recovery, enabling businesses to endure a disaster scenario with minimal loss. Scale Computing also recently announced that it is a finalist for the Business Continuity/Disaster Recovery Project of the Year in the 2023 SDC Awards for its work with Austrian managed service provider GiGaNet and its long-time partner the Zillertaler Gletscherbahn group. Voting for the SDC Awards is open at sdcawards.com/vote until November 10th, 2023. Data breaches are one of the biggest and most costly contributors to downtime for businesses. In 2023, the average cost of a data breach globally reached an all-time high of $4.45 million, a 15.3% increase from 2020. Simultaneously, the average length of business disruption following a ransomware attack in the United States reached 24 days last year, up 60% from just two years prior — a significant increase when downtime costs exceed $300,000 per hour for over 90% of mid-sized and large enterprises. For more than half of those businesses, the hourly outage costs range from $1 million to over $5 million. Recovery from an outage adds additional expense from which many enterprises are unable to bounce back. "Disaster can strike at any time, and every organization needs a consistently regulated playbook for how the business will respond — from action plans to recovery plans for bringing online the mission-critical servers businesses depend on," said Jeff Ready, CEO and co-founder, Scale Computing. 
"Knowing what systems need to be protected, planning for the ability to recover them, and having a full action plan for recovery should be at the forefront of every IT department's agenda, at the beginning of any infrastructure addition. With Scale Computing Platform, the plan for disaster recovery starts before equipment is even put into production, so IT leaders have a plan in place from day one that they can enact to ensure their business stays up and running, with minimal loss, should disaster strike. Our Business Continuity/Disaster Recovery Planning Service enables businesses to proactively classify systems based on their importance and implement a robust action plan, ensuring that our customers' and partners' critical systems are protected, validated, tested, and ready for recovery at any time." Whether facing a minor data loss or a business-wide shutdown, an organization with a well-defined business continuity strategy can minimize financial impact, ensure continuous employee productivity, meet compliance and regulatory requirements, decrease liability obligations, reduce downtime, and minimize the risk of negative exposure. Scale Computing's BCDR Planning Service includes planning, deployment, documentation creation, and disaster recovery testing, covering every aspect needed to keep businesses prepared and resilient. The service is offered to customers of Scale Computing Platform, which brings simplicity, high availability, and scalability together to replace existing infrastructure for running virtual machines with an easy-to-manage, fully integrated platform that allows organizations to run applications regardless of hardware requirements.

About Scale Computing

Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions.
Using patented HyperCore™ technology, Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real-time, enabling applications to achieve maximum uptime, even when local IT resources and staff are scarce. Edge Computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing's products are sold by thousands of value-added resellers, integrators, and service providers worldwide.

Read More

Server Virtualization, VMware

StorMagic Introduces Edge Control Software to Simplify SvSAN Monitoring and Management

Business Wire | October 18, 2023

StorMagic®, solving the world’s edge data problems, today announced the immediate availability of a new Software as a Service (SaaS) tool that allows users to easily monitor and manage all of their SvSAN clusters around the world. StorMagic Edge Control simplifies the process and tools required for day-to-day SvSAN cluster administration. SvSAN customers with multiple locations can significantly reduce the time spent managing their edge sites, whether they are using VMware, Microsoft or KVM hypervisors. “ESG research shows increasing demand for data storage at the edge, which fuels an increased need for monitoring solutions that can help address the complexity of storage at the edge,” said Scott Sinclair, practice director at Enterprise Strategy Group. “SvSAN customers can greatly benefit by adding StorMagic Edge Control into their toolkits; the dashboard views and list formats will make centralized data management much easier and more accessible.” Edge Control delivers centralized administration for SvSAN environments of all sizes. Customers can now manage all SvSAN deployments in any location from a single pane of glass. Dashboard and system views provide a fast but comprehensive status of all of their virtual storage appliances (VSAs), allowing them to keep their environment up to date more easily and react faster as needed. “StorMagic customers of any size can now manage their entire SvSAN estate, whether it’s one site or thousands of sites around the world,” said Bruce Kornfeld, chief marketing and product officer, StorMagic. “Edge Control is particularly interesting for customers who are considering switching from VMware to Microsoft or Linux KVM because SvSAN and Edge Control are both hypervisor agnostic.”

Pricing and Availability

Edge Control version 1.0 is available today from StorMagic. SvSAN customers can download and begin using the software immediately, free of charge.

About StorMagic

StorMagic is solving the world’s edge data problems.
We help organizations store, protect and use data at and from the edge. StorMagic’s solutions ensure data is always protected and available, no matter the type or location, to provide value anytime, anywhere. StorMagic’s storage and security products are flexible, robust, easy to use and cost-effective, without sacrificing enterprise-class features, for organizations with one to thousands of sites.

Read More


Events