Citrix celebrates Windows Virtual Desktop Public Preview

Today in Amsterdam, Citrix joins Microsoft at Ignite | The Tour for the unveiling of the Windows Virtual Desktop public preview. Organizations can leverage Citrix Workspace, including the Citrix Virtual Apps and Desktops service, to extend the benefits of Windows Virtual Desktop, adding robust management capabilities to the new platform. Available exclusively on Azure, Windows Virtual Desktop is the only cloud-based service that delivers multi-session Windows 10 optimized for Office 365 ProPlus. Administrators can integrate the Citrix Virtual Apps and Desktops service with Windows Virtual Desktop, taking advantage of advanced networking capabilities, robust management tools, and high-definition user experience optimizations. Using Citrix, you can manage these new app and desktop workloads alongside existing on-premises deployments for maximum flexibility in your cloud adoption.

Spotlight

Force10 Networks

Force10 Networks develops high-performance data center solutions powered by the industry’s most innovative line of open, standards-based networking hardware and software. The company’s Open Cloud Networking framework gives Web 2.0/portal operators, cloud and hosting providers, and enterprise and special-purpose data center customers new levels of flexibility, performance, scale, and automation, fundamentally changing the economics of data center networking. Force10 Networks operates globally, providing 24x7 service and support to its customer base in more than 60 countries. For more information, visit www.force10networks.com.

OTHER ARTICLES
Virtual Desktop Tools, Server Hypervisors

VMware NSX 3.2 Delivers New, Advanced Security Capabilities

Article | April 28, 2023

It’s an impactful release focused on significant NSX Security enhancements.

Putting a hard shell around a soft core is not a recipe for success in security, but legacy security architectures for application protection have often looked exactly like that: a hard perimeter firewall layer around an application infrastructure that was fundamentally not built with security as a primary concern. VMware NSX Distributed Firewall pioneered the micro-segmentation concept for granular access controls for cloud applications with the initial launch of the product in 2013. The promise of Zero Trust security for applications, the simplicity of deployment, and the ease of achieving internal security objectives made NSX an instant success with security-sensitive customers. Our newest release, NSX-T 3.2, establishes a new marker for securing application infrastructure by introducing significant new features to identify and respond to malware and ransomware attacks in the network, to enhance user identification and L7 application identification capabilities, and, at the same time, to simplify deployment of the product for our customers.

“Modern day security teams need to secure mission-critical infrastructure from both external and internal attacks. By providing unprecedented threat visibility leveraging IDS, NTA, and Network Detection and Response (NDR) capabilities along with granular controls leveraging L4-L7 Firewall, IPS, and Malware Prevention capabilities, NSX 3.2 delivers an incredible security solution for our customers.”
Umesh Mahajan, SVP, GM (Networking and Security Business Unit)

Distributed Advanced Threat Prevention (ATP)

Attackers often use multiple sophisticated techniques to penetrate the network, move laterally within the network in a stealthy manner, and exfiltrate critical data at an appropriate time.
Micro-segmentation solutions focused solely on access control can reduce the attack surface, but they cannot provide the detection and prevention technologies needed to thwart modern attacks. NSX-T 3.2 introduces several new capabilities focused on detection and prevention of attacks inside the network. Of critical note is that these advanced security solutions do not need network taps, separate monitoring networks, or agents inside every workload.

Distributed Malware Prevention

Lastline’s highly regarded dynamic malware analysis technology is now integrated with NSX Distributed Firewall to deliver an industry-first Distributed Malware Prevention solution. Leveraging this integration, a Distributed Firewall embedded within the hypervisor kernel can now identify both known malicious and zero-day malware.

Distributed Behavioral IDS

Whereas earlier versions of NSX Distributed IDPS (Intrusion Detection and Prevention System) delivered primarily signature-based detection of intrusions, NSX-T 3.2 introduces behavioral intrusion detection capabilities as well. Even if specific IDS signatures are not triggered, this capability helps customers know whether a workload is exhibiting behavioral anomalies, such as DNS tunneling or beaconing, that could be a cause for concern.

Network Traffic Analysis (NTA)

For customers interested in baselining network-wide behavior and identifying anomalous behavior at the aggregated network level, NSX-T 3.2 introduces Distributed Network Traffic Analysis (NTA). Network-wide anomalies such as lateral movement, suspicious RDP traffic, and malicious interactions with the Active Directory server can alert security teams to attacks underway and help them take quick remediation actions.

Network Detection and Response (NDR)

Alert overload, and the resulting fatigue, is a real challenge for security teams.
Leveraging advanced AI/ML techniques, the NSX-T 3.2 Network Detection and Response solution consolidates security IOCs from different detection systems, such as IDS, NTA, and malware detection, to provide a “campaign view” that shows specific attacks in play at that point in time. MITRE ATT&CK visualization helps customers see the specific stage in the kill chain of individual attacks, and the “time sequence” view helps them understand the sequence of events that contributed to the attack on the network.

Key Firewall Enhancements

While delivering new Advanced Threat Prevention capabilities is one key emphasis of the NSX-T 3.2 release, providing meaningful enhancements to core firewalling capabilities is an equally critical area of innovation.

Distributed Firewall for VDS Switchports

While NSX-T has thus far supported workloads connected to both overlay-based N-VDS switchports and VLAN-based switchports, customers had to move the VLAN switchports from VDS to N-VDS before a Distributed Firewall could be enforced. With NSX-T 3.2, native VLAN DVPGs are supported as-is, without a move to N-VDS. Effectively, Distributed Security can be achieved seamlessly, without modifying any networking constructs.

Distributed Firewall Workflows in vCenter

With NSX-T 3.2, we are introducing the ability to create and modify Distributed Firewall rules natively within vCenter. For small- to medium-sized VMware customers, this feature simplifies the user experience by eliminating the need for a separate NSX Manager interface.

Advanced User Identification for Distributed and Gateway Firewalls

NSX supported user identity-based access control in earlier releases. With NSX-T 3.2, we are introducing the ability to connect directly to Microsoft Active Directory to support user identity mapping.
In addition, for customers who do not use Active Directory for user authentication, NSX also supports VMware vRealize Log Insight as an additional method of user identity mapping. This enhancement applies to both the NSX Distributed Firewall and the NSX Gateway Firewall.

Enhanced L7 Application Identification for Distributed and Gateway Firewalls

NSX supported Layer-7 application identification-based access control in earlier releases. With NSX-T 3.2, we are enhancing the signature set to about 750 applications. While several perimeter firewall vendors claim a larger set of Layer-7 application signatures, they focus mostly on identifying internet applications (Facebook, for example). Our focus with NSX at this time is on internal applications hosted by enterprises. This enhancement applies to both the NSX Distributed Firewall and the Gateway Firewall.

NSX Intelligence

NSX Intelligence is geared toward delivering unprecedented visibility into all application traffic inside the network and enabling customers to create micro-segmentation policies that reduce the attack surface. It has a processing pipeline that de-duplicates, aggregates, and correlates East-West traffic to deliver in-depth visibility.

Scalability Enhancements for NSX Intelligence

As application infrastructure grows rapidly, it is vital that one’s security analytics platform can grow with it. With the new release, we have rearchitected the platform on which NSX Intelligence runs, moving from a stand-alone appliance to a containerized micro-service architecture powered by Kubernetes. This architectural change future-proofs the Intelligence data lake and allows us to eventually scale the solution out to n-node Kubernetes clusters. Large enterprise customers that need visibility into application traffic can confidently deploy NSX Intelligence and leverage the enhanced scale it supports.
NSX Gateway Firewall

While NSX Distributed Firewall focuses on east-west controls within the network, NSX Gateway Firewall secures ingress and egress traffic into and out of a zone.

Gateway Firewall Malware Detection

NSX Gateway Firewall in the 3.2 release gains significant Advanced Threat Detection capabilities. Gateway Firewall can now identify both known and zero-day malware ingressing or egressing the network. This new capability is based on the Gateway Firewall’s integration with Lastline’s highly regarded dynamic network sandbox technology.

Gateway Firewall URL Filtering

Internal users and applications reaching out to malicious websites is a serious security risk that must be addressed. In addition, enterprises need to limit internet access to comply with corporate internet usage policies. NSX Gateway Firewall in 3.2 introduces the capability to restrict access to internet sites. Access can be limited based on either the category the URL belongs to or the “reputation” of the URL. The URL-to-category and reputation mapping is constantly updated by VMware, so customer intent continues to be enforced automatically even as internet sites themselves change.
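To make the behavioral-detection idea above concrete, beaconing (a workload "phoning home" on a fixed timer) can be spotted heuristically by checking whether contacts to one destination occur at suspiciously regular intervals. This is a simplified illustration of the concept only, not NSX code; the thresholds and data shapes are assumptions for the example.

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, min_events=6, max_jitter_ratio=0.1):
    """Heuristic: many contacts to one destination at near-constant intervals."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    avg = mean(intervals)
    if avg == 0:
        return False
    # Low jitter relative to the mean interval suggests automated beaconing.
    return pstdev(intervals) / avg < max_jitter_ratio

# A host contacting a server every ~60 s is flagged; bursty human traffic is not.
regular = [i * 60.0 for i in range(10)]
bursty = [0, 2, 3, 50, 51, 300, 900, 905, 2000, 2004]
print(looks_like_beaconing(regular))  # True
print(looks_like_beaconing(bursty))   # False
```

Real behavioral IDS engines combine many such signals (payload sizes, DNS entropy, destination reputation) rather than relying on one timing heuristic.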

Read More
Virtual Desktop Tools, Server Hypervisors

Network Virtualization: The Future of Businesses and Networks

Article | June 8, 2023

Network virtualization has emerged as the widely recommended solution for the future of the networking paradigm. Beyond providing a cost-effective, flexible, and secure means of communication, virtualization has the potential to revolutionize networks. Network virtualization is not an all-or-nothing concept: it can help several organizations with differing requirements, or it can provide a range of new advantages for a single enterprise. It is the process of combining a network's physical hardware into a single, virtual network. This is often accomplished by running several virtual guest machines in software containers on a single physical host system.

Network virtualization is the new gold standard for networking, and it is being embraced by enterprises of all kinds globally. By integrating their current network gear into a single virtual network, businesses can reduce operating expenses, automate network and security processes, and lay the groundwork for future growth. Network virtualization also enables organizations to simulate traditional hardware like servers, storage devices, and network resources. The physical network performs basic tasks like packet forwarding, while virtual versions handle more complex activities like networking service management and deployment.

Addressing Network Virtualization Challenges

IT teams may encounter network virtualization challenges that are both technical and non-technical in nature. Let's look at some common challenges and discuss how to overcome them.

Change in Network Architecture

The first big challenge is shifting from an architecture that depends heavily on routers, switches, and firewalls. Instead, these services are detached from conventional hardware and put on hypervisors that virtualize those operations. Virtualized network services are shared, scaled, and moved as required. Migrating current LANs and data centers to a virtualized platform requires careful planning.
This migration involves the following tasks:

Determine how much CPU, compute, and storage capacity will be required to run virtualized network services.
Determine the optimal approach for integrating network resilience and security services.
Determine how the virtualized network services will be rolled out in stages to avoid disrupting business operations.

The key to a successful migration is meticulous preparation by architects who understand the business's network requirements. This involves a thorough examination of existing apps and services, as well as a clear understanding of how data should move across the company most effectively. Moreover, a progressive approach to migration is often the best solution, as it lets IT teams make changes to the virtualization platform without disrupting the whole corporate network.

Network Visibility

Network virtualization can considerably expand the number of logical technology layers that must work together. As a result, traditional network and data center monitoring tools no longer have insight into some of these abstracted layers. In other cases, visibility can be established, but the tools fail to present the information in a way network operators can understand. In either case, deploying and managing modern network visibility tools is typically the best choice; when an issue arises, NetOps personnel are pointed to the specific service layer involved.

Automation and AI

The enhanced level of automation and self-service operation that can be built into a platform is a fundamental aspect of network virtualization. While these capabilities can considerably increase the pace of network upgrades while decreasing management overhead, they require the documentation and implementation of a new set of standards and practices. Understand that prior network architectures were planned and implemented using physical hardware appliances on a hop-by-hop basis.
A virtualized network, on the other hand, employs a centralized control plane to govern and push policies to all sections of the network. Changes can happen more quickly in this model, but the various components must be coordinated to perform their roles in harmony. As a result, network teams should move their attention away from network operations that are already automated; their new responsibility is to ensure that the core automation processes and AI stay in sync to fulfill those automated tasks.

Driving Competitive Edge with Network Virtualization

Virtualization in networking, or the use of virtual machines within an organization, is not a new trend. Even small and medium businesses have realized the benefits of network virtualization, especially when combined with a hosted cloud service provider. Demand for enterprise network virtualization is rising, driven by growing end-user expectations and the proliferation of devices and business tools. These network virtualization benefits can help boost business growth and provide a competitive edge.

Gaining a Competitive Edge: Network Virtualization Benefits

Cost savings on hardware
Faster desktop and server provisioning and deployment
Improved data security and disaster recovery
Increased IT operational efficiency
Small footprint and energy savings

Network Virtualization: The Path to Digital Transformation

Business is at the center of digital transformation, but technology is needed to make it happen. Integrated clouds, modern data centers, digital workplaces, and strengthened data center security are all puzzle pieces, and putting them together requires a variety of products and services deployed cohesively. The cloud revolution is still reshaping IT, transforming how digital content is consumed and delivered, so it should come as no surprise that this shift has changed how we think about networking.
When it comes down to it, the purpose of digital transformation is the same for every company, irrespective of industry: to boost the speed with which you can respond to market changes and evolving business needs, to enhance your ability to embrace and adapt to new technology, and to improve overall security. As businesses realize that the underlying benefit of cloud adoption and enhanced virtualization is not simply cost savings, digital strategies are evolving, becoming more intelligent and successful in the process. Network virtualization is also a path toward the smooth digital transformation of any business.

How does virtualization help accelerate digital transformation?

Combining public and private clouds, with software definition of hardware-based computing, storage, and networking. This could include a hyper-converged infrastructure that integrates unified management with virtualized computing, storage, and networking.
Creating a platform for greater productivity by providing the apps and services users require when and where they need them. This should include simplifying application access and administration as well as unifying endpoint management.
Improving network security and enhancing security flexibility to guarantee that faster speed to market is matched by tighter security.

Virtualization will also help businesses move more quickly and safely, bringing products, and profits, to market faster.

Enhancing Security with Network Virtualization

Security has evolved into an essential component of every network architecture. However, since various areas of the network are often segregated from one another, it can be challenging for network teams to design and enforce network virtualization security standards that apply to the whole network. Zero trust can integrate such network segments and their accompanying virtualization activities. Throughout the network, the zero-trust architecture depends on user and device authentication.
If LAN users wish to access data center resources, they must first be authenticated. A zero-trust environment paired with network virtualization provides the secure connection endpoints need to interact safely. To facilitate these interactions, virtual networks can be spun up and down while retaining the appropriate degree of traffic segmentation. Access policies, which govern which devices can communicate with one another, are a key part of this process. If a device is allowed to access a data center resource, the policy should be understood at both the WAN and campus levels.

Some of the core network virtualization security features are:

Isolation and multitenancy, which are critical features of network virtualization.
Segmentation, which is related to isolation but is applied in a multitier virtual network.
Firewalling technologies that enable segmentation inside virtual networks, a foundation of any network virtualization platform.
Automatic provisioning and context-sharing across virtual and physical security systems.

Investigating the Role of Virtualization in Cloud Computing

In cloud computing, virtualization refers to the creation of virtual resources (such as a virtual server, virtual storage device, virtual network switch, or even a virtual operating system) from a single physical resource of that type, which then appears as several isolated resources or environments that users can consume as if each were a separate physical resource. Virtualization enables the benefits of cloud computing, such as ease of scaling, security, and fluid or flexible resources. If another server is needed, a virtual server is immediately created and deployed; if more memory is needed, the virtual server's configuration is increased to provide the extra RAM. As a result, virtualization is the underlying technology of the cloud computing business model.
The Benefits of Virtualization in Cloud Computing:

Efficient hardware utilization
Improved availability
Quick and simple disaster recovery
Energy savings
Quick and simple setup
Simplified cloud migration

Motivating Factors for the Adoption of Network Virtualization

Demand for enterprise networks continues to climb, owing to rising end-user demands and the proliferation of devices and business software. Thanks to network virtualization, IT organizations are gaining the ability to respond to shifting demands and match their networking capabilities with their virtualized storage and computing resources. In fact, according to a recent SDxCentral report, 88% of respondents believe it is "important" or "mission critical" to implement network virtualization software over the next two to five years. Virtualization is also an excellent option for businesses that employ outsourced IT services, are planning mergers or acquisitions, or must segregate IT teams owing to regulatory compliance.

Reasons to Adopt Network Virtualization:

A business needs speed
Security requirements are rising
Apps can move around
Micro-segmentation
IT automation and orchestration
Reduced hardware dependency and CapEx
Multi-tenancy
Cloud disaster recovery
Improved scalability

Wrapping Up

Network virtualization and cloud computing are the emerging technologies of the future. As CIOs get more actively involved in organizational systems, these new concepts will be implemented in more businesses. As consumer demand for real-time services expands, businesses will be driven to explore network virtualization as the best way to take their networks to the next level. The networking future is here.

FAQ

Why is network virtualization important for business?

By integrating their current network gear into a single virtual network, businesses can reduce operating expenses, automate network and security processes, and set the stage for future growth.
Where is network virtualization used?

Network virtualization can be used in application development and testing to realistically simulate hardware and system software. In application performance engineering, network virtualization allows the modeling of connections among applications, services, dependencies, and end users for software testing.

How does virtualization work in cloud computing?

In short, virtualization enables cloud providers to serve users with their existing physical computing infrastructure. As a simple and direct process, it allows cloud customers to buy only the computing resources they need when they need them, and to maintain those resources cost-effectively as demand grows.
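The zero-trust access policies discussed above (rules governing which devices may talk to which resources) amount to a default-deny lookup: traffic is permitted only when a rule explicitly allows it. The following is an illustrative sketch of that idea only; the policy table and the device and resource names are invented for the example and do not reflect any particular product.

```python
# Minimal sketch of a zero-trust style access-policy check.
# Policy entries and names are hypothetical, for illustration only.
POLICY = {
    ("hr-laptop", "hr-database"): "allow",
    ("hr-laptop", "finance-database"): "deny",
    ("build-server", "artifact-store"): "allow",
}

def is_allowed(device: str, resource: str) -> bool:
    """Default-deny: a pairing is permitted only if a rule explicitly allows it."""
    return POLICY.get((device, resource)) == "allow"

print(is_allowed("hr-laptop", "hr-database"))       # True
print(is_allowed("hr-laptop", "finance-database"))  # False
print(is_allowed("guest-tablet", "hr-database"))    # False (no rule: denied)
```

The important design choice is the last case: an unknown device gets no access by default, which is the essence of zero trust as opposed to perimeter-only models that trust anything already inside the network.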

Read More
Server Hypervisors

Why Are Businesses Tilting Towards VDI for Remote Employees?

Article | September 9, 2022

Although remote working, or working from home, became popular during the COVID era, did you know that the technology that delivers the best user experience (UX) for remote work was developed more than three decades ago? Citrix was founded in 1989 as one of the first software businesses to provide the ability to execute any program on any device over any connection. In 2006, VMware coined the term "virtual desktop infrastructure" (VDI) to designate its virtualization products.

Many organizations created remote work arrangements in response to the COVID-19 pandemic, and the phenomenon has continued. Organizations have used a variety of methods to facilitate remote work over the years. VDI has been one of the most effective, allowing businesses to centralize their IT resources and give users remote access to a consolidated pool of computing capacity.

Reasons Why Businesses Should Use VDI for Their Remote Employees

Companies can find it difficult to scale their operations and grow while operating remotely. VDI can assist these efforts by eliminating some of the downsides of remote work.

Device Agnostic

As long as employees have sufficient internet connectivity, their virtual desktops can accompany them across the world. They can access the virtual desktop from a tablet, phone, laptop, thin client, or Mac.

Reduced Support Costs

Since VDI setups can often be handled by a smaller IT workforce than traditional PC environments, support expenses go down.

Enhanced Security

Data security improves because data never leaves the data center. There is no need to worry about sensitive data sitting on every hard disk in every computer: nothing is stored on the end machine while using the VDI workspace. VDI also safeguards intellectual property when dealing with contractors, partners, or a worldwide workforce.

Comply with Regulations

With virtual desktops, organizational data never leaves the data center.
Remote employees with regulatory obligations to protect client or patient data can work this way because there is no risk of data leaking from a lost or stolen laptop or a retired PC.

Enhanced User Experience

With a solid user experience (UX), employees can work from anywhere. They can connect to all of their business applications and tools from wherever they choose to call their workplace, exactly as if sitting at their office desk, and even answer the phone if they really want to.

Closing Lines

One of COVID-19's lessons has been to be prepared for almost anything. IT leaders were probably not planning their investments with a pandemic in mind. Regardless of how the pandemic plays out in the future, the rise of remote work is here to stay. If VDI at scale is to become a permanent feature of business IT strategies, now is the moment to assess where, when, and how your organization can implement the appropriate solutions. Moreover, businesses that use VDI may find that the added flexibility extends their computing refresh cycles.

Read More
Virtual Desktop Strategies, Server Hypervisors

Efficient Management of Virtual Machines using Orchestration

Article | April 27, 2023

Contents

1. Introduction
2. What is Orchestration?
3. How Does Orchestration Help Optimize VM Efficiency?
3.1 Resource Optimization
3.2 Dynamic Scaling
3.3 Faster Deployment
3.4 Improved Security
3.5 Multi-Cloud Management
3.6 Improved Collaboration
4. Considerations while Orchestrating VMs
4.1 Together Hosting of Containers and VMs
4.2 Automated Backup and Restore for VMs
4.3 Ensure Replication for VMs
4.4 Setup Data Synchronization for VMs
5. Conclusion

1. Introduction

Orchestration is a superset of automation. Cloud orchestration goes beyond automation, providing coordination between multiple automated activities. It is increasingly essential due to the growth of containerization, which facilitates scaling applications across clouds, both public and private. The demand for both public cloud orchestration and hybrid cloud orchestration has increased as businesses increasingly adopt a hybrid cloud architecture. The quick adoption of containerized, micro-services-based apps that communicate over APIs has fueled the desire for automation in deploying and managing applications across the cloud. This increase in complexity has created a need for VM orchestration that can manage numerous dependencies across various clouds with policy-driven security and management capabilities.

2. What is Orchestration?

Orchestration refers to the process of automating, coordinating, and managing complex systems, workflows, or processes. It typically entails the use of automation tools and platforms to streamline and coordinate the deployment, configuration, and management of applications and services across different environments, including development, testing, staging, and production. Orchestration tools in cloud computing can be used to automate the deployment and administration of containerized applications across multiple servers or clusters. These tools can help automate tasks such as container provisioning, scaling, load balancing, and health monitoring, making it easier to manage complex application environments. Orchestration lets organizations automate and streamline their workflows, reduce errors and downtime, and improve the efficiency and scalability of their operations.

3. How Does Orchestration Help Optimize VM Efficiency?

Orchestration offers enhanced visibility into the resources and processes in use, which helps prevent VM sprawl and helps organizations trace resource usage by department, business unit, or individual user.

Fig. Global Market for VNFO by Virtualization Methodology 2022-27 ($ million) (Source: Insight Research)

As the figure shows, VMs have established a solid legacy that will continue to be relevant in the near- to mid-term future. Here are six ways in which orchestration helps in the efficient management of VMs:

3.1 Resource Optimization

Orchestration helps optimize resource utilization by automating the provisioning and de-provisioning of VMs, which allows for efficient use of computing resources. Using orchestration tools, IT teams can set up rules and policies for automatically scaling VMs based on criteria such as CPU utilization, memory usage, network traffic, and application performance metrics. Orchestration also enables advanced techniques such as predictive analytics, machine learning, and artificial intelligence to optimize resource utilization. These technologies can analyze historical data and identify patterns in workload demand, allowing the orchestration system to predict future resource needs and automatically provision or de-provision resources accordingly.

3.2 Dynamic Scaling

Orchestration helps automate the scaling of VMs, enabling organizations to quickly and easily adjust their computing resources based on demand.
It enables IT teams to configure scaling policies and rules for virtual machines based on resource utilization, network traffic, and performance metrics. When workload demand exceeds a certain threshold, the orchestration system can autonomously provision additional virtual machines to accommodate the increased load; when demand decreases, it can de-provision VMs to free up resources and reduce costs.

3.3. Faster Deployment

Orchestration can automate the deployment of VMs, reducing the time and effort required to provision new resources. By leveraging technologies such as automation, scripting, and APIs, orchestration further streamlines the VM deployment process: IT teams can define workflows and processes that are executed automatically by scripts, reducing the time and effort required to deploy new resources. In addition, orchestration can integrate with other IT management tools and platforms, such as cloud management platforms, configuration management tools, and monitoring systems, letting IT teams leverage their capabilities and services to streamline VM deployment and improve efficiency.

3.4. Improved Security

Orchestration can enhance the security of VMs by automating the deployment of security patches and updates. It also helps ensure VMs are deployed with the appropriate security configurations and settings, reducing the risk of misconfiguration and vulnerability: IT teams can define standard security templates and configurations for VMs, which are applied automatically during deployment. Furthermore, orchestration can integrate with other security tools and platforms, such as intrusion detection systems and firewalls, to provide a comprehensive security solution, automating the deployment of security policies and rules so that workloads remain protected against various threats.

3.5. Multi-Cloud Management

Orchestration provides a single pane of glass for VM management, enabling IT teams to monitor and manage VMs across multiple cloud environments from one platform. This simplifies management and reduces complexity, enabling IT teams to respond more quickly and effectively to changing business requirements, and it helps ensure consistency and compliance across cloud environments. Orchestration can also integrate with other multi-cloud management tools, such as cloud brokers and cloud management platforms, to provide a comprehensive solution for managing VMs across multiple clouds.

3.6. Improved Collaboration

Orchestration streamlines collaboration by providing a centralized repository for storing and sharing information related to VMs. It also automates many of the routine tasks associated with VM management, reducing the workload for IT teams and freeing up time for more strategic initiatives. In addition, orchestration provides advanced analytics and reporting capabilities, enabling IT teams to track performance, identify bottlenecks, and optimize resource utilization, giving teams a data-driven basis for working together on performance issues.

4. Considerations While Orchestrating VMs

4.1. Hosting Containers and VMs Together

Containers and virtual machines can exist together within a single infrastructure and be managed by the same platform. This allows various projects to be hosted from a unified management point and lets teams adapt gradually based on current needs and opportunities, giving them greater flexibility to host and administer applications using both cutting-edge technologies and established standards and methods.
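The unified management point for mixed workloads can be pictured with a small sketch. The `Workload` and `Inventory` names here are hypothetical, invented for this example; they stand in for whatever abstraction a real platform provides:

```python
# Sketch of one registry that tracks VMs and containers uniformly.
# All names are illustrative, not a vendor API.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    kind: str   # "vm" or "container"
    host: str

class Inventory:
    """A single management point for mixed workloads."""
    def __init__(self) -> None:
        self._items: list[Workload] = []

    def register(self, workload: Workload) -> None:
        self._items.append(workload)

    def by_kind(self, kind: str) -> list[str]:
        return [w.name for w in self._items if w.kind == kind]

inv = Inventory()
inv.register(Workload("db01", "vm", "host-a"))
inv.register(Workload("web-1", "container", "host-a"))
print(inv.by_kind("vm"))  # ['db01']
```

The point of the sketch is the shared interface: because both workload types flow through one registry, the same policies (placement, reporting, lifecycle) can be applied to either without separate tooling.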
Moreover, as there is no need to invest in distinct physical servers for virtual machines and containers, this approach can be a great way to maximize infrastructure utilization, resulting in lower TCO and higher ROI. In addition, unified management drastically simplifies processes, requiring fewer human resources and less time.

4.2. Automated Backup and Restore for VMs -- Minimize downtime and reduce the risk of data loss

Organizations should set up automated backup and restore processes for virtual machines, ensuring critical data and applications are protected during a disaster. This involves scheduling regular backups of virtual machines to a secondary location or cloud storage, and setting up automated restore processes to quickly recover virtual machines during an outage or disaster.

4.3. Ensure Replication for VMs -- Keep data and applications available and accessible in the event of a disaster

Organizations should set up replication processes for their VMs, allowing them to be automatically copied to a secondary location or cloud infrastructure. This ensures that critical applications and data remain available even during a catastrophic failure at the primary site.

4.4. Set Up Data Synchronization for VMs -- Improve the overall resilience and availability of the system

VM orchestration tools should be used to set up data synchronization processes between virtual machines, ensuring that data is consistent and up to date across multiple locations. This is particularly important in scenarios where data needs to be accessed quickly from various locations, such as in distributed environments.

5. Conclusion

Orchestration provides disaster recovery and business continuity, automatic scalability of distributed systems, and inter-service configuration. Cloud orchestration is becoming significant due to the advent of containerization, which permits scaling applications across clouds, both public and private.
We expect continued growth and innovation in the field of VM orchestration, with new technologies and tools emerging to support more efficient and effective management of virtual machines in distributed environments. As organizations increasingly rely on cloud-based infrastructures and distributed systems, VM orchestration will continue to play a vital role in enabling businesses to operate smoothly and recover quickly from disruptions. It will remain a critical component of disaster recovery and high-availability strategies for years to come as organizations continue to rely on virtualization technologies to power their operations and drive innovation.
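To make the backup cadence discussed in section 4.2 concrete, a due-for-backup check might look like the following sketch. The 24-hour interval is an assumption for illustration, not a recommendation:

```python
# Illustrative check for an automated VM backup schedule (section 4.2).
# The interval and timestamps below are assumptions for the sketch.
from datetime import datetime, timedelta

def backup_due(last_backup: datetime, now: datetime,
               interval: timedelta = timedelta(hours=24)) -> bool:
    """Return True when the last backup is older than the allowed interval."""
    return now - last_backup >= interval

last = datetime(2024, 1, 8, 2, 0)
print(backup_due(last, datetime(2024, 1, 9, 3, 0)))  # True: more than 24h elapsed
```

An orchestration tool would run a check like this per VM on a schedule and trigger the backup job (and, for section 4.3, the replication job) whenever it returns True.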


Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP with enhanced video processing performance to strengthen its presence in data center applications. The newly launched IP series caters to the advanced requirements of next generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming. The VC9800 series of VPU IP boasts high performance, high throughput, and server-level multi-stream encoding and decoding capabilities. It can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series improves video quality significantly with a low memory footprint and low encoding latency. Capable of supporting 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, thus achieving an efficient transcoding solution. The VC9800 series of VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon’s Graphics Processor Unit (GPU) IP, the subsystem solution is able to deliver enhanced gaming experiences. In addition, the hardware virtualization, super resolution image enhancement, and AI-enabled encoding functions of this IP series also offer effective solutions for virtual cloud desktops. “VeriSilicon’s advanced video transcoding technology continues leading in Data Center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest Data Centers,” said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. “For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. 
We’ve also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display.” About VeriSilicon VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.


Server Virtualization

Panasonic Automotive Introduces Neuron High-Performance Compute (HPC) to Advance to a Software-Defined Mobility Future

PR Newswire | January 09, 2024

Panasonic Automotive Systems Company of America, a tier-one automotive supplier and a division of Panasonic Corporation of North America, announced its High-Performance Compute (HPC) system. Named Neuron, this innovation addresses the rapidly evolving mobility needs anticipated for software-defined vehicle advancements. As vehicles become more software reliant, vehicle systems must support the extended software lifecycle by enabling software upgrades and prolonging the supporting hardware capability. Cars rely on hardware and software compute platforms to process, share, sense, and derive insights to handle functions for assisted driving. Panasonic Automotive's Neuron HPC allows for not only software updates and upgrades but also hardware upgrades across platform lifecycles. The Neuron HPC can aggregate multiple computing zones to reduce the cost, weight and integration complexity of the vehicle by removing redundant components. Panasonic Automotive's design supports effortless up-integration with high-performance and heavy data input processing capability. Importantly, the design is upgradeable, scalable and future-proof across today's evolving in-vehicle platforms. Neuron HPC Architecture & Design Panasonic Automotive's High Performance Compute architecture could reduce the number of distributed electronic control units (ECUs) by up to 80% – allowing for faster, lighter, cross-domain computing for real-time, cross-functional communications. The Neuron HPC design is suited for any mobility platform including internal combustion engine, hybrid, fuel cell or electric vehicles. "In collaboration with OEMs, Panasonic Automotive has designed and met some of the largest central compute platform challenges in the industry in order to make the driving experience evolve with technology," said Andrew Poliak, CTO, Panasonic Automotive Systems Company of America. 
"Neuron maximizes performance, safety and innovation over the entire ownership of the consumer's vehicle and enables OEMs with a future-proof SDV platform for ensuing generations of mobility needs." Key Systems, UX Features & Technical Benefits With a streamlined design, the Neuron HPC incorporates up-integration capability by consolidating multiple ECUs into one centralized nucleus to handle all levels of ADAS, chassis, body, and in-cabin infotainment features. About Panasonic Automotive Systems Company of America  Panasonic Automotive Systems Company of America is a division company of Panasonic Corporation of North America and is a leading global supplier of automotive infotainment and connectivity system solutions. Panasonic Automotive Systems Company of America acts as the North American affiliate of Panasonic Automotive Systems Co., Ltd., which coordinates global automotive. Panasonic Automotive Systems Company of America is headquartered in Peachtree City, Georgia, with sales, marketing and engineering operations in Farmington Hills, Mich. About Panasonic Corporation of North America Newark, NJ-based Panasonic Corporation of North America is committed to creating a better life and a better world by enabling its customers through innovations in Sustainable Energy, Immersive Entertainment, Integrated Supply Chains and Mobility Solutions. The company is the principal North American subsidiary of Osaka, Japan-based Panasonic Corporation. One of Interbrand's Top 100 Best Global Brands of 2023, Panasonic is a leading technology partner and integrator to businesses, government agencies and consumers across the region.


Server Virtualization

AELF Partners with ChainsAtlas to Pioneer Interoperability in Blockchain

PR Newswire | January 09, 2024

aelf is advancing cross-chain interoperability through a strategic partnership with ChainsAtlas. By utilising ChainsAtlas' innovative virtualisation technology, aelf will enable decentralised applications (dApps) from diverse blockchains to seamlessly migrate and integrate into the aelf blockchain, regardless of the dApps' smart contract specifications. This collaboration marks a significant step towards a globally interconnected and efficient blockchain ecosystem, breaking down the silos between blockchains. Khaniff Lau, Business Development Director at aelf, shares, "The strategic partnership with ChainsAtlas is a significant step towards realising our vision of a seamlessly interconnected blockchain world. With this integration, aelf is set to become a hub for cross-chain activities, enhancing our ability to support a wide array of dApps, digital assets, and Web2 apps. This collaboration is not just about technology integration; it's about shaping the future of how services and products on blockchains interact and operate in synergy." Jan Hanken, Co-founder of ChainsAtlas, says, "ChainsAtlas was always built to achieve two major goals: to make blockchain development accessible to a broad spectrum of developers and entrepreneurs and, along that path, to pave the way for a truly omnichain future." "By joining forces with aelf, we are bringing that visionary future much closer to reality. As we anticipate the influx of creativity from innovators taking their first steps into the world of Web3 on aelf, driven by ChainsAtlas technology, we are excited to see these groundbreaking ideas come to life," adds Hanken. The foundation for true cross-chain interoperability is being built as aelf integrates ChainsAtlas' Virtualization Unit (VU), enabling the aelf blockchain to accommodate both EVM and non-EVM digital assets. 
This cross-chain functionality is accomplished through ChainsAtlas' virtualisation technology, allowing aelf to interpret and execute smart contracts written in other languages supported by ChainsAtlas, while also establishing state transfer mechanisms that facilitate seamless data and asset flow between aelf and other blockchains. Through this partnership, aelf blockchain's capabilities will be enhanced as it is able to support a more comprehensive range of dApps and games, and developers from diverse coding backgrounds will now be empowered to build on aelf blockchain. This partnership will also foster increased engagement within the Web3 community as users can gain access to a more diverse range of digital assets on aelf. Looking ahead, the partnership between aelf and ChainsAtlas will play a pivotal role in advancing the evolution of aelf's sidechains by enabling simultaneous execution of program components across multiple VUs on different blockchains. About aelf aelf is a high-performance Layer 1 blockchain featuring multi-sidechain technology for unlimited scalability. The aelf blockchain is designed to power the development of Web3 and support its continuous advancement into the future. Founded in 2017 with its global hub based in Singapore, aelf is one of the pioneers of the mainchain-sidechain architecture concept. Incorporating key foundational components, including AEDPoS, aelf's variation of a Delegated Proof-of-Stake (DPoS) consensus protocol; parallel processing; peer-to-peer (P2P) network communication; cross-chain bridges, and a dynamic side chain indexing mechanism, aelf delivers a highly efficient, safe, and modular ecosystem with high throughput, scalability, and interoperability. aelf facilitates the building, integrating, and deploying of smart contracts and decentralised apps (dApps) on its blockchain with its native C# software development kit (SDK) and SDKs in other languages, including Java, JS, Python, and Go. 
aelf's ecosystem also houses a range of dApps to support a flourishing blockchain network. aelf is committed to fostering innovation within its ecosystem and remains dedicated to driving the development of Web3 and the adoption of blockchain technology. About ChainsAtlas ChainsAtlas introduces a new approach to Web3 infrastructure, blending multiple blockchain technologies and smart contract features to create a unified, efficient processing network. Its core innovation lies in virtualization-enabled smart contracts, allowing consistent software operation across different blockchains. This approach enhances decentralized applications' complexity and reliability, promoting easier integration of existing software into the blockchain ecosystem. The team behind ChainsAtlas, driven by the transformative potential of blockchain, aims to foster global opportunities and equality. Their commitment to building on existing blockchain infrastructure marks a significant step towards a new phase in Web3, where advanced and reliable decentralized applications become the norm, setting new standards for the future of decentralized networks.
