Compuware's 'Mainframe Renaissance' Continues with Topaz Release and MVS Acquisition

Compuware stepped up its DevOps-on-the-mainframe campaign this week with the release of Topaz for Total Test, which brings Java-like unit testing to COBOL applications, and news that it has acquired MVS Solutions, maker of the ThruPut Manager mainframe batch automation technology.

Compuware began defining what it calls the "mainframe renaissance" last year with the acquisitions of the Itegration mainframe SCM practice, the ISPW BenchMark Technologies mainframe source-code and release-automation solution, and the Standardware COPE IMS virtualization technology, as well as a new partnership with Software Engineering of America.

Spotlight

Hashmap

At Hashmap, our team of innovative technologists and domain experts helps accelerate the value of data, cloud, IIoT/IoT, and AI/ML for the community and our clients by creating smart, flexible, and high-value service offerings that work across industries. No matter what your needs are, our team of experts will go out of their way to customize our offerings to your demands. From start to finish, we will take you through every step of the process, all the way to a stunning result.

OTHER ARTICLES
Virtual Desktop Strategies, Server Hypervisors

VMware NSX 3.2 Delivers New, Advanced Security Capabilities

Article | April 27, 2023

It's an impactful release focused on significant NSX security enhancements.

Putting a hard shell around a soft core is not a recipe for success in security, but legacy security architectures for application protection have often looked exactly like that: a hard perimeter firewall layer around an application infrastructure that was fundamentally not built with security as a primary concern. VMware NSX Distributed Firewall pioneered the micro-segmentation concept for granular access controls for cloud applications with the initial launch of the product in 2013. The promise of Zero Trust security for applications, the simplicity of deployment, and the ease of achieving internal security objectives made NSX an instant success with security-sensitive customers. Our newest release, NSX-T 3.2, establishes a new marker for securing application infrastructure by introducing significant new features to identify and respond to malware and ransomware attacks in the network, to enhance user identification and L7 application identification capabilities, and, at the same time, to simplify deployment of the product for our customers.

"Modern day security teams need to secure mission-critical infrastructure from both external and internal attacks. By providing unprecedented threat visibility leveraging IDS, NTA, and Network Detection and Response (NDR) capabilities, along with granular controls leveraging L4-L7 firewall, IPS, and malware prevention capabilities, NSX 3.2 delivers an incredible security solution for our customers," said Umesh Mahajan, SVP and GM, Networking and Security Business Unit.

Distributed Advanced Threat Prevention (ATP)

Attackers often use multiple sophisticated techniques to penetrate the network, move laterally within it in a stealthy manner, and exfiltrate critical data at an appropriate time. Micro-segmentation solutions focused solely on access control can reduce the attack surface, but they cannot provide the detection and prevention technologies needed to thwart modern attacks. NSX-T 3.2 introduces several new capabilities focused on detecting and preventing attacks inside the network. Of critical note, these advanced security solutions do not need network taps, separate monitoring networks, or agents inside each and every workload.

Distributed Malware Prevention

Lastline's highly reputed dynamic malware analysis technology is now integrated with NSX Distributed Firewall to deliver an industry-first distributed malware prevention solution. Leveraging this integration, a Distributed Firewall embedded within the hypervisor kernel can now identify both "known malicious" and "zero day" malware.

Distributed Behavioral IDS

Whereas earlier versions of NSX Distributed IDPS (Intrusion Detection and Prevention System) delivered primarily signature-based detection of intrusions, NSX 3.2 introduces behavioral intrusion detection capabilities as well. Even if specific IDS signatures are not triggered, this capability helps customers know whether a workload is seeing behavioral anomalies, such as DNS tunneling or beaconing, that could be a cause for concern.

Network Traffic Analysis (NTA)

For customers interested in baselining network-wide behavior and identifying anomalous behavior at the aggregated network level, NSX-T 3.2 introduces Distributed Network Traffic Analysis (NTA).
Network-wide anomalies such as lateral movement, suspicious RDP traffic, and malicious interactions with the Active Directory server can alert security teams to attacks underway and help them take quick remediation action.

Network Detection and Response (NDR)

Alert overload, and the resulting fatigue, is a real challenge for security teams. Leveraging advanced AI/ML techniques, the NSX-T 3.2 Network Detection and Response solution consolidates security IOCs from different detection systems, such as IDS, NTA, and malware detection, to provide a "campaign view" that shows the specific attacks in play at that point in time. MITRE ATT&CK visualization helps customers see the specific stage in the kill chain of individual attacks, and the "time sequence" view helps them understand the sequence of events that contributed to the attack on the network.

Key Firewall Enhancements

While delivering new Advanced Threat Prevention capabilities is one key emphasis of the NSX-T 3.2 release, providing meaningful enhancements to core firewalling capabilities is an equally critical area of innovation.

Distributed Firewall for VDS Switchports

While NSX-T has thus far supported workloads connected to both overlay-based N-VDS switchports and VLAN-based switchports, customers had to move VLAN switchports from VDS to N-VDS before a Distributed Firewall could be enforced. With NSX-T 3.2, native VLAN DVPGs are supported as-is, without having to move to N-VDS. Effectively, distributed security can be achieved seamlessly without modifying any networking constructs.

Distributed Firewall Workflows in vCenter

With NSX-T 3.2, we are introducing the ability to create and modify Distributed Firewall rules natively within vCenter. For small- to medium-sized VMware customers, this simplifies the user experience by eliminating the need for a separate NSX Manager interface.

Advanced User Identification for Distributed and Gateway Firewalls

NSX supported user identity-based access control in earlier releases. With NSX-T 3.2, we are introducing the ability to connect directly to Microsoft Active Directory to support user identity mapping. In addition, for customers who do not use Active Directory for user authentication, NSX also supports VMware vRealize Log Insight as an additional method of user identity mapping. This enhancement applies to both the NSX Distributed Firewall and the NSX Gateway Firewall.

Enhanced L7 Application Identification for Distributed and Gateway Firewalls

NSX supported Layer 7 application identification-based access control in earlier releases. With NSX-T 3.2, we are expanding the signature set to about 750 applications. While several perimeter firewall vendors claim a larger set of Layer 7 application signatures, they focus mostly on identifying internet applications (Facebook, for example). Our focus with NSX at this time is on internal applications hosted by enterprises. This enhancement applies to both the NSX Distributed Firewall and the Gateway Firewall.

NSX Intelligence

NSX Intelligence is geared toward delivering unprecedented visibility into all application traffic inside the network and enabling customers to create micro-segmentation policies to reduce the attack surface. It has a processing pipeline that de-duplicates, aggregates, and correlates east-west traffic to deliver in-depth visibility.
Scalability Enhancements for NSX Intelligence

As application infrastructure grows rapidly, it is vital that one's security analytics platform can grow with it. With the new release, we have rearchitected the platform upon which NSX Intelligence runs, moving from a stand-alone appliance to a containerized microservice architecture powered by Kubernetes. This architectural change future-proofs the Intelligence data lake and allows us to eventually scale the solution out to n-node Kubernetes clusters. Large enterprise customers that need visibility into application traffic can confidently deploy NSX Intelligence and leverage the enhanced scale it supports.

NSX Gateway Firewall

While NSX Distributed Firewall focuses on east-west controls within the network, NSX Gateway Firewall is used to secure ingress and egress traffic into and out of a zone.

Gateway Firewall Malware Detection

NSX Gateway Firewall in the 3.2 release gains significant advanced threat detection capabilities. It can now identify both known and zero-day malware ingressing or egressing the network. This new capability is based on the Gateway Firewall's integration with Lastline's highly reputed dynamic network sandbox technology.

Gateway Firewall URL Filtering

Internal users and applications reaching out to malicious websites is a huge security risk that must be addressed. In addition, enterprises need to limit internet access to comply with corporate internet usage policies. NSX Gateway Firewall in 3.2 introduces the capability to restrict access to internet sites. Access can be limited based on either the category the URL belongs to or the reputation of the URL. The URL-to-category and reputation mapping is constantly updated by VMware, so customer intent is enforced automatically even as internet sites themselves change.
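
The policy model described above is driven by a declarative REST API, so micro-segmentation rules can be managed as code rather than clicked together. Below is a minimal sketch of creating a Distributed Firewall rule through the NSX-T Policy API with Python; the manager address, credentials, group paths, and policy names are placeholders, and the endpoint path and payload shape reflect the general NSX-T Policy API pattern, so verify them against the official NSX-T 3.2 API documentation before use.

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # placeholder address
AUTH = ("admin", "password")                 # use real credential/cert handling

# Hypothetical intent: allow only HTTPS from the "web" group to the "app" group.
# Path and payload follow the declarative NSX-T Policy API pattern:
# PATCH /policy/api/v1/infra/domains/{domain}/security-policies/{policy}
policy = {
    "resource_type": "SecurityPolicy",
    "category": "Application",
    "rules": [
        {
            "resource_type": "Rule",
            "display_name": "web-to-app-https",
            "source_groups": ["/infra/domains/default/groups/web"],
            "destination_groups": ["/infra/domains/default/groups/app"],
            "services": ["/infra/services/HTTPS"],
            "action": "ALLOW",
        }
    ],
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/domains/default/security-policies/web-app-policy",
    json=policy,
    auth=AUTH,
    verify=False,  # lab-only shortcut; validate certificates in production
)
resp.raise_for_status()
```

Because the API is intent-based, re-sending the same policy document converges to the same state, which makes this style of rule management practical to keep in version control.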

Read More
Server Hypervisors

Rising Importance of Network Virtualization

Article | May 18, 2023

Network virtualization combines network resources to integrate several physical networks, segment a network, or construct software-defined networks among VMs. Using network virtualization, IT teams can create numerous separate virtual networks, and virtual networks can be added and scaled without changing hardware. Teams can stand up logical networks more rapidly in response to business needs, and this adaptability improves service delivery, efficiency, and control.

Importance of Network Virtualization

Network virtualization entails developing new rules for the delivery of network services. This involves software-defined data centers (SDDC), cloud computing, and edge computing. Virtualization helps transform networks from rigid, wasteful, and static to optimized, agile, and dynamic. To ensure agility and speed, modern virtual networks must keep up with the needs of cloud-hosted, decentralized applications while addressing cyberthreats. Network virtualization lets you deploy and upgrade applications in minutes, eliminating the time otherwise spent setting up infrastructure to accommodate them.

What Is the Process of Network Virtualization?

Several network functions that were previously performed manually on hardware are automated through network virtualization. Network managers can construct, maintain, and provision networks programmatically in software while employing the hardware as a packet-forwarding backplane. Physical network resources, such as virtual private networks (VPNs), load balancing, firewalling, routing, and switching, are pooled and delivered in software. All that is required of the hardware, or physical network, is Internet Protocol (IP) packet forwarding. Individual workloads, such as virtual machines, can then access network services that have been distributed to a virtual layer. Several kinds of network virtualization platforms are available; the best of them enable network administrators to manage all parts of a network from a single point of access.

Closing Lines

Network virtualization will remain a critical component in both business and carrier network architectures. Future network virtualization projects will inevitably incorporate zero trust, automation, and edge and cloud computing.
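
To make the "programmatically in software" point concrete, here is a small, intentionally generic sketch: it calls a hypothetical SDN controller's REST API to create two isolated overlay networks without touching hardware. The controller URL, endpoint, payload fields, and token are all invented for illustration; real platforms (NSX, OpenDaylight, and others) each have their own APIs.

```python
import requests

# Hypothetical SDN controller endpoint and token; real controllers differ
# in paths and payload schemas.
CONTROLLER = "https://sdn-controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

def create_virtual_network(name: str, subnet: str) -> dict:
    """Request a new logical overlay network; no hardware change involved."""
    payload = {"name": name, "subnet": subnet, "type": "overlay"}
    resp = requests.post(f"{CONTROLLER}/networks", json=payload,
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Stand up isolated networks for two environments in seconds.
for env, cidr in [("dev", "10.10.0.0/24"), ("staging", "10.20.0.0/24")]:
    net = create_virtual_network(f"{env}-net", cidr)
    print(net.get("id"), "created for", env)
```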

Read More
Server Hypervisors

How to Start Small and Grow Big with Data Virtualization

Article | September 9, 2022

Why Should Companies Care about Data Virtualization?

Data is everywhere. With each passing day, companies generate more data than ever before. What exactly can they do with all this data? Is it just a matter of storing it? Or should they manage and integrate their data from the various sources? How can they store, manage, integrate, and utilize their data to gain information of critical value to their business? As they say, knowledge is power, but knowledge without action is useless. This is where the Denodo Platform comes in.

The Denodo Platform gives companies the flexibility to evolve their data strategies, migrate to the cloud, or logically unify their data warehouses and data lakes, without affecting business. This powerful platform offers a variety of subscription options. For example, companies often start with individual projects on a Denodo Professional subscription, but in a short period of time they add more and more data sources and move on to other Denodo subscriptions such as Denodo Enterprise or Denodo Enterprise Plus. The upgrade process is very easy; in fact, it can be done in less than a day once the cloud marketplace is chosen (Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)). In as little as six weeks, companies can realize real business benefits from managing and utilizing their data effectively.

A Bridging Layer

Data virtualization has been around for quite some time now. Denodo's founders, Angel Viña and Alberto Pan, have been involved in data virtualization since as far back as the 1990s. If you are not familiar with data virtualization, here is a quick summary. Data virtualization is the cornerstone of a logical data architecture, whether a logical data warehouse, logical data fabric, data mesh, or even a data hub. All of these architectures are best served by our principles: Combine (bring together all your data sources), Connect (into a logical single view), and Consume (through standard connectors to your favorite BI and data science tools, or through our easy-to-use, robust APIs). Data virtualization is the bridge that joins multiple data sources to fuel analytics. It is also the logical data layer that effectively integrates data silos across disparate systems, manages unified data for centralized security, and delivers it to business users in real time.

Economic Benefits in Less Than Six Weeks with Data Virtualization?

How can companies benefit from choosing data virtualization as a data management solution in such a short duration? To answer this question, below are some interesting KPIs from the recently released Forrester study on the Total Economic Impact of data virtualization. Companies that have implemented data virtualization have seen an 83% increase in business user productivity. This is mainly due to the business-centric way a data virtualization platform is delivered: business users get an easy-to-access, democratized interface to the data they need. The second KPI to note is a 67% reduction in development resources. With data virtualization, you connect to the data; you do not copy it. Once it is set up, the need for data integration engineers is significantly reduced, as data remains in the source location and is not copied around the enterprise.
Finally, companies report a 65% improvement in data access speeds over more traditional approaches such as extract, transform, and load (ETL) processes.

A Modern Solution for an Age-Old Problem

To understand how data virtualization can help elevate projects to an enterprise level, consider a few use cases in which companies have leveraged data virtualization to solve their business problems across several industries. In finance and banking, we often see use cases in which data virtualization serves as a unifying platform to improve compliance and reporting. In retail, we see use cases including predictive analytics in supply chains as well as next-best actions driven by a unified view of the customer. Data virtualization is useful in a wide variety of other settings, such as healthcare and government agencies, where organizations use the Denodo Platform to help data scientists understand key trends and activities, both sociological and economic. In a nutshell, if data exists in more than one source, the Denodo Platform acts as the unifying layer that connects, combines, and allows users to consume the data in a timely, cost-effective manner.
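
As a concrete illustration of the "connect, don't copy" idea, the sketch below queries a unified view through a Denodo ODBC connection from Python. The DSN, credentials, and the unified_customer view are hypothetical placeholders; Denodo exposes standard ODBC/JDBC interfaces, so any SQL client could play the same role.

```python
import pyodbc  # pip install pyodbc

# Assumes a Denodo ODBC DSN named "denodo_vdb" is configured locally;
# DSN, credentials, and the "unified_customer" view are illustrative.
conn = pyodbc.connect("DSN=denodo_vdb;UID=analyst;PWD=secret")
cursor = conn.cursor()

# The query runs against the logical layer; Denodo federates it to the
# underlying sources in real time instead of hitting a copied dataset.
cursor.execute(
    """
    SELECT customer_id, region, lifetime_value
    FROM unified_customer
    WHERE region = ?
    """,
    "EMEA",
)

for row in cursor.fetchall():
    print(row.customer_id, row.region, row.lifetime_value)
conn.close()
```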

Read More
VMware

VMware Tanzu Kubernetes Grid Integrated: A Year in Review

Article | December 14, 2021

The modern application world is advancing at an unprecedented rate, and the new possibilities these transformations make available don't come without complexity. IT teams often find themselves under pressure to keep up with the speed of innovation. That's why VMware provides a production-ready container platform that aligns to upstream Kubernetes: VMware Tanzu Kubernetes Grid Integrated (formerly known as VMware Enterprise PKS). By working with VMware, customers can move at the speed their businesses demand without the headache of running their operations alone. Our offerings help customers stay current with the open source community's innovations while having access to the support they need to move forward confidently. Many changes have been made to Tanzu Kubernetes Grid Integrated over the past year that are designed to help customers keep up with Kubernetes advancements, move faster, and enhance security.

Kubernetes updates

The latest version, Tanzu Kubernetes Grid Integrated 1.13, moved to Kubernetes 1.22 and removed beta APIs in favor of the stable APIs that have since evolved from them. Beta APIs typically evolve more often than stable APIs and should therefore be checked before updates occur. The following APIs are no longer served in v1.22, having been replaced by more stable API versions:

- Beta versions of the ValidatingWebhookConfiguration and MutatingWebhookConfiguration APIs (the admissionregistration.k8s.io/v1beta1 API versions)
- The beta CustomResourceDefinition API (apiextensions.k8s.io/v1beta1)
- The beta APIService API (apiregistration.k8s.io/v1beta1)
- The beta TokenReview API (authentication.k8s.io/v1beta1)
- Beta API versions of SubjectAccessReview, LocalSubjectAccessReview, and SelfSubjectAccessReview (API versions from authorization.k8s.io/v1beta1)
- The beta CertificateSigningRequest API (certificates.k8s.io/v1beta1)
- The beta Lease API (coordination.k8s.io/v1beta1)
- All beta Ingress APIs (the extensions/v1beta1 and networking.k8s.io/v1beta1 API versions)

Containerd support

Tanzu Kubernetes Grid Integrated helps customers eliminate lengthy deployment and management processes with on-demand provisioning, scaling, patching, and updating of Kubernetes clusters. To stay in alignment with the Kubernetes community, containerd is now the default container runtime, although Docker can still be selected using the command-line interface (CLI) if needed.

Networking

Several updates have been made with regard to networking as well, including Antrea support and NSX-T enhancements.

Antrea support

With Tanzu Kubernetes Grid Integrated version 1.10 and later, customers can leverage Antrea on install or upgrade to use Kubernetes network policies. This gives enterprises the best of both worlds: access to the latest innovation from Antrea and world-class support from VMware.

NSX-T enhancements

NSX-T was integrated with Tanzu Kubernetes Grid Integrated to simplify container networking and increase security. This has been enhanced so that customers can now choose the Policy API as an option on a fresh installation of Tanzu Kubernetes Grid Integrated, which gives users access to new features available only through the NSX-T Policy API. This feature is currently in beta. In addition, more NSX-T and NSX Container Plug-in (NCP) configuration is possible through network profiles.
This operator command provides the benefit of setting configurations through the CLI, and the settings persist across lifecycle events.

Storage enhancements

We've made storage operations in our customers' container-native environments easier, too. Customers were seeking a simpler and more secure way to manage the Container Storage Interface (CSI), so we introduced automatic installation of the vSphere CSI driver as a BOSH process beginning with Tanzu Kubernetes Grid Integrated 1.11. Also, as VCP will be deprecated, customers are advised to use the CSI driver. VCP-to-CSI migration is part of Tanzu Kubernetes Grid Integrated 1.12 and is designed to help customers move forward faster.

Enhanced security

Implementing new technologies provides users with new capabilities, but it can also introduce new security vulnerabilities if not done correctly. VMware's goal is to help customers move forward with ease and with the confidence of knowing that enhancements don't compromise core security needs.

CIS benchmarks

This year, Tanzu Kubernetes Grid Integrated continued to see improvements that help meet today's high security standards; meeting the Center for Internet Security (CIS) benchmarks is vital for the product. In recent releases, a few Kubernetes-related settings have been adjusted to ensure compliance with CIS requirements:

- kube-apiserver with --kubelet-certificate-authority settings (v1.12)
- kube-apiserver with an --authorization-mode argument that includes Node (v1.12)
- kube-apiserver with a proper --audit-log-maxage argument (v1.13)
- kube-apiserver with a proper --audit-log-maxbackup argument (v1.13)
- kube-apiserver with a proper --audit-log-maxsize argument (v1.13)

Certificate rotations

Tanzu Kubernetes Grid Integrated secures all communication between its control plane components and the Kubernetes clusters it manages using TLS validated by certificates. Certificate rotation has been simplified in recent releases: customers can now list and update certificates on a cluster-by-cluster basis through the "tkgi rotate-certificates" command. The multistep, manual process was replaced with a single CLI command to rotate NSX-T certificates (available since Tanzu Kubernetes Grid Integrated 1.10) and cluster-by-cluster certificates (available since Tanzu Kubernetes Grid Integrated 1.12).

Hardening of images

Tanzu Kubernetes Grid Integrated keeps OS images, container base images, and software library versions updated to remediate CVEs reported by customers and in the industry. It also continues to use the latest Ubuntu Xenial stemcell versions for node virtual machines. With recent releases and patch versions, dockerd, containerd, runc, telegraf, and nfs-utils have been bumped to the latest stable and secure versions as well. By using Harbor as a private registry management service, customers can also leverage the built-in vulnerability scanning features to discover CVEs in application images.

VMware is dedicated to supporting customers with production readiness by enhancing the user experience. Tanzu Kubernetes Grid Integrated Edition has stayed up to date with the Kubernetes community and provides customers with the support and resources they need to innovate rapidly.
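
A practical consequence of the v1.22 beta API removals listed under "Kubernetes updates" above is that manifests still pinned to those apiVersions will no longer apply after an upgrade. The following is a small, illustrative audit script (not a VMware tool) that scans a directory of YAML manifests for the removed versions; it assumes PyYAML is installed.

```python
from pathlib import Path

import yaml  # pip install pyyaml

# apiVersions removed in Kubernetes 1.22, per the list above.
REMOVED = {
    "admissionregistration.k8s.io/v1beta1",
    "apiextensions.k8s.io/v1beta1",
    "apiregistration.k8s.io/v1beta1",
    "authentication.k8s.io/v1beta1",
    "authorization.k8s.io/v1beta1",
    "certificates.k8s.io/v1beta1",
    "coordination.k8s.io/v1beta1",
    "extensions/v1beta1",
    "networking.k8s.io/v1beta1",
}

def audit(manifest_dir: str) -> None:
    """Print every manifest document that uses a removed apiVersion."""
    paths = list(Path(manifest_dir).rglob("*.yaml")) + \
            list(Path(manifest_dir).rglob("*.yml"))
    for path in paths:
        for doc in yaml.safe_load_all(path.read_text()):
            if isinstance(doc, dict) and doc.get("apiVersion") in REMOVED:
                print(f"{path}: {doc.get('kind')} uses removed "
                      f"apiVersion {doc['apiVersion']}")

audit("./manifests")  # directory path is a placeholder
```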
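And since certificate rotation is now a single CLI command, it can be scripted across an estate of clusters. The sketch below simply shells out to the tkgi CLI once per cluster; the cluster names are placeholders, and the exact argument syntax (including the non-interactive flag shown) is an assumption to confirm with `tkgi rotate-certificates --help` for your release.

```python
import subprocess

# Cluster names are placeholders; in practice you might derive them from
# the `tkgi clusters` listing. The "--non-interactive" flag is an
# assumption; check the TKGI CLI help for the exact syntax.
CLUSTERS = ["payments-prod", "inventory-prod"]

for cluster in CLUSTERS:
    print(f"Rotating certificates for {cluster} ...")
    subprocess.run(
        ["tkgi", "rotate-certificates", cluster, "--non-interactive"],
        check=True,  # stop on the first failed rotation
    )
```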

Read More

Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP, with enhanced video processing performance to strengthen its presence in data center applications. The newly launched IP series caters to the advanced requirements of next-generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming. The VC9800 series of VPU IP boasts high performance, high throughput, and server-level multi-stream encoding and decoding capabilities. It can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series improves video quality significantly with a low memory footprint and low encoding latency. Capable of 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, achieving an efficient transcoding solution. The VC9800 series of VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon's Graphics Processor Unit (GPU) IP, the subsystem solution is able to deliver enhanced gaming experiences. In addition, the hardware virtualization, super-resolution image enhancement, and AI-enabled encoding functions of this IP series also offer effective solutions for virtual cloud desktops.

"VeriSilicon's advanced video transcoding technology continues leading in the data center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest data centers," said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. "For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. We've also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display."

About VeriSilicon

VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.

Read More

Backup and Disaster Recovery

Minimize the Cost and Downtime of Disaster With Scale Computing's Business Continuity/Disaster Recovery Planning Service

PR Newswire | October 25, 2023

Scale Computing, a market leader in edge computing, virtualization, and hyperconverged solutions, today announced its Business Continuity/Disaster Recovery (BCDR) Planning Service, designed to help organizations establish a comprehensive, regulated plan for responding to unforeseen downtime. The service provides Scale Computing customers and partners with the tools, guidance, and resources to create a playbook for data backup and recovery, enabling businesses to endure a disaster scenario with minimal loss. Scale Computing also recently announced that it is a finalist for the Business Continuity/Disaster Recovery Project of the Year in the 2023 SDC Awards for its work with Austrian managed service provider GiGaNet and its long-time partner, the Zillertaler Gletscherbahn group. Voting for the SDC Awards is open at sdcawards.com/vote until November 10th, 2023. Data breaches are one of the biggest and most costly contributors to downtime for businesses. In 2023, the average cost of a data breach globally reached an all-time high of $4.45 million, a 15.3% increase from 2020. Simultaneously, the average length of business disruption following a ransomware attack in the United States reached 24 days last year, up 60% from just two years prior, a significant increase at a time when downtime costs exceed $300,000 per hour for over 90% of mid-sized and large enterprises. For more than half of those businesses, hourly outage costs range from $1 million to over $5 million, and recovery from an outage adds further expense from which many enterprises are unable to bounce back. "Disaster can strike at any time, and every organization needs a consistently regulated playbook for how the business will respond — from action plans to recovery plans for bringing online the mission-critical servers businesses depend on," said Jeff Ready, CEO and co-founder, Scale Computing. "Knowing what systems need to be protected, planning for the ability to recover them, and having a full action plan for recovery should be at the forefront of every IT department's agenda, at the beginning of any infrastructure addition. With Scale Computing Platform, the plan for disaster recovery starts before equipment is even put into production, so IT leaders have a plan in place from day one that they can enact to ensure their business stays up and running, with minimal loss, should disaster strike. Our Business Continuity/Disaster Recovery Planning Service enables businesses to proactively classify systems based on their importance and implement a robust action plan, ensuring that our customers' and partners' critical systems are protected, validated, tested, and ready for recovery at any time." Whether facing a minor data loss or a business-wide shutdown, having a well-defined business continuity strategy is crucial to minimize financial impact, ensure continuous employee productivity, meet compliance and regulatory requirements, decrease liability obligations, reduce downtime, and minimize the risk of negative exposure. Scale Computing's BCDR Planning Service includes planning, deployment, documentation creation, and disaster recovery testing, covering every aspect to keep businesses prepared and resilient. The service is offered to customers of Scale Computing Platform, which brings simplicity, high availability, and scalability together to replace existing infrastructure for running virtual machines with an easy-to-manage, fully integrated platform that allows organizations to run applications regardless of hardware requirements.
About Scale Computing

Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real time, enabling applications to achieve maximum uptime, even when local IT resources and staff are scarce. Edge computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing's products are sold by thousands of value-added resellers, integrators, and service providers worldwide.

Read More

Server Virtualization, VMware

StorMagic Introduces Edge Control Software to Simplify SvSAN Monitoring and Management

Business Wire | October 18, 2023

StorMagic®, solving the world's edge data problems, today announced the immediate availability of a new Software as a Service (SaaS) tool that allows users to easily monitor and manage all of their SvSAN clusters around the world. StorMagic Edge Control simplifies the process and tools required for day-to-day SvSAN cluster administration. SvSAN customers with multiple locations can significantly reduce the time spent managing their edge sites, whether they are using VMware, Microsoft, or KVM hypervisors. "ESG research shows increasing demand for data storage at the edge, which fuels an increased need for monitoring solutions that can help address the complexity of storage at the edge," said Scott Sinclair, practice director at Enterprise Strategy Group. "SvSAN customers can greatly benefit by adding StorMagic Edge Control into their toolkits; the dashboard views and list formats will make centralized data management much easier and more accessible." Edge Control delivers centralized administration for SvSAN environments of all sizes. Customers can now manage all SvSAN deployments in any location from a single pane of glass. Dashboard and system views provide a fast but comprehensive status of all of their virtual storage appliances (VSAs), allowing them to keep their environment up to date more easily and react faster as needed. "StorMagic customers of any size can now manage their entire SvSAN estate, whether it's one site or thousands of sites around the world," said Bruce Kornfeld, chief marketing and product officer, StorMagic. "Edge Control is particularly interesting for customers who are considering switching from VMware to Microsoft or Linux KVM because SvSAN and Edge Control are both hypervisor agnostic."

Pricing and Availability

Edge Control version 1.0 is available today from StorMagic. SvSAN customers can download and begin using the software immediately, free of charge.

About StorMagic

StorMagic is solving the world's edge data problems. We help organizations store, protect, and use data at and from the edge. StorMagic's solutions ensure data is always protected and available, no matter the type or location, to provide value anytime, anywhere. StorMagic's storage and security products are flexible, robust, easy to use, and cost-effective, without sacrificing enterprise-class features, for organizations with one to thousands of sites.

Read More
