2 big mistakes to avoid in edge computing

More workloads are being pushed to the edge. Think of the edge as the space between the cloud and whatever device or system is producing data. The idea is to do most of the processing at the edge, close to where the data is generated. This approach, called edge computing, delivers much better response times because there's no need to send the data back to a central cloud-based storage system to be processed and then returned all the way to the device. Edge computing is very useful, which is why most enterprises are sold on the concept, and why I'm seeing interest move from proofs of concept to production. However, it's not a substitute for a good architectural approach and old-fashioned pragmatism, which is why I'm also seeing huge mistakes being made—mistakes that are avoidable. Here are two common mistakes to avoid.
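To make the latency argument concrete, here is a minimal, hypothetical sketch of the pattern: a gateway process aggregates raw sensor readings locally and ships only compact summaries upstream, rather than making one cloud round-trip per reading. The `read_sensor` and `upload_to_cloud` functions are invented stand-ins, not part of any product mentioned on this page.

```python
import random
import statistics
import time

def read_sensor() -> float:
    """Stand-in for reading a local device (hypothetical)."""
    return 20.0 + random.gauss(0, 0.5)

def upload_to_cloud(summary: dict) -> None:
    """Stand-in for one round-trip to a central cloud API (hypothetical)."""
    print("uploading summary:", summary)

def run_gateway(windows: int = 3, window_size: int = 60) -> None:
    """Aggregate readings at the edge: one upstream call per window
    instead of one per reading."""
    for _ in range(windows):
        window = [read_sensor() for _ in range(window_size)]
        upload_to_cloud({
            "mean": round(statistics.mean(window), 2),
            "max": round(max(window), 2),
            "n": len(window),
        })
        time.sleep(0.1)  # pacing for the demo only

if __name__ == "__main__":
    run_gateway()
```

The device gets its answer from the local aggregation step immediately; the cloud only sees the summaries it actually needs.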

Spotlight

RedEye, Inc.

RedEye is an IT support services company offering a complete range of computer support solutions to small and medium-sized businesses in NYC, NJ, and PA. We are differentiated by our consistently rapid response to the IT needs of our clients.

OTHER ARTICLES
Server Hypervisors

How to Start Small and Grow Big with Data Virtualization

Article | May 18, 2023

Why Should Companies Care about Data Virtualization?

Data is everywhere. With each passing day, companies generate more data than ever before. What exactly can they do with all this data? Is it just a matter of storing it? Or should they manage and integrate their data from its various sources? How can they store, manage, integrate, and utilize their data to gain information of critical value to their business? As they say, knowledge is power, but knowledge without action is useless. This is where the Denodo Platform comes in. The Denodo Platform gives companies the flexibility to evolve their data strategies, migrate to the cloud, or logically unify their data warehouses and data lakes, without affecting business. This powerful platform offers a variety of subscription options that can benefit companies immensely. For example, companies often start out with individual projects using a Denodo Professional subscription, but in a short period of time they add more and more data sources and move on to other Denodo subscriptions such as Denodo Enterprise or Denodo Enterprise Plus. The upgrade process is very easy to establish; in fact, it can be done in less than a day once the cloud marketplace is chosen (Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)). In as little as six weeks, companies can realize real business benefits from managing and utilizing their data effectively.

A Bridging Layer

Data virtualization has been around for quite some time. Denodo's founders, Angel Viña and Alberto Pan, have been involved in data virtualization since as far back as the 1990s. If you're not familiar with data virtualization, here is a quick summary. Data virtualization is the cornerstone of a logical data architecture, whether a logical data warehouse, logical data fabric, data mesh, or even a data hub. All of these architectures are best served by our principles: Connect (to all your data sources), Combine (them into a logical single view), and Consume (through standard connectors to your favorite BI and data science tools, or through our easy-to-use, robust APIs). Data virtualization is the bridge that joins multiple data sources to fuel analytics. It is also the logical data layer that effectively integrates data silos across disparate systems, manages unified data for centralized security, and delivers it to business users in real time.

Economic Benefits in Less Than Six Weeks with Data Virtualization?

How can companies benefit from choosing data virtualization as a data management solution in a short time? Below are some notable KPIs from the recently released Forrester study on the Total Economic Impact of data virtualization. Companies that have implemented data virtualization have seen an 83% increase in business user productivity, mainly due to the business-centric way a data virtualization platform is delivered: business users get an easy-to-access, democratized interface to the data they need. The second KPI of note is a 67% reduction in development resources. With data virtualization, you connect to the data, you do not copy it. Once it is set up, the need for data integration engineers drops significantly, because data remains in the source location and is not copied around the enterprise. Finally, companies report a 65% improvement in data access speeds over more traditional approaches such as extract, transform, and load (ETL) processes.

A Modern Solution for an Age-Old Problem

To understand how data virtualization can elevate projects to the enterprise level, consider a few use cases in which companies have leveraged data virtualization to solve business problems across several industries. In finance and banking, data virtualization often serves as a unifying platform to improve compliance and reporting. In retail, use cases include predictive analytics in supply chains as well as next-best-action decisions from a unified view of the customer. Data virtualization applies in a wider variety of situations as well, such as healthcare and government agencies, where the Denodo Platform helps data scientists understand key sociological and economic trends and activities. In a nutshell, if data exists in more than one source, the Denodo Platform acts as the unifying layer that connects, combines, and allows users to consume the data in a timely, cost-effective manner.
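To illustrate the Connect/Combine/Consume idea without the Denodo Platform itself, here is a minimal, hypothetical sketch that federates two separate SQLite databases behind a single logical view at query time. A real data virtualization layer pushes queries down to live sources rather than copying data; the file names and schemas below are invented for the example.

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
crm_path = os.path.join(tmp, "crm.db")
erp_path = os.path.join(tmp, "erp.db")

# Two independent "source systems" (invented schemas).
crm = sqlite3.connect(crm_path)
crm.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
crm.execute("INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex')")
crm.commit()
crm.close()

erp = sqlite3.connect(erp_path)
erp.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
erp.execute("INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 99.5), (12, 2, 40.0)")
erp.commit()
erp.close()

# Connect: one logical session attaches both sources.
virt = sqlite3.connect(":memory:")
virt.execute(f"ATTACH DATABASE '{crm_path}' AS crm")
virt.execute(f"ATTACH DATABASE '{erp_path}' AS erp")

# Combine: a single unified view spanning both silos
# (TEMP views may reference attached databases).
virt.execute("""
    CREATE TEMP VIEW customer_spend AS
    SELECT c.name, SUM(o.total) AS total_spend
    FROM crm.customers AS c
    JOIN erp.orders AS o ON o.customer_id = c.id
    GROUP BY c.name
""")

# Consume: business users query one view, not two systems.
for name, total in virt.execute(
        "SELECT * FROM customer_spend ORDER BY total_spend DESC"):
    print(f"{name}: {total}")
```

The point of the sketch is the shape of the architecture: sources stay where they are, and the unified view is assembled at query time.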

Virtual Desktop Strategies, Server Hypervisors

Virtualizing Broadband Networks: Q&A with Tom Cloonan and David Grubb

Article | April 27, 2023

The future of broadband networks is fast, pervasive, reliable, and increasingly, virtual. Dell’Oro predicts that virtual CMTS/CCAP revenue will grow from $90 million in 2019 to $418 million worldwide in 2024. While network virtualization is still in its earliest stages of deployment, many operators have begun building their strategy for virtualizing one or more components of their broadband networks.

Virtual Desktop Tools, Server Hypervisors

Virtualization Can Help Substantially Reduce Computing Costs

Article | June 8, 2023

Businesses use a lot of technology to keep themselves competitive and operationally efficient. One way organizations make their technology infrastructure more accessible is through virtualization. Let's discuss what virtualization is, how it benefits businesses, and some examples of how you might leverage virtualization to your company's benefit. Virtualization for Hardware and Software: Virtualization, in its most basic sense, is taking something and making it virtual. With regard to hardware and software, it involves taking these parts of your technology infrastructure and making them available in a virtual environment. Virtual applications and hardware solutions can be deployed to the cloud so that they can be accessed by any online device. Some examples of virtualization include creating virtual machines, like workstations and server units, that are hosted in a virtual environment for as-needed access.
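As a concrete illustration of hardware virtualization in practice, here is a minimal sketch that lists the virtual machines on a QEMU/KVM host using the libvirt Python bindings. It assumes a local libvirt daemon and the libvirt-python package, neither of which is mentioned in the article itself.

```python
import libvirt  # pip install libvirt-python

# Human-readable names for the most common libvirt domain states.
STATE_NAMES = {
    libvirt.VIR_DOMAIN_RUNNING: "running",
    libvirt.VIR_DOMAIN_PAUSED: "paused",
    libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
}

def list_vms(uri: str = "qemu:///system") -> None:
    """Print every VM defined on the hypervisor and its current state."""
    conn = libvirt.open(uri)  # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            print(f"{dom.name():20s} {STATE_NAMES.get(state, 'other')}")
    finally:
        conn.close()

if __name__ == "__main__":
    list_vms()
```

The same connection object can define, start, and snapshot domains, which is what makes virtualized hardware scriptable in a way physical servers are not.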

Virtual Desktop Tools, Server Hypervisors

Virtual Machine Security Risks and Mitigation in Cloud Computing

Article | April 28, 2023

Analyzing risks and implementing advanced mitigation strategies: safeguard critical data, fortify defenses, and stay ahead of emerging threats in the dynamic realm of virtual machines in the cloud.

Contents
1. Introduction
2. 10 Security Risks Associated with Virtual Machines in Cloud Computing
3. Best Practices to Avoid Security Compromise
4. Conclusion

1. Introduction

Cloud computing has revolutionized the way businesses operate by providing flexible, scalable, and cost-effective infrastructure for running applications and services. Virtual machines (VMs) are a key component of cloud computing, allowing multiple virtual machines to run on a single physical machine. However, the use of virtual machines in cloud computing introduces new security risks that must be addressed to ensure the confidentiality, integrity, and availability of data and services. Effective VM security in the cloud requires a comprehensive approach in which cloud providers and users work together to identify and address potential threats. By implementing the best practices below and maintaining a focus on security, cloud computing can provide a secure and reliable platform for businesses to run their applications and services.

2. 10 Security Risks Associated with Virtual Machines in Cloud Computing

Denial of Service (DoS) attacks: Attacks that aim to disrupt the availability of a VM or the entire cloud infrastructure by overwhelming the system with traffic or resource requests.

Insecure APIs: Cloud providers often expose APIs that allow users to manage their VMs. If these APIs are not properly secured, attackers can exploit them to gain unauthorized access to VMs or manipulate their configurations.

Data leakage: Virtual machines can store sensitive data such as customer information or intellectual property. If not secured, this data can be exposed to unauthorized access or leakage.

Shared resources: VMs in cloud environments often share physical resources such as memory, CPU, and network interfaces. If these resources are not isolated, a compromised VM can affect the security and performance of other VMs running on the same physical host.

Lack of visibility: Virtual machines in cloud environments can be more difficult to monitor than physical machines, making it harder to detect security incidents or anomalous behavior.

Insufficient logging and auditing: If cloud providers do not implement appropriate logging and auditing mechanisms, it can be difficult to determine the cause and scope of a security incident.

VM escape: An attacker gains access to the hypervisor layer and then escapes into the host operating system or other VMs running on the same physical host.

Side-channel attacks: An attacker exploits the physical characteristics of the hardware to gain unauthorized access to a VM. Examples include timing attacks, power analysis attacks, and electromagnetic attacks.

Malware attacks: VMs can be infected with malware just like physical machines. Malware can be used to steal data, launch attacks on other VMs or systems, or disrupt the functioning of the VM.

Insider threats: Malicious insiders can exploit their access to VMs to steal data, modify configurations, or launch attacks.
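Several of these risks (lack of visibility, insufficient logging and auditing) come down to not watching the guest closely enough. As a minimal, hypothetical sketch of in-guest telemetry, the following script baselines CPU and network activity with the third-party psutil package and flags large deviations; production monitoring would rely on hypervisor- or platform-level tooling, not a script like this.

```python
import statistics

import psutil  # pip install psutil

def sample(interval: float = 1.0) -> tuple[float, float]:
    """One sample: CPU utilization (%) and network bytes sent over `interval`."""
    before = psutil.net_io_counters().bytes_sent
    cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
    sent = psutil.net_io_counters().bytes_sent - before
    return cpu, float(sent)

def monitor(baseline_samples: int = 30, threshold: float = 3.0) -> None:
    """Alert when CPU or egress deviates strongly from a learned baseline."""
    history = [sample() for _ in range(baseline_samples)]
    cpu_mean = statistics.mean(c for c, _ in history)
    cpu_sd = statistics.stdev(c for c, _ in history) or 1.0
    net_mean = statistics.mean(n for _, n in history)
    net_sd = statistics.stdev(n for _, n in history) or 1.0

    while True:
        cpu, sent = sample()
        if abs(cpu - cpu_mean) / cpu_sd > threshold:
            print(f"ALERT: anomalous CPU load: {cpu:.1f}%")
        if abs(sent - net_mean) / net_sd > threshold:
            print(f"ALERT: anomalous egress: {sent:.0f} bytes/s")

if __name__ == "__main__":
    monitor()
```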
3. Best Practices to Avoid Security Compromise

To mitigate these risks, there are several virtual machine security guidelines that cloud service providers and users can follow:

Keep software up to date: Regularly updating software and security patches for virtual machines is crucial in preventing known vulnerabilities from being exploited. Software updates fix bugs and security flaws that could allow unauthorized access, data breaches, or malware attacks. According to a Ponemon Institute study, 60% of data breaches are caused by vulnerabilities that were not patched or updated in a timely manner. (Source: Ponemon Institute)

Use secure hypervisors: A hypervisor is a software layer that enables multiple virtual machines to run on a single physical server. Secure hypervisors are designed to prevent unauthorized access to virtual machines and protect them from potential security threats. When choosing a hypervisor, select one that has undergone rigorous testing and meets industry standards for security. In 2018, a group of researchers discovered a new type of attack called "Foreshadow" (also known as L1 Terminal Fault). The attack exploits vulnerabilities in Intel processors and can be used to steal sensitive data from virtual machines running on the same physical host. Secure hypervisors that implement hardware-based security features can provide protection against Foreshadow and similar attacks. (Source: Foreshadow)

Implement strong access controls: Access control is the practice of restricting access to virtual machines to authorized users. Multi-factor authentication adds an extra layer of security by requiring users to provide more than one authentication method before accessing VMs. Strong access controls limit the risk of unauthorized access and can help prevent data breaches. According to a Duo Security survey, organizations that implemented multi-factor authentication saw a 98% reduction in the risk of phishing-related account breaches. (Source: Duo Security)

Monitor VMs for anomalous behavior: Monitoring virtual machines for unusual or unexpected behavior is an essential security practice, as sketched above. This includes monitoring network traffic, processes running on the VM, and other metrics that can help detect potential security incidents. By monitoring VMs, security teams can detect and respond to threats before they cause damage. A VMware study found that 90% of organizations that implemented a virtualized environment experienced security benefits, such as improved visibility into security threats and faster incident response times. (Source: VMware)

Use encryption: Encryption is the process of encoding information in such a way that only authorized parties can access it. Encrypting data both in transit and at rest protects it from interception or theft. This can be achieved using industry-standard encryption protocols and technologies; a minimal sketch follows this item. According to an IBM report, the average cost of a data breach in 2020 was $3.86 million, and organizations that implemented encryption had a lower average cost of a data breach than those that did not. (Source: IBM)
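As a minimal sketch of at-rest encryption, assuming the third-party cryptography package (not named in the article), the following encrypts and decrypts a file with Fernet symmetric encryption. In a real cloud deployment the key would live in a KMS or HSM, never beside the data; the file names here are invented.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_file(src: Path, dst: Path, key: bytes) -> None:
    """Encrypt src into dst (Fernet: AES-128-CBC plus an HMAC integrity check)."""
    dst.write_bytes(Fernet(key).encrypt(src.read_bytes()))

def decrypt_file(src: Path, dst: Path, key: bytes) -> None:
    """Reverse of encrypt_file; raises InvalidToken if the data was tampered with."""
    dst.write_bytes(Fernet(key).decrypt(src.read_bytes()))

if __name__ == "__main__":
    key = Fernet.generate_key()  # in production: fetch from a KMS, never hard-code
    Path("disk-snapshot.img").write_bytes(b"pretend this is a VM disk snapshot")
    encrypt_file(Path("disk-snapshot.img"), Path("disk-snapshot.img.enc"), key)
    decrypt_file(Path("disk-snapshot.img.enc"), Path("restored.img"), key)
    assert Path("restored.img").read_bytes() == Path("disk-snapshot.img").read_bytes()
```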
Segregate VMs: Segregating virtual machines means keeping sensitive VMs separate from less sensitive ones. This reduces the risk of lateral movement, in which an attacker gains access to one VM and uses it as a stepping stone to reach other VMs in the same environment. Segregating VMs helps minimize the risk of data breaches and limits the potential impact of a security incident. A Ponemon Institute study found that organizations that implemented a virtualized environment without adequate segregation and access controls were more vulnerable to VM security breaches and data loss. (Source: Ponemon Institute)

Regularly back up VMs: Regularly backing up virtual machines is a critical security practice that can help mitigate the impact of malware attacks, system failures, or other security incidents. Backups should be stored securely and tested regularly to ensure that they can be restored quickly in the event of an incident. A Veeam survey found that 42% of organizations experienced a data loss event in 2020, with the most common cause being accidental deletion by an employee (29%). (Source: Veeam)

4. Conclusion

The complexity of cloud environments and the shared responsibility model for security require organizations to adopt a comprehensive security approach that spans multiple infrastructure layers, from the physical layer to the application layer. The future of virtual machine security in cloud computing will require continued innovation and adaptation to new threats and vulnerabilities. Organizations must remain vigilant and proactive in their security efforts, leveraging the latest technologies and best practices to protect their virtual machines and the sensitive data and resources they contain.



Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP, with enhanced video processing performance to strengthen its presence in data center applications. The newly launched IP series caters to the advanced requirements of next-generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming. The VC9800 series of VPU IP boasts high performance, high throughput, and server-level multi-stream encoding and decoding: it can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series improves video quality significantly with a low memory footprint and low encoding latency. Capable of 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, achieving an efficient transcoding solution. The VC9800 series can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon's Graphics Processor Unit (GPU) IP, the subsystem solution can deliver enhanced gaming experiences. In addition, the hardware virtualization, super-resolution image enhancement, and AI-enabled encoding functions of this IP series also offer effective solutions for virtual cloud desktops. "VeriSilicon's advanced video transcoding technology continues to lead in the data center domain. We are working closely with leading global customers to develop comprehensive video processing subsystem solutions that meet the requirements of the latest data centers," said Wei-Jin Dai, Executive VP and GM of the IP Division of VeriSilicon. "For AI computing, our video post-processing capabilities have been extended to interact smoothly with NPUs, ensuring OpenCV-level accuracy. We've also introduced super-resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display."

About VeriSilicon

VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.


Backup and Disaster Recovery

Minimize the Cost and Downtime of Disaster With Scale Computing's Business Continuity/Disaster Recovery Planning Service

PR Newswire | October 25, 2023

Scale Computing, a market leader in edge computing, virtualization, and hyperconverged solutions, today announced its Business Continuity/Disaster Recovery (BCDR) Planning Service, designed to help organizations establish a comprehensive, regulated plan for responding to unforeseen downtime. The service provides Scale Computing customers and partners with the tools, guidance, and resources to create a playbook for data backup and recovery, enabling businesses to endure a disaster scenario with minimal loss. Scale Computing also recently announced that it is a finalist for the Business Continuity/Disaster Recovery Project of the Year in the 2023 SDC Awards for its work with Austrian managed service provider GiGaNet and its long-time partner the Zillertaler Gletscherbahn group. Voting for the SDC Awards is open at sdcawards.com/vote until November 10th, 2023. Data breaches are one of the biggest and most costly contributors to downtime for businesses. In 2023, the average cost of a data breach globally reached an all-time high of $4.45 million, a 15.3% increase from 2020. Simultaneously, the average length of business disruption following a ransomware attack in the United States reached 24 days last year, up 60% from just two years prior — a significant increase when downtime costs exceed $300,000 per hour for over 90% of mid-sized and large enterprises. For more than half of those businesses, the hourly outage costs range from $1 million to over $5 million. Recovery from an outage adds additional expense from which many enterprises are unable to bounce back. "Disaster can strike at any time, and every organization needs a consistently regulated playbook for how the business will respond — from action plans to recovery plans for bringing online the mission-critical servers businesses depend on," said Jeff Ready, CEO and co-founder, Scale Computing. "Knowing what systems need to be protected, planning for the ability to recover them, and having a full action plan for recovery should be at the forefront of every IT department's agenda, at the beginning of any infrastructure addition. With Scale Computing Platform, the plan for disaster recovery starts before equipment is even put into production, so IT leaders have a plan in place from day one that they can enact to ensure their business stays up and running, with minimal loss, should disaster strike. Our Business Continuity/Disaster Recovery Planning Service enables businesses to proactively classify systems based on their importance and implement a robust action plan, ensuring that our customers' and partners' critical systems are protected, validated, tested, and ready for recovery at any time." Whether a minor data loss or a business-wide shutdown, having a well-defined business continuity strategy is crucial to minimize financial impact, ensure continuous employee productivity, meet compliance and regulatory requirements, decrease liability obligations, reduce downtime, and minimize the risk of negative exposure. Scale Computing's BCDR Planning Service includes planning, deployment, documentation creation, and disaster recovery testing, covering every aspect to keep businesses prepared and resilient. The service is offered to Scale Computing Platform customers, which brings simplicity, high availability, and scalability together to replace existing infrastructure for running virtual machines with an easy-to-manage, fully integrated platform that allows organizations to run applications regardless of hardware requirements. 
About Scale Computing

Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real time, enabling applications to achieve maximum uptime even when local IT resources and staff are scarce. Edge computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing's products are sold by thousands of value-added resellers, integrators, and service providers worldwide.


Server Virtualization, VMware

StorMagic Introduces Edge Control Software to Simplify SvSAN Monitoring and Management

Business Wire | October 18, 2023

StorMagic®, solving the world's edge data problems, today announced the immediate availability of a new Software as a Service (SaaS) tool that allows users to easily monitor and manage all of their SvSAN clusters around the world. StorMagic Edge Control simplifies the process and tools required for day-to-day SvSAN cluster administration. SvSAN customers with multiple locations can significantly reduce the time spent managing their edge sites, whether they are using VMware, Microsoft or KVM hypervisors. "ESG research shows increasing demand for data storage at the edge which fuels an increased need for monitoring solutions that can help address the complexity of storage at the edge," said Scott Sinclair, practice director at Enterprise Strategy Group. "SvSAN customers can greatly benefit by adding StorMagic Edge Control into their toolkits; the dashboard views and list formats will make centralized data management much easier and more accessible." Edge Control delivers centralized administration for SvSAN environments of all sizes. Customers can now manage all SvSAN deployments in any location from a single pane of glass. Dashboard and system views provide a fast but comprehensive status of all of their virtual storage appliances (VSAs), allowing them to keep their environment up-to-date more easily and react faster as needed. "StorMagic customers of any size can now manage their entire SvSAN estate, whether it's one site or thousands of sites around the world," said Bruce Kornfeld, chief marketing and product officer, StorMagic. "Edge Control is particularly interesting for customers who are considering switching from VMware to Microsoft or Linux KVM because SvSAN and Edge Control are both hypervisor agnostic."

Pricing and Availability

Edge Control version 1.0 is available today from StorMagic. SvSAN customers can download and begin using the software immediately, free of charge.

About StorMagic

StorMagic is solving the world's edge data problems. We help organizations store, protect and use data at and from the edge. StorMagic's solutions ensure data is always protected and available, no matter the type or location, to provide value anytime, anywhere. StorMagic's storage and security products are flexible, robust, easy to use and cost-effective, without sacrificing enterprise-class features, for organizations with one to thousands of sites.

