National pen test execution standard would improve network security

As cyber attacks increase, so does demand for penetration tests, which probe the strength of a company's defenses. Organizations worry about their networks and computer systems being breached and their data being stolen, and many regulatory standards, such as PCI and HITRUST, require these tests to be performed at least annually. Demand for these tests will only grow as attackers become more sophisticated, so it is essential that they catch every possible vulnerability.


OTHER ARTICLES

How to Start Small and Grow Big with Data Virtualization

Article | April 28, 2023

Why Should Companies Care about Data Virtualization?

Data is everywhere. With each passing day, companies generate more data than ever before. What exactly can they do with all this data? Is it just a matter of storing it? Or should they manage and integrate their data from its various sources? How can they store, manage, integrate, and utilize their data to gain information of critical value to their business? As they say, knowledge is power, but knowledge without action is useless.

This is where the Denodo Platform comes in. The Denodo Platform gives companies the flexibility to evolve their data strategies, migrate to the cloud, or logically unify their data warehouses and data lakes, without affecting the business. This powerful platform offers a variety of subscription options that can benefit companies immensely. For example, companies often start out with individual projects using a Denodo Professional subscription, but in a short period of time they add more and more data sources and move on to other subscriptions such as Denodo Enterprise or Denodo Enterprise Plus. The upgrade process is easy to establish; in fact, it can be done in less than a day once the cloud marketplace is chosen (Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)). In as little as six weeks, companies can realize real business benefits from managing and utilizing their data effectively.

A Bridging Layer

Data virtualization has been around for quite some time now. Denodo's founders, Angel Viña and Alberto Pan, have been involved in data virtualization since the 1990s. If you're not familiar with data virtualization, here is a quick summary. Data virtualization is the cornerstone of a logical data architecture, whether it be a logical data warehouse, logical data fabric, data mesh, or even a data hub. All of these architectures are best served by our principles: Combine (bring together all your data sources), Connect (into a logical single view), and Consume (through standard connectors to your favorite BI/data science tools or through our easy-to-use, robust APIs). Data virtualization is the bridge that joins multiple data sources to fuel analytics. It is also the logical data layer that integrates data silos across disparate systems, manages unified data for centralized security, and delivers it to business users in real time.

Economic Benefits in Less Than Six Weeks with Data Virtualization?

How quickly can companies benefit from choosing data virtualization as a data management solution? To answer this question, below are some interesting KPIs from the recently released Forrester study on the Total Economic Impact of data virtualization. Companies that have implemented data virtualization have seen an 83% increase in business user productivity, mainly due to the business-centric way a data virtualization platform is delivered: business users get an easy-to-access, democratized interface to the data they need. The second KPI to note is a 67% reduction in development resources. With data virtualization, you connect to the data; you do not copy it. Once it is set up, there is a significantly reduced need for data integration engineers, because data remains in the source location and is not copied around the enterprise.
Finally, companies report a 65% improvement in data access speeds over more traditional approaches such as extract, transform, and load (ETL) processes.

A Modern Solution for an Age-Old Problem

To understand how data virtualization can elevate projects to an enterprise level, consider a few use cases in which companies have leveraged it to solve business problems across several industries. In finance and banking, we often see data virtualization used as a unifying platform to improve compliance and reporting. In retail, we see use cases including predictive analytics in supply chains, as well as next-best actions driven by a unified view of the customer. Data virtualization also serves a wider variety of situations, such as healthcare and government agencies, where organizations use the Denodo Platform to help data scientists understand key trends and activities, both sociological and economic. In a nutshell, if data exists in more than one source, the Denodo Platform acts as the unifying layer that connects and combines the data and allows users to consume it in a timely, cost-effective manner.
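To make the Combine/Connect/Consume pattern described above concrete, here is a minimal, hypothetical sketch in Python. It is not the Denodo Platform's API; the source names, schemas, and helper functions are all invented. The point it illustrates is that consumers query one logical view while the data stays in its sources rather than being copied into a warehouse:

```python
# Toy illustration of the data-virtualization pattern: connect to
# sources in place, combine them into one logical view, and let
# consumers query that view. Names and schemas are hypothetical.
import sqlite3
import csv

def connect_sales(db_path):
    """Connect to an operational database and yield rows on demand."""
    conn = sqlite3.connect(db_path)
    try:
        for cid, amount in conn.execute("SELECT customer_id, amount FROM sales"):
            yield {"customer_id": cid, "amount": amount}
    finally:
        conn.close()

def connect_crm(csv_path):
    """Connect to a CRM export and yield rows on demand."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            yield {"customer_id": row["customer_id"], "region": row["region"]}

def virtual_view(db_path, csv_path):
    """Combine both sources into one logical record per sale.

    Data is read from where it lives; nothing is copied into a
    separate warehouse ahead of time.
    """
    regions = {r["customer_id"]: r["region"] for r in connect_crm(csv_path)}
    for sale in connect_sales(db_path):
        sale["region"] = regions.get(sale["customer_id"], "unknown")
        yield sale

# Consume: any client iterates the view as if it were a single table.
# for record in virtual_view("ops.db", "crm.csv"):
#     print(record)
```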


Managing Multi-Cloud Complexities for a Seamless Experience

Article | August 12, 2022

Introduction

The early 2000s were milestone moments for the cloud. Amazon Web Services (AWS) entered the market in 2006, while Google revealed its first cloud service in 2007. Fast forward to 2020, when the pandemic accelerated digital transformation efforts by around seven years (according to McKinsey), and the cloud has become a commercial necessity today. It not only facilitated the swift transition to remote work, but it also remains critical to company sustainability and creativity. Many would argue that the large-scale transition to the cloud in the 2010s was necessary to enable the digital-first experiences that remote workers and decentralized businesses need today.

Multi-cloud and hybrid cloud setups are now the norm. According to Gartner, most businesses today use a multi-cloud approach to reduce vendor lock-in or to take advantage of more flexible, best-of-breed solutions. However, managing multi-cloud systems increases cloud complexity and IT concerns, frequently slowing rather than accelerating innovation. According to 2022 research by IntelligentCIO, the average multi-cloud system includes five platforms, including AWS, Microsoft Azure, Google Cloud, and IBM Red Hat, among others.

Managing Multi-Cloud Complexities Like a Pro

Your multi-cloud strategy should satisfy your company's requirements while also laying the groundwork for managing various cloud deployments. A proactive plan for managing multi-cloud setups is one of the capabilities that can most distinguish your company. Four strategies for handling multi-cloud complexity are outlined below.

Managing Data with AI and ML

AI and machine learning can help manage enormous quantities of data in multi-cloud environments. AI simulates human decision-making and performs tasks as well as humans, or at times even better. Machine learning is a type of artificial intelligence that learns from data, recognizes patterns, and makes decisions with minimal human interaction. Together, AI and ML help discover the most important data, reducing big-data and multi-cloud complexity while enabling more simplicity and better data control.

Integrated Management Structure

Keeping up with the growing number of cloud services from several providers requires a unified management structure. Managing multiple clouds demands IT time, resources, and technology to juggle and correlate infrastructure alternatives. Routinely monitor your cloud resources and service settings. It's important to manage apps, clouds, and people globally, and to ensure you have the technology and infrastructure to handle several clouds.

Developing a Security Strategy

Operating multiple clouds requires a security strategy and seamless integration of security capabilities. There's no single right answer, since vendors have varied policies and cybersecurity methods. Storing data across multiple cloud deployments protects against data loss, so handling backups and safety copies of your data is crucial. Regularly examine your multi-cloud network's security: the cyber threat environment will change as infrastructure and software do, and multi-cloud strategies must safeguard both data and applications.

Skillset Management

Multi-cloud complexity requires skilled operators. Do you have the appropriate IT personnel to handle multi-cloud? If not, can you use managed or cloud services? These individuals are responsible for teaching the organization how each cloud deployment helps the company accomplish its goals.
This specialist ensures all cloud entities work properly by utilizing cloud technologies.

Closing Lines

Traditional cloud monitoring solutions are incapable of dealing with dynamic multi-cloud setups, but automated intelligence excels at getting to the heart of cloud performance and security concerns. To begin with, businesses require end-to-end observability in order to see the overall picture. Add automation and causal AI to this capability, and teams can obtain the accurate answers they need to better optimize their environments, freeing them to concentrate on increasing innovation and generating better business results.
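As one hedged illustration of the "Managing Data with AI and ML" strategy above, the sketch below uses scikit-learn's IsolationForest to flag anomalous workload metrics pooled from several clouds. The metric names and values are invented for illustration; a real deployment would ingest telemetry from each provider's monitoring service (CloudWatch, Azure Monitor, Cloud Monitoring, and so on):

```python
# Minimal sketch: train an anomaly detector on "normal" multi-cloud
# telemetry, then score fresh samples from different providers.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: [cpu_util %, egress GB/hr, error rate %], synthetic baseline.
normal = np.random.default_rng(0).normal(
    loc=[55.0, 2.0, 0.5], scale=[10.0, 0.5, 0.2], size=(500, 3)
)
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New samples, e.g. from AWS, Azure, and GCP workloads; the last one
# is a deliberate spike that should be flagged.
samples = np.array([[58.0, 2.1, 0.4],
                    [52.0, 1.8, 0.6],
                    [97.0, 9.5, 4.2]])
for sample, label in zip(samples, model.predict(samples)):
    status = "anomaly" if label == -1 else "ok"   # predict() returns 1 or -1
    print(sample, "->", status)
```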


Network Virtualization: The Future of Businesses and Networks

Article | June 8, 2023

Network virtualization has emerged as the widely recommended solution for the future of the networking paradigm. In addition to providing a cost-effective, flexible, and secure means of communication, virtualization has the potential to revolutionize networks. Network virtualization isn't an all-or-nothing concept: it can help several organizations with differing requirements, or it can provide a range of new advantages for a single enterprise. It is the process of combining a network's physical hardware into a single, virtual network, often accomplished by running several virtual guest machines in software containers on a single physical host system.

Network virtualization is the new gold standard for networking, and it is being embraced by enterprises of all kinds globally. By integrating their current network gear into a single virtual network, businesses can reduce operating expenses, automate network and security processes, and lay the groundwork for future growth. Network virtualization also enables organizations to simulate traditional hardware such as servers, storage devices, and network resources. The physical network performs basic tasks like packet forwarding, while virtual versions handle more complex activities like networking service management and deployment.

Addressing Network Virtualization Challenges

IT teams may encounter network virtualization challenges that are both technical and non-technical in nature. Let's look at some common challenges and discuss how to overcome them.

Change in Network Architecture

The first big challenge is shifting away from an architecture that depends heavily on routers, switches, and firewalls. Instead, these services are detached from conventional hardware and placed on hypervisors that virtualize their operations. Virtualized network services are shared, scaled, and moved as required. Migrating current LANs and data centers to a virtualized platform requires careful planning, involving the following tasks (a rough sizing sketch appears below):

Determine how much CPU, computation, and storage capacity will be required to run virtualized network services.

Determine the optimal approach for integrating network resilience and security services.

Determine how the virtualized network services will be implemented in stages to avoid disrupting business operations.

The key to a successful migration is meticulous preparation by architects who understand the business's network requirements. This involves a thorough examination of existing apps and services, as well as a clear understanding of how data should move across the company most effectively. Moreover, a progressive approach to migration is often the best solution; IT teams can then make changes to the virtualization platform without disrupting the whole corporate network.

Network Visibility

Network virtualization can considerably expand the number of logical technology layers that must collaborate. As a result, traditional network and data center monitoring technologies no longer have insight into some of these abstracted levels. In other circumstances, visibility can be established, but the tools fail to present the information in a way network operators can understand. In either case, deploying and managing modern network visibility technologies is typically the best choice. When an issue arises, NetOps personnel are notified of the specific service layer.
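As a hedged illustration of the first migration task listed above, the following sketch does a back-of-the-envelope capacity estimate for a set of virtualized network services. Every service name and per-instance figure here is a placeholder, not vendor guidance:

```python
# Hypothetical sizing pass for virtualized network services.
# Per-service requirements and instance counts are made-up inputs
# an architect would replace with real measurements.
SERVICES = {
    # name: (vCPU per instance, RAM GiB, storage GiB, instances)
    "virtual_router":   (4, 8, 40, 6),
    "virtual_firewall": (8, 16, 80, 4),
    "load_balancer":    (2, 4, 20, 8),
}
HEADROOM = 1.3  # assume 30% spare capacity for failover and growth

totals = [0, 0, 0]
for vcpu, ram, disk, count in SERVICES.values():
    totals[0] += vcpu * count
    totals[1] += ram * count
    totals[2] += disk * count

vcpu, ram, disk = (round(t * HEADROOM) for t in totals)
print(f"Plan for ~{vcpu} vCPUs, {ram} GiB RAM, {disk} GiB storage")
```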
Automation and AI

An enhanced level of automation and self-service operations is a fundamental aspect of network virtualization. While these capabilities can considerably increase the pace of network upgrades while decreasing management overhead, they require a new set of standards and practices to be documented and implemented. Prior network architectures were planned and implemented using physical hardware appliances on a hop-by-hop basis. A virtualized network, by contrast, employs a centralized control plane to govern and push policies to all sections of the network. Changes can occur more quickly, but the various components must be coordinated to perform their roles in harmony. As a result, network teams should move their attention away from network operations that are already automated; their new responsibility is to guarantee that the core automation processes and AI stay in sync to fulfill those automated tasks.

Driving Competitive Edge with Network Virtualization

Virtualization in networking, and virtual machines within an organization, is not a new trend. Even small and medium businesses have realized the benefits of network virtualization, especially when combined with a hosted cloud service provider. Demand for enterprise network virtualization is rising, driven by higher end-user demands and the proliferation of devices and business tools. The following network virtualization benefits can help boost business growth and provide a competitive edge:

Cost Savings on Hardware

Faster Desktop and Server Provisioning and Deployment

Improved Data Security and Disaster Recovery

Increased IT Operational Efficiency

Small Footprint and Energy Savings

Network Virtualization: The Path to Digital Transformation

Business is at the center of digital transformation, but technology is needed to make it happen. Integrated clouds, highly modern data centers, digital workplaces, and increased data center security are all puzzle pieces, and putting them together requires a variety of products and services deployed cohesively. The cloud revolution is still influencing IT, transforming how digital content is consumed and delivered, and it should come as no surprise that such a shift has influenced how we think about modern networking. When it boils down to it, the purpose of digital transformation for every company, irrespective of industry, is the same: to boost the speed with which you can respond to market changes and evolving business needs, to enhance your ability to embrace and adapt to new technology, and to improve overall security. As businesses realize that the underlying benefit of cloud adoption and enhanced virtualization isn't simply cost savings, digital strategies are evolving, becoming more intelligent and successful in the process. Network virtualization is also a path toward the smooth digital transformation of any business.

How does virtualization help accelerate digital transformation?

Combining public and private clouds, involving hardware-based computing, storage, and networking with software definition. A hyper-converged infrastructure that integrates unified management with virtualized computing, storage, and networking could be included.

Creating a platform for greater productivity by providing the apps and services consumers require when and where they use them.
This should include simplifying application access and administration as well as unifying endpoint management.

Improving network security and enhancing security flexibility to guarantee that quicker speed to market is matched by tighter security.

Virtualization will also help businesses move more quickly and safely, bringing products, and profits, to market faster.

Enhancing Security with Network Virtualization

Security has evolved into an essential component of every network architecture. However, since various areas of the network are often segregated from one another, it can be challenging for network teams to design and enforce network virtualization security standards that apply to the whole network. Zero trust can integrate such network parts and their accompanying virtualization activities. Throughout the network, a zero-trust architecture depends on user and device authentication: if LAN users wish to access data center resources, they must first be authenticated. A zero-trust environment paired with network virtualization provides the secure connection endpoints need to interact safely (see the sketch after the FAQ below). To facilitate these interactions, virtual networks can be ramped up and down while retaining the appropriate degree of traffic segmentation. Access policies, which govern which devices can connect with one another, are a key part of this process. If a device is allowed to access a data center resource, the policy should be understood at both the WAN and campus levels.

Some of the core network virtualization security features are:

Isolation and multitenancy, which are critical features of network virtualization.

Segmentation, which is related to isolation but is utilized in a multitier virtual network.

Firewalling technologies, part of a network virtualization platform's foundation, that enable segmentation inside virtual networks.

Automatic provisioning and context-sharing across virtual and physical security systems.

Investigating the Role of Virtualization in Cloud Computing

In the cloud computing domain, virtualization refers to creating virtual resources (such as a virtual server, virtual storage device, virtual network switch, or even a virtual operating system) from a single physical resource of that type, presented as several isolated resources or environments that users can consume as if each were a separate physical resource. Virtualization enables the benefits of cloud computing, such as ease of scaling, security, and fluid or flexible resources. If another server is necessary, a virtual server is immediately created and deployed; when more memory is needed, the existing virtual server's configuration is increased to provide the extra RAM. As a result, virtualization is the underlying technology of the cloud computing business model.

The Benefits of Virtualization in Cloud Computing:

Efficient Hardware Utilization

Improved Availability

Quick and Simple Disaster Recovery

Energy Savings

Quick and Simple Setup

Simple Cloud Migration

Motivating Factors for the Adoption of Network Virtualization

Demand for enterprise networks continues to climb, owing to rising end-user demands and the proliferation of devices and business software. Thanks to network virtualization, IT organizations are gaining the ability to respond to shifting demands and match their networking capabilities with their virtualized storage and computing resources.
In fact, according to a recent SDxCentral report, 88% of respondents believe it is "important" or "mission critical" to implement network virtualization software over the next two to five years. Virtualization is also an excellent alternative for businesses that employ outsourced IT services, are planning mergers or acquisitions, or must segregate IT teams owing to regulatory compliance.

Reasons to Adopt Network Virtualization:

A Business Need for Speed

Rising Security Requirements

Apps Can Move Around

Micro-Segmentation

IT Automation and Orchestration

Reduced Hardware Dependency and CapEx: Multi-Tenancy

Cloud Disaster Recovery

Improved Scalability

Wrapping Up

Network virtualization and cloud computing are emerging technologies of the future. As CIOs get actively involved in organizational systems, these new concepts will be implemented in more businesses. As consumer demand for real-time services expands, businesses will be driven to explore network virtualization as the best way to take their networks to the next level. The networking future is here.

FAQ

Why is network virtualization important for business?

By integrating their current network gear into a single virtual network, businesses can reduce operating expenses, automate network and security processes, and set the stage for future growth.

Where is network virtualization used?

Network virtualization can be utilized in application development and testing to simulate hardware and system software realistically. In application performance engineering, network virtualization allows for the modeling of connections among applications, services, dependencies, and end users for software testing.

How does virtualization work in cloud computing?

Virtualization, in short, enables cloud providers to serve users with their existing physical computer infrastructure. As a simple and direct process, it allows cloud customers to buy only the computing resources they require when they need them, and to maintain those resources cost-effectively as demand grows.
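As referenced in the "Enhancing Security with Network Virtualization" section above, here is a toy sketch of a zero-trust access-policy check. The device classes, resource names, and default-deny rule are invented for illustration and do not represent any particular product:

```python
# Toy zero-trust policy evaluation: every device must authenticate,
# and an explicit policy decides which device classes may reach
# which resources. Entries below are hypothetical examples.
POLICY = {
    # (device_class, resource): allowed?
    ("corporate-laptop", "datacenter-db"): True,
    ("byod-phone", "datacenter-db"): False,
    ("byod-phone", "mail-gateway"): True,
}

def authorize(device_class: str, authenticated: bool, resource: str) -> bool:
    if not authenticated:        # zero trust: no implicit trust, ever
        return False
    # Unknown (device, resource) pairs fall through to default deny.
    return POLICY.get((device_class, resource), False)

assert authorize("corporate-laptop", True, "datacenter-db")
assert not authorize("corporate-laptop", False, "datacenter-db")
assert not authorize("byod-phone", True, "datacenter-db")
```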


Digital Marketplace: The Future of E-commerce

Article | July 12, 2022

It is no surprise that e-commerce has grown dramatically in recent years. I don't want to be boring, but the pandemic and a few other market factors have certainly played a role. Since ancient times, marketplaces of all shapes and sizes have served as the foundation for all types of business. As the world transforms and becomes more digital, the rise of digital marketplaces, e-commerce, and other types of online business is exploding. E-commerce marketplace platforms are rapidly expanding in the digital environment and are expected to gain momentum as the future of e-commerce. This growth is driven by the fact that online marketplaces aggregate user demand and provide customers with a broader selection of products.

Digital Marketplaces Are the Way to the Future of E-Commerce

Without a doubt, online marketplaces will dominate the e-commerce business in the coming years. According to Coresight Research, marketplace platform revenue will more than double, reaching around $40 billion in 2022. This means that by 2022, online marketplaces will account for 67% of worldwide e-commerce revenues (Forrester). Today, the issue is not whether you sell online but how far you can reach. E-commerce offers limitless opportunities, and all you need to do is keep pace with the trends. What are you doing right now? How far can you go? Have you already made the transition from local to global?

Digital marketplaces are indeed the way of the future of e-commerce. The earlier you realize this and integrate it into your sales and marketing approach, the better. The world is changing, and your competitors are not sleeping. You cannot overlook this trend if you want to stay ahead. It's all about the people in business, as it has always been. Understanding who you're pitching to is critical to your success, and everything you do in business should bring you closer to your target audience.

Closing Lines

Digital marketplaces are indeed the future of commerce. People will inevitably shop online even more in the future, which means methods and means will be developed to make such transactions easier for the average individual. Explore how your business might profit from these marketplaces and from the trends shaping the future of physical and online shopping.


Related News


VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP, with enhanced video processing performance to strengthen its presence in data center applications. The newly launched series caters to the advanced requirements of next-generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming.

The VC9800 series of VPU IP boasts high performance, high throughput, and server-level multi-stream encoding and decoding capabilities. It can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series improves video quality significantly with a low memory footprint and low encoding latency. Capable of 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, achieving an efficient transcoding solution.

The VC9800 series of VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon's Graphics Processor Unit (GPU) IP, the subsystem solution is able to deliver enhanced gaming experiences. In addition, the hardware virtualization, super resolution image enhancement, and AI-enabled encoding functions of this series also offer effective solutions for virtual cloud desktops.

"VeriSilicon's advanced video transcoding technology continues leading in Data Center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest Data Centers," said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. "For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. We've also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display."

About VeriSilicon

VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.


Minimize the Cost and Downtime of Disaster With Scale Computing's Business Continuity/Disaster Recovery Planning Service

PR Newswire | October 25, 2023

Scale Computing, a market leader in edge computing, virtualization, and hyperconverged solutions, today announced its Business Continuity/Disaster Recovery (BCDR) Planning Service, designed to help organizations establish a comprehensive, regulated plan for responding to unforeseen downtime. The service provides Scale Computing customers and partners with the tools, guidance, and resources to create a playbook for data backup and recovery, enabling businesses to endure a disaster scenario with minimal loss.

Scale Computing also recently announced that it is a finalist for the Business Continuity/Disaster Recovery Project of the Year in the 2023 SDC Awards for its work with Austrian managed service provider GiGaNet and its long-time partner, the Zillertaler Gletscherbahn group. Voting for the SDC Awards is open at sdcawards.com/vote until November 10th, 2023.

Data breaches are one of the biggest and most costly contributors to downtime for businesses. In 2023, the average cost of a data breach globally reached an all-time high of $4.45 million, a 15.3% increase from 2020. Simultaneously, the average length of business disruption following a ransomware attack in the United States reached 24 days last year, up 60% from just two years prior, a significant increase when downtime costs exceed $300,000 per hour for over 90% of mid-sized and large enterprises. For more than half of those businesses, hourly outage costs range from $1 million to over $5 million. Recovery from an outage adds expense from which many enterprises are unable to bounce back.

"Disaster can strike at any time, and every organization needs a consistently regulated playbook for how the business will respond — from action plans to recovery plans for bringing online the mission-critical servers businesses depend on," said Jeff Ready, CEO and co-founder, Scale Computing. "Knowing what systems need to be protected, planning for the ability to recover them, and having a full action plan for recovery should be at the forefront of every IT department's agenda, at the beginning of any infrastructure addition. With Scale Computing Platform, the plan for disaster recovery starts before equipment is even put into production, so IT leaders have a plan in place from day one that they can enact to ensure their business stays up and running, with minimal loss, should disaster strike. Our Business Continuity/Disaster Recovery Planning Service enables businesses to proactively classify systems based on their importance and implement a robust action plan, ensuring that our customers' and partners' critical systems are protected, validated, tested, and ready for recovery at any time."

Whether for a minor data loss or a business-wide shutdown, a well-defined business continuity strategy is crucial to minimize financial impact, ensure continuous employee productivity, meet compliance and regulatory requirements, decrease liability obligations, reduce downtime, and minimize the risk of negative exposure. Scale Computing's BCDR Planning Service includes planning, deployment, documentation creation, and disaster recovery testing, covering every aspect to keep businesses prepared and resilient. The service is offered to customers of Scale Computing Platform, which brings simplicity, high availability, and scalability together to replace existing infrastructure for running virtual machines with an easy-to-manage, fully integrated platform that allows organizations to run applications regardless of hardware requirements.
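To put those downtime figures in perspective, here is a toy calculation using only the numbers quoted above; the business-hours assumption is ours, not Scale Computing's:

```python
# Toy downtime-cost arithmetic from the figures cited in the release.
COST_PER_HOUR = 300_000   # USD, the quoted lower bound per outage hour
DISRUPTION_DAYS = 24      # quoted average US ransomware disruption
HOURS_PER_DAY = 8         # assumed business hours per day (our guess)

total = COST_PER_HOUR * DISRUPTION_DAYS * HOURS_PER_DAY
print(f"~${total:,} in downtime alone")   # ~$57,600,000
```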
About Scale Computing

Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real time, enabling applications to achieve maximum uptime, even when local IT resources and staff are scarce. Edge computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing's products are sold by thousands of value-added resellers, integrators, and service providers worldwide.


StorMagic Introduces Edge Control Software to Simplify SvSAN Monitoring and Management

Business Wire | October 18, 2023

StorMagic®, solving the world’s edge data problems, today announced the immediate availability of a new Software as a Service (SaaS) tool that allows users to easily monitor and manage all of their SvSAN clusters around the world. StorMagic Edge Control simplifies the process and tools required for day-to-day SvSAN cluster administration. SvSAN customers with multiple locations can significantly reduce the time spent managing their edge sites, whether they are using VMware, Microsoft or KVM hypervisors.

“ESG research shows increasing demand for data storage at the edge, which fuels an increased need for monitoring solutions that can help address the complexity of storage at the edge,” said Scott Sinclair, practice director at Enterprise Strategy Group. “SvSAN customers can greatly benefit by adding StorMagic Edge Control into their toolkits; the dashboard views and list formats will make centralized data management much easier and more accessible.”

Edge Control delivers centralized administration for SvSAN environments of all sizes. Customers can now manage all SvSAN deployments in any location from a single pane of glass. Dashboard and system views provide a fast but comprehensive status of all of their virtual storage appliances (VSAs), allowing them to keep their environment up to date more easily and react faster as needed.

“StorMagic customers of any size can now manage their entire SvSAN estate, whether it’s one site or thousands of sites around the world,” said Bruce Kornfeld, chief marketing and product officer, StorMagic. “Edge Control is particularly interesting for customers who are considering switching from VMware to Microsoft or Linux KVM because SvSAN and Edge Control are both hypervisor agnostic.”

Pricing and Availability

Edge Control version 1.0 is available today from StorMagic. SvSAN customers can download and begin using the software immediately, free of charge.

About StorMagic

StorMagic is solving the world’s edge data problems. We help organizations store, protect and use data at and from the edge. StorMagic’s solutions ensure data is always protected and available, no matter the type or location, to provide value anytime, anywhere. StorMagic’s storage and security products are flexible, robust, easy to use and cost-effective, without sacrificing enterprise-class features, for organizations with one to thousands of sites.
