Current Trends in Distributed Control Systems

Major trends in the architecture of distributed control systems (DCS) are not adopted overnight. ARC Advisory Group is currently following several ongoing trends in DCS, ranging from highly mature ones, such as remote I/O enclosures, to newly emerging ones, such as the virtualization of control functionality. Different DCS suppliers are at different stages of adopting these trends in their own product lines. Looking across the entire market, however, we see several key trends that build on each other; several others are barely visible today but should become quite important over the next three to five years. Overall, these trends are driven by a small number of factors. One is the growing capability of remote I/O and enclosures, which have evolved from simple field junction boxes into complete control systems. Another technological driver is the growing use of virtualization across different areas of DCS products and markets.

Spotlight

Helmes AS

Helmes is an international software house with headquarters in Tallinn, clients across Europe, and development centres in Estonia and Belarus. Our clients include leading telecom operators, banks and insurance companies, logistics corporations, and government institutions, such as TeliaSonera, SEB, NasdaqOMX, Audatex, Kuehne + Nagel, and the OECD, to name a few. Our partners trust us with the design, development, and maintenance of their most business-critical software solutions and their most complex system integration projects.

OTHER ARTICLES
Virtual Desktop Tools, Virtual Desktop Strategies

Boosting Productivity with Kubernetes and Docker

Article | June 8, 2023

Learn how to set up a Docker and Kubernetes environment with the right considerations, and choose the best-suited software for your business needs from ten leading tools and platforms.

Contents

This blog discusses how Kubernetes and Docker can boost software development and deployment productivity. It also covers the role of Kubernetes in orchestrating containerized applications and best practices for implementing these technologies to improve efficiency and streamline workflows. Docker and Kubernetes are both essential utilities in the containerization ecosystem: Docker is used for creating and operating containers, whereas Kubernetes, an excellent DevOps solution, manages and automates the deployment and scaling of containers across clusters of hosts. The blog offers tips to consider while choosing tools and platforms, and it then profiles ten platforms that provide Kubernetes and Docker, featuring their offerings.

1. Considerations While Setting Up a Development Environment with Kubernetes and Docker

1.1 Fluid app delivery
A platform for application development must provide development teams with high velocity. Two factors contribute to high velocity: rapid application delivery and brief development cycles. Application platforms must support build processes that start with source code, and they must facilitate repeatable deployment of applications to any remote staging instance.

1.2 Polyglot support
Consistency is the defining characteristic of an application platform. On-demand, repetitive, and reproducible builds must be supported, and extending a consistent experience across all languages and frameworks elevates the platform experience. The platform must support a native build process as well as the ability to develop and customize that process.

1.3 Baked-in security
Containerized environments are secured quite differently from conventional applications. A fundamental best practice is to use binaries compiled with all necessary dependencies, and the build procedure should include a step that eliminates components the application does not need at runtime. Setting up a zero-trust architecture between the platform components that orchestrate deployments significantly improves the workloads' security posture.

1.4 Adjustable abstractions
A platform with paved paths and the flexibility to accommodate the requirements of software engineering teams has a greater chance of success. Open-source platforms score highly in this regard, particularly those with modular architectures that allow the team to swap out parts as needs change.
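To make the source-to-running-application workflow described in 1.1 concrete, here is a minimal sketch using the Docker SDK for Python (docker-py) to build an image from a local source directory and start a container from it. The build path, image tag, port mapping, and container name are hypothetical placeholders, and the sketch assumes a running Docker daemon and a Dockerfile in the project directory; it illustrates the repeatable-build idea rather than prescribing any particular platform's workflow.

```python
# Minimal, repeatable build-and-run sketch with the Docker SDK for Python.
# Assumes `pip install docker`, a running Docker daemon, and a Dockerfile in
# ./myapp (the path, tag, port, and name below are hypothetical placeholders).
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build the image directly from source so every build starts from the same inputs.
image, build_logs = client.images.build(path="./myapp", tag="myapp:dev", rm=True)
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Run the freshly built image; pointing the Docker client at a remote staging
# host turns the same two calls into a repeatable deployment step.
container = client.containers.run(
    "myapp:dev",
    detach=True,
    ports={"8080/tcp": 8080},
    name="myapp-staging",
)
print("started:", container.short_id)
```

Wrapped in a CI job, this kind of script keeps staging deployments reproducible from source, which is the high-velocity behaviour section 1.1 asks a platform to provide.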
2. Top Tips to Consider While Choosing Tools and Platforms for Kubernetes and Docker

2.1 Production-Readiness
Configuring Kubernetes or Docker can be complex and resource-intensive. A production-ready platform provides the necessary fully automated features without extra configuration. Security is an essential aspect of production readiness, and automation is critical, since production readiness requires that the solution handle all cluster management duties. Automated backup, recovery, and restore capabilities must be considered, and the platform should ensure the cluster's high availability, scalability, and self-healing.

2.2 Future-Readiness
As the cloud and software evolve, where a system is hosted may affect its efficacy. The current trend is a multi-cloud strategy. Ensure that the platform can abstract away individual cloud or data center providers and can build a shared infrastructure across clouds, cloud regions, and data centers, as well as assist in configuring them if required. According to a recent study, nearly one-third of organizations are already working with four or more cloud service providers. (Source: Microsoft and 451 Research)

2.3 Ease of Administration
Managing a Docker or Kubernetes cluster is complex and requires varied skill sets. Kubernetes generates a lot of unprocessed data that must be interpreted to understand what is happening in the cluster, and early detection and intervention are crucial to preventing disasters. Identifying a platform that removes the burden of analyzing raw data is essential. By incorporating automated, intelligent monitoring and alerts, such solutions surface critical status, error, event, and warning data so that appropriate action can be taken.

2.4 Assistance and Training
As the organization begins to acquire Kubernetes or Docker skills, it is essential to have a vendor that can provide 24/7 support and training to ensure a seamless transition; incorrect implementation adds a layer of complexity to infrastructure management. Leverage automation tools that offer the support needed to use Kubernetes and Docker without the management burden.
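Section 2.3 notes that Kubernetes emits a large volume of raw event data that must be interpreted before anyone can act on it. As a minimal sketch of what automated monitoring builds on, the snippet below uses the official Kubernetes Python client to stream cluster events and print only warnings. It assumes `pip install kubernetes` and a working kubeconfig, and the warnings-only filter is a hypothetical example rather than a rule from the article or any particular platform.

```python
# Minimal event-watching sketch with the official Kubernetes Python client.
# Assumes `pip install kubernetes` and a kubeconfig pointing at the cluster.
from kubernetes import client, config, watch

config.load_kube_config()      # use config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

w = watch.Watch()
# Stream cluster events and surface only warnings, a stand-in for the
# "automated intelligent monitoring and alerts" a managed platform provides.
for event in w.stream(v1.list_event_for_all_namespaces, timeout_seconds=60):
    obj = event["object"]
    if obj.type == "Warning":  # hypothetical filter: warnings only
        print(f"{obj.last_timestamp} {obj.involved_object.kind}/"
              f"{obj.involved_object.name}: {obj.reason} - {obj.message}")
```

The platforms listed in the next section replace this kind of hand-rolled loop with dashboards, alert routing, and automated remediation.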
3. 10 Tools and Platforms Providing Kubernetes and Docker

3.1 Aqua Cloud Native Security Platform
Aqua Security provides the Aqua Cloud Native Security Platform, a comprehensive security solution designed to protect cloud-native applications and microservices. Aqua offers end-to-end security for applications operating on Docker Enterprise Edition (Community Edition), protecting the DevOps pipeline and production workloads with complete visibility and control. It provides end-to-end security across the entire application lifecycle, from development to production, for both containerized and serverless workloads. In addition, it automates prevention, detection, and response across the whole application lifecycle to secure the build, the cloud infrastructure, and running workloads, regardless of where they are deployed.

3.2 Weave GitOps Enterprise
Weave GitOps Enterprise, a full-stack, developer-centric operating model for Kubernetes, creates and contributes to several open-source projects. Its products and services enable teams to design, build, and operate their Kubernetes platform at scale. Built by the creators of Flux and Flagger, Weave GitOps allows users to deploy and manage Kubernetes clusters and applications in the public or private cloud or in their own data center. Weave GitOps Enterprise helps simplify Kubernetes with fully automated continuous delivery pipelines that roll out changes from development to staging and production. Weaveworks has used Kubernetes in production for over eight years and has distilled that expertise into Weave GitOps Enterprise.

3.3 Mirantis Kubernetes Engine
Mirantis provides the Mirantis Kubernetes Engine, a platform designed to help organizations deploy, manage, and scale their Kubernetes clusters. It includes features such as container orchestration, automated deployment, monitoring, and high availability, all designed to help organizations build and run their applications at scale. Mirantis Kubernetes Engine also includes a set of tools for managing the lifecycle of Kubernetes clusters, including cluster deployment, upgrades, and patching. It also offers security scanning and policy enforcement features, as well as integration with other enterprise IT systems such as Active Directory and LDAP.

3.4 Portworx by Pure Storage
Portworx's deep integration into Docker delivers the benefits of Portworx container data services directly through the Docker Swarm scheduler. Swarm service creation brings the management capability of Portworx to the Docker persistent storage layer, simplifying complex tasks such as increasing the storage pool without container downtime and avoiding problems like stuck EBS drives. Portworx is also a multi-cloud-ready Kubernetes storage and administration platform designed to simplify and streamline data management in Kubernetes. The platform abstracts away the complexity of data storage in Kubernetes and serves as a software-defined layer that aggregates Kubernetes nodes' data storage into a virtual reservoir.

3.5 Platform9
Platform9 provides a powerful IDE for developers with simplified, in-context views of pods, logs, events, and more. Both development and operations teams can access the information they need in an instant, secured through SSO and Kubernetes RBAC. The industry's first SaaS-managed approach, combined with a best-in-class support and customer success organization holding a 99.9% consistent CSAT rating, delivers production-ready Kubernetes to organizations of any size. Platform9 provides services to deploy a cluster instantly, achieve GitOps faster, and take care of every aspect of cluster management, including remote monitoring, self-healing, automatic troubleshooting, and proactive issue resolution, around the clock.

3.6 Kubernetes Network Security
Sysdig provides Kubernetes Network Security, a solution that offers cloud security from source to run. The product provides network security for Kubernetes environments by monitoring and blocking suspicious traffic in real time, helping organizations protect their Kubernetes clusters against advanced threats and attacks. Together with Sysdig Secure, it offers Kubernetes network monitoring to investigate suspicious traffic and connection attempts, Kubernetes-native microsegmentation to enable microsegmentation without breaking the application, and automated network policies to save time by automating Kubernetes network policy creation.

3.7 Kubernetes Operations Platform for Edge
Rafay delivers a production-ready Kubernetes Operations Platform for Edge, streamlining ongoing operations for edge applications. It provides centralized multi-cluster management to deploy, manage, and upgrade all Kubernetes clusters from a single console across all edge nodes. It also offers comprehensive lifecycle management, with which users can quickly and easily provision Kubernetes clusters at the edge, and cluster updates and upgrades are seamless with no downtime. Furthermore, the KMC for Edge quickly integrates with enterprise-class SSO solutions such as Okta, Ping One, and Azure AD, among others. Other features include standardized clusters and workflows, integration and automation, and centralized logging and monitoring.

3.8 Opcito Technologies
Opcito provides simplified container management with efficient provisioning, deployment, scaling, and networking. Its application containerization expertise helps containerize existing and new applications and their dependencies. Opcito is well-versed in leading container orchestration platforms such as Docker Swarm and Kubernetes. While it helps clients choose the container platform that best suits their application needs, it also helps with end-to-end management of containers so clients can release applications faster and focus on innovation and business. Its container management and orchestration services include building secured microservices, enterprise-scale container management and orchestration, and container monitoring.

3.9 D2iQ Kubernetes Platform (DKP)
The D2iQ Kubernetes Platform (DKP) enables enterprises to take advantage of all the benefits of cloud-native Kubernetes while laying the groundwork for intelligent cloud-native innovation by simplifying Kubernetes deployment and maintenance. It simplifies and automates the most difficult parts of an enterprise Kubernetes deployment across all infrastructures, helping enterprises overcome operational barriers and get set up in minutes or hours rather than weeks or months. In addition, DKP simplifies Kubernetes management through automation using a GitOps workflow, observability, an application catalog, real-time cost management, and more.

3.10 Spektra
Spektra, by Diamanti, is a multi-cluster management solution for DevOps and production teams. It provides centralized multi-cluster management: a single control plane that delivers everything needed to provision and manage the lifecycle of multiple clusters. Spektra is built to cater to business needs ranging from air-gapped on-premises deployments to hybrid and multi-cloud infrastructures. It also enables stretching resources across different clusters within a tenant and allows workloads and their associated data to be moved from one cluster to another directly from its dashboard. Spektra integrates with the Lightweight Directory Access Protocol (LDAP) and Active Directory (AD) to enable user authentication and streamline resource access. In addition, it offers application migration, data mobility, and reporting.

4. Conclusion
It is evident that Kubernetes and Docker can significantly boost software development and deployment productivity. By adopting appropriate containerization platforms and leveraging Kubernetes for orchestration, organizations can streamline workflows, improve efficiency, and enhance the reliability of their applications. Following the tips above when choosing tools and platforms can improve productivity further.

Read More
Virtual Desktop Tools, Server Hypervisors

How virtualization helped Dell make a pandemic pivot

Article | April 28, 2023

Danny Cobb, fellow and vice president of engineering for Dell Technologies’ telco systems business, remembers his company cruising into early 2020: kicking off a new fiscal year with its operating plan in place, supply chain nailed down, and factories humming; people coming into the office each day to the usual routine of looking for parking spots and taking laptops down to the cafeteria. Then came March, and the first wave of the Covid-19 pandemic hit U.S. shores. In the course of one weekend, Dell pivoted to having more than 90% of its workforce working from home. That meant a dramatic shift in its network needs and operations, one that could be accomplished so quickly only because of virtualized infrastructure.

Read More
Server Hypervisors

Addressing Multi-Cloud Complexity with VMware Tanzu

Article | September 9, 2022

Introduction
With cloud computing on the path to becoming the mother of all transformations, particularly in how IT develops and operates software, we are once again confronted with the problem of conversion errors, this time a hundredfold higher than in previous moves to distributed computing and the web. While the issue is evident, the remedies are not so obvious. Cloud complexity is the outcome of the rapid acceleration of cloud migration and net-new innovation without consideration of the complexity this introduces into operations. Almost all businesses already work in a multi-cloud or hybrid-cloud environment; according to an IDC report, 93% of enterprises utilize multiple clouds. The decision could have stemmed from a desire to save money, avoid vendor lock-in, or increase resilience, or businesses might have found themselves with several clouds as a result of the compounding activities of different teams. When it comes to strategic technology choices, relatively few businesses begin by asking, "How can we secure and control our technology?"

Must-Follow Methods for Multi-Cloud and Hybrid-Cloud Success
Data Analysis at Any Size, from Any Source: To proactively recognize, warn, and guide investigations, teams should be able to utilize all data throughout the cloud and on-premises.
Insights in Real Time: Given the temporary nature of containerized operations and functions as a service, businesses cannot wait minutes to determine whether they are experiencing infrastructure difficulties. Only a scalable streaming architecture can ingest, analyze, and alert rapidly enough to discover and investigate problems before they have a major impact on consumers.
Analytics That Enable Teams to Act: Because multi-cloud and hybrid-cloud strategies do not belong to a single team, businesses must be able to evaluate data within and across teams in order to make decisions and take action swiftly.

How Can VMware Help in Solving Multi-Cloud and Hybrid-Cloud Complexity?
VMware has made several announcements indicating a new strategy focused on modern applications. The approach centers on two VMware products: vSphere with Kubernetes and Tanzu. Since then, much has been said about VMware's modern app approach, and several products have launched. Let's focus on VMware Tanzu.

VMware Tanzu
Tanzu is a product that enables organizations to upgrade both their apps and the infrastructure that supports them. In the same way that VMware wants vRealize to be known for cloud management and automation, it wants Tanzu to be known for modern business applications. Tanzu uses Kubernetes to build and manage modern applications; in Tanzu, there is just one development environment and one deployment process. VMware Tanzu is compatible with both private and public cloud infrastructures.

Closing Lines
The important point is that the Tanzu portfolio offers a great deal of flexibility in terms of where applications run and how they are controlled. We see increasing demand for running an application on any cloud, and VMware Tanzu helps streamline multi-cloud operation for the MLOps pipeline. Beyond multi-cloud operation, it is critical to monitor and alert on each component throughout the MLOps lifecycle, from Kubernetes pods and inference services to data and model performance.
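The closing discussion above is about operating and observing one application across several clouds. As a purely illustrative sketch, and not a description of Tanzu's own tooling or APIs, the snippet below uses the official Kubernetes Python client to check the same workload in two clusters by switching kubeconfig contexts; the context, namespace, and label names are hypothetical placeholders, and it assumes `pip install kubernetes` plus a kubeconfig containing both contexts.

```python
# Illustrative multi-cluster status check with the Kubernetes Python client.
# Assumes a kubeconfig with both contexts; all names below are hypothetical.
from kubernetes import client, config

CONTEXTS = ["aws-prod", "onprem-prod"]   # one context per cluster/cloud

for ctx in CONTEXTS:
    api_client = config.new_client_from_config(context=ctx)
    core = client.CoreV1Api(api_client)
    pods = core.list_namespaced_pod(
        namespace="ml-inference", label_selector="app=model-server"
    )
    running = sum(1 for p in pods.items if p.status.phase == "Running")
    print(f"{ctx}: {running}/{len(pods.items)} model-server pods running")
```

A multi-cloud management layer such as the one described above aims to remove the need for this per-cluster bookkeeping by presenting the clusters behind a single control plane.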

Read More
Virtual Desktop Tools

Rising Importance of Network Virtualization

Article | July 26, 2022

Network virtualization combines network resources to integrate several physical networks, segment a network, or construct software networks between VMs. Using network virtualization, IT teams can create numerous separate virtual networks, and virtual networks can be added and scaled without changing hardware. Teams can spin up logical networks more rapidly in response to business needs, and this adaptability improves service delivery, efficiency, and control.

Importance of Network Virtualization
Network virtualization entails developing new rules for the delivery of network services. This involves software-defined data centers (SDDC), cloud computing, and edge computing. Virtualization helps transform networks from rigid, wasteful, and static to optimized, agile, and dynamic. To ensure agility and speed, modern virtual networks must keep up with the needs of cloud-hosted, decentralized applications while addressing cyberthreats. Network virtualization lets you deploy and upgrade applications in minutes, eliminating the time otherwise spent setting up infrastructure to accommodate new applications.

What Is the Process of Network Virtualization?
Several network functions that were previously performed manually on hardware are now automated through network virtualization. Network managers can construct, maintain, and provision networks programmatically in software while employing the hardware as a packet-forwarding backplane. Physical network resources, such as virtual private networks (VPNs), load balancing, firewalling, routing, and switching, are pooled and delivered in software. To do this, you merely require Internet Protocol (IP) packet forwarding from the hardware or physical network. Individual workloads, such as virtual machines, can access network services that have been distributed to a virtual layer. Several kinds of virtual machines are available; the best enable network administrators to access all parts of a network from a single point of access.

Closing Lines
Network virtualization will remain a critical component of both business and carrier network architectures. Future network virtualization projects will inevitably incorporate zero trust, automation, and edge and cloud computing.
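As a small, concrete illustration of provisioning a network in software rather than on hardware, the sketch below uses the Docker SDK for Python to create a software-defined bridge network with its own address space and attach a container to it. This is one narrow example of the broader idea described above rather than the article's own procedure; the network name, subnet, image, and container name are hypothetical placeholders, and the sketch assumes `pip install docker` and a running Docker daemon.

```python
# Minimal sketch: creating a virtual network programmatically with the
# Docker SDK for Python. Names and the subnet below are hypothetical.
import docker

client = docker.from_env()

# Define an isolated, software-defined network with its own address space.
ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(subnet="10.20.0.0/24")]
)
net = client.networks.create("app-segment", driver="bridge", ipam=ipam)

# Attach a workload to the virtual network; no physical switch or VLAN is
# reconfigured, only IP packet forwarding on the host is used.
container = client.containers.run(
    "nginx:alpine", detach=True, name="web-1", network="app-segment"
)
print("network:", net.name, "->", container.name)
```

Tearing the segment down is equally programmatic (stop and remove the container, then call net.remove()), which is what allows virtual networks to be added, scaled, and retired without touching hardware.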

Read More

Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP with enhanced video processing performance to strengthen its presence in data center applications. The newly launched IP series caters to the advanced requirements of next-generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming. The VC9800 series of VPU IP offers high performance, high throughput, and server-level multi-stream encoding and decoding capabilities. It can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series IP improves video quality significantly with a low memory footprint and low encoding latency. Capable of supporting 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, achieving an efficient transcoding solution. The VC9800 series of VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon’s Graphics Processor Unit (GPU) IP, the subsystem solution is able to deliver enhanced gaming experiences. In addition, the hardware virtualization, super resolution image enhancement, and AI-enabled encoding functions of this IP series also offer effective solutions for virtual cloud desktops.

“VeriSilicon’s advanced video transcoding technology continues to lead in the data center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest data centers,” said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. “For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. We’ve also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display.”

About VeriSilicon
VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.

Read More

Backup and Disaster Recovery

Minimize the Cost and Downtime of Disaster With Scale Computing's Business Continuity/Disaster Recovery Planning Service

PR Newswire | October 25, 2023

Scale Computing, a market leader in edge computing, virtualization, and hyperconverged solutions, today announced its Business Continuity/Disaster Recovery (BCDR) Planning Service, designed to help organizations establish a comprehensive, regulated plan for responding to unforeseen downtime. The service provides Scale Computing customers and partners with the tools, guidance, and resources to create a playbook for data backup and recovery, enabling businesses to endure a disaster scenario with minimal loss. Scale Computing also recently announced that it is a finalist for the Business Continuity/Disaster Recovery Project of the Year in the 2023 SDC Awards for its work with Austrian managed service provider GiGaNet and its long-time partner the Zillertaler Gletscherbahn group. Voting for the SDC Awards is open at sdcawards.com/vote until November 10th, 2023. Data breaches are one of the biggest and most costly contributors to downtime for businesses. In 2023, the average cost of a data breach globally reached an all-time high of $4.45 million, a 15.3% increase from 2020. Simultaneously, the average length of business disruption following a ransomware attack in the United States reached 24 days last year, up 60% from just two years prior — a significant increase when downtime costs exceed $300,000 per hour for over 90% of mid-sized and large enterprises. For more than half of those businesses, the hourly outage costs range from $1 million to over $5 million. Recovery from an outage adds additional expense from which many enterprises are unable to bounce back. "Disaster can strike at any time, and every organization needs a consistently regulated playbook for how the business will respond — from action plans to recovery plans for bringing online the mission-critical servers businesses depend on," said Jeff Ready, CEO and co-founder, Scale Computing. "Knowing what systems need to be protected, planning for the ability to recover them, and having a full action plan for recovery should be at the forefront of every IT department's agenda, at the beginning of any infrastructure addition. With Scale Computing Platform, the plan for disaster recovery starts before equipment is even put into production, so IT leaders have a plan in place from day one that they can enact to ensure their business stays up and running, with minimal loss, should disaster strike. Our Business Continuity/Disaster Recovery Planning Service enables businesses to proactively classify systems based on their importance and implement a robust action plan, ensuring that our customers' and partners' critical systems are protected, validated, tested, and ready for recovery at any time." Whether a minor data loss or a business-wide shutdown, having a well-defined business continuity strategy is crucial to minimize financial impact, ensure continuous employee productivity, meet compliance and regulatory requirements, decrease liability obligations, reduce downtime, and minimize the risk of negative exposure. Scale Computing's BCDR Planning Service includes planning, deployment, documentation creation, and disaster recovery testing, covering every aspect to keep businesses prepared and resilient. The service is offered to Scale Computing Platform customers, which brings simplicity, high availability, and scalability together to replace existing infrastructure for running virtual machines with an easy-to-manage, fully integrated platform that allows organizations to run applications regardless of hardware requirements. 
About Scale Computing
Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real time, enabling applications to achieve maximum uptime even when local IT resources and staff are scarce. Edge computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing's products are sold by thousands of value-added resellers, integrators, and service providers worldwide.

Read More

Server Virtualization, VMware

StorMagic Introduces Edge Control Software to Simplify SvSAN Monitoring and Management

Business Wire | October 18, 2023

StorMagic®, solving the world’s edge data problems, today announced the immediate availability of a new Software as a Service (SaaS) tool that allows users to easily monitor and manage all of their SvSAN clusters around the world. StorMagic Edge Control simplifies the process and tools required for day-to-day SvSAN cluster administration. SvSAN customers with multiple locations can significantly reduce the time spent managing their edge sites, whether they are using VMware, Microsoft or KVM hypervisors. “ESG research shows increasing demand for data storage at the edge which fuels an increased need for monitoring solutions that can help address the complexity of storage at the edge,” said Scott Sinclair, practice director at Enterprise Strategy Group. “SvSAN customers can greatly benefit by adding StorMagic Edge Control into their toolkits; the dashboard views and list formats will make centralized data management much easier and more accessible.” Edge Control delivers centralized administration for SvSAN environments of all sizes. Customers can now manage all SvSAN deployments in any location from a single pane of glass. Dashboard and system views provide a fast but comprehensive status of all of their virtual storage appliances (VSAs), allowing them to keep their environment up-to-date more easily and react faster as needed. “StorMagic customers of any size can now manage their entire SvSAN estate, whether it’s one site or thousands of sites around the world,” said Bruce Kornfeld, chief marketing and product officer, StorMagic. “Edge Control is particularly interesting for customers who are considering switching from VMware to Microsoft or Linux KVM because SvSAN and Edge Control are both hypervisor agnostic.”

Pricing and Availability
Edge Control version 1.0 is available today from StorMagic. SvSAN customers can download and begin using the software immediately, free of charge.

About StorMagic
StorMagic is solving the world’s edge data problems. We help organizations store, protect and use data at and from the edge. StorMagic’s solutions ensure data is always protected and available, no matter the type or location, to provide value anytime, anywhere. StorMagic’s storage and security products are flexible, robust, easy to use and cost-effective, without sacrificing enterprise-class features, for organizations with one to thousands of sites.

Read More

