Virtualization Can Help Substantially Reduce Computing Costs

Businesses use a lot of technology to keep themselves competitive and operationally efficient. One way organizations make their technology infrastructure more accessible is through virtualization. Let’s discuss what virtualization is, how it benefits businesses, and some examples of how you might leverage virtualization to your company’s benefit.

Virtualization for Hardware and Software
Virtualization, in its most basic sense, is taking something physical and making it virtual. For hardware and software, it means taking these parts of your technology infrastructure and making them available in a virtual environment. Virtual applications and hardware solutions can be deployed to the cloud so that they can be accessed by any online device. Examples of virtualization include creating virtual machines, such as workstations and server units, that are hosted in a virtual environment for as-needed access.
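As one illustration of that as-needed access, the sketch below provisions a small virtual server in a public cloud. It is a minimal example, assuming AWS credentials and a default region are already configured and using the boto3 SDK; the AMI ID and instance type are placeholders, not recommendations.

```python
# Minimal sketch: provisioning a virtual server on demand with boto3.
# Assumes AWS credentials and a default region are already configured;
# the AMI ID below is a placeholder and must be replaced with a real image.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",          # small, low-cost instance class
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "virtualization-demo"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched virtual machine {instance_id}")
```

The same idea applies to virtual desktops and applications: resources are created when needed and torn down when they are not, which is where the cost savings come from.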

Spotlight

Neepa Trainings

Neepa Trainings is a cybersecurity technical certification body. We are proud to have trained and certified over 50,000 information security professionals.

OTHER ARTICLES
Virtual Desktop Tools, Server Hypervisors

Virtual Machine Security Risks and Mitigation in Cloud Computing

Article | June 8, 2023

Analyzing risks and implementing advanced mitigation strategies: safeguard critical data, fortify defenses, and stay ahead of emerging threats in the dynamic realm of virtual machines in the cloud.

Contents
1. Introduction
2. 10 Security Risks Associated with Virtual Machines in Cloud Computing
3. Best Practices to Avoid Security Compromise
4. Conclusion

1. Introduction

Cloud computing has revolutionized the way businesses operate by providing flexible, scalable, and cost-effective infrastructure for running applications and services. Virtual machines (VMs) are a key component of cloud computing, allowing multiple virtual machines to run on a single physical machine. However, the use of virtual machines in cloud computing introduces new security risks that need to be addressed to ensure the confidentiality, integrity, and availability of data and services. Effective VM security in the cloud requires a comprehensive approach in which cloud providers and users work together to identify and address potential virtual machine security threats. By implementing the best practices below and maintaining a focus on security, cloud computing can provide a secure and reliable platform for businesses to run their applications and services.

2. 10 Security Risks Associated with Virtual Machines in Cloud Computing

- Denial of Service (DoS) attacks: Attacks that aim to disrupt the availability of a VM or the entire cloud infrastructure by overwhelming the system with traffic or resource requests.
- Insecure APIs: Cloud providers often expose APIs that allow users to manage their VMs. If these APIs are not properly secured, attackers can exploit them to gain unauthorized access to VMs or manipulate their configurations.
- Data leakage: Virtual machines can store sensitive data such as customer information or intellectual property. If not secured, this data can be exposed to unauthorized access or leakage.
- Shared resources: VMs in cloud environments often share physical resources such as memory, CPU, and network interfaces. If these resources are not isolated, a compromised VM can potentially affect the security and performance of other VMs running on the same physical host.
- Lack of visibility: Virtual machines in cloud environments can be more difficult to monitor than physical machines, which can make it harder to detect security incidents or anomalous behavior.
- Insufficient logging and auditing: If cloud providers do not implement appropriate logging and auditing mechanisms, it can be difficult to determine the cause and scope of a security incident.
- VM escape: An attacker gains access to the hypervisor layer and then escapes into the host operating system or other VMs running on the same physical host.
- Side-channel attacks: An attacker exploits the physical characteristics of the hardware to gain unauthorized access to a VM. Examples include timing attacks, power analysis attacks, and electromagnetic attacks.
- Malware attacks: VMs can be infected with malware, just like physical machines. Malware can be used to steal data, launch attacks on other VMs or systems, or disrupt the functioning of the VM.
- Insider threats: Malicious insiders can exploit their access to VMs to steal data, modify configurations, or launch attacks.
3. Best Practices to Avoid Security Compromise

To mitigate these risks, there are several virtual machine security guidelines that cloud service providers and users can follow:

- Keep software up to date: Regularly updating software and security patches for virtual machines is crucial in preventing known vulnerabilities from being exploited by hackers. Software updates fix bugs and security flaws that could allow unauthorized access, data breaches, or malware attacks. According to a study, 60% of data breaches are caused by vulnerabilities that were not patched or updated in a timely manner. (Source: Ponemon Institute)
- Use secure hypervisors: A hypervisor is a software layer that enables multiple virtual machines to run on a single physical server. Secure hypervisors are designed to prevent unauthorized access to virtual machines and protect them from potential security threats. When choosing a hypervisor, it is important to select one that has undergone rigorous testing and meets industry standards for security. In 2018, researchers disclosed a new type of attack called "Foreshadow" (also known as L1 Terminal Fault), which exploits vulnerabilities in Intel processors and can be used to steal sensitive data from virtual machines running on the same physical host. Secure hypervisors that have implemented hardware-based security features can provide protection against Foreshadow and similar attacks. (Source: Foreshadow)
- Implement strong access controls: Access control is the practice of restricting access to virtual machines to authorized users. Multi-factor authentication adds an extra layer of security by requiring users to provide more than one type of authentication before accessing VMs. Strong access controls limit the risk of unauthorized access and can help prevent data breaches. According to a survey, organizations that implemented multi-factor authentication saw a 98% reduction in the risk of phishing-related account breaches. (Source: Duo Security)
- Monitor VMs for anomalous behavior: Monitoring virtual machines for unusual or unexpected behavior is an essential security practice. This includes monitoring network traffic, processes running on the VM, and other metrics that can help detect potential security incidents. By monitoring VMs, security teams can detect and respond to security threats before they cause damage. A study found that 90% of organizations that implemented a virtualized environment experienced security benefits, such as improved visibility into security threats and faster incident response times. (Source: VMware)
- Use encryption: Encryption is the process of encoding information so that only authorized parties can access it. Encrypting data both in transit and at rest protects it from interception or theft, and it can be achieved using industry-standard encryption protocols and technologies. According to an IBM report, the average cost of a data breach in 2020 was $3.86 million, and organizations that implemented encryption had a lower average cost of a data breach than those that did not. (Source: IBM)
- Segregate VMs: Segregating virtual machines is the practice of keeping sensitive VMs separate from less sensitive ones. This reduces the risk of lateral movement, in which an attacker gains access to one VM and uses it as a stepping stone to reach other VMs in the same environment. Segregating VMs helps minimize the risk of data breaches and limits the potential impact of a security incident. A study found that organizations that implemented a virtualized environment without adequate segregation and access controls were more vulnerable to VM security breaches and data loss. (Source: Ponemon Institute)
- Regularly back up VMs: Regularly backing up virtual machines is a critical security practice that can help mitigate the impact of malware attacks, system failures, or other security incidents. Backups should be stored securely and tested regularly to ensure that they can be restored quickly in the event of a security incident. A survey found that 42% of organizations experienced a data loss event in 2020, with the most common cause being accidental deletion by an employee (29%). (Source: Veeam)

4. Conclusion

The complexity of cloud environments and the shared responsibility model for security require organizations to adopt a comprehensive security approach that spans multiple infrastructure layers, from the physical layer to the application layer. The future of virtual machine security in cloud computing will require continued innovation and adaptation to new threats and vulnerabilities. As a result, organizations must remain vigilant and proactive in their security efforts, leveraging the latest technologies and best practices to protect their virtual machines and the sensitive data and resources they contain.
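To make the encryption-at-rest practice above concrete, here is a minimal sketch using Python's cryptography library (an assumption on our part; any industry-standard library, together with a proper key-management service, could be used instead) to encrypt a VM backup file before it is stored.

```python
# Minimal sketch: encrypting a VM backup archive at rest.
# Assumes the "cryptography" package is installed (pip install cryptography).
# In production the key would live in a key-management service, not on disk.
from cryptography.fernet import Fernet

def encrypt_file(plaintext_path: str, encrypted_path: str, key: bytes) -> None:
    """Encrypt the contents of plaintext_path and write them to encrypted_path."""
    fernet = Fernet(key)
    with open(plaintext_path, "rb") as src:
        ciphertext = fernet.encrypt(src.read())
    with open(encrypted_path, "wb") as dst:
        dst.write(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()  # store this securely, e.g. in a KMS
    encrypt_file("vm-backup.qcow2", "vm-backup.qcow2.enc", key)
    print("Backup encrypted; the key must be retained to restore it.")
```

The file names here are hypothetical; the point is simply that backups, like data in transit, should never be stored in the clear.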

Read More
Virtual Desktop Strategies, Server Hypervisors

Metasploitable: A Platform for Ethical Hacking and Penetration Testing

Article | April 27, 2023

Contents
1. Overview
2. Ethical Hacking and Penetration Testing
3. Metasploit Penetration Test
4. Why Choose Metasploit Framework for Your Business?
5. Closing Remarks

1. Overview

Metasploitable is an intentionally vulnerable virtual machine that enables the learning and practice of Metasploit. Metasploit is one of the best penetration testing frameworks; it helps businesses discover and shore up their systems' vulnerabilities before hackers exploit them. Security engineers use Metasploit as a penetration testing system and as a development platform that allows the creation of security tools and exploits. Metasploit's various user interfaces, libraries, tools, and modules allow users to configure an exploit module, pair it with a payload, point it at a target, and launch it at the target system. In addition, Metasploit's extensive database houses hundreds of exploits and several payload options.

2. Ethical Hacking and Penetration Testing

An ethical hacker works within a security framework and checks for bugs that a malicious hacker might use to exploit networks, using their experience and skills to make the cyber environment more secure. Ethical hacking is essential to protect infrastructure from the threats that hackers pose. The main purpose of an ethical hacking engagement is to assess the safety of the targeted systems and networks and report the findings to the owner. Ethical hacking is performed with penetration testing techniques to evaluate security loopholes. The techniques used include:

- Information gathering
- Vulnerability scanning
- Exploitation
- Test analysis

Ethical hacking relies heavily on automated methods; hacking without automated software is inefficient and time-consuming. Several tools and methods can be used for ethical hacking and penetration testing. The Metasploit Framework eases the effort to exploit vulnerabilities in networks, operating systems, and applications, and to generate new exploits for new or unknown vulnerabilities.

3. Metasploit Penetration Test

- Reconnaissance: Integrate Metasploit with various reconnaissance tools to find the vulnerable spots in the system.
- Threat Modeling and Vulnerability Identification: Once a weakness is identified, choose an exploit and payload for penetration.
- Exploitation: If the exploit (a tool used to take advantage of a system weakness) is successful, the payload gets executed at the target, and the user gets a shell for interacting with the payload (a shellcode is a small piece of code used as the payload). The most popular payload for attacking Windows systems is Meterpreter, an in-memory-only interactive shell that lets the tester explore the target machine and execute code. Other payloads are static payloads (which enable port forwarding and communications between networks), dynamic payloads (which generate unique payloads to evade antivirus software), and command shell payloads (which enable users to run scripts or commands against a host).
- Post-Exploitation: Once on the target machine, Metasploit offers various exploitation tools for privilege escalation, packet sniffing, keyloggers, screen capture, and pivoting.
- Resolution and Re-Testing: Users set up a persistent backdoor in case the target machine gets rebooted.

These features make Metasploit easy to configure as per the user's requirements.

4. Why Choose Metasploit Framework for Your Business?
Significant advantages of the Metasploit Framework are discussed below:

- Open source: The Metasploit Framework is actively developed as open-source software, so most companies prefer it as they grow their businesses.
- Easy usage: It is very easy to use, with an easy naming convention for its commands. This also facilitates building extensive penetration tests of the network.
- GUI environment: It provides friendly third-party interfaces that ease penetration testing projects with facilities such as button clicks, on-the-fly vulnerability management, and easy-to-switch workspaces, among others.
- Cleaner exits: Metasploit can exit cleanly without detection, even if the target system does not restart after a penetration test. Additionally, it offers various options for maintaining persistent access to the target system.
- Easy switching between payloads: Metasploit allows testers to change payloads easily with the 'set payload' command. It offers flexibility for system penetration through shell-based access or Meterpreter.

5. Closing Remarks

From DevSecOps experts to hackers, everyone uses the Ruby-based open-source framework Metasploit, which allows testing via command-line alterations or a GUI. Metasploitable is a vulnerable virtual machine ideally suited for ethical hacking and penetration testing as part of VM security work. One trend likely to impact the future of Metasploitable is the increasing use of cloud-based environments for testing and production; Metasploitable could be adapted to work in cloud environments, or new tools could be developed specifically for cloud-based penetration testing. Another trend that may impact the future of Metasploitable is the growing importance of automation in security testing, so Metasploitable could be adapted to include more automation features. The future of Metasploitable looks bright as it continues to be a valuable tool for security professionals and enthusiasts. As the security landscape continues to evolve, it will be interesting to see how Metasploitable adapts to meet the community's changing needs.
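As a small illustration of the reconnaissance phase described above, the following sketch probes a lab VM, such as a Metasploitable instance on an isolated network, for common open TCP ports before any exploit module is chosen. It is a minimal example using only Python's standard library; the target address is a placeholder, and it should only ever be run against systems you are authorized to test.

```python
# Minimal sketch: TCP port probe for the reconnaissance phase of a lab test.
# Only run this against machines you own or are explicitly authorized to test,
# e.g. a Metasploitable VM on an isolated lab network. The address is a placeholder.
import socket

TARGET = "192.168.56.101"  # placeholder lab VM address
COMMON_PORTS = [21, 22, 23, 25, 80, 139, 445, 3306, 5432, 8080]

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

if __name__ == "__main__":
    open_ports = [p for p in COMMON_PORTS if is_open(TARGET, p)]
    print(f"Open ports on {TARGET}: {open_ports or 'none found'}")
```

In practice, the open services found this way are what you would then match against Metasploit's exploit modules during threat modeling and vulnerability identification.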

Read More
Server Virtualization

Boosting Productivity with Kubernetes and Docker

Article | May 17, 2023

Learn how to set up a Docker and Kubernetes environment with the right considerations, and choose the best-suited software for your business needs from ten leading tools and platforms.

Contents
1. Considerations While Setting Up a Development Environment with Kubernetes and Docker
2. Top Tips to Consider While Choosing Tools and Platforms for Kubernetes and Docker
3. 10 Tools and Platforms Providing Kubernetes and Docker
4. Conclusion

This article discusses how Kubernetes and Docker can boost software development and deployment productivity. It covers the role of Kubernetes in orchestrating containerized applications and best practices for implementing these technologies to improve efficiency and streamline workflows. Docker and Kubernetes are both essential containerization ecosystem utilities: Docker is used for creating and operating containers, whereas Kubernetes, an excellent DevOps solution, manages and automates the deployment and scaling of containers across clusters of hosts. The article also covers tips to consider while choosing tools and platforms, and it lists ten platforms providing Kubernetes and Docker, featuring their offerings.

1. Considerations While Setting Up a Development Environment with Kubernetes and Docker

1.1 Fluid app delivery
A platform for application development must provide development teams with high velocity. Two factors contribute to high velocity: rapid application delivery and brief development cycles. Application platforms must support build processes that start with source code, and they must facilitate the repeatable deployment of applications on any remote staging instance.

1.2 Polyglot support
Consistency is the defining characteristic of an application platform. On-demand, repeatable, and reproducible builds must be supported by the platform. Extending a consistent experience across all languages and frameworks elevates the platform experience. The platform must support a native build process and the ability to develop and customize this build process.

1.3 Baked-in security
Containerized environments are secured in a significantly different manner than conventional applications. A fundamental best practice is to use binaries compiled with all necessary dependencies. The build procedure should also include a step that removes components unnecessary for the application's operation. Setting up a zero-trust architecture between the platform components that orchestrate deployments significantly improves the workloads' security posture.

1.4 Adjustable abstractions
A platform with paved paths and the flexibility to accommodate the requirements of software engineering teams has a greater chance of success. Open-source platforms score highly in this regard, particularly those with modular architectures that allow the team to swap out parts as they adjust.

2. Top Tips to Consider While Choosing Tools and Platforms for Kubernetes and Docker

2.1 Production-Readiness
Configuring Kubernetes or Docker can be complex and resource-intensive. A production-ready platform will provide the necessary fully automated features without extra configuration. Security is an essential aspect of production readiness. Automation is also critical, as production readiness requires that the solution handle all cluster management duties: automated backup, recovery, and restore capabilities must be considered, and the platform should ensure the high availability, scalability, and self-healing of the cluster.

2.2 Future-Readiness
As the cloud and software evolve, a system's hosting location may affect its efficacy. The current trend is a multi-cloud strategy.
Ensure that the platform can support abstracting away from cloud or data center providers and building a shared infrastructure across clouds, cloud regions, and data centers, as well as assist in configuring them if required. According to a recent study, nearly one-third of organizations are already collaborating with four or more cloud service providers. (Source: Microsoft and 451 Research)

2.3 Ease of Administration
Managing a Docker or Kubernetes cluster is complex and requires varied skill sets. Kubernetes generates a lot of unprocessed data, which must be interpreted to understand what is happening with the cluster. Early detection and intervention are crucial to disaster prevention, so it is essential to identify a platform that removes the burden of analyzing raw data. By incorporating automated intelligent monitoring and alerts, such solutions can provide critical status, error, event, and warning data so that teams can take appropriate action.

2.4 Assistance and Training
As the organization begins to acquire Kubernetes or Docker skills, it is essential to have a vendor that can provide 24/7 support and training to ensure a seamless transition, since an incorrect implementation adds a layer of complexity to infrastructure management. Leverage automation tools that offer the support needed to use Kubernetes and Docker without the management burden.

3. 10 Tools and Platforms Providing Kubernetes and Docker

3.1 Aqua Cloud Native Security Platform
Aqua Security provides the Aqua Cloud Native Security Platform, a comprehensive security solution designed to protect cloud-native applications and microservices. Aqua offers end-to-end security for applications operating on Docker Enterprise Edition (Community Edition), protecting the DevOps pipeline and production workloads with complete visibility and control. It provides end-to-end security across the entire application lifecycle, from development to production, for both containerized and serverless workloads. In addition, it automates prevention, detection, and response across the whole application lifecycle to secure the build, the cloud infrastructure, and the running workloads, regardless of where they are deployed.

3.2 Weave GitOps Enterprise
Weave GitOps Enterprise is a full-stack, developer-centric operating model for Kubernetes from Weaveworks, which created and contributes to several open-source projects. Its products and services enable teams to design, build, and operate their Kubernetes platform at scale. Built by the creators of Flux and Flagger, Weave GitOps allows users to deploy and manage Kubernetes clusters and applications in the public or private cloud or their own data center. Weave GitOps Enterprise helps simplify Kubernetes with fully automated continuous delivery pipelines that roll out changes from development to staging and production. Weaveworks has used Kubernetes in production for over eight years and has built that expertise into Weave GitOps Enterprise.

3.3 Mirantis Kubernetes Engine
Mirantis provides the Mirantis Kubernetes Engine, a platform designed to help organizations deploy, manage, and scale their Kubernetes clusters. It includes features such as container orchestration, automated deployment, monitoring, and high availability, all designed to help organizations build and run their applications at scale. Mirantis Kubernetes Engine also includes a set of tools for managing the lifecycle of Kubernetes clusters, including cluster deployment, upgrades, and patching.
It also has security scanning and policy enforcement features, as well as integration with other enterprise IT systems such as Active Directory and LDAP.

3.4 Portworx by Pure Storage
Portworx's deep integration with Docker exposes Portworx container data services directly through the Docker Swarm scheduler. Swarm service creation brings the management capability of Portworx to the Docker persistent storage layer, avoiding complex tasks such as growing the storage pool without container downtime and problems like stuck EBS drives. Portworx is also a multi-cloud-ready Kubernetes storage and administration platform designed to simplify and streamline data management in Kubernetes. The platform abstracts away the complexity of data storage in Kubernetes and serves as a software-defined layer that aggregates Kubernetes nodes' data storage into a virtual reservoir.

3.5 Platform9
Platform9 provides a powerful IDE for developers with simplified in-context views of pods, logs, events, and more. Both development and operations teams can access the information they need in an instant, secured through SSO and Kubernetes RBAC. The industry's first SaaS-managed approach, combined with a best-in-class support and customer success organization with a 99.9% consistent CSAT rating, delivers production-ready Kubernetes to organizations of any size. It provides services to deploy a cluster instantly, achieve GitOps faster, and take care of every aspect of cluster management, including remote monitoring, self-healing, automatic troubleshooting, and proactive issue resolution, around the clock.

3.6 Kubernetes Network Security
Sysdig provides Kubernetes Network Security, a solution that offers cloud security from source to run. The product provides network security for Kubernetes environments by monitoring and blocking suspicious traffic in real time, helping organizations protect their Kubernetes clusters against advanced threats and attacks. Together with Sysdig Secure, it offers Kubernetes network monitoring to investigate suspicious traffic and connection attempts, Kubernetes-native microsegmentation that does not break the application, and automated Kubernetes network policies that save time.

3.7 Kubernetes Operations Platform for Edge
Rafay delivers a production-ready Kubernetes Operations Platform for Edge, streamlining ongoing operations for edge applications. It provides centralized multi-cluster management to deploy, manage, and upgrade all Kubernetes clusters from a single console across all edge nodes. It also offers comprehensive lifecycle management, with which users can quickly and easily provision Kubernetes clusters at the edge, where cluster updates and upgrades are seamless with no downtime. Furthermore, the KMC for Edge quickly integrates with enterprise-class SSO solutions such as Okta, Ping One, and Azure AD, among others. Other features include standardized clusters and workflows, integration and automation, and centralized logging and monitoring.

3.8 Opcito Technologies
Opcito provides simplified container management with efficient provisioning, deployment, scaling, and networking. Its application containerization expertise helps containerize existing and new applications and their dependencies. Opcito is well-versed in leading container orchestration platforms like Docker Swarm and Kubernetes.
While it helps choose the container platform that best suits specific application needs, it also helps with end-to-end management of containers so clients can release applications faster and focus on innovation and business. Its container management and orchestration services include building secured microservices, enterprise-scale container management and orchestration, and container monitoring.

3.9 D2iQ Kubernetes Platform (DKP)
The D2iQ Kubernetes Platform (DKP) enables enterprises to take advantage of all the benefits of cloud-native Kubernetes while laying the groundwork for intelligent cloud-native innovation by simplifying Kubernetes deployment and maintenance. It simplifies and automates the most difficult parts of an enterprise Kubernetes deployment across all infrastructures, helping enterprises easily overcome operational barriers and get set up in minutes and hours rather than weeks and months. In addition, DKP simplifies Kubernetes management through automation using a GitOps workflow, observability, an application catalog, real-time cost management, and more.

3.10 Spektra
Spektra, by Diamanti, is a multi-cluster management solution for DevOps and production teams. It provides centralized multi-cluster management: a single control plane that delivers everything needed to provision and manage the lifecycle of multiple clusters. Spektra is built to cater to business needs, from air-gapped on-premises deployments to hybrid and multi-cloud infrastructures. It also enables stretching resources across different clusters within a tenant, and it allows workloads and their associated data to be moved from one cluster to another directly from its dashboard. Spektra integrates with the Lightweight Directory Access Protocol (LDAP) and Active Directory (AD) to enable user authentication and streamline resource access. In addition, it offers application migration, data mobility, and reporting.

4. Conclusion

It is evident that Kubernetes and Docker can significantly boost software development and deployment productivity. By adopting appropriate containerization platforms and leveraging Kubernetes for orchestration, organizations can streamline workflows, improve efficiency, and enhance the reliability of their applications. Furthermore, following the tips above when choosing tools or platforms can improve productivity even further.
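To ground the division of labor described above (Docker creates and runs containers, Kubernetes orchestrates them across a cluster), here is a minimal sketch using the Docker SDK for Python and the official Kubernetes Python client, both assumed to be installed. It also assumes a local Docker daemon and a kubeconfig that already points at a cluster; the image name and namespace are placeholders.

```python
# Minimal sketch: Docker runs a single container, Kubernetes reports what is
# being orchestrated across the cluster. Assumes the "docker" and "kubernetes"
# packages are installed, a local Docker daemon is running, and ~/.kube/config exists.
import docker
from kubernetes import client, config

def run_local_container() -> str:
    """Create and run a throwaway container with the Docker SDK."""
    docker_client = docker.from_env()
    output = docker_client.containers.run(
        "alpine:3.19", ["echo", "hello from docker"], remove=True
    )
    return output.decode().strip()

def list_cluster_pods(namespace: str = "default") -> list[str]:
    """List pod names that Kubernetes is currently orchestrating in a namespace."""
    config.load_kube_config()  # reads the local kubeconfig
    core_v1 = client.CoreV1Api()
    pods = core_v1.list_namespaced_pod(namespace)
    return [pod.metadata.name for pod in pods.items]

if __name__ == "__main__":
    print(run_local_container())
    print("Pods in 'default':", list_cluster_pods())
```

The managed platforms listed above essentially take over the second half of this picture: provisioning, upgrading, and monitoring the clusters so teams only have to think about the workloads.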

Read More
Server Virtualization

How to Start Small and Grow Big with Data Virtualization

Article | June 9, 2022

Why Should Companies Care About Data Virtualization?

Data is everywhere. With each passing day, companies generate more data than ever before, and what exactly can they do with all this data? Is it just a matter of storing it? Or should they manage and integrate their data from various sources? How can they store, manage, integrate, and utilize their data to gain information that is of critical value to their business? As they say, knowledge is power, but knowledge without action is useless. This is where the Denodo Platform comes in. The Denodo Platform gives companies the flexibility to evolve their data strategies, migrate to the cloud, or logically unify their data warehouses and data lakes, without affecting business. This powerful platform offers a variety of subscription options that can benefit companies immensely. For example, companies often start out with individual projects using a Denodo Professional subscription, but in a short period of time they end up adding more and more data sources and move on to other Denodo subscriptions such as Denodo Enterprise or Denodo Enterprise Plus. The upgrade process is very easy to establish; in fact, it can be done in less than a day once the cloud marketplace is chosen (Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)). In as little as six weeks, companies can realize real business benefits from managing and utilizing their data effectively.

A Bridging Layer

Data virtualization has been around for quite some time now. Denodo’s founders, Angel Viña and Alberto Pan, have been involved in data virtualization from as far back as the 1990s. If you’re not familiar with data virtualization, here is a quick summary. Data virtualization is the cornerstone of a logical data architecture, whether it be a logical data warehouse, a logical data fabric, a data mesh, or even a data hub. All of these architectures are best served by our principles: Combine (bring together all your data sources), Connect (into a logical single view), and Consume (through standard connectors to your favorite BI and data science tools, or through our easy-to-use, robust APIs). Data virtualization is the bridge that joins multiple data sources to fuel analytics. It is also the logical data layer that effectively integrates data silos across disparate systems, manages unified data for centralized security, and delivers it to business users in real time.

Economic Benefits in Less Than Six Weeks with Data Virtualization

In a short duration, how can companies benefit from choosing data virtualization as a data management solution? To answer this question, below are some very interesting KPIs discussed in the recently released Forrester study on the Total Economic Impact of data virtualization. For example, companies that have implemented data virtualization have seen an 83% increase in business user productivity, mainly because of the business-centric way a data virtualization platform is delivered: business users get an easy-to-access, democratized interface to the data they need. The second KPI to note is a 67% reduction in development resources. With data virtualization, you connect to the data, you do not copy it. Once it is set up, there is a significant reduction in the need for data integration engineers, as data remains in the source location and is not copied around the enterprise.
Finally, companies are reporting a 65% improvement in data access speeds above and beyond more traditional approaches such as extract, transform, and load (ETL) processes.

A Modern Solution for an Age-Old Problem

To understand how data virtualization can help elevate projects to an enterprise level, we can share a few use cases in which companies have leveraged data virtualization to solve their business problems across several different industries. For example, in finance and banking we often see use cases in which data virtualization serves as a unifying platform to help improve compliance and reporting. In retail, we see use cases including predictive analytics in supply chains as well as next-best-action decisions from a unified view of the customer. There are many uses for data virtualization in a wider variety of situations, such as healthcare and government agencies, where organizations use the Denodo Platform to help data scientists understand key trends and activities, both sociologically and economically. In a nutshell, if data exists in more than one source, then the Denodo Platform acts as the unifying platform that connects, combines, and allows users to consume the data in a timely, cost-effective manner.
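The Connect/Combine/Consume pattern described above can be sketched in miniature with plain Python. The example below is purely illustrative and is not the Denodo Platform's API: it assumes two hypothetical sources (a SQLite sales database and a customer CSV file) and builds a single unified view over them on demand, rather than copying everything into a separate warehouse first.

```python
# Conceptual sketch of Connect / Combine / Consume (not the Denodo API).
# Assumes pandas is installed; the SQLite file and CSV are hypothetical sources.
import sqlite3
import pandas as pd

def unified_customer_view(db_path: str, csv_path: str) -> pd.DataFrame:
    """Connect to two separate sources and combine them into one logical view."""
    # Connect: read each source where it lives, only when the view is requested.
    with sqlite3.connect(db_path) as conn:
        orders = pd.read_sql_query("SELECT customer_id, amount FROM orders", conn)
    customers = pd.read_csv(csv_path)  # expects columns: customer_id, region

    # Combine: join the sources into a single logical view.
    return orders.merge(customers, on="customer_id", how="left")

if __name__ == "__main__":
    # Consume: downstream BI or data science tools read the unified view.
    view = unified_customer_view("sales.db", "customers.csv")
    print(view.groupby("region")["amount"].sum())
```

A data virtualization platform generalizes this idea: the joins are defined once as logical views, queries are pushed down to the sources, and consumers reach the result through standard SQL, ODBC/JDBC, or REST connectors.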

Read More

Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP with enhanced video processing performance to strengthen its presence in data center applications. The newly launched series IP caters to the advanced requirements of next-generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming. The VC9800 series of VPU IP boasts high performance, high throughput, and server-level multi-stream encoding and decoding capabilities. It can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series IP improves video quality significantly with a low memory footprint and low encoding latency. Capable of supporting 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, thus achieving an efficient transcoding solution. The VC9800 series of VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon’s Graphics Processor Unit (GPU) IP, the subsystem solution is able to deliver enhanced gaming experiences. In addition, the hardware virtualization, super resolution image enhancement, and AI-enabled encoding functions of this series IP also offer effective solutions for virtual cloud desktops.

“VeriSilicon’s advanced video transcoding technology continues leading in Data Center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest Data Centers,” said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. “For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. We’ve also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display.”

About VeriSilicon
VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.

Read More

Virtual Desktop Tools, Virtual Desktop Strategies, Server Virtualization

Netskope Delivers the Next Evolution in Digital Experience Management for SASE with Proactive DEM

PR Newswire | September 01, 2023

Netskope, a leader in Secure Access Service Edge (SASE), today announced the launch of Proactive Digital Experience Management (DEM) for SASE, elevating best practice from the current reactive monitoring tools to proactive user experience management. Proactive DEM provides experience management capabilities across the entire SASE architecture, including Netskope Intelligent SSE, Netskope Borderless SD-WAN and Netskope NewEdge global infrastructure.

Digital Experience Management technology has become increasingly crucial amid digital business transformation, with organizations seeking to enhance customer experiences and improve employee engagement. With hybrid work and cloud infrastructure now the norm globally, organizations have struggled to ensure consistent and optimized experiences alongside stringent security requirements. Gartner predicts that "by 2026, at least 60% of I&O leaders will use DEM to measure application, services and endpoint performance from the user's viewpoint, up from less than 20% in 2021." However, monitoring applications, services, and networks is only part of a modern DEM experience, and so Netskope Proactive DEM goes beyond observation, providing Machine Learning (ML)-driven functionality to anticipate, and automatically remediate, problems.

Sanjay Beri, CEO and co-founder of Netskope commented, "Ensuring a constantly optimized experience is essential for organizations looking to support the best productivity returns for hybrid workers and modern cloud infrastructure, but monitoring alone is not enough. Customers have told us of the challenges they face managing a multi-vendor cloud ecosystem and so we have yet again innovated beyond industry standards, providing experience management that can both monitor and proactively remediate."

For issue identification, Netskope Proactive DEM uniquely combines Synthetic Monitoring with Real User monitoring, creating SMART monitoring (Synthetic Monitoring Augmentation for Real Traffic). This enables full end-to-end 'hop-by-hop' visibility of data, and the proactive identification of experience-impacting events. SMART monitoring enables organizations to anticipate potential events that might impact upon network and application experience. While most SASE vendors rely on "gray cloud" infrastructure - built on public cloud - which limits their ability to granularly identify and control any issues, Proactive DEM leverages Netskope NewEdge - the industry's largest private cloud infrastructure - to deliver 360 visibility and control of end-to-end user experience while providing mitigation of issues, including using various self-healing mechanisms, before the user recognizes their experience has degraded.

About Netskope
Netskope, a global SASE leader, helps organizations apply zero trust principles and AI/ML innovations to protect data and defend against cyber threats. Fast and easy to use, the Netskope platform provides optimized access and real-time security for people, devices, and data anywhere they go. Netskope helps customers reduce risk, accelerate performance, and get unrivaled visibility into any cloud, web, and private application activity. Thousands of customers trust Netskope and its powerful NewEdge network to address evolving threats, new risks, technology shifts, organizational and network changes, and new regulatory requirements.

Read More

Virtual Desktop Tools, Server Hypervisors

Meter Partners with Cloudflare to Launch DNS Security

Business Wire | August 31, 2023

Meter, Inc., a leader in Network as a Service (NaaS) for businesses, today announced DNS Security, built in partnership with Cloudflare, the security, performance, and reliability company. Meter DNS Security is now widely available for all Meter Network customers, expanding Meter’s existing NaaS offering and saving teams both time and money, while also improving overall network performance and security, powered by Cloudflare’s Zero Trust platform.

“With the number of devices on a network expected to triple by 2030, modern businesses and organizations demand enterprise network controls to ensure safety and peak performance for business critical functions,” said Anil Varanasi, CEO and co-founder of Meter. “Meter DNS Security is the latest example of how we’re continuing to offer our customers enterprise level networks end-to-end. Through our partnership with Cloudflare, we’re enhancing our capabilities to meet the needs of IT professionals at industrial warehouses, educational institutions, security firms, and more.”

Meter DNS Security eliminates the hassle of having multiple vendors, by providing content filtering at several layers to all customers within the Meter Dashboard in partnership with one of the best providers in the world.

“We’re proud to have Meter leveraging Cloudflare’s Zero Trust platform in a new way, offering our DNS filtering feature natively built into their Meter Dashboard,” said John Graham-Cumming, CTO, Cloudflare. “By building on Cloudflare's platform, Meter enables customers to manage their team’s operations at scale, as well as effectively enforce global corporate policies across diverse corporate spaces, such as offices, schools, and warehouses.”

In addition to the ease and scalability of Meter DNS Security, users are ensuring security through enhanced compliance by blocking access to known malicious websites and bad actors. The integration and partnership with Cloudflare provides customers with faster DNS response times, while optimizing network performance by limiting access to high-bandwidth websites and services. Real world examples of this process include, but are not limited to:

- Ensuring a safe browsing environment at schools by filtering out age-inappropriate content
- Optimizing network performance for warehouses by filtering high-bandwidth activities like video streaming
- Maintaining high security and compliance standards by filtering malicious or illegal content

“Tishman Speyer has successfully partnered with Meter to streamline the networking and Wi-Fi experience for our customers,” said Simon Okunev, Managing Director and Chief Information Officer, Tishman Speyer. “The addition of Meter’s DNS Security feature, powered by Cloudflare, will further benefit our customers by providing an additional layer of security.”

About Cloudflare
Cloudflare, Inc. is on a mission to help build a better Internet. Cloudflare’s suite of products protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare have all web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was awarded by Reuters Events for Global Responsible Business in 2020, named to Fast Company's Most Innovative Companies in 2021, and ranked among Newsweek's Top 100 Most Loved Workplaces in 2022.

Read More
