New AWS Collaborations Ease Health IT Cloud Storage Options

Amazon Web Services (AWS) is expanding storage and cloud migration options by collaborating with Qualcomm and Actifio to produce more tailored solutions for organizations looking to improve their IT infrastructure.

Spotlight

V & C InfoPlus Ltd

V & C InfoPlus Ltd (VC) is a local independent consultancy specializing in Information and Communications Technology (ICT), established in April 2014. A new player in the ICT support services industry in Papua New Guinea (PNG), VC takes a meticulous, long-term approach to every evolving collaborative, on-premises, or cloud-driven technology.

OTHER ARTICLES
Virtual Desktop Strategies

ProtonVPN iOS app now supports the OpenVPN protocol

Article | July 26, 2022

Your ProtonVPN iOS app is now better equipped to fight censorship and offers more flexible connection options with the launch of OpenVPN for iOS. OpenVPN is one of the best VPN protocols because of its flexibility, its security, and its resistance to blocks. You now have the option to switch between the faster IKEv2 protocol and the more stable, censorship-resistant OpenVPN protocol.

Read More
Virtual Desktop Tools, Virtual Desktop Strategies

Efficient Management of Virtual Machines using Orchestration

Article | June 8, 2023

Contents
1. Introduction
2. What is Orchestration?
3. How Does Orchestration Help Optimize VM Efficiency?
3.1 Resource Optimization
3.2 Dynamic Scaling
3.3 Faster Deployment
3.4 Improved Security
3.5 Multi-Cloud Management
3.6 Improved Collaboration
4. Considerations while Orchestrating VMs
4.1 Hosting Containers and VMs Together
4.2 Automated Backup and Restore for VMs
4.3 Ensure Replication for VMs
4.4 Set Up Data Synchronization for VMs
5. Conclusion

1. Introduction
Orchestration is a superset of automation: cloud orchestration goes beyond automating individual tasks to coordinate multiple automated activities. It is increasingly essential due to the growth of containerization, which makes it easier to scale applications across clouds, both public and private. Demand for public cloud orchestration and hybrid cloud orchestration has grown as businesses adopt hybrid cloud architectures, and the rapid adoption of containerized, microservices-based applications that communicate over APIs has fueled the push to automate the deployment and management of applications across clouds. This increase in complexity has created a need for VM orchestration that can manage numerous dependencies across various clouds with policy-driven security and management capabilities.

2. What is Orchestration?
Orchestration refers to the process of automating, coordinating, and managing complex systems, workflows, or processes. It typically entails using automation tools and platforms to streamline and coordinate the deployment, configuration, and management of applications and services across different environments, including development, testing, staging, and production. In cloud computing, orchestration tools can automate the deployment and administration of containerized applications across multiple servers or clusters, handling tasks such as container provisioning, scaling, load balancing, and health monitoring, which makes complex application environments easier to manage. Orchestration lets organizations automate and streamline their workflows, reduce errors and downtime, and improve the efficiency and scalability of their operations.

3. How Does Orchestration Help Optimize VM Efficiency?
Orchestration offers enhanced visibility into the resources and processes in use, which helps prevent VM sprawl and lets organizations trace resource usage by department, business unit, or individual user.

Figure: Global market for VNFO by virtualization methodology, 2022-27 ($ million). (Source: Insight Research)

As the figure suggests, VMs have established a solid legacy that will remain relevant in the near to mid-term. Here are six ways in which orchestration helps in the efficient management of VMs:

3.1 Resource Optimization
Orchestration helps optimize resource utilization by automating the provisioning and de-provisioning of VMs, which allows for efficient use of computing resources. Using orchestration tools, IT teams can set up rules and policies for automatically scaling VMs based on criteria such as CPU utilization, memory usage, network traffic, and application performance metrics. Orchestration also enables advanced techniques such as predictive analytics, machine learning, and artificial intelligence: these technologies can analyze historical data and identify patterns in workload demand, allowing the orchestration system to predict future resource needs and automatically provision or de-provision resources accordingly.

3.2 Dynamic Scaling
Orchestration automates the scaling of VMs, enabling organizations to quickly and easily adjust their computing resources based on demand. IT teams configure scaling policies for virtual machines based on resource utilization and network traffic, along with performance metrics. When workload demand exceeds a certain threshold, the orchestration system can autonomously provision additional virtual machines to accommodate the increased load; when demand decreases, it can deprovision VMs to free up resources and reduce costs.
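To make the threshold idea concrete, here is a minimal, self-contained Python sketch of the kind of scale-out/scale-in decision an orchestration system might evaluate on each cycle. The policy values, the single CPU metric, and the one-VM-at-a-time step are illustrative assumptions, not any specific product's behavior.

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    """Thresholds an orchestrator evaluates before (de)provisioning VMs."""
    cpu_high: float = 80.0  # % CPU above which we scale out (assumed value)
    cpu_low: float = 25.0   # % CPU below which we scale in (assumed value)
    min_vms: int = 2        # floor, so the service never disappears
    max_vms: int = 10       # ceiling, so costs stay bounded

def decide_scaling(avg_cpu: float, current_vms: int, policy: ScalingPolicy) -> int:
    """Return the target VM count for one evaluation cycle.

    A real orchestrator would also weigh memory, network traffic, and
    application metrics, and apply cooldown windows between actions.
    """
    if avg_cpu > policy.cpu_high and current_vms < policy.max_vms:
        return current_vms + 1   # scale out one VM at a time
    if avg_cpu < policy.cpu_low and current_vms > policy.min_vms:
        return current_vms - 1   # scale in to free resources
    return current_vms           # within the band: do nothing

if __name__ == "__main__":
    policy = ScalingPolicy()
    for cpu, vms in [(92.0, 4), (55.0, 4), (12.0, 4), (95.0, 10)]:
        print(f"cpu={cpu:5.1f}% vms={vms} -> target={decide_scaling(cpu, vms, policy)}")
```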
3.3 Faster Deployment
Orchestration automates VM deployment, reducing the time and effort required to provision new resources. By leveraging automation, scripting, and APIs, orchestration further streamlines the deployment process: IT teams define workflows and processes that can be automated using scripts. In addition, orchestration can integrate with other IT management tools and platforms, such as cloud management platforms, configuration management tools, and monitoring systems, letting IT teams combine various capabilities and services to streamline VM deployment and improve efficiency.

3.4 Improved Security
Orchestration enhances the security of VMs by automating the deployment of security patches and updates. It also helps ensure VMs are deployed with the appropriate security configurations and settings, reducing the risk of misconfiguration and vulnerability: IT teams define standard security templates and configurations for VMs, which are applied automatically during deployment. Furthermore, orchestration can integrate with other security tools and platforms, such as intrusion detection systems and firewalls, to provide a comprehensive security solution, automating the deployment of security policies and rules so that workloads remain protected against a range of threats.

3.5 Multi-Cloud Management
Orchestration provides a single pane of glass for VM management, enabling IT teams to monitor and manage VMs across multiple cloud environments from a single platform. This simplifies management, reduces complexity, and lets IT teams respond more quickly and effectively to changing business requirements. It also helps ensure consistency and compliance across multiple cloud environments, and it can integrate with other multi-cloud management tools and platforms, such as cloud brokers and cloud management platforms, to provide a comprehensive solution for managing VMs across multiple clouds.

3.6 Improved Collaboration
Orchestration streamlines collaboration by providing a centralized repository for storing and sharing information related to VMs. It also automates many of the routine tasks associated with VM management, reducing the workload for IT teams and freeing up time for more strategic initiatives. In addition, orchestration provides advanced analytics and reporting capabilities, enabling IT teams to track performance, identify bottlenecks, and optimize resource utilization, supporting a data-driven, collaborative approach to VM management.

4. Considerations while Orchestrating VMs

4.1 Hosting Containers and VMs Together
Containers and virtual machines can exist within a single infrastructure and be managed by the same platform. This allows various projects to be hosted from a unified management point and adapted gradually based on current needs and opportunities, giving teams greater flexibility to host and administer applications using both cutting-edge technologies and established standards and methods. Moreover, since there is no need to invest in distinct physical servers for VMs and containers, this approach maximizes infrastructure utilization, resulting in lower TCO and higher ROI, while unified management drastically simplifies processes, requiring fewer human resources and less time.

4.2 Automated Backup and Restore for VMs: minimize downtime and reduce the risk of data loss
Organizations should set up automated backup and restore processes for virtual machines, ensuring critical data and applications are protected during a disaster. This involves scheduling regular backups of virtual machines to a secondary location or cloud storage and setting up automated restore processes to recover virtual machines quickly during an outage or disaster. (A minimal backup sketch follows this list.)

4.3 Ensure Replication for VMs: keep data and applications available and accessible in the event of a disaster
Organizations should set up replication processes for their VMs so they are automatically copied to a secondary location or cloud infrastructure. This ensures that critical applications and data remain available even during a catastrophic failure at the primary site.

4.4 Set Up Data Synchronization for VMs: improve the overall resilience and availability of the system
VM orchestration tools should be used to set up data synchronization processes between virtual machines, ensuring that data is consistent and up to date across multiple locations. This is particularly important where data must be accessed quickly from various locations, such as in distributed environments.
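As one concrete possibility for the automated backup and restore described in 4.2, the sketch below uses the AWS boto3 SDK to image VMs nightly and to relaunch one from a backup image. The instance ID and instance type are placeholders, and equivalent APIs exist on other clouds; this is a sketch of the pattern, not a complete disaster-recovery pipeline.

```python
import datetime

import boto3  # AWS SDK for Python; pip install boto3

# Hypothetical ID -- substitute the instances your orchestration tags for backup.
INSTANCE_IDS = ["i-0123456789abcdef0"]

def nightly_ami_backup(instance_ids):
    """Create an AMI (image backup) of each VM; AWS snapshots the attached volumes."""
    ec2 = boto3.client("ec2")
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    for instance_id in instance_ids:
        image = ec2.create_image(
            InstanceId=instance_id,
            Name=f"backup-{instance_id}-{stamp}",
            NoReboot=True,  # snapshot without stopping the workload
        )
        print(f"{instance_id}: started image {image['ImageId']}")

def restore_from_image(image_id, instance_type="t3.medium"):
    """Launch a replacement VM from a backup image during recovery."""
    ec2 = boto3.client("ec2")
    result = ec2.run_instances(
        ImageId=image_id, InstanceType=instance_type, MinCount=1, MaxCount=1
    )
    return result["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    nightly_ami_backup(INSTANCE_IDS)  # schedule this via cron or an orchestrator
```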
5. Conclusion
Orchestration provides disaster recovery and business continuity, automatic scaling of distributed systems, and inter-service configuration. Cloud orchestration is becoming significant due to the advent of containerization, which permits scaling applications across clouds, both public and private. We expect continued growth and innovation in VM orchestration, with new technologies and tools emerging to support more efficient and effective management of virtual machines in distributed environments. As organizations increasingly rely on cloud-based infrastructures and distributed systems, VM orchestration will continue to play a vital role in enabling businesses to operate smoothly and recover quickly from disruptions, and it will remain a critical component of disaster recovery and high-availability strategies for years to come as organizations continue to rely on virtualization technologies to power their operations and drive innovation.

Read More
Server Virtualization

VM Applications for Software Development and Secure Testing

Article | May 17, 2023

Contents
1. Introduction
2. Software Development and Secure Testing
3. Using VMs in Software Development and Secure Testing
4. Conclusion

1. Introduction
“Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous.” —James Bach

Testing software is crucial for identifying and fixing security vulnerabilities; meeting quality standards for functionality and performance does not guarantee security. Software testing is therefore a must to identify and address application security vulnerabilities, in order to maintain:

Security of data history, databases, information, and servers
Customers’ integrity and trust
Web application protection from future attacks

VMs provide a flexible and isolated environment for software development and security testing. They allow easy replication of complex configurations and testing scenarios, enabling efficient issue resolution, and they support secure testing by isolating applications from the host system and allowing a reset to a previous state. In addition, they facilitate DevOps practices and streamline the development workflow.

2. Software Development and Secure Testing
Software secure testing, the approach: the following should be considered while preparing and planning for security tests:

Architecture study and analysis: Understand whether the software meets the necessary requirements.
Threat classification: List all potential threats and risk factors that must be tested.
Test planning: Run the tests based on the identified threats, vulnerabilities, and security risks.
Testing tool identification: Identify the security tools relevant to testing the software for specific use cases.
Test-case execution: After performing a security test, fix any issues found, using suitable open-source code or manually.
Reports: Prepare a detailed report of the security tests performed, listing the vulnerabilities, threats, and issues resolved as well as those still pending.

Ensuring the security of an application that handles essential functions is paramount. This may involve safeguarding databases against malicious attacks or implementing fraud detection mechanisms for incoming leads before integrating them into the platform. Maintaining security is crucial throughout the software development life cycle (SDLC) and must be at the forefront of developers' minds while implementing the software's requirements. With consistent effort, the SDLC pipeline addresses security issues before deployment, reducing the risk of discovering application vulnerabilities and minimizing the damage they could cause.

A secure SDLC makes developers responsible for critical security and requires them to be aware of potential security concerns at each step of the process. This means integrating security into the SDLC in ways that were not needed before. Since anyone can potentially access source code, coding with potential vulnerabilities in mind is essential, and a robust, secure SDLC process is critical to ensuring applications are not open to attack by hackers.

3. Using VMs in Software Development and Secure Testing

Snapshotting: Snapshotting allows developers to capture a VM's state at a specific point in time and restore it later. This is helpful for debugging and lets developers roll back to a previous state when an error occurs. A virtual machine provides several operations for creating and managing snapshots and snapshot chains: users can create snapshots, revert to any snapshot in the chain, and remove snapshots, and extensive snapshot trees can be created to streamline the flow.
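As an illustration, the following sketch uses the libvirt Python bindings to capture and revert a snapshot around a destructive test run. The domain name dev-vm and the qemu:///system URI are assumptions for a local KVM host; other hypervisors expose comparable snapshot APIs.

```python
import libvirt  # Python bindings for libvirt; pip install libvirt-python

SNAPSHOT_XML = """
<domainsnapshot>
  <name>before-test</name>
  <description>State captured prior to running the security test suite</description>
</domainsnapshot>
"""

# "dev-vm" is a placeholder domain name; qemu:///system assumes a local QEMU/KVM host.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("dev-vm")

# Capture the VM state so a failed or destructive test can be undone.
dom.snapshotCreateXML(SNAPSHOT_XML)

# ... run tests here that may corrupt or compromise the guest ...

# Roll the VM back to the captured state, then discard the snapshot.
snap = dom.snapshotLookupByName("before-test")
dom.revertToSnapshot(snap)
snap.delete()
conn.close()
```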
Virtual networking: Virtual machines can be connected to virtual networks that simulate complex network topologies, allowing developers to test their applications in different network environments. Virtual networking also lets data centers span multiple physical locations and gives administrators access to more efficient options: the network can be modified as requirements change without any additional hardware, networks can be provided for specific applications and needs, and workloads can be moved seamlessly across the network infrastructure without compromising service, security, or availability.

Resource allocation: VMs can be configured with specific resource allocations, such as CPU, RAM, and storage, allowing developers to test their applications under different resource constraints. Maintaining a 1:1 ratio between virtual machine processors and host cores is highly recommended; over-subscribing virtual machine processors to a single core can lead to stalled or delayed events, causing significant frustration among users. In practice, IT administrators sometimes over-allocate virtual machine processors; in such cases, a practical approach is to start with a 2:1 ratio and move gradually toward 4:1, 8:1, 12:1, and so on while bringing virtual allocation into the IT infrastructure, ensuring a safe and seamless transition toward optimized virtual resource allocation.

Containerization within VMs: Containerization within VMs provides an additional layer of isolation and security for applications. Enterprises are finding new use cases for VMs to utilize their in-house and cloud infrastructure to support heavy-duty application and networking workloads, which can also benefit the environment. DevOps teams combine containerization with virtualization to improve software development flexibility: containers package an application with the components it needs, such as code, system tools, and libraries. For complex applications, virtual machines and containers are used together, with containers typically serving the front end and middleware and VMs the back end.

VM templates: VM templates are pre-configured virtual machines that serve as a base for creating new virtual machines, making it easier to set up development and testing environments. A VM template is a master-copy image of a virtual machine that includes VM disks, virtual devices, and settings; cloning from a template produces independent VMs that are not linked to the template. Templates are handy when a large number of similar VMs must be deployed, and they preserve VM consistency. To edit a template, convert it to a VM, make the necessary changes, and then convert the edited VM back into a new template.
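For example, the sketch below clones independent VMs from a master template using the virt-clone command-line tool. The template name golden-template and the clone names are placeholders, and the template VM is assumed to be shut down while cloning.

```python
import subprocess

TEMPLATE = "golden-template"  # placeholder name of the master VM image

def clone_from_template(template: str, clone_name: str) -> None:
    """Clone an independent VM from a template using the virt-clone CLI.

    --auto-clone lets virt-clone derive storage paths automatically; the
    clone is a full, unlinked copy, so changes to it never affect the template.
    """
    subprocess.run(
        [
            "virt-clone",
            "--original", template,
            "--name", clone_name,
            "--auto-clone",
        ],
        check=True,  # raise if virt-clone exits with an error
    )

# Stamp out a batch of identical test environments from one master copy.
for i in range(1, 4):
    clone_from_template(TEMPLATE, f"test-env-{i:02d}")
```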
Remote access: VMs can be accessed remotely, allowing developers and testers to collaborate more effectively from anywhere in the world. To manage a virtual machine remotely, enable remote access, connect to the virtual machine, and then open the VNC or serial console; once connected, full permission to manage the virtual machine is granted with the user's approval. Remote access provides a secure way to access VMs, as connections can be encrypted and authenticated to prevent unauthorized access, and it simplifies management by letting administrators monitor and control virtual machines from a central location.

DevOps integration: DevOps is a collection of practices, principles, and tools that allow a team to release software quickly and efficiently. Virtualization is vital in DevOps when developing intricate cloud, API, and SOA systems: virtual machines let teams simulate environments for creating, testing, and launching code while conserving computing resources. When bug hunting begins at the API layer, teams find virtual machines well suited to test-driven development (TDD). Virtualization providers handle updates, freeing DevOps teams to focus on other areas and increasing productivity by 50-60%. In addition, VMs allow simultaneous testing of multiple release and patch levels, improving product compatibility and interoperability.

4. Conclusion
The outlook for virtual machine applications in the development and testing fields is highly promising. As development and testing processes grow more complex, VMs can significantly simplify and streamline these operations. VMs are expected to become even more versatile and powerful, providing developers and testers with a broader range of tools and capabilities to facilitate the development process. One potential future development is the integration of machine learning and artificial intelligence into VMs, which would enable them to automate various tasks, optimize the allocation of resources, and generate recommendations based on performance data. Moreover, VMs may become more agile and lightweight, allowing developers and testers to spin instances up and down with greater efficiency. The future of VM applications for software development and security testing looks bright, with continued innovation expected to provide developers and testers with even more powerful and flexible tools to improve the software development process.

Read More
Server Virtualization

How to Start Small and Grow Big with Data Virtualization

Article | June 9, 2022

Why Should Companies Care about Data Virtualization?
Data is everywhere. With each passing day, companies generate more data than ever before, and what exactly can they do with all of it? Is it just a matter of storing it? Or should they manage and integrate data from their various sources? How can they store, manage, integrate, and utilize their data to gain information of critical value to their business? As they say, knowledge is power, but knowledge without action is useless.

This is where the Denodo Platform comes in. The Denodo Platform gives companies the flexibility to evolve their data strategies, migrate to the cloud, or logically unify their data warehouses and data lakes without affecting the business. This powerful platform offers a variety of subscription options that can benefit companies immensely. For example, companies often start out with individual projects using a Denodo Professional subscription but, in a short period of time, add more and more data sources and move on to other Denodo subscriptions such as Denodo Enterprise or Denodo Enterprise Plus. The upgrade process is very easy to establish; in fact, it can be done in less than a day once the cloud marketplace is chosen (Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)). In as little as six weeks, companies can realize real business benefits from managing and utilizing their data effectively.

A Bridging Layer
Data virtualization has been around for quite some time now. Denodo’s founders, Angel Viña and Alberto Pan, have been involved in data virtualization since as far back as the 1990s. If you’re not familiar with data virtualization, here is a quick summary. Data virtualization is the cornerstone of a logical data architecture, whether a logical data warehouse, logical data fabric, data mesh, or even a data hub. All of these architectures are best served by three principles: Combine (bring together all your data sources), Connect (into a logical single view), and Consume (through standard connectors to your favorite BI and data science tools, or through robust, easy-to-use APIs). Data virtualization is the bridge that joins multiple data sources to fuel analytics. It is also the logical data layer that effectively integrates data silos across disparate systems, manages unified data for centralized security, and delivers it to business users in real time.
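To illustrate the Consume principle: the Denodo Platform exposes its virtual views through standard interfaces such as ODBC, JDBC, and REST, so a client can query a unified view much like any database. The minimal Python sketch below assumes a configured ODBC DSN named denodo_vdp and a hypothetical virtual view customer_unified_view; the DSN, credentials, and column names are placeholders.

```python
import pyodbc  # pip install pyodbc; assumes a Denodo ODBC driver and DSN are configured

# "denodo_vdp" is a placeholder DSN; "customer_unified_view" is a hypothetical
# virtual view that combines several underlying sources into one logical table.
conn = pyodbc.connect("DSN=denodo_vdp;UID=report_user;PWD=example")
cursor = conn.cursor()

# The query targets the logical view; data stays in the source systems and is
# federated at query time rather than being copied into a warehouse first.
cursor.execute(
    "SELECT customer_id, region, lifetime_value "
    "FROM customer_unified_view WHERE region = ?",
    "EMEA",
)
for row in cursor.fetchall():
    print(row.customer_id, row.region, row.lifetime_value)

cursor.close()
conn.close()
```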
Economic Benefits in Less Than Six Weeks with Data Virtualization?
How can companies benefit from choosing data virtualization as a data management solution in such a short duration? Some very interesting KPIs from the recently released Forrester study on the Total Economic Impact of Data Virtualization help answer this question. Companies that have implemented data virtualization have seen an 83% increase in business-user productivity, mainly due to the business-centric way a data virtualization platform is delivered: business users get an easy-to-access, democratized interface to the data they need. The second KPI is a 67% reduction in development resources. With data virtualization, you connect to the data rather than copy it, so once it is set up, the need for data integration engineers drops significantly, as data remains in the source location and is not copied around the enterprise. Finally, companies report a 65% improvement in data access speeds over more traditional approaches such as extract, transform, and load (ETL) processes.

A Modern Solution for an Age-Old Problem
To understand how data virtualization can elevate projects to an enterprise level, consider a few use cases in which companies have leveraged it to solve business problems across several industries. In finance and banking, data virtualization often serves as a unifying platform to improve compliance and reporting. In retail, use cases include predictive analytics in supply chains and next-best-action decisions from a unified view of the customer. Data virtualization applies in a wider variety of situations as well, such as healthcare and government agencies, where companies use the Denodo Platform to help data scientists understand key sociological and economic trends. In a nutshell, if data exists in more than one source, the Denodo Platform acts as the unifying platform that connects, combines, and allows users to consume the data in a timely, cost-effective manner.

Read More


Related News

Virtual Desktop Tools, Virtual Desktop Strategies, Server Virtualization

Netskope Delivers the Next Evolution in Digital Experience Management for SASE with Proactive DEM

PR Newswire | September 01, 2023

Netskope, a leader in Secure Access Service Edge (SASE), today announced the launch of Proactive Digital Experience Management (DEM) for SASE, elevating best practice from the current reactive monitoring tools to proactive user experience management. Proactive DEM provides experience management capabilities across the entire SASE architecture, including Netskope Intelligent SSE, Netskope Borderless SD-WAN, and the Netskope NewEdge global infrastructure.

Digital Experience Management technology has become increasingly crucial amid digital business transformation, with organizations seeking to enhance customer experiences and improve employee engagement. With hybrid work and cloud infrastructure now the norm globally, organizations have struggled to ensure consistent and optimized experiences alongside stringent security requirements. Gartner predicts that "by 2026, at least 60% of I&O leaders will use DEM to measure application, services and endpoint performance from the user's viewpoint, up from less than 20% in 2021." However, monitoring applications, services, and networks is only part of a modern DEM experience, so Netskope Proactive DEM goes beyond observation, providing Machine Learning (ML)-driven functionality to anticipate, and automatically remediate, problems.

Sanjay Beri, CEO and co-founder of Netskope, commented, "Ensuring a constantly optimized experience is essential for organizations looking to support the best productivity returns for hybrid workers and modern cloud infrastructure, but monitoring alone is not enough. Customers have told us of the challenges they face managing a multi-vendor cloud ecosystem and so we have yet again innovated beyond industry standards, providing experience management that can both monitor and proactively remediate."

For issue identification, Netskope Proactive DEM uniquely combines synthetic monitoring with real user monitoring, creating SMART monitoring (Synthetic Monitoring Augmentation for Real Traffic). This enables full end-to-end 'hop-by-hop' visibility of data and the proactive identification of experience-impacting events, allowing organizations to anticipate potential events that might affect network and application experience.

While most SASE vendors rely on "gray cloud" infrastructure, built on public cloud, which limits their ability to granularly identify and control issues, Proactive DEM leverages Netskope NewEdge, the industry's largest private cloud infrastructure, to deliver 360-degree visibility and control of the end-to-end user experience while mitigating issues, including through various self-healing mechanisms, before the user notices their experience has degraded.

About Netskope
Netskope, a global SASE leader, helps organizations apply zero trust principles and AI/ML innovations to protect data and defend against cyber threats. Fast and easy to use, the Netskope platform provides optimized access and real-time security for people, devices, and data anywhere they go. Netskope helps customers reduce risk, accelerate performance, and get unrivaled visibility into any cloud, web, and private application activity. Thousands of customers trust Netskope and its powerful NewEdge network to address evolving threats, new risks, technology shifts, organizational and network changes, and new regulatory requirements.

Read More

Virtual Desktop Tools, Server Hypervisors

Meter Partners with Cloudflare to Launch DNS Security

Business Wire | August 31, 2023

Meter, Inc., a leader in Network as a Service (NaaS) for businesses, today announced DNS Security, built in partnership with Cloudflare, the security, performance, and reliability company. Meter DNS Security is now widely available for all Meter Network customers, expanding Meter's existing NaaS offering and saving teams both time and money, while also improving overall network performance and security, powered by Cloudflare's Zero Trust platform.

"With the number of devices on a network expected to triple by 2030, modern businesses and organizations demand enterprise network controls to ensure safety and peak performance for business critical functions," said Anil Varanasi, CEO and co-founder of Meter. "Meter DNS Security is the latest example of how we're continuing to offer our customers enterprise level networks end-to-end. Through our partnership with Cloudflare, we're enhancing our capabilities to meet the needs of IT professionals at industrial warehouses, educational institutions, security firms, and more."

Meter DNS Security eliminates the hassle of managing multiple vendors by providing content filtering at several layers to all customers within the Meter Dashboard, in partnership with one of the best providers in the world.

"We're proud to have Meter leveraging Cloudflare's Zero Trust platform in a new way, offering our DNS filtering feature natively built into their Meter Dashboard," said John Graham-Cumming, CTO, Cloudflare. "By building on Cloudflare's platform, Meter enables customers to manage their team's operations at scale, as well as effectively enforce global corporate policies across diverse corporate spaces, such as offices, schools, and warehouses."

In addition to the ease and scalability of Meter DNS Security, users gain security through enhanced compliance by blocking access to known malicious websites and bad actors. The integration and partnership with Cloudflare provides customers with faster DNS response times, while optimizing network performance by limiting access to high-bandwidth websites and services. Real-world examples of this include, but are not limited to:

Ensuring a safe browsing environment at schools by filtering out age-inappropriate content
Optimizing network performance for warehouses by filtering high-bandwidth activities like video streaming
Maintaining high security and compliance standards by filtering malicious or illegal content

"Tishman Speyer has successfully partnered with Meter to streamline the networking and Wi-Fi experience for our customers," said Simon Okunev, Managing Director and Chief Information Officer, Tishman Speyer. "The addition of Meter's DNS Security feature, powered by Cloudflare, will further benefit our customers by providing an additional layer of security."

About Cloudflare
Cloudflare, Inc. is on a mission to help build a better Internet. Cloudflare's suite of products protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare have all web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was awarded by Reuters Events for Global Responsible Business in 2020, named to Fast Company's Most Innovative Companies in 2021, and ranked among Newsweek's Top 100 Most Loved Workplaces in 2022.

Read More

Server Hypervisors, Vsphere

Napatech Leverages Latest Intel Agilex® FPGA to Launch Industry's first 400Gbps SmartNIC Solutions

prnewswire | August 29, 2023

Napatech™ (OSLO: NAPA.OL), a leading provider of programmable Smart Network Interface Cards (SmartNICs) and Infrastructure Processing Units (IPUs) used in cloud, enterprise, and telecom datacenter networks, today announced the availability of Napatech's first 400Gbps programmable SmartNIC solutions, which leverage the latest Intel Agilex® 7 FPGAs to deliver best-in-class performance for applications in security, cloud services, and network monitoring and recording.

Enterprises and OEMs providing high-performance solutions for network monitoring and recording require NICs whose performance matches the high PCI Express (PCIe) bandwidth available in the latest servers, such as those based on 4th Gen Intel Xeon® Scalable Processors. The new Napatech NT400 SmartNIC platform, based on the Intel® FPGA SmartNIC N6000-PL platform, addresses this need through a PCIe Gen 4 16-lane host interface that enables full-duplex 2x100Gbps traffic between network ports and host applications. Similarly, for applications like the 5G packet core in telecom infrastructure that require high-bandwidth inline processing of network data, the NT400 platform sustains a total of 400G of traffic over tens of millions of flows.

The NT400 programmable SmartNIC platform includes two QSFP56 network ports, supporting up to 2x200G traffic with the flexibility to configure 10G, 25G, 40G, 50G, 100G, and 200G network links. The SmartNIC hardware is complemented by Napatech's portfolio of production-grade software packages, including Link-Capture™ for use cases such as network monitoring and recording, Link-Virtualization™, which provides a virtualized data plane for cloud services, and Link-Inline™ for inline applications such as the 5G User Plane Function (UPF). These integrated solutions deliver a true "IT experience" whereby the user simply installs the card and the software, immediately achieving seamless acceleration of their application with no requirement to program the SmartNIC directly.

At the core of the NT400 platform is the Intel Agilex 7 FPGA F-Tile chiplet, which incorporates a configurable, hardened Ethernet protocol stack supporting rates from 10G to 400G. Napatech chose the Intel Agilex 7 FPGA for a host of reasons, including scalability options that allow support for five different configurations meeting various price, performance, power, and feature goals, tailored to specific customer applications and use cases. The F-Tile features are critical in enabling the NT400 to operate within the space and power limitations of standard servers deployed in network appliances, data centers, and edge locations.

"As the networking landscape continues to evolve, SmartNICs emerge as the predominant growth catalyst in the expansive NIC market, poised to reach $3.3 billion annually by 2025," said Manoj Sukumaran, Principal Analyst for Datacenter Compute and Networking at Omdia. "High bandwidth programmable Ethernet adapters require very fine optimization in hardware and software to ensure deterministic and predictable processing time, making them suitable for real-time networking applications. Napatech is among the very few vendors who could provide highly optimized SmartNICs and software solutions leveraging FPGAs from vendors like Intel, and deliver highly efficient network offload capabilities," he added.

"The NT400 platform represents the latest generation within our portfolio of SmartNIC solutions," said Jarrod Siket, Chief Marketing Officer at Napatech. "We will deliver multiple SKUs based on this platform, providing products with memory configurations as well as features like time synchronization and management ports that are precisely tuned to the requirements of our customers' applications, all packaged with the applicable production-grade software."

"We are delighted to see Napatech choose the Intel Agilex 7 FPGA for their leading-edge SmartNIC solutions," said Mike Fitton, Vice President, Programmable Solutions Group and General Manager, Network Business Division at Intel. "The combination of our FPGAs, which deliver high performance and power efficiency plus a rich feature set for the most demanding applications, together with Napatech's production-grade hardware and software, helps ensure that customers can deliver leading solutions for a wide range of enterprise and telecom applications."

About Napatech
Napatech is the leading supplier of SmartNIC solutions used in cloud, enterprise, and telecom datacenters. Through commercial-grade software suites integrated with high-performance hardware, Napatech accelerates network infrastructure and security workloads to deliver best-in-class system-level performance while maximizing the availability of server compute resources for applications and services.

Read More

