Managing Multi-Cloud Complexities for a Seamless Experience


Introduction

The early 2000s brought milestone moments for the cloud: Amazon Web Services (AWS) entered the market in 2006, and Google revealed its first cloud service in 2007. Fast forward to 2020, when the pandemic accelerated digital transformation efforts by around seven years (according to McKinsey), and the cloud has become a commercial necessity. It not only enabled the swift transition to remote work; it also remains critical to maintaining company sustainability and creativity. Many would argue that the large-scale transition to the cloud in the 2010s was necessary to enable the digital-first experiences that remote workers and decentralized businesses need today.

Multi-cloud and hybrid cloud setups are now the norm. According to Gartner, most businesses today use a multi-cloud approach to reduce vendor lock-in or to take advantage of more flexible, best-of-breed solutions.

However, managing multi-cloud systems increases cloud complexity and IT overhead, frequently slowing rather than accelerating innovation. According to 2022 research by IntelligentCIO, the average multi-cloud system spans five platforms, with AWS, Microsoft Azure, Google Cloud, and IBM Red Hat among the most common.


Managing Multi-Cloud Complexities Like a Pro

Your multi-cloud strategy should satisfy your company's requirements while also laying the groundwork for managing various cloud deployments. A proactive plan for managing multi-cloud setups is one of the most effective ways to distinguish your company. The strategies for handling multi-cloud complexity are outlined below.


Managing Data with AI and ML

AI and machine learning can help manage the enormous quantities of data in multi-cloud environments. AI simulates human decision-making and performs tasks as well as humans, and at times better. Machine learning is a subset of AI that learns from data, recognizes patterns, and makes decisions with minimal human intervention.

AI and ML help discover the most important data, reducing big data and multi-cloud complexity while enabling greater simplicity and better control over data.
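To make this concrete, here is a minimal sketch of using an off-the-shelf anomaly-detection model to surface the usage records that most deserve human attention. The field layout and the sample numbers are invented for illustration; only the scikit-learn calls are real.

    # A minimal sketch: use anomaly detection to surface the records that
    # most deserve human attention in a multi-cloud usage dataset.
    # Field layout and sample data are hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [storage_gb, egress_gb, api_calls] for one account on one cloud.
    usage = np.array([
        [120.0, 4.1, 1_200],
        [118.0, 3.9, 1_150],
        [121.5, 4.3, 1_300],
        [119.0, 4.0, 1_250],
        [540.0, 92.7, 18_000],   # an unusual spike worth investigating
    ])

    model = IsolationForest(contamination=0.2, random_state=0)
    labels = model.fit_predict(usage)        # -1 marks an outlier

    for row, label in zip(usage, labels):
        if label == -1:
            print("review first:", row)

Flagged rows become the starting point for review, so engineers triage a handful of records instead of combing through the whole dataset.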


Integrated Management Structure

Keeping up with the growing number of cloud services from several providers requires a unified management structure. Without one, IT must spend time, resources, and technology juggling and correlating infrastructure options across providers.

Routinely monitor your cloud resources and service configurations, and manage applications, clouds, and people from a single global view. Ensure you have the technology and infrastructure to handle several clouds.
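One way such a unified layer is commonly structured is behind a provider-agnostic adapter interface, so inventory and monitoring code is written once and reused for every cloud. The sketch below illustrates the pattern; the adapter classes and resource fields are hypothetical stand-ins for calls into each provider's real SDK.

    # A minimal sketch of a provider-agnostic inventory layer.
    # The adapters and their data are hypothetical stand-ins for real SDK calls.
    from dataclasses import dataclass
    from typing import Iterable, Protocol

    @dataclass
    class Resource:
        provider: str
        resource_id: str
        kind: str          # e.g. "vm", "bucket"
        region: str

    class CloudAdapter(Protocol):
        def list_resources(self) -> Iterable[Resource]: ...

    class FakeAwsAdapter:
        def list_resources(self) -> Iterable[Resource]:
            # Real code would page through the AWS SDK here.
            yield Resource("aws", "i-0abc", "vm", "us-east-1")

    class FakeAzureAdapter:
        def list_resources(self) -> Iterable[Resource]:
            yield Resource("azure", "vm-web-01", "vm", "eastus")

    def global_inventory(adapters: Iterable[CloudAdapter]) -> list:
        """Correlate resources from every cloud into one view."""
        return [r for a in adapters for r in a.list_resources()]

    for res in global_inventory([FakeAwsAdapter(), FakeAzureAdapter()]):
        print(res)

Because every provider is reached through the same interface, adding a cloud means writing one adapter rather than reworking the management tooling.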


Developing a Security Strategy

Operating multiple clouds requires a security strategy and seamless integration of security capabilities. There is no single right answer, since vendors have varied policies and cybersecurity methods. Storing data across multiple cloud deployments also reduces the risk of data loss.

Handling backups and safety copies of your data is crucial, and you should regularly examine your multi-cloud network's security. The cyber threat environment will change as your infrastructure and software do, so a multi-cloud strategy must safeguard both data and applications.
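Backups only count if they are intact, so many teams verify each copy against its source with a content checksum after replicating it to a second provider. A minimal sketch, with local files standing in for cloud objects and hypothetical paths:

    # A minimal sketch: verify that a backup copy matches its source by
    # comparing SHA-256 digests. Local files stand in for cloud objects.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(source: Path, backup: Path) -> bool:
        """Return True only when the backup is byte-identical to the source."""
        return sha256_of(source) == sha256_of(backup)

    # Hypothetical usage:
    # if not verify_backup(Path("db.dump"), Path("/mnt/cloud-b/db.dump")):
    #     raise RuntimeError("backup failed integrity check; re-replicate")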


Skillset Management

Multi-cloud complexity requires skilled operators. Do you have the appropriate IT personnel to handle multi-cloud, and if not, can you use managed or cloud services? These specialists are responsible for teaching the organization how each cloud deployment helps the company accomplish its goals, and for ensuring that all cloud entities work properly together by making full use of cloud technologies.


Closing Lines

Traditional cloud monitoring solutions cannot cope with dynamic multi-cloud setups, but automated intelligence excels at getting to the heart of cloud performance and security concerns. To begin with, businesses require end-to-end observability in order to see the overall picture. Add automation and causal AI to that capability, and teams can obtain the accurate answers they need to optimize their environments, freeing them to concentrate on increasing innovation and generating better business results.

Spotlight

Bluewolf

Bluewolf, an IBM Company, is the global Salesforce consulting agency committed to creating customer and employee experiences that drive a return on innovation. We enable companies of any size and industry to deliver deeper, personalized customer moments with Augmented Intelligence (AI) as a competitive advantage, now. Using our patented project delivery solution, Bluewolf Sightline™, we reduce deployment time and get results faster with less risk for customers worldwide, such as T-Mobile, Mercedes-Benz, and AmerisourceBergen.

OTHER ARTICLES

Virtualizing Broadband Networks: Q&A with Tom Cloonan and David Grubb

Article | June 24, 2022

The future of broadband networks is fast, pervasive, reliable, and increasingly, virtual. Dell’Oro predicts that virtual CMTS/CCAP revenue will grow from $90 million in 2019 to $418 million worldwide in 2024. While network virtualization is still in its earliest stages of deployment, many operators have begun building their strategy for virtualizing one or more components of their broadband networks.


Efficient Management of Virtual Machines using Orchestration

Article | August 12, 2022

Contents

1. Introduction
2. What is Orchestration?
3. How Orchestration Helps Optimize VM Efficiency
  3.1 Resource Optimization
  3.2 Dynamic Scaling
  3.3 Faster Deployment
  3.4 Improved Security
  3.5 Multi-Cloud Management
  3.6 Improved Collaboration
4. Considerations while Orchestrating VMs
  4.1 Together Hosting of Containers and VMs
  4.2 Automated Backup and Restore for VMs
  4.3 Ensure Replication for VMs
  4.4 Set Up Data Synchronization for VMs
5. Conclusion

1. Introduction

Orchestration is a superset of automation. Cloud orchestration goes beyond automation, providing coordination between multiple automated activities. It is increasingly essential due to the growth of containerization, which facilitates scaling applications across clouds, both public and private. The demand for both public cloud orchestration and hybrid cloud orchestration has increased as businesses adopt hybrid cloud architectures. The quick adoption of containerized, microservices-based apps that communicate over APIs has fueled the desire for automation in deploying and managing applications across the cloud. This increase in complexity has created a need for VM orchestration that can manage numerous dependencies across various clouds with policy-driven security and management capabilities.

2. What is Orchestration?

Orchestration refers to the process of automating, coordinating, and managing complex systems, workflows, or processes. It typically entails the use of automation tools and platforms to streamline and coordinate the deployment, configuration, and management of applications and services across different environments, including development, testing, staging, and production. Orchestration tools in cloud computing can be used to automate the deployment and administration of containerized applications across multiple servers or clusters. These tools help automate tasks such as container provisioning, scaling, load balancing, and health monitoring, making it easier to manage complex application environments. Orchestration lets organizations automate and streamline their workflows, reduce errors and downtime, and improve the efficacy and scalability of their operations.

3. How Orchestration Helps Optimize VM Efficiency

Orchestration offers enhanced visibility into the resources and processes in use, which helps prevent VM sprawl and helps organizations trace resource usage by department, business unit, or individual user.

[Figure: Global Market for VNFO by Virtualization Methodology, 2022-27 ($ million). Source: Insight Research.]

As the figure shows, VMs have established a solid legacy that will continue to be relevant in the near- to mid-term future. Here are six ways in which orchestration helps in the efficient management of VMs:

3.1 Resource Optimization

Orchestration helps optimize resource utilization by automating the provisioning and de-provisioning of VMs, which allows for efficient use of computing resources. Using orchestration tools, IT teams can set up rules and policies for automatically scaling VMs based on criteria such as CPU utilization, memory usage, network traffic, and application performance metrics. Orchestration also enables advanced techniques such as predictive analytics, machine learning, and artificial intelligence to optimize resource utilization. These technologies can analyze historical data and identify patterns in workload demand, allowing the orchestration system to predict future resource needs and automatically provision or de-provision resources accordingly.

3.2 Dynamic Scaling

Orchestration helps automate the scaling of VMs, enabling organizations to quickly and easily adjust their computing resources based on demand. It enables IT teams to configure scaling policies for virtual machines based on resource utilization, network traffic, and performance metrics. When workload demand exceeds a certain threshold, the orchestration system can autonomously provision additional virtual machines to accommodate the increased load; when demand decreases, it can deprovision VMs to free up resources and reduce costs (a minimal sketch of such a policy follows).
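To illustrate the threshold-based policies described in sections 3.1 and 3.2, here is a minimal sketch of the decision step of such a control loop. The metric names, thresholds, and VM counts are hypothetical; a real orchestrator would read live metrics and call provider APIs to act on the decision.

    # A minimal sketch of a threshold-based scaling policy.
    # Thresholds and the CPU metric are hypothetical; an orchestrator
    # would call real provider APIs to apply the returned VM count.
    from dataclasses import dataclass

    @dataclass
    class ScalingPolicy:
        scale_up_cpu: float = 0.80    # add a VM above 80% average CPU
        scale_down_cpu: float = 0.30  # remove a VM below 30% average CPU
        min_vms: int = 2
        max_vms: int = 10

    def decide(policy: ScalingPolicy, avg_cpu: float, current_vms: int) -> int:
        """Return the desired VM count for the next control-loop iteration."""
        if avg_cpu > policy.scale_up_cpu and current_vms < policy.max_vms:
            return current_vms + 1
        if avg_cpu < policy.scale_down_cpu and current_vms > policy.min_vms:
            return current_vms - 1
        return current_vms

    print(decide(ScalingPolicy(), avg_cpu=0.92, current_vms=4))  # -> 5
    print(decide(ScalingPolicy(), avg_cpu=0.12, current_vms=4))  # -> 3

Keeping the decision pure (metrics in, desired count out) makes the policy easy to test independently of any cloud API.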
3.3 Faster Deployment

Orchestration can help automate VM deployment, reducing the time and effort required to provision new resources. By leveraging automation, scripting, and APIs, orchestration further streamlines the VM deployment process: IT teams can define workflows and processes that are automated using scripts. In addition, orchestration can integrate with other IT management tools and platforms, such as cloud management platforms, configuration management tools, and monitoring systems, enabling IT teams to leverage various capabilities and services to streamline VM deployment and improve efficiency.

3.4 Improved Security

Orchestration can enhance the security of VMs by automating the deployment of security patches and updates. It also helps ensure VMs are deployed with the appropriate security configurations and settings, reducing the risk of misconfiguration and vulnerability: IT teams can define standard security templates and configurations for VMs, which are automatically applied during deployment. Furthermore, orchestration can integrate with other security tools and platforms, such as intrusion detection systems and firewalls, to provide a comprehensive security solution. It allows IT teams to automate the deployment of security policies and rules, ensuring that workloads remain protected against various security threats.

3.5 Multi-Cloud Management

Orchestration provides a single pane of glass for VM management, enabling IT teams to monitor and manage VMs across multiple cloud environments from a single platform. This simplifies management and reduces complexity, enabling IT teams to respond more quickly and effectively to changing business requirements, and it helps ensure consistency and compliance across multiple cloud environments. Orchestration can also integrate with other multi-cloud management tools and platforms, such as cloud brokers and cloud management platforms, to provide a comprehensive solution for managing VMs across multiple clouds.

3.6 Improved Collaboration

Orchestration streamlines collaboration by providing a centralized repository for storing and sharing information related to VMs. It also automates many of the routine tasks associated with VM management, reducing the workload for IT teams and freeing up time for more strategic initiatives. In addition, orchestration provides advanced analytics and reporting capabilities, enabling IT teams to track performance, identify bottlenecks, and optimize resource utilization. This offers a data-driven approach to VM management and allows teams to work together to identify and address performance issues.

4. Considerations while Orchestrating VMs

4.1 Together Hosting of Containers and VMs

Containers and virtual machines can exist together within a single infrastructure and be managed by the same platform. This allows for hosting various projects using a unified management point and the ability to adapt gradually based on current needs and opportunities, giving teams greater flexibility to host and administer applications using both cutting-edge technologies and established standards and methods. Moreover, since there is no need to invest in distinct physical servers for VMs and containers, this approach can maximize infrastructure utilization, resulting in lower TCO and higher ROI. Unified management also drastically simplifies processes, requiring fewer human resources and less time.

4.2 Automated Backup and Restore for VMs: Minimize Downtime and Reduce the Risk of Data Loss

Organizations should set up automated backup and restore processes for virtual machines, ensuring critical data and applications are protected in a disaster. This involves scheduling regular backups of virtual machines to a secondary location or cloud storage, and setting up automated restore processes so virtual machines can be recovered quickly during an outage or disaster.

4.3 Ensure Replication for VMs: Keep Data and Applications Available in the Event of a Disaster

Organizations should set up replication processes for their VMs, allowing them to be automatically copied to a secondary location or cloud infrastructure. This ensures that critical applications and data remain available even during a catastrophic failure at the primary site (a minimal replication check is sketched after these considerations).

4.4 Set Up Data Synchronization for VMs: Improve the Overall Resilience and Availability of the System

VM orchestration tools should be used to set up data synchronization processes between virtual machines, ensuring that data is consistent and up to date across multiple locations. This is particularly important where data must be accessed quickly from various locations, such as in distributed environments.
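As a small illustration of the replication consideration in section 4.3, the sketch below checks whether each VM's secondary copy is fresh enough and flags the ones that need a re-sync. The VM records, timestamps, and lag threshold are hypothetical.

    # A minimal sketch of an automated replication check: confirm each VM
    # has a sufficiently fresh copy at the secondary site. Records are
    # hypothetical.
    from datetime import datetime, timedelta, timezone

    MAX_LAG = timedelta(hours=1)
    now = datetime.now(timezone.utc)

    vms = [
        {"name": "vm-app-01", "last_replicated": now - timedelta(minutes=12)},
        {"name": "vm-db-01",  "last_replicated": now - timedelta(hours=5)},
    ]

    for vm in vms:
        lag = now - vm["last_replicated"]
        if lag > MAX_LAG:
            # Real orchestration would trigger a replication job and an alert.
            print(f"{vm['name']}: replication is {lag} behind, re-sync needed")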
5. Conclusion

Orchestration provides disaster recovery and business continuity, automatic scalability of distributed systems, and inter-service configuration. Cloud orchestration is becoming significant due to the advent of containerization, which permits scaling applications across clouds, both public and private. We expect continued growth and innovation in the field of VM orchestration, with new technologies and tools emerging to support more efficient and effective management of virtual machines in distributed environments. As organizations increasingly rely on cloud-based infrastructures and distributed systems, VM orchestration will continue to play a vital role in enabling businesses to operate smoothly and recover quickly from disruptions, and it will remain a critical component of disaster recovery and high-availability strategies for years to come.


Network Virtualization: The Future of Businesses and Networks

Article | July 26, 2022

Network virtualization has emerged as the widely recommended solution for the future of the networking paradigm. Virtualization has the potential to revolutionize networks in addition to providing a cost-effective, flexible, and secure means of communication. Network virtualization isn't an all-or-nothing concept: it can help several organizations with differing requirements, or it can provide a range of new advantages for a single enterprise. It is the process of combining a network's physical hardware into a single, virtual network, often accomplished by running several virtual guest machines in software containers on a single physical host system.

Network virtualization is the new gold standard for networking and is being embraced by enterprises of all kinds globally. By integrating their current network gear into a single virtual network, businesses can reduce operating expenses, automate network and security processes, and lay the groundwork for future growth. Network virtualization also enables organizations to simulate traditional hardware like servers, storage devices, and network resources. The physical network performs basic tasks like packet forwarding, while virtual versions handle more complex activities like networking service management and deployment.

Addressing Network Virtualization Challenges

IT teams may encounter network virtualization challenges that are both technical and non-technical in nature. Let's look at some common challenges and discuss how to overcome them.

Change in Network Architecture

The first big challenge is shifting from an architecture that depends heavily on routers, switches, and firewalls. Instead, these services are detached from conventional hardware and put on hypervisors that virtualize these operations; virtualized network services are then shared, scaled, and moved as required. Migrating current LANs and data centers to a virtualized platform requires careful planning, involving the following tasks:

- Determine how much CPU, computation, and storage will be required to run virtualized network services.
- Determine the optimal approach for integrating network resilience and security services.
- Determine how the virtualized network services will be implemented in stages to avoid disrupting business operations.

The key to a successful migration is meticulous preparation by architects who understand the business's network requirements. This involves a thorough examination of existing apps and services, as well as a clear knowledge of how data should move across the company most effectively. A progressive approach to migration is often the best solution; IT teams can then make changes to the virtualization platform without disrupting the whole corporate network.

Network Visibility

Network virtualization can considerably expand the number of logical technology layers that must collaborate. As a result, traditional network and data center monitoring technologies no longer have insight into some of these abstracted levels. In other circumstances, visibility can be established, but the tools fail to present the information in a way network operators can understand. In either case, deploying and managing modern network visibility technologies is typically the best choice, so that when an issue arises, NetOps personnel are notified of the specific service layer involved.

Automation and AI

The enhanced level of automation and self-service operations that can be built into a platform is a fundamental aspect of network virtualization. While these activities can considerably increase the pace of network upgrades while decreasing management overhead, they require documenting and implementing a new set of standards and practices. Prior network architectures were planned and implemented using physical hardware appliances on a hop-by-hop basis; a virtualized network, on the other hand, employs a centralized control plane to govern and push policies to all sections of the network. Changes can occur more quickly, but various components must be coordinated to accomplish their roles in harmony. As a result, network teams should move their attention away from network operations that are already automated. Their new responsibility is to guarantee that the core automation processes and AI stay in sync to fulfill those automated tasks.

Driving Competitive Edge with Network Virtualization

Virtualization in networking, or the use of virtual machines within an organization, is not a new trend. Even small and medium businesses have realized the benefits of network virtualization, especially when combined with a hosted cloud service provider. Because of this, demand for enterprise network virtualization is rising, driven by higher end-user demands and the proliferation of devices and business tools. These network virtualization benefits can help boost business growth and gain a competitive edge:

- Cost savings on hardware
- Faster desktop and server provisioning and deployment
- Improved data security and disaster recovery
- Increased IT operational efficiency
- Small footprint and energy savings

Network Virtualization: The Path to Digital Transformation

Business is at the center of digital transformation, but technology is needed to make it happen. Integrated clouds, highly modern data centers, digital workplaces, and increased data center security are all puzzle pieces, and putting them together requires a variety of products and services deployed cohesively. The cloud revolution is still influencing IT, transforming how digital content is consumed and delivered, so it should come as no surprise that such a shift has influenced how we think about networking. When it boils down to it, the purpose of digital transformation for every company, irrespective of industry, is the same: to boost the speed with which you can respond to market changes and evolving business needs, to enhance your ability to embrace and adapt to new technology, and to improve overall security.

As businesses realize that the underlying benefit of cloud adoption and enhanced virtualization isn't simply cost savings, digital strategies are evolving, becoming more intelligent and successful in the process. Network virtualization is also a path toward the smooth digital transformation of any business. How does virtualization help in accelerating digital transformation?

- Combining public and private clouds, involving hardware-based computing, storage, and networking software definition. A hyper-converged infrastructure that integrates unified management with virtualized computing, storage, and networking could be included.
- Creating a platform for greater productivity by providing the apps and services consumers require when and where they need them. This should include simplifying application access and administration as well as unifying endpoint management.
- Improving network security and enhancing security flexibility to guarantee that quicker speed to market is matched by tighter security.

Virtualization will also help businesses move more quickly and safely, bringing products and profits to market faster.

Enhancing Security with Network Virtualization

Security has evolved into an essential component of every network architecture. However, since various areas of the network are often segregated from one another, it can be challenging for network teams to design and enforce network virtualization security standards that apply to the whole network. Zero trust can integrate such network parts and their accompanying virtualization activities. Throughout the network, the zero-trust architecture depends on user and device authentication: if LAN users wish to access data center resources, they must first be authenticated. A zero-trust environment paired with network virtualization provides the secure connection required for endpoints to interact safely. To facilitate these interactions, virtual networks can be ramped up and down while retaining the appropriate degree of traffic segmentation. Access policies, which govern which devices can connect with one another, are a key part of this process: if a device is allowed to access a data center resource, the policy should be understood at both the WAN and campus levels (a minimal sketch of such policies follows the feature list below).

Some of the core network virtualization security features are:

- Isolation and multitenancy, critical features of network virtualization.
- Segmentation, which is related to isolation but is utilized in a multitier virtual network.
- Firewalling technologies that enable segmentation inside virtual networks, part of a network virtualization platform's foundation.
- Automatic provisioning and context-sharing across virtual and physical security systems.
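To ground the zero-trust access-policy idea above, the following sketch models segment-to-segment rules as data and applies default-deny: a flow is permitted only if a rule explicitly allows it. Segment names, ports, and rules are hypothetical; real platforms enforce this in the hypervisor or the network fabric.

    # A minimal sketch of policy-driven micro-segmentation: every flow is
    # denied unless an explicit rule allows it. Segments and rules are
    # hypothetical.
    ALLOWED_FLOWS = {
        ("web-tier", "app-tier", 8443),
        ("app-tier", "db-tier", 5432),
    }

    def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
        """Default-deny: permit a flow only if a rule explicitly allows it."""
        return (src_segment, dst_segment, port) in ALLOWED_FLOWS

    print(is_allowed("web-tier", "app-tier", 8443))  # True
    print(is_allowed("web-tier", "db-tier", 5432))   # False: no direct path

The default-deny stance is the essence of zero trust: connectivity is something a policy grants, not something the network assumes.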
Investigating the Role of Virtualization in Cloud Computing

Virtualization in the cloud computing domain refers to the creation of virtual resources (such as a virtual server, virtual storage device, virtual network switch, or even a virtual operating system) from a single resource of its type, which appears as several isolated resources or environments that users can consume as separate physical resources. Virtualization enables the benefits of cloud computing, such as ease of scaling up, security, and fluid or flexible resources. If another server is necessary, a virtual server is immediately created and deployed; when more memory is needed, the virtual server's configuration is increased to provide the extra RAM. As a result, virtualization is the underlying technology of the cloud computing business model.

The benefits of virtualization in cloud computing:

- Efficient hardware utilization
- Improved availability
- Quick and simple disaster recovery
- Energy savings
- Quick and simple setup
- Simple cloud migration

Motivating Factors for the Adoption of Network Virtualization

Demand for enterprise networks continues to climb, owing to rising end-user demands and the proliferation of devices and business software. Thanks to network virtualization, IT organizations are gaining the ability to respond to shifting demands and match their networking capabilities with their virtualized storage and computing resources. In fact, according to a recent SDxCentral report, 88% of respondents believe it is "important" or "mission critical" to implement network virtualization software over the next two to five years. Virtualization is also an excellent alternative for businesses that employ outsourced IT services, are planning mergers or acquisitions, or must segregate IT teams owing to regulatory compliance.

Reasons to adopt network virtualization:

- A business need for speed
- Rising security requirements
- Application mobility
- Micro-segmentation
- IT automation and orchestration
- Reduced hardware dependency and CapEx through multi-tenancy
- Cloud disaster recovery
- Improved scalability

Wrapping Up

Network virtualization and cloud computing are emerging technologies of the future. As CIOs get actively involved in organizational systems, these new concepts will be implemented in more businesses. As consumer demand for real-time services expands, businesses will be driven to explore network virtualization as the best way to take their networks to the next level. The networking future is here.

FAQ

Why is network virtualization important for business?
By integrating their current network gear into a single virtual network, businesses can reduce operating expenses, automate network and security processes, and set the stage for future growth.

Where is network virtualization used?
Network virtualization can be utilized in application development and testing to simulate hardware and system software realistically. In application performance engineering, it allows the modeling of connections among applications, services, dependencies, and end users for software testing.

How does virtualization work in cloud computing?
Virtualization enables cloud providers to serve users with their existing physical computer infrastructure. As a simple and direct process, it allows cloud customers to buy only the computing resources they require when they need them, and to maintain those resources cost-effectively as demand grows.


VMware NSX 3.2 Delivers New, Advanced Security Capabilities

Article | December 7, 2021

It's an impactful release focused on significant NSX security enhancements.

Putting a hard shell around a soft core is not a recipe for success in security, but legacy security architectures for application protection have often looked exactly like that: a hard perimeter firewall layer around an application infrastructure that was fundamentally not built with security as a primary concern. VMware NSX Distributed Firewall pioneered the micro-segmentation concept for granular access controls for cloud applications with the initial launch of the product in 2013. The promise of Zero Trust security for applications, the simplicity of deployment, and the ease of achieving internal security objectives made NSX an instant success for security-sensitive customers.

Our newest release, NSX-T 3.2, establishes a new marker for securing application infrastructure by introducing significant new features to identify and respond to malware and ransomware attacks in the network, to enhance user identification and L7 application identification capabilities, and, at the same time, to simplify deployment of the product for our customers.

"Modern day security teams need to secure mission-critical infrastructure from both external and internal attacks. By providing unprecedented threat visibility leveraging IDS, NTA, and Network Detection and Response (NDR) capabilities along with granular controls leveraging L4-L7 Firewall, IPS, and Malware Prevention capabilities, NSX 3.2 delivers an incredible security solution for our customers." Umesh Mahajan, SVP, GM (Networking and Security Business Unit)

Distributed Advanced Threat Prevention (ATP)

Attackers often use multiple sophisticated techniques to penetrate the network, move laterally within the network in a stealthy manner, and exfiltrate critical data at an appropriate time. Micro-segmentation solutions focused solely on access control can reduce the attack surface but cannot provide the detection and prevention technologies needed to thwart modern attacks. NSX-T 3.2 introduces several new capabilities focused on detection and prevention of attacks inside the network. Of critical note is that these advanced security solutions do not need network taps, separate monitoring networks, or agents inside every workload.

Distributed Malware Prevention

Lastline's highly reputed dynamic malware technology is now integrated with NSX Distributed Firewall to deliver an industry-first Distributed Malware Prevention solution. Leveraging the integration with Lastline, a Distributed Firewall embedded within the hypervisor kernel can now identify both "known malicious" as well as "zero day" malware.

Distributed Behavioral IDS

Whereas earlier versions of NSX Distributed IDPS (Intrusion Detection and Prevention System) delivered primarily signature-based detection of intrusions, NSX 3.2 introduces "behavioral" intrusion detection capabilities as well. Even if specific IDS signatures are not triggered, this capability helps customers know whether a workload is seeing any behavioral anomalies, like DNS tunneling or beaconing, for example, that could be a cause for concern.

Network Traffic Analysis (NTA)

For customers interested in baselining network-wide behavior and identifying anomalous behavior at the aggregated network level, NSX-T 3.2 introduces Distributed Network Traffic Analysis (NTA). Network-wide anomalies like lateral movement, suspicious RDP traffic, and malicious interactions with the Active Directory server, for example, can alert security teams about attacks underway and help them take quick remediation actions.

Network Detection and Response (NDR)

Alert overload, and the resulting fatigue, is a real challenge among security teams. Leveraging advanced AI/ML techniques, the NSX-T 3.2 Network Detection and Response solution consolidates security IOCs from different detection systems like IDS, NTA, malware detection, etc., to provide a "campaign view" that shows specific attacks in play at that point in time. MITRE ATT&CK visualization helps customers see the specific stage in the kill chain of individual attacks, and the "time sequence" view helps them understand the sequence of events that contributed to the attack on the network.

Key Firewall Enhancements

While delivering new Advanced Threat Prevention capabilities is one key emphasis of the NSX-T 3.2 release, providing meaningful enhancements to core firewalling capabilities is an equally critical area of innovation.

Distributed Firewall for VDS Switchports

While NSX-T has thus far supported workloads connected to both overlay-based N-VDS switchports as well as VLAN-based switchports, customers had to move the VLAN switchports from VDS to N-VDS before a Distributed Firewall could be enforced. With NSX-T 3.2, native VLAN DVPGs are supported as-is, without having to move to N-VDS. Effectively, Distributed Security can be achieved in a completely seamless manner without having to modify any networking constructs.

Distributed Firewall Workflows in vCenter

With NSX-T 3.2, we are introducing the ability to create and modify Distributed Firewall rules natively within vCenter. For small- to medium-sized VMware customers, this feature simplifies the user experience by eliminating the need to use a separate NSX Manager interface.

Advanced User Identification for Distributed and Gateway Firewalls

NSX supported user identity-based access control in earlier releases. With NSX-T 3.2, we're introducing the ability to connect directly to Microsoft Active Directory to support user identity mapping. In addition, for customers who do not use Active Directory for user authentication, NSX also supports VMware vRealize Log Insight as an additional method for user identity mapping. This enhancement applies to both the NSX Distributed Firewall and the NSX Gateway Firewall.

Enhanced L7 Application Identification for Distributed and Gateway Firewalls

NSX supported Layer-7 application identification-based access control in earlier releases. With NSX-T 3.2, we are enhancing the signature set to about 750 applications. While several perimeter firewall vendors claim a larger set of Layer-7 application signatures, they focus mostly on internet application identification (like Facebook, for example). Our focus with NSX at this time is on internal applications hosted by enterprises. This enhancement applies to both the NSX Distributed Firewall and the NSX Gateway Firewall (a simplified sketch of this style of rule evaluation follows).
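As a generic illustration of the kind of first-match, L4-L7 rule evaluation such distributed firewalls perform, here is a simplified sketch. It is a hypothetical model for intuition only, not VMware's implementation or API; the groups, applications, and rules are invented.

    # A minimal, hypothetical model of first-match firewall rule evaluation
    # with an application-identity attribute alongside the L4 fields.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        src: str             # source group, "*" matches anything
        dst: str             # destination group
        app: str             # L7 application identity, e.g. "postgres"
        action: str          # "allow" or "drop"

    RULES = [
        Rule("app-tier", "db-tier", "postgres", "allow"),
        Rule("*", "db-tier", "*", "drop"),     # default-deny for the DB zone
    ]

    def evaluate(src: str, dst: str, app: str) -> str:
        """Walk the ordered rule table; the first matching rule wins."""
        for rule in RULES:
            if rule.src in ("*", src) and rule.dst in ("*", dst) and rule.app in ("*", app):
                return rule.action
        return "drop"  # implicit default

    print(evaluate("app-tier", "db-tier", "postgres"))  # allow
    print(evaluate("web-tier", "db-tier", "ssh"))       # drop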
NSX Intelligence

NSX Intelligence is geared toward delivering unprecedented visibility into all application traffic inside the network and enabling customers to create micro-segmentation policies to reduce the attack surface. It has a processing pipeline that de-duplicates, aggregates, and correlates East-West traffic to deliver in-depth visibility.

Scalability Enhancements for NSX Intelligence

As application infrastructure grows rapidly, it is vital that one's security analytics platform can grow with it. With the new release, we have rearchitected the application platform upon which NSX Intelligence runs, moving from a stand-alone appliance to a containerized micro-service architecture powered by Kubernetes. This architectural change future-proofs the Intelligence data lake and allows us to eventually scale out the solution to n-node Kubernetes clusters. Large enterprise customers that need visibility into application traffic can confidently deploy NSX Intelligence and leverage the enhanced scale it supports.

NSX Gateway Firewall

While the NSX Distributed Firewall focuses on east-west controls within the network, the NSX Gateway Firewall is used to secure ingress and egress traffic into and out of a zone.

Gateway Firewall Malware Detection

The NSX Gateway Firewall in the 3.2 release gains significant Advanced Threat Detection capabilities. The Gateway Firewall can now identify both known and zero-day malware ingressing or egressing the network. This new capability is based on the Gateway Firewall's integration with Lastline's highly reputed dynamic network sandbox technology.

Gateway Firewall URL Filtering

Internal users and applications reaching out to malicious websites is a huge security risk that must be addressed. In addition, enterprises need to limit internet access to comply with corporate internet usage policies. The NSX Gateway Firewall in 3.2 introduces the capability to restrict access to internet sites. Access can be limited based on either the category the URL belongs to or the "reputation" of the URL. The URL-to-category and reputation mapping is constantly updated by VMware, so customer intent is enforced automatically even as internet sites themselves change.



Related News


Leostream Enhances Security and Management of vSphere Hybrid Cloud Deployments

Business Wire | January 29, 2024

Leostream Corporation, the world's leading Remote Desktop Access Platform provider, today announced features to enhance security, management, and end-user productivity in vSphere-based hybrid cloud environments. The Leostream platform strengthens end-user computing (EUC) capabilities for vSphere users, including secure access to both on-premises and cloud environments, heterogeneous support, and reduced cloud costs.

With the Leostream platform as the single pane of glass managing EUC environments, any hosted desktop environment, including individual virtual desktops, multi-user sessions, hosted physical workstations or desktops, and hosted applications, becomes simpler to manage, more secure, more flexible, and more cost-effective. Significant ways the Leostream platform expands vSphere's capabilities include:

Security: The Leostream platform ensures data remains locked in the corporate network and works across on-premises and cloud environments, providing even disparate infrastructures with the same levels of security and command over authorization, control, and access tracking. The platform supports multi-factor authentication and allows organizations to enforce strict access control rules, creating an EUC environment modeled on a zero-trust architecture.

Multivendor/protocol support: The Leostream platform was developed from the ground up for heterogeneous infrastructures, and as the connection management layer of the EUC environment it allows organizations to leverage vSphere today and other hypervisors or hyperconvergence platforms in the future as their needs evolve. The platform supports the industry's broadest array of remote display protocols, including specialized protocols for mission-critical tasks.

Consistent EUC experience: The Leostream platform enables IT to make changes to the underlying environment while keeping the end-user experience constant, and to incorporate AWS, Azure, Google Cloud, or OpenStack private clouds into their environment without disrupting end-user productivity. By integrating with the corporate Identity Providers (IdPs) employees are already familiar with, and providing employees a single sign-in portal, the platform offers simplicity to users too.

Connectivity: The Leostream Gateway securely connects to on-prem and cloud resources without virtual private networks (VPNs) and eliminates the need to manage and maintain security groups. End users get the same seamless login and high-performance connection across hybrid environments, including corporate resources located off the internet.

Controlling cloud costs: The Leostream Connection Broker implements automated rules that control capacity and power state in the cloud, allowing organizations to optimize their cloud usage and minimize costs, for example by ensuring cloud instances aren't left running when they are no longer needed, an idea sketched below. The Connection Broker also intelligently pools and shares resources across groups of users, so organizations can invest in fewer systems, reducing the overall cost of ownership.

"These features deliver a streamlined experience with vSphere and hybrid or multi-cloud resources so end users remain productive, and corporate data and applications remain secure," said Leostream CEO Karen Gondoly. "At a time when there is uncertainty about the future of support for VMware's end-user computing, it's important to bring these options to the market to show that organizations can extend vSphere's capabilities and simultaneously plan for the future without disruption to the workforce."
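The cost-control idea, powering off instances that no longer have a reason to run, can be pictured as a simple rule over session state. The sketch below is a hypothetical illustration of such a rule, not Leostream's actual Connection Broker logic; the desktop records and grace period are invented.

    # A hypothetical sketch of an idle-capacity rule: stop cloud desktops
    # that have had no connected session for a grace period.
    from datetime import datetime, timedelta, timezone

    IDLE_LIMIT = timedelta(minutes=30)
    now = datetime.now(timezone.utc)

    desktops = [
        {"id": "vd-101", "connected": False, "last_seen": now - timedelta(hours=2)},
        {"id": "vd-102", "connected": True,  "last_seen": now},
    ]

    def should_power_off(d: dict) -> bool:
        return not d["connected"] and now - d["last_seen"] > IDLE_LIMIT

    for d in desktops:
        if should_power_off(d):
            # A real broker would call the cloud provider's stop-instance API.
            print(f"powering off {d['id']} to save cost")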
About Leostream Corporation

Leostream Corporation, the global leader in Remote Desktop Access Platforms, offers comprehensive solutions that enable seamless work-from-anywhere environments for individuals across diverse industries, regardless of organization size or location. The core of the Leostream platform is its commitment to simplicity and insight. It is driven by a unified administrative console that streamlines the management of users, cloud desktops, and IT assets while providing real-time dashboards for informed decision-making. The company continually monitors the evolving remote desktop landscape, anticipating future trends and challenges. This purposeful, proactive approach keeps clients well-prepared for the dynamic changes in remote desktop technology.


