vSphere High Availability explained: Monitor VMs and applications

Search VMware | February 25, 2020

VMware vSphere High Availability (HA) is a feature that restarts failed VMs on alternative host servers to reduce downtime for critical applications. With vSphere HA, you can pool physical servers on the same network into a high-availability cluster. When a problem occurs, HA detects the host failure or VM outage and restarts the affected VMs on healthy hosts elsewhere in the cluster, automating VM failover and helping organizations improve availability. The Fault Domain Manager (FDM) agent on each host monitors ESXi host availability through heartbeats exchanged between the cluster's hosts, while VM and application monitoring relies on the heartbeat signal that VMware Tools generates inside each guest. You can also define affinity and anti-affinity rules so that restarted VMs end up where they belong.
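As a rough illustration of how these settings can be automated, the sketch below uses the open-source pyVmomi SDK to enable HA on a cluster and add a VM anti-affinity rule. The vCenter address, credentials, cluster name, and VM names are placeholders, and the exact options you set will vary by environment and vSphere version.

```python
# Hypothetical sketch: enable vSphere HA on a cluster and add a VM anti-affinity
# rule with pyVmomi (the open-source Python SDK for the vSphere API).
# The vCenter address, credentials, cluster, and VM names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())  # lab/self-signed certs only
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Walk the inventory and return the first managed object with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next((o for o in view.view if o.name == name), None)
    finally:
        view.DestroyView()

cluster = find_obj(vim.ClusterComputeResource, "Prod-Cluster")
vm_a = find_obj(vim.VirtualMachine, "app-node-1")
vm_b = find_obj(vim.VirtualMachine, "app-node-2")

spec = vim.cluster.ConfigSpecEx()
# Turn on HA (the "das" config) with host monitoring driven by heartbeats.
spec.dasConfig = vim.cluster.DasConfigInfo(enabled=True, hostMonitoring="enabled")
# Keep the two app nodes on different hosts with an anti-affinity rule.
rule = vim.cluster.AntiAffinityRuleSpec(name="separate-app-nodes", enabled=True,
                                        vm=[vm_a, vm_b])
spec.rulesSpec = [vim.cluster.RuleSpec(operation="add", info=rule)]

task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
print("Reconfigure task submitted:", task.info.key)
Disconnect(si)
```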

Spotlight

IBM and VMware have joined forces in a strategic partnership. Together, they have streamlined hybrid cloud adoption, helping you extend your existing workloads to the cloud through full access to the native VMware stack. Manage workloads in the cloud exactly as you would on-premises. Because nothing is changing, there’s nothing to migrate. No need to modify security, storage, or networking. You don’t have to convert virtual machines. And your IT staff can use their existing tools and skillsets.


Other News
VPN

Veracode Launches Container Security Offering That Secures Cloud-Native Application Development

Veracode | October 11, 2022

Veracode, a leading global provider of application security testing solutions, today announced the enhancement of its Continuous Software Security Platform to include container security. An early access program for Veracode Container Security is now underway for existing customers. The new Veracode Container Security offering, designed to meet the needs of cloud-native software engineering teams, addresses vulnerability scanning, secure configuration, and secrets management requirements for container images.

Veracode Chief Product Officer Brian Roche said, “As developers embrace cloud-native computing practices, containers have become increasingly important for business efficiency. This launch helps close a substantial gap in the market for developer-friendly solutions that cover critical capabilities for container security. We are excited to bring this next enhancement of our platform to the market and empower customers to address security testing for more modern architectures and deployment styles.”

The Requirement for Container Security is Rapidly on the Rise

Containers are increasingly used to simplify software deployment and runtime environment configuration management. They comprise small, fast, portable units of software in which code is packaged so that an application can be run quickly and reliably in different computing environments -- from the desktop to the cloud. They provide an ecosystem of repositories, orchestration technologies, and capabilities that address related issues, such as service-to-service communication and configuration management. Instantiated in pipelines from code, containers have the benefit of immutability, meaning they are not updated, reconfigured or patched in production. Instead, the underlying image is updated with new capabilities and redeployed, helping to improve efficiency in the production environment.

Despite these benefits, containers are affected by many of the same problems that traditionally plague physical or virtual server hardware, such as vulnerabilities introduced through additional software, poorly managed secrets (like Amazon Web Services keys and credentials in Dockerfiles), and security misconfigurations. This has resulted in increased demand for products that address these issues and related problems, with the Global Container Security Market size expected to reach $3.9 billion by 2027*. Container security scanning analyzes container images against organizational or industry-specific standards to identify insecure processes, misconfigurations that could lead to a vulnerability, and inadequate authentication and access control.

Veracode Container Security Integrates into the Developer Environment

Many products already on the market are aimed at securing containers at runtime and offer limited support for developers, posing a major challenge for early remediation. Veracode’s solution instead integrates into the CI/CD (continuous integration and continuous delivery) pipeline and is available at the command line. Providing coverage for vulnerability detection and remediation, secrets management, and security configuration issues on the most popular operating systems, it delivers remediation advice to developers early in the software development life cycle so that insecure containers don’t ship to production.

Veracode Container Security results are available in a variety of formats based on the user’s choice, including text, JSON (JavaScript Object Notation), and Software Bill of Materials (CycloneDX, SWID [Software Identification Tagging], or SPDX [Software Package Data Exchange]), making them easy to integrate with other tools. Providing developers and their teams with the tools to meet their specific needs means they can find and fix vulnerabilities early in the lifecycle, giving them confidence that their containerized application environment is secure.

“Veracode Container Security will be instrumental for our developers to ensure that the workloads they deploy into our cloud are secure,” said the Director of Information Security at an automotive company. “Without this tool, it would take our team weeks to receive and action container results, and these would only have been available in limited formats. Now, we’re excited to integrate findings into the pipeline before they even move into production, creating time and cost efficiencies for our business.”

About Veracode

Veracode is a leading AppSec partner for creating secure software, reducing the risk of security breaches, and increasing security and development teams’ productivity. As a result, companies using Veracode can move their business, and the world, forward. With its combination of process automation, integrations, speed, and responsiveness, Veracode helps companies get accurate and reliable results to focus their efforts on fixing, not just finding, potential vulnerabilities.
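As the article notes, scan results can be exported as JSON or as a CycloneDX, SWID, or SPDX software bill of materials. As a rough, tool-agnostic illustration, the sketch below reads a CycloneDX JSON file and summarizes its components and reported vulnerabilities; the file name and gating thresholds are assumptions based on the public CycloneDX schema, not on Veracode's actual output.

```python
# Hypothetical sketch: summarize a CycloneDX-format SBOM (e.g. exported from a
# container scan) so it can feed other pipeline tooling. The file name and
# field handling are assumptions based on the public CycloneDX JSON schema.
import json

with open("container-sbom.cdx.json") as fh:
    sbom = json.load(fh)

# CycloneDX documents list packages under "components" and findings under
# "vulnerabilities" (the latter is optional and may be absent).
components = sbom.get("components", [])
vulns = sbom.get("vulnerabilities", [])

print(f"{len(components)} components, {len(vulns)} reported vulnerabilities")
for vuln in vulns:
    severities = {r.get("severity", "unknown") for r in vuln.get("ratings", [])}
    affected = [ref.get("ref", "?") for ref in vuln.get("affects", [])]
    print(f"- {vuln.get('id', 'unknown-id')}: severity={'/'.join(sorted(severities))}, "
          f"affects {len(affected)} component(s)")

# Example gating logic for CI: fail the build on any high/critical finding.
blocking = [v for v in vulns
            if any(r.get("severity") in ("high", "critical") for r in v.get("ratings", []))]
if blocking:
    raise SystemExit(f"{len(blocking)} high/critical findings -- failing the pipeline")
```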

Read More

VIRTUAL DESKTOP STRATEGIES

VyOS Gets Featured as a Fast Mover and a Challenger in GigaOm Radar for Network Operating Systems

VyOS Networks Corporation | September 21, 2022

VyOS Networks Corporation, the company that develops the VyOS Network Platform and provides support services, announced today that it was featured as a challenger and a fast mover in the GigaOm Radar for Network Operating Systems reports for the SMB, enterprise, and cloud/managed service provider segments. The full report is available to anyone interested upon request.

"We are proud to get featured in GigaOm Radar reports this year again. We are glad that our effort to provide our customers with a stable and feature-rich platform is recognized," said Yuriy Andamasov, VyOS Networks Corporation CEO.

"There is more to VyOS than market reports cover, however. GigaOm Radar makes a comparison of products where their use cases overlap. Still, VyOS also covers unique use cases that other vendors don't, such as custom embedded hardware support and modification of VyOS itself," adds Daniil Baturin, VyOS Networks Corporation CTO.

About VyOS

VyOS is an open-source network operating system. Its slogan is "a universal router" because it supports multiple deployment scenarios and roles: bare-metal hardware from small boards to large servers; all popular virtualization platforms, including VMware, KVM, and Microsoft Hyper-V; and multi-cloud deployment on major hyperscalers such as Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud Infrastructure. VyOS supports multiple dynamic routing protocols via FRRouting, various VPN protocols, and other network routing and security features, available through a unified stateful CLI and an HTTP API for management automation.

About VyOS Networks Corporation

VyOS Networks Corporation is a company started by the VyOS open-source project's founders and maintainers to provide services for it and ensure its sustainable development.
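The article mentions that VyOS exposes an HTTP API alongside its stateful CLI for management automation. The sketch below is a minimal, hypothetical example of that pattern: it sets an interface description via the /configure endpoint and reads the configuration back via /retrieve. The router address, API key, and configuration path are placeholders; check the VyOS documentation for the exact API behavior in your release.

```python
# Hypothetical sketch of driving the VyOS HTTP API. The router address, API key,
# and configuration path are placeholders. The API expects a form POST with a
# JSON "data" payload and the API "key" configured on the router.
import json
import requests

ROUTER = "https://192.0.2.1"   # placeholder management address
API_KEY = "changeme"           # placeholder API key

def vyos_request(endpoint, payload):
    """POST one operation to the VyOS HTTP API and return the parsed response."""
    resp = requests.post(f"{ROUTER}/{endpoint}",
                         data={"data": json.dumps(payload), "key": API_KEY},
                         verify=False, timeout=10)  # self-signed cert in lab setups
    resp.raise_for_status()
    return resp.json()

# Set a description on eth0, then read back the interface configuration.
vyos_request("configure", {"op": "set",
                           "path": ["interfaces", "ethernet", "eth0",
                                    "description", "uplink-to-core"]})
result = vyos_request("retrieve", {"op": "showConfig",
                                   "path": ["interfaces", "ethernet", "eth0"]})
print(result.get("data"))
```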

Read More

VIRTUAL DESKTOP TOOLS

Verge.io and Dallas Digital Offer Alternative Enterprise Virtualization Solutions

Verge.io | September 14, 2022

Verge.io, the company with a simpler way to virtualize data centers, and Dallas Digital Services, an IT solutions provider for enterprises and government agencies, today announced an agreement to offer Verge.io’s virtual cloud software stack as a simple, cost-effective alternative for building, deploying, and managing virtual data centers. With Verge-OS software, Dallas Digital enables virtualized data centers for its clients with greater savings and efficiencies.

Verge-OS abstracts compute, network, and storage from commodity servers and creates pools of raw resources that are simple to run and manage, creating feature-rich infrastructures for environments and workloads like clustered HPC, ultra-converged and hyperconverged data centers, DevOps and test/dev, compliant medical and healthcare, remote and edge compute including VDI, and multi-tenant private clouds.

“Legacy virtualization platforms require many different SKUs, with complex pricing schemes and significant API integration to build out a virtualized data center, especially at scale,” said Howie Evans, Vice President at Dallas Digital. “We are pleased to be able to offer Verge-OS as a way to deliver a virtual data center experience, but in a secure, hardware-efficient system that can scale compute, memory, and storage resources as needed.”

“Recent M&A activity is causing enterprises to look for alternatives to legacy systems, and partnerships with solution providers like Dallas Digital are an ideal way to bring these customers a modernized virtualization platform for the way organizations work today. Verge-OS is not only simpler to configure and run, it’s simpler to buy, and simpler for Dallas Digital to support,” said Yan Ness, CEO at Verge.io.

Verge-OS is an ultra-thin software layer of less than 300,000 lines of code that is easy to install and scale on low-cost commodity hardware and self-manages based on AI/ML. A single license replaces separate hypervisor, networking, storage, data protection, and management tools to simplify operations and downsize complex technology stacks. Secure virtual data centers based on Verge-OS include all enterprise data services, such as global deduplication, disaster recovery, continuous data protection, snapshots, long-distance sync, and auto-failover. They are ideal for creating honeypots, sandboxes, cyber ranges, air-gapped computing, and secure compliance enclaves to meet regulations such as HIPAA, CUI, SOX, NIST, and PCI. Nested multi-tenancy gives service providers, departmental enterprises, and campuses the ability to assign resources and services to groups and sub-groups.

About Dallas Digital Services

Founded in 1996, Dallas Digital Services began as an on-site service provider for enterprise companies and has developed into a highly recognized solution provider for mission-critical and high-availability solutions. It is a relationship-driven IT solutions provider, offering best-of-breed technology and services for enterprise organizations as well as public-sector entities. It offers strategic services, technical expertise, and sales support to enable clients to maximize the value of their data center investments. Based on each customer's unique objectives and IT environment, Dallas Digital can assess, architect, implement, and manage solutions that improve current technology performance.

Read More

VIRTUAL SERVER INFRASTRUCTURE

Red Hat Introduces Lightweight Kubernetes Solution to Power the Next Evolution of Open Edge Computing

Red Hat | October 28, 2022

Red Hat, Inc., the world's leading provider of open source solutions, today introduced Red Hat Device Edge, a solution for flexibly deploying traditional or containerized workloads on small devices such as robots, IoT gateways, points of sale, public transport and more. Red Hat Device Edge delivers an enterprise-ready and supported distribution of the Red Hat-led open source community project MicroShift, a lightweight Kubernetes orchestration solution built from the edge capabilities of Red Hat OpenShift, along with an edge-optimized operating system built from Red Hat Enterprise Linux. This latest product in the Red Hat edge portfolio aims to provide a future-proof platform that allows organizations’ architecture to evolve as their workload strategy changes.

As more and more companies deploy edge computing across a broader range of use cases, many new questions, operational needs and business challenges are poised to arise. In industries like automotive, manufacturing and more, organizations are up against different environmental, security and operational challenges that require the ability to work with small form-factor edge devices in resource-constrained environments. Ultimately, different devices have different requirements in terms of computing power, software compatibility and security footprint.

With Red Hat Device Edge, organizations have the flexibility to deploy containers at the edge in a small footprint, reducing compute requirements by up to 50% in comparison to traditional Kubernetes edge configurations. It also helps address many of the emerging questions around large-scale edge computing at the device edge by incorporating:

Kubernetes built for edge deployments, enabling IT teams to use familiar Kubernetes features in a new, smaller, lighter-weight footprint offered by MicroShift. This lowers the barrier of entry for teams building cloud-native applications for edge computing environments and enables them to use existing Kubernetes skills to achieve greater consistency of operations across the entirety of the hybrid cloud, from the datacenter to public clouds to the edge.

An edge-optimized Linux OS built from the world’s leading enterprise Linux platform, Red Hat Enterprise Linux, and tailored for small edge devices with intelligent updates that use minimum bandwidth. This helps organizations tackle the challenges of intermittent connectivity while mitigating the impact on edge innovation.

Capabilities for centrally scaling and monitoring edge device fleets with Red Hat Smart Management. IT teams can use zero-touch provisioning, system health visibility and updates with automatic rollbacks to maintain a stronger edge management and application security posture.

Red Hat Device Edge for far-flung, resource-constrained use cases across different industries

Red Hat Device Edge was created to help Red Hat customers and partners tackle their most challenging edge environments. For example, Lockheed Martin has been collaborating with Red Hat in the MicroShift project community and is also working to deploy Red Hat Device Edge to modernize and standardize its application delivery and AI workloads in extreme conditions, including wildland fire management, contested military environments and space. Additionally, ABB is planning to use Red Hat Device Edge for ABB Ability™ Edgenius™ on resource-constrained devices. Edgenius is a comprehensive edge platform for industrial software applications.

Red Hat Device Edge is aimed at organizations that require small form-factor edge devices with support for bare-metal, virtualized or containerized applications, regardless of industry. Additional use cases include but are not limited to:

Miniature, connected nodes on public transportation, where edge devices are often in motion but still need fast AI/ML processing to analyze data locally in real time (e.g. railways, mining, cars, drones).

Resilient resource nodes at challenging locations like weather monitoring stations where, in spite of the harsh, tough-to-support environments, an edge device will still be capable of taking care of itself, with the ability to perform automated software rollbacks, maintain a stronger security posture and better enforce sensitive data controls.

Emerging edge-constrained scenarios where thousands of edge devices may be running applications in locations that make weight, temperature and connectivity all major concerns.

Red Hat Device Edge meets organizations wherever they are today in their edge computing journey, as it will run a wide variety of workloads using Podman for edge container management or MicroShift for a Kubernetes API. Customers will even be able to run legacy Windows applications within a virtual machine.

Availability

Red Hat Device Edge is planned as a developer preview early next year and is expected to be generally available with full support later in 2023.

Supporting Quotes

Francis Chow, vice president and general manager, In-Vehicle Operating System and Edge, Red Hat: “Innovation at the edge has introduced new benefits and use cases for organizations across all industries - but it has also created new challenges. Working with our customers and partners, Red Hat began a journey to develop a new technology offering - designed specifically for the edge - extending our hybrid cloud solution so that our ecosystem will be able to take advantage of intelligence paired with trusted open source technology to tackle their smallest footprint remote edge use cases. Tested in the community, now produced by Red Hat, Red Hat Device Edge is a major next step in harnessing the full range of benefits promised by edge computing.”

Justin Taylor, vice president of Artificial Intelligence, Lockheed Martin: “Red Hat Device Edge will enable Lockheed Martin to revolutionize artificial intelligence processing for our DOD customers’ most challenging missions. The ability for small military platforms to handle large AI workloads will increase their capacity in the field, ensuring our military can stay ahead of evolving threats. Lockheed Martin is a long-time customer and collaborator of Red Hat, and working on Red Hat Device Edge together is a critical next step in this strategic relationship.”

Bernhard Eschermann, chief technology officer, ABB Process Automation: “ABB is excited to continue working with Red Hat to streamline the transition from automated to autonomous operations for the manufacturing industry. With Red Hat Device Edge, ABB will be able to connect cloud and control environments to improve asset efficiency and operations by aggregating and analyzing data on devices with more limited resources. With this continued collaboration, ABB’s ecosystem will experience the benefit of open source solutions driven by innovative market leaders now and in the future.”

About Red Hat, Inc.
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.
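Because MicroShift presents a standard Kubernetes API, existing Kubernetes tooling should work against it largely unchanged. Below is a minimal, hypothetical sketch using the official kubernetes Python client to create a small Deployment; the image name, resource limits, namespace, and kubeconfig are placeholders, and nothing here is specific to Red Hat Device Edge.

```python
# Hypothetical sketch: deploy a small workload to a MicroShift (or any Kubernetes)
# cluster with the official "kubernetes" Python client. Image, namespace, and
# kubeconfig location are placeholders; nothing here is specific to Red Hat Device Edge.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config pointed at the edge cluster
apps = client.AppsV1Api()

container = client.V1Container(
    name="edge-sensor",
    image="registry.example.com/edge-sensor:1.0",   # placeholder image
    resources=client.V1ResourceRequirements(        # keep requests tiny for small devices
        requests={"cpu": "50m", "memory": "64Mi"},
        limits={"cpu": "200m", "memory": "128Mi"}),
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "edge-sensor"}),
    spec=client.V1PodSpec(containers=[container]))
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-sensor"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "edge-sensor"}),
        template=template))

apps.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created via the standard Kubernetes API")
```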

Read More
