The Darkest Sides of vSphere

In our post “The Lightest Sides of vSphere” we talked about retrospection and introspection, and we asked people what they thought the best feature of vSphere was. Continuing the astronomical theme of longest days and longest nights, today we ask about the dark sides of vSphere: What is the most underrated vSphere feature, one you wish more people knew about? How are you backing up your vCenter Server appliance? Are you just taking a snapshot of it, or backing it up as an image somehow? vSphere 6.5 introduced File-Based Backup and Restore (FBBR) as a method to protect your vSphere environment from failures. In short, it exports configuration information as a file to a remote file share or system. If you run into problems, you can restore a vCenter Server appliance using the installer and that file.
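As a rough sketch of how FBBR can be driven programmatically, the snippet below builds the request body for the vCenter appliance management (VAMI) REST API's backup-job endpoint. The endpoint path and field names follow the vSphere 6.5-era API and should be verified against the API documentation for your version; the FTP target shown is hypothetical.

```python
# Sketch: building a file-based backup job request for the vCenter
# appliance VAMI REST API (POST /rest/appliance/recovery/backup/job).
# Field names are based on the vSphere 6.5 API; verify for your version.

def build_backup_spec(location, user, password, parts=("common",)):
    """Return the JSON body for a file-based backup job request."""
    return {
        "piece": {
            "location_type": "FTP",    # also FTPS/HTTP/HTTPS/SCP
            "location": location,      # hypothetical remote target
            "location_user": user,
            "location_password": password,
            # "common" = config data; "seat" = stats, events, tasks
            "parts": list(parts),
        }
    }

spec = build_backup_spec("ftp://backup.example.com/vcsa", "backup", "secret")
```

The resulting dictionary would be serialized to JSON and POSTed to the appliance with your usual HTTP client and credentials.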

Spotlight

iCumulus

iCumulus is a new-age global fulfilment platform, leading the revolution of e-commerce and turning it into a totally shared and unified global economy. iCumulus connects your business to a network of global logistics providers, creating a unique shared ecosystem with the most competitive shipping rates. Unlike traditional logistics providers, iCumulus scales not by assets, staff, or inventory, but by connecting true demand to supply, ensuring the best consumer experience. Our comprehensive cloud-based e-fulfilment platform manages all distribution, purchasing, into-store and consumer deliveries at no additional cost. We want our customers to be free to deal with what's important to them - serving their customers - allowing them to grow sales and reach new markets.

OTHER ARTICLES
Virtual Desktop Strategies

Discovering SCVMM and Its Features

Article | July 26, 2022

System Center Virtual Machine Manager (SCVMM) is a management tool for Microsoft’s Hyper-V virtualization platform. It is part of Microsoft’s System Center product suite, which also includes Configuration Manager and Operations Manager, among other tools. SCVMM provides a single pane of glass for managing your on-premises and cloud-based Hyper-V infrastructures, and it’s a more capable alternative to Windows Server tools built for the same purpose.

Read More
Virtual Desktop Strategies, Server Hypervisors

VM Applications for Software Development and Secure Testing

Article | April 27, 2023

Contents

1. Introduction
2. Software Development and Secure Testing
3. Using VMs in Software Development and Secure Testing
4. Conclusion

1. Introduction

“Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous.” —James Bach

Testing software is crucial for identifying and fixing security vulnerabilities. However, meeting quality standards for functionality and performance does not guarantee security. Software testing is therefore essential to identify and address application security vulnerabilities, in order to maintain:

Security of data history, databases, information, and servers
Customers’ integrity and trust
Web application protection from future attacks

VMs provide a flexible and isolated environment for software development and security testing. They offer easy replication of complex configurations and testing scenarios, allowing efficient issue resolution. VMs also enable secure testing by isolating applications from the host system and allowing a reset to a previous state. In addition, they facilitate DevOps practices and streamline the development workflow.

2. Software Development and Secure Testing

Software Secure Testing: The Approach

The following approaches should be considered while preparing and planning for security tests:

Architecture Study and Analysis: Understand whether the software meets the necessary requirements.

Threat Classification: List all potential threats and risk factors that must be tested.

Test Planning: Run the tests based on the identified threats, vulnerabilities, and security risks.

Testing Tool Identification: Identify the security testing tools relevant to the software's specific use cases.

Test-Case Execution: After performing a security test, fix any issues found, either manually or using suitable open-source tooling.
Reports: Prepare a detailed report of the security tests performed, listing the vulnerabilities, threats, and issues resolved, as well as those still pending.

Ensuring the security of an application that handles essential functions is paramount. This may involve safeguarding databases against malicious attacks or implementing fraud detection mechanisms for incoming leads before integrating them into the platform. Maintaining security is crucial throughout the software development life cycle (SDLC) and must be at the forefront of developers' minds while implementing the software's requirements. With consistent effort, the SDLC pipeline addresses security issues before deployment, reducing the risk of discovering application vulnerabilities and minimizing the damage they could cause.

A secure SDLC makes developers responsible for critical security and requires them to be aware of potential security concerns at each step of the process. This means integrating security into the SDLC in ways that were not needed before. Since anyone can potentially access source code, coding with potential vulnerabilities in mind is essential. A robust and secure SDLC process is therefore critical to ensuring applications are not subject to attacks by hackers.

3. Using VMs in Software Development and Secure Testing

Snapshotting: Snapshotting allows developers to capture a VM's state at a specific point in time and restore it later. This feature is helpful for debugging and enables developers to roll back to a previous state when an error occurs. A virtual machine provides several operations for creating and managing snapshots and snapshot chains: users can create snapshots, revert to any snapshot in the chain, and remove snapshots. Extensive snapshot trees can also be created to streamline the workflow.
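The snapshot operations described above (create, revert, remove) can be illustrated with a minimal model. This is a conceptual sketch only; real hypervisors such as vSphere or Hyper-V expose equivalent APIs for the same operations.

```python
# Minimal model of VM snapshot operations: create a snapshot, mutate
# the VM, then revert to roll back a failed change.

class VM:
    def __init__(self, state):
        self.state = state
        self.snapshots = {}            # snapshot name -> saved state

    def snapshot(self, name):
        """Capture the VM's state at this point in time."""
        self.snapshots[name] = self.state

    def revert(self, name):
        """Restore the VM to a previously captured state."""
        self.state = self.snapshots[name]

    def remove(self, name):
        """Delete a snapshot from the chain."""
        del self.snapshots[name]

vm = VM(state="clean-install")
vm.snapshot("before-patch")
vm.state = "patched-but-broken"        # a change goes wrong...
vm.revert("before-patch")              # ...so roll back to the snapshot
```

After the revert, the VM is back in its "clean-install" state, which is exactly the debugging workflow the article describes.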
Virtual Networking: Virtual networking allows VMs to be connected to virtual networks that simulate complex network topologies, letting developers test their applications in different network environments. It allows data centers to span multiple physical locations and provides access to a range of more efficient options. Developers can modify the network as requirements change without any additional hardware, and providing a network tailored to specific applications and needs offers greater flexibility. Workloads can also be moved seamlessly across the network infrastructure without compromising service, security, or availability.

Resource Allocation: VMs can be configured with specific resource allocations for CPU, RAM, and storage, allowing developers to test their applications under different resource constraints. Maintaining a 1:1 ratio between virtual machine processors and host cores is highly recommended; over-subscribing virtual machine processors on a single core can lead to stalled or delayed events, causing significant frustration among users. In practice, however, IT administrators sometimes overallocate virtual machine processors. In such cases, a practical approach is to start with a 2:1 ratio and move gradually towards 4:1, 8:1, 12:1, and so on, ensuring a safe and seamless transition towards optimized virtual resource allocation.

Containerization within VMs: Containerization within VMs provides an additional layer of isolation and security for applications. Enterprises are finding new use cases for VMs to utilize their in-house and cloud infrastructure to support heavy-duty application and networking workloads, which also has a positive impact on the environment.
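The vCPU oversubscription ratios discussed under Resource Allocation can be sanity-checked with a short script. This is an illustrative sketch; the 2:1 limit below is the conservative starting point mentioned above, not a universal rule.

```python
# Sketch: checking vCPU oversubscription against a chosen ratio limit.
# Compares total vCPUs allocated across VMs to the host's physical cores.

def oversubscription_ratio(vcpus_per_vm, physical_cores):
    """Total allocated vCPUs divided by physical cores on the host."""
    return sum(vcpus_per_vm) / physical_cores

def within_limit(vcpus_per_vm, physical_cores, max_ratio=2.0):
    """True if the host is at or below the chosen oversubscription ratio."""
    return oversubscription_ratio(vcpus_per_vm, physical_cores) <= max_ratio

vms = [4, 4, 2, 2]                      # vCPUs assigned to each VM
ratio = oversubscription_ratio(vms, physical_cores=8)   # 12 / 8 = 1.5
ok = within_limit(vms, 8)               # 1.5 <= 2.0, so acceptable
```

Raising `max_ratio` step by step (4:1, 8:1, and so on) mirrors the gradual approach the article recommends when moving towards denser virtual resource allocation.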
DevOps teams use containerization together with virtualization to improve software development flexibility. Containers package an application with the components it needs, such as code, system tools, and libraries. For complex applications, virtual machines and containers are used together: containers typically host the front-end and middleware, while VMs host the back-end.

VM Templates: VM templates are pre-configured virtual machines that serve as a base for creating new virtual machines, making it easier to set up development and testing environments. A VM template is a master-copy image of a virtual machine, including its disks, virtual devices, and settings. A template can be cloned into multiple virtual machines; the clones are independent and not linked to the template. VM templates are handy when a large number of similar VMs need to be deployed, and they preserve VM consistency. To edit a template, convert it to a VM, make the necessary changes, and then convert the edited VM back into a new template.

Remote Access: VMs can be accessed remotely, allowing developers and testers to collaborate effectively from anywhere in the world. To manage a virtual machine remotely, enable remote access, connect to the virtual machine, and then access its VNC or serial console; once connected, full management permission is granted with the user's approval. Remote access provides a secure way to reach VMs, as connections can be encrypted and authenticated to prevent unauthorized access, and it simplifies administration by letting administrators monitor and control virtual machines from a central location.

DevOps Integration: DevOps is a collection of practices, principles, and tools that allow a team to release software quickly and efficiently.
Virtualization is vital in DevOps when developing intricate cloud, API, and SOA systems. Virtual machines enable teams to simulate environments for creating, testing, and launching code while conserving computing resources. When commencing a bug search at the API layer, teams find that virtual machines are well suited to test-driven development (TDD). Virtualization providers handle updates, freeing DevOps teams to focus on other areas and increasing productivity by 50-60%. In addition, VMs allow simultaneous testing of multiple release and patch levels, improving product compatibility and interoperability.

4. Conclusion

The outlook for virtual machine applications is highly promising in the development and testing fields. With the increasing complexity of development and testing processes, VMs can significantly simplify and streamline these operations. In the future, VMs are expected to become even more versatile and powerful, providing developers and testers with a broader range of tools and capabilities to facilitate the development process. One potential development is the integration of machine learning and artificial intelligence into VMs, enabling them to automate various tasks, optimize the allocation of resources, and generate recommendations based on performance data. VMs may also become more agile and lightweight, allowing developers and testers to spin instances up and down more efficiently. The future of VM applications for software development and security testing looks bright, with continued innovation expected to give developers and testers even more powerful and flexible tools to improve the software development process.

Read More
Virtual Desktop Tools

Efficient Management of Virtual Machines using Orchestration

Article | August 12, 2022

Contents

1. Introduction
2. What is Orchestration?
3. How Does Orchestration Help Optimize VM Efficiency?
3.1 Resource Optimization
3.2 Dynamic Scaling
3.3 Faster Deployment
3.4 Improved Security
3.5 Multi-Cloud Management
3.6 Improved Collaboration
4. Considerations while Orchestrating VMs
4.1 Together Hosting of Containers and VMs
4.2 Automated Backup and Restore for VMs
4.3 Ensure Replication for VMs
4.4 Set Up Data Synchronization for VMs
5. Conclusion

1. Introduction

Orchestration is a superset of automation. Cloud orchestration goes beyond automation, providing coordination between multiple automated activities. It is increasingly essential due to the growth of containerization, which facilitates scaling applications across clouds, both public and private. The demand for both public cloud orchestration and hybrid cloud orchestration has increased as businesses adopt hybrid cloud architectures. The rapid adoption of containerized, microservices-based apps that communicate over APIs has fueled the demand for automation in deploying and managing applications across the cloud. This increase in complexity has created a need for VM orchestration that can manage numerous dependencies across various clouds with policy-driven security and management capabilities.

2. What is Orchestration?

Orchestration refers to the process of automating, coordinating, and managing complex systems, workflows, or processes. It typically entails the use of automation tools and platforms to streamline and coordinate the deployment, configuration, and management of applications and services across different environments, including development, testing, staging, and production. Orchestration tools in cloud computing can be used to automate the deployment and administration of containerized applications across multiple servers or clusters.
These tools can help automate tasks such as container provisioning, scaling, load balancing, and health monitoring, making it easier to manage complex application environments. Orchestration lets organizations automate and streamline their workflows, reduce errors and downtime, and improve the efficiency and scalability of their operations.

3. How Does Orchestration Help Optimize VM Efficiency?

Orchestration offers enhanced visibility into the resources and processes in use, which helps prevent VM sprawl and helps organizations trace resource usage by department, business unit, or individual user.

Fig. Global Market for VNFO by Virtualization Methodology, 2022-27 ($ million) (Source: Insight Research)

As the figure above shows, VMs have established a solid legacy that will remain relevant in the near to mid-term future. Here are six ways in which orchestration helps in the efficient management of VMs:

3.1 Resource Optimization

Orchestration helps optimize resource utilization by automating the provisioning and de-provisioning of VMs, which allows for efficient use of computing resources. Using orchestration tools, IT teams can set up rules and policies for automatically scaling VMs based on criteria such as CPU utilization, memory usage, network traffic, and application performance metrics. Orchestration also enables advanced techniques such as predictive analytics, machine learning, and artificial intelligence to optimize resource utilization. These technologies can analyze historical data and identify patterns in workload demand, allowing the orchestration system to predict future resource needs and automatically provision or de-provision resources accordingly.

3.2 Dynamic Scaling

Orchestration helps automate the scaling of VMs, enabling organizations to quickly and easily adjust their computing resources based on demand.
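The demand-based scaling just described can be sketched as a simple threshold policy: provision another VM when utilization crosses an upper bound, deprovision one when it falls below a lower bound. The thresholds here are illustrative, not prescribed values.

```python
# Sketch of a threshold-based scaling policy: scale out under load,
# scale in when demand drops, never going below a minimum VM count.

def scaling_decision(cpu_util, current_vms, high=0.80, low=0.30, min_vms=1):
    """Return the target VM count given current CPU utilization (0..1)."""
    if cpu_util > high:
        return current_vms + 1          # provision an additional VM
    if cpu_util < low and current_vms > min_vms:
        return current_vms - 1          # deprovision to reduce cost
    return current_vms                  # within band: no change

targets = [
    scaling_decision(0.90, 3),          # over threshold -> 4
    scaling_decision(0.20, 3),          # under threshold -> 2
    scaling_decision(0.50, 3),          # steady state -> 3
]
```

A real orchestration system would evaluate a policy like this continuously against live metrics and add hysteresis or cooldown periods to avoid flapping.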
It enables IT teams to configure scaling policies for virtual machines based on resource utilization, network traffic, and performance metrics. When workload demand exceeds a certain threshold, the orchestration system can autonomously provision additional virtual machines to accommodate the increased load; when demand decreases, it can deprovision VMs to free up resources and reduce costs.

3.3 Faster Deployment

Orchestration can help automate the deployment of VMs, reducing the time and effort required to provision new resources. By leveraging automation, scripting, and APIs, orchestration can further streamline the VM deployment process: IT teams can define workflows and processes that are automated using scripts. In addition, orchestration can integrate with other IT management tools and platforms, such as cloud management platforms, configuration management tools, and monitoring systems, enabling IT teams to leverage various capabilities and services to streamline VM deployment and improve efficiency.

3.4 Improved Security

Orchestration can help enhance the security of VMs by automating the deployment of security patches and updates. It also helps ensure VMs are deployed with the appropriate security configurations and settings, reducing the risk of misconfiguration and vulnerability; IT teams can define standard security templates and configurations for VMs, which are automatically applied during deployment. Furthermore, orchestration can integrate with other security tools and platforms, such as intrusion detection systems and firewalls, to provide a comprehensive security solution, automating the deployment of security policies and rules so that workloads remain protected against various threats.

3.5 Multi-Cloud Management

Orchestration provides a single pane of glass for VM management, enabling IT teams to monitor and manage VMs across multiple cloud environments from a single platform. This simplifies management and reduces complexity, enabling IT teams to respond more quickly and effectively to changing business requirements. Orchestration also helps ensure consistency and compliance across multiple cloud environments, and it can integrate with other multi-cloud management tools and platforms, such as cloud brokers and cloud management platforms, to provide a comprehensive solution for managing VMs across multiple clouds.

3.6 Improved Collaboration

Orchestration streamlines collaboration by providing a centralized repository for storing and sharing information related to VMs. It also automates many of the routine tasks associated with VM management, reducing the workload for IT teams and freeing up time for more strategic initiatives. In addition, orchestration provides advanced analytics and reporting capabilities, enabling IT teams to track performance, identify bottlenecks, and optimize resource utilization. This supports a data-driven approach to VM management and allows IT teams to work together to identify and address performance issues.

4. Considerations while Orchestrating VMs

4.1 Together Hosting of Containers and VMs

Containers and virtual machines can exist together within a single infrastructure and be managed by the same platform. This allows various projects to be hosted from a unified management point, with the ability to adapt gradually based on current needs and opportunities, and gives teams greater flexibility to host and administer applications using both cutting-edge technologies and established standards and methods.
Moreover, since there is no need to invest in separate physical servers for virtual machines and containers, this approach can maximize infrastructure utilization, resulting in lower TCO and higher ROI. Unified management also drastically simplifies processes, requiring fewer human resources and less time.

4.2 Automated Backup and Restore for VMs -- Minimize downtime and reduce the risk of data loss

Organizations should set up automated backup and restore processes for virtual machines, ensuring critical data and applications are protected during a disaster. This involves scheduling regular backups of virtual machines to a secondary location or cloud storage and setting up automated restore processes to quickly recover virtual machines during an outage or disaster.

4.3 Ensure Replication for VMs -- Ensure data and applications are available and accessible in the event of a disaster

Organizations should set up replication processes for their VMs, allowing them to be automatically copied to a secondary location or cloud infrastructure. This ensures critical applications and data remain available even during a catastrophic failure at the primary site.

4.4 Set Up Data Synchronization for VMs -- Improve the overall resilience and availability of the system

VM orchestration tools should be used to set up data synchronization processes between virtual machines, ensuring data is consistent and up to date across multiple locations. This is particularly important where data needs to be accessed quickly from various locations, such as in distributed environments.

5. Conclusion

Orchestration provides disaster recovery and business continuity, automatic scalability of distributed systems, and inter-service configuration. Cloud orchestration is becoming significant due to the advent of containerization, which permits scaling applications across clouds, both public and private.
We expect continued growth and innovation in the field of VM orchestration, with new technologies and tools emerging to support more efficient and effective management of virtual machines in distributed environments. In addition, as organizations increasingly rely on cloud-based infrastructures and distributed systems, VM orchestration will continue to play a vital role in enabling businesses to operate smoothly and recover quickly from disruptions. VM orchestration will remain a critical component of disaster recovery and high availability strategies for years as organizations continue relying on virtualization technologies to power their operations and drive innovation.

Read More

Virtualizing Broadband Networks: Q&A with Tom Cloonan and David Grubb

Article | June 11, 2020

The future of broadband networks is fast, pervasive, reliable, and increasingly, virtual. Dell’Oro predicts that virtual CMTS/CCAP revenue will grow from $90 million in 2019 to $418 million worldwide in 2024. While network virtualization is still in its earliest stages of deployment, many operators have begun building their strategy for virtualizing one or more components of their broadband networks.

Read More


Related News

Virtual Server Infrastructure, Vsphere, Hyper-V

Innovative MSP Trusts Scale Computing to Modernize its IT Infrastructure and Boost its Bottom Line

Prnewswire | April 20, 2023

Scale Computing, the market leader in edge computing, virtualization, and hyperconverged solutions, today announced Managed Services Provider (MSP) customer success with its SC//Fleet Manager solution, the first cloud-hosted monitoring and management tool built for hyperconverged edge computing infrastructure at scale. Integrated directly into the Scale Computing Platform, SC//Fleet Manager consolidates real-time conditions for a fleet of clusters, including storage and compute resources, allowing IT leaders to quickly identify areas of concern from a single pane of glass. Cat-Tec, an MSP based in Ontario, Canada, provides enterprise-level computer network consulting to small and mid-sized businesses. Before modernizing its infrastructure with Scale Computing, Cat-Tec relied on a combination of Nutanix, VMware, and Hyper-V virtualization technologies to run its infrastructure and resell to its customers. However, they soon realized that the cost and complexity of deploying these systems were limiting their ability to scale their business and secure new customers. "While our clients all have very different operating requirements, they all have one thing in common – they want to spend less time troubleshooting their technology stack and more time focused on running their business," said Genito Isabella, co-founder and VP of Systems Integration for Cat-Tec. "Every hour that one of our IT professionals spends manually updating firmware on a server or trying to correctly diagnose a performance issue is time that could be spent on other strategic, revenue-generating projects." To overcome these challenges, Cat-Tec decided it was time to invest in a modern hyperconverged infrastructure (HCI) solution that could help their team manage and scale their distributed infrastructure in a more efficient, cost-effective, and holistic manner. 
Following a rigorous vendor evaluation process, Cat-Tec selected Scale Computing due to the simplicity of its modular architecture, the ability of SC//Fleet Manager to monitor and manage multiple clusters from a single console, as well as the SC//Platform's native integration with leading third-party backup systems which ensures resilience in the event of a system-wide disruption. "We estimate that the ability to pre-stage hardware has saved us an average of four hours per deployment per client implementation. Meanwhile, using SC//Fleet Manager to proactively troubleshoot issues remotely has dramatically improved our ability to meet our SLAs," continued Isabella. "This means we can now devote more time helping customers solve real problems rather than just constantly having to put out fires. It's not an exaggeration to say that Scale Computing has completely transformed our business." About Scale Computing Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real-time, enabling applications to achieve maximum uptime, even when local IT resources and staff are scarce. Edge Computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing's products are sold by thousands of value-added resellers, integrators, and service providers worldwide. When ease of use, high availability, and TCO matter, Scale Computing Platform is the ideal infrastructure platform. Read what our customers have to say on Gartner Peer Insights, Spiceworks, TechValidate, G2, and Trust Radius.

Read More

Vsphere

Tintri Continues Exponential Growth with 2H 2021 Earnings

Tintri | February 21, 2022

Tintri, a DDN subsidiary and the leading provider of auto adaptive, workload intelligent platforms, announced 42% global revenue growth from 1H2021 to 2H2021, including a double-digit revenue increase from net new logos. This expansion is fuelled by Tintri's enhanced global executive sales team and continued VMstore innovation, driving the company's mission to deliver hands-off, cutting-edge and highly adaptive technology to help enterprises manage complex infrastructures. Tintri has had exponential growth in the second half of 2021, driven by the upswell in demand from our customers due to the popularity of containerized applications in virtualized environments. The containerization movement is making serious headway into enterprise data processes and platforms, and Tintri is perfectly positioned to service these emerging markets. We've put together an outstanding global sales team to ensure that customers know our technology was architected for these types of workloads from day one." Phil Trickovic, senior vice president of Revenue, Tintri Enhanced Global Executive Sales Team Following the appointment of a new executive team in Q4 2021, Tintri's latest investment to operate under strong management comes with the newly structured global executive sales team. Comprised of Tintri veterans with invaluable experience, this team best understands the unique challenges of data-centric enterprise customers and the ways in which Tintri's technology can help overcome specialized pain points and evolve to continue to meet enterprise's changing needs. The new global executive sales team is comprised of: Zachary Bertamini, vice president of Sales, Americas, whose strategic vision has been a catalyst to fuel Tintri's growth in the Americas over the past year. Josh Marlar, vice president of Global Business Development, who is responsible for building a multimillion-dollar net new pipeline quarter over quarter and brings over a decade of experience in IT business developments. 
Mark Walsh, vice president of Sales, EMEA, who brings over 30 years of experience in the IT storage sector and re-joins Tintri from IBM. Norimasa Ono, general manager of Sales, Japan, who led start-up efforts in the Japanese market and brings strong relationships with local resellers and enterprise customers, as well as the ability to open new markets for Tintri. Continued VMstore Innovation Tintri continues to innovate, constantly enhancing, updating and advancing capabilities for its customers. This dedication is underscored by VMstore's double-digit revenue growth YoY. The latest releases to VMstore, the world's most intelligent virtual data management system, include: vSphere Tag Support – VMstore now recognizes and reports vCenter tags, which can be used for filtering objects in the Tintri Global Center (TGC) user interface (UI). vSphere tags can also be used in service groups to ensure protection policies, snapshots, and replication are applied. vSphere tags also carry across the Tintri ecosystem and are available for use with Tintri Analytics. Additional Hardware and Software Validation – 2TB, 4TB or 8TB drives are configurable with all VMstore T7000 systems, allowing customers to tailor the configuration to meet specific business needs. The T7000 systems are also now certified for DAC connections, joining MPO-12 configurations. In addition, VMstore is CitrixReady certified with Hypervisor 8.2. Improved Visibility with UI Enhancements – System admins can now configure and filter alerts with notifications and additional parameters, including Engine ID, which can be configured with Simple Network Management Protocol (SNMP). A new "Task Manager" in the UI allows customers to track long-running activities and monitor status, as well as reporting on advanced battery backup health to provide fortified data protection in the event of a T7000 series system power loss.
NFS 4.1 Beta – VMstore T7000 models now support NFS v4.1 for VMware vSphere, which will be made generally available later this year. About Tintri Tintri, a wholly owned subsidiary of DataDirect Networks (DDN) delivers unique outcomes in Enterprise data centers. Tintri's AI-enabled intelligent infrastructure learns your environment to drive automation. Analytical insights help you simplify and accelerate your operations and empower data-driven business insights. Thousands of Tintri customers have saved millions of management hours using Tintri.

Read More

Vsphere

Lightbits Labs Wins 2022 BIG Innovation Award

Lightbits Labs | January 18, 2022

Lightbits Labs (Lightbits), the first software-defined and NVMe-based data platform for any cloud, is pleased to announce it has been named a winner in the 2022 BIG Innovation Awards presented by the Business Intelligence Group. This is recognition for the company’s unique Complete Data Platform, an innovative architecture of NVMe/TCP, Intelligent Flash Management, and VMware vSphere 7 Update 3 compatibility that delivers high performance, simplicity, and cost-efficiency for VMware environments. As such, Lightbits, the inventor of NVMe/TCP, is quickly becoming the de facto standard for managing, analyzing, and storing data on any cloud. Leveraging the NVMe/TCP protocol and a shared storage architecture, Lightbits achieves the lowest latencies and highest scalability while delivering performance equivalent to local flash. Lightbits solves the storage challenges for cloud-centric applications with greater efficiency than proprietary appliances and, when combined with VMware vSphere, is a giant leap forward in delivering an end-to-end NVMe solution ecosystem. “This recognition is further validation that Lightbits fills a critical need for modern IT organizations looking to inject efficiency and agility into the data center. We invented NVMe/TCP; it's native to the software, and thus delivers all the low latency and high performance of local flash. And Lightbits is the only solution available with Intelligent Flash Management for maximum cost-efficiency. So, while we may pause to appreciate this recognition, we’ll keep making BIG innovations that enable customers to efficiently leverage their IT investments to extract maximum value from their data.” Carol Platz, Vice President of Marketing, Lightbits Labs “Innovation is driving growth in the global economy,” said Maria Jimenez, chief operating officer of the Business Intelligence Group.
“We are thrilled to be honoring Lightbits as they are one of the organizations leading this charge and helping humanity progress.” About Lightbits Labs Lightbits Labs (Lightbits) is leading the digital data center transformation by making high-performance elastic block storage available to any cloud. Creators of the NVMe over TCP (NVMe/TCP) protocol, Lightbits software-defined storage is easy to deploy at scale and delivers performance equivalent to local flash to accelerate cloud-native applications in bare metal, virtual, or containerized environments. Backed by leading enterprise investors including Cisco Investments, Dell Technologies Capital, Intel Capital, and Micron, Lightbits is on a mission to make high-performance elastic block storage simple, scalable and cost-efficient for any cloud. The NVMe, and NVMe/TCP word marks are registered or unregistered service marks of the NVM Express organization in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. About Business Intelligence Group The Business Intelligence Group was founded with the mission of recognizing true talent and superior performance in the business world. Unlike other industry award programs, these programs are judged by business executives having experience and knowledge. The organization’s proprietary and unique scoring system selectively measures performance across multiple business domains and then rewards those companies whose achievements stand above those of their peers.

Read More

Virtual Server Infrastructure, vSphere, Hyper-V

Innovative MSP Trusts Scale Computing to Modernize its IT Infrastructure and Boost its Bottom Line

PR Newswire | April 20, 2023

Scale Computing, the market leader in edge computing, virtualization, and hyperconverged solutions, today announced Managed Services Provider (MSP) customer success with its SC//Fleet Manager solution, the first cloud-hosted monitoring and management tool built for hyperconverged edge computing infrastructure at scale. Integrated directly into the Scale Computing Platform, SC//Fleet Manager consolidates real-time conditions for a fleet of clusters, including storage and compute resources, allowing IT leaders to quickly identify areas of concern from a single pane of glass.

Cat-Tec, an MSP based in Ontario, Canada, provides enterprise-level computer network consulting to small and mid-sized businesses. Before modernizing its infrastructure with Scale Computing, Cat-Tec relied on a combination of Nutanix, VMware, and Hyper-V virtualization technologies to run its infrastructure and resell to its customers. However, it soon realized that the cost and complexity of deploying these systems were limiting its ability to scale the business and secure new customers.

"While our clients all have very different operating requirements, they all have one thing in common – they want to spend less time troubleshooting their technology stack and more time focused on running their business," said Genito Isabella, co-founder and VP of Systems Integration for Cat-Tec. "Every hour that one of our IT professionals spends manually updating firmware on a server or trying to correctly diagnose a performance issue is time that could be spent on other strategic, revenue-generating projects."

To overcome these challenges, Cat-Tec decided it was time to invest in a modern hyperconverged infrastructure (HCI) solution that could help its team manage and scale its distributed infrastructure in a more efficient, cost-effective, and holistic manner. Following a rigorous vendor evaluation process, Cat-Tec selected Scale Computing for the simplicity of its modular architecture, the ability of SC//Fleet Manager to monitor and manage multiple clusters from a single console, and the SC//Platform's native integration with leading third-party backup systems, which ensures resilience in the event of a system-wide disruption.

"We estimate that the ability to pre-stage hardware has saved us an average of four hours per deployment per client implementation. Meanwhile, using SC//Fleet Manager to proactively troubleshoot issues remotely has dramatically improved our ability to meet our SLAs," continued Isabella. "This means we can now devote more time to helping customers solve real problems rather than constantly having to put out fires. It's not an exaggeration to say that Scale Computing has completely transformed our business."

About Scale Computing

Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, the Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real time, enabling applications to achieve maximum uptime even when local IT resources and staff are scarce. Edge computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing's products are sold by thousands of value-added resellers, integrators, and service providers worldwide. When ease of use, high availability, and TCO matter, the Scale Computing Platform is the ideal infrastructure platform. Read what our customers have to say on Gartner Peer Insights, Spiceworks, TechValidate, G2, and TrustRadius.

Read More

vSphere

Tintri Continues Exponential Growth with 2H 2021 Earnings

Tintri | February 21, 2022

Tintri, a DDN subsidiary and the leading provider of auto-adaptive, workload-intelligent platforms, announced 42% global revenue growth from 1H2021 to 2H2021, including a double-digit revenue increase from net new logos. This expansion is fueled by Tintri's enhanced global executive sales team and continued VMstore innovation, driving the company's mission to deliver hands-off, cutting-edge, and highly adaptive technology to help enterprises manage complex infrastructures.

"Tintri has had exponential growth in the second half of 2021, driven by the upswell in demand from our customers due to the popularity of containerized applications in virtualized environments. The containerization movement is making serious headway into enterprise data processes and platforms, and Tintri is perfectly positioned to service these emerging markets. We've put together an outstanding global sales team to ensure that customers know our technology was architected for these types of workloads from day one." – Phil Trickovic, senior vice president of Revenue, Tintri

Enhanced Global Executive Sales Team

Following the appointment of a new executive team in Q4 2021, Tintri's latest investment in operating under strong management comes with the newly structured global executive sales team. Comprised of Tintri veterans with invaluable experience, this team best understands the unique challenges of data-centric enterprise customers and the ways in which Tintri's technology can help overcome specialized pain points and evolve to continue meeting enterprises' changing needs. The new global executive sales team comprises:

Zachary Bertamini, vice president of Sales, Americas, whose strategic vision has been a catalyst for Tintri's growth in the Americas over the past year.

Josh Marlar, vice president of Global Business Development, who is responsible for building a multimillion-dollar net new pipeline quarter over quarter and brings over a decade of experience in IT business development.

Mark Walsh, vice president of Sales, EMEA, who brings over 30 years of experience in the IT storage sector and re-joins Tintri from IBM.

Norimasa Ono, general manager of Sales, Japan, who led start-up efforts in the Japanese market and brings strong relationships with local resellers and enterprise customers, as well as the ability to open new markets for Tintri.

Continued VMstore Innovation

Tintri continues to innovate, constantly enhancing, updating, and advancing capabilities for its customers. This dedication is underscored by VMstore's double-digit revenue growth year over year. The latest releases to VMstore, the world's most intelligent virtual data management system, include:

vSphere Tag Support – VMstore now recognizes and reports vCenter tags, which can be used for filtering objects in the Tintri Global Center (TGC) user interface (UI). vSphere tags can also be used in service groups to apply protection policies such as snapshots and replication. vSphere tags also carry across the Tintri ecosystem and are available for use with Tintri Analytics.

Additional Hardware and Software Validation – 2TB, 4TB, or 8TB drives are configurable with all VMstore T7000 systems, allowing customers to tailor the configuration to meet specific business needs. The T7000 systems are also now certified for DAC connections, joining MPO-12 configurations. In addition, VMstore is Citrix Ready certified with Citrix Hypervisor 8.2.

Improved Visibility with UI Enhancements – System admins can now configure and filter alerts with notifications and additional parameters, including Engine ID, which can be configured with Simple Network Management Protocol (SNMP). A new "Task Manager" in the UI allows customers to track long-running activities and monitor status, as well as report on advanced battery-backup health to provide fortified data protection in the event of a T7000 series system power loss.

NFS 4.1 Beta – VMstore T7000 models now support NFS v4.1 for VMware vSphere, which will be made generally available later this year.

About Tintri

Tintri, a wholly owned subsidiary of DataDirect Networks (DDN), delivers unique outcomes in enterprise data centers. Tintri's AI-enabled intelligent infrastructure learns your environment to drive automation. Analytical insights help you simplify and accelerate your operations and empower data-driven business insights. Thousands of Tintri customers have saved millions of management hours using Tintri.
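Conceptually, the tag-based filtering described in the VMstore release notes works like attaching labels to objects and selecting by label. The Python sketch below is purely illustrative of that idea; the `VM` class and `filter_by_tag` helper are hypothetical names, not part of the VMware or Tintri APIs.

```python
# Illustrative model of tag-based object filtering, as a TGC-style
# tag filter might behave. Hypothetical names, not a vendor API.
from dataclasses import dataclass, field


@dataclass
class VM:
    """A virtual machine with a set of vSphere-style tags."""
    name: str
    tags: set = field(default_factory=set)


def filter_by_tag(vms, tag):
    """Return only the VMs carrying the given tag."""
    return [vm for vm in vms if tag in vm.tags]


vms = [
    VM("web-01", {"prod", "replicate"}),
    VM("db-01", {"prod", "snapshot-hourly"}),
    VM("test-01", {"dev"}),
]

# Select only production VMs; a service group could apply a
# protection policy (snapshots, replication) to this subset.
prod_vms = filter_by_tag(vms, "prod")
print([vm.name for vm in prod_vms])  # → ['web-01', 'db-01']
```

Real vSphere tags carry a category as well as a name, but the selection principle is the same: policies attach to the tag, not to individual objects.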

Read More

Events